Averaging Gaussian functionals
This paper consists of two parts. In the first part, we focus on the average of a functional over shifted Gaussian homogeneous noise and, as the averaging domain covers the whole space, we establish a Breuer-Major type Gaussian fluctuation based on various assumptions on the covariance kernel and/or the spectral measure. Our methodology for the first part begins with the application of Malliavin calculus around Nualart-Peccati's Fourth Moment Theorem; in addition, we apply Fourier techniques as well as a soft approximation argument based on Bessel functions of the first kind. The same methodology leads us to investigate a closely related problem in the second part. We study the spatial average of a linear stochastic heat equation driven by space-time Gaussian colored noise. The temporal covariance kernel $\gamma_0$ is assumed to be locally integrable in this paper. If the spatial covariance kernel is nonnegative and integrable on the whole space, then the spatial average admits Gaussian fluctuation; with some extra mild integrability condition on $\gamma_0$, we are able to provide a functional central limit theorem. These results complement recent studies on the spatial average for SPDEs. Our analysis also allows us to consider the case where the spatial covariance kernel is not integrable: for example, in the case of the Riesz kernel, the first chaotic component of the spatial average is dominant, so the Gaussian fluctuation also holds true.
Spin-Locality of Higher-Spin Theories and Star-Product Functional Classes
The analysis of spin-locality of higher-spin gauge theory is formulated in terms of star-product functional classes appropriate for the $\beta\to -\infty$ limiting shifted homotopy proposed recently in arXiv:1909.04876, where all $\omega^2 C^2$ higher-spin vertices were shown to be spin-local. For the $\beta\to -\infty$ limiting shifted contracting homotopy we identify the class of functions ${\mathcal H}^{+0}$ that do not contribute to the r.h.s. of HS field equations at a given order. A number of theorems and relations that organize the analysis of the higher-spin equations are derived, including an extension of the Pfaffian Locality Theorem of arXiv:1805.11941 to the $\beta$-shifted contracting homotopy and the relation underlying locality of the $\omega^2 C^2$ sector of higher-spin equations. A space-time interpretation of spin-locality of theories involving infinite towers of fields is proposed as the property that the theory is space-time local in terms of original constituent fields $\Phi$ and their local currents $J(\Phi)$ of all ranks. Spin-locality is argued to be a proper substitute for locality; for theories with finite sets of fields the two concepts are equivalent.
Entropy and Gravitation: From Black Hole Computers to Dark Energy and Dark Matter
We show that the concept of entropy and the dynamics of gravitation provide the linchpin in a unified scheme to understand the physics of black hole computers, space-time foam, dark energy, dark matter and the phenomenon of turbulence. We use three different methods to estimate the foaminess of space-time, which, in turn, provides a back-door way to derive the Bekenstein-Hawking formula for black hole entropy and the holographic principle. Generalizing the discussion for a static space-time region to the cosmos, we find a component of dark energy (resembling an effective positive cosmological constant of the correct magnitude) in the current epoch of the universe. The conjunction of entropy and gravitation is shown to give rise to a phenomenological model of dark matter, revealing the natural emergence, in galactic and cluster dynamics, of a critical acceleration parameter related to the cosmological constant; the resulting mass profiles are consistent with observations. Unlike ordinary matter, the quanta of the dark sector are shown to obey infinite statistics. This property of dark matter may lead to some non-particle phenomenology, and may explain why dark matter particles have not been detected in dark matter search experiments. We also show that there are deep similarities between the problem of "quantum gravity" (more specifically, the holographic space-time foam) and turbulence.
Lower Bound and Space-time Decay Rates of Higher Order Derivatives of Solution for the Compressible Navier-Stokes and Hall-MHD Equations
In this paper, we address the lower bound and space-time decay rates for the compressible Navier-Stokes and Hall-MHD equations in the $H^3$ framework in $\mathbb{R}^3$. First of all, the lower bound of the decay rate for the density, velocity and magnetic field converging to the equilibrium status in $L^2$ is $(1+t)^{-\frac{3}{4}}$; the lower bound of the decay rate for the first order spatial derivative of density and velocity converging to zero in $L^2$ is $(1+t)^{-\frac{5}{4}}$, and that of the $k(\in [1, 3])$-th order spatial derivative of magnetic field converging to zero in $L^2$ is $(1+t)^{-\frac{3+2k}{4}}$. Secondly, the lower bound of the decay rate for time derivatives of density and velocity converging to zero in $L^2$ is $(1+t)^{-\frac{5}{4}}$; however, the lower bound of the decay rate for time derivatives of magnetic field converging to zero in $L^2$ is $(1+t)^{-\frac{7}{4}}$. Finally, we address the decay rate of the solution in the weighted Sobolev space $H^3_{\gamma}$. More precisely, the upper bound of the decay rate of the $k(\in [0, 2])$-th order spatial derivatives of density and velocity converging to the $k(\in [0, 2])$-th order derivatives of the constant equilibrium in the weighted space $L^2_{\gamma}$ is $t^{-\frac{3}{4}+{\gamma}-\frac{k}{2}}$; however, the upper bound of the decay rate of the $k(\in [0, 3])$-th order spatial derivatives of magnetic field converging to zero in the weighted space $L^2_{\gamma}$ is $t^{-\frac{3}{4}+\frac{{\gamma}}{2}-\frac{k}{2}}$.
Experimental observation of non-reciprocal band-gaps in a space-time modulated beam using a shunted piezoelectric array
In this work we experimentally achieve 1 kHz-wide directional band-gaps for elastic waves spanning a frequency range from approximately 8 to 11 kHz. One-way propagation is induced by way of a periodic waveguide consisting of an aluminum beam partially covered by a tightly packed array of piezoelectric patches. The latter are connected to shunt circuits and switches which allow for a periodic modulation in time of the cell properties. A traveling stiffness profile is obtained by appropriately phasing the temporal modulation of each active element, mimicking the propagation of a plane wave along the material and therefore establishing unidirectional wave propagation at band-gap frequencies.
Analysis of an Asymptotic Preserving Low Mach Number Accurate IMEX-RK Scheme for the Wave Equation System
In this paper the analysis of an asymptotic preserving (AP) IMEX-RK finite volume scheme for the wave equation system in the zero Mach number limit is presented. The accuracy of a numerical scheme at low Mach numbers is its ability to maintain the solution close to the incompressible solution for all times, and this can be formulated in terms of the invariance of a space of constant densities and divergence-free velocities. An IMEX-RK methodology is employed to obtain a time semi-discrete scheme, and a space-time fully-discrete scheme is derived by using standard finite volume techniques. The existence of a unique numerical solution, its uniform stability with respect to the Mach number, the AP property, and the accuracy at low Mach numbers are established for both time semi-discrete, and space-time fully-discrete schemes. Extensive numerical case studies confirm uniform second order convergence of the scheme with respect to the Mach number, and all the above-mentioned properties.
Space-time calibration of wind speed forecasts from regional climate models
Numerical weather predictions (NWP) are systematically subject to errors due to the deterministic solutions used by numerical models to simulate the atmosphere. Statistical postprocessing techniques are widely used nowadays for NWP calibration. However, time-varying bias is usually not accommodated by such models. Their calibration performance is also sensitive to the temporal window used for training. This paper proposes space-time models that extend the main statistical postprocessing approaches to calibrate NWP model outputs. Trans-Gaussian random fields are considered to account for meteorological variables with asymmetric behavior. Data augmentation is used to account for censoring in the response variable. The benefits of the proposed extensions are illustrated through the calibration of hourly 10 m wind speed forecasts in Southeastern Brazil coming from the Eta model.
New Mathematical Models of GPS Intersatellite Communications in the Gravitational Field of the Near-Earth Space
Several space missions such as GRACE, GRAIL, ACES and others rely on intersatellite communications (ISC) between two satellites at a large distance one from another. The main goal of the theory is to formulate all the navigation observables within the General Relativity Theory (GRT). The same approach should be applied also to the intersatellite GPS-communications (in perspective also between the GPS, GLONASS and Galileo satellite constellations). In this paper a theoretical approach has been developed for ISC between two satellites moving on (one-plane) elliptical orbits based on the introduction of two gravity null cones with origins at the emitting-signal and receiving-signal satellites. The two null cones account for the variable distance between the satellites during their uncorrelated motion. This intersection of the two null cones gives the space-time interval in GRT. Applying some theorems from higher algebra, it was proved that this space-time distance can become zero, consequently it can be also negative and positive. But in order to represent the geodesic distance travelled by the signal, the space-time interval has to be "compatible" with the Euclidean distance. So this "compatibility condition", conditionally called "condition for ISC", is the most important consequence of the theory. The other important consequence is that the geodesic distance turns out to be the space-time interval, but with account also of the "condition for ISC". The geodesic distance is proved to be greater than the Euclidean distance - a result, entirely based on the "two null cones approach" and moreover, without any use of the Shapiro delay formulae. Application of the same higher algebra theorems shows that the geodesic distance cannot have any zeroes, in accord with being greater than the Euclidean distance.
Free-space propagation of spatio-temporal optical vortices (STOVs)
Spatio-temporal optical vortices (STOVs) are a new type of optical orbital angular momentum (OAM) with optical phase circulation in space-time. In prior work [N. Jhajj et al., Phys. Rev X 6, 031037 (2016)], we demonstrated that a STOV is a universal structure emerging from the arrest of self-focusing collapse leading to nonlinear self-guiding in material media. Here, we demonstrate linear generation and propagation in free space of STOV-carrying pulses. Our measurements and simulations demonstrate STOV mediation of space-time energy flow within the pulse and conservation of OAM in space-time. Single-shot amplitude and phase images of STOVs are taken using a new diagnostic, transient grating single-shot supercontinuum spectral interferometry (TG-SSSI).
Induction of hierarchy and time through one-dimensional probability space with certain topologies
In a previous study, the authors utilized a single-dimensional operationalization of species density that at least partially demonstrated dynamic system behavior. For completeness, a theory needs to be developed related to homology/cohomology, induction of the time dimension, and system hierarchies. The topological nature of the system is carefully examined and, for testing purposes, species density data for a wild Dictyostelia community are used in conjunction with data derived from liquid-chromatography mass spectrometry of proteins. Utilizing a Clifford algebra, a congruent zeta function, and a Weierstraß ℘-function in conjunction with a type VI Painlevé equation, we confirmed the induction of hierarchy and time through one-dimensional probability space with certain topologies. This process also served to provide information concerning interactions in the model. The previously developed "small s" metric can characterize dynamical system hierarchy and interactions, using only abundance data along time development.
|
CommonCrawl
|
Understanding Poles and Zeros in Transfer Functions
May 26, 2019 by Robert Keim
This article explains what poles and zeros are and discusses the ways in which transfer-function poles and zeros are related to the magnitude and phase behavior of analog filter circuits.
In the previous article, I presented two standard ways of formulating an s-domain transfer function for a first-order RC low-pass filter. Let's briefly review some essential concepts.
A transfer function mathematically expresses the frequency-domain input-to-output behavior of a filter.
We can write a transfer function in terms of the variable s, which represents complex frequency, and we can replace s with jω when we need to calculate magnitude and phase response at a specific frequency.
The standardized form of a transfer function is like a template that helps us to quickly determine the filter's defining characteristics.
Mathematical manipulation of the standardized first-order transfer function allows us to demonstrate that a filter's cutoff frequency is the frequency at which magnitude is reduced by 3 dB and phase is shifted by –45°.
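As a quick numerical check of that last point, here is a short Python sketch; the cutoff ω_O = 1000 rad/s and the gain a_O are arbitrary illustrative values (not taken from the article), and the transfer function is the standard first-order low-pass form a_O/(s + ω_O) discussed below:

```python
import cmath, math

# First-order low-pass T(s) = a_O / (s + w_O), evaluated at s = j*w_O.
# w_O = 1000 rad/s and a_O = 1000 are arbitrary illustrative values.
w_O = 1000.0
a_O = 1000.0

dc_gain = a_O / w_O                       # magnitude as omega -> 0
T = a_O / (1j * w_O + w_O)                # response at the cutoff frequency

mag_db = 20 * math.log10(abs(T) / dc_gain)
phase_deg = math.degrees(cmath.phase(T))

print(round(mag_db, 2), round(phase_deg, 1))   # -3.01 -45.0
```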
Poles and Zeros
Let's assume that we have a transfer function in which the variable s appears in both the numerator and the denominator. In this situation, at least one value of s will cause the numerator to be zero, and at least one value of s will cause the denominator to be zero. A value that causes the numerator to be zero is a transfer-function zero, and a value that causes the denominator to be zero is a transfer-function pole.
Let's consider the following example:
$$T(s)=\frac{Ks}{s+\omega _{O}}$$
In this system, we have a zero at s = 0 and a pole at s = –ω_O.
Poles and zeros are defining characteristics of a filter. If you know the locations of the poles and zeros, you have a lot of information about how the system will respond to signals with different input frequencies.
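If you want to extract pole and zero locations programmatically, SciPy can do it from the numerator and denominator coefficients. The sketch below applies scipy.signal.tf2zpk to the example above; the numbers K = 1 and ω_O = 1000 rad/s are assumptions chosen only for illustration:

```python
from scipy import signal

# T(s) = K*s / (s + w_O) with illustrative values K = 1, w_O = 1000 rad/s.
K, w_O = 1.0, 1000.0
num = [K, 0.0]        # K*s       -> numerator coefficients in descending powers of s
den = [1.0, w_O]      # s + w_O   -> denominator coefficients

zeros, poles, gain = signal.tf2zpk(num, den)
print(zeros)   # [ 0.]      -> zero at s = 0
print(poles)   # [-1000.]   -> pole at s = -w_O
```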
The Effect of Poles and Zeros
A Bode plot provides a straightforward visualization of the relationship between a pole or zero and a system's input-to-output behavior.
A pole frequency corresponds to a corner frequency at which the slope of the magnitude curve decreases by 20 dB/decade, and a zero corresponds to a corner frequency at which the slope increases by 20 dB/decade. In the following example, the Bode plot is the approximation of the magnitude response of a system that has a pole at 10^2 radians per second (rad/s) and a zero at 10^4 rad/s.
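The corner-frequency behavior described above can be reproduced numerically. The sketch below builds a minimum-phase system with a pole at 10^2 rad/s and a zero at 10^4 rad/s and evaluates its magnitude with scipy.signal.bode; the left-half-plane pole/zero placement and the 0 dB low-frequency gain are assumptions made for this illustration, not values given in the article:

```python
import numpy as np
from scipy import signal

# Illustrative system: pole at 1e2 rad/s, zero at 1e4 rad/s, scaled so the
# low-frequency gain is 0 dB (assumed placements, for the sketch only).
sys = signal.ZerosPolesGain([-1e4], [-1e2], 0.01)

w = np.logspace(0, 6, 200)                 # 1 to 1e6 rad/s
w, mag_db, phase_deg = signal.bode(sys, w)

for target in (1e1, 1e3, 1e5):
    print(target, round(mag_db[np.argmin(np.abs(w - target))], 1))
# roughly 0 dB below the pole, about -20 dB one decade above it,
# and flat near -40 dB beyond the zero
```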
Phase Effects
In the previous article, we saw that the mathematical origin of a low-pass filter's phase response is the inverse tangent function. If we use the inverse tangent function (more specifically, the negative inverse tangent function) to generate a plot of phase (in degrees) versus logarithmic frequency, we end up with the following shape:
The Bode plot approximation for phase shift generated by a pole is a straight line representing –90° of phase shift. The line is centered on the pole frequency and has a slope of –45 degrees per decade, which means that the downward-sloping line begins one decade before the pole frequency and ends one decade after the pole frequency. The effect of a zero is the same except that the line has a positive slope, such that the total phase shift is +90°.
The following example represents a system that has a pole at 10^2 rad/s and a zero at 10^5 rad/s.
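A short numerical check of that example, assuming left-half-plane locations so that T(jω) = (jω + 10^5)/(jω + 10^2):

```python
import numpy as np

# Exact phase of a system with a pole at 1e2 rad/s and a zero at 1e5 rad/s
# (left-half-plane locations assumed): T(jw) = (jw + 1e5) / (jw + 1e2).
w = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6])
phase_deg = np.degrees(np.angle((1j * w + 1e5) / (1j * w + 1e2)))

print(np.round(phase_deg, 1))
# approximately [ -5.7  -44.9  -83.7  -83.7  -44.9  -5.7 ]:
# -45 deg at the pole corner, a maximum lag of roughly -86 deg in between,
# and back to -45 deg at the zero corner
```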
The Hidden Zero
If you have read the previous article, you know that the transfer function of a low-pass filter can be written as follows:
$$T(s)=\frac{a_{O}}{s+\omega _{O}}$$
Does this system have a zero? If we apply the definition given earlier in this article, we will conclude that it does not—the variable s does not appear in the numerator, and therefore no value of s will cause the numerator to equal zero.
It turns out, though, that it does have a zero, and to understand why, we need to consider a more generalized definition of transfer-function poles and zeros: a zero (z) occurs at a value of s that causes the transfer function to decrease to zero, and a pole (p) occurs at a value of s that causes the transfer function to tend toward infinity:
$$\lim_{s\rightarrow z}T(s)=0$$
$$\lim_{s\rightarrow p}T(s)=∞$$
Does the first-order low-pass filter have a value of s that results in T(s) → 0? Yes, it does, namely, s = ∞. Thus, the first-order low-pass system has a pole at ω_O and a zero at ω = ∞.
I'll attempt to provide a physical interpretation of the zero at ω = ∞: It indicates that the filter cannot continue attenuating "forever" (where "forever" refers to frequency, not time). If you manage to create an input signal whose frequency continues to increase until it "reaches" infinity rad/s, the zero at s = ∞ causes the filter to stop attenuating, i.e., the slope of the magnitude response increases from –20 dB/decade to 0 dB/decade.
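The generalized limit definition can also be checked symbolically. A minimal SymPy sketch (with a_O and ω_O simply declared as positive symbols) confirms the pole at s = –ω_O and the zero as s → ∞:

```python
import sympy as sp

s, a_O, w_O = sp.symbols('s a_O omega_O', positive=True)
T = a_O / (s + w_O)           # first-order low-pass transfer function

print(sp.limit(T, s, -w_O))   # oo -> pole: T tends to infinity at s = -omega_O
print(sp.limit(T, s, sp.oo))  # 0  -> "hidden" zero as s goes to infinity
```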
We've explored the basic theoretical and practical aspects of transfer-function poles and zeros, and we've seen that we can create a direct relationship between a filter's pole and zero frequencies and its magnitude and phase response. In the next article, we'll examine the transfer function of a first-order high-pass filter.
Lonne Mays May 31, 2019
Excellent article, Robert! The mathematical subject matter was presented clearly and supported by explanations that fostered an intuitive understanding.
|
CommonCrawl
|
Generalized algebra-valued models of set theory
Benedikt Löwe1, Sourav Tarafder2• Institutions (2)
University of Amsterdam1, University of Calcutta2
01 Mar 2015-Review of Symbolic Logic (Cambridge University Press)-Vol. 8, Iss: 1, pp 192-205
TL;DR: A model of a paraconsistent logic that validates all axioms of the negation-free fragment of Zermelo-Fraenkel set theory is shown.
Abstract: We generalize the construction of lattice-valued models of set theory due to Takeuti, Titani, Kozawa and Ozawa to a wider class of algebras and show that this yields a model of a paraconsistent logic that validates all axioms of the negation-free fragment of Zermelo-Fraenkel set theory.
Summary (1 min read)
Reasonable implication algebras.
These facts are being used in the calculations later in the paper.
It is easy to check that all Boolean algebras and Heyting algebras are reasonable and deductive implication algebras.
The following two examples will be crucial during the rest of the paper.
3.1. Definitions and basic properties.
In the Boolean-valued case, the names behave nicely with respect to their interpretations as names for sets.
If two names denote the same object, then the properties of the object do not depend on the name you are using.
Negation and paraconsistency.
This definition, together with minimal requirements, makes it impossible to have paraconsistency.
This gives us the following result immediately: THEOREM 6.1.
In particular, the class of names x such that N (x) does not form a ∼-equivalence class.
The authors' paraconsistent set theory behaves very differently from the considerations of paraconsistent set theory in the mentioned papers, as the authors can show that the axiom scheme of Comprehension is not valid in their model: THEOREM 6.3.
REVIEW OF SYMBOLIC LOGIC, Volume 8, Number 1, March 2015
BENEDIKT LÖWE
Institute for Logic, Language and Computation, Universiteit van Amsterdam and
Fachbereich Mathematik, Universität Hamburg
SOURAV TARAFDER
Department of Commerce (Morning), St. Xavier's College and Department of Pure
Mathematics, Calcutta University
Abstract. We generalize the construction of lattice-valued models of set theory due to Takeuti,
Titani, Kozawa and Ozawa to a wider class of algebras and show that this yields a model of a
paraconsistent logic that validates all axioms of the negation-free fragment of Zermelo-Fraenkel
set theory.
§1. Introduction. If B is any Boolean algebra and V a model of set theory, we can
construct by transfinite recursion the Boolean-valued model of set theory V^B consisting of names for sets, an extended language L^{V^B}, and an interpretation function ⟦·⟧ : L^{V^B} → B assigning truth values in B to formulas of the extended language. Using the notion of validity derived from ⟦·⟧, all of the axioms of ZFC are valid in V^B. Boolean-valued models were introduced in the 1960s by Scott, Solovay, and Vopěnka; an excellent exposition of
the theory can be found in Bell (2005).
Replacing the Boolean algebra in the above construction by a Heyting algebra H, one obtains a Heyting-valued model of set theory V^H. The proofs of the Boolean case transfer to the Heyting-valued case to yield that V^H is a model of IZF, intuitionistic ZF, where
the logic of the Heyting algebra H determines the logic of the Heyting-valued model of
set theory (cf. Grayson, 1979; Bell, 2005, chap. 8). This idea was further generalized
by Takeuti & Titani (1992), Titani (1999), Titani & Kozawa (2003), Ozawa (2007), and
Ozawa (2009), replacing the Heyting algebra H by appropriate lattices that allow models
of quantum set theory (where the algebra is an algebra of truth-values in quantum logic) or
fuzzy set theory.
In this paper, we shall generalize this model construction further to work on algebras that
we shall call reasonable implication algebras (§2). These algebras do not have a negation
symbol, and hence we shall be focusing on the negation-free fragment of first-order logic:
the closure under the propositional connectives ∧, ∨, ⊥, and →. Classically, of course,
every formula is equivalent to one in the negation-free fragment (since ¬ϕ is equivalent to
ϕ → ⊥). In §3, we define the model construction and prove that, assuming a number of additional assumptions (among them a property we call the bounded quantification property), we have constructed a model of the negation-free fragment of ZF^− (which is classically equivalent to ZF^−).
In §4 and §5, we apply the results of §3 to a particular three-valued algebra where we
prove the bounded quantification property (§4) and the axiom scheme of Foundation (§5).
Finally, in §6, we add a negation symbol to our language. With the appropriate negation,
our example from §4 and §5 becomes a model of a paraconsistent set theory that validates
all formulas from the negation-free fragment of ZF. We compare our paraconsistent set
theory to other paraconsistent set theories from the literature and observe that it is fundamentally different from them.
We should like to mention that Joel Hamkins independently investigated the construction
that is at the heart of this paper and proved a result equivalent to our Theorem 6.3 (presented
at the Workshop on Paraconsistent Set Theory in Storrs, CT in October 2013).
§2. Reasonable implication algebras.
Implication algebras and implication-negation algebras. In this paper, all structures
(A, ∧, ∨, 0, 1) will be complete distributive lattices with smallest element 0 and largest
element 1. As usual, we abbreviate x ∧ y = x as x ≤ y. An expansion of this structure by
an additional binary operation ⇒ is called an implication algebra and an expansion with
⇒ and another unary operation ∗ is called an implication-negation algebra. We emphasize that no requirements are made for ⇒ and ∗ at this point.
Interpreting propositional logic in algebras. By L_Prop we denote the language of propositional logic without negation (with connectives ∧, ∨, →, and ⊥ and countably many variables Var); we write L_{Prop,¬} for the expansion of this language to include the negation symbol ¬. Let L be either L_Prop or L_{Prop,¬}, and let A be either an implication algebra or an implication-negation algebra, respectively. Any map ι from Var to A (called an assignment) allows us to interpret L-formulas ϕ as elements ι(ϕ) of the algebra. Par abus de langage, for an L-formula ϕ and some X ⊆ A, we write ϕ ∈ X for "for all assignments ι : Var → A, we have that ι(ϕ) ∈ X". As usual, we call a set D ⊆ A a filter if the following four conditions hold: (i) 1 ∈ D, (ii) 0 ∉ D, (iii) if x, y ∈ D, then x ∧ y ∈ D, and (iv) if x ∈ D and x ≤ y, then y ∈ D; in this context, we call filters designated sets of truth values, since the algebra A and a filter D together determine a logic ⊨_{A,D} by defining for every set Γ of L-formulas and every L-formula ϕ:

Γ ⊨_{A,D} ϕ  :⟺  if for all ψ ∈ Γ, we have ψ ∈ D, then ϕ ∈ D.

We write Pos_A := {x ∈ A ; x ≠ 0} for the set of positive elements in A. In all of the examples considered in this paper, this set will be a filter.
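Because the algebras considered in this paper are finite, filter conditions like this can be confirmed by brute force. A minimal Python sketch, assuming the three-element chain {0, 1/2, 1} with meet = min (as in the recurring examples below):

```python
from itertools import product

# Positive elements of the three-valued chain {0, 1/2, 1}: Pos = {x : x != 0}.
# Meet is assumed to be min on this chain.
A = (0.0, 0.5, 1.0)
D = {x for x in A if x != 0}

is_filter = (
    1.0 in D and 0.0 not in D
    and all(min(x, y) in D for x, y in product(D, repeat=2))   # closed under meet
    and all(y in D for x in D for y in A if x <= y)            # upward closed
)
print(is_filter)   # True -> Pos is a designated set of truth values
```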
The negation-free fragment. If L is any first-order language including the connectives ∧, ∨, ⊥, and → and Γ is any class of L-formulas, we denote the closure of Γ under ∧, ∨, ⊥, ∃, ∀, and → by Cl(Γ) and call it the negation-free closure of Γ. A class Γ of formulas is negation-free closed if Cl(Γ) = Γ. By NFF we denote the negation-free closure of the atomic formulas; its elements are called the negation-free formulas.
Obviously, if L does not contain any connectives beyond ∧, ∨, ⊥, and →, then NFF =
L. Similarly, if the logic we are working in allows to define negation in terms of the other
connectives (as is the case, e.g., in classical logic), then every formula is equivalent to one
in NFF.
In some contexts, our negation-free fragment is called the positive fragment; in other contexts,
the positive closure is the closure under
∧, ∨, ⊥, ∃, and ∀ (not including →). In order to avoid
confusion with the latter contexts, we use the phrase "negation-free" rather than "positive".
Reasonable implication algebras. We call an implication algebra A = (A, ∧, ∨, 0,
1, ⇒) reasonable if the operation ⇒ satisfies the following axioms:
P1 (x ∧ y) ≤ z implies x ≤ (y ⇒ z),
P2 y ≤ z implies (x ⇒ y) ≤ (x ⇒ z), and
P3 y ≤ z implies (z ⇒ x) ≤ (y ⇒ x).
We say that a reasonable implication algebra is deductive if
((x ∧ y) ⇒ z) = (x ⇒ (y ⇒ z)).
It is easy to see that any reasonable implication algebra satisfies that x ≤ y implies x ⇒ y = 1. Similarly, it is easy to see that in reasonable and deductive implication algebras, we
have (x ⇒ y) = (x ⇒ (x ∧ y)). These facts are being used in the calculations later in
the paper. It is easy to check that all Boolean algebras and Heyting algebras are reasonable
and deductive implication algebras.
Recurring examples. The following two examples will be crucial during the rest of the paper: The three-valued Łukasiewicz algebra Ł_3 = ({0, 1/2, 1}, ∧, ∨, ⇒, 0, 1) with operations defined as in Figure 1 is a reasonable, but non-deductive implication algebra. The three-valued algebra PS_3 = ({0, 1/2, 1}, ∧, ∨, ⇒, 0, 1) with operations defined as in Figure 2 is a reasonable and deductive implication algebra which is not a Heyting algebra. Let us emphasize that, contrary to usage in other papers, we consider Ł_3 and PS_3 as implication algebras without negation (cf. §6 for adding negations to PS_3).
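Since Ł_3 is finite, properties P1–P3 and deductiveness can be checked exhaustively. The sketch below does this in Python; because Figure 1 is not reproduced in this extraction, it assumes the standard three-valued Łukasiewicz connectives (meet = min, join = max, x ⇒ y = min(1, 1 − x + y)), so it is an illustrative check rather than a transcription of the paper's table:

```python
from itertools import product

# Standard three-valued Lukasiewicz connectives on {0, 1/2, 1} (an assumption,
# since Figure 1 is not reproduced here): meet = min, join = max,
# and x => y = min(1, 1 - x + y).
A = (0.0, 0.5, 1.0)

def imp(x, y):
    return min(1.0, 1.0 - x + y)

def holds(p, q):             # "p implies q" at the meta level
    return (not p) or q

reasonable = all(
    holds(min(x, y) <= z, x <= imp(y, z)) and       # P1
    holds(y <= z, imp(x, y) <= imp(x, z)) and       # P2
    holds(y <= z, imp(z, x) <= imp(y, x))           # P3
    for x, y, z in product(A, repeat=3)
)

deductive = all(
    imp(min(x, y), z) == imp(x, imp(y, z))
    for x, y, z in product(A, repeat=3)
)

print(reasonable)   # True  -> L_3 is a reasonable implication algebra
print(deductive)    # False -> e.g. x = y = 1/2, z = 0 gives 1/2 on the left, 1 on the right
```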
§3. The model construction.
3.1. Definitions and basic properties. Our construction follows very closely the
Boolean-valued construction as it can be found in Bell (2005). We fix a model of set theory
V and an implication algebra A = (A, ∧, ∨, 0, 1, ⇒) and construct a universe of names
by transfinite recursion:
V_α^A = {x ; x is a function and ran(x) ⊆ A and there is ξ < α with dom(x) ⊆ V_ξ^A} and
V^A = {x ; ∃α (x ∈ V_α^A)}.

We note that this definition does not depend on the algebraic operations in A, but only on the set A, so any expansion of A to a richer language will give the same class of names V^A. By L_∈, we denote the first-order language of set theory using only the propositional connectives ∧, ∨, ⊥, and →. We can now expand this language by adding all of the elements of V^A as constants; the expanded (class-sized) language will be called L^{V^A}. As in the Boolean case (Bell, 2005, Induction Principle 1.7), the (meta-)induction principle for V^A can be proved by a simple induction on the rank function: for every property Φ of names, if for all x ∈ V^A, we have

∀y ∈ dom(x) (Φ(y)) implies Φ(x),

then all names x ∈ V^A have the property Φ.
As in the Boolean case, we can now define a map ⟦·⟧ assigning to each negation-free formula in L^{V^A} a truth value in A as follows. If u, v ∈ V^A and ϕ, ψ ∈ NFF, we define

⟦⊥⟧ = 0,
⟦u ∈ v⟧ = ⋁_{x ∈ dom(v)} (v(x) ∧ ⟦x = u⟧),
⟦u = v⟧ = ⋀_{x ∈ dom(u)} (u(x) ⇒ ⟦x ∈ v⟧) ∧ ⋀_{y ∈ dom(v)} (v(y) ⇒ ⟦y ∈ u⟧),
⟦ϕ ∧ ψ⟧ = ⟦ϕ⟧ ∧ ⟦ψ⟧,
⟦ϕ ∨ ψ⟧ = ⟦ϕ⟧ ∨ ⟦ψ⟧,
⟦ϕ → ψ⟧ = ⟦ϕ⟧ ⇒ ⟦ψ⟧,
⟦∀x ϕ(x)⟧ = ⋀_{u ∈ V^A} ⟦ϕ(u)⟧, and
⟦∃x ϕ(x)⟧ = ⋁_{u ∈ V^A} ⟦ϕ(u)⟧.
As usual, we abbreviate ∃x(x ∈ u ∧ ϕ(x)) by ∃x ∈ u ϕ(x) and ∀x(x ∈ u → ϕ(x)) by
∀x ∈ u ϕ(x) and call these bounded quantifiers. Bounded quantifiers will play a crucial
role in this paper.
If D is a filter on A and σ is a sentence of L^{V^A}, we say that σ is D-valid in V^A if ⟦σ⟧ ∈ D, and write V^A ⊨_D σ.
In the Boolean-valued case, the names behave nicely with respect to their interpretations as names for sets. For instance, if two names denote the same object, then the properties of the object do not depend on the name you are using. In our generalized setting, we have to be very careful since many of these reasonable rules do not hold in general (cf. §4).

PROPOSITION 3.1. If A is a reasonable implication algebra and u ∈ V^A, we have that ⟦u = u⟧ = 1 and u(x) ≤ ⟦x ∈ u⟧ (for each x ∈ dom(u)).
Proof. This is an easy induction, using the fact that we have that in all reasonable
implication algebras, x ≤ y implies x ⇒ y = 1.
However, things break down rather quickly if you go beyond Proposition 3.1. The inequality ⟦u = v⟧ ∧ ⟦v = w⟧ ≤ ⟦u = w⟧ representing transitivity of equality of names does not hold in general in the model constructed over Ł_3: consider the functions

p_1 = {⟨∅, 0⟩},  p_2 = {⟨∅, 1/2⟩},  and  p_3 = {⟨∅, 1⟩}.

Then it can be easily checked that ⟦p_1 = p_2⟧ = ⟦p_2 = p_3⟧ = 1/2 > ⟦p_1 = p_3⟧ = 0.
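This calculation is easy to verify mechanically. The sketch below implements the ⟦u ∈ v⟧ and ⟦u = v⟧ clauses from the definition above for these three names, again assuming the standard Łukasiewicz implication for Ł_3; the encoding of names as tuples of (name, value) pairs and the helper names (elem, eq, p1, p2, p3) are ours, not the paper's:

```python
# Truth values of membership/equality over L_3, with the Lukasiewicz
# implication x => y = min(1, 1 - x + y) assumed (Figure 1 is not reproduced
# in this extraction). Names are encoded as tuples of (name, value) pairs.

def imp(x, y):
    return min(1.0, 1.0 - x + y)

EMPTY = ()                      # the empty name (the function with empty domain)
p1 = ((EMPTY, 0.0),)
p2 = ((EMPTY, 0.5),)
p3 = ((EMPTY, 1.0),)

def elem(u, v):
    """[[u in v]] = sup over x in dom(v) of ( v(x) meet [[x = u]] )."""
    return max((min(val, eq(x, u)) for x, val in v), default=0.0)

def eq(u, v):
    """[[u = v]] = inf_x ( u(x) => [[x in v]] )  meet  inf_y ( v(y) => [[y in u]] )."""
    left = min((imp(val, elem(x, v)) for x, val in u), default=1.0)
    right = min((imp(val, elem(y, u)) for y, val in v), default=1.0)
    return min(left, right)

print(eq(p1, p2), eq(p2, p3), eq(p1, p3))   # 0.5 0.5 0.0 -- transitivity fails
```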
Fig. 2. Connectives for PS3.
Fig. 1. Connectives for the algebra Ł3.
Games for Functions: Baire Classes, Weihrauch Degrees, Transfinite Computations, and Ranks
Hugo Nobrega
01 Dec 2019-The Bulletin of Symbolic Logic
TL;DR: Modifications of Semmes's game characterization of the Borel functions are defined, obtaining game characterizations of the Baire class $\alpha$ functions for each fixed $\alpha < \omega_1$.
Abstract: Game characterizations of classes of functions in descriptive set theory have their origins in the seminal work of Wadge, with further developments by several others. In this thesis we study such characterizations from several perspectives. We define modifications of Semmes's game characterization of the Borel functions, obtaining game characterizations of the Baire class $\alpha$ functions for each fixed $\alpha < \omega_1$. We also define a construction of games which transforms a game characterizing a class $\Lambda$ of functions into a game characterizing the class of functions which are piecewise $\Lambda$ on a countable partition by $\Pi^0_\alpha$ sets, for each $0 < \alpha < \omega_1$. We then define a parametrized Wadge game by using computable analysis, and show how the parameters affect the class of functions that is characterized by the game. As an application, we recast our games characterizing the Baire classes into this framework. Furthermore, we generalize our game characterizations of function classes to generalized Baire spaces, show how the notion of computability on Baire space can be transferred to generalized Baire spaces, and show that this is appropriate for computable analysis by defining a representation of Galeotti's generalized real line and analyzing the Weihrauch degree of the intermediate value theorem for that space. Finally, we show how the game characterizations of function classes discussed lead in a natural way to a stratification of each class into a hierarchy, intuitively measuring the complexity of functions in that class. This idea and the results presented open new paths for further research.
Ordinals in an Algebra-Valued Model of a Paraconsistent Set Theory
Sourav Tarafder1, Sourav Tarafder2• Institutions (2)
St. Xavier's College-Autonomous, Mumbai1, University of Calcutta2
TL;DR: It is proved that the collection of all ordinals is not a set in this model which is dissimilar to the other existing paraconsistent set theories.
Abstract: This paper deals with ordinal numbers in an algebra-valued model of a paraconsistent set theory. It is proved that the collection of all ordinals is not a set in this model which is dissimilar to the other existing paraconsistent set theories. For each ordinal α of classical set theory α-like elements are defined in the mentioned algebra-valued model whose collection is not singleton. It is shown that two α-like elements (for same α) may perform conversely to validate a given formula of the corresponding paraconsistent set theory.
A Bridge between Q-Worlds
Andreas Döring, Benjamin Eva, Masanao Ozawa
18 Dec 2018-arXiv: Quantum Physics
TL;DR: A unifying framework is provided that allows to better understand the relationship between different Q-worlds, and a general method for transferring concepts and results between TQT and QST is defined, thereby significantly increasing the expressive power of both approaches.
Abstract: Quantum set theory (QST) and topos quantum theory (TQT) are two long running projects in the mathematical foundations of quantum mechanics that share a great deal of conceptual and technical affinity. Most pertinently, both approaches attempt to resolve some of the conceptual difficulties surrounding quantum mechanics by reformulating parts of the theory inside of non-classical mathematical universes, albeit with very different internal logics. We call such mathematical universes, together with those mathematical and logical structures within them that are pertinent to the physical interpretation, `Q-worlds'. Here, we provide a unifying framework that allows us to (i) better understand the relationship between different Q-worlds, and (ii) define a general method for transferring concepts and results between TQT and QST, thereby significantly increasing the expressive power of both approaches. Along the way, we develop a novel connection to paraconsistent logic and introduce a new class of structures that have significant implications for recent work on paraconsistent set theory.
Cites background from "Generalized algebra-valued models o..."
...In recent years, Weber [36, 37], Brady [4], Löwe and Tarafder [21] and others have done exciting work in exploring the possibility of developing a nontrivial set theory built over a paraconsistent logic....
..., Libert [20], Löwe and Tarafder [21])....
...However, there is still no well established model theory for paraconsistent set theory, despite some promising recent developments (see e.g., Libert [20], Löwe and Tarafder [21])....
...[21] Löwe, B. & Tarafder, S. (2015)....
A Paraconsistent Logic Obtained from an Algebra-Valued Model of Set Theory
Sourav Tarafder1, Sourav Tarafder2, Mihir Kr. Chakraborty3• Institutions (3)
St. Xavier's College-Autonomous, Mumbai1, University of Calcutta2, Jadavpur University3
TL;DR: Soundness and completeness theorems are established in a three-valued paraconsistent logic obtained from some algebra-valued model of set theory.
Abstract: This paper presents a three-valued paraconsistent logic obtained from some algebra-valued model of set theory. Soundness and completeness theorems are established. The logic has been compared with other three-valued paraconsistent logics.
Posted Content•
Twist-Valued Models for Three-valued Paraconsistent Set Theory
Walter Carnielli1, Marcelo E. Coniglio1• Institutions (1)
State University of Campinas1
26 Nov 2019-arXiv: Logic
TL;DR: It is argued that the implication operator of LPT0 is more suitable for a paraconsistent set theory than the implication of PS3, since it allows for genuinely inconsistent sets w such that [(w = w)] = 1/2 .
Abstract: Boolean-valued models of set theory were independently introduced by Scott, Solovay and Vop\v{e}nka in 1965, offering a natural and rich alternative for describing forcing. The original method was adapted by Takeuti, Titani, Kozawa and Ozawa to lattice-valued models of set theory. After this, L\"{o}we and Tarafder proposed a class of algebras based on a certain kind of implication which satisfy several axioms of ZF. From this class, they found a specific 3-valued model called PS3 which satisfies all the axioms of ZF, and can be expanded with a paraconsistent negation *, thus obtaining a paraconsistent model of ZF. The logic (PS3, *) coincides (up to language) with da Costa and D'Ottaviano logic J3, a 3-valued paraconsistent logic that has been proposed independently in the literature by several authors and with different motivations such as CluNs, LFI1 and MPT. We propose in this paper a family of algebraic models of ZFC based on LPT0, another linguistic variant of J3 introduced by us in 2016. The semantics of LPT0, as well as of its first-order version QLPT0, is given by twist structures defined over Boolean algebras. From this, it is possible to adapt the standard Boolean-valued models of (classical) ZFC to twist-valued models of an expansion of ZFC by adding a paraconsistent negation. We argue that the implication operator of LPT0 is more suitable for a paraconsistent set theory than the implication of PS3, since it allows for genuinely inconsistent sets w such that [(w = w)] = 1/2. This implication is not a 'reasonable implication' as defined by L\"{o}we and Tarafder. This suggests that 'reasonable implication algebras' are just one way to define a paraconsistent set theory. Our twist-valued models are adapted to provide a class of twist-valued models for (PS3,*), thus generalizing L\"{o}we and Tarafder's result. It is shown that they are in fact models of ZFC (not only of ZF).
A Taxonomy of C-systems
Walter Carnielli1, João Marcos1• Institutions (1)
06 Aug 2001-arXiv: Logic
TL;DR: An enormous variety of paraconsistent logics in the literature is shown to constitute C-systems, and a novel notion of consistency is introduced.
Abstract: A thorough investigation of the foundations of paraconsistent logics. Relations between logical principles are formally studied, a novel notion of consistency is introduced, the logics of formal inconsistency, and the subclasses of C-systems and dC-systems are defined and studied. An enormous variety of paraconsistent logics in the literature is shown to constitute C-systems.
Paraconsistent Logic: Essays on the Inconsistent
Graham Priest, Richard Sylvan, Jean Norman
Quantum Set Theory
Gaisi Takeuti1• Institutions (1)
University of Illinois at Urbana–Champaign1
TL;DR: This paper studies set theory based on quantum logic, which is the lattice of all closed linear subspaces of a Hilbert space and shows the fact that there are many complete Boolean algebras inside quantum logic.
Abstract: In this paper, we study set theory based on quantum logic. By quantum logic, we mean the lattice of all closed linear subspaces of a Hilbert space. Since quantum logic is an intrinsic logic, i.e. the logic of the quantum world, (cf. 1) it is an important problem to develop mathematics based on quantum logic, more specifically set theory based on quantum logic. It is also a challenging problem for logicians since quantum logic is drastically different from the classical logic or the intuitionistic logic and consequently mathematics based on quantum logic is extremely difficult. On the other hand, mathematics based on quantum logic has a very rich mathematical content. This is clearly shown by the fact that there are many complete Boolean algebras inside quantum logic. For each complete Boolean algebra B, mathematics based on B has been shown by our work on Boolean valued analysis 4, 5, 6 to have rich mathematical meaning. Since mathematics based on B can be considered as a sub-theory of mathematics based on quantum logic, there is no doubt about the fact that mathematics based on quantum logic is very rich. The situation seems to be the following. Mathematics based on quantum logic is too gigantic to see through clearly.
"Generalized algebra-valued models o..." refers methods in this paper
...This idea was further generalized by Takeuti & Titani (1992), Titani (1999), Titani & Kozawa (2003), Ozawa (2007), and Ozawa (2009), replacing the Heyting algebra H by appropriate lattices that allow models of quantum set theory (where the algebra is an algebra of truth-values in quantum logic) or…...
...In the Boolean and Heyting cases, as well as in the algebras considered by Takeuti & Titani (1992), Titani (1999), Titani & Kozawa (2003), and Ozawa (2007, 2009), negation is defined in terms of implication via a∗ := a ⇒ 0....
Heyting-valued models for intuitionistic set theory
R. J. Grayson1• Institutions (1)
University of Amsterdam1
"Generalized algebra-valued models o..." refers background in this paper
...The proofs of the Boolean case transfer to the Heyting-valued case to yield that VH is a model of IZF, intuitionistic ZF, where the logic of the Heyting algebra H determines the logic of the Heyting-valued model of set theory (cf. Grayson, 1979; Bell, 2005, chap....
Set Theory: Boolean-Valued Models and Independence Proofs
Abstract: Foreword Preface List of Problems 0 Boolean and Heyting Algebras: The Essentials 1 Boolean-Valued Models: First Steps 2 Forcing and Some Independence Proofs 3 Group Actions on V(B) and the Independence of the Axiom of Choice 4 Generic Ultrafilters and Transitive Models of ZFC 5 Cardinal Collapsing, Boolean Isomorphism and Applications to the Theory of Boolean Algebras 6 Iterated Boolean Extensions, Martin's Axiom and Souslin's Hypothesis 7 Boolean-Valued Analysis 8 Intuitionistic Set Theory and Heyting-Algebra-Valued Models Appendix Boolean- and Heyting-Algebra-Valued Models as Categories Historical Notes Bibliography Index of Symbols Index of Terms
"Generalized algebra-valued models o..." refers background or methods in this paper
...…the axioms and axiom schemes that we use in our proofs (in the schemes, ϕ is a formula with n + 2 free variables); the concrete formulations follows Bell (2005) very closely: ∀x∀y[∀z(z ∈ x ↔ z ∈ y) → x = y] (Extensionality) ∀x∀y∃z∀w(w ∈ z ↔ (w = x ∨ w = y)) (Pairing) ∃x[∃y(∀z(z ∈ y → ⊥) ∧ y ∈ x)…...
...Using the notion of validity derived from · , all of the axioms of ZFC are valid in VB. Boolean-valued models were introduced in the 1960s by Scott, Solovay, and Vopěnka; an excellent exposition of the theory can be found in Bell (2005)....
...Our construction follows very closely the Boolean-valued construction as it can be found in Bell (2005)....
...In the Boolean case, the inequality proved in Proposition 3.2 is an equality (Bell, 2005, p. 23): ∃x ∈ u ϕ(x) = ∨ x∈dom(u) ( u(x) ∧ ϕ(x) ) and ∀x ∈ u ϕ(x) = ∧ x∈dom(u) ( u(x) ⇒ ϕ(x) ) ....
...As in the Boolean case (Bell, 2005, Induction Principle 1.7), the (meta-)induction principle for VA can be proved by a simple induction on the rank function: for every property of names, if for all x ∈ VA, we have ∀y ∈ dom(x)( (y)) implies (x), then all names x ∈ VA have the property ....
Transfer principle in quantum set theory
01 Jun 2007-Journal of Symbolic Logic
Masanao Ozawa
A lattice-valued set theory
01 Aug 1999-Archive for Mathematical Logic
Satoko Titani
Orthomodular-valued models for quantum set theory
01 Dec 2017-Review of Symbolic Logic
Sourav Tarafder, Sourav Tarafder
Towards a Paraconsistent Quantum Set Theory
Benjamin Eva
Q1. What are the contributions in this paper?
The authors generalize the construction of lattice-valued models of set theory due to Takeuti, Titani, Kozawa and Ozawa to a wider class of algebras and show that this yields a model of a paraconsistent logic that validates all axioms of the negation-free fragment of Zermelo-Fraenkel set theory.
|
CommonCrawl
|
Spray drying OZ439 nanoparticles to form stable, water-dispersible powders for oral malaria therapy
Kurt D. Ristroph1,
Jie Feng1,
Simon A. McManus1,
Yingyue Zhang1,
Kai Gong2,3,
Hanu Ramachandruni4,
Claire E. White2,3 &
Robert K. Prud'homme ORCID: orcid.org/0000-0003-2858-00971
OZ439 is a new chemical entity which is active against drug-resistant malaria and shows potential as a single-dose cure. However, development of an oral formulation with desired exposure has proved problematic, as OZ439 is poorly soluble (BCS Class II drug). In order to be feasible for low and middle income countries (LMICs), any process to create or formulate such a therapeutic must be inexpensive at scale, and the resulting formulation must survive without refrigeration even in hot, humid climates. We here demonstrate the scalability and stability of a nanoparticle (NP) formulation of OZ439. Previously, we applied a combination of hydrophobic ion pairing and Flash NanoPrecipitation (FNP) to formulate OZ439 NPs 150 nm in diameter using the inexpensive stabilizer hydroxypropyl methylcellulose acetate succinate (HPMCAS). Lyophilization was used to process the NPs into a dry form, and the powder's in vitro solubilization was over tenfold higher than unprocessed OZ439.
In this study, we optimize our previous formulation using a large-scale multi-inlet vortex mixer (MIVM). Spray drying is a more scalable and less expensive operation than lyophilization and is, therefore, optimized to produce dry powders. The spray dried powders are then subjected to a series of accelerated aging stability trials at high temperature and humidity conditions.
The spray dried OZ439 powder's dissolution kinetics are superior to those of lyophilized NPs. The powder's OZ439 solubilization profile remains constant after 1 month in uncapped vials in an oven at 50 °C and 75% RH, and for 6 months in capped vials at 40 °C and 75% RH. In fasted-state intestinal fluid, spray dried NPs achieved 80–85% OZ439 dissolution, to a concentration of 430 µg/mL, within 3 h. In fed-state intestinal fluid, 95–100% OZ439 dissolution is achieved within 1 h, to a concentration of 535 µg/mL. X-ray powder diffraction and differential scanning calorimetry profiles similarly remain constant over these periods.
The combined nanofabrication and drying process described herein, which utilizes two continuous unit operations that can be operated at scale, is an important step toward an industrially-relevant method of formulating the antimalarial OZ439 into a single-dose oral form with good stability against humidity and temperature.
Great strides have been taken in the fight to eradicate malaria, and the number of deaths from the disease has been reduced by as much as 62% over the past decade and a half [1]. However, malaria remains one of the most prevalent infectious diseases in the world, infecting 219 million individuals and killing 435,000 in 2017 [2]. Among the most successful tools in this fight is the artemisinin combination therapy (ACT) [3], but recent years have seen the development of resistance to ACT therapy [4]. Resistance is attributed, in part, to poor patient adherence to the ACT regimen [5], which consists of twelve pills taken over the course of 3 days [5, 6]. A single-dose malaria cure—ideally, in oral dosage form—is therefore highly desirable.
OZ439 is a promising antimalarial drug that is being pursued as a single-dose oral malaria therapeutic, in part because of its high potency and the fact that resistance to it has not been observed [7,8,9,10]. To formulate as a single dose, the bioavailability of OZ439 needs to be increased. This work is a continuation of our previous study, in which we formulated OZ439 into polymeric nanoparticles via the scalable nanofabrication process Flash NanoPrecipitation (FNP) using Hypromellose Acetate Succinate as a stabilizer [11]. Formulation into NPs helps OZ439 overcome its poor oral bioavailability via two mechanisms: first, the high surface-to-volume ratio of a NP formulation increases dissolution rate; and second, x-ray powder diffraction (XRPD) and differential scanning calorimetry (DSC) profiles showed that OZ439 within the NPs is amorphous, rather than crystalline, leading to higher solubility and faster dissolution kinetics [11].
In this paper we focus on the translation of the earlier laboratory study to a large-scale process that could be used in a commercial, cost-effective, good manufacturing practice (GMP) drug production line. The key elements of this translation are (1) moving the NP formation process from the Confined Impinging Jet (CIJ) mixer to the large-scale and continuous Multi-Inlet Vortex Mixer (MIVM), and (2) moving from lyophilization to continuous spray drying to produce dry powders. NP stability and crystallinity are compared for samples made by the CIJ versus the MIVM process. Spray drying conditions including inlet temperature and gas flow rate are optimized. The dissolution kinetics of the powders in simulated gastric fluid and intestinal fluids in fasted and fed state conditions are presented. Results from a 6-month aging study show that the spray dried NPs are completely stable over this time period. An interesting final conclusion is that the dissolution kinetics of OZ439 NP powders processed by spray drying are superior to those of lyophilized NP powders.
Affinisol HPMCAS 126 G (> 94% purity) and Methocel E3 Premium LV Hydroxypropyl Methylcellulose (HPMC E3) were generously provided by Dow Chemical. Tetrahydrofuran (HPLC grade, 99.9%), methanol (HPLC grade, 99.9% purity) and acetonitrile (HPLC grade, 99.9% purity) were purchased from Fisher Chemicals. Sodium oleate (> 97% purity) was purchased from TCI America. Fasted-state simulated intestinal fluid (FaSSIF), fed-state simulated intestinal fluid (FeSSIF) and fasted-state simulated gastric fluid (FaSSGF) powders were purchased from biorelevant.com. OZ439 mesylate was supplied by Medicines for Malaria Venture (MMV).
Nanoparticle formation and characterization
Nanoparticles stabilized by HPMCAS and containing OZ439:oleate were formed via FNP. The FNP process has been described in detail previously [12, 13]. It involves two components: (1) rapid micromixing between a water-miscible organic solvent stream and an aqueous anti-solvent stream, and (2) kinetically arrested aggregation of the drug nanoparticle by adsorption of the stabilizer on its surface. The drug and stabilizing polymer are dissolved in the solvent stream. Upon mixing, which occurs on time scales of O(1) ms, the drug and amphiphilic portions of the stabilizing polymer adsorb on the growing aggregate and arrest growth. Nanoparticles from 25 to 450 nm can be produced with narrow size distributions and at high loadings.
OZ439 is a synthetic trioxolane which was provided in a mesylate salt form (Fig. 1). In the mesylate salt form or free base form, the solubility of OZ439 is too high to create stable nanoparticles by antisolvent precipitation. When either of these forms is used, NPs initially formed during FNP rapidly succumb to Ostwald ripening and grow in size [14, 15]. To form stable NPs, sodium oleate was included in the organic feed stream and acted as a hydrophobic ion pairing agent. Cationic OZ439 and anionic oleate ions paired together, and the resulting complex was sufficiently hydrophobic to precipitate during the mixing step.
Fig. 1: From left: OZ439 cation; oleate anion; mesylate anion
Previously, we had applied FNP to OZ439 using a two-inlet lab-scale CIJ mixer [11], which requires a quenching step to stabilize the NPs against Ostwald ripening. As the process is intended to be continuous and at large scale, we here employed a multi-inlet vortex mixer (MIVM) for the formation of nanoparticles. The MIVM allows unequal volumetric flow rates between its four inlets. By introducing three water antisolvent streams, each at three times the volumetric flow rate of the sole organic stream, the MIVM achieved the same final nanoparticle quenching by dilution of the organic solvent concentration, and thus bypassed the quenching step. Figure 2 is a schematic of the two mixers as applied to this process.
Schematic of CIJ mixer (left) and MIVM (right) to form OZ439 nanoparticles by FNP. The MIVM operates continuously and does not require the additional quenching step required of the CIJ mixing geometry
Nanoparticles were produced via FNP in the MIVM using sodium oleate as a hydrophobic counterion. OZ439 mesylate (5 mg/mL), sodium oleate (5.38 mg/mL), and HPMCAS 126 (5 mg/mL) were dissolved in a mixture of 33% methanol and 67% THF. This stream was loaded into a syringe and attached to the MIVM, along with three syringes containing DI water. Using a syringe pump (Harvard Apparatus, Massachusetts, USA), the organic stream and water streams were fed into the MIVM at controlled flow rates. The organic stream was fed at 16 mL/min, and each of the water streams was fed at 48 mL/min, such that the resulting NP suspension contained 10% organic solvent by volume.
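The stream ratios above fix both the residual solvent fraction and the solids concentration in the quenched suspension; a minimal Python sketch of that arithmetic (all values taken from this paragraph):

```python
# Feed streams to the MIVM (values from the text above)
organic_flow = 16.0          # mL/min, solvent stream (33% MeOH / 67% THF)
water_flow_each = 48.0       # mL/min, each of the three antisolvent streams
n_water_streams = 3

total_flow = organic_flow + n_water_streams * water_flow_each   # 160 mL/min
organic_fraction = organic_flow / total_flow                    # ~0.10 (10 vol% organic)

# Solutes dissolved in the organic stream (mg/mL)
feed_conc = {"OZ439 mesylate": 5.0, "sodium oleate": 5.38, "HPMCAS 126": 5.0}

# Concentration of each solute after dilution into the combined outlet stream
suspension_conc = {name: c * organic_fraction for name, c in feed_conc.items()}

print(f"total flow: {total_flow:.0f} mL/min, organic fraction: {organic_fraction:.0%}")
for name, c in suspension_conc.items():
    print(f"{name}: {c:.2f} mg/mL in the NP suspension")
```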
Nanoparticle mean size, size distribution, and polydispersity were measured by dynamic light scattering (DLS) in a Malvern Zetasizer Nano (Malvern Instruments, Worcestershire, United Kingdom). Following formation, nanoparticle samples were diluted tenfold in DI water immediately prior to measurement to reduce multiple scattering. The Zetasizer was operated at room temperature and used a detection angle of 173°. Measurements were taken in triplicate. DLS data were processed with Malvern's software using a distribution analysis based on a cumulant model. The cumulant analysis is defined in International Organization for Standardization (ISO) standard document 13321. The calculations of PDI are defined in the ISO standard document 13321:1996 E.
Lyophilization conditions
In order to process nanoparticle suspensions into dry powders for long-term storage and ease of shipping, a drying unit operation like lyophilization or spray drying was required. In lyophilization, a frozen sample is subjected to low temperatures and pressures, and ice and frozen organic solvents are removed by sublimation. Nanoparticles in the suspension are preserved during the freezing process through the addition of a cryoprotectant, usually an inert species that sterically prevents particle–particle interactions, overlap, and aggregation.
The lyophilization protocol used herein was the one optimized in our previous study [11]. In brief, HPMC E3 was added to nanoparticle suspensions following FNP at a 1:1 HPMC E3:solids ratio. The E3 acted as a cryoprotectant as the nanoparticle suspension was immersed in a bath of dry ice and acetone (− 78 °C) and rapidly frozen. Frozen samples were then transferred to a − 80 °C freezer overnight. Lyophilization took place in a VirTis AdVantage Pro BenchTop Freeze Dryer (SP Scientific, Pennsylvania, USA) at − 20 °C under vacuum.
Spray drying conditions
Spray drying was performed using a similar protocol to the one described in Feng et al. [16]. In brief, following nanoparticle formation, HPMC E3 was added to the nanoparticle suspension at a 1:1 HPMC E3:mass ratio to prevent particle aggregation during the drying process. Next, the suspension was fed into a Büchi B-290 spray drier (Büchi Corp., Delaware, USA) via a peristaltic pump at a flow rate of 8 mL/min. Drying parameters such as inlet temperature, mass ratio of added HPMC E3, and aspirator gas flow rate were optimized. The optimal inlet temperature was found to be 145 °C. Following drying, powders were collected and weighed in order to calculate the yield efficiency (YE) of the process. The powder particle size was observed using an Eclipse E200 bright-field microscope (Nikon Instruments, Japan).
Powder characterization: X-ray powder diffraction (XRPD), differential scanning calorimetry (DSC), and water content
XRPD: A D8 Advance diffractometer (Bruker Corporation, Massachusetts, USA) with Ag Kα radiation (λ = 0.56 Å) and a LynxEye-Xe detector was used for XRPD. A polyimide capillary tube (inner diameter = 1 mm) was loaded with 5–10 mg of powder and sealed with quick-setting epoxy. Scattering data were collected over values of 2θ from 3 to 20°, which correspond to Cu Kα 2θ values from 8.2 to 57.0°. A step size of 0.025° (0.067° for Cu Kα radiation) and a rate of 5 s/step were used. Note that in the following sections, all the XRPD results are presented in momentum transfer Q, where Q is a function of wavelength λ and diffraction angle θ: $Q = \frac{4\pi \sin(\theta)}{\lambda}$.
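Since the text quotes both the Ag Kα scattering angles and their Cu Kα equivalents, the conversion through momentum transfer Q is easy to verify. A short sketch follows; the 0.56 Å wavelength is the Ag Kα value given above, while 1.5406 Å is the commonly used Cu Kα1 wavelength and is an assumed value here.

```python
import math

LAMBDA_AG = 0.56     # Å, Ag Kα (from the text)
LAMBDA_CU = 1.5406   # Å, Cu Kα1 (assumed reference value)

def q_from_two_theta(two_theta_deg, wavelength):
    """Momentum transfer Q = 4*pi*sin(theta)/lambda, with theta = 2theta/2."""
    theta = math.radians(two_theta_deg / 2.0)
    return 4.0 * math.pi * math.sin(theta) / wavelength

def two_theta_from_q(q, wavelength):
    """Invert Q back to a scattering angle 2theta (degrees) for a given wavelength."""
    return 2.0 * math.degrees(math.asin(q * wavelength / (4.0 * math.pi)))

for tt_ag in (3.0, 20.0):
    q = q_from_two_theta(tt_ag, LAMBDA_AG)
    tt_cu = two_theta_from_q(q, LAMBDA_CU)
    print(f"Ag Kα 2θ = {tt_ag:4.1f}°  →  Q = {q:.2f} Å⁻¹  →  Cu Kα 2θ ≈ {tt_cu:.1f}°")
# Gives Q ≈ 0.59–3.90 Å⁻¹ and Cu Kα 2θ ≈ 8.3–57.1°, consistent with the range quoted above
# (small differences come from rounding of the wavelengths).
```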
DSC: A Q200 DSC (TA Instruments, Delaware, USA) was used for DSC measurements. 5–10 mg of sample was weighed into a pan and equilibrated at 20 °C under dry N2 atmosphere (50 mL/min). The samples were then heated at 5 °C/min from 20 to 300 °C. The scan was analyzed by TA Instruments Universal Analysis 2000 software.
Water content: A V20S Compact Volumetric KF Titrator (Mettler Toledo, Ohio, USA) was used to measure the water content of spray dried powders. 20–30 mg of powder was weighed and then deposited into the device's titration chamber. After 5 min of stirring, the automatic titration process was performed. Aquastar Titrant 5 and Aquastar Combimethanol (EMD Millipore, Massachusetts, USA) were used as the two-component titrant and solvent, respectively.
OZ439 dissolution
The in vitro solubilization of OZ439 from nanoparticle powders over time in simulated biorelevant media was measured for comparison against unencapsulated OZ439 mesylate. The solubilization protocol was designed to mimic the intended conditions of oral pediatric administration in the developing world; namely, that a mother would add water to the nanoparticle powder before feeding the suspension to an infant.
25 mg of powder, containing 3.37 mg OZ439, was weighed into a scintillation vial. 0.515 mL of water was added, and the powder was allowed to redisperse for 15 min (Step 1, Fig. 3). 0.057 mL of concentrated simulated gastric fluid (FaSSGF) was then added, such that the resulting mixture was at the proper pH and salt concentration of gastric fluid, and the suspension was placed in a water bath at 37 °C (Step 2, Fig. 3). After 15 min, 5.72 mL of either fasted-state (FaSSIF) or fed-state (FeSSIF) simulated intestinal fluid was added to the suspension (Step 3, Fig. 3). Thus the total amount of fluid added was 6.29 mL, and the maximum concentration of solubilized OZ439 was approximately 0.535 mg/mL. It should be noted that during long-term stability studies, the maximum possible concentration of OZ439 in a 25 mg powder sample was lowered slightly due to the sample having absorbed water over time; this was accounted for when calculating percent solubilization of OZ439.
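The 0.535 mg/mL figure follows directly from the masses and volumes listed above; a small Python sketch of that arithmetic (values copied from the protocol):

```python
# In vitro dissolution setup (values from the protocol above)
oz439_mass_mg = 3.37                 # OZ439 contained in 25 mg of NP powder
v_water_ml = 0.515                   # redispersion water
v_fassgf_concentrate_ml = 0.057      # concentrated simulated gastric fluid
v_intestinal_fluid_ml = 5.72         # FaSSIF or FeSSIF added in step 3

total_volume_ml = v_water_ml + v_fassgf_concentrate_ml + v_intestinal_fluid_ml
max_conc_mg_per_ml = oz439_mass_mg / total_volume_ml
drug_loading = oz439_mass_mg / 25.0  # OZ439 fraction of the dry powder

print(f"total fluid volume: {total_volume_ml:.2f} mL")              # ≈ 6.29 mL
print(f"max OZ439 concentration: {max_conc_mg_per_ml:.3f} mg/mL")   # ≈ 0.54 mg/mL, matching the ~0.535 mg/mL quoted above
print(f"OZ439 loading in powder: {drug_loading:.1%}")               # ≈ 13.5%
```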
Flow diagram showing steps taken during OZ439 in vitro solubilization tests. Following intestinal fluid addition in step 3, the maximum theoretical concentration of OZ439 was approximately 0.535 mg/mL. Pelleted NPs (after step 5) or bile salts (after step 7) are denoted by white ovals. As dissolution matching 100% of theoretical dissolution was achieved via this protocol, we found that the method results in negligible OZ439 losses despite its several steps
After intestinal fluid was added, the suspension remained in a water bath at 37 °C, and 0.8 mL aliquots were removed at t = 0, 0.25, 0.5, 1, 3, 6, and 24 h (Step 4, Fig. 3). Aliquots, which contained bile salts, dissolved OZ439, and nanoparticles, were centrifuged in an Eppendorf Centrifuge 5430R at 28,000 rpm for 10 min to pellet nanoparticles (Step 5, Fig. 3). The supernatant was then removed, frozen, and lyophilized (Step 6, Fig. 3). The lyophilized powder was resuspended in a mixture of acetonitrile and THF (90/10, v/v), which dissolved any OZ439 present, but not residual bile salts. This suspension was sonicated to help dissolve OZ439, then centrifuged to pellet the insoluble bile salts (Step 7, Fig. 3). The supernatant was removed and filtered through a GE Healthcare Life Sciences Whatman™ 0.1 µm syringe filter. OZ439 concentration was determined by high performance liquid chromatography (HPLC) using a Gemini C18 column (particle size 5 μm, pore size 110 Å). The OZ439 detection method used an isocratic mobile phase of 99.95%/0.05% acetonitrile/trifluoroacetic acid at 45 °C and a detection wavelength of 221 nm. OZ439 concentration was calculated from a standard curve. Measurements were performed in triplicate.
Figure 3 shows a flow diagram of the in vitro dissolution test conditions and subsequent OZ439 separation train. The loss of OZ439 throughout the steps was minimal; in several instances, an amount of dissolved OZ439 over 98% of the theoretical maximum was observed.
Long-term powder stability
For a nanoparticle formulation in dry powder form to be effective at combatting malaria in the developing world, it must retain its superior drug solubilization properties through long-term storage in hot, humid conditions. The tests described below were intended to rapidly age the powders in harsh conditions before assessing their physical characteristics and dissolution kinetics. A future study in the formulation's development will include temperature cycling and use commercially suitable storage containers and conditions that reflect the real world conditions. Here, three phases of experiments were employed to assess powder stability. First, vials containing lyophilized OZ439 NPs were placed uncapped in an oven at 50 °C and 75% relative humidity (RH). After 1 day, and again after 1 week, aliquots of powder were removed and their OZ439 dissolution kinetics were measured using the protocol above.
In the second phase, vials of spray dried OZ439 NPs were placed in the same conditions (uncapped, 50 °C, 75% RH). OZ439 dissolution was measured after 1, 3, 7, 14, 21, and 28 days. At each time point, some powder was removed for quantification by XRPD, DSC, and titration to determine water content. This phase is referred to as the '28-day time course.'
In the third phase, referred to as the '6-month time course,' spray dried OZ439 NPs in capped vials (hand tight, without sealant or tape) were placed in an oven at 40 °C and 75% RH. After 3, 7, 14, and 28 days, and 2, 3, and 6 months, a vial was removed, and OZ439 solubilization was tested and XRPD was performed. In addition, at t = 0, 2, and 6 months, water content was determined and DSC was performed.
Nanoparticles containing OZ439:oleate and stabilized by HPMCAS 126 were formed by FNP in both the CIJ and MIVM mixers. HPMCAS 126, a cellulosic derivative polymer with acetate and succinate groups along its backbone, was chosen as a stabilizer because of its relatively low cost—approximately two orders of magnitude lower—compared to the block copolymers usually used in FNP [17]. We have previously demonstrated that HPMCAS is a suitable stabilizer for FNP [11, 16, 18]. Sodium oleate, OZ439 mesylate, and HPMCAS 126 were dissolved in a mixture of methanol and THF (1:2, v/v) and rapidly mixed with water. During the mixing, in situ hydrophobic ion pairing took place between oleate anions and OZ439 cations, resulting in a hydrophobic OZ439:oleate complex. HPMCAS 126 and the OZ439:oleate complex nucleated and self-assembled into nanoparticles with a narrow size distribution under both mixing geometries.
In the CIJ, NPs approximately 150 nm in diameter formed (hereafter, 'CIJ NPs'), and the initial particle size of NPs produced by the MIVM (hereafter, 'MIVM NPs') was approximately 100 nm. Over time, NPs produced by both mixers increased in size by Ostwald ripening; the MIVM NPs, which were initially smaller, ripened somewhat more rapidly than the CIJ NPs (Fig. 4). This difference between ripening profiles is consistent with the time scale for Ostwald ripening scaling with R³, which we have demonstrated previously [15]; i.e. smaller particles grow more rapidly.
Size over time of nanoparticles produced via FNP either in the CIJ mixer or the MIVM. NPs produced by the CIJ (red squares) were initially larger but ripened more slowly than those produced by the MIVM (blue circles). Nanoparticles produced by both mixers remained in an acceptable size range, i.e. less than 400 nm, and monodisperse 6 h after fabrication and were therefore suitable for additional drying unit operations such as lyophilization or spray drying
For our purposes, nanoparticles should remain stable and at the nano-scale for at least 6 h to allow for drying steps such as spray drying or freezing before lyophilizing. Though the HPMCAS-stabilized NPs ripen much more quickly than traditional block copolymer-stabilized NPs produced by FNP, NPs produced by both mixers remained under 400 nm for at least 10 h (Fig. 4). As such, the scaled-up MIVM formulation was deemed acceptable for moving into further processing by spray drying.
Lyophilization and spray drying
Lyophilization and spray drying were both optimized to produce a dry powder from the OZ439 NP suspension. In both cases, the addition of HPMC E3 at a 1:1 mass E3:mass solids ratio prior to the drying operation stabilized NPs against aggregation during processing. The size of NPs in suspensions of redispersed lyophilized powder has been shown previously [11]. For spray drying, multiple ratios of E3 were tested: when 0.5 equivalents or 1 equivalent (by mass) of E3 were added, the resulting dry powders redispersed to NPs in water. In both cases, the redispersed NPs were smaller on average than the size to which fresh NPs from the MIVM had ripened by three hours (Fig. 5). Ideally, the outlet from an MIVM will be fed directly into a spray drier to minimize the effect of size growth. However, at the lab scale the liquid flow rates from the CIJ or MIVM are greater than the drying rates that can be achieved by the lab-scale spray drier. Thus, in these tests, the MIVM was run in a batch mode, producing 350 mL of NP suspension in a batch in 2.5 min. This batch was then spray dried over 40 min, during which some ripening took place. It is, therefore, imprecise to compare the size of reconstituted NPs with the original output of the MIVM, which is why we note that the reconstituted NPs fall within an acceptable and expected size range.
Effect of the amount of HPMC E3 added prior to spray drying on the redispersion of nanoparticles from the spray dried powder. Size distributions of nanoparticles immediately after formation (blue square), 3 h after formation (yellow circle), upon redispersion after spray drying with 0.5 (red triangle) and 1 (green triangle) mass equivalents of added HPMC E3. NPs sprayed 1:1 with HPMC E3 (green) redispersed better than NPs sprayed 1:0.5 with E3 (red), based on the size of the ~ 5000 nm aggregation peak seen by DLS. Both spray dried formulations redispersed to a size smaller than the size to which the original NPs had ripened by 3 h after formation
Once spray drying parameters had been optimized, a large volume of NP suspension (~ 1500 mL) was dried in preparation for the long-term stability studies. The yield efficiency of this process, calculated by the equation below, was 45 ± 5%. This is expected to increase with batch size in a full-scale process.
$$\text{Yield efficiency}\ (\%) = \frac{\text{mass of collected spray dried powder}}{\text{mass of solids fed to spray drier}} \times 100$$
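A rough Python sketch of the mass balance behind this number, using the feed concentrations given earlier; the assumption that the 1:1 added HPMC E3 counts toward "solids fed to the spray drier" is ours, so the figures are illustrative only.

```python
# Solids in the NP suspension after the MIVM (mg/mL, from the formulation section)
np_solids = 0.10 * (5.0 + 5.38 + 5.0)   # 10 vol% organic × (OZ439 + oleate + HPMCAS)
hpmc_e3 = np_solids                      # added 1:1 (mass E3 : mass solids) before drying
total_solids_conc = np_solids + hpmc_e3  # ≈ 3.08 mg/mL

suspension_volume_ml = 1500.0            # batch dried for the stability studies
solids_fed_mg = total_solids_conc * suspension_volume_ml

yield_efficiency = 0.45                  # 45 ± 5% reported above
powder_recovered_mg = yield_efficiency * solids_fed_mg

print(f"solids fed: {solids_fed_mg / 1000:.1f} g")                      # ≈ 4.6 g
print(f"powder recovered at 45% YE: {powder_recovered_mg / 1000:.1f} g")  # ≈ 2.1 g
```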
As measured by microscopy, spray-drying produced fine particles with median diameter of 7.8 μm based on number distribution. The morphology of the spray-dried powders was observed to be shriveled, instead of dense spheres (Fig. 6). During the fast drying at high temperature, NPs accumulated on the droplet surface and formed a shell, which further buckled due to the capillary force of the shrinking droplet. The wrinkled surface may increase the surface area and hence the wettability, assisting redispersity in water. This morphology observation is also consistent with our previous work [16, 18].
Bright-field microscopy image of the spray-dried HPMCAS NP powders (mass ratio of NP:HPMC E3 = 1:1). The scale bar is 10 µm
OZ439 solubilization and dissolution
The in vitro dissolution of OZ439 from lyophilized or spray dried nanoparticles in simulated biorelevant media was determined and compared to the OZ439 mesylate powder dissolution under the same conditions. When swapped from water through FaSSGF to FaSSIF, spray dried nanoparticles exhibited dissolution superior to both unencapsulated powder and lyophilized NPs (Fig. 7). Spray dried NPs achieved over 20-fold higher solubilized OZ439 than unencapsulated powder after 6 h and solubilized up to 86% of the OZ439 in the system. Since OZ439's solubility limit in FaSSIF is approximately 140 µg/mL (0.26 on the y-axis in Fig. 7), both the spray dried and lyophilized NPs achieved OZ439 supersaturation after 1 h and maintained this state for the duration of the study. The decrease in solubilization after 24 h can be explained by possible recrystallization from the supersaturated system.
Dissolution kinetics of OZ439 when unencapsulated (green triangles) or encapsulated into nanoparticles via FNP and processed into a dry powder by lyophilization (blue diamonds) or spray drying (red squares). Spray dried NPs achieved up to 20-fold superior OZ439 solubilization compared to OZ439 mesylate powder in FaSSIF, and also outpaced lyophilized NPs by up to 1.3 times
When swapped from water through FaSSGF into FeSSIF, unencapsulated powder and lyophilized NPs exhibited similar dissolution profiles. Spray dried NPs, in contrast, achieved 100% solubilization by 0.5 h and maintained this state for the duration of the study. OZ439 solubility in FeSSIF is higher than in FaSSIF (2.5 mg/mL vs. 0.14 mg/mL), so the system was not supersaturated and never demonstrated recrystallization.
In both FaSSIF and FeSSIF, spray dried NPs provide more complete OZ439 solubilization than either lyophilized NPs or unencapsulated powder. In so doing, spray dried NPs may be an effective means of minimizing the 'food effect,' i.e. the difference in OZ439 solubilization between the fed and fasted states. By reducing this difference, our NPs may remove or reduce the necessity of co-administering OZ439 with enough food to induce fed-state GI conditions. Simplifying administration in this way is particularly important for pediatric malaria patients, who have poor appetites and may have difficulty eating the quantity of food required. Additionally, reducing the food effect should reduce variability in drug PK and efficacy in vivo, since variable GI conditions will have less impact on drug solubilization.
In the case of both FaSSIF and FeSSIF, spray dried NP powders achieved faster and more complete OZ439 solubilization than lyophilized powders. This phenomenon may be due to wettability issues that arose during the course of small-scale lyophilization. At the walls and bottom of the glass vial in which they were dried, lyophilized samples sometimes formed a dense lyophilization cake that was difficult to redisperse. Another possible explanation for the difference in performance between the powders may arise from the HPMCAS' ability to protect nanoparticles from aggregation during lyophilization. In our previous study, we found that adding HPMC E3 equivalent to 1:1 solids prior to freezing and lyophilizing helped with redispersibility; nevertheless, a small population of aggregates was observed, which may have hindered the powder's ability to enhance OZ439 solubilization.
The grade of HPMCAS used herein has been optimized for formulating spray dried dispersions and hot melt extrusions, but this alone may not explain the poorer performance of lyophilized powders compared to spray dried powders. Chiang et al. found no significant difference in in vivo performance between dried HPMCAS-based dispersions of Griseofulvin processed by spray drying and lyophilization [19]. In our case, nanoparticle aggregation during freezing or lyophilization has the potential for reducing OZ439 solubilization, as mentioned above; this was not a consideration for Chiang et al., whose formulation did not use nanoparticles.
Lyophilized NP powders were placed in an oven at 50 °C and 75% RH in uncapped vials for up to 1 week. The in vitro solubilization of OZ439 was assessed on the powder prior to oven storage and after 1 and 7 days in the oven. OZ439 dissolution remained constant across this period, despite the potential for water uptake by the HPMCAS stabilizer in the powders (Fig. 8). Unlike hot melt extrusions, in which drug fused to the HPMCAS backbone would, upon the hydration of that backbone, potentially diffuse throughout the polymer matrix and crystallize, in our nanoparticle system we expect discrete regions of drug to be distributed throughout the HPMCAS matrix from the outset. Thus, the drug does not gain freedom to diffuse upon HPMCAS hydration, and remains in its initial state despite water uptake.
Dissolution kinetics of lyophilized OZ439 NP powder after storage in an oven at 50 °C and 75% RH in uncapped vials. Though the powder's appearance changed radically after 1 day in the oven (see Additional file 1: Figure S1), the dissolution kinetics of encapsulated OZ439 remained largely the same over the course of a week in these conditions. After 1 day (red squares) and 7 days (green triangles) in the oven, OZ439 dissolution kinetic profiles matched those of the powder immediately after lyophilization, both in terms of completeness and shape. In all cases, 60–70% of OZ439 was solubilized, with NPs in FeSSIF reaching this plateau faster than NPs in FaSSIF
Spray dried powders, when subjected to these same oven conditions over the course of a month, also retained their OZ439 dissolution profiles (Fig. 9). After 1, 3, 7, 14, 21, and 28 days, aliquots were removed from the oven for in vitro solubilization tests and XRPD. There was no discernable trend toward loss of solubilization as a function of time in the oven, and solubilization profiles after 28 days in these harsh conditions are largely the same as before the test began.
Dissolution kinetics of spray dried OZ439 NP powder after storage in an oven at 50 °C and 75% RH in uncapped vials. In all cases, NPs in FaSSIF achieved 80–90% maximum OZ439 solubilization, and NPs in FeSSIF reached 90–100% solubilization. Though there is more variability in the FeSSIF results (right), no trend of decreasing activity as a function of incubation time is observed
Through the 6-month time course at 40 °C and 75% RH, the spray dried nanoparticle powder retained its in vitro OZ439 solubilization potential (Fig. 10). As in the 1-month course, OZ439 solubilization at the end of the time course is the same as before the powder was exposed to the oven. It should be noted that the dissolution kinetics did not change despite some water uptake by the powder over time (Table 1).
Dissolution kinetics of spray dried OZ439 NP powder after storage in an oven at 40 °C and 75% RH in capped vials. In all cases, NPs in FaSSIF achieved 80–90% maximum OZ439 solubilization, and NPs in FeSSIF achieved complete solubilization
Table 1 Water uptake by spray dried NP powder over 6-month stability time course
XRPD results from each time point throughout the (a) 28-day and (b) 6-month time courses are reported in Fig. 11. The samples are shown to contain some degree of crystallinity, indicated by sharp Bragg peaks at Q = 1.3 and 1.4 Å−1. Importantly, neither these peaks nor the overall profiles of the powder appear to change significantly over time, again demonstrating powder stability. These peaks are likely due to a sodium mesylate salt formed during drying from spectator sodium and mesylate ions. See Additional file 1: Figure S2 for the XRPD profiles of the individual components used in the study, which can be compared to the profiles of the powder at t = 0 and sodium mesylate.
XRPD of spray dried OZ439 NP powder after oven storage at a 50 °C and 75% RH in uncapped vials for a month and b 40 °C and 75% RH in capped vials for 6 months. Distinct Bragg peaks are observed, but do not change in intensity or width over time. Individual profiles are offset vertically to facilitate comparison
DSC results from the 6-month time course are reported in Fig. 12. The profiles closely match one another, with the exception of a peak at 90 °C matching sodium mesylate. This broadens and disappears by 6 months, potentially because of water uptake by hygroscopic sodium mesylate.
DSC profiles of spray dried OZ439 NP powder after oven storage at 40 °C and 75% RH in capped vials for 6 months. Profiles are similar across 6 months, with the exception of the small peak at 90 °C, which was initially present but disappears by 6 months. This peak corresponds to sodium mesylate, which may be formed from spectator sodium and mesylate ions during drying and disappears over time due to water uptake
The work presented herein demonstrates that the lab-scale nanoparticle formulation of the potent antimalarial OZ439 can be scaled up using industrially relevant unit operations. As before, Flash NanoPrecipitation with hydrophobic ion pairing was used to form nanoparticles stabilized by HPMCAS and containing a hydrophobic complex of OZ439 and oleate. The limitation of the dilution step following nanoparticle formation in a two-stream confined impinging jet mixer was overcome by forming NPs in an industrial-scale four stream multi-inlet vortex mixer, which was operated at 160 mL/min and can be operated at up to 1.5 L/min. The lyophilization drying unit operation used previously was replaced with scalable spray drying, which formed nanoparticle powders that redispersed to nano-scale in water and showed in vitro OZ439 solubilization superior to that of both un-encapsulated OZ439 mesylate and lyophilized nanoparticle powders. The spray dried powder also demonstrated robust stability, maintaining its XRPD, DSC, and solubilization profiles over 28 days in harsh conditions (50 °C, 75% RH, uncapped) and for 6 months in accelerated conditions (40 °C, 75% RH, capped).
Considering the scale of malaria therapeutics produced worldwide each year, to be industrially relevant, any process to formulate OZ439 must be scalable to at least the scale of hundreds or thousands of kilograms of drug product per year. The steps taken here are a move toward a fully scalable process. FNP and spray drying are both continuous unit operations, which will aid significantly in future efforts to scale the process up. We have demonstrated scalability of our multi-inlet vortex mixer to operate at flow rates of more than 5 L/min, and even larger units can be readily designed through simple geometric and flow rate scale-up. The next steps for scaling up this particular formulation are to go to the pilot scale for GMP production of powders that can be evaluated for in vivo exposure in humans.
Another major consideration for a scalable process is the cost of goods. This FNP formulation effectively adds three excipients to OZ439—sodium oleate, HPMCAS-126, and HPMC E3—all of which add minimal cost to the final product. These excipients and their grades were chosen specifically because of their low costs; all three are available at scale for $10–100 per kilogram. Moreover, it should be noted that the potential benefits of a single-dose cure for malaria may justify slightly higher production costs for a therapy than traditional multi-dose regimens due to improved compliance. The acceptable range for cost of goods was published in the TPP paper published in 2017 [20].
The aging studies included herein are not intended to precisely mimic environmental conditions in endemic countries where this formulation would eventually be used, but are instead intended to quickly age the formulation in a consistently harsh environment. Stability tests reflective of actual environmental conditions would include temperature cycling studies in commercially suitable containers. These tests are planned for a later part of this formulation's development.
It should be noted that in vitro dissolution kinetics using biorelevant media, as performed here, are the most accurate way to predict in vivo drug absorption in humans. OZ439 has a unique PK profile, with low oral bioavailability in humans, but significantly high oral bioavailability in all animal models tested to date (greater than 80%, regardless of formulation). Therefore, to obtain useful in vivo data, a formulation must be tested in humans, requiring GMP manufacturing. These experiments are part of the future plans for this formulation, and were beyond the scope of this paper, which focused on formulation, scale-up, and physical stability.
The formulation and method development in this study may offer an inexpensive and scalable means of improving the oral bioavailability of OZ439 and help the drug realize its potential as a single-dose oral malaria therapeutic. Future work will include an investigation of concentrating the nanoparticle suspension following its formation in the MIVM and prior to its entry into the spray dryer. Pre-concentrating the NP dispersion would reduce spray drying requirements in terms of time and cost. To this end, we will next investigate the impact of continuous tangential flow ultrafiltration (TFF) on the stability of the NP formulation. Additional unit operations such as flash evaporation, which will reduce the volume of organic solvent in the NP suspension and further stabilize NPs from Ostwald ripening, may be required in conjunction with TFF.
NPs:
nanoparticles
MMV:
Medicines for Malaria Venture
BMGF:
Bill and Melinda Gates Foundation
HIP:
hydrophobic ion pairing
FNP:
Flash NanoPrecipitation
HPMCAS:
hydroxypropyl methylcellulose acetate succinate
HPLC:
high performance liquid chromatography
FaSSGF:
fasted-state simulated gastric fluid
FaSSIF:
fasted-state simulated intestinal fluid
FeSSIF:
fed-state simulated intestinal fluid
CIJ:
confined impinging jets
MIVM:
multi-inlet vortex mixer
XRPD:
x-ray powder diffraction
DSC:
differential scanning calorimetry
GI:
gastrointestinal
PK:
pharmacokinetics
Greenwood B. Elimination of malaria: halfway there. Trans R Soc Trop Med Hyg. 2017;111(1):1–2.
WHO. World malaria report. Geneva: World Health Organization; 2018.
Van Agtmael MA, Eggelte TA, Van Boxtel CJ. Artemisinin drugs in the treatment of malaria: from medicinal herb to registered medication. Trends Pharmacol Sci. 1999;20:199–205.
Fairhurst RM, Nayyar GML, Breman JG, Hallett R, Vennerstrom JL, Duong S, Ringwald P, Wellems TE, Plowe CV, Dondorp AM. Artemisinin-resistant malaria: research challenges, opportunities, and public health implications. Am J Trop Med Hyg. 2012;87:231–41.
Bill & Melinda Gates Foundation. What we do: malaria, strategy overview. 2014.
Biamonte MA, Wanner J, Le KG. Recent advances in malaria drug discovery. Bioorg Med Chem Lett. 2013;23:2829–43.
Charman SA, Arbe-Barnes S, Bathurst IC, Brun R, Campbell M, Charman WN, Chiu FCK, Chollet J, Craft JC, Creek DJ, Dong Y, Matile H, Maurer M, Morizzi J, Nguyen T, Papastogiannidis P, Scheurer C, Shackleford DM, Sriraghavan K, Stingelin L, Tang Y, Urwyler H, Wang X, White KL, Wittlin S, Zhou L, Vennerstrom JL. Synthetic ozonide drug candidate OZ439 offers new hope for a single-dose cure of uncomplicated malaria. Proc Natl Acad Sci USA. 2011;108:4400–5.
Moehrle JJ, Duparc S, Siethoff C, van Giersbergen PLM, Craft JC, Arbe-Barnes S, Charman SA, Gutierrez M, Wittlin S, Vennerstrom JL. First-in-man safety and pharmacokinetics of synthetic ozonide OZ439 demonstrates an improved exposure profile relative to other peroxide antimalarials. Br J Clin Pharmacol. 2013;75:535–48.
Phyo AP, Jittamala P, Nosten FH, Pukrittayakamee S, Imwong M, White NJ, Duparc S, Macintyre F, Baker M, Möhrle JJ. Antimalarial activity of artefenomel (OZ439), a novel synthetic antimalarial endoperoxide, in patients with Plasmodium falciparum and Plasmodium vivax malaria: an open-label phase 2 trial. Lancet Infect Dis. 2015;16:61–9.
McCarthy JS, Baker M, O'Rourke P, Marquart L, Griffin P, van Huijsduijnen RH, Möhrle JJ. Efficacy of OZ439 (artefenomel) against early Plasmodium falciparum blood-stage malaria infection in healthy volunteers. J Antimicrob Chemother. 2016;71(9):2620–7. https://doi.org/10.1093/jac/dkw174.
Lu HD, Ristroph KD, Dobrijevic ELK, Feng J, McManus SA, Zhang Y, Mulhearn WD, Ramachandruni H, Patel A, Prudhomme RK. Encapsulation of OZ439 into nanoparticles for supersaturated drug release in oral malaria therapy. ACS Infect Dis. 2018;4(6):970–9. https://doi.org/10.1021/acsinfecdis.7b00278.
Johnson BK, Prud'homme RK. Mechanism for rapid self-assembly of block copolymer nanoparticles. Phys Rev Lett. 2003;91:118302.
Johnson BK, Saad W, Prud'homme RK. Nanoprecipitation of pharmaceuticals using mixing and block copolymer stabilization. ACS Symp Ser. 2006;924:278–91.
Liu Y, Kathan K, Saad W, Prud'homme RK. Ostwald ripening of β-carotene nanoparticles. Phys Rev Lett. 2007;98(3):036102.
Kumar V, Prud'homme RK. Thermodynamic limits on drug loading in nanoparticle cores. J Pharm Sci. 2008;97(11):4904–14.
Feng J, Zhang Y, McManus SA, Ristroph KD, Lu HD, Gong K, White CE, Prud'homme RK. Rapid recovery of clofazimine-loaded nanoparticles with long-term storage stability as anti-cryptosporidium therapy. ACS Appl Nano Mater. 2018;1(5):2184–94. https://doi.org/10.1021/acsanm.8b00234.
Dow Chemical. Affinisol: solving the insoluble with Dow. 2013.
Zhang Y, Feng J, McManus SA, Lu H, Ristroph KD. Design and solidification of fast-releasing clofazimine nanoparticles for treatment of cryptosporidiosis. Mol Pharm. 2017;14(10):3480–8.
Chiang P-C, Cui Y, Ran Y, Lubach J, Chou K-J, Bao L, Jia W, La H, Hau J, Sambrone A, Qin A, Deng Y, Wong H. In vitro and in vivo evaluation of amorphous solid dispersions generated by different bench-scale processes, using griseofulvin as a model compound. AAPS J. 2013;15(2):608–17.
Burrows JN, Duparc S, Gutteridge WE, van Huijsduijnen RB, Kaszubska W, Macintyre F, Mazzuri S, Möhrle JJ, Wells TNC. New developments in anti-malarial target candidate and product profiles. Malaria J. 2017;16:26.
All authors read and approved the final manuscript.
The authors thank Dr. Pius Tse, Dr. Chih-Duen Tse, Dr. Niya Bowers, and Dr. Ben Boyd for intellectual discussion. The authors also thank Dr. Hoang Lu and Ellen Dobrijevic for their careful work on the preceding study.
The work was supported by the Bill and Melinda Gates Foundation (BMGF, OPP1150755). This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. #DGE-1656466 awarded to K.D.R.
Department of Chemical and Biological Engineering, Princeton University, Princeton, NJ, 08854, USA
Kurt D. Ristroph, Jie Feng, Simon A. McManus, Yingyue Zhang & Robert K. Prud'homme
Department of Civil and Environmental Engineering, Princeton University, Princeton, NJ, 08854, USA
Kai Gong & Claire E. White
Andlinger Center for Energy and the Environment, Princeton University, Princeton, NJ, 08854, USA
Medicines for Malaria Venture, Route de Pré-Bois 20, 1215, Meyrin, Switzerland
Hanu Ramachandruni
Correspondence to Robert K. Prud'homme.
Additional file 1: Figure S1. Lyophilized NP powder (left) before being placed in an oven uncapped at 50 °C and 75% RH and (right) after 1 day in the oven. Figure S2. XRPD profiles of the raw individual components used in the study, along with the t = 0 spray dried nanoparticle powder (light blue, bottom). The signal of 'OZ439 oleate, etc.' was obtained by physically mixing OZ439 mesylate dissolved in methanol and sodium oleate dissolved in methanol with water. The resulting solution became cloudy, indicating the formation of an insoluble OZ439:oleate complex. The solution was dried and XRPD was performed. This profile can be thought of as a physical mixture of sodium oleate, sodium mesylate, OZ439 mesylate, and OZ439 oleate. The peaks at Q = 1.3, 1.4, and 1.6 nm−1 in the NP powder align closely with similar peaks in sodium mesylate (green, second from top), suggesting these peaks are due to sodium mesylate that formed from spectator sodium and mesylate ions during drying. These sodium mesylate crystals likely formed outside the NPs and are not associated with the amorphous OZ439:oleate core.
Ristroph, K.D., Feng, J., McManus, S.A. et al. Spray drying OZ439 nanoparticles to form stable, water-dispersible powders for oral malaria therapy. J Transl Med 17, 97 (2019). https://doi.org/10.1186/s12967-019-1849-8
Nanocarrier
Oral therapeutic
Drug solubilization
OZ439
Artefenomel
Representations of Lorentz group
What is the connection between representation theory of complex semisimple Lie groups and representations of (maybe "proper") Lorentz groups?
Why should one read Bargmann's paper on irred. unitary representations of Lorentz group if one wants to know unitary representation?
unitary-representations lie-groups rt.representation-theory
mathphysicist
$\begingroup$ One possible answer to this question is that the Lorentz group (in dimension at least 3) is semisimple and not compact, and it is a somewhat paradigmatic example. The Lorentz group is dimension 4 (which is what is treated in Bargmann's paper) is locally isomorphic to $SL(2,\mathbb{C})$. Perhaps this group plays a similarly motivating rôle as $SU(2)$ plays in studying the representation theory of compact Lie groups. $\endgroup$ – José Figueroa-O'Farrill Nov 26 '10 at 17:20
$\begingroup$ The word proper is overloaded in mathematics and very overloaded in special relatvity. As well as the usual "proper Lorentz group" there is Ungar's proper-time proper-velocity Lorentz group, which could also be called for short "proper Lorentz group". "The relativistic proper-velocity transformation group", A Ungar, Progress In Electromagnetics Research, 2006, pier.engg.hku.hk/pier/pier60/04.0512151.Ungar.pdf $\endgroup$ – Roy Maclean Nov 27 '10 at 22:27
Weyl's theorem states that any finite dimensional representation of a compact Lie group is completely reducible. The Lorentz group is not compact, but its maximal compact subgroup is $SU(2)$. This is why there is a 1-1 correspondence between the representations of the Lorentz group (algebra) and those of $SU(2)$ (respectively $su(2)$).
You can find more details about this relation in
R. O. Wells, Jr. Differential analysis on complex manifolds. Published 1980 by Springer-Verlag in New York. I quote from page 173:
Proposition 3.1: The mappings $r_1$, $r_2$ and $d$ in (3.7) are all bijective, i.e., there is a one-to-one correspondence between representations of $SL(2,\mathbb C)$, $sl(2,\mathbb C)$, $SU(2)$ and $su(2)$.
The representations of $SU(2)$ and $su(2)$ are treated in most books on representation theory.
Indeed, Wigner's and Bargmann's articles are useful if you are interested in how the spin particles occur from representations of the Lorentz group:
E. P. Wigner. On unitary representations of the inhomogeneous Lorentz group. Annals of Mathematics, (40):149–204, 1939.
V. Bargmann. On unitary ray representations of continuous groups. Ann. of Math., 59:1–46, 1954.
E. P. Wigner. Group Theory and its Application to Quantum Mechanics of Atomic Spectra. Academic Press, New York, 1959.
The main idea is that wavefunctions should transform into wavefunctions under a Poincaré transformation, and the transformation should be unitary. So, we need unitary representations of the Poincaré group.
In order to classify the irreducible representations of a group, one can use the Casimir invariants. The Lie algebra of the $ISL(2,\mathbb C)$ group, $isl(2,\mathbb C)$ (isomorphic to the Poincaré Lie algebra $iso(1,3)$), has two Casimir invariants, namely $m^2=p^a p_a$ and the squared angular momentum about the center of mass, $S^2=s(s+1)$, where the spin $s$ takes integer or half-integer values. It is usually considered that only the representations corresponding to $m^2\geq 0$ have physical meaning, the ones with $m^2<0$ being tachyonic. For the case $m^2>0$, $s$ is of the form $0,\frac 1 2, 1, \frac 3 2, \ldots \frac n 2 \ldots$. For the case $m^2=0$, $s$ can be $0,\pm\frac 1 2, \pm1, \pm\frac 3 2, \ldots\pm\frac n 2 \ldots$. In this last case there also exist representations with continuous spin, but no physical evidence supports this kind of representation.
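For reference, these two Casimir invariants of the Poincaré algebra are commonly written in terms of the translation generators $P^a$, the Lorentz generators $J_{ab}$ and the Pauli–Lubanski vector $W^a$ (standard textbook conventions, metric signature $(+,-,-,-)$):
$$ W^a = \frac{1}{2}\,\varepsilon^{abcd} P_b J_{cd}, \qquad C_1 = P^a P_a = m^2, \qquad C_2 = W^a W_a = -m^2\, s(s+1) \quad \text{for } m^2>0. $$
For $m^2 = 0$ (leaving aside the continuous-spin representations), $W^a$ becomes proportional to $P^a$ and the proportionality constant is the helicity, which is why $s$ can take the values listed above.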
Added. The nice relation between the representations of $SL(2,\mathbb C)$ and $SU(2)$ refers, as I stated, to the finite dimensional case. But what's the connection between the finite-dimensional and the infinite-dimensional representations? The infinite-dimensional reps of $SL(2,\mathbb C)$ which are of interest in quantum mechanics are spinor fields. That is, they are superpositions of sections in finite-dimensional complex vector bundles which are associated to $SL(2,\mathbb C)$. To construct such an associated finite-dimensional bundle, you start with a finite-dimensional representation. Strictly speaking, the things are more complicated for infinite-dimensional representations, but for quantum mechanical systems (with a finite number of particles), there is this nice connection between infinite-dimensional and finite-dimensional representations.
Cristi Stoica
$\begingroup$ -1. I'm sorry to say that this answer is largely incorrect. Why has this been accepted? Or even upvoted? The OP is not talking about the Poincaré group (a.k.a. the "inhomogeneous" Lorentz group in the old literature). The unitary irreps of the Poincaré group are indeed found in the papers by Wigner mentioned here, but also in more modern work, e.g. papers of Niederer and O'Raifeartaigh in the mid 1970s. The Casimirs of the Poincaré Lie algebra are not the spin and momentum, but the mass and then either spin or helicity for massive or massless reps, respectively. (continued below) $\endgroup$ – José Figueroa-O'Farrill Nov 27 '10 at 17:45
$\begingroup$ (continued from above) Wells's book seems like an odd reference for the question. In Wells's book, what you find is a description of the Hodge-Lefschetz theory for Kähler manifolds, on which you do have an action of $sl(2,\mathbb{C})$ in the cohomology, but the ensuing representation is certainly not unitary, which is what this question is about. It cannot be because Hodge theory tells you the cohomology of a compact Kähler manifold is finite-dimensional, yet unitary representations of $SL(2,\mathbb{C})$ (or any noncompact Lie group) are necessarily infinite-dimensional. (continued below) $\endgroup$ – José Figueroa-O'Farrill Nov 27 '10 at 17:49
$\begingroup$ (and finally) There is no one-to-one correspondence between unitary irreps of $su(2)$ and those of $sl(2,\mathbb{C})$. In fact, Bargmann's paper proves that the unitary irreps of $sl(2,\mathbb{C})$ come in two families, one labelled by a positive real number and the other by a pair $(r,k)$ consisting of a real number $r$ and where $0< 2k \in \mathbb{Z}$. Any of these representations contains an infinite number of irreps of the maximal compact $su(2)$ subalgebra. $\endgroup$ – José Figueroa-O'Farrill Nov 27 '10 at 17:57
$\begingroup$ @José Figueroa-O'Farrill Actually, R.O. Wells's book contains not only the Lefshetz decomposition, but an introduction to the representations of $sl(2,\mathbb C)$. Thanks for your comments. I updated my answer with the correct Casimir invariants. $\endgroup$ – Cristi Stoica Nov 27 '10 at 19:28
$\begingroup$ @Cristi: Wells only discusses finite-dimensional reps of $sl(2,\mathbb{C})$, since those are the only relevant reps in the Hodge-Lefschetz theory of compact Kähler manifolds. There is nothing in that book (at least in the second edition, which is the one I have) about the unitary representation theory of $sl(2,\mathbb{C})$. $\endgroup$ – José Figueroa-O'Farrill Nov 27 '10 at 19:54
The Lorentz group is essentially a semidirect product of $SL(2,\mathbb C)$ and a four dimensional abelian group. (I am only considering the connected component of identity, but that is not a big deal.) Now, there are general results of George Mackey which describe unitary representations of a semidirect product in terms of those of each factor. A good place to read about Mackey theory is Varadarajan's book Geometry of quantum theory. It also has a chapter on representations of the Lorentz group.
To work with unitary representations you don't need to read Bargmann's paper. There are many other sources which explain the representation theory of $SL(2,\mathbb R)$ and $SL(2,\mathbb C)$ in more modern language. See R. Howe's book, Nonabelian harmonic analysis, S. Lang's book $SL(2,\mathbb R)$, or M. Taylor's book Noncommutative harmonic analysis.
Hadi
$\begingroup$ Yeah. Howe & Lang's books are preferable. $\endgroup$ – Alex Nov 28 '10 at 1:22
$\begingroup$ Sorry, but what you are calling the Lorentz group is actually the Poincaré group. The connected component of the identity of the Lorentz group is $SO(3,1)_0$ whose universal covering group is $SL(2,\mathbb{C})$. It was Wigner who solved the problem of classifying the unitary irreps of the Poincaré group and Mackey who later generalised this result to other groups of that type. $\endgroup$ – José Figueroa-O'Farrill Nov 28 '10 at 2:03
$\begingroup$ Yes, you're right that what I described should be called the Poincare group. $\endgroup$ – Hadi Nov 29 '10 at 21:24
Approximating Pi
Age 14 to 18 Challenge Level:
A method is to calculate the perimeters or areas of the inscribed and circumscribed polygons to find an upper and lower limit for Pi. I have included a solution from Andrei below based on the perimeters of the two polygons. You could then find the average of the two limits to find an approximation to Pi. As Michael points out, Pi is irrational and therefore cannot be calculated exactly. We can still obtain very good approximations to Pi using this method by increasing the number of sides. No one really addressed the issue of the problems with this method, especially for Archimedes, as he did not have a calculator or tables to look up roots of numbers. For Archimedes' method of calculating square roots, see the problem Archimedes and Numerical Roots.
Andrei's Solution
The length of the side of the circumscribed square is $2r$, where $r$ is the radius of the circle. So, its perimeter is $8r$.
Now, I calculate the length of the side of the inscribed square, using the Pythagoras Theorem in a right-angled isosceles triangle with the right angle at the centre of the circle, the two congruent sides being radii of the circle, and the hypotenuse being the side of the inscribed square:
$2r^2 = l^2$
$l = r\sqrt{2}$
So, the perimeter of the inscribed square is $4r\sqrt{2}$.
Knowing that the circumference of a circle is $2{\pi}r$, I can find a minimum and a maximum limit for $\pi$ from the condition that the circumference of the circle is lower than the perimeter of the circumscribed square and higher than the perimeter of the inscribed square:
$4r\sqrt{2} < 2{\pi}r < 8r$
$4\sqrt{2} < 2\pi < 8$
$2\sqrt{2} < \pi < 4$
and approximating $\sqrt{2}$:
$2.8284 < \pi < 4$
Now, I use the same method for the hexagon. First I calculate the side of the circumscribed hexagon in a right-angled triangle whose hypotenuse is the line connecting the centre of the circle with a vertex of the circumscribed hexagon, one side is half the side of the circumscribed hexagon, and the other side is the radius of the circle. The side of the hexagon is perpendicular to the radius at the tangency point. The angle between the radius and the hypotenuse is $30^o$. (Here I use the Pythagoras Theorem and also the theorem that says that in a right-angled triangle with one angle of $30^o$, the side opposite the angle of $30^o$ is half the hypotenuse): $${ \left({L \over2}\right)^2 + r^2 = L^2 }$$ $${ {{L^2} \over 4} + r^2 = L^2 }$$ or $${ {3L^2 \over{4}} = r^2 }$$ so, $${ L = {2r \over{\sqrt{3}}}= {{2 \sqrt{3}r \over{3}}} }$$ The side of the inscribed hexagon is $r$, so its perimeter is $6r$, and I found the following inequalities:
$6r < 2\pi r < 6\times (2r\sqrt{3})/3$
$6r < 2\pi r < 4\sqrt{3}r$
$6 < 2\pi < 4\sqrt{3}$
$3 < \pi < 2\sqrt{3}$
$3 < \pi < 3.4641$
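The same squeeze works for a regular polygon with $n$ sides: the inscribed perimeter gives $n\sin(\pi/n) < \pi$ and the circumscribed perimeter gives $\pi < n\tan(\pi/n)$. Below is a short Python sketch of the classical side-doubling recursion, which starts from Andrei's hexagon values and needs nothing beyond square roots:

```python
import math

# Archimedes' doubling scheme on a unit circle (r = 1).
# a_n = semi-perimeter of the circumscribed regular n-gon = n*tan(pi/n)
# b_n = semi-perimeter of the inscribed regular n-gon    = n*sin(pi/n)
# Doubling the number of sides needs only square roots:
#   a_2n = 2*a_n*b_n / (a_n + b_n)   (harmonic mean)
#   b_2n = sqrt(a_2n * b_n)          (geometric mean)

n = 6
a = 2 * math.sqrt(3)   # circumscribed hexagon: 6*tan(30°) = 2*sqrt(3) ≈ 3.4641
b = 3.0                # inscribed hexagon:     6*sin(30°) = 3

print(f"n = {n:3d}:  {b:.6f} < pi < {a:.6f}")
for _ in range(5):     # double the number of sides five times: 12, 24, 48, 96, 192
    a = 2 * a * b / (a + b)
    b = math.sqrt(a * b)
    n *= 2
    print(f"n = {n:3d}:  {b:.6f} < pi < {a:.6f}")
# At n = 96 the bounds are about 3.14103 < pi < 3.14271, the stage at which
# Archimedes quoted 3 10/71 < pi < 3 1/7.
```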
Mathematical reasoning & proof. Circumference and arc length. Pi. Inequalities. Limits of Sequences. 2D shapes and their properties. Selecting and using information. Area - circles, sectors and segments. Regular polygons and circles. Sine, cosine, tangent.
|
CommonCrawl
|
Analysis of total corneal astigmatism with a rotating Scheimpflug camera in keratoconus
Jinho Kim1,
Woong-Joo Whang ORCID: orcid.org/0000-0002-2018-56932 &
Hyun-Seung Kim1
BMC Ophthalmology volume 20, Article number: 475 (2020) Cite this article
To analyze mean corneal powers and astigmatisms on anterior, posterior, and total cornea in patients with keratoconus as calculated according to various keratometric measurements using a Scheimpflug camera.
We examined the left eyes of 64 patients (41 males and 23 females; mean age 29.94 ± 6.63 years) with keratoconus. We measured simulated K (Sim-K), posterior K, true net power (TNP), and four types of total corneal refractive power (TCRP). We then used the obtained values to analyze mean K and corneal astigmatism. TCRP were measured at 2.0 ~ 5.0 mm.
Mean corneal powers from Sim K, posterior K, and TNP were 49.12 ± 3.99, − 7.39 ± 0.79, and 47.78 ± 4.09 diopters, respectively. For TCRP centered on the pupil, mean K tended to decrease with measurement area (all p < 0.01), while both mean K and astigmatism measured using TCRP centered on the apex decreased with measurement area (all p < 0.001). Mean K values from TCRP centered on the apex were greater than those centered on the pupil (all p < 0.001). The proportion of WTR astigmatism was greatest on the anterior and total cornea. As the measurement area moved to the periphery, the proportion of WTR increased.
Mean corneal powers and astigmatisms on total cornea with keratoconus change depending on calculation methods and measurement areas.
Keratoconus is a progressive non-inflammatory disease characterized by thinning and protrusion of the cornea, resulting in high degrees of irregular astigmatism and myopia that lead to impairment of visual quality and distorted vision [1].
The Pentacam® is a rotating Scheimpflug camera used to evaluate the topography of the corneal surface and measure corneal thickness, which may be helpful in the diagnosis of keratoconus and identification of disease stage [2]. Although corneal power has traditionally been assessed using instruments that measure anterior corneal power alone, a rotating Scheimpflug camera has made it possible to measure posterior corneal curvature as well [3]. Furthermore, keratometric and pachymetric measurements from a rotating Scheimpflug camera were more repeatable than those from a Placido topographer combined with slit-scanning technology [3].
Kamiya et al. [4] investigated eyes with keratoconus and concluded that 78.8% exhibit against-the-rule (ATR) astigmatism, while only 10.2 and 10.9% exhibit with-the-rule (WTR) and oblique astigmatism, respectively. The authors of the aforementioned study further reported a mean magnitude of posterior corneal astigmatism in keratoconus of 0.93 diopters. Naderan et al. [5] also reported a similar value for the magnitude of posterior corneal astigmatism in keratoconic eyes (0.90 diopters), which is far greater than the magnitude observed in normal eyes (0.26 to 0.78 D) [6,7,8,9,10,11,12]. In contrast with the mean magnitude of posterior corneal astigmatism, WTR astigmatism was more prevalent than oblique and ATR astigmatism in their study.
Keratometric measurements taking into account only the anterior corneal power utilize a corneal index of refraction of 1.3375. This assumption is derived from the concept that the posterior radius of curvature is 1.2 mm steeper than the anterior corneal radius of curvature [13]. However, this value is not consistent in eyes with keratoconus, in which the relationship between the anterior and posterior corneal radii has become distorted [14,15,16]. Therefore, use of 1.3375 as the keratometric index in patients with keratoconus is imprecise and may result in an overestimation of corneal power [17]. Camps et al. [18] calculated an adjusted keratometric index ranging from 1.3190 to 1.3324 using the Gullstrand eye model, though such a value would be affected by the degree of disease progression. Watson et al. [19] further identified overestimation of corneal power as the primary cause of postoperative hyperopic prediction error when a conventional keratometer was used.
Purpose of this study is to analyze the various corneal measurements including Simulated K, posterior K, true net power (TNP), and 4 types of total corneal refractive power (TCRP) obtained from the eyes with keratoconus using the Pentacam® rotating Scheimpflug camera and to evaluate changes in mean corneal power and corneal astigmatism due to measurement method and area.
Patients and study design
In the present retrospective study, we analyzed the left eyes of 64 patients (41 males and 23 females) who had been diagnosed with keratoconus between Jan 2017 and Jan 2019. The Institutional Review Board for Human studies at Yeouido St. Mary Hospital (Seoul, Korea) reviewed and approved this study protocol (SC19RESI0111). As this study was a retrospective study, verbal informed consent was obtained from all patients before beginning data collection and analyses. All study conduct adhered to the tenets of the Declaration of Helsinki for the use of human participants in biomedical research.
Diagnoses of keratoconus were confirmed by an experienced clinician (W.J.W.) based on slit-lamp observation and measurements obtained using a Scheimpflug rotating camera. Characteristic features of keratoconus were confirmed in all cases: asymmetric bow-tie pattern with/without skewed axis, Fleischer rings, or Vogt's striae. They were also confirmed by the Amsler-Krumeich classification based on corneal astigmatism, corneal power, corneal transparency, and corneal thickness [20]. Patients with visually significant cataracts, corneal scarring, iris abnormalities, history of glaucoma or retinal disease, macular disease, retinopathy, neuro-ophthalmic disease, history of ocular inflammation, or previous ocular surgeries were excluded.
Patients were instructed to discontinue use of rigid gas permeable (RGP) or soft contact lenses for 3 weeks, following which imaging with the Pentacam® rotating Scheimpflug camera (Oculus; Wetzler, Germany) was performed. A 25-picture scan was used to examine each cornea, and only scans graded as being "OK" according to instrument specifications were included in this study. One skilled operator (W.J.W.) obtained three measurements and analyzed the average value from the three measurements.
Total corneal power is calculated based on anterior corneal power, posterior corneal power, and corneal thickness. Both the Gaussian optic formula and ray-tracing method are applied when calculating total corneal power. Furthermore, total corneal power can be calculated for each zone or ring. Consequently, currently available Scheimpflug cameras allow one to investigate almost 40 combinations of corneal power [20].
Simulated K (Sim-K)
The Sim-K value represents the mean corneal power calculated by simulated keratometry and is the arithmetic mean of a pair of meridians spaced 90 degrees apart, with the greatest difference in axial power lying within a central 3.0 mm zone. Sim-K is calculated by entering the corneal curvature radius into a thin-lens formula for paraxial imagery, which considers the cornea as a single refractive sphere. The cornea radii are converted into dioptric power values using the keratometric index of refraction (1.3375).
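In practice the simulated keratometry conversion is just $P = (n_k - 1)/r$ with $n_k = 1.3375$ and $r$ the radius of curvature. A minimal Python sketch (the example radii are illustrative values, not patient data from this study):

```python
def sim_k_diopters(radius_mm, keratometric_index=1.3375):
    """Convert an anterior corneal radius of curvature (mm) to keratometric diopters."""
    return (keratometric_index - 1.0) * 1000.0 / radius_mm

# Example: flat and steep meridian radii of a hypothetical keratoconic cornea
flat_r, steep_r = 7.10, 6.50   # mm, illustrative values only
flat_k, steep_k = sim_k_diopters(flat_r), sim_k_diopters(steep_r)

print(f"flat K  = {flat_k:.2f} D")    # ≈ 47.54 D
print(f"steep K = {steep_k:.2f} D")   # ≈ 51.92 D
print(f"mean K  = {(flat_k + steep_k) / 2:.2f} D, astigmatism = {steep_k - flat_k:.2f} D")
```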
True net power (TNP)
The True Net Power (TNP) represents the optical power of the cornea based on two different refractive indices: one for the anterior surface (corneal tissue: 1.376) and one for the posterior surface (aqueous humor: 1.336). TNP is calculated using a Gaussian optic formula that also takes into account the sagittal curvature of each surface.
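The Gaussian (thick-lens) combination behind TNP can be written out explicitly as $P = P_a + P_p - (d/n_c)\,P_a P_p$, with $P_a = (1.376-1)/r_a$, $P_p = (1.336-1.376)/r_p$, $d$ the central corneal thickness and $n_c = 1.376$. A Python sketch under those standard assumptions (the device's internal implementation may differ in detail, and the example values are illustrative):

```python
N_AIR, N_CORNEA, N_AQUEOUS = 1.000, 1.376, 1.336

def true_net_power(r_ant_mm, r_post_mm, cct_um):
    """Gaussian thick-lens corneal power (D) from anterior/posterior radii and thickness."""
    p_ant = (N_CORNEA - N_AIR) * 1000.0 / r_ant_mm        # anterior surface power (D)
    p_post = (N_AQUEOUS - N_CORNEA) * 1000.0 / r_post_mm  # posterior surface power (D), negative
    d_m = cct_um * 1e-6                                   # thickness in metres
    return p_ant + p_post - (d_m / N_CORNEA) * p_ant * p_post

# Illustrative keratoconic values: steep anterior and posterior curvatures, thin cornea
tnp = true_net_power(r_ant_mm=6.8, r_post_mm=5.4, cct_um=470)
print(f"true net power ≈ {tnp:.2f} D")   # ≈ 48 D for these assumed radii
```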
Total corneal refractive power (TCRP)
The Total Corneal Refractive Power (TCRP) value is automatically measured according to the ray-tracing method. TCRP is calculated using the values for anterior radius, posterior radius, and corneal thickness. Snell's law and the specific refractive indices of air, cornea, and aqueous humor are used to calculate the corneal power, resulting in four types of TCRP measurements: (1) Pupil (zone), corneal power centered on the pupil and measured over the inner zone; (2) Pupil (ring), corneal power centered on the pupil and measured over a ring; (3) Apex (zone), corneal power centered on the apex and measured over the inner zone; (4) Apex (ring), corneal power centered on the apex and measured over a ring.
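To make the ray-tracing idea concrete, the sketch below traces one meridional ray through both corneal surfaces with Snell's law and the indices quoted above. The geometry (radii, thickness, ray height) and the simple "power from the axial crossing point" read-out are illustrative assumptions only, not the instrument's proprietary TCRP algorithm.

```python
# One meridional ray, refracted by Snell's law at the anterior and posterior
# corneal surfaces; an approximate total power is read off from where the ray
# crosses the optical axis. All geometry values are assumed examples.
import math

N_AIR, N_CORNEA, N_AQUEOUS = 1.000, 1.376, 1.336
R_ANT, R_POST, THICKNESS = 7.8, 6.5, 0.55   # mm, assumed example geometry
Y0 = 0.5                                    # incoming ray height (mm)

def refract(d, n, eta):
    """Vector form of Snell's law; d and n are unit vectors, n opposing d."""
    cos_i = -(d[0] * n[0] + d[1] * n[1])
    k = math.sqrt(1.0 - eta * eta * (1.0 - cos_i * cos_i))
    return (eta * d[0] + (eta * cos_i - k) * n[0],
            eta * d[1] + (eta * cos_i - k) * n[1])

def hit_sphere(p, d, center_z, radius):
    """Nearest intersection of the ray p + s*d with a sphere centred on the axis."""
    oz, oy = p[0] - center_z, p[1]
    b = 2.0 * (d[0] * oz + d[1] * oy)
    c = oz * oz + oy * oy - radius * radius
    s = (-b - math.sqrt(b * b - 4.0 * c)) / 2.0
    return (p[0] + s * d[0], p[1] + s * d[1])

# Ray travelling along +z, parallel to the optical axis, at height Y0.
p, d = (-5.0, Y0), (1.0, 0.0)

# Anterior surface: apex at z = 0, centre at z = R_ANT.
p = hit_sphere(p, d, R_ANT, R_ANT)
n = ((p[0] - R_ANT) / R_ANT, p[1] / R_ANT)          # unit normal opposing the ray
d = refract(d, n, N_AIR / N_CORNEA)

# Posterior surface: apex at z = THICKNESS.
p = hit_sphere(p, d, THICKNESS + R_POST, R_POST)
n = ((p[0] - THICKNESS - R_POST) / R_POST, p[1] / R_POST)
d = refract(d, n, N_CORNEA / N_AQUEOUS)

# Approximate power from the axis crossing, measured from the anterior apex.
z_cross = p[0] - p[1] * d[0] / d[1]                 # mm
print(f"approx. total corneal power: {N_AQUEOUS / (z_cross / 1000.0):.2f} D")
```

For this example geometry the ray trace yields roughly 42 D, in line with the Gaussian-optics estimate; zone- and ring-based total powers can loosely be thought of as averages of such per-ray powers over the chosen area.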
Assessment of keratometric measurement and statistical analysis
From the above measurements, we calculated the flattest keratometric value (flat K), the steepest keratometric value (steep K), and the mean keratometric value (mean K) in order to assess the overall types and degrees of astigmatism. TCRP values were measured at 2.0 mm, 3.0 mm, 4.0 mm, and 5.0 mm. We also divided the 64 eyes into two groups (28 eyes with stage 1 and 36 eyes with stage 2–4) according to the Amsler-Krumeich classification and calculated the corneal refractive power for each group [20]. All types of astigmatism except that of the posterior cornea were classified as with-the-rule (WTR) when the steep meridian was within the range of 60–120 degrees and against-the-rule (ATR) when the steep meridian was within the range of either 150–180 degrees or 0–30 degrees. The remaining cases were classified as oblique astigmatism (steep meridian from 30 to 60 degrees or from 120 to 150 degrees). This classification was possible because only left eyes were included. Posterior corneal astigmatism was classified as WTR when the steep meridian was within the range of either 150–180 degrees or 0–30 degrees, and as ATR when the steep meridian was within the range of 60–120 degrees; the remaining cases were classified as oblique astigmatism. A net astigmatism is given as (M @ α), where M is the astigmatic magnitude in diopters (D) and α is the astigmatic direction in degrees [21].
$$ \text{Polar value along the 0-degree meridian: } \mathrm{KP}(0) = M\cos(2\alpha) $$
$$ \text{Polar value along the 45-degree meridian: } \mathrm{KP}(45) = M\sin(2\alpha) $$
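A short sketch of this bookkeeping is given below: classification of the steep meridian (with the reversed convention for the posterior surface) and decomposition of a net astigmatism into KP(0) and KP(45). The example magnitude and axis are arbitrary, not study data.

```python
# Astigmatism classification and polar-value decomposition as described above.
import math

def classify_axis(steep_axis_deg: float, posterior: bool = False) -> str:
    a = steep_axis_deg % 180.0
    wtr = 60.0 <= a <= 120.0                 # vertical steep meridian
    atr = a <= 30.0 or a >= 150.0            # horizontal steep meridian
    if posterior:                            # convention is reversed for the back surface
        wtr, atr = atr, wtr
    if wtr:
        return "WTR"
    if atr:
        return "ATR"
    return "oblique"                         # 30-60 or 120-150 degrees

def polar_values(magnitude_d: float, axis_deg: float):
    a = math.radians(axis_deg)
    return (magnitude_d * math.cos(2.0 * a),     # KP(0)
            magnitude_d * math.sin(2.0 * a))     # KP(45)

m, axis = 4.5, 76.0                          # example: 4.5 D @ 76 degrees
print(classify_axis(axis), polar_values(m, axis))
```

For the example of 4.5 D at 76 degrees this returns a WTR classification with KP(0) ≈ −3.97 D and KP(45) ≈ 2.11 D.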
Additionally, the axis and magnitude of astigmatism were represented on a double-angle polar plot (Astig PLOT) in Eye Pro 2013 (for iPhone/iPad; Apple; Cupertino, California, USA) developed by Dr. Edmondo Borasio. All statistical analyses were performed using IBM SPSS Statistics for Windows, Version 21.0 (IBM Corp, Armonk, NY, USA). The sample size of this study was sufficient to provide 95% statistical power at a significance level of 5% to detect a 1.0-diopter difference in corneal refractive power, as calculated with G*Power (Version 3.1.9.6, https://www.gpower.hhu.de). All p-values ≤0.05 were considered statistically significant. Friedman tests were performed to determine differences due to measurement area, and Wilcoxon signed-rank tests were performed to determine differences between total corneal refractive power centered on the pupil and on the apex.
A total of 64 left eyes (64 patients) were evaluated in the present study. The mean age was 29.94 ± 6.63 years (range: 18–44 years), and the two subgroups showed no significant difference in age (29.81 ± 6.34 years with stage 1 keratoconus and 30.03 ± 6.93 years with stage 2–4 keratoconus; p = 0.76). The demographic characteristics of the included patients are summarized in Table 1. Mean corneal powers for the anterior and posterior corneal surfaces were 49.12 ± 3.99 diopters and −7.39 ± 0.79 diopters, respectively. Mean corneal power from TNP was 47.78 ± 4.09 diopters. A statistically significant difference was observed in mean K values between measurements of anterior corneal power and true net power (p < 0.001). No significant difference was observed between astigmatism derived from true net power and that derived from the power of the anterior surface (p = 0.34).
Table 1 Simulated K, posterior K, and true net power (TNP)
TCRP values for the 2.0–5.0 mm zones are listed in Table 2. As the measurement zone expanded, mean K tended to decrease for TCRP centered on the pupil (p = 0.005). Mean K and corneal astigmatism provided by TCRP centered on the apex decreased significantly as the measurement zone expanded (all p < 0.001). TCRP centered on the apex resulted in greater values than TCRP centered on the pupil for all measurements, with the exception of corneal astigmatism at the 4.0 mm and 5.0 mm zones. Changes in corneal power according to measurement zone were also greater for TCRP centered on the apex. Table 3 shows TCRP values for the 2.0–5.0 mm zones in the two subgroups. There was no difference in mean K in stage 1 (p > 0.05). However, in stage 2–4, TCRP decreased significantly as the measurement area widened (all p < 0.001).
Table 2 Mean arithmetic values and power vectors for total corneal refractive power within 2.0–5.0 mm zones
Table 4 indicates the dioptric power values of TCRP in the 2.0–5.0 mm rings, which were similar to those obtained for the 2.0–5.0 mm zones. Statistically significant differences were observed between measurement rings for all dioptric powers (all p < 0.001), and all values, with the exception of corneal astigmatism calculated from TCRP centered on the pupil, tended to decrease as the measurement ring extended toward the periphery. TCRP calculation centered on the apex resulted in greater refractive power values for mean K and corneal astigmatism at the 2.0 mm ring, while TCRP calculation centered on the pupil resulted in greater values for corneal astigmatism at the 3.0–5.0 mm rings (all p < 0.05). TCRP values for the 2.0–5.0 mm rings in the two subgroups are listed in Table 5. For mean arithmetic corneal astigmatism from TCRP centered on the pupil and mean K from TCRP centered on the apex, stage 1 keratoconus showed no statistically significant differences (all p > 0.05), whereas stage 2–4 keratoconus showed significant differences (all p < 0.001).
Table 4 Mean arithmetic values and power vectors for total corneal refractive power at 2.0–5.0 mm rings
Figures 1 and 2 depict mean corneal astigmatism values on a double-angle plot. The steep axes for anterior and posterior K were located at 76 and 79 degrees, respectively, while the steep axis for TNP was located at 78 degrees. As the measurement area shifted toward the periphery, the steep axis of the TCRP measurements shifted toward WTR astigmatism. The magnitude of mean corneal astigmatism calculated by TCRP increased with more peripheral measurement for pupil-centered zones, whereas it decreased with more peripheral measurement for apex-centered zones and rings.
Mean corneal astigmatisms represented on double-angle polar plots by simulated K, posterior K, and true net power
Mean corneal astigmatisms represented on double-angle polar plots by total corneal refractive power (TCRP). (a) 2.0 mm–5.0 mm zone centered on the pupil; (b) 2.0 mm–5.0 mm zone centered on the apex; (c) 2.0 mm–5.0 mm ring centered on the pupil; (d) 2.0 mm–5.0 mm ring centered on the apex
Figures 3 and 4 depict the distribution of corneal astigmatism according to the steep meridian. On the anterior corneal surface, the proportion of WTR astigmatism was greatest, followed by oblique and then ATR astigmatism. In contrast, the opposite pattern was observed in the distribution of posterior corneal astigmatism: the proportion of ATR astigmatism was greatest, followed by oblique astigmatism and then WTR. As the measurement area increased, the proportion of WTR astigmatism increased, while the proportion of oblique astigmatism decreased. TCRP centered on the pupil resulted in a greater proportion of WTR astigmatism than TCRP centered on the apex.
Distributions of corneal astigmatism for simulated K, posterior K, true net power
Distributions of corneal astigmatism for total corneal refractive power (TCRP). (a) 2.0 mm–5.0 mm zone centered on the pupil; (b) 2.0 mm–5.0 mm zone centered on the apex; (c) 2.0 mm–5.0 mm ring centered on the pupil; (d) 2.0 mm–5.0 mm ring centered on the apex
The present study demonstrated that keratometric measurements, including corneal power and astigmatism, are influenced by both the calculation method and the measurement area. To the best of our knowledge, this is the first study to evaluate various methods for calculating total corneal power and astigmatism in patients with keratoconus.
TNP values measured with a rotating Scheimpflug camera in previous studies of the normal cornea were flatter than simulated keratometry (Sim-K) values [22, 23], consistent with the results of the present study. However, some keratometric measurements obtained with a rotating Scheimpflug camera yield greater dioptric powers than simulated keratometry. Corneal power on the flat axis calculated from the equivalent keratometry reading (EKR) and from TCRP in the 2.0 mm zone centered on the apex exhibited greater power values than Sim-K. In addition, steep K and mean K values calculated from TCRP in the 2.0 mm zone, 3.0 mm zone, and 2.0 mm ring centered on the apex were greater than those obtained from Sim-K.
Calculation of corneal power at more peripheral areas results in lower values for refractive power in patients with keratoconus, opposite to what is observed in the normal cornea. Naeser et al. [24] concluded that TCRP increases with pupil size due to positive spherical aberration and further demonstrated that differences between TCRP values centered on the pupil versus apex ranged from 0.01 to 0.02 diopters in the 2.0 ~ 5.0 mm zones/rings. In the present study, TCRP values centered on the apex were significantly greater than those centered on the pupil with respect to all parameters of corneal power, with the exception of steep keratometry at the 5.0 mm ring. Differences in TCRP ranged from 0.68 diopters (2.0 mm zone) to 1.21 diopters (2.0 mm ring) for flat K; from 0.07 diopters (5.0 mm ring) to 2.41 diopters (2.0 mm zone) for steep K; and from 0.33 diopters (5.0 mm ring) to 1.55 diopters (2.0 mm zone) for mean K.
In the normal cornea, the posterior surface is steepest vertically and acts as a minus lens, creating what is known as against-the-rule ocular astigmatism [12]. Ho et al. [11] measured posterior corneal astigmatism and reported that the proportion of against-the-rule (ATR) astigmatism was 96.1% (474 eyes), while the proportion of with-the-rule (WTR) astigmatism was only 2.0% (10 eyes). Koch et al. [12] further concluded that the prevalence of WTR corneal astigmatism has been overestimated and the prevalence of ATR astigmatism has been underestimated. The mean magnitudes of posterior astigmatism reported by Ho et al. [11], Koch et al. [12], and Zhang et al. [25] were 0.30 D, 0.30 ± 0.15 D, and 0.33 ± 0.16 D, respectively. In eyes with keratoconus, the posterior cornea makes a large and variable contribution to total corneal astigmatism [26]. In the present study, anterior and posterior corneal astigmatism were 4.47 ± 2.05 diopters and 0.87 ± 0.44 diopters, respectively. These mean magnitudes were similar to those obtained in prior studies, which ranged from 3.05–4.49 diopters for the anterior cornea and from 0.71–0.93 diopters for the posterior cornea [4, 5, 27]. Previous studies have also evaluated the axis orientation of astigmatism [4, 5]. WTR astigmatism is more prevalent on the anterior corneal surface, while ATR astigmatism is more prevalent on the posterior corneal surface, in the eyes of Japanese patients with keratoconus [4]. In contrast, ATR astigmatism is more prevalent on the anterior cornea, while WTR astigmatism is more prevalent on the posterior cornea, in the eyes of Iranian patients [5]. In the present study, we evaluated the eyes of Korean patients diagnosed with keratoconus and obtained results similar to those of the previous studies, indicating that the type of astigmatism may be influenced by ethnicity.
As the measurement zone or ring moved toward the periphery, the proportion of WTR astigmatism increased, and mean corneal astigmatism on the double-angle polar plots shifted toward WTR astigmatism. The magnitude of corneal astigmatism, however, exhibited a different pattern of change: the magnitude of mean corneal astigmatism on the double-angle polar plots determined from TCRP centered on the pupil increased with more peripheral measurement zones, whereas the magnitude determined from TCRP centered on the apex tended to decrease with more peripheral measurement areas.
We also performed a subgroup analysis in this study. For the TCRP zone and TCRP ring centered on the apex, mean K did not change with the measurement area in stage 1 keratoconus. In stage 2–4 keratoconus, by contrast, mean K decreased significantly in refractive dioptric power as the measurement area approached the periphery. For mean arithmetic astigmatism, stage 1 keratoconus yielded significant differences in the TCRP 2.0–5.0 mm zone centered on the apex and the TCRP 2.0–5.0 mm ring centered on the pupil; these results were the opposite of those in stage 2–4 keratoconus. These differences by stage might be related to cone location. When the distance from the maximum K to the apical center provided by the rotating Scheimpflug camera was measured, stage 2–4 keratoconus showed a greater value than stage 1 keratoconus (1.88 ± 1.12 mm [range: 0–4.42 mm] versus 0.93 ± 0.63 mm [range: 0.12–3.20 mm]; p < 0.001, Mann-Whitney U test).
The present study has limitations. We did not investigate which measurement of corneal power is most appropriate. Further studies evaluating changes in corneal power following corneal cross-linking or intracorneal ring segment implantation are required in order to determine the most appropriate measurements that account for surgically induced refractive change, which may also be helpful for intraocular lens (IOL) power calculation and for the evaluation of disease progression. In addition, identifying the corneal astigmatism measurement that best reflects the manifest refractive cylinder may be useful in toric IOL implantation.
In this study, we observed that some total corneal power values calculated from more central measurement areas are greater than simulated K. Although the change in the magnitude of corneal astigmatism with measurement area varied with the method used to calculate total corneal power, all parameters indicated that more peripheral areas exhibit a higher proportion of WTR astigmatism. We further observed that TCRP measurements centered on the apex are greater than those centered on the pupil. We believe these findings will help to enhance our understanding of the anatomical and optical characteristics of keratoconus, as well as our ability to diagnose keratoconus and to determine IOL power in the future.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Krachmer JH, Feder RS, Belin MW. Keratoconus and related noninflammatory corneal thinning disorders. Surv Ophthalmol. 1984;28(4):293–322.
Mihaltz K, Kovacs I, Takacs A, Nagy ZZ. Evaluation of keratometric, pachymetric, and elevation parameters of keratoconic corneas with pentacam. Cornea. 2009;28(9):976–80.
Savini G, Barboni P, Carbonelli M, Hoffer KJ. Comparison of methods to measure corneal power for intraocular lens power calculation using a rotating Scheimpflug camera. J Cataract Refract Surg. 2013;39(4):598–604.
Kamiya K, Shimizu K, Igarashi A, Miyake T. Assessment of anterior, posterior, and total central corneal astigmatism in eyes with keratoconus. Am J Ophthalmol. 2015;160(5):851–7 e851.
Naderan M, Rajabi MT, Zarrinbakhsh P. Distribution of the anterior and posterior corneal astigmatism in eyes with keratoconus. Am J Ophthalmol. 2016;167(7):79–87.
Royston JM, Dunne MC, Barnes DA. Measurement of posterior corneal surface toricity. Optom Vis Sci. 1990;67(10):757–63.
Dunne MC, Royston JM, Barnes DA. Posterior corneal surface toricity and total corneal astigmatism. Optom Vis Sci. 1991;68(9):708–10.
Prisant O, Hoang-Xuan T, Proano C, Hernandez E, Awwad ST, Azar DT. Vector summation of anterior and posterior corneal topographical astigmatism. J Cataract Refract Surg. 2002;28(9):1636–43.
Modis L Jr, Langenbucher A, Seitz B. Evaluation of normal corneas using the scanning-slit topography/pachymetry system. Cornea. 2004;23(7):689–94.
Dubbelman M, Sicam VA, Van der Heijde GL. The shape of the anterior and posterior surface of the aging human cornea. Vis Res. 2006;46(6–7):993–1001.
Ho JD, Tsai CY, Liou SW. Accuracy of corneal astigmatism estimation by neglecting the posterior corneal surface measurement. Am J Ophthalmol. 2009;147(5):788–95 795 e781–782.
Koch DD, Ali SF, Weikert MP, Shirayama M, Jenkins R, Wang L. Contribution of posterior corneal astigmatism to total corneal astigmatism. J Cataract Refract Surg. 2012;38(12):2080–7.
Cua IY, Qazi MA, Lee SF, Pepose JS. Intraocular lens calculations in patients with corneal scarring and irregular astigmatism. J Cataract Refract Surg. 2003;29(7):1352–7.
Montalban R, Alio JL, Javaloy J, Pinero DP. Correlation of anterior and posterior corneal shape in keratoconus. Cornea. 2013;32(7):916–21.
Montalban R, Alio JL, Javaloy J, Pinero DP. Comparative analysis of the relationship between anterior and posterior corneal shape analyzed by Scheimpflug photography in normal and keratoconus eyes. Graefes Arch Clin Exp Ophthalmol. 2013;251(6):1547–55.
Pinero DP, Alio JL, Aleson A, Escaf Vergara M, Miranda M. Corneal volume, pachymetry, and correlation of anterior and posterior corneal shape in subclinical and different stages of clinical keratoconus. J Cataract Refract Surg. 2010;36(5):814–25.
Pinero DP, Camps VJ, Caravaca-Arens E, Perez-Cambrodi RJ, Artola A. Estimation of the central corneal power in keratoconus: theoretical and clinical assessment of the error of the keratometric approach. Cornea. 2014;33(3):274–9.
Camps VJ, Pinero DP, Caravaca-Arens E, de Fez D, Perez-Cambrodi RJ, Artola A. New approach for correction of error associated with keratometric estimation of corneal power in keratoconus. Cornea. 2014;33(9):960–7.
Watson MP, Anand S, Bhogal M, Gore D, Moriyama A, Pullum K, Hau S, Tuft SJ. Cataract surgery outcome in eyes with keratoconus. Br J Ophthalmol. 2014;98(3):361–4.
Savini G, Hoffer KJ, Carbonelli M, Barboni P. Scheimpflug analysis of corneal power changes after myopic excimer laser surgery. J Cataract Refract Surg. 2013;39(4):605–10.
Naeser K. Assessment and statistics of surgically induced astigmatism. Acta Ophthalmol. 2008;86(3):349.
Borasio E, Stevens J, Smith GT. Estimation of true corneal power after keratorefractive surgery in eyes requiring cataract surgery: BESSt formula. J Cataract Refract Surg. 2006;32(12):2004–14.
Savini G, Barboni P, Carbonelli M, Hoffer KJ. Agreement between Pentacam and videokeratography in corneal power assessment. J Refract Surg. 2009;25(6):534–8.
Naeser K, Savini G, Bregnhoj JF. Corneal powers measured with a rotating Scheimpflug camera. Br J Ophthalmol. 2015;100(9):1196–200.
Zhang L, Sy ME, Mai H, Yu F, Hamilton DR. Effect of posterior corneal astigmatism on refractive outcomes after toric intraocular lens implantation. J Cataract Refract Surg. 2015;41(1):84–9.
Savini G, Naeser K, Schiano-Lomoriello D, Mularoni A. Influence of posterior corneal astigmatism on total corneal astigmatism in eyes with keratoconus. Cornea. 2016;35(11):1427–33.
Orucoglu F, Toker E. Comparative analysis of anterior segment parameters in normal and keratoconus eyes generated by Scheimpflug tomography. J Ophthalmol. 2015;2015:925414.
Not applicable for this study.
No funds, grants, or other support were received.
Department of Ophthalmology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, South Korea
Jinho Kim & Hyun-Seung Kim
Department of Ophthalmology, Yeouido St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, South Korea
Woong-Joo Whang
Jinho Kim
Hyun-Seung Kim
JK and WJW contributed to the design of the manuscript. JK and WJW collected the data. JK, WJW and HSK performed the clinical examination and investigation. JK and WJW shared in data analysis and interpretation and revised the intellectual content of the manuscript. HSK critically revised the manuscript. All authors read and approved the final manuscript.
Correspondence to Woong-Joo Whang.
Ethics approval and consent to participate
This study was approved by the institutional review board (IRB) of Yeouido St. Mary's Hospital (SC19RESI0111) and was performed in accordance with the ethical standards of the Declaration of Helsinki. All patients included in the study provided verbal informed consent. As this study was conducted retrospectively and all data were anonymized, the requirement for written informed consent was waived by the IRB.
Kim, J., Whang, WJ. & Kim, HS. Analysis of total corneal astigmatism with a rotating Scheimpflug camera in keratoconus. BMC Ophthalmol 20, 475 (2020). https://doi.org/10.1186/s12886-020-01747-9
SciPost Submission Page
Hyperbolic Nodal Band Structures and Knot Invariants
by Marcus Stålhammar, Lukas Rødland, Gregory Arone, Jan Carl Budich, Emil J. Bergholtz
This is not the current version.
Submission summary
As Contributors: Emil Bergholtz · Marcus Stålhammar
Arxiv Link: https://arxiv.org/abs/1905.05858v1
Date submitted: 2019-05-16
Submitted by: Stålhammar, Marcus
Submitted to: SciPost Physics
Domain(s): Theoretical
Subject area: Condensed Matter Physics - Theory
We extend the list of known band structure topologies to include hyperbolic nodal links and knots, occurring both in conventional Hermitian systems where their stability relies on discrete symmetries, and in the dissipative non-Hermitian realm where the knotted nodal lines are generic and thus stable towards any small perturbation. We show that these nodal structures, including the figure-eight knot and the Borromean rings, appear in both continuum- and lattice models with relatively short-ranged hopping that is within experimental reach. To determine the topology of the nodal structures, we devise an efficient algorithm for computing the Alexander polynomial, linking numbers and higher order Milnor invariants based on an approximate and well controlled parameterisation of the knot.
Has been resubmitted
Submission & Refereeing History
Resubmission 1905.05858v2 on 5 July 2019
Submission 1905.05858v1 on 16 May 2019
Report 2 submitted on 2019-06-24 10:35 by Anonymous
Author Reply by Mr Stålhammar on 2019-07-05
Reports on this Submission
Anonymous Report 2 on 2019-6-24 Invited Report
1. This manuscript provides a pedagogical introduction to knot theory in the context of nodal line semimetals.
2. Alongside the pedagogical introduction of the mathematical knot invariants, algorithms for the numerical calculation of the nodal line/knot and its invariants are provided.
3. The topic is contemporary and the text provides a good entry point for researchers from a wide range of fields.
1. It is not clear while reading the paper: what are the novel aspects that are brought forward by the current work?
1. Can the authors elaborate more on the possible new physics that these structures might entail?
2. What would protect the different link invariants from having a small perturbation deform between the different knot topologies? What would be the physical signatures of such a transition?
3. There is not much physics discussed in the work, e.g., the physical implications and signatures of the new nodal structures as well as their robustness to disorder.
In this manuscript, a pedagogical introduction to knot theory is detailed in the context of nodal line semimetals. The two-band spectra of nodal line semimetals are parametrized in a general way for both Hermitian and non-Hermitian systems. This allows for various knot configurations of nodal lines, and a general scheme for deriving tight-binding and continuum models that realize these knotted spectra is proposed. Alongside the pedagogical introduction of the mathematical knot invariants that help discriminate between the different knotted spectra, algorithms for the numerical calculation of the nodal line/knot and its invariants are provided. These algorithms are demonstrated and compared on a number of examples, such as the figure-eight knot and Borromean rings.
Nodal line semimetals are one of the more recent members of the topological materials family. Topology of band structures remains a very interesting field of study with new realizations and implications showing up in a wide range of fields. In this context, providing a pedagogical and hands-on introduction to knot theory is very useful and the authors do a very good job about it. At the same time, I have some reservations before I can recommend publication:
1. It is not clear while reading the paper: what are the novel aspects that are brought forward by the current work? For a lay reader, it appears that all of the various parts are well-known in various fields. It would be, therefore, useful to have clear statements on what are imported methods from mathematics and computer science and what was newly developed for this work.
2. Whereas nodal line semimetals can be realized/found nowadays, from the current work I do not see the physical implications/importance of seeking out more complicated nodal link/knot structures. Can the authors elaborate more on the possible new physics that these structures might entail?
3. One of the main goals of analyzing the topology of different structures, is that there must be an obstruction in moving between the different topologies. This obstruction then manifests with some physical implication, e.g., a topological phase transition between different topological insulators. What would protect the different link invariants from having a small perturbation deform between the different knot topologies? What would be the physical signatures of such a transition?
4. The nontrivial topology of simple nodal line semimetals can be understood using a bulk winding invariant. Can the authors comment on the generalization of this characterization to the more complex spectral arrangements of nodal knots?
5. Correspondingly, in simple nodal line semimetals, topological boundary effects appear. What would be the expected boundary modes of the more complex knotted structures? On this note, I think that readers of this more mathematical-type work would benefit from a short update on the physics state-of-the-art that is devoted to analyzing these effects, see e.g., Phys. Rev. Lett. 121, 166802 (2018) and references therein.
6. Last, the topology of structures is meaningful only when considering disorder. What would disorder do to the knotted nodal line structures? How robust are the presented algorithms to various disorder distributions?
Requested changes
Properly incorporate answers in the main text to the six points that are detailed in the report.
validity: good
significance: ok
originality: low
clarity: high
formatting: excellent
grammar: excellent
Author Marcus Stålhammar on 2019-07-05
(in reply to Report 2 on 2019-06-24)
Dear Referee,
Thank you for reading our manuscript and providing questions well-suited for interesting discussions and constructive improvements. Below we provide answers, alongside with the corresponding changes to our manuscript. Most importantly, we clarify the originality of our work and now better contrast it with earlier works.
It has become clear to us when reading the reports that the main findings of our paper are not accessible enough, especially concerning what is genuinely new, which of course originates in the manuscript being highly cross-disciplinary. We stress that it is actually not true that all components were already known in the previous literature in the various sub-fields. We have clarified this in the following ways.
We added a clarifying sentence in the end of the Introduction, Sec. 1, stating what is the main original contributions of this work.
We have more clearly spelled out which nodal knots and links were found earlier and which are newly reported in our work. We have also provided additional new examples of hyperbolic knots and links appearing in these systems, presented in a new subsection, Sec. 2.3.
We have clarified that calculations of knot invariants from any given Hamiltonian model were not provided in earlier work. In particular, we have clarified how the use of an approximate but topologically equivalent parameterisation of the knots and links in order to do these calculations differs significantly from earlier works. This is added in a revised beginning of Sec. 3 and at the end of Sec. 3.2, and highlights a key new development of the present manuscript.
The key new physics is that these new structures have topologically distinct Fermi surfaces. While our discussion mainly focuses on how to observe this in spectroscopic measurements such as ARPES, their phenomenology, e.g. in transport, is very likely rich and remains an interesting research topic that goes beyond the scope of this work. In this context we also stress that the systems presented in our work are both of the conventional Hermitian type and of the dissipative non-Hermitian type, and that nodal degeneracies in these systems have quite different physical implications and different Fermi surfaces, even when sharing the exact same nodal structure. While the Hermitian nodal lines provide the Fermi surface, the non-Hermitian nodal lines entail open Fermi surfaces in the form of Seifert surfaces, a natural higher-dimensional generalisation of Fermi arcs. E.g. in a photonic crystal realisation generalising the 2D experiments in Science 359, 1009 (2018) (Ref. 29), these Fermi-Seifert surfaces would be directly visible in light scattering experiments, in direct analogy with the mentioned reference. We have commented on this in the revised manuscript.
Even though the Hermitian knotted semimetals appear similar to nodal line semimetals, the fact that the knotted semimetals require polynomial-type invariants in order to be characterised should be interpreted as indicating that there is something unique about them. Moreover, this knotting is a feature unique to 3D space: in 4D and higher every knot can be untied, and in 2D projections the knot gives rise to singular (i.e. self-intersecting) curves. This means, e.g., that studying 2D features of knotted band structures will not provide full information about the knottedness. This is also discussed below in point 4, and we have provided a discussion of this in a new paragraph in Sec. 4.
However, since the knotted solutions are generic in non-Hermitian systems, their experimental discovery may be more likely in that realm, e.g. in optical systems or cold atoms. Our discussion of new physics and possible experimental discovery is therefore mainly focused on this part, which we have extended in the new version of the manuscript.
As a direct physical signature of these transitions, we note that the topology of the Fermi (Seifert) surfaces of the non-Hermitian systems changes across the transitions. The complexity of the nodal knot or link structure increases, especially when the central circle merges with the previously existing knot; in particular, its genus changes. A natural consequence is that the minimal genus of the corresponding Fermi surface increases as well. This feature could potentially be measured spectroscopically, e.g. with light scattering experiments performed in photonic crystals. Accordingly, we have added a paragraph on this in the manuscript at the end of Sec. 2.3.
Also, let us stress that the topologies of the nodal exceptional structures in the non-Hermitian systems are indeed preserved when an arbitrary but small perturbation is added to the system. This is because 1D nodal structures are generic in non-Hermitian systems. Of course, when this perturbation grows, the topology will eventually undergo some transition. There are several ways to analyse this, and we have provided a specific example (the figure-eight knot) of how such transitions may look in Sec. 2.3. These transitions occur generically for all examples of hyperbolic knots and links studied in this work. The perturbation is not added directly to the system, but rather included as a variation of the radius of the three-sphere on which the knot lies. Thus, the knottedness of the band intersection is changed under such perturbations. It should be noted, though, that this occurs for both (non-fine-tuned) Hermitian and non-Hermitian systems, as long as the perturbations in the Hermitian case preserve the existing symmetry.
This was indeed one of the main goals of the paper. Nodal unknotted lines are, as said above, characterised by computing an integer bulk winding invariant. When it comes to knots, the structures are so complicated that the usual $\mathbb{Z}_2$ or even $\mathbb{Z}$ invariants are not enough to characterise their topology. In fact, there is to our knowledge no known, or at least computable, unique invariant for knots. As for now, the Alexander, Jones and HOMFLY polynomials are the best known alternatives, and we chose to compute the Alexander polynomial because of its computationally effective algorithms and nice geometrical interpretation (note that we use the Alexander polynomial to in principle extract Milnor invariants of any order). The Alexander polynomial is of course not a $\mathbb{Z}_2$ or $\mathbb{Z}$ invariant, but rather a $\mathbb{Z}[t]$ invariant, that is, an invariant which is an element of the polynomial ring $\mathbb{Z}[t]$. This polynomial then characterises the knottedness of the nodal structure, which is not provided by any integer valued invariant.
To emphasise this question, we have provided a more thorough discussion related to these issues in the beginning of the re-written version of Sec. 3.
The 2D surface states of these knotted structures will be similar to the boundary states of "usual" nodal line semimetals, i.e. drum-head surface states, see e.g. Phys. Rev. B 96, 201305(R). Thus, in the 2D surface Brillouin zone there seems to be nothing unique separating unknots with certain windings from knotted structures in any given projection; since knots only exist in 3D space, their special properties are destroyed on a 2D surface. Formally, one would need to scan over all possible 2D projections in order to make sure that the obtained surface states really originate from a knotted structure. We have provided a discussion of this in Sec. 4. However, in order to provide similar signatures, it was proposed in an arXiv pre-print, arXiv:1905.07069, to consider boundary states of 4D systems, where the boundary states take the form of Seifert surfaces and hence include the information related to the knots. We have therefore mentioned this pre-print in a Note added.
We have also added a reference to Phys. Rev. Lett. 121, 166802 (2018) as suggested.
Generally, disorder is a problem in any topological semimetal, and standard nodal line degeneracies in 3D, as well as the Dirac fermions of graphene in 2D, are known to be highly susceptible to disorder. In fact, disorder by definition ruins the translational symmetry of the system, which in turn ruins at least the simplest available ways of characterising the nodal topology. Nevertheless, some features can survive even quite strong disorder; e.g. the transport properties of Weyl semimetals are known to be robust to finite disorder strengths.
However, if the translational invariance is preserved (or altered such that the unit cell size increases), the situation is more straightforward to analyse. The answer to the question is then qualitatively different depending on whether Hermitian or non-Hermitian systems are studied. Let us therefore separate the discussion.
Hermitian systems: When it comes to Hermitian systems, the robustness against disorder of the knotted nodal lines does not differ fundamentally from that of regular nodal lines. Thus, there is really nothing new to add to this point.
Non-Hermitian systems: In non-Hermitian band structures, the knotted nodal lines occur generically, in contrast to the Hermitian realm. This is discussed in Phys. Rev. B 99, 161115(R), and the principles are completely analogous in this case. Any small but arbitrary "disorder" contribution will leave the nodal topology intact. This fact suggests that non-Hermitian systems, such as cold atoms and photonic systems, will be more robust candidates for experimental observations than the knotted semimetals. Studying real translation-breaking disorder in these systems would be particularly interesting, noting that the generic nature of 3D Weyl fermions makes them robust against disorder, and it is an open question whether this property carries over to the generically occurring nodal non-Hermitian degeneracies.
1. This manuscript is clear and well-written, and is of pedagogical value particularly in Sect. 3, where mathematical results on the Milnor invariant are explained to a physics audience that is probably new to such concepts.
2. This work is self-contained in that the whole story, from model construction and approach to knot characterization, are all explained in sequential order.
1. This work is of limited novelty, since most of the results are not new at all. In Section 2, the microscopic models, or at least close variants of them, have already appeared in previous works, i.e. Refs. 11 and 15 by most of the same authors. Section 3, while pedagogical, is mostly not new, being established mathematical results from knot theory.
2. Even the approximation approach on truncating j_max, which did not appear in any previous works by the same authors, is not new. For instance, an analogous truncation approach for realizing arbitrary knots appeared in
Bode, Benjamin, and Mark R. Dennis. "Constructing a polynomial whose nodal set is any prescribed knot or link." arXiv preprint arXiv:1612.06328 (2016).
As explained in the strengths and weakness remarks, this manuscript on the construction and characterizations of nodal knots is of limited novelty, even though it is well-written and of pedagogical value.
If I were to pick one aspect that is most interesting and original, it will be the determination of the Milnor invariant from the Conway polynomial.
1. The authors should consider also citing some other existing/contemporary theoretical and experimental works on physical knots:
Bode, Benjamin, and Mark R. Dennis. "Constructing a polynomial whose nodal set is any prescribed knot or link." arXiv preprint arXiv:1612.06328 (2016). - please also contrast the approach used in your manuscript with this work.
Sugic, Danica, and Mark R. Dennis. "Singular knot bundle in light." JOSA A 35, no. 12 (2018): 1987-1999.
Li, Linhu, Ching Hua Lee, and Jiangbin Gong. "Boundary states of 4D topological matter: Emergence and full 3D-imaging of nodal Seifert surfaces." arXiv preprint arXiv:1905.07069 (2019).
Larocque, Hugo, Danica Sugic, Dominic Mortimer, Alexander J. Taylor, Robert Fickler, Robert W. Boyd, Mark R. Dennis, and Ebrahim Karimi. "Reconstructing the topology of optical polarization knots." Nature Physics 14, no. 11 (2018): 1079.
Zhang, Yi. "Cyclotron orbit knot and tunable-field quantum Hall effect." arXiv preprint arXiv:1905.02192 (2019). - which gives knotted analogies to cyclotron orbits
2. In the note added pertaining to Ref 50, Ref 50 pertains to Hermitian and not non-Hermitian knots.
3. It will be desirable to provide a more extensive discussion on how the approach used here deviates/complements the approaches of Refs 6 and 22, beyond the level of the relationship between d_R and d_I.
validity: high
grammar: good
Thank you for reading the manuscript and for providing constructive suggestions for improvements and useful questions. Below, we have provided answers to all the points and questions raised in the report. Crucially, we realise that our initial manuscript lacked clarity about what is original and how our work contrasts with earlier works. We hope that emphasising this in the updated manuscript version will clarify what is novel about our manuscript, thereby refuting the assessment that our findings are of low novelty.
Below we divide our answer in a fashion similar to the report, and answer the points raised under "Weaknesses" and "Requested Changes" accordingly.
It is true that related results have been obtained in Refs. 11 and 15 in the sense that the construction of the Hamiltonians is a natural extension of the methods of Refs. 8 and 15, which is indeed stated in the manuscript. However, it is not true that these microscopic models have appeared in the works referred to in the report. The new nodal structures presented here are in fact of a fundamentally different kind from the point of view of knots. In Ref. 11, Hopf links are presented, while Ref. 15 treats the general family of torus knots and links. These knots and links can be considered to be of the simplest sort, since they are completely classified from a mathematical point of view, see e.g. Milnor (1968). The figure-eight knot and the Borromean rings are characterised as hyperbolic knots/links, meaning that their complement can be assigned a metric with constant curvature -1. Thus, they differ fundamentally from the torus knots/links.
To emphasise this key point, we have complemented the figure-eight knot and the Borromean rings with more new knots and links of a somewhat more exotic appearance, taking the form of Turk's head knots.
2. Even the approximation approach on truncating $j_\text{max}$, which did not appear in any previous works by the same authors, is not new. For instance, an analogous truncation approach for realizing arbitrary knots appeared in Bode, Benjamin, and Mark R. Dennis. "Constructing a polynomial whose nodal set is any prescribed knot or link." arXiv preprint arXiv:1612.06328 (2016).
We thank the referee for raising this relevant point. Even though the approximation techniques may at first sight look similar, we stress that they are used to completely different ends. Bode et al. use trigonometric interpolation to construct complex functions whose zeros on the three-sphere represent knots and links. We, on the other hand, use the interpolation to construct a parameterisation of the set of zeros of the function representing the knot, in order to calculate knot invariants. Both works use trigonometric interpolation/Fourier basis expansion to approximate a discrete data set, but the goals differ fundamentally. We have included the paper by Bode et al. in the reference list and cited it in connection with our parameterisation ansatz. We have furthermore explicitly stated how the two works differ.
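For readers less familiar with the technique being contrasted here, the following generic sketch (not the authors' actual implementation; the trefoil test curve and the chosen j_max are arbitrary) shows how an ordered set of points sampled on a closed nodal curve can be given a smooth periodic parameterisation by truncating its discrete Fourier series, after which knot invariants can be computed from the parameterised curve.

```python
# Trigonometric interpolation of an ordered point cloud on a closed curve.
import numpy as np

def fourier_parameterisation(points, j_max):
    """points: (N, 3) ordered samples on a closed curve; returns a callable k(t)."""
    n = points.shape[0]
    coeffs = np.fft.rfft(points, axis=0)[:j_max + 1] / n   # harmonics up to j_max
    m = coeffs.shape[0]
    weights = np.where(np.arange(m) == 0, 1.0, 2.0)        # rfft stores only j >= 0

    def curve(t):
        basis = np.exp(1j * np.outer(np.atleast_1d(t), np.arange(m))) * weights
        return (basis @ coeffs).real                       # (len(t), 3) points

    return curve

# Example: 200 points sampled on a trefoil are reproduced by a few harmonics.
t_s = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
trefoil = np.stack([np.sin(t_s) + 2.0 * np.sin(2.0 * t_s),
                    np.cos(t_s) - 2.0 * np.cos(2.0 * t_s),
                    -np.sin(3.0 * t_s)], axis=1)
k = fourier_parameterisation(trefoil, j_max=5)
print(np.max(np.abs(k(t_s) - trefoil)))                    # truncation error is tiny here
```

In this toy example the trefoil's three harmonics are recovered essentially exactly; for a nodal curve extracted numerically from a Hamiltonian, the cutoff j_max controls the trade-off between smoothness of the parameterisation and fidelity to the sampled points.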
After considering the suggested additional references, we have come to the following conclusions.
Bode et al., arXiv:1612.06328 (2016), has been cited in connection with our parameterisation ansatz, where we have furthermore highlighted the contrasts with our work.
The paper by Li et al., arXiv:1905.07069 (2019), has been mentioned in a Note added, essentially because of its discussion of imaging full Seifert surfaces in such systems.
Larocque et al., Nature Physics 14, no. 11 (2018): 1079, has been included in the citation of optical experiments.
Sugic et al., JOSA A 35, no. 12 (2018): 1987-1999, has been included in the citation of optical experiments due to its concrete experimental suggestion.
The paper by Zhang has, however, not been included in the reference list. At first sight it looks relevant, but it is not at the core of the present discussion.
This typo has been fixed, and we thank the referee for pointing this out.
3. It will be desirable to provide a more extensive discussion on how the approach used here deviates/complements the approaches of Refs 6 and 22, beyond the level of the relationship between $d_R$ and $d_I$.
It is our understanding that the original material of our work was not highlighted clearly enough. The key concepts and new insights of this work are twofold.
We explicitly construct continuous and discrete models, both Hermitian and non-Hermitian, whose nodal structures attain the form of hyperbolic knots and links, which had not previously been reported.
We provide an efficient algorithm for computing an invariant, the Alexander polynomial, of any knotted structure appearing in a generic translational invariant Hamiltonian. In this way, the nodal knots and links can be identified without visual inspection. Furthermore, we show that the polynomial provides higher order Milnor invariants, characterising the Brunnian links.
We have emphasised in a clearer fashion how our work differs and complements earlier works on similar topics in several ways.
A new subsection, 2.3, has been added to the manuscript. Here we have included more exotic, and new, hyperbolic nodal structures. In fact, they form the family of $(3,n)$ Turk's head knots.
The originality of computation of knot invariants from a generic Hamiltonian has been stressed in a revised introduction to Section 3.
These aspects are mentioned in the last sentence of the Introduction, Section 1.
Effect of Bacillus thuringiensis CAB109 on the growth, development, and generation mortality of Spodoptera exigua (Hübner) (Lepidoptera: Noctuidae)
Shichen Huang1,
Xiangguo Li1,
Guangchun Li1 &
Dayong Jin1
The efficiency of Bacillus thuringiensis (Bt) CAB109 against Spodoptera exigua (Hübner) (Lepidoptera: Noctuidae) larvae was investigated, and a novel concept, generation mortality (GM), was introduced. Bt CAB109 suspensions at sub-lethal concentrations of 0, 10^2, 10^3, 10^4, 10^5, and 10^6 cfu/ml were prepared and used to treat second instar larvae of S. exigua. The results showed that the mortality rates of the larvae were 5.0, 8.3, 15.0, 23.3, 36.7, and 55.0%, respectively, after 7 days of treatment. The mean weights of larvae treated with the different concentrations were 2.63, 2.19, 2.03, 1.87, 1.34, and 0.96 mg, respectively, after 6 days, while the developmental durations of these larvae were 16.3, 16.8, 17.5, 18.2, 19.5, and 21.2 days, respectively. Treatment with Bt affected the growth of the larvae at all instars (from the first to the fifth).
Through the comprehensive interference index of population control (CIIPC), the GM was calculated; the values were 30.4, 50.3, 63.8, 77.2, and 90.6% for the 10^2, 10^3, 10^4, 10^5, and 10^6 cfu/ml treatments, respectively. Thus, GM can be used to evaluate the efficiency of biological pesticides in agricultural practice in the future.
Spodoptera exigua (Hübner) (Lepidoptera: Noctuidae) is a worldwide pest (Da-yong et al. 2009) that mainly attacks vegetables and field crops. Currently, chemical pesticides are used to control this pest; however, they are not ideal because they cause environmental pollution (Gui-lan et al. 2002). Therefore, it is necessary to use methods that do not pollute the environment (Da-yong and Yong-man 2010 and Da-yong et al. 2012). Bacillus thuringiensis (Bt) is currently the most widely used bio-pesticide (Lan-lan et al. 2008 and Qing-xian 2008). Previous studies showed that Bt not only kills target pests, but can also inhibit and hinder their growth and reproduction and prolong their development (Barker 1998 and Erb et al. 2001). Therefore, using larval mortality alone to evaluate the effect of such agents is not sufficient.
In the present study, the efficacy of sub-lethal concentrations of Bt CAB109 on the mortality rate, growth, and duration of larvae of S. exigua under laboratory conditions was evaluated.
Bacillus thuringiensis (Bt) CAB109 was kindly provided by the Laboratory of Pest Biological Control, College of Agriculture and Life Sciences, Chungnam National University (Korea). The Bt CAB109 strain was cultured on nutrient agar (NA) medium at 27 °C for 4 days (until the spores and the parasporal crystals separated from each other) (Da-yong and Yong-man 2010). The culture was then washed with sterile water and centrifuged at 4 °C for 10 min. The pellet was collected; the concentration of cells in the pellet was about 10^10 cfu/ml, and it was stored at 4 °C until further use. S. exigua (larvae and adults) were collected and reared at 25 °C under a 16:8 h (light:dark) cycle at a relative humidity of 50–60%. Second instar S. exigua larvae were selected for the different treatments.
Effect on larval growth and development
On larval mortality
The Bt suspension was diluted to final concentrations of 0 (control), 1 × 10^2, 1 × 10^3, 1 × 10^4, 1 × 10^5, and 1 × 10^6 cfu/ml; 100 μl of each diluted suspension was added to artificial feed (0.5 g) with a pipette and mixed well. Twenty second instar larvae of S. exigua that had been fasted for 3 h were placed in a plate with the artificial feed containing the respective Bt concentration and allowed to feed for 3 h. The larvae were then transferred to a new plate containing artificial feed without Bt and reared for 7 days, after which the larval mortality rate was calculated. All experiments were repeated four times.
On larval weight
Second instar larvae of S. exigua were divided into groups of 10 and weighed to obtain the average weight. These groups were treated with Bt as described above. Six days later, the treated larvae were weighed again and the average larval weight was calculated. All experiments were repeated four times.
On larval duration
Twelve of the second instar S. exigua larvae were treated with Bt for 3 h, after which they were transferred into a 12-well culture plate with the diet and reared until pupation, and the larval duration was calculated. All experiments were repeated four times.
Generation mortality (GM)
GM is the mortality, or decrease in pest numbers, across the various growth stages (egg, larva, pupa, and adult) of one generation after second instar larvae of S. exigua are treated with Bt CAB109; it was calculated according to the theory of the interference index of population control (IIPC) defined by Xiong-fei et al. (2000).
On larvae
The Bt suspension was diluted to final concentrations of 0 (control), 10^2, 10^3, 10^4, 10^5, and 10^6 cfu/ml; second instar larvae were then treated as described above and kept for 7 days, after which the number of live larvae was counted. All experiments were repeated three times.
On pupae
The second instar larvae of S. exigua were treated as described before and reared until pupation. The pupae were then transferred into a new plate and kept until emergence of adults. The ratio of live pupae was then determined. All experiments were repeated three times.
On adults
The adults were then transferred into a plate and the numbers of live adults in each concentration were calculated. All experiments were repeated three times.
On eggs
The normal adults were grouped in 1:1 female to male ratio in a plate and provided with absorbent cotton containing 10% (w/v) glucose solution. The number of eggs laid was estimated. All experiments were repeated three times.
Formula used for the calculations
$$ \text{Interference index of population control (IIPC)} = \frac{\text{survival rate of treated group}}{\text{survival rate of control group}} $$
Comprehensive interference index of population control (CIIPC)
$$ \mathrm{CIIPC} = \mathrm{IIPC}_1 \times \mathrm{IIPC}_2 \times \mathrm{IIPC}_3 \times \mathrm{IIPC}_4, $$
where
$$ \mathrm{IIPC}_1 = \frac{\text{larval survival rate of treated group}}{\text{larval survival rate of control group}}, \qquad \mathrm{IIPC}_2 = \frac{\text{pupal survival rate of treated group}}{\text{pupal survival rate of control group}}, $$
$$ \mathrm{IIPC}_3 = \frac{\text{adult survival rate of treated group}}{\text{adult survival rate of control group}}, \qquad \mathrm{IIPC}_4 = \frac{\text{egg count of treated group}}{\text{egg count of control group}}. $$
Generation mortality
$$ \mathrm{GM}=\left(1-\mathrm{CIIPC}\right)\times 100\% $$
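A minimal sketch of this calculation is given below; the stage survival rates and egg counts are invented example numbers, not the values reported in Table 1.

```python
# Generation mortality from stage-wise survival ratios, as defined above.
def iipc(treated, control):
    """Interference index of population control for one stage."""
    return treated / control

def generation_mortality(treated_stages, control_stages):
    """Each argument: (larval survival, pupal survival, adult survival, egg count)."""
    ciipc = 1.0
    for t, c in zip(treated_stages, control_stages):
        ciipc *= iipc(t, c)                 # CIIPC = product of the four IIPC values
    return (1.0 - ciipc) * 100.0            # GM in percent

control = (0.95, 0.92, 0.90, 850.0)         # example control values
treated = (0.45, 0.80, 0.85, 600.0)         # example values for one Bt concentration
print(f"GM = {generation_mortality(treated, control):.1f}%")
```

With these invented numbers the stage-wise IIPC values multiply to a CIIPC of about 0.27, i.e. a GM near 73%.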
The data were analyzed using the OriginPro 9.0 software.
Effect on larval mortality
Data on larval mortality are shown in Fig. 1. A significant difference was found between treatments when the Bt CAB109 concentration was greater than 10^3 cfu/ml. Furthermore, at 10^3 cfu/ml, the mortality showed an obvious increase.
Effect of Bt CAB109 on the mortality rate of S. exigua larvae
Effect on larval weight
The larvae were first treated with Bt CAB109 at the different concentrations for 2 h and then reared for 6 days, after which the surviving larvae were weighed. The average weight of the surviving larvae was lower than that of the control, and increasing Bt concentration resulted in decreasing larval weight (Fig. 2).
Effect of Bt CAB109 on body weight of S. exigua larvae
The weight of each treated larva (in Fig. 2) was calculated using the following equation: W = W6 − W0, where W0 is the weight before treatment, and W6 is the weight of the larvae treated with Bt for 6 days.
Effect on larval duration
The duration of the larval stage increased with increasing Bt concentration, and the prolongation resulted in significant differences among treatments (Fig. 3). The larval duration was prolonged by about 5, 3, and 2 days at Bt concentrations of 1 × 10^6, 1 × 10^5, and 1 × 10^4 cfu/ml, respectively.
Effect of Bt CAB109 on the larval duration of S. exigua
Effect on the larvae, pupae, adults, and eggs
Bt CAB109 evidently affected all stages of S. exigua (Table 1). The CIIPC value decreased with increasing Bt concentration, and the smallest IIPC values corresponded to the greatest effects on survival across the pest stages.
Table 1 Effect of different concentrations of Bt CAB109
Effect on GM
Bt CAB109 affected not only the larvae, pupae, and adults, but also the eggs laid by the adults (Table 1). Therefore, we introduced the novel concept of GM to explain the relationship between the concentration of Bt and stages of S. exigua. The GM increased with increase in the concentration of Bt CAB109.
S. exigua has been reported to have relatively low sensitivity to Bt (Yue-qiu and Xing-fu 2002 and Bao-shan et al. 2006). Our results revealed that, unlike chemical insecticides, Bt had sublethal effects on S. exigua that affected its biology and development. These results are in agreement with those of Da-yong and Yong-man (2010), who stated that Bt affected the growth and development of S. exigua. In addition, Ming et al. (2002) and Donglin et al. (2007) obtained the same results when S. exigua larvae were fed on Bt cotton. Chemical insecticides control pests quickly and efficiently, whereas biological pesticides can act through sublethal effects, which directly affect larval, pupal, and adult weights, growth and development, eclosion rate, egg counts, and the occurrence of deformities, and thereby reduce the damage that would otherwise lower crop yields; moreover, biological pesticides cause less environmental pollution than chemical ones (Shen et al. 1994, Xiao-hui et al. 1999 and Choi et al. 2008). Therefore, it is necessary to evaluate efficient methods for the biological control of pests using pesticides that have such follow-up effects (Shu-liang et al. 2006).
Most target pests are killed by Bt, but there are exceptions such as S. exigua, which has a relatively low sensitivity to Bt (Yue-qiu and Xing-fu 2002; Bao-shan et al. 2006). Moreover, Bt exerted a deterrent or feeding-inhibition effect on the insects. The amount of Bt toxin ingested was insufficient to kill the insects, but it was enough to affect their normal growth and development (Da-yong and Yong-man 2013). The weight of the insects decreased, which can be explained by the reduction in food intake. Thus, although the pests were still alive, the degree of harm they caused to the plants was reduced. At the same time, the development duration of the surviving larvae was prolonged, the number of generations decreased, and the harm to the crops was reduced.
In this study, at a CAB109 concentration of 1 × 10^6 cfu/ml, the larval mortality was only 52.6%, but the resulting GM reached 90.6%. Therefore, when Bt is applied in practice, high mortality within a short period cannot be expected. When pests occur at relatively low density, however, Bt can be effective for pest control with little or no use of chemical pesticides. Thus, biological pesticides such as Bt are important for environmental protection and pollution-free agricultural production (Gay 2012; Pretali et al. 2016).
The generation mortality provides a comprehensive and systematic reflection of the actual control effect of the biological pesticide Bacillus thuringiensis and thus allows a reasonable assessment of its efficacy.
Bao-shan Y, Lan-juan C (2006) Research progress in the control of beet armyworm. J Anhui AgriSci 34(14):3418–3419
Barker J (1998) Effect of Bacillus thuringiensis subsp. kurstaki toxin on the mortality and development of the larval stages of the banded sunflower moth (Lepidoptera: Cochylidae). J Econ Entomol 91:1084–1088
Choi YJ, Gringorten JL, Belanger L, Morel L, Bourque D, Masson L, Groleau D, Miguez CB (2008) Production of an insecticidal crystal protein from Bacillus thuringiensis by the methylotroph Methylobacterium extorquens. Appl Environ Microbiol 74(16):5178–5182
Da-yong J, Seungkyung P, Jinsu K, Suyeon C, Chan P, Taehwan K, Nayoung J, Sunyoung J, Youngnam Y, Yongman Y (2009) Environment-friendly control of beet armyworm, Spodoptera exigua (Noctuidae: Lepidoptera) to reduce insecticide use. Appl Entomol 48(2):253–261
Da-yong J, Xueli Q, Xiangguo L, Yongwan Y (2012) Effects of Tween 80 on spreading of Bacillus thuringiensis on crop leaves and its control efficacy against Spodoptera exigua in scallion fields. Plant Prot 38(5):143–146
Da-yong J, Yong-man Y (2010) Isolated and bioassay of Bacillus thuringiensis with high insecticidal activity to Spodoptera exigua. J Agri Sci Yanbian Univ 32(4):238–242
Da-yong J, Yong-man Y (2013) Effect on growth and development of Spodoptera exigua larvae by Bacillus thuringiensis CAB109. Northern Horticul 20(6):122–124
Donglin H, Jinyu X, Hui L, Wei W, Ji Z, Qian W (2007) Effects of CpTI+Bt transgenic cotton and Bt transgenic cotton on population increase and preference of Spodoptera exigua (Hübner). Acta Phytophy Sin 34(5):461–465
Erb SL, Bourchier RS, Van Frankenhuyzen K, Smith SM (2001) Sublethal effects of Bacillus thuringiensis Berliner subsp. kurstaki on Lymantria dispar (Lepidoptera: Lymantriidae) and the Tachinid parasitoid Compsilura concinnata (Diptera: Tachinidae). Environ Entomol 30(6):1174–1181
Gay H (2012) Before and after silent spring: from chemical pesticides to biological control and integrated pest management—Britain, 1945-1980. Ambix 59(2):88–108
Gui-lan N, Jian-ping Y, Da-sheng Z, Zhiming Y (2002) Characterization of Bacillus thuringiensis WY-190 showing high performance in killing Spodoptera exigua. Chin J Biol Control 18(4):166–170
Ji-zhong S, Chuan-fan Q, Shu-fang Z (1994) Effects of sub-lethal dosages of Bacillus thuringiensis galleriae on the metabolism of substances in Galleria mellonella larvae. Acta Phytophylacica Sin 21(4):373–377
Lan-lan H, Chang-chun D, Fu-ping S, Jie Z, Kui-jun Z (2008) Analysis activity of cry protein from Bacillus thuringiensis against Plutella xylostella of vegetable pest in Heilongjiang Province. Northern Horticul (8):198–200
Pretali L, Bernardo L, Butterfield TS, Trevisan M, Lucini L (2016) Botanical and biological pesticides elicit a similar induced systemic response in tomato (Solanum lycopersicum) secondary metabolism. Phytochemistry 130:56–63
Qing-xian Y (2008) Progress on synergistic bacteria of Bacillus thuringiensis. Northern Horticul (1):55–58
Shu-liang F, Rong-yan W, Jin-yao W, Li-xin D, Da-fang H (2006) Evaluation of control effect of Bacillus thuringiensis strain HBF-1 against larvae of Scarabaeoidae. Acta Phytophylacica Sin 33(4):417–422
Xiao-hui Z, Zi-niu Y, Cui H (1999) Effect of Cry1C toxin from Bacillus thuringiensis on growth, survival and feeding behavior of beet armyworm larva. J Zhejiang Agri Univ 25(1):62–66
Xiong-fei P, Mao-xin Z, You-ming H (2000) Evaluation of plant protectants against pest insects. Chin J Appl Ecol 11(1):108–110
Xue M, Jie D, Cheng-sheng Z (2002) Effect of feeding Bt cotton and other plants on the changes of development and insecticide susceptibilities of lesser armyworm Spodoptera exigua (Hübner). Acta Phytophylacica Sin. 29(1):13–18
Yue-qiu L, Xing-fu J (2002) Biological control of Spodoptera exigua. Plant Protection 28(1):54–56
We thank Professor Youn Young-nam and Dr. Jin Na-young (Chungnam National University, Korea) for their assistance with this study.
Yanbian University, Yanbian Korean Autonomous Prefecture, China
Shichen Huang, Xiangguo Li, Guangchun Li & Dayong Jin
SC performed the experiments and wrote the paper, XG performed part of the experiments, GC participated in the statistical analysis, and DY conceived of the study and participated in its design and coordination. All authors read and approved the final manuscript.
Correspondence to Dayong Jin.
Huang, S., Li, X., Li, G. et al. Effect of Bacillus thuringiensis CAB109 on the growth, development, and generation mortality of Spodoptera exigua (Hübner) (Lepidoptera: Noctuidea). Egypt J Biol Pest Control 28, 19 (2018). https://doi.org/10.1186/s41938-017-0023-y
Bacillus thuringiensis
Spodoptera exigua
Growth development
What Is A Qubit Made Of
We will show that if you can clone a state, then Bob can distinguish the two above probability distributions (equal probabilities of | + ⟩, | − ⟩ vs. Quantum dots are made of semiconductor material and are used to contain and manipulate electrons. Oct 13, 2017 · The two companies began working together in 2015 and have made strides that include developing a spin qubit fabrication flow on the chip maker's 300mm process technology and the unique packaging. Oct 23, 2019 · Sycamore, measuring about 10 mm (0. Martinis, in Les Houches, 2004. We review here three difierent ways that these nonlinear resonators can be made, and which are named as phase, °ux, or charge qubits. IEEE Spectrum, October 30, 2017. The Qubit[1] syntax simply allocates one qubit to the qubits array. America's Richest Self-Made Women China's Richest India's Richest If an operation is applied to a qubit while it's in a superposition state, it will affect both states simultaneously. " Read more: Quantum computing could change everything, and. One can consider a working quantum computer an ultimate example of such. Google's next target is a 49-qubit quantum computer. As a result, a roadmap leading toward diamond-based quantum computers is starting to emerge. Let me explain as best I can. It does so with specific use cases that tackle key business challenges in specific verticals. Qubit is the leader in highly persuasive personalization at scale. If successful, it could be the world. out of which a quantum computer would be made. Principles of Quantum Computing Qubits To implement a computational model as a physical device, the computer must be able to adept different internal states, provide means to perform the necessary transformations on them and to extract the output information. Theory of quantum computing A description of the fundamental concepts behind quantum computation begins with the complexity of problems, qubits, and a look at quantum parallelism, quantum algo-rithms, and the challenges of building a quan-tum computer. 5 Energy diagram changes over time as the quantum annealing process runs and a bias is applied. This task resembles the early days of programming, in which software was built in machine languages. A Qubit (or QBit) is a unit of measure used in quantum computing. Oct 23, 2019 · hold on a qubit While the peer-reviewed research has drawn plaudits, with MIT's William D. QUBIT DIGITAL LTD - Free company information from Companies House including registered office address, filing history, accounts, annual return, officers, charges, business activity. The analogous to the bit is Qubit (short for "Quantum Bit") in quantum computers. Back in April, I read the pre-print "Lower bounds on the non-Clifford resources for quantum computations" by Beverland et al. Sep 15, 2017 · A qubit is the core building block of a quantum computer. Quantum computing breakthrough: Qubits made from standard silicon transistors. Line 10 and 11 describe the quantum gates that form the circuit. They support many local charities as. Quantum computing's also-rans and their fatal flaws When it comes to performance, engineering matters more than physics. Of course, it depends on the shape of the. Qubit is the leader in highly persuasive personalization at scale. Because Quantum is Coming. To their credit, all the mothers I met made me feel as accepted and welcome as any other parent of a young baby, and I made some really good friends. Qubit = quantum form of a bit. ) used extensively at IBM, Google, and elsewhere. 
First off it really works and works well with all of the six supported algorithms – X11, X13, X14, X15, Quark and Qubit. Back in April, I read the pre-print "Lower bounds on the non-Clifford resources for quantum computations" by Beverland et al. review articles Quantum Money made by a mint or dug out of the ground. IBM has made a breakthrough in quantum computing by demonstrating a way to control the quantum behavior of individual atoms. Those of us who are the leading publishers in independent media have long known that government-funded tech advancements are typically allowed to leak to the public only after several years of. 3200 qubit by 2030. C) Photograph of the quantum processor package for the first IBM Q systems. A qubit can exhibit "pure" and "impure," or mixed, states. The difficulty is that such processes. One can consider a working quantum computer an ultimate example of such. By Lucian Armasu 2019-09 The new computer will be the largest commercially available quantum computer yet when it's made available in. Ions are arguably the leading candidate for use as qubits in a quantum computer. Coupling to a second qubit can be made strong. has made atomic qubits by placing a single phosphorus atom at a known position inside a. A qubit can exhibit "pure" and "impure," or mixed, states. A qubit is a single unit of quantum information. Back in April, I read the pre-print "Lower bounds on the non-Clifford resources for quantum computations" by Beverland et al. However, unlike a bit, which can either be 0 or 1, a qubit can be 0 and 1 at the same time - a quantum superposition of both states. Density matrices have no "freedom of phase' every mathematical change made to a density matrix has a measurable physical effect. Our dream: to understand space, time and gravity as emergent features in a world made of information – quantum information. Appreciate that a meeting over some suboptimal conferencing tool, is not the same as F2F time together. What is a Qubit? The fundamental unit of processing information in a classical computer is a bit which can hold binary values ('0' or '1'). Sep 19, 2019 · IBM yesterday announced the opening of the IBM Quantum Computing Center in New York, with five 20-qubit systems up and running and a 53-qubit system expected to go online next month. Implement the foundations of personalization with a quick time to value. The Qubit[1] syntax simply allocates one qubit to the qubits array. Entanglement is what allows quantum computers to go beyond computers made out of balls, or even transistors. We are specialized in higher education academic information systems, powered by multichannel technologies and really important, we 'eat our own dog food' and rule ourselves by core values!. Intel Sees Promise of Silicon Spin Qubits for Quantum Computing Intel Corporation has invented a spin qubit fabrication flow on its 300 mm process technology using isotopically pure wafers, like this one. Qubit is the currency that sets the bar for the media, the lay, and for funding but it is not what most people think it is. What The Heck Is a Qubit? Post by Drew Sebastino » Tue Dec 22, 2015 7:32 pm I'm sorry, but I've been hearing about some "top-secret", area 51 type stuff called "quantum computing" and how it's (according to some dumb YouTube video) "100 million times faster" than traditional computing, and they always say that it's because of some magical. 
One famous example is the transmon qubit (see "Charge-insensitive qubit design derived from the Cooper pair box" by Koch et al. It's also a vindication of kind for D-Wave Systems, the company that built this computer and markets a 128-qubit computer for $10 million. Hundreds of researchers in a collaborative project called "It from Qubit" say space and time may spring up from the quantum entanglement of tiny bits of information. Jun 12, 2014 · Station Q is a Microsoft Research lab located on the campus of the University of California, Santa Barbara. 200-qubit by 2022. Quantum volume is a little harder to understand but it's the real measure of growth in the field. The qubit -- want to know why and how they work? This series of videos from IBM quantum computing researchers has the answers. qubit synonyms, qubit pronunciation, qubit translation, English dictionary definition of qubit. Qubit entanglement is a feature of qubits that differentiates quantum. In addition, we can combine quantum states in forming a. The internet has brought new opportunities for business interaction. Craig Gidney's computer science blog. Use the MyQubit assay design tool to create your own assays to run on the Qubit fluorometer. What does a continuously monitored qubit readout really show? For continuous measurements of a quantum observable it is widely recognized that the measurement output approximates the expectation value of the observable, hidden by additive white noise. This is made possible by the quantum effects known as entanglement and superposition. This can be arranged in many ways. Google: We've made 'quantum supremacy' breakthrough with 54-qubit Sycamore chip. Back in April, I read the pre-print "Lower bounds on the non-Clifford resources for quantum computations" by Beverland et al. The junction bias Idc is typically chosen to give 3–7 states in the well. May 04, 2016 · The 5-qubit quantum processor is part of a new platform called the IBM Quantum Experience. The company also […] IBM has been offering quantum computing as a cloud service since last year when it came out with a 5 qubit version of the. Let's face it; we 're having some trouble keeping up with Moore's Law. Sep 06, 2017 · Some tried and true ways to capture a qubit are to use standard atom-taming technology such as ion traps and optical tweezers that can hold onto particles long enough for their quantum states to be analysed. The spin of an electron has also been suggested for use as a qubit and the increased knowledge of nuclear magnetic resonance (NMR) has made this idea more feasible. But researchers have made great strides in recent years figuring out how to stabilize qubits, so they are starting to build rudimentary quantum computers. They have demonstrated a fully controllable chip with ten qubits that can store quantum information for up to one minute. Oct 23, 2019 · UPDATE 3-Google unveils quantum computer breakthrough; critics say wait a qubit. What is a Qubit? The fundamental unit of processing information in a classical computer is a bit which can hold binary values ('0' or '1'). The difficulty is that such processes. The 50 qubit is the natural extension of the 20 qubit architecture. In their experiment this year, the researchers were able to get 53 of Sycamore's qubits to interact in a quantum state. However, qubit readouts are not guaranteed to be independent from one another, giving another bit of headache here. So, a bit can only be one thing at one time (a 0 or a 1). 
A complete circuit, however, has yet to be demonstrated. Such a qubit would be made of thin aluminum films deposited on a silicon chip. Jan 08, 2019 · Last year, IBM hauled a 50-qubit quantum computer to CES. There's a certain chance that you may opt-out of reading this article without completing it. These both states are well defined. Mar 31, 2018 · In the lab are a series of white cylinders, which are fridges, cooled almost to absolute zero as part of the process of creating a qubit, the building block of a quantum computer. May 15, 2014 · Forging a Qubit to Rule Them All. Qubit is built for digital merchandising, optimization and data science teams, with native integrations for marketers. One logical qubit will require 10 to 100 physical qubits with Microsoft's topological qubits. 39 inch) across, is made using aluminum and indium parts sandwiched between two silicon wafers. superconductors - allow electrons to flow with almost no resistance at very low temperatures. In this article, we will look deeper into quantum supremacy, its significance…. The simplest invariants obtained in this way are made explicit and compared with various known entanglement measures. Quantum computing begins with the notion of a qubit. Click these words to find out how many points they are worth, their definitions, and all the other words that can be made by unscrambling the letters from these words. Last January, IBM announced at the Consumer Electronics Show in Las Vegas that they were introducing the world's first integrated quantum computer for business and research purposes, to be made. The future success of your business will rely on instant access to information, from any location, at any time. Now take those same 8 slots we talked about and now each slot is filled with a Qubit instead of a bit. A qubit just corresponds to a unit vector in two-dimensional Hilbert space! When we talk about a qubit being collapsed to a specific state with a certain probability, you may think it's the same as tossing a fair coin or throwing a balanced die. To fight decoherence, a quantum computer made of entangled electrons, for example, requires each unit of information to be shared among an elaborate network of many qubits cleverly arranged to prevent an environmental disturbance of one from leading to the collapse of them all. The complex numbers $\mathbb{C}$ are isomorphic to vectors in $\mathbb{R}^2$, whereas the qubit $|\Psi\rangle\in\mathbb{C}^2$ is isomorphic to vectors in $\mathbb{R}^4$. "A more realistic, tangible implementation of qubit can be a ring made of superconducting material, known as flux qubit, where two states with clockwise- and counterclockwise-flowing electric currents may exist simultaneously," says Chia-Ling Chien, Professor of Physics at The Johns Hopkins University and another author on the paper. The device was created using standard manufacturing techniques, by modifying current-generation silicon transistors, and the technology could scale up to include thousands, even millions of entangled quantum bits on a single chip. Just as a bit is the basic unit of information in a classical computer, a qubit is the basic unit of information in a quantum computer. See the complete profile on LinkedIn and discover Lyth's connections and jobs at similar companies. There's a certain chance that you may opt-out of reading this article without completing it. 
Read about developing the topological qubit Building a quantum cloud platform Our complete quantum stack approach includes familiar tools, provides development resources to build and simulate quantum solutions, and continues with deployment through Azure for a streamlined combination of both quantum and classical processing. Although made from over a billion aluminum atoms in a superconducting electronic circuit, these qubits behave as single atoms. has made atomic qubits by placing a single phosphorus atom at a known position inside a. We call this protocol classical teleportation and actually most of you have actually encountered this protocol before!. qubit (redirected from Qubits) Also found the journal. We are specialized in higher education academic information systems, powered by multichannel technologies and really important, we 'eat our own dog food' and rule ourselves by core values!. 8, 2019 The Glossy Awards Europe recognize the companies transforming the. These both states are well defined. What Is Custom Made Web Application. The proposed "Chiral Qubit" is a micron-scale ring made of a Weyl or Dirac semimetal, with the j0iand j1iquantum states corresponding to the symmetric and antisymmetric superpositions of quantum states describing chiral fermions circulating along the ring clockwise and. Substantial progress has been made in order to mitigate this effect. Sep 19, 2019 · IBM yesterday announced the opening of the IBM Quantum Computing Center in New York, with five 20-qubit systems up and running and a 53-qubit system expected to go online next month. It doesn't matter which qubit he sends, since the state is symmetric. If we continue to double the amount of transistors in a microprocessor every 18 months, then in 2030 we'll have circuits that will be measured on the atomic scale. If successful, it could be the world. We find that the qubit jump statistics fluctuates between. Our products. The spin of an electron has also been suggested for use as a qubit and the increased knowledge of nuclear magnetic resonance (NMR) has made this idea more feasible. IBM quantum experience Last month, Google claimed to have achieved quantum supremacy—the overblown name given to the step of. We set ourselves up for a growth plan that had capacity for 125%. Oct 29, 2018 · This also depends on the qubit platform that is used, which can be thought of as how strong the soda can holder is and what material they are made of. 7 million. 2 The Turing machine, developed by Alan Turing in the 1930s, is a theoretical device that consists of tape of unlimited length that is divided into little squares. The proposed "Chiral Qubit" is a micron-scale ring made of a Weyl or Dirac semimetal, with the j0iand j1iquantum states corresponding to the symmetric and antisymmetric superpositions of quantum states describing chiral fermions circulating along the ring clockwise and. Nov 09, 2019 · Qubit's hiring policy was beyond superlative in the few years I was there. Using the qubit to pump spin into the nuclei, the nuclear magnetic field is stabilized, and the nuclear bath is narrowed. But there is a problem: Today's qubits are noisy, or "dirty. Jul 10, 2018 · A qubit or quantum bit is the basic unit of quantum information. They now have the ability to control up to 50,000 qubits through simply three wires, a cryogenic CMOS design, and a 1cm2 chip computing at near. 
qubit is made using a cooper pair box which is two supercon- ductors A and B capacitively shunted forming a Josephson Junction (JJ) where C is the insulating material. of a 1000 qubit scale experiment by coupling multiple collective ensembles[46]. IEEE Spectrum, October 30, 2017. They are exploring theoretical and experimental approaches to creating the reliable quantum analog of the traditional bit—the qubit. The use of quantum dots to define charge and spin qubits is well established in the field of quantum information processing. So a commercial one will need to use vintage tech—ultra dense hard drives, maybe made of DNA or single atoms. Qubit is a device in helmets that enables football coaches to communicate with individual or groups of players. hence the limit of a qubit would be the amount of states the bloch-sphere for a single qubit can represent at a time. For example, if qubit 1 is connected to qubit 2, many implementation require one of the qubits to be the CONTROL and the other qubit to be the TARGET. What The Heck Is a Qubit? Post by Drew Sebastino » Tue Dec 22, 2015 7:32 pm I'm sorry, but I've been hearing about some "top-secret", area 51 type stuff called "quantum computing" and how it's (according to some dumb YouTube video) "100 million times faster" than traditional computing, and they always say that it's because of some magical. Physicists set new record with 10-qubit entanglement. The fact that measurement causes the qubit to jump to a new state is something that belongs to quantum mechanics. One logical qubit will require 10 to 100 physical qubits with Microsoft's topological qubits. A tale of two qubits: how quantum computers work or qubit, is the simplest unit of quantum information. To make a qubit, you need an object that can attain a state of quantum superposition between two states. A single qubit can represent a one, a zero, or any quantum superposition of those two qubit states; a pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. "Quantum computers are very different from today's computers, not only in what they look like and are made of, but more importantly in what they can do. In this area, we have developed high quality factor materials and radiation suppression techniques that allow for reliable T1 times on the order of 30 - 40 us. The 50 qubit is the natural extension of the 20 qubit architecture. Here, we introduce a qubit architecture that incorporates fast tunable coupling, high coherence, and minimal cross talk. It had built a system called the IBM Quantum Experience which allowed guest users to. 7 million. They are exploring theoretical and experimental approaches to creating the reliable quantum analog of the traditional bit—the qubit. Enter any letters to see what words can be formed from them. Inset: In the qubit even subspace, fluctuations are increased in a qubit state–dependent quadrature, leading to slow dephasing inside the subspace. Further improvement on this characterization can be made by adopting two- or more-qubit detector models instead of independent single-qubit detectors for all the qubits in one device. Let's face it; we 're having some trouble keeping up with Moore's Law. Within one month, IBM's commercially available quantum fleet will grow to 14 systems, including a new 53-qubit quantum computer, the single largest universal quantum system made available for. 
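As a minimal, framework-independent sketch of the superposition idea described above (a qubit as a unit vector of two complex amplitudes, with n qubits spanning 2^n basis states), the following Python/NumPy snippet builds an equal superposition and computes the measurement probabilities; it is only an illustration, not code for any specific quantum device or library.

```python
import numpy as np

# A qubit state |psi> = a|0> + b|1> is a unit vector of two complex amplitudes.
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)        # equal superposition of |0> and |1>
psi = np.array([a, b], dtype=complex)

assert np.isclose(np.linalg.norm(psi), 1.0)  # normalization: |a|^2 + |b|^2 = 1

p0 = abs(psi[0]) ** 2    # probability of reading out 0
p1 = abs(psi[1]) ** 2    # probability of reading out 1
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")   # 0.50 each for this state

# n qubits span a 2**n-dimensional state space, e.g. 3 qubits -> 8 basis states:
print(2 ** 3)
```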
Jun 24, 2019 · Archer Exploration Limited is in trading halt ahead of an announcement in relation to progress with development of a nanoscale qubit processor. The company was registered / incorporated on 12 February 1996 (Monday), 23 years ago with a paid up capital of $50000. Quantum dots are made of semiconductor material and are used to contain and manipulate electrons. It's also possible the rate of progress will be slightly higher than 2x every 2 years, so doing it a few years sooner than that is not out of the question. In the new set-up, the researchers used qubits made of tiny pieces of aluminum, which they connected to each other and arranged in a. That's very useful if you are trying to find your way through an unfamiliar part of town, but if you pause for a moment to think about this, it is actually quite remarkable (and maybe a bit creepy?). Today, IBM announced a 50-quantum bit (qubit) quantum computer, the largest in the industry so far — but it's still far from a universal quantum computer. Qubit definition: a quantum bit | Meaning, pronunciation, translations and examples. which are made up of. Here, we introduce a qubit architecture that incorporates fast tunable coupling, high coherence, and minimal cross talk. Made for cross-functional teams. At the low temperatures in our system these loops become superconductors and exhibit quantum mechanical effects. Sep 18, 2019 · What is quantum computing? Understanding the how, why and when of quantum computers. The new machine is named. While single-qubit gates possess some counter-intuitive features, such as the ability to be in more than one state at a given time, if all we had in a quantum computer were single-qubit gates then we would have a device with computational power that would be dwarfed by even a calculator let alone a classical supercomputer. Our software uses data to transform how companies understand and influence their customers. The interesting question, one that I am only beginning to get a handle on, is: 'In what ways will HBASE help. On the heels of IBM's quantum news last week come two more quantum items. A qubit can exhibit "pure" and "impure," or mixed, states. Oct 23, 2019 · Sycamore, measuring about 10 mm (0. C/CS/Phys C191 Physical Qubits 9/8/07 Fall 2009 Lecture 4 1 Physical Qubits We now give three examples of physical realizations of qubits, but there are many more. There are working machines today that perform some small part of what a full quantum computer may eventually do. Constructing two-qubit gates with minimal couplings Haidong Yuan,1,* Robert Zeier,2 Navin Khaneja,2 and Seth Lloyd1 1Department of Mechanical Engineering, MIT, Cambridge, Massachusetts 02139, USA. In the tech and business world there is a lot of hype about quantum computing. Qubit® working quantitation temperature, protected from light. The device consists of a two-gate, p-type transistor with an undoped channel. Aug 23, 2019 · HOUSTON — (Aug. That is, Twitchen explained, light can be used to either transmit a qubit state or to initialize such a state. Our products. A quantum bit can exist in superposition, which means that it can exist in multiple states at once. Cooke confirmed this to BI: "That is the reality. What is Quantum Computing? the state of a qubit is a unit vector in C2—the An orthonormal basis for an inner product space V is a basis made. Dec 03, 2019 · iCrowd Newswire - Dec 3, 2019 Illustration by Alex Castro / The Verge. Dec 11, 2018 · It has stored 160 qubits and performed operations on 79 qubits, a record. 
Oliver comparing it to the Wright brothers' first flights, skeptics say Google is over-selling its. Description Ever wonder what a Qubit is or what the basis of quantum computing is founded on. Sep 24, 2019 · D-Wave's announcement follows IBM's announcement of a 53-qubit system, though the types of qubits used by IBM are dissimilar, making direct comparisons between IBM and D-Wave unproductive. These all are responsive and included all basic component of bootstrap. These both states are well defined. Each of these 18 qubits have two possible values (just like there are two sexes in clownfish) -- together they represent 218 or 262,144 possibilities. If we treat a qubit as a single unit of information, a qubit could encode a value of j0i, j1ior any superposition j0i+ j1iof these. sciencedaily. A nanowire bridging two superconductors substitutes the insulating barrier used in conventional. Aug 07, 2018 · That would be more powerful than IBM's 50-qubit computer and more powerful than Google's 72-qubit machine. phase, flux or charge qubit [6, 10-12]. The Qubit[1] syntax simply allocates one qubit to the qubits array. So I made what is essentially me giving him the bird. If the first qubit is in state |0 we can write the input as a state of the form. • Select the Qubit package for the site (either PRO or Aura). A qubit can also be made using the spin of the nucleus of the nitrogen atom. Fast food is "fast" because they're mostly just assembling stuff from ready made parts, and performing the last stage of cooking/heating. A qubit is a two-state quantum-mechanical system, such as the polarization of a single photon: here the two states are vertical polarization and horizontal polarization. A qubit is a representation of a probability, and once read it's destroyed. review articles Quantum Money made by a mint or dug out of the ground. There's a certain chance that you may opt-out of reading this article without completing it. Essentially, once a connection of some type is made between qubits, the connection remains in place. Product overview Product overview; Qubit Start. Today, IBM announced a 50-quantum bit (qubit) quantum computer, the largest in the industry so far — but it's still far from a universal quantum computer. Quantum computing company D-Wave Systems Inc. What is PCR?. The Invitrogen Qubit 4 Fluorometer is the next generation of the popular benchtop fluorometer designed to accurately measure DNA, RNA, and protein quantity. However, qubit readouts are not guaranteed to be independent from one another, giving another bit of headache here. A 53 qubit quantum computer will find its place in the new Quantum Computation Center that IBM will soon inaugurate in the state of New York. Qubit is a leading software development company creating custom-built offline software and web solutions. "A more realistic, tangible implementation of qubit can be a ring made of superconducting material, known as flux qubit, where two states with clockwise- and counterclockwise-flowing electric currents may exist simultaneously," says Chia-Ling Chien, Professor of Physics at The Johns Hopkins University and another author on the paper. Qubit's Quest, with its adorable protagonist and multitude of different gameplay styles, is the best MARS Lightcon game currently available. All the Matlab, Octave and Mathematica code linked from this page is released under the GPL license, version 2 or later. In contrast to previous designs, our "gmon" device inductively couples transmon qubits at their low voltage node. 
This leads to the formation of qubit pairs. If we continue to double the amount of transistors in a microprocessor every 18 months, then in 2030 we'll have circuits that will be measured on the atomic scale. Exchange interaction between spins is fairly weak. Those of us who are the leading publishers in independent media have long known that government-funded tech advancements are typically allowed to leak to the public only after several years of. (though this is a rather misleading term because, like all quantum systems, a qubit has a continuum of physical states available to it). Nanodrop is good if you do not have to contamination at 260nm. Martinis, in Les Houches, 2004. 12/11/2017; 11 minutes to read; In this article. This is because a decidedly exponential equation that's well worth memorizing drives the size of the matrix (and thus the sophistication of the quantum gate):. What does a continuously monitored qubit readout really show? For continuous measurements of a quantum observable it is widely recognized that the measurement output approximates the expectation value of the observable, hidden by additive white noise. In their. decay channels through which the qubit decoheres [17]. The company already has over 120 employees, and has made a 19 qubit quantum computer available online through its developer environment called Forest. Qubit by Qubit, IBM Scientists Contend With Quirks of Quantum Physics the solution to a computation made by a quantum computing system still needs to be read by a conventional computer, which. has made atomic qubits by placing a single phosphorus atom at a known position inside a. German Scientists Create 5 qubit Quantum Register 206 Posted by timothy on Tuesday October 12, 2004 @08:28AM from the sehr-klein dept. *This is only if the value v has only one linearly independent eigenvector, but if not you can still make the same argument. A schematic drawing of a superconducting qubit coupled to phonons inside a sapphire crystal. We work with you to maximize the success of your business ideas through professionally developed software application & websites. Google: We've made 'quantum supremacy' breakthrough with 54-qubit Sycamore chip. So, a bit can only be one thing at one time (a 0 or a 1). Qubit definition, the fundamental unit of information in a quantum computer, capable of existing in two states, 0 or 1, simultaneously or at a different time. Google announced a more powerful 72-qubit 'Bristlecone' model last year, but that was for its internal techies only. The new machine is named. A single molecule of air can knock a qubit out of. The processor features improvements in superconducting qubit design, connectivity and packaging. Dec 01, 2019 · The most common class of quantum error correcting codes are stabilizer codes. The coherence times were similar to those of the quantronium and phase qubits, around 1 microsecond. Additionally, methods of applying and implementing qubits differ drastically, depending on the type of qubit involved. Further improvement on this characterization can be made by adopting two- or more-qubit detector models instead of independent single-qubit detectors for all the qubits in one device. Jul 10, 2018 · A qubit or quantum bit is the basic unit of quantum information. Everything else being equal, the probability of the qubit ending in the 0 or the 1 state is equal (50 percent). Martinis, in Les Houches, 2004. In contrast to previous designs, our "gmon" device inductively couples transmon qubits at their low voltage node. 
Characterization of non-universal two-qubit Hamiltonians by Laura Mancinska A thesis presented to the University of Waterloo in ful llment of the thesis requirement for the degree of Master of Mathematics in Combinatorics and Optimization Waterloo, Ontario, Canada, 2009 c Laura Mancinska 2009. Line 10 and 11 describe the quantum gates that form the circuit. Qubit currently have a rating of 0 out of 5 stars on Serchen and are currently not rated by their customers. A logical qubit is the qubit at the level of the algorithm. This was the first demonstration of the one-qubit rotation gate in solid state devices. Such extension has been made possible by the recent contribution of Miltzow et al. How is a qubit made? Optical traps use light waves to trap and control particles. I've worked on assembly lines making food products. In their experiment this year, the researchers were able to get 53. Measuring about 10 mm across, it is made using aluminum and indium parts sandwiched between two silicon wafers. There are three major problems we work on at Qubit: How do we collect, process and store in realtime the billions of events our clients send us every day? How do we make sense of all that data?. View Additional Video Information TOPICS: Art & Science, Biology & Origins of Life, Earth & Environment, Kavli, Mind & Brain, Physics & Math, Science in Society, Science Unplugged, Space & The Cosmos, Technology & Engineering, Youth. The company already has over 120 employees, and has made a 19 qubit quantum computer available online through its developer environment called Forest. The Stuff Science Fiction Is Made of By March 17, 2003 4:00PM PST Wasn?t it just a few years ago that fastest CPU out there was the Intel Pentium 233MMX?. When using the above vector notation for qubits, gates should then be represented by matrices that preserve the normalization condition; such matrices are called unitary matrices. They have demonstrated a fully controllable chip with ten qubits that can store quantum information for up to one minute. Each of these 18 qubits have two possible values (just like there are two sexes in clownfish) -- together they represent 218 or 262,144 possibilities. The game is simple. That is, Twitchen explained, light can be used to either transmit a qubit state or to initialize such a state. Jun 23, 2018 · A 64-qubit circuit was simulated with a 128-node computer cluster, but the hardware resources they used have been greatly reduced compared with other methods. The fact that a qubit is a type of physical system, rather than a pure abstraction, is another important conceptual difference between the. At that hardware level, each logical qubit is represented in hardware by a number of physical qubits to enable protection of the logical information. The scientists' experiment involved building a 54-qubit processor, named Sycamore, made up of "fast, high-fidelity quantum logic gates. sciencedaily. Each ordinary bit can be either a 1 or a 0 at any given time, but a qubit can be both at once. 'Quantum computers are very different from today's computers, not only in what they look like and are made of,. Quantum computing for the qubit curious. Each technology has its own tradeoffs. EDA Quantum What? The Future of Computing and Electronics Is All About Qubits. Differences in magnetic field drive transitions between S and T0 at an unknown rate, leaving the qubit in an unknown state. 
Although made from over a billion aluminum atoms in a superconducting electronic circuit, these qubits behave as single atoms. ScienceDaily. All of these characteristics make N-V centers very attractive as a potential qubit technology, and have inspired substantial research efforts. Figure 1: Scheme of nanowire-based qubits [3]. The paper describes a way of compressing states emitted by a quantum source of information so that they require fewer physical. There are three major problems we work on at Qubit: How do we collect, process and store in realtime the billions of events our clients send us every day? How do we make sense of all that data?. 1 a Plot of non-linear potential U(δ) for the Josephson phase qubit. hand, and coupling qubits strongly for fast qubit control and readout, on the other hand. The new Qubit 4 also easily measures RNA integrity and quality. What is PCR?. Qubit is a device in helmets that enables football coaches to communicate with individual or groups of players. Unfortunately, these parameters are often not released publicly, so we are unable to publish them at this time. However, and this is the crucial part, these quantum probabilities can be made to cancel each other because they take. We set ourselves up for a growth plan that had capacity for 125%. It doesn't matter which qubit he sends, since the state is symmetric. One qubit is two possible numbers, two is four possible numbers, three is eight and so. Other systems use lithographically patterned superconducting circuits kept at millikelvin temperatures in dilution refrigerators. It can behave like regular bits, being a 1 OR a 0, but it also can be both AT THE SAME TIME. The qubit states |0 and |1 are the two lowest eigenstates in the well. IBM has unveiled its latest quantum device: the Q System One, a beautifully polished 20-qubit machine. What is quantum computing? Quantum computers could spur the development of new breakthroughs in science, medications to save lives, machine learning methods to diagnose illnesses sooner, materials to make more efficient devices and structures, financial strategies to live well in retirement, and algorithms to quickly direct resources such as ambulances. To read out the qubit, in principle, you would collide the two half-electron quasiparticles together on the nanowire and measure the outcome, which would yield a different signal depending on whether they were in a 0, 1, or a superposition state. Such a qubit would be made of thin aluminum films deposited on a silicon chip. One way, for instance, is for Bob to perform the following quantum circuit in his laboratory: With that done, Bob sends one of the two qubits to Alice. Enlarge / IBMs 16-qubit quantum computer from 2017. Substantial progress has been made in order to mitigate this effect. PHOTOGRAPHY, AUDIO, AND VIDEO RECORDING.
A Hybrid-Dimensional Coupled Pore-Network/Free-Flow Model Including Pore-Scale Slip and Its Application to a Micromodel Experiment
K. Weishaupt, A. Terzis, I. Zarikos, G. Yang, B. Flemisch, D. A. M. de Winter & R. Helmig
Transport in Porous Media volume 135, pages 243–270 (2020)
Modeling coupled systems of free flow adjacent to a porous medium by means of fully resolved Navier–Stokes equations is limited by the immense computational cost and is thus only feasible for relatively small domains. Coupled, hybrid-dimensional models can be much more efficient by simplifying the porous domain, e.g., in terms of a pore-network model. In this work, we present a coupled pore-network/free-flow model taking into account pore-scale slip at the local interfaces between free flow and the pores. We consider two-dimensional and three-dimensional setups and show that our proposed slip condition can significantly increase the coupled model's accuracy: compared to fully resolved equidimensional numerical reference solutions, the normalized errors for velocity are reduced by a factor of more than five, depending on the flow configuration. A pore-scale slip parameter \(\beta _{{{{\rm pore}}}}\) required by the slip condition was determined numerically in a preprocessing step. We found a linear scaling behavior of \(\beta _{{{{\rm pore}}}}\) with the size of the interface pore body for three-dimensional and two-dimensional domains. The slip condition can thus be applied without incurring any run-time cost. In the last section of this work, we used the coupled model to recalculate a microfluidic experiment where we additionally exploited the flat structure of the micromodel which permits the use of a quasi-3D free-flow model. The extended coupled model is accurate and efficient.
Coupled systems of free flow over a porous medium play an important role in many environmental, biological and technical processes. Examples include evaporation from soil governed by atmospheric air flow (Vanderborght et al. 2017), intervascular exchange in living tissue (Chauhan et al. 2011), preservation of food (Verboven et al. 2006), fuel cell water management (Gurau and Mann 2009) or heat exchange systems (Yang et al. 2018). Considerable effort has been spent on modeling these kinds of systems where a discrete resolution of the complex porous geometry such as in direct numerical simulation (DNS) is often not computationally feasible for larger systems. The porous medium can instead be treated in an averaged sense, based on the concept of an REV (Whitaker 1999). Following the so-called one-domain approach, one set of equations is used to describe both the free flow and the porous medium (Neale and Nader 1974). For the two-domain approach, a domain decomposition is performed where the free flow is usually described by the Navier–Stokes equations while the porous medium is accounted for by a lower-order model, such as Darcy's law (Ochoa-Tapia and Whitaker 1995; Layton et al. 2002; Jamet et al. 2009; Mosthaf et al. 2011). Appropriate coupling conditions between the two domains have to be formulated to ensure thermodynamic consistency (Hassanizadeh and Gray 1989). While being computationally efficient, these upscaled models may provide an insufficient degree of detail on the pore scale crucial for certain applications, e.g., when local saturation patterns at the interface of a drying soil globally affect the system (Shahraeeni et al. 2012). For these situations, a new class of so-called hybrid-dimensional models have been developed (Scheibe et al. 2015) which combine the high spatial resolution of pore-scale approaches, such as pore-network models, with the computational efficiency of REV-scale models. Pore-network models simplify the complex void geometry of the porous medium to a collection of equivalent pore elements and provide a comparatively high degree of pore-scale accuracy at low computational demand (Oostrom et al. 2016). Balhoff et al. (2007b) coupled a pore-network model to a Darcy-type continuum model. This concept was further developed (Balhoff et al. 2007a; Mehmani and Balhoff 2014) using mortar methods based on the work of Arbogast et al. (2007). Beyhaghi et al. (2016) proposed an iterative scheme to couple a pore-network model to free flow.
In our previous work (Weishaupt et al. 2019), we have presented a fully monolithic, fully implicit coupled model employing the Navier–Stokes equations in the free-flow region and a pore-network model in the porous domain. The model was verified against numerical reference solutions for stationary single-phase flow and an example of transient compositional flow over a random network was given.
Pores intersecting with the free flow represent local deviations from the no-slip condition which otherwise holds at the solid matrix of the porous medium's surface. This poses a conceptual weak point of our original model (Weishaupt et al. 2019) where no-slip coupling conditions would always occur at pores with throats oriented normally with the coupling interface. We address this issue here and introduce a pore-local slip condition which helps to correct the momentum exchange between pore-network and free flow.
As in our previous work, we still follow a monolithic coupling approach which does not require any coupling iterations since both sub-problems are solved simultaneously. Here, we further extend the capabilities of the coupled model by removing its former dependency on a direct linear solver. Instead, we now employ an iterative linear solver where an Uzawa method (Ho et al. 2017) serves as a preconditioner for the free-flow matrix blocks. This enhances the model's applicability to larger and three-dimensional systems while we still benefit from a fully coupled and implicitly mass conservative model formulation.
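To illustrate the idea behind an Uzawa-type treatment of the saddle-point structure arising from the discretized free-flow (Stokes) equations, the following Python/NumPy sketch shows a basic, unpreconditioned Uzawa iteration for a generic system of the form [[A, B], [Bᵀ, 0]]; it is a textbook variant given only for orientation and does not reproduce the actual solver configuration used in DuMux.

```python
import numpy as np

def uzawa(A, B, f, g, omega=1.0, tol=1e-10, max_it=5000):
    """Basic Uzawa iteration for the saddle-point system
        [A   B] [u]   [f]
        [B^T 0] [p] = [g]
    as it arises from discretized Stokes equations (A: velocity block,
    B: pressure-gradient block). Generic sketch, not the DuMux preconditioner."""
    u = np.zeros(A.shape[0])
    p = np.zeros(B.shape[1])
    for _ in range(max_it):
        u = np.linalg.solve(A, f - B @ p)   # velocity update for fixed pressure
        r = B.T @ u - g                     # divergence (constraint) residual
        p += omega * r                      # pressure (Lagrange multiplier) update
        if np.linalg.norm(r) < tol:
            break
    return u, p

# Tiny usage example (2 velocity dofs, 1 pressure dof):
A = np.array([[2.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0], [1.0]])
f = np.array([1.0, 0.0])
g = np.array([0.0])
u, p = uzawa(A, B, f, g, omega=1.0)
print(u, p)   # approx. [0.25, -0.25] and [0.5]
```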
We assess the accuracy of the novel slip condition and demonstrate the improved model's capabilities with a three-dimensional numerical example involving a randomly generated pore network.
Finally, we use the coupled model to recalculate high-resolution micro-Particle Image Velocimetry (micro-PIV) experiments performed on a micromodel comprising a free-flow channel over a regular porous structure at low Reynolds numbers (Terzis et al. 2019). Here, we exploit the micromodel's flat geometry which permits the use of a two-dimensional free-flow model including a wall friction term (Flekkøy et al. 1995) in order to save computational cost.
Mathematical and Numerical Model Concepts
Without loss of generality, gravity is neglected in the following and we assume incompressible steady-state flow conditions for sake of simplicity. Creeping flow (\(Re < 1\)) is considered in this work.
Free-Flow Model
The Stokes equations are used for the description of incompressible steady-state laminar flow:
$$\nabla \cdot \!{[\mu (\nabla \!{ {\bf v}} + \nabla \!{ {\bf v}}^{\rm T})]} -\nabla \!p = 0.$$
\(\mu\) is the fluid's dynamic viscosity, \({\bf v}\) is the fluid velocity while p is the pressure.
The continuity equation closes the system:
$$\nabla \cdot \!{{\bf v}} = 0.$$
Pore-Network Model
In the porous domain, a pore-network model is used where at each pore body (the intersection of two or more pore throats), the continuity of mass is required:
$$\sum _j Q_{ij} = 0.$$
Here, \(Q_{ij}\) is the discrete volume flow rate in a throat connecting pore bodies i and j:
$$Q_{ij} = g_{ij} (p_i - p_j).$$
\(p_i\) and \(p_j\) are the pressures defined at the centers of the pores bodies. The throat conductance \(g_{ij}\) depends on the pore throat geometry and the fluid properties. For certain geometries, simple analytical expressions for \(g_{ij}\) are available in the literature (Patzek and Silin 2001). Otherwise, numerical upscaling (Mehmani and Tchelepi 2017; Weishaupt et al. 2019) may be used.
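As an illustration of Eqs. (3) and (4), the following Python sketch evaluates the conductance of a circular cylindrical throat from the Hagen–Poiseuille relation (one possible analytical expression; other throat shapes require different formulas or numerical upscaling) and solves the pore-body mass balance for a minimal, hypothetical three-pore chain.

```python
import numpy as np

def conductance_cylinder(radius, length, mu):
    """Hagen-Poiseuille conductance of a circular cylindrical throat,
    g_ij = pi r^4 / (8 mu l), so that Q_ij = g_ij (p_i - p_j)."""
    return np.pi * radius**4 / (8.0 * mu * length)

# Hypothetical 3-pore chain: pore 0 -- throat A -- pore 1 -- throat B -- pore 2
mu = 1e-3                                        # Pa s
g = [conductance_cylinder(50e-6, 500e-6, mu),    # throat A
     conductance_cylinder(30e-6, 500e-6, mu)]    # throat B

# Mass balance sum_j Q_ij = 0 at the interior pore 1; Dirichlet pressures
# at pores 0 and 2 mimic boundary conditions.
p0, p2 = 1.0e3, 0.0                              # Pa
A = np.array([[g[0] + g[1]]])                    # single unknown: p1
b = np.array([g[0] * p0 + g[1] * p2])
p1 = np.linalg.solve(A, b)[0]

Q_01 = g[0] * (p0 - p1)                          # flow from pore 0 to pore 1
Q_12 = g[1] * (p1 - p2)                          # equals Q_01 by mass conservation
print(p1, Q_01, Q_12)
```

For non-circular cross sections, analogous closed-form conductances (e.g., Patzek and Silin 2001) or the numerical upscaling mentioned above replace the Hagen–Poiseuille formula.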
Coupling Conditions
Appropriate coupling conditions are required to ensure the continuity of mass and momentum at the interface between porous medium (\(\Omega ^{{\rm PNM}}\)) and free flow (\(\Omega ^{{{\rm FF}}}\)) (Hassanizadeh and Gray 1989; Layton et al. 2002). Here, we formulate the coupling conditions for each discrete intersection of a pore body i with the boundary of the free-flow domain, yielding pore-local discrete coupling interfaces \(\Gamma _i\).
The coupling pore bodies are cut in half by the interface and only the interior part of the volume is considered. We assume that the coordination number of pore bodies connected to the free-flow domain is always one, i.e., only one pore throat is connected to them.
The conservation of mass (neglecting density since we consider incompressible fluids in this work) across the interface is enforced via
$${[}{\bf v}\cdot {\bf n}]^{{\rm PNM}}= -[{\bf v}\cdot {\bf n}]^{{{\rm FF}}}.$$
The superscripts \({{{\rm FF}}}\) and \({{\rm PNM}}\) refer to the interfacial quantities of the free-flow domain and the pore-network model, respectively. \({\bf n}\) is a unit vector normal to the coupling interface, pointing out of the own domain.
Compared to Weishaupt et al. (2019), we revise our coupling conditions for the mechanical equilibrium, i.e., the conservation of momentum across the interface. We first recall that Eq. (4), which yields the discrete volume flow per pore throat in the pore-network model, is based on the volume integration of the three-dimensional Stokes equations along the medial axis of the pore throat (Blunt 2017). Contrary to Darcy-type models (Whitaker 1999; Layton et al. 2002), the pore body pressure of the pore-network model and the pressure of the Stokes model employed in the free-flow region have thus a comparative physical meaning. Therefore, we require the pressures at the interface to be equal in order to satisfy the balance of forces perpendicular to the interface:
$${[}p]^{{{\rm FF}}}= [p]^{{\rm PNM}}.$$
At the location of solid grains (no intersecting pore throat), a no-flow/no-slip condition for the free flow is assumed. Weishaupt et al. (2019) used the tangential component of the discrete pore velocity as a coupling condition for the free-flow model at the location of the intersecting pore:
$$\begin{aligned} {[}{\bf v}\cdot {\bf t}_k]^{{{{\rm FF}}}} = {\left\{ \begin{array}{ll} {[} {\bf v}\cdot {\bf t}_k ]^{{{\rm PNM}}} ,&\quad k \in \{0,\ldots ,d-1\} \quad {{\rm on}} ~ \Gamma _i^{{{\rm FF}}}\\ 0 &\quad \text {else}. \end{array}\right. } \end{aligned}$$
The basis of the interface's tangent plane is given by \({\bf t}_k,~ k \in \{0,\ldots ,d-1\}\). The tangential component of the pore-body interface velocity is approximated as
$$\begin{aligned} {[}{\bf v}\cdot {\bf t}_k ]^{{\rm PNM}}= \frac{Q_{ij}}{|\Gamma _i^{{{\rm FF}}}|} [ {\bf n}_{{\bf ij}} \cdot {\bf t}_k]^{{\rm PNM}}. \end{aligned}$$
\(Q_{ij}\) is the volume flow through pore throat ij while \(|\Gamma _i^{{{\rm FF}}}|\) is the area of the discrete coupling interface. \({\bf n}_{{\bf ij}}\) is a unit normal vector parallel to the throat's central axis and pointing toward the interface. Note that this is a simplification which does not take into account potential deflection effects of the fluid flow leaving the pore throat and entering the pore body (see Fig. 1a). Note that Eq. (8) does not impair mass conservation as it is only used for the approximation of the tangential momentum transfer.
The disadvantage of Eq. (7) is that pore throats orientated orthogonally with the interface (\({\bf n}_{{\bf ij}} \perp [{\bf t}_k]^{\text {FF}}\)) will always lead to a no-slip condition such that \([{\bf v}\cdot {\bf t}_k ]^{{{\rm FF}}}= 0\) at the interface since \([{\bf n}_{{\bf ij}} \cdot {\bf t}_k]^{{\rm PNM}}= 0\). The same issue occurs for \(Q_{ij} = 0\), yielding \([{\bf v}\cdot {\bf t}_k ]^{{\rm PNM}}= 0\).
Fig. 1 Pore-local slip conditions. Illustration of the two possible interface conditions for \([{\bf v}\cdot {\bf t}_k]^{{{{\rm FF}}}}\) (here with \(k =0\)). a Old condition (Eq. 7) assigning the pore-body tangential velocity at the interface directly. b Novel slip condition (Eq. 11) allowing \([{\bf v}\cdot {\bf t}_k]^{{{{\rm FF}}}} \ne [{\bf v}\cdot {\bf t}_k]^{{\rm PNM}}\)
Here, we propose a modified approach to approximate the slip velocity on \(\Gamma _i^{{{\rm FF}}}\) (see Fig. 1b). We require the continuity of tangential stress
$$\begin{aligned} {[}(-\mu (\nabla \!{ {\bf v}} + \nabla \!{ {\bf v}}^{\rm T}) \cdot {\bf n}) \cdot {\bf t}_k]^{{{{\rm FF}}}} = [(-\mu (\nabla \!{ {\bf v}} + \nabla \!{ {\bf v}}^{\rm T}) \cdot {\bf n}) \cdot {\bf t}_k]^{{{\rm PNM}}}. \end{aligned}$$
Instead of trying to calculate the shear rate \(\nabla \!{ {\bf v}} + \nabla \!{ {\bf v}}^{\rm T}\) in the one-dimensional pore throats where only uniform, averaged flows along the center-line of the throats are defined, we use a simple parametrization
$$\begin{aligned} {[}(-(\nabla \!{ {\bf v}} + \nabla \!{ {\bf v}}^{\rm T}) \cdot {\bf n}) \cdot {\bf t}_k]^{{{{\rm FF}}}} = \beta _{{{{\rm pore}}}} \frac{[\mu ]^{{\rm PNM}}}{[\mu ]^{{{\rm FF}}}} \left( [{\bf v}\cdot {\bf t}_k]^{{{{\rm FF}}}} - [{\bf v}\cdot {\bf t}_k]^{{{\rm PNM}}} \right) \end{aligned}$$
in close analogy to the widely used Beavers–Joseph interface slip condition for REV-scale models (Beavers and Joseph 1967; Jones 1973). The main difference here is that the slip coefficient \(\beta _{{{{\rm pore}}}}\) is now defined locally per pore and not given as an averaged quantity over the entire porous medium's interface. As we assume a constant and equal viscosity in both domains, the term \(\frac{[\mu ]^{{\rm PNM}}}{[\mu ]^{{{\rm FF}}}} = 1\) is dropped for the sake of brevity.
Our new coupling condition for the tangential component of the free-flow velocity thus reads
$$\begin{aligned} {[}{\bf v} \cdot {\bf t}_k]^{{{\rm FF}}}= {\left\{ \begin{array}{ll} v_{{{{\rm slip}}},k} &{}\quad \text {on pore throat},\\ 0 &{}\quad \text {else}, \end{array}\right. } \end{aligned}$$
$$\begin{aligned} v_{{{{\rm slip}}},k} = \frac{1}{\beta _{{{{\rm pore}}}}} \left[ (-(\nabla \!{ {\bf v}} + \nabla \!{ {\bf v}}^{\rm T}) \cdot {\bf n}) \cdot {\bf t}_k \right] ^{{{\rm FF}}}+ \left[ {\bf v} \cdot {\bf t}_k \right] ^{{\rm PNM}}. \end{aligned}$$
\(1/\beta _{{{{\rm pore}}}}\) corresponds to a local Navier slip length (Navier 1823) and is generally a tensorial (Kamrin et al. 2010) and solution-dependent quantity (Yang et al. 2019). For certain geometries and flow configurations, it may be obtained from (semi-)analytical expressions (Jeong 2001; Wang 2003; Schönecker and Hardt 2013) which are, however, mathematically involved and often still require numerical methods at some point. Furthermore, such studies usually consider surface-averaged effective values of the slip length for periodic geometries (Lauga and Stone 2003), while we require a local value for a single pore.
Since \(\beta _{{{{\rm pore}}}}\) is merely an input parameter for our model and the aim of this work is to investigate the benefit of using Eq. (11), we employ a simple numerical procedure to estimate this value as described later on. We will furthermore show that \(\beta _{{{{\rm pore}}}}\) scales linearly with the diameter of the intersecting entity for our two- and three-dimensional setups. This implies that Eq. (11) can be applied at zero additional run-time cost, once the scaling factors for the geometries of interest have been determined in a preprocessing step.
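As an illustration of how Eqs. (11) and (12) are evaluated, the following minimal Python sketch computes the slip velocity from a given free-flow velocity gradient, the local slip coefficient and the tangential pore-network velocity. It is a sketch under our own naming conventions, not the DuMux coupling code.

```python
# Minimal sketch of the pore-local slip condition (Eqs. 11-12): the free-flow
# tangential velocity above a coupling pore equals the PNM tangential velocity
# plus a slip contribution proportional to the interface shear, scaled by
# 1/beta_pore. Illustrative only; variable names are ours.
import numpy as np

def slip_velocity(grad_v, n, t_k, beta_pore, v_tangential_pnm):
    """Evaluate v_slip,k from the free-flow velocity gradient at the interface."""
    shear = -(grad_v + grad_v.T) @ n           # (-(grad v + grad v^T) . n)
    return np.dot(shear, t_k) / beta_pore + v_tangential_pnm

# Example: simple shear flow v = (G*y, 0, 0) above a horizontal interface
G = 10.0                                       # shear rate [1/s]
grad_v = np.array([[0.0, G, 0.0],              # grad_v[i, j] = dv_i / dx_j
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
n = np.array([0.0, -1.0, 0.0])                 # interface normal pointing toward the pore
t_0 = np.array([1.0, 0.0, 0.0])
beta_pore = 57348.0                            # slip coefficient [1/m], cf. Table 1
print(slip_velocity(grad_v, n, t_0, beta_pore, v_tangential_pnm=0.0))
```

For \(\beta _{{{{\rm pore}}}}\rightarrow \infty\) the condition degenerates to Eq. (7), i.e., the tangential free-flow velocity is pinned to the pore-network value.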
The coupled model is implemented in DuMux, an open-source framework for simulating flow and transport in porous media (Flemisch et al. 2011; Heck et al. 2019; Koch et al. 2020), built upon DUNE (Bastian et al. 2008a, b). We use dune-subgrid (Gräser and Sander 2009) for the generation of the reference solution grids and dune-foamgrid (Sander et al. 2017) for the pore-network model.
The free-flow model is discretized in space using a staggered-grid finite volume approach, also known as the MAC scheme (Harlow and Welch 1965), which provides inherently stable and oscillation-free solutions without the need for any stabilization techniques (Versteeg and Malalasekera 2007). The original model's restriction (Weishaupt et al. 2019) to odd numbers of free-flow grid cells assigned to each pore throat has been lifted here.
As in Weishaupt et al. (2019), we follow a fully monolithic coupling concept, which means that all sub-models' balance equations are assembled into one system of linear equations and solved simultaneously, such that no coupling iterations are required and the scheme is inherently conservative:
$$\begin{aligned} \begin{pmatrix} A &{} B_1 &{} C_1\\ B_2 &{} D &{} C_2 \\ C_3 &{} C_4 &{} P \end{pmatrix} \begin{pmatrix} {\bf x}_{v,{{{\rm FF}}}}\\ {\bf x}_{p,{{{\rm FF}}}}\\ {\bf x}_{p,{{\rm PNM}}} \end{pmatrix} =\begin{pmatrix} {\bf r}_{v,{{{\rm FF}}}}\\ {\bf r}_{p,{{{\rm FF}}}}\\ {\bf r}_{p,{{\rm PNM}}} \end{pmatrix}. \end{aligned}$$
Here, \(A, \,B_1, \,B_2\) and D are sub-matrices of the free-flow problem. P is the sub-matrix of the pore-network model and \(C_1\)–\(C_4\) are the coupling matrix blocks. \({\bf x}\) is a sub-vector of unknowns for the velocity v or pressure p and the sub-domains \({{{\rm FF}}}\) (free flow) and \({{\rm PNM}}\) (pore-network model). \({\bf r}\) are the corresponding right-hand side sub-vectors.
Solving this system is challenging for Krylov-type iterative methods as it features a poorly conditioned system matrix with a saddle-point structure for incompressible fluids (\(D = 0\)) (Benzi et al. 2005). For this reason, we used a direct linear solver in Weishaupt et al. (2019) which, however, does not scale well in terms of memory and CPU-time requirements for larger systems, especially in 3D. We overcome this limitation here by applying a flexible restarted GMRES iterative solver (Saad 2003) to Eq. (13), which requires appropriate preconditioning.
As a very first step toward an effective solution strategy, a simple Uzawa algorithm (Benzi et al. 2005; Ho et al. 2017) is used for the free-flow sub-system
$$\begin{aligned} \begin{pmatrix} A &{} B_1\\ B_2 &{} D\\ \end{pmatrix} \begin{pmatrix} {\bf x}_{v,{{{\rm FF}}}}\\ {\bf x}_{p,{{{\rm FF}}}}\\ \end{pmatrix} = \begin{pmatrix} {\bf r}_{v,{{{\rm FF}}}}\\ {\bf r}_{p,{{{\rm FF}}}}\\ \end{pmatrix}, \end{aligned}$$
$${\bf x}_{v,{{{\rm FF}}}, m+1}= {\bf x}_{v,{{{\rm FF}}}, m} + A^{-1} \left( {\bf r}_{v,{{{\rm FF}}}} -\left( A {\bf x}_{v,{{{\rm FF}}}, m} + B_1 {\bf x}_{p,{{{\rm FF}}},m} \right) \right) ,$$
$${\bf x}_{p,{{{\rm FF}}}, m+1} = {\bf x}_{p,{{{\rm FF}}}, m} + \omega \left( {\bf r}_{p,{{{\rm FF}}}} - \left( B_2 {\bf x}_{v,{{{\rm FF}}}, m+1} + D {\bf x}_{p,{{{\rm FF}}}, m} \right) \right) .$$
\(A^{-1}\) is approximated using an algebraic multigrid method (Shapira 2008) and \(\omega\) is a relaxation factor determined according to Benzi et al. (2005).
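To make the structure of this preconditioning step explicit, the following minimal Python sketch applies the Uzawa velocity and pressure updates above to a small dense saddle-point system. It is purely illustrative: \(A^{-1}\) is applied by a direct solve instead of the algebraic multigrid approximation, \(D = 0\), the relaxation factor is simply fixed by hand, and the toy matrices are not taken from the actual discretization.

```python
# Minimal sketch of the Uzawa iteration used as a building block of the
# free-flow preconditioner. Here A^{-1} is applied exactly (direct solve)
# rather than by algebraic multigrid, D = 0, and omega is fixed by hand.
# For this toy system we take B2 = -B1^T, which mimics the sign convention of
# a divergence/gradient pair and makes the residual-form update convergent.
import numpy as np

def uzawa(A, B1, B2, r_v, r_p, omega, n_iter=200):
    x_v = np.zeros(len(r_v))
    x_p = np.zeros(len(r_p))
    for _ in range(n_iter):
        # velocity update: x_v <- x_v + A^{-1} (r_v - (A x_v + B1 x_p))
        x_v = x_v + np.linalg.solve(A, r_v - (A @ x_v + B1 @ x_p))
        # pressure update: x_p <- x_p + omega (r_p - B2 x_v)   (D = 0)
        x_p = x_p + omega * (r_p - B2 @ x_v)
    return x_v, x_p

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # SPD velocity block
B1 = np.array([[1.0], [1.0]])            # "gradient"-like coupling block
B2 = -B1.T                               # "divergence"-like coupling block
r_v = np.array([1.0, 2.0])
r_p = np.array([0.5])
print(uzawa(A, B1, B2, r_v, r_p, omega=1.0))   # approx. (-0.4, -0.1) and 2.7
```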
A simple Jacobi (diagonal) preconditioner is applied to the pore-network sub-system
$$P {\bf x}_{p,{{\rm PNM}}} = {\bf r}_{p,{{\rm PNM}}}.$$
The further development of this rather elementary preconditioning strategy is part of ongoing work, and techniques such as those presented in Kuchta et al. (2018) could certainly improve the solver's efficiency.
Numerical Determination of \(\beta _{{{{\rm pore}}}}\) and Assessment of Accuracy
In this section, we present a simple numerical procedure to estimate the pore-scale slip coefficient \(\beta _{{{{\rm pore}}}}\) for two different geometries: (1) a hemispherical, three-dimensional pore body and (2) a two-dimensional square cavity.
Numerical Determination of the Slip Coefficient \(\beta _{{{{\rm pore}}}}\)
Figure 2 shows the computational domains used to evaluate \(\beta _{{{{\rm pore}}}}\): In the three-dimensional setup (Fig. 2a), a hemispherical pore body with radius \(r_i\) intersects with the bottom of a cubic free-flow channel whose side lengths \(L_x, L_y\) and \(L_z\) are ten times larger than \(r_i\). For the sake of simplicity, we neglected the pore throat adjacent to the pore body; hence, \([{\bf v}]^{{\rm PNM}}\cdot {\bf t}_k = 0\). We will discuss this choice later on. In order to assess the dependence of \(\beta _{{{{\rm pore}}}}\) on the pore radius, we performed multiple simulations with different \(r_i\) while keeping the overall proportions of the geometry constant (i.e., scaling \(L_x = L_y = L_z\) accordingly). \(Re \ll 1\) with respect to the channel hydraulic diameter was maintained for all setups. Flow was induced in two different ways in order to investigate the influence of the boundary conditions on \(\beta _{{{{\rm pore}}}}\): First, a pressure drop \(\Delta p\) between the inlet on the left side and the outlet on the right side of the channel was assigned, which corresponds to a situation typical for micro-PIV experiments. Second, shear-driven flow was considered by moving the top wall of the domain at a given velocity, which resembles a near-interface flow field under free or atmospheric flow conditions. Figure 2b shows the two-dimensional setup with a square pore body, for which the same procedure as for the 3D setup was performed.
Setups for the numerical determination of \(\beta _{{{{\rm pore}}}}\). The distribution of \(v_x\) along the domain's vertical extent \(L_y\) for a pressure-driven flow is sketched by the colored surface (a) and line (b), respectively. The dotted areas mark the pore-local slip interfaces \(\Gamma _i\) on which the averaging is performed
Having checked for grid convergence, we meshed the domains uniformly such that 40 cells per pore diameter were used for each simulation. Since \(\beta _{{{{\rm pore}}}}\) will later be used in the coupled model implemented in DuMux, we also employed the stand-alone (uncoupled) free-flow solver of DuMux for determining \(\beta _{{{{\rm pore}}}}\), for the sake of comparability. Currently, only structured, axis-parallel grids are supported here, but we verified that our results are in accordance with simulations performed on an unstructured grid (allowing a smoother description of the hemispherical cavity's bottom surface) with the open-source CFD tool OpenFOAM (Jasak 2009).
For the pressure-driven cases, all boundaries were equipped with no-flow/no-slip conditions, except at the inlet and the outlet where fixed pressures \(p_{{{{\rm in}}}} = 1\times 10^{-6}\,\hbox {Pa}\) and \(p_{{{\rm out}}} = 0\,\hbox {Pa}\) were assigned. Preliminary simulations showed that a fully developed flow within the channel (with respect to the x-axis) was achieved this way, without the need for periodic boundary conditions at the inlet and the outlet (which are not yet supported by the free-flow model in DuMux). For the shear-driven setup, we set \(p_{{{{\rm in}}}} = p_{{{{\rm out}}}} = 0\,\hbox {Pa}\) and a constant velocity \(v_{x,{{{\rm top}}}} = 4 \times 10^{-8}\,\hbox {m/s}\) at the top wall of the channel. For the 3D geometry, we eliminated the wall friction on the lateral sides of the free-flow channel (\(z_{{{{\rm min}}}}\) and \(z_{{{{\rm max}}}}\)) by setting symmetry boundary conditions. We solve Eqs. (1) and (2) to obtain the stationary flow field for an incompressible fluid (water) with \(\mu = 1\times 10^{-3}\,\hbox {Pa}\,\hbox {s}\).
In theory, Eq. (12) holds at each point of the local interface between the free-flow channel and the pore \(\Gamma _i\) (see Fig. 2), such that the value of \(\beta _{{{{\rm pore}}}}\) actually depends on the position on \(\Gamma _i\). As a simplification for the numerical evaluation of \(\beta _{{{{\rm pore}}}}\) and, more importantly, for an efficient application of the slip concept within the coupled model, we instead consider one integral value of \(\beta _{{{{\rm pore}}}}\) for each \(\Gamma _i\). As mentioned above, \(\left[ {\bf v} \cdot {\bf t}_k \right] ^{{\rm PNM}}= 0\) as there is no pore throat attached to the body and \(Q_{ij} = 0\) (see Eq. 8). For the given setup (3D), \({\bf n}= (0, -1, 0)^{\rm T}\) and we only consider \({\bf t}_0 = (1,0,0)^{\rm T}\) as the flow is mainly in x-direction. We average the relevant velocity gradients and the streamwise horizontal velocity component \(v_{{{{\rm slip}}},0} = \left[ {\bf v} \cdot {\bf t}_0 \right] ^{{{{\rm FF}}}} = \left[ v_x \right] ^{{{\rm FF}}}\) on \(\Gamma _i\) in order to estimate \(\beta _{{{{\rm pore}}}}\) by re-arranging and simplifying Eq. (12):
$$\begin{aligned} \beta _{{{{\rm pore}}}}\approx \frac{\left\langle \left[ (-(\nabla \!{ {\bf v}} + \nabla \!{ {\bf v}}^{\rm T}) \cdot {\bf n}) \cdot {\bf t}_0\right] ^{{{\rm FF}}}\right\rangle }{\langle v_{{{{\rm slip}}},0} \rangle } = \frac{ \left\langle \left[ \frac{\partial v_x }{\partial y} + \frac{\partial v_y }{\partial x} \right] ^{{{\rm FF}}}\right\rangle }{\left\langle \left[ v_x \right] ^{{{\rm FF}}}\right\rangle }. \end{aligned}$$
Here, \(\langle \cdot \rangle\) is a surface average defined on \(\Gamma _i\). In this setup, \(\beta _{{{{\rm pore}}}}\) is isotropic because \(\Gamma _i\) is of symmetric circular shape. For other shapes (such as ovals), the value of \(\beta _{{{{\rm pore}}}}\) would depend on the orientation of the pore body relative to the channel-flow direction.
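The surface averaging of Eq. (18) can be summarized by the following minimal Python sketch, in which the shear-rate and slip-velocity samples on \(\Gamma _i\) are synthetic placeholders; in practice they are extracted from the resolved stand-alone free-flow solution. Names and numbers are illustrative only.

```python
# Minimal sketch of Eq. (18): surface-averaged shear rate on Gamma_i divided by
# the surface-averaged slip velocity. The samples below are synthetic; in the
# actual workflow they come from the resolved stand-alone free-flow solution.
import numpy as np

def estimate_beta_pore(dvx_dy, dvy_dx, v_x, weights):
    """<[dvx/dy + dvy/dx]^FF> / <[v_x]^FF> with area-weighted surface averages."""
    w = weights / np.sum(weights)
    mean_shear = np.sum(w * (dvx_dy + dvy_dx))
    mean_slip = np.sum(w * v_x)
    return mean_shear / mean_slip

rng = np.random.default_rng(0)
n = 100                                                  # cells on Gamma_i
dvx_dy = 1.0 + 0.1 * rng.standard_normal(n)              # wall-normal gradient [1/s]
dvy_dx = 0.01 * rng.standard_normal(n)                   # small in-plane part [1/s]
v_x = 2.0e-5 * (1.0 + 0.1 * rng.standard_normal(n))      # slip-velocity samples [m/s]
weights = np.ones(n)                                     # equal cell areas assumed
print(estimate_beta_pore(dvx_dy, dvy_dx, v_x, weights))  # on the order of 5e4 1/m
```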
Table 1 Values of \(\beta _{{{{\rm pore}}}}\) and \(\beta _{{{{\rm pore}}}}^* = \beta _{{{{\rm pore}}}}r_i\) for the 3D hemispherical pore body
The results of \(\beta _{{{{\rm pore}}}}\) for the 3D setup with different pore radii \(r_i\) are given in Table 1. Non-dimensionalizing these values by multiplication with the respective value of \(r_i\) yields a constant value of \(\beta _{{{{\rm pore}}}}^* = \beta _{{{{\rm pore}}}}r_i = 5.73\) for pressure-driven flow and \(\beta _{{{{\rm pore}}}}^* = 6.44\) for shear flow. The results thus vary by around \(12\%\) for the different boundary conditions. The linear scaling of \(\beta _{{{{\rm pore}}}}\) with respect to \(r_i\) for the given setups is a direct consequence of the linear nature of Eq. (1). Using a dimensionless velocity \({\bf v}^* = {\bf v}/ v_{{{\rm ref}}}\) and a dimensionless gradient \(\nabla \!^* = L_{{{\rm ref}}} \nabla \!\) (with \(v_{{{\rm ref}}}\) and \(L_{{{\rm ref}}} = r_i\) as reference velocity and reference length), Eq. (18) can be re-written as
$$\begin{aligned} \beta _{{{{\rm pore}}}}^* = \beta _{{{{\rm pore}}}}\, r_i \approx \frac{ \left\langle \left[ \frac{\partial v_x^* }{\partial y^*} + \frac{\partial v_y^* }{\partial x^*} \right] ^{{{\rm FF}}}\right\rangle }{\left\langle \left[ v_x^* \right] ^{{{\rm FF}}}\right\rangle }. \end{aligned}$$
For our uniformly scaled setups, any change of \(L_{{{\rm ref}}}\) yields the same dimensionless velocity field scaled by an appropriate \(v_{{{\rm ref}}}\) (we keep the values of \(p_{{{\rm in}}}\) and \(p_{{{\rm out}}}\) or \(v_{x,{{{\rm top}}}}\) fixed). The averaged values of the dimensionless shear rate and slip velocity on \(\Gamma _i\) in Eq. (19) are proportional to each other and \(\beta _{{{{\rm pore}}}}^*\) becomes a constant for each uniformly scaled setup.
As a next step, we assessed the impact of altering the free-flow channel's aspect ratio by halving and doubling its vertical size \(L_y\) for a pore radius of \(r_i = 200\times 10^{-6}\,\hbox {m}\). As shown in the last two rows of Table 1, this leads to a slightly different value of \(\beta _{{{{\rm pore}}}}^*\) because this new problem is no longer just a uniformly scaled variant of the previous setups (\(L_y\not = L_x\)), as discussed before. The difference is largest for a decrease of \(L_y\) in the pressure-driven case (\(-4\%\)). This is probably due to the rather pronounced change of the parabolic velocity profile within the free-flow channel when reducing its height (while keeping the pressure gradient constant). For the shear-driven flow, increasing \(L_y\) does not change \(\beta _{{{{\rm pore}}}}^*\), as the linear flow profile in the free-flow channel retains its shape. Reducing \(L_y\) for the same flow configuration affects \(\beta _{{{{\rm pore}}}}^*\) only slightly (\(+0.3\%\)).
The same findings as described above generally also hold for the two-dimensional square cavity (see Fig. 2b), for which we repeated the same evaluation steps as for the 3D geometry. Table 2 shows that \(\beta _{{{{\rm pore}}}}^*\) for the shear-flow setup is around \(4\%\) larger than the value for pressure-driven flow, whereas it again reacts more sensitively to a change of the free-flow channel's vertical extent (\(-3.6\%\) and \(2\%\) compared to \(L_y / r_i = 10\)) in the latter configuration.
Table 2 Values of \(\beta _{{{{\rm pore}}}}\) and \(\beta _{{{{\rm pore}}}}^* = \beta _{{{{\rm pore}}}}r_i\) for the 2D square pore body
In summary, our very simple and heuristic method to estimate \(\beta _{{{{\rm pore}}}}\) has shown that this value scales linearly with the pore diameter and that the chosen set of boundary conditions only mildly affects the results. As shown by Moffatt (1964), a series of diminishing vortices can be observed within the cavity. Compared to the slip velocity on \(\Gamma _i\), the intensity of these recirculations is negligible, and the vortex structures will be completely overridden in situations with flow in the adjacent pore throat (Sects. 3.2, 4, 5). We therefore consider the setups presented in Fig. 2 representative for our further analysis.
As previously mentioned, a direct comparison of our findings for the values of \(\beta _{{{{\rm pore}}}}\) with literature values for the Navier slip length is not straightforward because typically, surface-averaged, effective values for periodic structures are reported. However, one can roughly estimate the maximum slip length over a single two-dimensional quadratic cavity by \(0.5 r_i\), based on the center position of the first vortex within the cavity (Schönecker and Hardt 2013). This yields a value of \(1\times 10^{-4}\,\hbox {m}\) for the cavity with \(r_i = 2\times 10^{-4}\,\hbox {m}\) which is therefore around twice as large as our numerically determined mean slip length of \(1/\beta _{{{{\rm pore}}}}= 4.36\times 10^{-5}\,\hbox {m}\) (shear-driven flow). We again want to stress that the focus of this paper is not on finding a generalized method for describing \(\beta _{{{{\rm pore}}}}\) but rather on evaluating the effect of using the slip condition in the context of our coupled model for which \(\beta _{{{{\rm pore}}}}\) serves as an input parameter.
Evaluation of the Slip Condition's Accuracy Improvement
Having estimated \(\beta _{{{{\rm pore}}}}\) numerically, we investigate the benefit of using the new slip condition (Eq. 11) in the coupled model compared to Eq. (7), which yields a no-slip condition for throats oriented orthogonally to the interface (\([{\bf v}]^{{\rm PNM}}\cdot [{\bf t}_k]^{{{\rm FF}}}= 0\)) or featuring no flow (\([{\bf v}]^{{\rm PNM}}= 0\)). For this purpose, we consider a three-dimensional cubic free-flow channel with side lengths of \(5\times 10^{-4}\,\hbox {m}\) intersecting with a single pore (\(r_i = 1\times 10^{-4}\,\hbox {m}\)) and throat (\(r_{ij} = 5\times 10^{-5}\,\hbox {m}\)). Six different geometrical setups are investigated by varying the throat's orientation, represented by the polar and azimuth angles (\(\theta _{{{\rm pol}}}\) and \(\varphi _{{{\rm az}}}\)), as shown in Fig. 3. The vertical position of the lower pore-body center is fixed at \(y = -4\times 10^{-4}\,\hbox {m}\) such that the length of the throat is slightly different for each setup (which is accounted for in the throat's conductance, Eq. 20).
We use DuMux to solve the stationary Stokes equations (Eqs. 1, 2) on the fully resolved, three-dimensional domains in order to obtain reference solutions. Again, 40 grid cells per pore diameter are employed and water (\(\mu = 1\times 10^{-3}\,\hbox {Pa}\,\hbox {s}\)) is considered.
Flow is induced in the channel by applying a pressure drop \(p_{{{{\rm in}}}} - p_{{{{\rm out}}}} = 1\times 10^{-6}\,\hbox {Pa}\) between the inlet and the outlet, as shown in Fig. 3. The bottom of the lower half-pore is equipped with a fixed pressure of \(p_{{{\rm bottom}}}\). No-flow/no-slip conditions hold at all remaining boundaries. We investigate four different flow configurations by varying the ratio between the bottom and the inlet pressure \(p_{{{\rm bottom}}} / p_{{{\rm in}}} = \{0.33, 1, 10, 100 \}\). The first ratio of 0.33 corresponds to an extraction, i.e., the liquid is sucked out of the channel through the pore throat. For the remaining three ratios, liquid is injected from the pore throat into the channel.
Geometry used for error analysis. \(\theta _{{{{\rm pol}}}}\) is the polar angle corresponding to the vertical inclination of the throat. The azimuth angle \(\varphi _{{{{\rm az}}}}\) corresponds to the horizontal orientation of the throat
Having obtained reference solutions, the coupled model is applied twice to each case, using Eq. (7) or Eq. (11), respectively. We used a numerical upscaling approach in a preprocessing step as described in the appendix of Weishaupt et al. (2019) to determine the throat conductance: a pressure boundary value problem is solved numerically on a discretely resolved, reduced but equivalent pore structure in order to relate the pressure drop within the pore throat and bodies to the resulting volume flow. This yields, for the given geometry,
$$\begin{aligned} g_{ij}(l_{ij}) \approx \frac{1}{\mu } \left( \frac{l_{ij}}{2.44\times 10^{-18}\,\hbox {m}^{4}} + \frac{2}{5.45\times 10^{-14}\,\hbox {m}^{3}}\right) ^{-1}. \end{aligned}$$
Here, \(\mu\) and \(l_{ij}\) are the fluid's viscosity and the throat length, excluding the two adjacent pore-body radii. The first term of Eq. (20) differs by less than \(1\%\) from the corresponding analytical value for a cylindrical tube. The second term of Eq. (20) accounts for the pressure drop within the two adjacent pore bodies. Following the results of the previous section, we chose \(\beta _{{{{\rm pore}}}}= 57{,}348\,\hbox {m}^{-1}\) according to Table 1.
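A direct transcription of Eq. (20) is given below as a minimal Python sketch; the two constants are the fitted values reported above, and the helper function is ours, not a DuMux routine.

```python
# Minimal sketch of the upscaled throat conductance g_ij(l_ij) of Eq. (20):
# the length-proportional throat resistance and the fixed resistance of the two
# adjacent pore-body halves act in series. Constants as fitted for this geometry.
def throat_conductance(l_ij, mu=1.0e-3):
    """g_ij in m^3/(Pa s) for throat length l_ij [m] and viscosity mu [Pa s]."""
    resistance_throat = l_ij / 2.44e-18      # [1/m^3], grows with throat length
    resistance_bodies = 2.0 / 5.45e-14       # [1/m^3], two adjacent half pore bodies
    return 1.0 / (mu * (resistance_throat + resistance_bodies))

print(throat_conductance(l_ij=2.0e-4))       # conductance of a 0.2 mm long throat
```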
The benefit of using the novel slip condition in the coupled model is quantified by
$$\begin{aligned} \eta _{\rm v} = \frac{{{{\rm err}}}_{\rm v}^{{{\rm old}}}}{{{{\rm err}}}_{\rm v}^{{{\rm new}}}}, \end{aligned}$$
where \({{{\rm err}}}_{\rm v}\) is the normalized velocity error norm for the free-flow region ("old" when using Eq. 7 and "new" when using Eq. 11),
$$\begin{aligned} {{{\rm err}}}_{\rm v} = \frac{\Vert \Delta {\bf v} \Vert _2}{\Vert {\bf v}_{{{\rm ref}}} \Vert _2} = \frac{\left( \sum _i (\Delta v_{x}^2 + \Delta v_{y}^2 + \Delta v_{z}^2)_i \right) ^{1/2}}{\left( \sum _i (v_{{{{\rm ref}}},x}^2 + v_{{{{\rm ref}}},y}^2 + v_{{{{\rm ref}}},z}^2)_i \right) ^{1/2}}. \end{aligned}$$
\({\bf v}_{{{{\rm ref}}}}\) is the velocity in the free-flow region of the reference solution, while \(\Delta {\bf v}\) is the corresponding difference between the reference solution and the one of the coupled model. In analogy to Eq. (21), we determined \(\eta _{\rm p} = {{{\rm err}}}_{\rm p}^{{{\rm old}}} / {{{\rm err}}}_{\rm p}^{{{\rm new}}}\) to evaluate the influence of Eq. (11) on the pressure.
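For clarity, the error measures of Eqs. (21) and (22) can be written as the following minimal Python sketch; the velocity fields used here are synthetic stand-ins, not data from the simulations.

```python
# Minimal sketch of Eqs. (21)-(22): discrete L2 norm of the velocity difference,
# normalized by the norm of the reference velocity, and the benefit factor eta
# as the ratio of the "old" to the "new" error. Fields below are synthetic.
import numpy as np

def normalized_error(v_model, v_ref):
    """err_v = ||v_model - v_ref||_2 / ||v_ref||_2 over all cells and components."""
    return np.linalg.norm(v_model - v_ref) / np.linalg.norm(v_ref)

rng = np.random.default_rng(1)
v_ref = rng.standard_normal((1000, 3))                   # cell-wise reference velocities
v_old = v_ref + 0.05 * rng.standard_normal((1000, 3))    # stand-in for the Eq. (7) result
v_new = v_ref + 0.01 * rng.standard_normal((1000, 3))    # stand-in for the Eq. (11) result

err_old = normalized_error(v_old, v_ref)
err_new = normalized_error(v_new, v_ref)
print(err_old, err_new, err_old / err_new)               # eta_v of Eq. (21), ~5 here
```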
Figure 4 shows \(\eta _{\rm v}\) (full markers) and \(\eta _{\rm p}\) (empty markers) for all geometric setups over the ratio of Reynolds numbers within the channel and the pore throat \(Re_{{{\rm bulk}}} / Re_{{{\rm throat}}}\), based on the corresponding mean velocity and the hydraulic diameter of the structures. For the injection scenarios (\(p_{{{\rm bottom}}} / p_{{{\rm in}}} \ge 1\)), an increase of \(p_{{{\rm bottom}}}\) leads to higher flow rates within the pore throat which in turn decreases \(Re_{{{\rm bulk}}} / Re_{{{\rm throat}}}\) as the pressure drop \(p_{{{\rm in}}} - p_{{{\rm out}}}\) driving the main-channel flow is kept constant. For \(p_{{{\rm bottom}}} / p_{{{\rm in}}} = 0.33\) (extraction), the lowest flow rate in the throat is obtained for \(\varphi _{{{\rm az}}} = 0^{\circ }\) and \(\theta _{{{\rm pol}}} = 60^{\circ }\) (red circle). Here, the flow coming from the channel and entering the throat is reversed and rotated by \(150^{\circ }\).
Error reduction for different configurations. The filled markers show the reduction of the velocity-related error \(\eta _{\rm v}\) (Eq. 21) for different geometrical setups and flow configurations. The empty markers show the corresponding value \(\eta _{\rm p}\) for the pressure
The error reduction provided by Eq. (11) strongly depends on \(Re_{{{\rm bulk}}} / Re_{{{\rm throat}}}\), while the orientation of the throat does not have a significant impact.
For \(Re_{{{\rm bulk}}} / Re_{{{\rm throat}}} > 10\), \({{{\rm err}}}_{\rm v}\) is reduced by a factor of more than five for all cases considered, while \({{{\rm err}}}_{\rm p}\) is more than halved. For comparison, we also considered the case of \(\varphi _{{{\rm az}}} = 0^{\circ }\) and \(\theta _{{{\rm pol}}} = 0^{\circ }\) where the boundary of the lower pore body was closed such that no flow occurred in the throat. This corresponds to the simplified configuration used for the evaluation of \(\beta _{{{{\rm pore}}}}\) in the previous section. Here, we obtained a benefit of \(\eta _{\rm v} = 5.63\) and \(\eta _{\rm p} = 2.44\) which shows that the simplifications made for the determination of \(\beta _{{{{\rm pore}}}}\) do not impair the accuracy of the method for the given cases.
However, we observe a steep drop of \(\eta _{\rm v}\) and \(\eta _{\rm p}\) for \(Re_{{{\rm bulk}}} / Re_{{{\rm throat}}} < 10\). In order to assess whether this is due to the above-mentioned simplifications, we re-evaluated \(\beta _{{{{\rm pore}}}}\) for \(\varphi _{{{\rm az}}} = 45^{\circ }, \theta _{{{\rm pol}}} = 0^{\circ }\) and \(p_{{{\rm bottom}}} / p_{{{\rm in}}} = 100\) based on the results of the corresponding reference solution, taking into account the horizontal flow velocity within the pore throat such that
$$\begin{aligned} \beta _{{{\rm pore, new}}} \approx \frac{ \left\langle \left[ \frac{\partial v_x }{\partial y} + \frac{\partial v_y }{\partial x} \right] ^{{{\rm FF}}}\right\rangle }{\left\langle \left[ v_x \right] ^{{{\rm FF}}}\right\rangle - \left[ v_x \right] ^{{\rm PNM}}}. \end{aligned}$$
This yields a new value of \(\beta _{{{{\rm pore}}}}= -21{,}541\,\hbox {m}^{-1}\). The negative sign is due to the fact that in this case, the velocity within the pore throat is actually higher than the free-flow velocity at the interface. Using this new value for the coupled model only slightly improves the results with \(\eta _{\rm v} = 1.14\) and \(\eta _{\rm p} = 1.17\) (compared to \(\eta _{\rm v} = 1.08\) and \(\eta _{\rm p} = 1.13\)). The decrease of \(\eta\) therefore has a different cause, as shown in Fig. 5: for \(p_{{{\rm bottom}}} / p_{{{\rm in}}} = 1\) (Fig. 5a), the flow directly above the coupling pore is mainly parallel to the free-flow channel's x-axis. It is entirely governed by the pressure drop between the channel's inlet and outlet, which also results in a rather homogeneous pressure distribution along the coupling interface. In strong contrast to this, Fig. 5b shows the effect of the pronounced inflow coming from the pore throat when \(p_{{{\rm bottom}}} / p_{{{\rm in}}} = 100\). This influx causes the velocity field to diverge due to the locally increased pressure on the left side of the coupling interface. The velocity field is therefore no longer governed by the free-flow channel's bulk pressure gradient but shaped by the local influx at the pore. This can also be quantified in terms of the standard deviation of the pressure field on the coupling interface \(\Gamma _i\): for \(p_{{{\rm bottom}}} / p_{{{\rm in}}} = \{0.33, 1, 10, 100\}\), the standard deviation is \(1.09\times 10^{-7}\,\hbox {Pa},\, 1.09\times 10^{-7}\,\hbox {Pa},\, 1.13\times 10^{-7}\,\hbox {Pa}\) and \(3\times 10^{-7}\,\hbox {Pa}\), respectively. It thus correlates inversely with \(\eta _{\rm v}\) and \(\eta _{\rm p}\) (Fig. 4). The coupled model assumes a constant pressure on \(\Gamma _i\) (Eq. 6) for the balance of normal forces. This assumption is obviously not met for high flow rates within the coupling throat. This issue is, however, not related to the slip condition proposed here. Furthermore, in many technical and environmental applications, the free-flow bulk velocity is likely to be considerably higher than the velocity within the pore throats at the interface, corresponding to \(Re_{{{\rm bulk}}} / Re_{{{\rm throat}}} > 10\), where Eq. (11) performs well.
Pressure field and velocity vectors of the reference solution at the coupling interface as seen from the top (x–z-plane) for \(\varphi _{{{\rm az}}} = 45^{\circ }\), \(\theta _{{{\rm pol}}} = 0^{\circ }\)
In conclusion, the novel slip condition reduces the coupled model's error with respect to the reference solution in the free-flow channel by a factor of over five for the velocity and by more than two for the pressure, provided that the ratio between the Reynolds numbers in the channel and in the throat \(Re_{{{\rm bulk}}} / Re_{{{\rm throat}}} > 10\). The pore throat's orientation does not have a significant effect. If the flow through the interface pore strongly affects the overall flow field, the coupled model's accuracy is limited by the coupling condition for the normal momentum exchange, which assumes a uniform pressure at the pore. This could be addressed in future work.
The next section features a three-dimensional showcase where the coupled model is applied to a free-flow channel above a randomly generated pore network.
A Three-Dimensional Showcase with a Random Network
This example serves to illustrate the coupled model's ability to handle unstructured pore networks in 3D while reducing the computational cost compared to a fully resolved reference solution. Figure 6 shows the setup, which features a free-flow channel above a randomly generated network of pores, created following the procedure described by Raoof and Hassanizadeh (2009). Starting from a regular lattice of \(3 \times 3 \times 3\) pores (\(\Delta x = \Delta y = \Delta z = 2\times 10^{-4}\,\hbox {m}\)) in which each node is connected to all of its neighbors, some connections are deleted randomly. The remaining connections are the pore throats with a uniform radius of \(r_{ij} = 5\times 10^{-5}\,\hbox {m}\), while the nodes are the pore bodies with \(r_i = 1\times 10^{-4}\,\hbox {m}\). We ensured that throats only intersect at the pore bodies and that the coordination number of the pore bodies at the interface is always one. The resulting network (shown in black in Fig. 6) features 42 throats and 26 pore bodies. A three-dimensional grid featuring 4,320,307 uniform cells (including 3,200,000 cells in the free-flow channel) was then constructed based on this network, as shown in gray in Fig. 6. As in the previous sections, we chose the grid resolution such that 40 cells per pore-body diameter are used.
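The following minimal Python sketch illustrates the lattice-plus-random-deletion idea behind such networks. It is only a conceptual illustration: it uses face-connected neighbors and a plain random deletion, which may differ from the neighborhood definition and elimination procedure of Raoof and Hassanizadeh (2009) used for the actual network.

```python
# Conceptual sketch of a lattice-based random network: pore bodies on a regular
# 3 x 3 x 3 lattice, throats as (here: face-connected) neighbor connections, and
# a random subset of connections removed. Not the actual generation procedure.
import itertools
import numpy as np

rng = np.random.default_rng(42)
n, dx = 3, 2.0e-4
nodes = [np.array(idx) * dx for idx in itertools.product(range(n), repeat=3)]

# nearest-neighbor (face-connected) throat candidates on the lattice
throats = []
for i, j in itertools.combinations(range(len(nodes)), 2):
    if np.isclose(np.linalg.norm(nodes[i] - nodes[j]), dx):
        throats.append((i, j))

# randomly delete a fraction of the connections
keep = rng.random(len(throats)) > 0.3
network = [t for t, k in zip(throats, keep) if k]
print(len(nodes), "pore bodies,", len(network), "throats")
```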
3D geometry consisting of a free-flow channel and a random network. The opaque gray 3D reference geometry was created from the random 1D network shown in black
Flow is induced in the channel and in the network by setting \(p_{{{\rm in}}} = 1\times 10^{-6}\,\hbox {Pa},\, p_{{{\rm out}}} = 0\,\hbox {Pa}\) and \(p_{{{\rm bottom}}} = 1\times 10^{-6}\,\hbox {Pa}\). All remaining boundaries are set to no-flow/no-slip. Equation (20) and \(\beta _{{{{\rm pore}}}}= 57{,}348\,\hbox {m}^{-1}\) are considered for the coupled model. Water with \(\mu = 1\times 10^{-3}\,\hbox {Pa}\,\hbox {s}\) is used.
Solving the stationary Stokes equations (Eqs. 1, 2) with DuMux on a single core (Intel Xeon CPU E5-2683 v4 @ 2.10 GHz, 62 GB RAM) took \(65\,\hbox {min}\) for the reference model and \(47\,\hbox {min}\) for the coupled model, regardless of whether using Eq. (11) or Eq. (7). The total CPU time including grid creation, matrix assembly and I/O was \(82\,\hbox {min}\) for the reference model and \(54\,\hbox {min}\) for the coupled model. The speedup of \(\tfrac{65}{47} = 1.4\) with respect to solver time corresponds to the ratio of the number of degrees of freedom for the reference model and for the coupled model, \(\tfrac{17{,}480{,}883}{12{,}872{,}026}= 1.36\), showing the almost linear scaling behavior of the iterative solver.
The coupled model's results (Fig. 7b) match closely with the reference solution (Fig. 7a) in a qualitative sense. Some local deviations of up to \(20\%\) with respect to \({\bf v}\) occur at pore bodies with pronounced inflow which corresponds to the discussion related to Fig. 5.
Results for the random 3D network. The velocity (magnitude) directly above the coupling interface is shown by the plane in red and blue. The gray velocity vectors are scaled by magnitude. The one-dimensional network in (b) is extruded for visualization purposes
The coupled model's normalized errors for the free-flow channel (Eq. 22) are \({{{\rm err}}}_{\rm v} = 4.78\times 10^{-3}\) and \({{{\rm err}}}_{\rm p} = 4.92\times 10^{-3}\) when considering Eq. (11), compared to \({{{\rm err}}}_{\rm v} = 2.84\times 10^{-2}\) and \({{{\rm err}}}_{\rm p} = 1.14\times 10^{-2}\) for Eq. (7). This yields (Eq. 21) \(\eta _{\rm v} = 5.94\) and \(\eta _{\rm p} = 2.32\) which is in the same range as in the previous section when \(Re_{{{\rm bulk}}} / Re_{{{\rm throat}}} > 10\) (Fig. 4). For the given setup, \(Re_{{{\rm bulk}}} = 5.85\times 10^{-6}\) (based on the channel's hydraulic diameter and mean velocity) while \(Re_{{{\rm throat}}} = 3.22\times 10^{-8}\) (evaluated using the throats adjacent to the interface pores with a uniform hydraulic diameter of \(2 r_{ij}\) and the mean velocity within those throats).
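The Reynolds numbers quoted here follow the usual definition \(Re = \rho \bar{v} d_{\rm h} / \mu\); the short Python sketch below only illustrates this bookkeeping with placeholder mean velocities, reusing the throat hydraulic diameter \(2 r_{ij}\) and the fluid properties of water, not values extracted from the simulations.

```python
# Minimal sketch of the Reynolds-number ratio used to characterize the regimes.
# The mean velocities and channel diameter below are illustrative placeholders;
# only the throat hydraulic diameter 2*r_ij = 1e-4 m and water properties are reused.
def reynolds(v_mean, d_hydraulic, rho=1000.0, mu=1.0e-3):
    """Re = rho * v_mean * d_h / mu for water."""
    return rho * v_mean * d_hydraulic / mu

re_bulk = reynolds(v_mean=1.0e-5, d_hydraulic=6.0e-4)     # free-flow channel (placeholder)
re_throat = reynolds(v_mean=5.0e-7, d_hydraulic=1.0e-4)   # interface throats (placeholder)
print(re_bulk, re_throat, re_bulk / re_throat)            # ratio >> 10 in this example
```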
Repeating all simulations with an increased bottom pressure of \(p_{{{\rm bottom}}} = 1\times 10^{-5}\,\hbox {Pa}\) leads to \(Re_{{{\rm bulk}}} / Re_{{{\rm throat}}} = 10.22\), yielding \(\eta _{\rm v} = 3.18\) and \(\eta _{\rm p} = 1.7\) which is again in accordance with our previous findings (Fig. 4).
In conclusion, this example showed that the coupled model can also be effectively applied to larger three-dimensional network structures where the benefit of using the proposed slip condition shows the same scaling behavior with \(Re_{{{\rm bulk}}} / Re_{{{\rm throat}}}\) as in our previous error analysis considering only a single throat. In the next and last section, we will recalculate a microfluidic experiment using the coupled model.
Recalculation of a Micromodel Experiment
In this section, we use the coupled model to recalculate a micromodel experiment of Terzis et al. (2019), exploiting the quasi-two-dimensional nature of the experimental setup. The latter is especially suited for applying the proposed slip condition (Eq. 11) as it features only pore throats intersecting the coupling interface orthogonally, which would result in a no-slip condition when using Eq. (7).
Schematic of the micromodel used in the experiment (redrawn from Terzis et al. 2019) with dimensions, origin of coordinates and flow direction. The model has a height in z-direction of \(200\,{\upmu}\hbox {m}\), the pillars are quadratic with \(l = 240\,{\upmu}\hbox {m}\) and evenly spaced throughout the porous domain
The micromodel geometry is shown in Fig. 8. It features three main regions: (1) the free-flow channel at the top, (2) the porous medium made of \(80 \times 20\) evenly spaced quadratic pillars and (3) a triangular reservoir region which was included into the design to facilitate the complete saturation of the model with water through an auxiliary inlet (not shown) at the bottom. This inlet was closed during the experiments. Details on the experimental procedure can be found in Terzis et al. (2019). For convenience, two dimensionless lengths x/l and y/l are introduced, where \(l = 240\times 10^{-6}\,{\hbox {m}}\) is the width of the pores in the porous region. The model has a uniform height of \(200\times 10^{-6}\,\hbox {m}\) in z-direction. Note that the inlet and outlet parts of the actual micromodel are longer to ensure a fully developed flow profile at the beginning of the porous medium during the experiment. For the simulations, these parts of the channel have been shortened (and correspond to the dimensions given in the drawing) for efficiency reasons while a fully developed flow was still achieved by applying pressure boundary conditions at the inlet and the outlet (\(p_{{{\rm in}}} = 1\times 10^{-3}\,\hbox {Pa}\) and \(p_{{{\rm out}}} = 0\,\hbox {Pa}\)). All remaining walls were set to no-slip/no-flow. Again, water is used (\(\mu = 1\times 10^{-3}\,\hbox {Pa}\,\hbox {s}\)).
As in the previous sections, we first generate a three-dimensional reference solution for comparison with the coupled model. This is achieved, after ensuring grid convergence, by uniformly meshing the entire micromodel using more than 62 million regular, axis-parallel cells, such that each pore throat is discretized with 20 cells in all directions. Since the free-flow model of DuMux is not parallelized yet, we use the open-source CFD tool OpenFOAM (Jasak 2009) for obtaining the stationary flow field in this case, for the sake of efficiency. A close match between the reference solution and the experimental data of Terzis et al. (2019) is found, as shown in Appendix 1.
For the coupled model, we simplify the micromodel geometry by reducing it to a two-dimensional plane where the z-coordinate and all velocities in this direction are omitted. Assuming a parabolic flow profile along the z-axis, Flekkøy et al. (1995) proposed a drag term which accounts for the wall friction of the virtual frontal and rearward boundary:
$$\begin{aligned} {\bf f}_{{{\rm drag}}} = - c \frac{\mu }{{h}^2} {\bf v}. \end{aligned}$$
\({h}\) is the virtual height of the model domain while c is a constant which determines whether the maximum velocity at the central plane of the channel at \(0.5{h}\) (\(c = 8\)) or the height-averaged one (\(c = 12\)) is recovered. This approach has been applied successfully for a number of different applications with Hele-Shaw-type flow (Venturoli and Boek 2006; Laleian et al. 2015; Kunz et al. 2015; Class et al. 2020) and provides the best accuracy for \({h}\ll w\) where w is the width of the flow channel. Equation (24) is added as a momentum source term to the left side of Eq. (1).
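Since Eq. (24) is simply an algebraic sink proportional to the local velocity, it can be sketched in a few lines of Python; the function below is illustrative and not the DuMux source-term implementation.

```python
# Minimal sketch of the quasi-3D wall-friction source term of Eq. (24),
# f_drag = -c * mu / h^2 * v, mimicking the parabolic profile between the
# omitted front and back walls; c = 8 recovers the center-plane velocity,
# c = 12 the height-averaged one.
import numpy as np

def drag_source(v, h, mu=1.0e-3, c=8.0):
    """Momentum sink per unit volume [N/m^3] for the local 2D velocity v [m/s]."""
    return -c * mu / h**2 * np.asarray(v)

h = 200.0e-6                       # virtual model height [m]
v = np.array([1.0e-4, 0.0])        # local 2D velocity [m/s] (illustrative)
print(drag_source(v, h))
```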
We chose a factor of \(c = 8\) to obtain the maximum, center-plane velocities because this corresponds to the experimental micro-PIV data, and a comparison with the 3D OpenFOAM results is straightforward: we only need to extract the center plane from the 3D simulation data rather than performing an averaging along the z-axis. Note that the coupling between the free-flow domain and the pore-network model is still realized in terms of volumetric flow rates, which can be approximated from the quasi-3D model by
$$\begin{aligned} Q_{{{\text{quasi-3D}}}} = \tfrac{2}{3} {h}\int _s ({\bf v} \cdot {\bf n}) \,{{{\rm d}}} s. \end{aligned}$$
\({\bf n}\) is a unit vector normal to the line s over which the flow is evaluated, extruded in the virtual z-direction by the domain's height \({h}\). The factor 2/3 transfers the maximum velocity to a height-averaged one, assuming again a parabolic profile along the omitted z-axis.
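The flux conversion above amounts to a simple line integral of the center-plane normal velocity; the following minimal Python sketch evaluates it with a midpoint rule over a discretized coupling line. All sample values are illustrative.

```python
# Minimal sketch of the quasi-3D flux conversion: Q = (2/3) * h * integral of
# (v . n) along the coupling line s, where v are center-plane (maximum)
# velocities and 2/3 converts the parabolic maximum to its height average.
import numpy as np

def quasi3d_volume_flux(v_normal_samples, ds, h):
    """Volumetric flow rate [m^3/s] from center-plane normal velocities [m/s]."""
    return 2.0 / 3.0 * h * np.sum(np.asarray(v_normal_samples) * ds)

width, n_cells = 240.0e-6, 20         # throat width and cells along the line s
ds = width / n_cells
v_n = np.full(n_cells, 1.0e-5)        # normal velocity samples [m/s] (illustrative)
print(quasi3d_volume_flux(v_n, ds, h=200.0e-6))
```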
In the following, the results of four different models will be discussed: the center-plane data (\(z= 100\times 10^{-6}\,\hbox {m}\)) of the three-dimensional reference model (OpenFOAM), the results of the quasi-3D model applied to the entire micromodel (DuMux) and those of the coupled model using either Eq. (7) or Eq. (11) (DuMux). The coupled model treats the free-flow channel and the triangular region with the Stokes equations (Eqs. 1, 2), while the porous domain is accounted for by the pore-network model. We used the quasi-3D model to determine the input parameter \(\beta _{{{{\rm pore}}}}= 30{,}983\,\hbox {m}^{-1}\) as described in Sect. 3. Interestingly, introducing the wall friction term \({\bf f}_{{{\rm drag}}}\) leads to a nonlinear scaling of \(\beta _{{{{\rm pore}}}}\) with \(r_i\). Further investigation of this behavior is left for future work. The throat conductance including the pressure loss within the pore bodies,
$$\begin{aligned} g_{ij} = \left( g_{ij,t}^{-1} + g_{1/2, i}^{-1} + g_{1/2, j}^{-1} \right) ^{-1}, \end{aligned}$$
with \(g_{ij,t} = 3.05\times 10^{-10}\,\hbox {m}^{3}/(\hbox {Pa s})\) and \(g_{1/2, i} = g_{1/2, j} = 8.47\times 10^{-10}\,\hbox {m}^{3}/(\hbox {Pa s})\), was determined using again the quasi-3D model and the numerical upscaling approach described in the appendix of Weishaupt et al. (2019).
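The series combination above corresponds to adding flow resistances, as the following minimal Python sketch shows with the upscaled values just quoted; the helper function is ours, not a DuMux routine.

```python
# Minimal sketch of the combined throat conductance: the throat and the two
# adjacent pore-body halves act like resistances in series. Values as upscaled
# for the micromodel pores.
def combined_conductance(g_throat, g_half_i, g_half_j):
    """g_ij = (1/g_throat + 1/g_half_i + 1/g_half_j)^(-1) in m^3/(Pa s)."""
    return 1.0 / (1.0 / g_throat + 1.0 / g_half_i + 1.0 / g_half_j)

print(combined_conductance(3.05e-10, 8.47e-10, 8.47e-10))   # approx. 1.8e-10
```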
In contrast to the previous numerical examples, we employ the direct linear solver UMFPACK (multifrontal LU factorization, Davis 2004) to solve the linear system of equations in DuMux. This is feasible and actually more efficient than the iterative approach described in Sect. 2.4 due to the system's moderate size, with 9,412,010 and 3,737,351 degrees of freedom for the quasi-3D model and the coupled one, respectively. The corresponding CPU times were 11 min and 5 min on a single core of the same machine as before.
Velocity and pressure fields for the micromodel setup. The center-plane (\(z= 100\times 10^{-6}\,\hbox {m}\)) velocity (magnitude) and pressure fields of the 3D reference solution obtained with OpenFOAM are shown in (a) and (b). The corresponding results of the coupled model using the novel slip condition are given in (c) and (d), where the one-dimensional elements of the pore network have been extruded for visualization purposes. Note that the pore throats in the coupled model show averaged velocities based on Eq. (4), which are inherently smaller than the peak free-flow velocities at the associated interface
Figure 9 shows the center-plane velocity and pressure fields of the reference and the coupled model using Eq. (11). As observed in the experiment (Terzis et al. 2019, cf. Appendix 1), the flow enters the porous domain almost vertically on the left side of the porous medium, traverses it mainly parallel and re-enters the channel on the right side of the porous domain. A substantial fraction of flow passes through the triangular reservoir at the bottom of the model as this features less resistance than the narrow flow channels within the porous medium. The maximum resulting Reynolds number, both with respect to the free-flow channel and the one of the pore throats (considering the hydraulic diameter), is always below \(1\times 10^{-3}\).
There is a high level of visual agreement between the reference and the coupled solution. Local velocity deviations in the free-flow channel of up to \(4\%\) can be observed, especially at the leftmost and rightmost vertical throats intersecting with the interface. This is probably due to the velocity gradients, which are highest at these positions, and to the sudden change of flow direction. In addition, the aspect ratio between the model height \({h}\) and the flow cross-section changes from a value of 0.1 (\(\frac{200\,{\upmu }{\rm m}}{2000\,{\upmu }{\rm m}}\)) in the channel to a less favorable value of 0.83 (\(\frac{200\,{\upmu }{\rm m}}{240\,{\upmu }{\rm m}}\)) in the pore throats, which impairs the validity of Eq. (24).
The volumetric flow rates for each throat at the interface are given in Fig. 10. The throats are labeled from left to right from #1 to #81. The in- and outflow behavior across the interface is symmetrical and, as expected, no flow occurs at the horizontal center of the micromodel (#40). The coupled models' results are almost identical to the ones of the reference solution, regardless of whether Eq. (7) or Eq. (11) is used, which means that the vertical mass exchange between the free flow and the porous medium is not significantly influenced by the slip velocity above the throats.
Pore-local volume fluxes. Discrete volumetric flow rates at all throats intersecting with the interface for all numerical models, normalized by the maximum flow rate of the 3D reference model (OpenFOAM). "new" refers to Eq. (11), "old" to Eq. (7)
In Fig. 11, the central throat #40, intersecting with the interface at \(y/l = 0, x/l = 80.5\), is magnified and the velocity vectors of the 3D reference, the quasi-3D and the coupled models are shown. The main channel flow slightly dips into the throat cavity on the left, only to re-enter the main channel on the right. There is no net mass flux across the interface. This flow behavior is generally reflected by all models. Using Eq. (11) instead of Eq. (7) in the coupled model noticeably improves the agreement with the reference solution's vectors, both in magnitude and orientation.
The vertical velocity component of both coupled models is essentially determined by the coupling condition for the conservation of momentum in the normal direction (Eq. 6) and is thus nearly identical for both, consistent with our previous findings in Fig. 10. The black vectors feature strongly decreased x-components due to the no-slip condition at the coupling interface yielded by Eq. (7) for this type of geometry.
Near-interface flow field. Close-up of the interface region at the central throat (\(x/l= 80.5, y/l = 0\)). The yellow, purple and black velocity vectors correspond to the quasi-3D model, the coupled model considering Eq. (11) and the coupled model considering Eq. (7). The opaque white vectors with contours (barely visible as they mostly overlap with the quasi-3D vectors) correspond to the 3D center-plane (\(z= 100\times 10^{-6}\,\hbox {m}\)) results of OpenFOAM
The same pattern can be observed in Fig. 12 which shows a close-up of the two leftmost throats at the interface. Here, we see a pronounced downward flow from the free-flow channel into the porous domain. Again there is a much better match with the reference solution if the slip velocity is taken into account using Eq. (11).
Near-interface flow field. Close-up of the interface region at the two leftmost throats (\(0 \le x/l \le 3, y/l = 0\)). The yellow, purple and black velocity vectors correspond to the reference (quasi-3D) model, the coupled model considering Eq. (11) and the coupled model considering Eq. (7). The opaque white vectors with contours (barely visible as they mostly overlap with the quasi-3D vectors) correspond to the 3D center-plane (\(z= 100\times 10^{-6}\,\hbox {m}\)) results of OpenFOAM
Table 3 summarizes the normalized errors for the free-flow channel and the triangular region of the micromodel setup. In the first row, the 3D center-plane results (OpenFOAM) serve as the reference solution. As seen in the last column, the largest portion of the error originates from the quasi-3D simplification (here, the quasi-3D free-flow model is applied to the entire geometry).
As explained above, the coupled model employs the quasi-3D model in the free-flow channel and in the triangular region, and the relevant input parameters \(\beta _{{{{\rm pore}}}}\) and \(g_{ij}\) have been determined using the quasi-3D model. For the sake of comparability, we therefore consider the latter (applied to the whole geometry) as a reference for the coupled models in the second row of Table 3 and obtain a benefit for using the novel slip condition (Eq. 21) of \(\eta _{\rm v} = 2.52\) and \(\eta _{\rm p} = 1.25\). For the entire free-flow channel, \(Re_{{{{\rm bulk}}}} / Re_{{{{\rm throat}}}} = 12.25\) (\(Re_{{{{\rm throat}}}}\) based on the mean velocity of the throats at the interface and their hydraulic diameter), for which we would expect slightly higher values of \(\eta\) according to Fig. 4. However, the flow across the interface is not uniform (see Fig. 10) and \(Re_{{{\rm bulk}}} / Re_{{{\rm throat}}} = 1.87\) for the leftmost and rightmost throats, for which even lower values of \(\eta\) were found in Fig. 4. The smaller an interfacial throat's distance from the center \(x/l = 80.5\), the lower \(Re_{{{\rm throat}}}\) and the more favorable the conditions for applying Eq. (11), which explains why the results for \(\eta\) range in-between the bounds presented in Fig. 4.
Table 3 Normalized errors for the free-flow channel and the triangular region of the micromodel setup
Finally, Fig. 13 sheds some light on the horizontal flow conditions within the free-flow channel, the porous medium and the triangular region at the vertical center line of the micromodel. Depicted are the normalized horizontal velocities at \(x/l = 80.5\) and the likewise normalized integral volume flows Q at the throats directly to the left of the center line at \(x/l = 79.5\). As the pore-network model only yields averaged velocities within the pore throats, \(v_x\) is only drawn in the free-flow channel and the triangular region, where it matches the solution of the quasi-3D model almost perfectly. Both coupled models and the quasi-3D one also give rise to very similar integral volume flows within the throats, which deviate by around 6% from the values of the 3D simulation. This can be explained by the aforementioned unfavorable aspect ratio of 0.83 in the pore throats, which impairs the accuracy of Eq. (24) used for the quasi-3D model, from which subsequently also the throat conductances were derived by numerical upscaling, as described previously. The inset image on the lower left of Fig. 13 shows, as expected, a higher value of \(v_x\) right at the interface when Eq. (11) is used in the coupled model.
Horizontal velocity profiles and volumetric fluxes over height. Velocity profiles \(v_x\) over y at \(x/l = 80.5\) and discrete volumetric flow rates at \(x/l = 79.5\) for all numerical models, normalized by the maximum values of the 3D reference model (OpenFOAM). The coupled model only features continuous velocities in the free-flow channel and the triangular region. "new" refers to Eq. (11), "old" to Eq. (7)
In summary, this section showed how the coupled model can be applied to recalculate a microfluidic experiment. We considered the results of a fully resolved 3D simulation and the one of a simplified quasi-3D model for comparison with the coupled model. The latter also made use of the quasi-3D approach in the free-flow regions. The coupled model was more than twice as fast as the quasi-3D model applied to the entire domain while providing a high degree of accuracy, especially when making use of Eq. (11).
In this work, we have extended and improved the hybrid-dimensional coupled model of Weishaupt et al. (2019), where only two-dimensional setups were considered and the coupling conditions for tangential momentum transfer would effectively yield no-slip conditions for throats oriented orthogonally to the coupling interface. Here, we introduced a novel condition for pore-scale slip and considered three-dimensional computational domains. The accuracy of this condition was assessed in detail on the example of a single pore intersecting with a free-flow domain under various geometrical settings and flow conditions. The slip condition can reduce the normalized error within the free-flow domain by a factor of more than five, provided the flow through the intersecting pore does not substantially influence the free-flow velocity field, i.e., \(Re_{{{\rm bulk}}} / Re_{{{\rm throat}}} > 10\). These findings also hold when the coupled model is applied to a complex, three-dimensional random network coupled to a free-flow channel. Weishaupt et al. (2019) used a direct linear solver due to the poorly conditioned monolithic system matrix. We lifted this constraint here by applying an iterative linear solver in combination with a simple preconditioning strategy based on the Uzawa algorithm (Ho et al. 2017). Following the first promising results obtained here, we will investigate further ways to improve the linear solver, such as proposed by Kuchta et al. (2018), while also aiming for parallelization. In addition, alternatives to our monolithic coupling scheme will be investigated (Bungartz et al. 2016; Jaust et al. 2020). The limitation to free-flow grids conforming with the discrete pore bodies at the coupling interface could be addressed in future work by considering mortar techniques (Song et al. 2013; Mehmani and Balhoff 2014).
In the last section of this work, we applied the coupled model for the recalculation of a microfluidic experiment (Terzis et al. 2019). Here, the coupled model's results were in high accordance with the numerical reference solution and the proposed slip condition again proved beneficial.
In summary, the coupled, hybrid-dimensional model is an interesting and efficient option for the simulation of coupled systems of free flow over a permeable medium. It can certainly be used as a powerful design tool for the optimization of microfluidic experiments as well as in industrial applications, providing accurate results in a timely manner.
Arbogast, T., Pencheva, G., Wheeler, M.F., Yotov, I.: A multiscale mortar mixed finite element method. Multiscale Model. Simul. 6(1), 319–346 (2007). https://doi.org/10.1137/060662587
Balhoff, M.T., Thomas, S.G., Wheeler, M.F.: Mortar coupling and upscaling of pore-scale models. Comput. Geosci. 12(1), 15–27 (2007a). https://doi.org/10.1007/s10596-007-9058-6
Balhoff, M.T., Thompson, K.E., Hjortsø, M.: Coupling pore-scale networks to continuum-scale models of porous media. Comput. Geosci. 33(3), 393–410 (2007b). https://doi.org/10.1016/j.cageo.2006.05.012
Bastian, P., Blatt, M., Dedner, A., Engwer, C., Klöfkorn, R., Kornhuber, R., Ohlberger, M., Sander, O.: A generic grid interface for parallel and adaptive scientific computing. Part II: Implementation and tests in DUNE. Computing 82(2–3), 121–138 (2008a). https://doi.org/10.1007/s00607-008-0004-9
Bastian, P., Blatt, M., Dedner, A., Engwer, C., Klöfkorn, R., Ohlberger, M., Sander, O.: A generic grid interface for parallel and adaptive scientific computing. Part I: Abstract framework. Computing 82(2), 103–119 (2008b). https://doi.org/10.1007/s00607-008-0003-x
Beavers, G.S., Joseph, D.D.: Boundary conditions at a naturally permeable wall. J. Fluid Mech. 30(01), 197–207 (1967). https://doi.org/10.1017/S0022112067001375
Benzi, M., Golub, G.H., Liesen, J.: Numerical solution of saddle point problems. Acta Numer. 14, 1–137 (2005). https://doi.org/10.1017/s0962492904000212
Beyhaghi, S., Xu, Z., Pillai, K.M.: Achieving the inside–outside coupling during network simulation of isothermal drying of a porous medium in a turbulent flow. Transp. Porous Media 114(3), 823–842 (2016). https://doi.org/10.1007/s11242-016-0746-3
Blunt, M.J.: Multiphase Flow in Permeable Media: A Pore-Scale Perspective. Cambridge University Press, Cambridge (2017)
Bungartz, H.J., Lindner, F., Gatzhammer, B., Mehl, M., Scheufele, K., Shukaev, A., Uekermann, B.: PreCICE—a fully parallel library for multi-physics surface coupling. Comput. Fluids 141, 250–258 (2016). https://doi.org/10.1016/j.compfluid.2016.04.003
Chauhan, V.P., Stylianopoulos, T., Boucher, Y., Jain, R.K.: Delivery of molecular and nanoscale medicine to tumors: transport barriers and strategies. Annu. Rev. Chem. Biomol. Eng. 2(1), 281–298 (2011). https://doi.org/10.1146/annurev-chembioeng-061010-114300
Class, H., Weishaupt, K., Trötschler, O.: Experimental and simulation study on validating a numerical model for CO2 density-driven dissolution in water. Water 12(3), (2020). https://doi.org/10.3390/w12030738
Davis, T.A.: Algorithm 832: UMFPACK V4.3—an unsymmetric-pattern multifrontal method. ACM Trans. Math. Softw. (TOMS) 30(2), 196–199 (2004). https://doi.org/10.1145/992200.992206
Flekkøy, E.G., Oxaal, U., Feder, J., Jøssang, T.: Hydrodynamic dispersion at stagnation points: Simulations and experiments. Phys. Rev. E 52(5), 4952–4962 (1995). https://doi.org/10.1103/physreve.52.4952
Flemisch, B., Darcis, M., Erbertseder, K., Faigle, B., Lauser, A., Mosthaf, K., Müthing, S., Nuske, P., Tatomir, A., Wolff, M., et al.: DuMux: DUNE for multi-{phase, component, scale, physics,…} flow and transport in porous media. Adv. Water Resour. 34(9), 1102–1112 (2011). https://doi.org/10.1016/j.advwatres.2011.03.007
Gräser, C., Sander, O.: The dune-subgrid module and some applications. Computing 86(4), 269–290 (2009). https://doi.org/10.1007/s00607-009-0067-2
Gurau, V., Mann, J.A.: A critical overview of computational fluid dynamics multiphase models for proton exchange membrane fuel cells. SIAM J. Appl. Math. 70(2), 410–454 (2009). https://doi.org/10.1137/080727993
Harlow, F.H., Welch, J.E.: Numerical calculation of time-dependent viscous incompressible flow of fluid with free surface. Phys. Fluids 8(12), 2182–2189 (1965). https://doi.org/10.1063/1.1761178
Hassanizadeh, S.M., Gray, W.G.: Derivation of conditions describing transport across zones of reduced dynamics within multiphase systems. Water Resour. Res. 25(3), 529–539 (1989). https://doi.org/10.1029/WR025i003p00529
Heck, K., Ackermann, S., Becker, B., Coltman, E., Emmert, S., Flemisch, B., Gläser, D., Grüninger, C., Koch, T., Kurz, T., Lipp, M., Mohammadi, F., Scherrer, S., Schneider, M., Seitz, G., Stadler, L., Utz, M., Vescovini, A., Weinhardt, F., Weishaupt, K.: Dumu\(^\text{x}\) 3.1.0. (2019). https://doi.org/10.5281/zenodo.3482428
Ho, N., Olson, S.D., Walker, H.F.: Accelerating the uzawa algorithm. SIAM J. Sci. Comput. 39(5), S461–S476 (2017). https://doi.org/10.1137/16m1076770
Jamet, D., Chandesris, M., Goyeau, B.: On the equivalence of the discontinuous one-and two-domain approaches for the modeling of transport phenomena at a fluid/porous interface. Transp. Porous Media 78(3), 403–418 (2009). https://doi.org/10.1007/s11242-008-9314-9
Jasak, H.: OpenFOAM: Open source CFD in research and industry. Int. J. Naval Archit. Ocean Eng. 1(2), 89–94 (2009). https://doi.org/10.2478/ijnaoe-2013-0011
Jaust, A., Weishaupt, K., Mehl, M., Flemisch, B.: Partitioned coupling schemes for free-flow and porous-media applications with sharp interfaces. In: Finite Volumes for Complex Applications IX—Methods, Theoretical Aspects, Examples. Springer, pp. 605–613. (2020). https://doi.org/10.1007/978-3-030-43651-3_57
Jeong, J.T.: Slip boundary condition on an idealized porous wall. Phys. Fluids 13(7), 1884–1890 (2001). https://doi.org/10.1063/1.1373680
Jones, I.P.: Low reynolds number flow past a porous spherical shell. Math. Proc. Cambr. Philos. Soc. 73(1), 231–238 (1973). https://doi.org/10.1017/s0305004100047642
Kamrin, K., Bazant, M.Z., Stone, H.A.: Effective slip boundary conditions for arbitrary periodic surfaces: the surface mobility tensor. J. Fluid Mech. 658, 409–437 (2010). https://doi.org/10.1017/s0022112010001801
Koch, T., Gläser, D., Weishaupt, K., Ackermann, S., Beck, M., Becker, B., Burbulla, S., Class, H., Coltman, E., Emmert, S., Fetzer, T., Grüninger, C., Heck, K., Hommel, J., Kurz, T., Lipp, M., Mohammadi, F., Scherrer, S., Schneider, M., Seitz, G., Stadler, L., Utz, M., Weinhardt, F., Flemisch, B.: DuMux 3—an open-source simulator for solving flow and transport problems in porous media with a focus on model coupling. Comput. Math. Appl. (2020). https://doi.org/10.1016/j.camwa.2020.02.012
Kuchta, M., Mardal, K.A., Mortensen, M.: Preconditioning trace coupled 3d–1d systems using fractional Laplacian. Numer. Methods Partial Differ. Equ. 35(1), 375–393 (2018). https://doi.org/10.1002/num.22304
Kunz, P., Zarikos, I.M., Karadimitriou, N.K., Huber, M., Nieken, U., Hassanizadeh, S.M.: Study of multi-phase flow in porous media: comparison of SPH simulations with micro-model experiments. Transp. Porous Media 114(2), 581–600 (2015). https://doi.org/10.1007/s11242-015-0599-1
Laleian, A., Valocchi, A., Werth, C.: An incompressible, depth-averaged lattice Boltzmann method for liquid flow in microfluidic devices with variable aperture. Computation 3(4), 600–615 (2015). https://doi.org/10.3390/computation3040600
Lauga, E., Stone, H.A.: Effective slip in pressure-driven Stokes flow. J. Fluid Mech. 489, 55–77 (2003). https://doi.org/10.1017/s0022112003004695
Layton, W.J., Schieweck, F., Yotov, I.: Coupling fluid flow with porous media flow. SIAM J. Numer. Anal. 40(6), 2195–2218 (2002). https://doi.org/10.1137/S0036142901392766
Mehmani, Y., Balhoff, M.T.: Bridging from pore to continuum: a hybrid mortar domain decomposition framework for subsurface flow and transport. Multiscale Model. Simul. 12(2), 667–693 (2014). https://doi.org/10.1137/13092424X
Mehmani, Y., Tchelepi, H.A.: Minimum requirements for predictive pore-network modeling of solute transport in micromodels. Adv. Water Resour. 108, 83–98 (2017). https://doi.org/10.1016/j.advwatres.2017.07.014
Moffatt, H.K.: Viscous and resistive eddies near a sharp corner. J. Fluid Mech. 18(1), 1–18 (1964). https://doi.org/10.1017/s0022112064000015
Mosthaf, K., Baber, K., Flemisch, B., Helmig, R., Leijnse, A., Rybak, I., Wohlmuth, B.: A coupling concept for two-phase compositional porous-medium and single-phase compositional free flow. Water Resour. Res. 47(10) (2011). https://doi.org/10.1029/2011WR010685
Navier, C.: Mémoire sur les lois du mouvement des fluides. Mém. l'Acad. R. Sci. l'Inst. France 6(1823), 389–440 (1823)
Neale, G., Nader, W.: Practical significance of Brinkman's extension of Darcy's law: coupled parallel flows within a channel and a bounding porous medium. Can. J. Chem. Eng. 52(4), 475–478 (1974). https://doi.org/10.1002/cjce.5450520407
Ochoa-Tapia, J.A., Whitaker, S.: Momentum transfer at the boundary between a porous medium and a homogeneous fluid—I. Theoretical development. Int. J. Heat Mass Transf. 38(14), 2635–2646 (1995). https://doi.org/10.1016/0017-9310(94)00346-W
Oostrom, M., Mehmani, Y., Romero-Gomez, P., Tang, Y., Liu, H., Yoon, H., Kang, Q., Joekar-Niasar, V., Balhoff, M., Dewers, T., et al.: Pore-scale and continuum simulations of solute transport micromodel benchmark experiments. Comput. Geosci. 20(4), 857–879 (2016). https://doi.org/10.1007/s10596-014-9424-0
Patzek, T.W., Silin, D.B.: Shape factor and hydraulic conductance in noncircular capillaries: I. One-phase creeping flow. J. Colloid Interface Sci. 236, 295–304 (2001). https://doi.org/10.1006/jcis.2000.7413
Raoof, A., Hassanizadeh, S.M.: A new method for generating pore-network models of porous media. Transp. Porous Media 81(3), 391–407 (2009). https://doi.org/10.1007/s11242-009-9412-3
Saad, Y.: Iterative Methods for Sparse Linear Systems, 2nd edn. SIAM, Philadelphia (2003)
Sander, O., Koch, T., Schröder, N., Flemisch, B.: The Dune FoamGrid implementation for surface and network grids. Arch. Numer. Softw. 5(1), 217–244 (2017). https://doi.org/10.11588/ans.2017.1.28490
Scheibe, T.D., Murphy, E.M., Chen, X., Rice, A.K., Carroll, K.C., Palmer, B.J., Tartakovsky, A.M., Battiato, I., Wood, B.D.: An analysis platform for multiscale hydrogeologic modeling with emphasis on hybrid multiscale methods. Groundwater 53(1), 38–56 (2015). https://doi.org/10.1111/gwat.12179
Schönecker, C., Hardt, S.: Longitudinal and transverse flow over a cavity containing a second immiscible fluid. J. Fluid Mech. 717, 376–394 (2013). https://doi.org/10.1017/jfm.2012.577
Shahraeeni, E., Lehmann, P., Or, D.: Coupling of evaporative fluxes from drying porous surfaces with air boundary layer: characteristics of evaporation from discrete pores. Water Resour. Res. 48(9) (2012). https://doi.org/10.1029/2012WR011857
Shapira, Y.: Matrix-Based Multigrid: Theory and Applications. Springer, Berlin (2008)
Silva, G., Leal, N., Semiao, V.: Micro-PIV and CFD characterization of flows in a microchannel: velocity profiles, surface roughness and Poiseuille numbers. Int. J. Heat Fluid Flow 29(4), 1211–1220 (2008). https://doi.org/10.1016/j.ijheatfluidflow.2008.03.013
Song, P., Wang, C., Yotov, I.: Domain decomposition for Stokes-Darcy flows with curved interfaces. Proc. Comput. Sci. 18, 1077–1086 (2013). https://doi.org/10.1016/j.procs.2013.05.273
Terzis, A., Zarikos, I., Weishaupt, K., Yang, G., Chu, X., Helmig, R., Weigand, B.: Microscopic velocity field measurements inside a regular porous medium adjacent to a low Reynolds number channel flow. Phys. Fluids 31(4), 042001 (2019). https://doi.org/10.1063/1.5092169
Vanderborght, J., Fetzer, T., Mosthaf, K., Smits, K.M., Helmig, R.: Heat and water transport in soils and across the soil-atmosphere interface: 1. Theory and different model concepts. Water Resour. Res. 53(2), 1057–1079 (2017). https://doi.org/10.1002/2016WR019982
Venturoli, M., Boek, E.S.: Two-dimensional lattice-Boltzmann simulations of single phase flow in a pseudo two-dimensional micromodel. Phys. A 362(1), 23–29 (2006). https://doi.org/10.1016/j.physa.2005.09.006
Verboven, P., Flick, D., Nicolaï, B., Alvarez, G.: Modelling transport phenomena in refrigerated food bulks, packages and stacks: basics and advances. Int. J. Refrig. 29(6), 985–997 (2006). https://doi.org/10.1016/j.ijrefrig.2005.12.010
Versteeg, H.K., Malalasekera, W.: An Introduction to Computational Fluid Dynamics: The Finite Volume Method. Pearson Education (2007)
Wang, C.Y.: Flow over a surface with parallel grooves. Phys. Fluids 15(5), 1114–1121 (2003). https://doi.org/10.1063/1.1560925
Weishaupt, K., Joekar-Niasar, V., Helmig, R.: An efficient coupling of free flow and porous media flow using the pore-network modeling approach. J. Comput. Phys. X 1, 100011 (2019). https://doi.org/10.1016/j.jcpx.2019.100011
Whitaker, S.: The Method of Volume Averaging. Kluwer Academic, London (1999)
Yang, G., Weigand, B., Terzis, A., Weishaupt, K., Helmig, R.: Numerical simulation of turbulent flow and heat transfer in a three-dimensional channel coupled with flow through porous structures. Transp. Porous Media 122(1), 145–167 (2018). https://doi.org/10.1007/s11242-017-0995-9
Yang, G., Coltman, E., Weishaupt, K., Terzis, A., Helmig, R., Weigand, B.: On the Beavers–Joseph interface condition for non-parallel coupled channel flow over a porous structure at high Reynolds numbers. Transp. Porous Media 128(2), 431–457 (2019). https://doi.org/10.1007/s11242-019-01255-5
We thank the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for supporting this work by funding SFB 1313, Project Number 327154368. Guang Yang is grateful to the support from the National Natural Science Foundation of China (NSFC), Contract Number 51906142. We would also like to thank Ivan Yotov, Wietse Boon, Martin Schneider and Bernhard Weigand for fruitful discussions.
Open Access funding provided by Projekt DEAL.
Department of Hydromechanics and Modelling of Hydrosystems, University of Stuttgart, Stuttgart, Germany
K. Weishaupt, B. Flemisch & R. Helmig
Department of Mechanical Engineering, Stanford University, Stanford, CA, USA
A. Terzis
Institute of Nuclear and Radiological Sciences and Technology, Energy and Safety, NCSR Demokritos, Athens, Greece
I. Zarikos
School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
G. Yang
Environmental Hydrogeology Group, Department of Earth Sciences, Utrecht University, Utrecht, The Netherlands
D. A. M. de Winter
Correspondence to K. Weishaupt.
Appendix 1: Comparison of 3D Simulation Results with Micro-PIV Experimental Data
Figure 14 is a reproduction of Fig. 7a presented in Terzis et al. (2019), using the same experimental data and color scheme. In Fig. 14c, the results of the micro-PIV measurements are shown in terms of \(v_y\) and the velocity vector fields for four different locations A, B, C and D, as indicated in Fig. 14a. Figure 14b displays the corresponding simulation results (OpenFOAM), which agree very well with the experimental data in a qualitative sense, reproducing the same distinct flow patterns at the various locations: in region A, a pronounced inflow from the free-flow channel into the porous structure can be observed, which diminishes in the streamwise direction. Region C essentially shows a mirrored flow field: the fluid leaves the porous medium and re-enters the channel symmetrically with respect to A, and the vertical flow intensity increases again in the streamwise direction. In region B, at the center of the porous domain, no net influx or outflux occurs. The fluid crosses the interface in a downward motion on the right sides of the solid blocks (red spots) and returns to the free-flow channel at the left sides of the blocks (blue spots). Region D lies inside the porous domain and features mainly parallel flow in the x-direction.
Comparison of flow fields obtained by numerical simulation with OpenFOAM (b) and micro-PIV measurement results (c, reproduced from the original data of Terzis et al. 2019). The regions A, B, C and D are shown in (a)
In Fig. 15a, a more quantitative comparison is performed. Here, \(v_x\) is averaged in the x-direction between \(75 \le x/l \le 85\) along a vertical column, as shown by area E in Fig. 15b. Both the experimental and numerical data are given, and the graphs are normalized by the respective maximum values in the free-flow region. A very good agreement is found, both qualitatively and quantitatively. The local deviations can be explained by measurement uncertainties (Terzis et al. 2019) or by small-scale structural differences between the actual micromodel geometry and the computational domain, such as surface roughness (Silva et al. 2008), which is not captured by the numerical model. While Fig. 15c shows that the pillars of the micromodel are indeed not entirely smooth and that the corners are slightly rounded, the numerical model only considers perfectly smooth squares with sharp corners. This could also explain the local deviations of the flow angles \(\vartheta\) close to the interface between the free-flow channel and the porous medium, as presented in Fig. 16. A detailed analysis of the impact of the pillars' rounded edges is beyond the scope of this work and should be addressed in future studies.
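The averaging and normalization described above are straightforward to reproduce. The following NumPy sketch assumes the simulated velocity field has been sampled onto a regular grid; the array names, shapes and the synthetic example field are illustrative assumptions, not part of the original data set.

```python
import numpy as np

def averaged_profile(vx, x_over_l, x_min=75.0, x_max=85.0):
    """Average v_x over x_min <= x/l <= x_max for every y position (area E in
    Fig. 15b) and normalize by the maximum magnitude of the averaged profile."""
    mask = (x_over_l >= x_min) & (x_over_l <= x_max)
    profile = vx[:, mask].mean(axis=1)        # vx has shape (n_y, n_x)
    return profile / np.abs(profile).max()

# Illustrative synthetic field: Poiseuille-like profile in the channel (y/l > 0),
# small uniform velocity in the porous region (y/l < 0)
x = np.linspace(0.0, 100.0, 401)              # x/l
y = np.linspace(-2.0, 1.0, 121)               # y/l
channel = np.where(y > 0.0, y * (1.0 - y), 0.02)
vx = np.outer(channel, np.ones_like(x))
print(averaged_profile(vx, x)[:5])
```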
Experimental and numerical velocity profiles. a Comparison of averaged velocity profiles (\(75 \le x/l \le 85\)) between simulation (OpenFOAM) and experiment. The original data of Terzis et al. (2019) were used. \(v_x\) is averaged in x-direction between \(75 \le x/l \le 85\) at different locations of y, see area E in (b). c Camera image of a pillar within the porous domain showing the surface roughness of the pillar as a potential source of error
Figure 16 shows a symmetric characteristic of the flow angles due to the inflow into the porous medium and the outflow back into the free-flow channel. On the left, the velocity vectors feature a negative inclination as the flow enters the porous domain, while the same angles with opposite sign are found on the right side, where the flow returns to the free-flow channel. The local oscillations are caused by the same up- and downward movement of the flow between the pillars as explained for region B in Fig. 14. The angles are greater for \(y/l = 0.1\), which is closer to the interface, because the free flow in the channel senses a stronger influence of the porous medium there than at \(y/l = 0.5\).
In summary, the three-dimensional numerical model is able to reproduce the experimental data adequately. The obtained reference solution is thus suited for comparison with the reduced model's results as described in Sect. 5.
Experimental and numerical flow angles. Comparison of the flow angles \(\vartheta\) close to the interface between free flow and porous medium at \(y/l=0.5\) (a) and \(y/l=0.1\) (b) for the simulation and the experiment. The original data of Terzis et al. (2019) were used
Weishaupt, K., Terzis, A., Zarikos, I. et al. A Hybrid-Dimensional Coupled Pore-Network/Free-Flow Model Including Pore-Scale Slip and Its Application to a Micromodel Experiment. Transp Porous Med 135, 243–270 (2020). https://doi.org/10.1007/s11242-020-01477-y
Issue Date: October 2020
Porous medium
Micromodel
Complex kinetics and residual structure in the thermal unfolding of yeast triosephosphate isomerase
Ariana Labastida-Polito1,
Georgina Garza-Ramos2,
Menandro Camarillo-Cadena1,
Rafael A. Zubillaga1 &
Andrés Hernández-Arana1
Saccharomyces cerevisiae triosephosphate isomerase (yTIM) is a dimeric protein that shows noncoincident unfolding and refolding transitions (hysteresis) in temperature scans, a phenomenon indicative of the slow forward and backward reactions of the native-unfolded process. Thermal unfolding scans suggest that no stable intermediates appear in the unfolding of yTIM. However, reported evidence points to the presence of residual structure in the denatured monomer at high temperature.
Thermally denatured yTIM showed a clear trend towards the formation of aggregation-prone, β-strand-like residual structure when the pH decreased from 8.0 to 6.0, even though the thermal unfolding profiles retained a simple monophasic appearance regardless of pH. However, kinetic studies performed over a relatively wide temperature range revealed a complex unfolding mechanism comprising up to three observable phases, with largely different time constants, each accompanied by changes in secondary structure. Moreover, a simple sequential mechanism is unlikely to explain the observed variation of amplitudes and rate constants with temperature. This kinetic complexity is, however, not linked to the appearance of residual structure. Furthermore, the rate constant for the main unfolding phase shows small, rather unvarying values in the pH region where denatured yTIM gradually acquires a β-strand-like conformation. It appears, therefore, that the residual structure has no influence on the kinetic stability of the native protein. However, the presence of residual structure is clearly associated with increased irreversibility.
The slow temperature-induced unfolding of yeast TIM shows three kinetic phases. Rather than a simple sequential pathway, a complex mechanism involving off-pathway intermediates or even parallel pathways may be operating. β-strand-type residual structure, which appears below pH 8.0, is likely to be associated with increased irreversible aggregation of the unfolded protein. However, this denatured form apparently accelerates the refolding process.
It is now accepted that many proteins fold and unfold following complex kinetic models [1]. The most detailed kinetic studies of conformational change have been performed on small monomeric proteins by means of rapid mixing or fast temperature jumps, because protein molecules of this sort usually unfold reversibly but with relaxation times ranging from less than a millisecond to a few seconds [2–4]. Previous studies have demonstrated the presence of transiently populated intermediates, apart from the native and unfolded end-states [1, 3]. Intermediate states may be found either on- or off-pathway, and their interconnections may even result in the consolidation of parallel, competing folding-unfolding pathways [5, 6]. Furthermore, the combination of experimental studies and molecular dynamics simulations has provided detailed structural descriptions of the multiple intermediates and transition states involved [7]. Recently, strong emphasis has been placed on the structural characterization of unfolded states, because the presence of residual, native-like structure in parts of an otherwise unfolded polypeptide chain may be implicated in the speed of folding, as well as in the formation of misfolded molecules [8, 9].
However, there are examples of proteins that show very slow unfolding-refolding kinetics in the transition region (i.e., under conditions where the native and unfolded states are both significantly populated at equilibrium). Specifically, when unfolding is promoted by adding GuHCl or urea, slow-unfolding proteins take days to weeks to equilibrate, whereas for fast-unfolding proteins under similar conditions, equilibrium is reestablished in just a few hours [10–12]. Thus, if incubation times in the denaturing agent are not long enough, a slow-unfolding protein would display noncoincident unfolding and refolding profiles as the concentration of denaturing agent is varied (hysteresis). Likewise, hysteresis has been nicely demonstrated in the temperature-induced transitions of at least four proteins: an immunoglobulin light chain (monomer) [13], the Lpp-56 three-stranded α-helical coiled coil [14], and two dimeric triosephosphate isomerases [15, 16]. In these cases, thermal transitions detected by circular dichroism (CD) appear to be consistent with a two-state model with no intermediates.
Regarding triosephosphate isomerase (TIM), many mesophilic members belonging to this enzyme family have been found to unfold slowly in chemical-denaturation studies, with one or more equilibrium or kinetic intermediates [11, 17, 18]. In contrast, thermal unfolding transitions of TIMs (in the absence of chemical denaturants) usually manifest themselves as monophasic profiles (i.e., simple sigmoidal curves with no evidence of intermediates), as recorded by CD [15, 19–21]. Unfortunately, irreversibility appears as a common feature in thermal unfolding, which has precluded the study of TIM refolding in cooling scans. Nevertheless, Benítez-Cardoza et al. [15] demonstrated that yeast TIM (yTIM) thermal unfolding is highly reversible at low protein concentration (≈0.20 μM), although the unfolding-refolding cycle displays marked hysteresis when a heating-cooling rate of 2.0 °C min−1 is used. Attempts to achieve near-equilibrium transition profiles by decreasing the scan rate led to pronounced irreversibility [15].
At a fixed temperature, kinetic data for yTIM unfolding registered by far-UV circular dichroism (CD) over a restricted time span are well fitted by single exponential curves, whereas near-UV CD and fluorescence indicate biphasic kinetics. Refolding data are consistent with a second-order reaction [15]. Unlike yeast TIM, the enzyme from Trypanosoma cruzi (TcTIM) shows completely irreversible, temperature-induced denaturation, even at low protein concentration. Kinetic studies of this protein found that denaturation is a complex process in which two or three phases are clearly seen [19]. A common finding for both yTIM and TcTIM is that their denatured states appear to conserve some kind of residual structure, based on calorimetric data [15, 19].
This work mainly focuses on determining the kinetic characteristics of temperature-induced yTIM unfolding in aqueous solution over long durations and in a wide pH range. Regardless of pH, three kinetic phases were observed, although the small-amplitude faster phase was detected only at low temperatures. The relative amplitudes of the second and third phases vary with temperature in a way that seems difficult to explain by a sequential mechanism. The results thus show that the kinetics of yTIM thermal unfolding is more complex than previously thought. Furthermore, residual secondary structure was found in denatured yTIM below pH 8.0. Because this residual structure appears to be associated with the loss of refolding ability, its presence may indicate that misfolded, aggregation-prone structures are formed at high temperature. Molecular dynamics simulations showed that yTIM has a tendency to undergo α-to-β transitions when unfolded at high temperature, but this method does not properly reproduce the marked effect of pH on the structure of the thermally unfolded protein.
Overexpression and purification of wild-type Saccharomyces cerevisiae TIM (yTIM) was carried out as described elsewhere [22]. Mass spectrometry (Additional file 1) and SDS-PAGE showed that the obtained enzyme was homogeneous. Enzymatic activity was determined by the coupled assay with α-glycerophosphate dehydrogenase (α-GDH), using D-glyceraldehyde 3-phosphate (DGAP) as the TIM substrate [23]. Assays were performed at 25.0 °C in 1.0 mL of 0.1 M triethanolamine buffer (pH 7.4) containing 10 mM EDTA, 0.20 mM NADH, 0.02 of α-GDH, and 2.0 mM DGAP; the reaction was started by the addition of 3.0 ng of yTIM, and NADH oxidation was followed by the change in absorbance at 340 nm. The catalytic efficiency (kcat/KM) of this enzyme was 5.0 × 10⁶ s⁻¹ M⁻¹, a value similar to that reported previously [24, 25].
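As a brief illustration of how the coupled-assay readout translates into a reaction rate, the sketch below converts the slope of the absorbance trace at 340 nm into the rate of NADH oxidation. The NADH molar absorption coefficient (≈6220 M⁻¹ cm⁻¹) and the 1-cm path length are standard assumptions, and the example slope is invented, not a value from this study.

```python
# Convert an absorbance slope at 340 nm into a rate of NADH oxidation.
EPS_NADH_340 = 6220.0   # M^-1 cm^-1 for NADH at 340 nm (textbook value; assumption)
PATH_CM = 1.0           # cuvette path length in cm (assumption)

def nadh_rate_molar_per_s(dA340_per_min):
    """Rate of NADH consumption in mol L^-1 s^-1 from the absorbance slope (per minute)."""
    return (dA340_per_min / 60.0) / (EPS_NADH_340 * PATH_CM)

# Illustrative slope of 0.12 absorbance units per minute
v = nadh_rate_molar_per_s(0.12)
print(f"{v:.2e} M/s  ({v * 60.0 * 1e6:.1f} uM/min)")
```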
Circular dichroism spectra
Circular dichroism (CD) spectra were obtained with a JASCO J-715 instrument (Jasco Inc., Easton, MD) equipped with a Peltier-type cell holder for temperature control and stirring with a magnetic bar. Cells of 1.00-cm path length were used to keep the protein concentration near 10 μg mL⁻¹ (0.19 μM). Although this somewhat restricted the lower wavelength limit of data registering, a low concentration is mandatory to observe reversible thermal unfolding scans [15]. CD spectral data are reported as mean residue ellipticity, [θ], which was calculated as [θ] = 100 θ/(C l); in this expression θ is the measured ellipticity in degrees, C is the mean residue molar concentration (mean residue Mr = 107.5), and l is the cell path length in centimeters.
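A minimal sketch of the conversion just defined; the measured ellipticity and concentration in the example are illustrative numbers, not data from the paper.

```python
def mean_residue_ellipticity(theta_deg, conc_mg_per_ml, path_cm, mrw=107.5):
    """[theta] = 100*theta/(C*l), with C the mean residue molar concentration (mol/L)
    obtained from the mass concentration and the mean residue weight (107.5 g/mol)."""
    c_residue = conc_mg_per_ml / mrw          # mg/mL equals g/L, so g/L / (g/mol) = mol/L
    return 100.0 * theta_deg / (c_residue * path_cm)

# 10 ug/mL protein in a 1.00-cm cell and a measured ellipticity of -9.3 mdeg (illustrative)
print(mean_residue_ellipticity(-0.0093, 0.010, 1.00))   # about -1.0e4 deg cm^2 dmol^-1
```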
Thermal transitions
Conformational changes induced by heating or cooling of yTIM were continuously monitored by following the ellipticity at 220 nm while temperature was varied at 2.0 °C min−1. Samples (≈0.19 μM) were placed in a 1.00-cm cell with a magnetic stirrer, and the temperature within the cell was registered by the external probe of the Peltier-type accessory. Refolding profiles were registered immediately after the unfolding transitions had been completed.
Kinetics studies
Unfolding kinetics tracings were registered by following ellipticity changes at 220 nm, as described previously [15, 25]. Unfolding was initiated by adding a small aliquot of concentrated TIM solution to a 1.00-cm cell containing buffer equilibrated at the temperature selected for each experiment. Within the cell, the temperature reached ±0.15 °C of the final equilibrium value in about 15 s. The final protein concentration was 0.19 μM in most cases. Essentially, the same procedure was used for monitoring changes in intrinsic fluorescence over time. In this case, experiments were carried out in a K2 spectrofluorometer (ISS, Champaign, IL), which had a Peltier accessory. Protein samples were excited at 292 nm, and the light emitted at 318 nm was collected. Kinetic data were analyzed using a triple exponential decay equation:
$$ y={y}_0+{A}_1\left[ \exp \left(-{\lambda}_1t\right)-1\right]+{A}_2\left[ \exp \left(-{\lambda}_2t\right)-1\right]+{A}_3\left[ \exp \left(-{\lambda}_3t\right)-1\right] $$
where y is the physical observable monitored as a function of time t, and y0 is the initial value of the observable. Ai and λi represent the observed amplitude and rate constant, respectively, for the ith exponential phase. In some cases, only two exponential terms were required for satisfactory curve fitting.
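A minimal SciPy sketch of fitting this triple-exponential model to a kinetic tracing; the synthetic data, noise level and initial guesses are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import curve_fit

def triple_exp(t, y0, A1, l1, A2, l2, A3, l3):
    """Eqn. 1: y(t) = y0 + sum_i A_i [exp(-lambda_i t) - 1]."""
    return (y0 + A1 * (np.exp(-l1 * t) - 1.0)
               + A2 * (np.exp(-l2 * t) - 1.0)
               + A3 * (np.exp(-l3 * t) - 1.0))

# Synthetic ellipticity tracing with three well-separated phases (illustrative values)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 30000.0, 3000)                       # seconds
y = triple_exp(t, -10.0, 0.8, 3e-2, 5.5, 2e-3, 2.5, 1e-4) + rng.normal(0.0, 0.05, t.size)

p0 = (-10.0, 1.0, 1e-2, 5.0, 1e-3, 2.0, 1e-4)             # rough initial guesses
popt, pcov = curve_fit(triple_exp, t, y, p0=p0)
print("lambda_1, lambda_2, lambda_3:", popt[2], popt[4], popt[6])
```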
In refolding experiments, yTIM samples were first subjected to unfolding for 10 min at 63.0 °C. Then, the temperature control of the CD spectrometer Peltier accessory was set to a value 4.0 °C below the temperature intended for the study of the refolding reaction (42.0 °C) to allow for fast cooling of the sample (≈15 °C min−1). The final temperature value (42.0 °C) was entered into the cell-holder control when the solution in the cell was 0.5 °C above that value, and the CD signal was registered thereafter. Inside the cell, temperature came to equilibrium (±0.15 °C) in approximately 40 s.
Molecular dynamics (MD) simulations were performed using GROMACS 4.5.4 software [26] with the GROMOS96 53A6 force-field [27]. The side-chain ionization states in the protein at the pH values simulated (6.7, 7.4, and 8.0) were established using pKa values estimated with PROPKA [28]. Dimeric yTIM (PDB ID: 1YPI) was placed in the center of a periodic dodecahedral box with 10 Å between the protein and the edge of the box. To simulate the solvent conditions at pH 6.7 (7.4; 8.0), a total of 21,763 (21,757; 21,751) SPC water molecules, 12 (16; 20) sodium ions, and 7 (10; 12) chloride ions were needed to fill the box, neutralize the net protein charge, and reach the experimental ionic strength of 0.015 M (0.022 M; 0.027 M).
Prior to MD simulations, the system was relaxed by energy minimization, followed by 100 ps of thermal equilibration under the position restraints of protein heavy atoms through a harmonic force constant of 1000 kJ mol−1 nm−1. MD simulation was performed using an NPT ensemble at 423 K and 1.0 bar for 100 ns. A LINCS algorithm was applied to constrain the length of all covalent bonds [29], and a 2-fs time step was used. A cutoff of 1.0 nm was applied for short-range electrostatic and van der Waals interactions, while the long-range electrostatic forces were treated using the particle mesh Ewald method [30]. Two replicas were simulated at each solvent condition.
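The reported ion counts can be checked with a quick back-of-the-envelope calculation: the number of added salt pairs follows from the solvent volume (estimated here from the number of SPC waters at roughly 0.030 nm³ per molecule, which neglects the protein volume) and the target ionic strength. The per-water volume and the interpretation that only the added NaCl pairs set the ionic strength are assumptions of this sketch.

```python
N_A = 6.02214076e23          # Avogadro's number, 1/mol
V_WATER_NM3 = 0.030          # approximate volume per SPC water molecule (assumption)

def salt_pairs(n_waters, ionic_strength_molar):
    """Approximate number of 1:1 salt ion pairs giving the target ionic strength."""
    volume_liters = n_waters * V_WATER_NM3 * 1e-24        # 1 nm^3 = 1e-24 L
    return ionic_strength_molar * volume_liters * N_A

# pH 6.7 box: 21,763 waters, target ionic strength 0.015 M
print(round(salt_pairs(21763, 0.015)))   # ~6, close to the 7 Cl- reported; the extra
                                         # Na+ ions neutralize the protein's net charge
```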
Unfolding-refolding thermal transitions
Denaturation (unfolding) and renaturation (refolding) of yTIM were followed by continuous monitoring of the ellipticity (220 nm) under a constant heating or cooling rate of 2.0 °C min−1. Temperature scanning profiles recorded at three different pH values are shown in Fig. 1. These profiles display the hysteresis phenomenon previously observed for yTIM [15, 16, 25], which indicates that unfolding and refolding events occur under kinetic control at the imposed scanning rate [14, 15]. It is clear that pH has an influence on the kinetic stability of the protein, because the apparent melting temperature is displaced to lower values at pH 8.5. Despite this pH effect, all the unfolding traces appear as sigmoid curves, with no evidence of stable intermediates. However, the total ellipticity change at pH 6.7 that takes place upon denaturation seems slightly larger (Fig. 1). It must be noted that the up-temperature scans in Fig. 1 were not allowed to proceed to higher temperatures to avoid reactions that make the process irreversible and thus decrease the extent of refolding on down-temperature scans [15].
Thermal unfolding-refolding transitions of yTIM at selected pH values. The ellipticity at 220 nm was monitored while samples were heated or cooled at 2.0 °C min−1. Arrows indicate whether the temperature increased or decreased in scans
In a different set of experiments, denatured samples of the enzyme were left to stand at 70 °C for 10 min to ensure that unfolding had been completed before their CD spectra were recorded. Spectra shown in Fig. 2 indicate that the native α/β secondary structure of yTIM is rather insensitive to pH and is largely lost upon heating at all pH values, as judged by the decrease in magnitude in the 208–222 nm region at high temperature (Fig. 2). However, the spectrum of heat-denatured yTIM shows striking changes as pH is varied. Above pH 8.0, the spectral shape and signal magnitude of the denatured enzyme are typical of small and medium-size proteins (e.g., hen-egg lysozyme, ribonuclease A, cytochrome C, staphylococcal nuclease, cysteine proteinases) when unfolded at high temperatures in the absence of denaturant agents (see, for example, CD spectra of native and thermally unfolded lysozyme in Additional file 2). This spectral type is characterized by a negative band of approximately 10 × 10³ deg cm² dmol⁻¹ centered at 202–204 nm, along with a broad negative shoulder with magnitude around 5 × 10³ deg cm² dmol⁻¹ at longer wavelength [31, 32]. Below pH 8.0, the spectra of heat-denatured yTIM progressively decrease in magnitude and acquire a shape typical of all-β proteins [33], thus pointing to the presence of residual secondary structure in the denatured enzyme.
Far-UV CD spectra of thermally unfolded yTIM at different pH values. Protein samples were allowed to unfold by continuous heating (2 °C min−1) until the end of the transition (cf. Fig. 1) and then left to stand at 70.0 °C for 10 min before recording their spectra. For comparison, spectra of native yTIM at various pH values are also shown (dotted lines)
Regarding yTIM refolding in cooling-down scans, it is evident that this process becomes increasingly irreversible as the pH decreases below pH 7.0, as judged from the extent of recovery of native yTIM ellipticity shown in Fig. 1. To gain detailed information on the influence of pH in the unfolding and refolding events, further kinetic experiments were carried out.
Unfolding kinetics
Kinetic studies were carried out by monitoring the time course of ellipticity at 220 nm. Experiments examining a large temperature interval were done at pH 8.0 and 6.7, where the CD spectra of denatured yTIM showed distinct features. The results at pH 8.0 indicate that at relatively high temperatures (60.0 °C and above), the loss of secondary structure shows double exponential behavior (Fig. 3), with phases well separated on the time scale. Indeed, over a restricted time interval, a single exponential-decay equation can fit the experimental data reasonably well. Only when data were recorded over a long time did a second phase become readily apparent, but this phase had small amplitude. Nevertheless, at low temperature, triple exponential behavior was observed (Fig. 3). The fastest phase, which conveys a minor ellipticity change, occurred too fast for accurate assessment of the kinetic constant by the manual-mixing method (i.e., time constant of about 20 to 100 s). This fast phase seems to be completely lost within the dead time in experiments at high temperature. Hereafter, the observed rate constants are referred to as λ1, λ2, and λ3, in descending order of their magnitudes. Unfolding of yTIM at pH 6.7 showed similar behavior, with two and three kinetic phases at high and low temperature, respectively, as shown in Additional file 3.
Kinetics of yTIM unfolding at pH 8.0, as followed by far-UV CD at 220 nm. Data shown correspond to 54.5 °C (upper tracing) and 60.0 °C (lower tracing). Red lines are least-squares fits of triple (upper curve) or double (lower curve) exponential decay equations to experimental data (see Methods). Residuals from fit (black lines) are shown below each kinetic tracing
CD spectra were recorded near the end of the unfolding process when the slowest phase was more than 98 % complete (these experiments required recordings of kinetic data for more than nine hours in the case of low temperatures). The final spectra appeared nearly identical, notwithstanding the temperature at which the kinetics was studied (see Additional file 4 for results obtained at pH 8.0). Furthermore, at a given pH, the spectral shape and magnitude observed at the end of unfolding were both similar to those illustrated in Fig. 2. In other words, the final conformation achieved by the protein seems to be independent of the temperature (in the range studied), but is otherwise strongly affected by pH.
The voltage applied to the phototube of the CD instrument, which is proportional to the absorbance, was simultaneously recorded. The measurements indicated that changes in ellipticity associated with the first two phases are accompanied by only small changes (5.0 % or less) in the absorbance of the protein solution (see Additional file 5). Such small changes are known to occur due to alterations in the secondary and, to less extent, the tertiary structure of proteins and polypeptides [34]. However, a relatively large absorbance increment (approximately 10.0 % of the protein absorbance) was linked to the slower CD-detected kinetic phase. It is likely that this apparent increment comes from the scattering of light by aggregates of unfolded protein molecules.
Monitoring of the denaturation kinetics by changes in the fluorescence intensity also showed that this is a complex process (Fig. 4) in which there is a progressive decrease of intensity (at the wavelength of maximum emission by native yTIM). Overall, comparison of the plots shown in Figs. 3 and 4 indicates that progressive loss of secondary structure upon denaturation is accompanied by a quenching of the fluorescence signal of tryptophan residues, which in turn likely reflects either the exposure of these residues to the aqueous solvent or less constraint by the environment [35]. Notwithstanding the temperature, three exponential terms were required to fit fluorescence data. As in CD experiments, the first fluorescence-detected phase was too fast (time constant of about 25 s) for an accurate determination of its rate constant. At low temperature (55.0 °C), the rate constant for the second phase had a value similar to that of λ2 from CD experiments (the two values differed by 50–80 %). At 62.0 to 64.0 °C, however, it was the first fluorescence-detected rate constant that was consistent with λ2. Furthermore, the decrease in the fluorescence intensity extended over a much longer time than the change in ellipticity (i.e., the rate constant for the slowest phase was approximately three- to fourfold smaller when determined from fluorescence than from CD). These markedly different values suggest that the slowest phase comprises several elementary steps that respond differently to the spectroscopic probes employed. For instance, formation of molecular aggregates can conceivably occur with little or no change in secondary conformation, but with an otherwise significant fluorescence quenching of tryptophan residues.
Kinetics of yTIM unfolding at pH 8.0, as followed by fluorescence intensity. Data shown correspond to 55.0 °C (upper tracing) and 60.0 °C (lower tracing). Protein samples were excited at 292 nm, and the light emitted at 318 nm was collected. Red lines are least-squares fits of triple exponential decay equations to experimental data (see Methods). Residuals from fit (black lines) are shown above each kinetic tracing
Kinetic model for yTIM unfolding
The simplest model accounting for the results obtained from CD would be that of three sequential reactions (Scheme 1), with native and unfolded yTIM (N and U, respectively) and two intermediate species (I and X):
Kinetic model for three sequential first-order reactions
In this model, each of the three λ values determined from data analysis (eqn. 1) is identical to one of the microscopic rate constants k1, k2, and k3. As mentioned, neither the rate constant nor the amplitude of the faster phase could be accurately determined from experiments at the lowest temperatures studied. Moreover, this phase was apparently lost within the dead time of experiments performed at high temperature. Fortunately, because k1 seems to be 15–20 times larger than k2, the first kinetic step occurs on a much shorter time scale than the other steps and can be regarded as kinetically separated from the other events, at least in a first approximation. This implies that amplitudes A2 and A3 reflect changes involved solely with the steps I → X → U. Therefore, the kinetic model can be simplified to a two-step model (Scheme 2).
Simplified kinetic model involving only two first-order steps
Equations describing the evolution in time of the fraction of each species are well known [36, 37]. By denoting the characteristic ellipticity of each species as θI, θX, and θU, it can be shown that (see Additional file 6):
$$ \left({\uptheta}_{\mathrm{X}}-{\uptheta}_{\mathrm{I}}\right)/\left({\uptheta}_{\mathrm{U}}-{\uptheta}_{\mathrm{I}}\right)={k}_3/{k}_2-\left[\left({k}_3-{k}_2\right)/{k}_2\right]\left[{A}_2/\left({A}_2+{A}_3\right)\right] $$
$$ \left({\uptheta}_{\mathrm{U}}-{\uptheta}_{\mathrm{X}}\right)/\left({\uptheta}_{\mathrm{U}}-{\uptheta}_{\mathrm{I}}\right)=-\left[\left({k}_3-{k}_2\right)/{k}_2\right]\left[{A}_3/\left({A}_2+{A}_3\right)\right] $$
The two equations above were used to compute (θX−θI)/(θU−θI) and (θU−θX)/(θU−θI), which give the ellipticity change as a fraction of the total change for each step in Scheme 2. The results indicate that the degree of unfolding occurring during the I → X step (normalized to a total unitary change) would vary from 0.35 to 0.70 over a temperature range of 11 °C (Fig. 5). For the X → U step, a concomitant decrease in the degree of unfolding would take place. Admittedly, it seems unlikely that the conformation of intermediate species would vary so drastically within such a narrow temperature range. Alternatively, these results may point to the presence of an off-pathway intermediate or even to different, parallel unfolding pathways whose predominance changes with temperature.
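To make the computation behind Fig. 5 concrete, the sketch below evaluates eqns. 2 and 3 from fitted rate constants and amplitudes; the numerical values in the example are invented for illustration and are not the experimental ones.

```python
def fractional_changes(k2, k3, A2, A3):
    """Evaluate eqns. 2 and 3: the fractional ellipticity change assigned to the
    steps I -> X and X -> U. By construction the two fractions sum to 1."""
    ratio = (k3 - k2) / k2
    step_IX = k3 / k2 - ratio * (A2 / (A2 + A3))    # (theta_X - theta_I)/(theta_U - theta_I)
    step_XU = -ratio * (A3 / (A2 + A3))             # (theta_U - theta_X)/(theta_U - theta_I)
    return step_IX, step_XU

# Illustrative observed rate constants (s^-1) and amplitudes
print(fractional_changes(k2=2.0e-3, k3=1.5e-4, A2=5.0, A3=2.5))   # ~ (0.69, 0.31)
```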
Fractional structural change for the two kinetic steps in Scheme 2 at different temperatures. Fractional changes in ellipticity were calculated from eqns. 2 and 3 from values of the rate constants and amplitudes determined at pH 6.7. Data for step I → X, i.e., (θX−θI)/(θU−θI), are shown as circles; data for step X → U, i.e., (θU−θX)/(θU−θI), are shown as squares
Temperature dependence of unfolding rate constants
Further studies on the denaturation of the enzyme were also performed at other pH values but over a restricted temperature range to determine the activation parameters that control the temperature dependence of k2 and k3. Results for selected pH values are shown in Fig. 6 as Eyring plots, which agree with the well-known equation:
Eyring plots for the rate constants k2 (a) and k3 (b) at selected pH values. Rate constants were determined from far-UV CD kinetic experiments. Dotted lines in (a) are linear fits performed with data corresponding to temperatures above 60.0 °C for pH 6.7 and 8.0
$$ \ln \left(k/T\right)= \ln E+\varDelta {S}^{\ddagger }/R-\left(\varDelta {H}^{\ddagger }/R\right)\left(1/T\right) $$
where k is the rate constant for an elementary reaction, T is the absolute temperature, E stands for a preexponential factor, and ΔS‡ and ΔH‡ represent the activation entropy and enthalpy, respectively. Figure 6a shows that plots corresponding to k2 follow linear trends when a narrow temperature range is considered. This linearity was observed before for yTIM and has been found for a large number of other proteins [15]. However, in the cases of pH 6.7 and 8.0, at which larger temperature intervals were examined, Eyring plots appear slightly curved upwards in the low temperature region. This might be due to a shift between parallel unfolding pathways with different activation enthalpies [38]; that is, unfolding would switch from one predominant pathway to another as the temperature varies, in agreement with the interpretation mentioned for the change with temperature of the computed degree of unfolding for step I → X. However, a nonzero activation heat capacity cannot be ruled out as the origin of the curvature.
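A hedged sketch of how activation parameters are extracted from a linear Eyring plot (eqn. 4): ln(k/T) is regressed on 1/T, the slope gives −ΔH‡/R, and the intercept lumps together ln E and ΔS‡/R and is not separated here. The rate constants in the example are synthetic values chosen only to be of the right order of magnitude.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def eyring_fit(T_kelvin, k_obs):
    """Linear fit of ln(k/T) vs 1/T; returns the activation enthalpy (J/mol)
    and the intercept, which equals ln(E) + dS/R in eqn. 4."""
    T = np.asarray(T_kelvin, dtype=float)
    k = np.asarray(k_obs, dtype=float)
    slope, intercept = np.polyfit(1.0 / T, np.log(k / T), 1)
    return -slope * R, intercept

# Synthetic rate constants roughly consistent with dH ~ 450 kJ/mol (not experimental data)
T = np.array([333.15, 334.65, 336.15, 337.65])           # 60.0-64.5 C
k2 = 1.0e-4 * np.exp(-450e3 / R * (1.0 / T - 1.0 / 335.0))
print(f"Apparent activation enthalpy: {eyring_fit(T, k2)[0] / 1e3:.0f} kJ/mol")
```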
From Eyring plots, such as those in Fig. 6a, ΔH2‡ was determined between pH 6.0 and 8.5. It must be noted that values of k2 were determined from data registered in a temperature region in which the unfolding degree accompanying step I → X remains relatively constant (i.e., from 60 to 65 °C, cf. Fig. 5). Therefore, k2 can be assigned to a single predominant pathway. Overall, the value of ΔH2‡ was about 450 kJ mol⁻¹ at pH 6.0–8.0 and showed a slight decrease (≈15 %) at pH 8.5 (data not shown). In contrast, Eyring plots for k3 display a linear but ill-defined trend (Fig. 6b), suggesting that the slowest kinetic phase is indeed composed of several elementary steps. It is also seen that k3 is much less temperature dependent than k2.
The effect of pH on k2 and k3 was examined over a longer interval of pH values at constant temperature; 60.0 °C was chosen because, at this temperature, there is a single apparent pathway and the unfolding process is slow enough to allow k2 to be determined over an extended pH range. Results are shown in Fig. 7, which shows that pH-induced changes in k2 resemble the sigmoid titration curve for an ionizable group with an approximate pKa of 8.5. Because this value of pKa is close to that of a thiol group, it may be hypothesized that a cysteine residue is responsible for the behavior observed for k2. In this regard, it has been proposed that Cys126, which is a residue conserved within the family of TIM enzymes, plays an important role in the stability of this protein [24]. In contrast, k3 values showed no defined variation with pH, again suggesting that the step X → U actually comprises multiple individual reactions.
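The sigmoid dependence described above corresponds to a single-site titration model in which k2 interpolates between a low-pH and a high-pH limiting value; a small sketch follows. The limiting rate constants used here are placeholders, not values read from Fig. 7; only the pKa of about 8.5 comes from the text.

```python
import numpy as np

def k2_titration(pH, k_acid, k_base, pKa=8.5):
    """Single-site titration curve: k2 goes from k_acid (protonated group) to
    k_base (deprotonated group) as pH increases past the pKa."""
    frac_deprotonated = 1.0 / (1.0 + 10.0 ** (pKa - np.asarray(pH)))
    return k_acid + (k_base - k_acid) * frac_deprotonated

pH = np.linspace(6.0, 9.5, 8)
print(k2_titration(pH, k_acid=2e-4, k_base=2e-3))   # placeholder limiting values (s^-1)
```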
Variation of rate constants k2 and k3 with pH. Data for k2 (squares) and k3 (circles) were determined from far-UV CD kinetic experiments at 60.0 °C
Refolding of yTIM
As reported previously [15, 24, 25], the kinetics of yTIM refolding at low protein concentration (0.13–0.75 μM) and in a certain temperature range is slow enough to be monitored without resorting to fast temperature-jump techniques. By using the procedure described in the Methods section, we followed the recovery of secondary structure under two pH conditions. These studies were aimed at determining the effect of the residual native-like structure of unfolded yTIM (which is clearly observed at pH 6.7) on the refolding ability of the enzyme. For this purpose, yTIM samples were allowed to unfold (in the cell of the CD instrument) for 10 min at 63.0 °C. These conditions ensured ca. 85 % (pH 6.7) or 99 % (pH 8.0) unfolding, as judged by the ellipticity signal. After that, the protein solution was cooled to 42.0 °C to record the refolding reaction. Additional file 7 shows that at pH 6.7 the enzyme refolds faster than at pH 8.0. In both cases, however, refolding tracings are adequately described by a second-order kinetics equation, as determined previously [15, 24].
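Since refolding is reported to follow second-order kinetics (consistent with an association-limited, dimerization-coupled step), a minimal sketch of a corresponding signal model is given below; the functional form, apparent rate constant and ellipticity end points are illustrative assumptions, not the fit actually used in the paper or its Additional file 7.

```python
import numpy as np
from scipy.optimize import curve_fit

def second_order_refolding(t, theta_u, theta_n, k_app):
    """Ellipticity for refolding limited by a second-order step: the fraction
    refolded is f(t) = k_app*t / (1 + k_app*t), with the initial monomer
    concentration absorbed into the apparent constant k_app."""
    f = k_app * t / (1.0 + k_app * t)
    return theta_u + (theta_n - theta_u) * f

# Synthetic refolding tracing at 42 C (illustrative parameters only)
t = np.linspace(0.0, 7200.0, 600)                        # seconds
y = second_order_refolding(t, -3.0, -10.0, 1.2e-3) + np.random.normal(0.0, 0.05, t.size)
popt, _ = curve_fit(second_order_refolding, t, y, p0=(-3.0, -9.0, 1e-3))
print("apparent second-order constant:", popt[2])
```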
To explore the effect of the residual structure on the reversibility of the unfolding process, samples of yTIM were unfolded for different time spans and then cooled to 25 °C to record CD spectra. As a quantitative indicator of irreversibility, the difference in ellipticity (220 nm) between cooled-down samples and native yTIM, normalized to the ellipticity of the native protein, was used. The results are shown in Fig. 8, together with the fractional values of U (fU) in Scheme 2. Experimentally determined values of the kinetic constants k2 and k3 were used to calculate the time variation of fU according to eqn. S3 in Additional file 6. An inspection of the plots in the figure makes it evident that irreversibility is more intense at pH 6.7 than pH 8.0, as expected from the thermal scan results (cf. Fig. 1). At the lower pH, however, irreversibility begins with early unfolding times and approximately parallels the formation of fU. In contrast, the onset of irreversibility at pH 8.0 is delayed and thus appears as a late event in unfolding, which takes place after the final U state becomes largely populated. This suggests again that the slowest CD-detected kinetic phase does not represent an elementary step. Reactions that lead to irreversibility probably do not involve major changes in secondary conformation and are therefore silent in CD studies.
Time course for the appearance of irreversibility on the unfolding of yTIM. Samples of yTIM were unfolded (63.0 °C) for different time spans, and then cooled to 25 °C for recording of CD spectra. Irreversibility was then calculated as the difference in ellipticity (220 nm) between cooled-down samples and native yTIM, normalized to the ellipticity of the native protein. Irreversibility data are represented by open (pH 6.7) or solid (pH 8.0) squares. Open (pH 6.7) and solid (pH 8.0) circles correspond to the fraction of unfolded protein, fU, which was calculated from eqn. S3 in Additional file 6
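For reference, for the simplified sequential scheme I → X → U the time course of fU has the standard closed-form solution for two consecutive irreversible first-order steps, which is presumably what eqn. S3 in Additional file 6 expresses; the sketch below uses illustrative rate constants.

```python
import numpy as np

def f_U(t, k2, k3):
    """Fraction of U for I --k2--> X --k3--> U, with all protein initially as I."""
    return 1.0 + (k2 * np.exp(-k3 * t) - k3 * np.exp(-k2 * t)) / (k3 - k2)

t = np.linspace(0.0, 3600.0, 7)                  # seconds
print(f_U(t, k2=2.0e-3, k3=3.0e-4))              # rate constants are illustrative only
```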
In summary, results from refolding studies indicate that yTIM refolds faster from the denatured state with residual structure, although such a denatured state decreases the folding efficiency (i.e., the amount of native protein recovered upon refolding). Thus, it may be thought that under physiological conditions (pH near neutrality, 37 °C) the advantage of a fast folding process overcomes the difficulties posed by some degree of irreversibility. Furthermore, because irreversibility appears to be related to the time unfolded (denatured) yTIM stays at moderate to high temperatures [15], the problem of a low folding efficiency may be of less significance for mesophilic organisms such as Saccharomyces cerevisiae.
Residual structure
As mentioned, thermally denatured yTIM retains a high content of β structure below pH 8.0 (see Fig. 2), which is implicated in reactions leading to irreversibility. This type of secondary structure has been found to be refractory to temperature in thermophilic and mesophilic proteins [39, 40], whereas in other instances, such as in apomyoglobin, β structure appears to be formed at elevated temperature [31] as a result of α-to-β transitions [41]. Furthermore, molecular dynamics simulations have shown that certain all-α peptides, and even full-length proteins, may be transformed to all-β structures [42, 43]. We carried out preliminary MD simulations to investigate whether this method can reproduce the structural differences in denatured yTIM that were experimentally observed when pH is varied. Simulations run at 400 K for 100 ns showed that helices are completely lost after 75 ns, regardless of the pH value. Conversely, β-strands actually seem to be formed during the simulation, but they are slightly more abundant at pH 6.7 than at pH 8.0 (Additional file 8). Although preliminary, these results are encouraging, for they indicate that some regions in the polypeptide sequence of yTIM have a tendency to undergo α-to-β transitions. In contrast, the effect of pH does not appear to have been properly taken into account by the MD method used here, and thus deserves to be studied further.
Two experimental approaches were used to study the influence of pH on the temperature-induced unfolding of yeast triosephosphate isomerase (yTIM). Temperature-scan experiments showed that unfolding profiles (monitored by CD) appear as monophasic transitions, with no evidence of intermediate species. pH was found to affect the kinetic stability of the protein based on shifts in the melting temperature (Tm). Furthermore, below pH 8.0, CD spectra of heat-denatured yTIM gradually changed in shape to resemble those of proteins rich in β-strands, and the unfolded protein became prone to aggregation.
Despite the apparent simplicity of thermal profiles, kinetic studies performed at constant temperature clearly showed the presence of up to three kinetic phases, irrespective of pH (at high temperatures, the fastest phase was completely lost within the experimental dead time). Because the relative values of the kinetic constants suggested that the fastest phase is indeed decoupled from the other two, we analyzed the kinetic constants and amplitudes of the two slowest phases according to a two-step sequential mechanism. Results from the analysis, however, pointed to a more complex actual mechanism, such as one that involves parallel pathways. The temperature dependence of the rate constants appears to lend some evidence to this proposal. A simple model for yTIM unfolding that accounts for the information summarized above is shown in Scheme 3, where N2 stands for the native dimer, I and X represent partially unfolded intermediates, and D and U are used to symbolize, respectively, the denatured form with β-strand residual structure and the thermally unfolded state of yTIM.
Proposed model for yTIM unfolding
In summary, it was shown that the temperature-induced denaturation of yTIM reveals itself as a complex process when followed for a long time and over a wide temperature range. Further investigation over a wide pH range showed that the kinetic stability of yTIM responds to the titration of an ionizable group with pKa ≈ 8.5. Refolding studies, on the other hand, indicated that the refolding ability of the unfolded protein decreases under pH conditions that favor the formation of residual, β-strand-like structures in heat-denatured yTIM, even though refolding is faster under such conditions. Moreover, most of the reactions leading to irreversibility occur late in the unfolding process and are not detected by CD. Finally, as demonstrated in molecular dynamics simulations, yTIM unfolding shows α-to-β transition behavior, albeit with no discrimination of the experimentally observed pH effect.
yTIM: Triosephosphate isomerase from yeast (Saccharomyces cerevisiae)
CD: Circular dichroism
MD: Molecular dynamics
Sánchez IE, Kiefhaber T. Evidence for sequential barriers and obligatory intermediates in apparent two-state protein folding. J Mol Biol. 2003;325:367–76.
Ferguson N, Fersht A. Early events in protein folding. Curr Opin Struct Biol. 2003;13:75–81.
Kamagata K, Arai M, Kuwajima K. Unification of the folding mechanisms of non-two-state and two-state proteins. J Mol Biol. 2004;339:951–65.
Tsong TY. Detection of three kinetic phases in the thermal unfolding of ferricytochrome c. Biochemistry. 1973;12:2209–14.
Baldwin RL. On-pathway versus off-pathway folding intermediates. Folding & Design. 1996;1:R1–8.
Aghera N, Udgaonkar JB. The utilization of competing unfolding pathways of monellin is dictated by enthalpic barriers. Biochemistry. 2013;52:5770–9.
Travaglini-Allocatelli C, Ivarsson Y, Jemth P, Gianni S. Folding and stability of globular proteins and implications for function. Curr Opin Struct Biol. 2009;19:3–7.
Wong KB, Clarke J, Bond CJ, Neira JL, Freund SMV, Fersht AR, et al. Towards a complete description of the structural and dynamic properties of the denatured state of barnase and the role of residual structure in folding. J Mol Biol. 2000;296:1257–82.
Pearce MC, Cabrita LD, Rubin H, Gore MG, Bottomley SP. Identification of residual structure within denatured antichymotrypsin: implications for serpin folding and misfolding. Biochem Biophys Res Commun. 2004;324:729–35.
Sawano M, Yamamoto H, Ogasahara K, Kidokoro S, Katoh S, Ohnuma T, et al. Thermodynamic basis for the stabilities of three CutA1s from Pyrococcus horikoshii, Thermus thermophilus, and Oryza sativa, with unusually high denaturation temperatures. Biochemistry. 2008;47:721–30.
Vázquez-Pérez AR, Fernández-Velasco DA. Pressure and denaturants in the unfolding of triosephosphate isomerase: the monomeric intermediates of the enzymes from Saccharomyces cerevisiae and Entamoeba histolytica. Biochemistry. 2007;46:8624–33.
Shirley BA. Urea and guanidine hydrochloride denaturation curves. In: Shirley BA, editor. Protein Stability and Folding. Theory and Practice. Totowa, NJ: Humana Press; 1995. p. 177–90.
Blancas-Mejía LM, Tischer A, Thompson JR, Tai J, Wang L, Auton M, et al. Kinetic control in protein folding for light chain amyloidosis and the differential effects of somatic mutations. J Mol Biol. 2014;426:347–61.
Dragan AI, Potekhin SA, Sivolob A, Lu M, Privalov PL. Kinetics and thermodynamics of the unfolding and refolding of the three-stranded α-helical coiled coil, Lpp-56. Biochemistry. 2004;43:14891–900.
Benítez-Cardoza CG, Rojo-Domínguez A, Hernández-Arana A. Temperature-induced denaturation and renaturation of triosephosphate isomerase from Saccharomyces cerevisiae: evidence of dimerization coupled to refolding of the thermally unfolded protein. Biochemistry. 2001;40:9049–58.
Samanta M, Banerjee M, Murthy MRN, Balaram H, Balaram P. Probing the role of the fully conserved Cys 126 in triosephosphate isomerase by site-specific mutagenesis − distal effects on dimer stability. FEBS Journal. 2011;278:1932–43.
Pan H, Raza AS, Smith DL. Equilibrium and kinetic folding of rabbit muscle triosephosphate isomerase by hydrogen exchange mass spectrometry. J Mol Biol. 2004;336:1251–63.
Guzman-Luna V, Garza-Ramos G. The folding pathway of glycosomal triosephosphate isomerase: Structural insights into equilibrium intermediates. Proteins. 2012;80:1669–82.
Mixcoha-Hernández E, Moreno-Vargas LM, Rojo-Domínguez A, Benítez-Cardoza CG. Thermal-unfolding reaction of triosephosphate isomerase from trypanosoma cruzi. Protein J. 2007;26:491–8.
Cabrera N, Hernández-Alcántara G, Mendoza-Hernández G, Gómez-Puyou A, Perez-Montfort R. Key residues of loop 3 in the interaction with the interface residue at position 14 in triosephosphate isomerase from Trypanosoma brucei. Biochemistry. 2008;47:3499–506.
Dhaunta N, Arora K, Chandrayan SK, Guptasarma P. Introduction of a thermophile-sourced ion pair network in the fourth beta/alpha unit of a psychrophile-derived triosephosphate isomerase from Methanococcoides burtonii significantly increases its kinetic thermal stability. Biochim Biophys Acta. 2013;1834:1023–33.
Vázquez-Contreras E, Zubillaga RA, Mendoza-Hernández G, Costas M, Fernández-Velasco DA. Equilibrium unfolding of yeast triosephosphate isomerase: a monomeric intermediate in guanidine-HCl and two-state behavior in urea. Protein Pept Lett. 2000;7:57–64.
Rozacky EE, Sawyer TH, Barton RA, Gracy RW. Studies of human triosephosphate isomerase: isolation and properties of the enzyme from erythrocytes. Arch Biochem Biophys. 1971;146:312–20.
González-Mondragón E, Zubillaga RA, Saavedra E, Chánez-Cárdenas ME, Pérez-Montfort R, Hernández-Arana A. Conserved cysteine 126 in triosephosphate isomerase is required not for enzymatic activity but for proper folding and stability. Biochemistry. 2004;43:3255–63.
Reyes-López CA, González-Mondragón E, Benítez-Cardoza CG, Chánez-Cárdenas ME, Cabrera N, Pérez-Montfort R, et al. The conserved salt bridge linking two C-terminal β/α units in homodimeric triosephosphate isomerase determines the folding rate of the monomer. Proteins. 2008;72:972–9.
Hess B, Kutzner C, van der Spoel D, Lindahl E. GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable molecular simulation. J Chem Theory Comput. 2008;4:435–47.
Oostenbrink C, Villa A, Mark AE, van Gunsteren WF. A biomolecular force field based on the free enthalpy of hydration and solvation: the GROMOS force-field parameter sets 53A5 and 53A6. J Comput Chem. 2004;25:1656–76.
Li H, Robertson AD, Jensen JH. Very fast empirical prediction and interpretation of protein pKa values. Proteins. 2005;61:704–21.
Hess B, Bekker H, Berendsen HJC, Fraaije JGEM. LINCS: a linear constraint solver for molecular simulations. J Comput Chem. 1997;18:1463–72.
Darden T, York D, Pedersen L. Particle mesh Ewald: An N*log(N) method for Ewald sums in large systems. J Chem Phys. 1993;98:10089–92.
Privalov PL, Tiktopulo EI, Venyaminov SY, Griko YV, Makhatadze GI, Khechinashvili NN. Heat capacity and conformation of proteins in the denatured state. J Mol Biol. 1989;205:737–50.
Arroyo-Reyna A, Hernández-Arana A. The thermal unfolding of stem bromelain is consistent with an irreversible two-state model. Biochim Biophys Acta. 1995;1248:123–8.
Manavalan P, Johnson WC. Sensitivity of circular dichroism to protein tertiary structure class. Nature. 1983;305:831–2.
Van Holde KE, Johnson WC, Ho PS. Principles of Physical Biochemistry. New Jersey: Prentice Hall International; 1998.
Campbell ID, Dwek RAR. Biological Spectroscopy. Menlo Park, CA: The Benjamin/Cummings Publishing Company; 1984.
Gutfreund H. Kinetics for the Life Sciences: Receptors, Transmitters and Catalysts. Cambridge, UK: Cambridge University Press; 1995.
Szabo ZG. Kinetic characterization of complex reaction systems. In: Banford CH, Tipper CFH, editors. Comprehensive Chemical Kinetics. Volume 2. Amsterdam: Elsevier; 1969. p. 1–80.
Zaman MH, Sosnick TR, Berry RS. Temperature dependence of reactions with multiple pathways. Phys Chem Chem Phys. 2003;5:2589–94.
Toledo-Núñez C, López-Cruz JI, Hernández-Arana A. Thermal denaturation of a blue-copper laccase: Formation of a compact denatured state with residual structure linked to pH changes in the region of histidine protonation. Biophys Chem. 2012;167:26–32.
Ausili A, Scire A, Damiani E, Zolese G, Bertoli E, Tanfani F. Temperature-induced molten globule-like state in human α1-acid glycoprotein: An infrared spectroscopic study. Biochemistry. 2005;44:15997–6006.
Fabiani E, Stadler AM, Madern D, Koza MM, Tehei M, Hirai M, et al. Dynamics of apomyoglobin in the α-to-β transition and of partially unfolded aggregated protein. Eur Biophys J. 2009;38:237–44.
GC JB, Bhandari YR, Gerstman BS, Chapagain PP. Molecular dynamics investigations of the α-helix to β-barrel conformational transformation in the RfaH transcription factor. J Phys Chem B. 2014;118:5101–8.
Kaur H, Sasidhar YU. Molecular dynamics study of an insertion/duplication mutant of bacteriophage T4 lysozyme reveals the nature of α-β transition in full protein context. Phys Chem Chem Phys. 2013;15:7819–30.
This work was funded in part by CONACYT, México (SEP-CONACYT 2007-80457 and SEP-CONACYT 2012-181049). ALP received a doctoral fellowship from CONACYT, México (208217). The authors thank Dr. Ponciano García-Gutiérrez (Laboratorio Interdivisional de Espectrometría de Masas, UAM-Iztapalapa) for obtaining the mass spectrum of yTIM.
Área de Biofisicoquímica, Departamento de Química, Universidad Autónoma Metropolitana-Iztapalapa, San Rafael Atlixco 186, Iztapalapa, D.F. 09340, Mexico
Ariana Labastida-Polito, Menandro Camarillo-Cadena, Rafael A. Zubillaga & Andrés Hernández-Arana
Departamento de Bioquímica, Facultad de Medicina, Universidad Nacional Autónoma de México, Coyoacán, D.F. 04510, Mexico
Georgina Garza-Ramos
Correspondence to Andrés Hernández-Arana.
ALP and MCC carried out most of the spectroscopic experiments, performed part of the data analysis, and participated in the experimental design and interpretation of results. GGR helped with the experiments, data analysis, and interpretation of results. RAZ performed the molecular dynamics simulations, participated in the interpretation of results, and helped in drafting the manuscript. AHA conceived of the study, participated in its design, carried out a major part of data analysis, and drafted the manuscript. All authors read and approved the final manuscript.
Mass Spectrum of isolated yTIM. (PDF 72 kb)
Native and thermally unfolded hen-egg lysozyme CD spectra. (PDF 67 kb)
Kinetics of yTIM unfolding at pH 6.7, as followed by CD. (PDF 210 kb)
Far-UV CD spectra of yTIM near the end of the unfolding kinetics process. (PDF 89 kb)
Kinetics of yTIM unfolding detected by light absorption. (PDF 142 kb)
Derivation of eqns. 2 and 3 in text. (PDF 113 kb)
Refolding kinetics of yTIM as followed by CD. (PDF 133 kb)
Molecular dynamics simulations of yTIM unfolding. (PDF 113 kb)
Labastida-Polito, A., Garza-Ramos, G., Camarillo-Cadena, M. et al. Complex kinetics and residual structure in the thermal unfolding of yeast triosephosphate isomerase. BMC Biochem 16, 20 (2015). https://doi.org/10.1186/s12858-015-0049-2
Keywords: Circular Dichroism Spectrum, Fast Phase, Triosephosphate Isomerase
December 2019, 12(6): 1313-1327. doi: 10.3934/krm.2019051
Focusing solutions of the Vlasov-Poisson system
Katherine Zhiyuan Zhang
Department of Mathematics, Brown University, 151 Thayer Street, Providence, RI 02912, USA
Received February 2019; Revised May 2019; Published September 2019.
We study smooth, spherically symmetric solutions to the Vlasov-Poisson system and the relativistic Vlasov-Poisson system in the plasma physical case. We construct solutions whose charge densities and electric fields initially have arbitrarily small $ C^k $ norms ($ k \geq 1 $) but attain arbitrarily large $ L^\infty $ norms at some later time.
Keywords: Vlasov-Poisson equation, focusing solution, blow-up type behavior, spherically symmetric, kinetic theory.
Mathematics Subject Classification: 35Q83, 35B44, 35B40, 65M25.
Citation: Katherine Zhiyuan Zhang. Focusing solutions of the Vlasov-Poisson system. Kinetic & Related Models, 2019, 12 (6) : 1313-1327. doi: 10.3934/krm.2019051
January 2017, 22(1): 187-198. doi: 10.3934/dcdsb.2017009
Z-Eigenvalue Inclusion Theorems for Tensors
Gang Wang 1, Guanglu Zhou 2,* and Louis Caccetta 2
School of Management Science, Qufu Normal University, Rizhao, Shandong 276826, China
Department of Mathematics and Statistics, Curtin University, Perth, Australia
* Corresponding author: Guanglu Zhou
Received December 2015; Revised May 2016; Published December 2016.
Fund Project: The first author is supported by the Natural Science Foundation of Shandong Province (grant ZR2016AM10) and the Fundamental Research Funds for Qufu Normal University (grants xkj201415 and xkj201314).
In this paper, we establish $Z$-eigenvalue inclusion theorems for general tensors, which reveal some crucial differences between $Z$-eigenvalues and $H$-eigenvalues. As an application, we obtain upper bounds for the largest $Z$-eigenvalue of a weakly symmetric nonnegative tensor, which are sharper than existing upper bounds.
Keywords: Z-eigenvalue inclusion sets, H-eigenvalue inclusion sets, weakly symmetric nonnegative tensors, largest Z-eigenvalue, spectral radius.
Mathematics Subject Classification: Primary: 15A18, 15A42; Secondary: 15A6.
Citation: Gang Wang, Guanglu Zhou, Louis Caccetta. Z-Eigenvalue Inclusion Theorems for Tensors. Discrete & Continuous Dynamical Systems - B, 2017, 22 (1) : 187-198. doi: 10.3934/dcdsb.2017009
Table 1.
$\mathcal{L}_{1,2}(\mathcal{A})=\{\lambda\in C: |\lambda|\leq 4\}$  $\mathcal{L}_{1,3}(\mathcal{A})=\{\lambda\in C: |\lambda|\leq \frac{3+\sqrt{29}}{2}\}$
$\mathcal{L}_{2,1}(\mathcal{A})=\{\lambda\in C: |\lambda|\leq 4\}$  $\mathcal{L}_{2,3}(\mathcal{A})=\{\lambda\in C: |\lambda|\leq 4\}$
$\mathcal{L}_{3,1}(\mathcal{A})=\{\lambda\in C: |\lambda|\leq 2+2\sqrt{2}\}$  $\mathcal{L}_{3,2}(\mathcal{A})=\{\lambda\in C: |\lambda|\leq 5\}$
$\mathcal{M}_{1,2}(\mathcal{A})=\{\lambda\in C: 3\leq |\lambda|\leq 4\}$  $\mathcal{M}_{1,3}(\mathcal{A})=\{\lambda\in C: |\lambda|\leq \frac{7+\sqrt{5}}{2}\}$
$\mathcal{M}_{2,1}(\mathcal{A})=\{\lambda\in C: 2\leq |\lambda|\leq 4\}$  $\mathcal{M}_{2,3}(\mathcal{A})=\{\lambda\in C: |\lambda|\leq 4\}$
$\mathcal{M}_{3,1}(\mathcal{A})=\{\lambda\in C: 3+\sqrt{3}\leq |\lambda|\leq 5\}$  $\mathcal{M}_{3,2}(\mathcal{A})=\{\lambda\in C: 3\leq |\lambda|\leq 5\}$
$\mathcal{N}_{1,2}(\mathcal{A})=\{\lambda\in C: |\lambda|\leq \frac{3+\sqrt{21}}{2}\}$  $\mathcal{N}_{1,3}(\mathcal{A})=\{\lambda\in C: |\lambda|\leq \frac{3+\sqrt{29}}{2}\}$
$\mathcal{N}_{2,1}(\mathcal{A})=\{\lambda\in C: |\lambda|\leq 4\}$  $\mathcal{N}_{2,3}(\mathcal{A})=\{\lambda\in C: |\lambda|\leq \frac{3+\sqrt{29}}{2}\}$
$\mathcal{N}_{3,1}(\mathcal{A})=\{\lambda\in C: |\lambda|\leq 2+2\sqrt{2}\}$  $\mathcal{N}_{3,2}(\mathcal{A})=\{\lambda\in C: |\lambda|\leq 4\}$
Deceleration of probe beam by stage bias potential improves resolution of serial block-face scanning electron microscopic images
James C. Bouwer, Thomas J. Deerinck, Eric Bushong, Vadim Astakhov, Ranjan Ramachandra, Steven T. Peltier & Mark H. Ellisman
Advanced Structural and Chemical Imaging, volume 2, Article number 11 (2016)
Serial block-face scanning electron microscopy (SBEM) is quickly becoming an important imaging tool to explore three-dimensional biological structure across spatial scales. At probe-beam-electron energies of 2.0 keV or lower, the axial resolution should improve, because there is less primary electron penetration into the block face. More specifically, at these lower energies the interaction volume is much smaller, and therefore surface detail is more highly resolved. However, the backscattered electron yield for metal contrast agents and the backscattered electron detector sensitivity are both sub-optimal at these lower energies, thus negating the gain in axial resolution. We found that a negative voltage (reversal potential) applied to a modified SBEM stage creates a tunable electric field at the sample. This field can be used to decrease the probe-beam landing energy and, at the same time, alter the trajectory of the signal to increase the signal collected by the detector. With decelerated, low landing-energy electrons, we observed that the probe-beam penetration depth was reduced to less than 30 nm in epoxy-embedded biological specimens. Concurrently, a large increase in recorded signal occurred due to the re-acceleration of BSEs in the bias field towards the objective pole piece, where the detector is located. By tuning the bias field, we were able to manipulate the trajectories of the primary and secondary electrons, enabling the spatial discrimination of these signals using an advanced ring-type BSE detector configuration or a standard monolithic BSE detector coupled with a blocking aperture.
Serial block-face scanning electron microscopy (SBEM) has proved to be a remarkable technique for imaging at moderate lateral and axial resolution (approximately 10 and 40 nm, respectively) across large fields of view spanning many hundreds of microns of sample. The technique is elucidating biological processes: selectively stained cells can be located within large fields of view [1–3], and small details can be followed across multiple spatial scales. It has been especially useful for tracking neurons across large distances [4]. One SBEM study that tracked mitophagy events from neurons in their neighboring astrocytes [5], in particular, has fundamentally changed how scientists view the role played by astrocytes in glaucoma.
The SBEM platform comprises an ultramicrotome embedded within the scanning electron microscope (SEM) chamber. Originally conceived by Leighton [6], the technique became mainstream after Denk and Horstmann developed an automated platform for collecting 3D volumes [7]. With SBEM, the embedded microtome removes ~30–80-nm-thick sections from the sample block, which is mounted on a fixed rivet. With each "slice", the electron beam raster-scans the newly exposed block face, and the BSEs are collected to create an image. A 3D volume is obtained by stacking the images of the block face acquired after each microtomy cycle.
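In outline, the acquisition reduces to a cut-and-image loop whose per-slice images are stacked into an anisotropic volume. The sketch below is illustrative only; `cut_section` and `raster_scan_bse` are hypothetical stand-ins for the vendor's microtome and scan-control calls, not an actual 3View/DigitalMicrograph API.

```python
import numpy as np

def acquire_sbem_volume(cut_section, raster_scan_bse, n_slices, slice_thickness_nm=50):
    """Illustrative SBEM loop: remove one section, image the fresh block face,
    and stack the per-slice BSE images into a 3D volume. The two callables are
    placeholders for hardware-control functions supplied by the instrument software."""
    images = []
    for _ in range(n_slices):
        cut_section(slice_thickness_nm)        # microtome removes a ~30-80 nm section
        images.append(raster_scan_bse())       # 2D BSE image of the exposed face
    # Lateral voxel size = scan pixel size; axial voxel size = slice thickness.
    return np.stack(images, axis=0)

# Dry-run with dummy callbacks (no microscope required):
volume = acquire_sbem_volume(lambda thickness_nm: None,
                             lambda: np.zeros((512, 512), dtype=np.uint16),
                             n_slices=10)
print(volume.shape)  # (10, 512, 512)
```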
The platform improved by Denk and Horstmann was commercialized by Gatan, Inc., as the 3View system. The latter has been integrated into multiple SEM platforms across manufacturers and features full computer automation. In 2015, a competing SBEM platform, the Teneo VS, was released by FEI Company.
The focused ion beam milling technique (FIB-SEM) is analogous to SBEM, but the block face is vaporized by bombardment with heavy ions to remove successive layers of sample [8, 9]. As with SBEM, FIB-SEM backscattered electron images are collected after each milling step and stacked to form a 3D reconstruction.
In spite of their different methods for removing the exposed block face, SBEM and FIB-SEM both use SEM backscattered electron imaging, and thus improvements in specimen preparation and detectors are germane to both technologies. In this study, we concentrated on determining the theoretical and experimental factors in BSE collection critical to further improvements, particularly the use of primary-beam reversal potentials to improve detection efficiency and image resolution.
It has long been known that applying a negative potential to the sample reduces beam-landing energies and, thereby, reduces beam damage to the specimen and improves surface detail. This understanding informed the design of the low-energy electron microscope (LEEM) [10, 11], which has supported complex studies in materials science [12–15]. However, in LEEM, the sample is generally irradiated with a flood beam, and complex magnetic sector plates are required to separate the electrons leaving the sample from the primary beam. There have been only a few studies of stage bias in a conventional SEM and, specifically, of its relevance to imaging biological samples [9, 16, 17]. For imaging biological samples, the secondary electron (SE) signal generally provides unsatisfactory contrast; BSE imaging, however, provides a clear contrast between the heavy-metal stain and unstained structures [7]. It has been pointed out that, although beam deceleration in SBEM improves the contrast and resolution of the BSE image, it also leads to severe image artifacts whose cause is unclear [9]. We believe that a comprehensive study of its advantages and pitfalls is needed to enable more widespread use of beam-deceleration techniques in SBEM. This manuscript addresses these issues.
Specimen mounting
Small (1 mm × 1 mm × 0.5 mm) pieces of resin-embedded tissues were mounted on aluminum specimen pins (Gatan, Inc., Pleasanton, CA, USA) using cyanoacrylic glue and precision trimmed with a glass knife to a rectangle approximately 0.5 mm × 1.0 mm, so that tissue was exposed on all four sides. Silver paint (Ted Pella, Inc., Redding, CA, USA, http://www.tedpella.com) was used to electrically ground the edges of the tissue block to the aluminum pin, taking care not to get paint on the block face or edges of the embedded tissue to be sectioned. The entire specimen was then sputter coated with a thin layer of gold/palladium. After the top layer of gold/palladium block was removed by the ultramicrotome, the tissue morphology became visible by BSE imaging. The remaining coating on the edges of the block served to reduce charging and did not interfere with imaging.
Implementing the stage bias potential on two SEM/3View platforms
The initial experiments using stage biasing to achieve deceleration were performed on a FEI Quanta 200 FEG scanning electron microscope on loan from FEI. The Quanta FEG was equipped with a high-precision dc power supply to apply a negative-bias potential to the sample. The Quanta FEG was also equipped with a 3View system from Gatan, Inc. The backscatter electron detectors (BSDs) included a monolithic backscatter electron diode detector from Gatan, Inc., as well as a concentric backscatter (CBS) detector from FEI.
These two backscatter detectors are both approximately 9 mm in diameter and include a 1-mm central hole to allow the probe beam to pass through. The two detectors were also positioned to subtend a similar solid angle for BSEs.
The monolithic BSE detector from Gatan, Inc., was read out, though a single output into the amplification and digitization circuitry. The FEI CBS detector, on the other hand, reads signals from the concentric rings of active regions through four individually configurable amplifiers. The geometry of this device allows for the spatial differentiation of signal (i.e., signal reaching the inner vs. the outer rings is read out separately). When used in conjunction with a biasing field, this device is advantageous, as it allows for the ability to spatially discriminate SE from BSE signals (e.g., to eliminate SE signal contamination when performing BSE imaging).
A similar strategy for signal differentiation was implemented on a Zeiss Sigma variable-pressure SEM platform equipped with a Gatan 3View system and a monolithic BSE detector. Gatan, Inc., also provided a high-precision dc power supply to bias the sample potential with a stable and accurate negative voltage. In this case, we used a custom-made aperture to block unwanted SEs from striking the BSE detector and polluting the BSE signal. The SE-blocking aperture was machined to our specifications as determined by the gun-acceleration potential, the negative-bias potential, and the working distance. The blocking aperture was attached to the grounded shielding of the Gatan BSE detector using silver paint (Ted Pella, Inc.) and suspended above the detector by thin wires.
To apply a bias voltage to the sample, we implemented modifications to electrically isolate the microtome and the sample. The 3View ultramicrotomes on both platforms were modified by machining an insulating ceramic holder that mounts where a metal holder usually sits. The sample holder and sample pin were electrically conductive and connected to a highly stable, adjustable dc voltage source. A variable-bias voltage of 0 to −5000 V could then be applied to the sample. Figure 1 shows the setup for the various modes of operation: without deceleration using a monolithic BSE detector, with deceleration using the monolithic BSE detector, with deceleration using the CBS detector, and with deceleration operating with a monolithic BSE detector and an SE-blocking aperture. Illustrations of the SE and BSE trajectories as affected by the biasing electric fields are shown along with the illustrations of the equipotential lines during sample biasing.
Scaled diagram of SBEM stage-biasing geometry for various configurations of SBEM and SBEM with deceleration. The aluminum pin with mounted sample sits inside a stainless steel holder. The holder is attached to a high-stability power supply to provide negative-biasing potential. The biased sample, sample pin and stainless steel holder are insulated in a machined ceramic holder. The diamond knife is held at ground potential. Since diamond is a poor conductor, no arcing occurs during cutting at voltages lower than 3.5 keV. The sample working distance is maintained at 6.6 mm during imaging on the FEI Quanta 200 and 8.8 mm on the Zeiss Sigma. a Setup for the conventional SBEM without deceleration. The SEs and BSEs propagate in straight lines without the effect of electric fields. b Layout for deceleration with the monolithic BSE detector. The SE and BSE signals are convolved in the detector, causing artifacts. c The SEs are collected in the central ring of the CBS detector. Since each ring has its own amplifier, the SEs and BSEs can be separated to provide pure BSE images. d Configuration used on a Zeiss Sigma. SEs are collected by an SE-blocking aperture connected to ground, while a pure BSE signal is collected by the Gatan monolithic BSE detector
The metal-stained samples used in these experiments are geometrically complex: they are composed partly of plastic insulator and partly of heavy-metal-stained tissue with a multitude of densities. As a result, the sample permittivity (the ability of a substance to store electrical energy) is quite difficult to model accurately and, thus, needs to be determined experimentally. To do this, we used a special gold electrode tacked to the plastic-embedded sample surface with silver paint to verify the voltage on the sample surface. To confirm that the voltage source was outputting the correct values, we connected an electrode to the sample-mounting pin; the output voltage measured at the aluminum sample pin was confirmed to be accurate to within 99.2 % of the requested output voltage. Once we had verified the output voltage from the source, we measured the voltage at the surface of the plastic-embedded block as a function of the voltage applied to the aluminum pin (the measured surface potential as a function of applied voltage for each block is presented in Fig. 2), and the applied bias was then matched to the appropriate electron-landing-energy, penetration-depth, and cutting-thickness settings.
Plot of the potential applied to the pin versus the measured surface potential. Above −500 V, the measured surface potential is approximately 12 % lower than the voltage applied to the pin
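As a quick reference for choosing bias settings, the landing energy follows from the gun energy and the surface potential. The sketch below is a minimal calculation assuming the ~12 % drop between applied and surface potential reported above; the coupling factor is sample-dependent and should be re-measured for each block.

```python
def landing_energy_keV(gun_energy_keV, applied_bias_V, surface_coupling=0.88):
    """Estimate the probe-electron landing energy at the block face.
    surface_coupling ~0.88 encodes the ~12% difference between the voltage
    applied to the pin and the potential measured at the block surface (Fig. 2);
    it is an assumption that varies with block geometry and permittivity."""
    surface_potential_V = surface_coupling * applied_bias_V       # e.g. -1700 V -> ~-1500 V
    landing = gun_energy_keV - abs(surface_potential_V) / 1000.0  # 1 eV per volt per electron
    # BSEs are re-accelerated by the same field, so they strike the detector
    # with an energy close to gun_energy_keV rather than the landing energy.
    return landing

# Example: 3 keV column energy with -1.7 kV applied to the sample pin
print(round(landing_energy_keV(3.0, -1700.0), 2))  # ~1.5 keV landing energy
```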
Factors affecting BSE image resolution in the SBEM
Scattering of incident electrons in the SEM is governed principally by the probe-beam landing energy and the composition of the scattering substrate. The penetration depth of the probe-beam electrons into the sample (electron range), the lateral scattering of the probe beam and BSEs, and the thickness of the ultramicrotome cuts all limit resolution in the SBEM. According to the Kanaya and Okayama range equation [18], the electron range for carbon is ~60 and ~180 nm for beam energies of 1.5 and 3 keV, respectively.
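The quoted ranges can be reproduced directly from the Kanaya–Okayama expression $R = 0.0276\,A\,E^{1.67}/(Z^{0.89}\rho)$ (R in µm, E in keV, A in g/mol, ρ in g/cm³). The sketch below assumes a graphite-like density of 2.26 g/cm³ for the carbon substrate and uses the ~0.2–0.3 escape-depth factor discussed below as a rough rule of thumb; both numbers are assumptions, not values taken from the simulations in this section.

```python
def kanaya_okayama_range_nm(E_keV, Z=6, A=12.01, rho_g_cm3=2.26):
    """Kanaya-Okayama electron range R = 0.0276*A*E^1.67 / (Z^0.89 * rho) in um.
    Defaults describe a carbon substrate with an assumed graphite-like density."""
    R_um = 0.0276 * A * E_keV**1.67 / (Z**0.89 * rho_g_cm3)
    return R_um * 1000.0  # um -> nm

for E in (1.5, 3.0):
    R = kanaya_okayama_range_nm(E)
    # BSE escape depth is roughly 0.2-0.3 of the full range (see below).
    print(f"{E} keV: range ~{R:.0f} nm, BSE escape depth ~{0.25 * R:.0f} nm")
# Prints ~59 and ~186 nm, consistent with the ~60 and ~180 nm quoted above.
```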
We performed the Monte Carlo simulations of electron–beam interaction with a carbon substrate similar to those detailed in [19]. In the simulations, we used 1 million electrons. Figure 3a, c shows the electron-trajectory plot of a subset of electrons (10,000 electrons) used in the simulation at 1.5 and 3 keV. The BSEs that escape from the surface after successive scattering are plotted in black, and the incident beam electrons that do not escape, i.e., the incident electrons that get deposited in the carbon substrate, are plotted in gray. The electron range predicted by the simulation is slightly lower than the Kanaya and Okayama range: ~50 and 130 nm for beam energies of 1.5 and 3 keV, respectively.
Monte Carlo simulation of electron–beam interaction in a carbon substrate. a Penetration depth of a 1.5-keV electron beam. b Line scan of a BSE signal profile for the 1.5-keV electron beam. c Penetration depth of a 3-keV electron beam. d Line scan of a BSE signal profile for the 3-keV electron beam. The simulations in a and c were performed for 10,000 electrons, with the electrons emerging as BSEs shown in a darker shade. The simulations in b and d were performed at a pixel size of 1 nm for 1 million electrons. An infinitesimally thin electron beam was assumed in all the simulations
The escape depth of the BSEs is much smaller than the electron range and, therefore, provides information from a much shallower region of the sample. Figure 3a, b shows that the escape depth of the BSEs is ~20 and ~40 nm, respectively, for the 1.5- and 3-keV electron beams. The escape depth of the BSEs is generally ~0.2–0.3 times the electron range [20], which is consistent with our simulation and agrees well with our experimental data (Figs. 4, 5).
Determination of BSE penetration depth. Sections of 50-nm (a, c, e) or 30-nm (b, d, f) thickness of pure Durcupan resin were overlaid on one half of the tissue block (visible on the left half of each image). At 3.0 keV, a significant number of BSEs emanated from beneath the 50- and 30-nm-thick sections (a, b). At 2.0-keV accelerating voltage, almost no signal was observed from below the 50-nm section (c), while some was noted from below a 30-nm-thick section (d). At 1.7 keV, no signal was observed from below a 50-nm section (e), while very little was observed from below a 30-nm-thick section (f). Note the decrease in signal to noise as the acceleration voltage is lowered
Electron-penetration depth as a function of probe-beam-landing energy modulated via a decelerating biasing potential. A 30-nm-thick blank plastic section was placed on top of the cerebellum block sample measured in Fig. 4. Images were then collected on the FEI CBS detector with the central ring turned off. The 30-nm section runs diagonally across the middle of the image. Images are inverted, so white shows no backscattering signal. All images were acquired with 3-keV column-electron energy and deceleration appropriate to achieve the electron-landing energy shown in each panel. At a landing energy between 1.5 and 1 keV, the electron-penetration depth drops below 30 nm
Another, important parameter that determines the ability to resolve fine features in BSE imaging is the lateral-energy spread of the BSEs, i.e., how far the BSEs emerge from the incident-beam impact point and the fraction of the incident beam energy they have. The lateral-energy spread was calculated by dividing the area around the beam impact point into pixels of size 1 and 2 nm for the 1.5- and 3-keV beams, respectively, and the total integrated BSE signal emanating from each pixel was computed as the summation of the energy of all the BSEs from the particular pixel. The lateral-energy spread was measured in nm as the distance from the incident-beam impact point that contains 50 % of the total BSE energy. It was found to be 11 nm and 36 nm for the 1.5- and 3-keV beams, respectively (Fig. 3b, d).
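The 50 %-energy radius described above is easy to recompute from any Monte Carlo output that records per-BSE exit positions and energies. The sketch below is a radial-cumulative variant of the pixel-binned calculation described in the text, shown with synthetic data rather than the simulation output behind Fig. 3.

```python
import numpy as np

def lateral_energy_spread_nm(x_nm, y_nm, energy_keV, fraction=0.5):
    """Radius (nm) around the beam impact point that contains `fraction` of the
    total energy carried by the emitted BSEs. Inputs are per-BSE exit coordinates
    and energies, assumed to come from a Monte Carlo run."""
    r = np.hypot(np.asarray(x_nm), np.asarray(y_nm))
    order = np.argsort(r)
    cumulative_energy = np.cumsum(np.asarray(energy_keV)[order])
    idx = np.searchsorted(cumulative_energy, fraction * cumulative_energy[-1])
    return r[order][idx]

# Toy example with synthetic BSE exit data (illustration only):
rng = np.random.default_rng(0)
x, y = rng.normal(0.0, 20.0, 10000), rng.normal(0.0, 20.0, 10000)
E_exit = rng.uniform(0.5, 1.5, 10000)
print(round(float(lateral_energy_spread_nm(x, y, E_exit)), 1))
```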
The only way to physically improve the axial resolution of electron scattering from a theoretical basis is to lower the energy of the probe-beam electrons. However, below 2 keV, beam control is problematic as the electromagnetic fields used to steer the electrons down the column are less stable, and the less-energetic primary beam is more susceptible to external electric and magnetic interference leading to degraded resolution [21]. In addition, at lower accelerating voltages, chromatic aberration can degrade resolution as well. Chromatic aberration is defined as follows:
$$\delta_{\mathrm{c}} = C_{\mathrm{c}} \left( \frac{\Delta E}{E_{0}} \right) \alpha$$
where $\delta_{\mathrm{c}}$ is the disk of least confusion, $C_{\mathrm{c}}$ is the chromatic aberration coefficient, $\alpha$ is the aperture semi-angle, $E_{0}$ is the accelerating energy, and $\Delta E$ is the width of the energy distribution of the beam electrons [22].
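For a sense of scale, the relation can be evaluated numerically; the sketch below uses illustrative values for $C_{\mathrm{c}}$, $\Delta E$, and $\alpha$ (they are not specifications of the instruments used here) to show how the chromatic blur grows as the accelerating energy is lowered.

```python
def chromatic_blur_nm(Cc_mm, dE_eV, E0_keV, alpha_mrad):
    """Disk of least confusion d_c = Cc * (dE / E0) * alpha.
    Units: Cc in mm, dE in eV, E0 in keV, alpha in mrad; returns nm."""
    Cc_nm = Cc_mm * 1.0e6                 # mm -> nm
    ratio = dE_eV / (E0_keV * 1.0e3)      # dimensionless energy spread
    return Cc_nm * ratio * (alpha_mrad * 1.0e-3)

# Illustrative numbers only: Cc = 10 mm, 0.7 eV field-emission energy spread,
# 5 mrad aperture semi-angle, comparing 3 keV and 1.5 keV accelerating energies.
for E0 in (3.0, 1.5):
    print(f"{E0} keV -> chromatic blur ~{chromatic_blur_nm(10.0, 0.7, E0, 5.0):.1f} nm")
```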
Decreasing primary beam-landing energy improves axial resolution
We originally used an FEI Quanta 200 ESEM equipped with a Gatan 3View for our SBEM imaging. The optimal performance for this SBEM was achieved at lower landing energies (down to 1.7 keV). At these energies, we achieved a smaller interaction volume in the sample, which resulted in improved axial resolution and less energy deposition in the specimen. As a result, we were able to cut finer sections. However, at these lower energies, our probe was also subject to increased chromatic aberration effects, and we experienced decreased BSE detector sensitivity.
We initially performed landing-energy experiments by placing a 50- or a 30-nm-thick blank section of Durcupan resin, cut on a Leica UC EM6 ultramicrotome, over a biological sample and then imaging through the layer. As we varied the acceleration voltage, we looked for signal emanating from the layers below. Figure 4 shows a panel of images of the 50-nm blank Durcupan resin overlays on the left and the 30-nm blank section overlays on the right, imaged at 3.0, 2.0, and 1.7 keV. At 3.0 keV, a great deal of signal was observed from below the epoxy layer for both the 50- and 30-nm blank sections. After stepping down to 2.0 keV, the interaction volume was reduced: as expected, much less signal came from the region beneath the 30-nm blank section and almost no signal from the region beneath the 50-nm section. As the acceleration voltage was decreased to 1.7 keV, very few BSEs originating from below 30 nm were detected, demonstrating that we had effectively reduced our penetration depth.
However, image quality was degraded because of decreased detector sensitivity. These low-energy-loss BSEs arise from elastically or nearly elastically scattered electrons: they backscatter from the atomic nuclei of the heavy-metal stain with nearly the same energy as the probe electrons [21, 23]. Since most solid-state backscatter detectors are built from silicon photodiode circuitry with a thin surface passivation layer and metalized electrodes, low-energy BSEs do not deposit much ionization charge in the active region of the diode and are difficult to detect above the inherent detector noise.
Testing the impact of beam deceleration on SBEM resolution
To achieve the low electron-landing energies required for thinner penetration while avoiding detector sensitivity issues, we decelerated the probe-beam electrons at the final stage of imaging after the objective lens with the use of a negative-bias voltage applied at the sample stage.
We repeated the same overlay experiments as shown in Fig. 4, but this time we kept the accelerating voltage fixed at 3.0 keV and adjusted the negative-bias voltage to produce final landing energies between 3.0 and 1.0 keV. Figure 5 shows images of the plastic-embedded sample with a 30-nm blank section overlaid across the surface at various landing energies, achieved by adjusting the negative reversal potential. It is clear from the BSE signal-depth retrieval images that we reduced the penetration depth while retaining good image quality and signal to noise, even at extremely low landing energies. For 30-nm ultramicrotome sectioning, a landing energy between 1.0 and 1.5 keV was optimal. Electrons striking the BSE detector did so with an energy nearly matching the 3-keV instrument accelerating voltage and with high signal to noise: on the order of 20× greater signal than can be achieved at a similar landing energy without the use of the negative-bias voltage.
Repeated imaging of the block face often results in altered viscosity properties and electron-beam-induced damage to the plastic block, causing the block to become difficult to section. By reducing the primary beam-landing energy, we can reduce damage to sample. Moreover, the increase in signal to noise provided by the acceleration of the BSEs reduces the dose per unit area and allows for significantly faster scan rates at higher magnifications. The increase in signal to noise and the ability to operate at lower dose rates should not only benefit SBEM but also other techniques, such as FIB-SEM [8, 24, 25], which is limited by probe-penetration depth and detector sensitivity.
Characterizing detector response as a function of backscatter electron energy
As detailed in the "Methods" section above, the two backscatter detectors (Gatan BSD and FEI CBS) were configured differently. In the conventional backscatter detection mode without deceleration (Gatan BSD), the signal produced in the silicon photo-diode detector drops off significantly as the accelerating voltage is lowered. To determine the rate at which the signal in each detector drops off as a function of electron energy, we measured the detector response to electrons backscattered from a gold substrate relative to a carbon background, i.e., as the difference between the gold and carbon signals.
The FEI Quanta FEG 200 microscope was aligned stepwise at accelerating voltages from 5 to 1 keV, and a Faraday cup (Ted Pella, Inc.) was used to measure the beam current at a fixed spot size for each voltage. By integrating the backscatter-signal difference for the same area of the gold sample and carbon background at each accelerating voltage, we could normalize the backscatter signal for beam-current differences between accelerating voltages. In addition, we normalized the signal according to the backscattering coefficient η for gold, where
$$\eta = N_{\text{backscattered}} / N_{\text{incident}}$$
is the ratio of the number of BSEs to the number of incident electrons. The backscattering coefficient is a function of the electron energy and is well established for gold [26]. Since we did not know the actual gains on the amplifiers or the relationship between detectors, we kept the amplifier gain and brightness settings fixed throughout the measurement process. The signal was determined as the difference between the gold sample and the carbon background.
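The normalization described here amounts to dividing the gold-minus-carbon signal at each accelerating voltage by the measured beam current and by the gold backscattering coefficient η(E), then scaling so that the 5-keV point equals unity. The sketch below uses placeholder arrays, not the measured data behind Fig. 6, and an approximate, nearly constant η for gold; tabulated values [26] should be substituted in practice.

```python
import numpy as np

def normalized_bse_response(E_keV, S_gold, S_carbon, I_beam_pA, eta_Au):
    """Normalize detector response for beam current and the gold backscattering
    coefficient, then scale so the highest-energy point (here 5 keV) is 1.
    All inputs are per-accelerating-voltage arrays."""
    signal = (np.asarray(S_gold, float) - np.asarray(S_carbon, float)) \
             / (np.asarray(I_beam_pA, float) * np.asarray(eta_Au, float))
    return signal / signal[np.argmax(E_keV)]

E      = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # accelerating voltage (keV)
S_gold = np.array([0.05, 0.30, 0.55, 0.80, 1.00])     # placeholder integrated signals
S_carb = np.array([0.01, 0.05, 0.10, 0.15, 0.20])
I_pA   = np.array([90.0, 100.0, 110.0, 120.0, 130.0]) # placeholder Faraday-cup currents
eta    = np.full(5, 0.48)                             # approximate eta for Au (assumed)
print(normalized_bse_response(E, S_gold, S_carb, I_pA, eta))
```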
Normalized results for both detectors on the same SEM are given in Fig. 6, with the strong 5-keV signal set to unity. Because the two detectors have different detection configurations, the values in Fig. 6 are unit-less, and the graph should not be used to compare signal to noise between the two detectors. Instead, it is intended to demonstrate a linear drop in signal down to about 2.0 keV, below which the decrease becomes non-linear, presumably because the electrons penetrate less effectively through the surface passivation layer on the silicon diodes of the Gatan BSD.
Normalized detector backscatter signal in arbitrary units for fixed detector gain and offset. The open circles show the normalized response of the FEI CBS detector as a function of electron energy without any sample biasing. The open squares show the normalized response of the Gatan backscatter detector as a function of electron energy without sample biasing. The filled diamonds show the normalized response of the CBS detector with 3-keV gun-acceleration voltage and deceleration voltages sufficient to achieve 1- and 1.5-keV landing energy. The closed triangles show the normalized response of the CBS detector for 4-keV acceleration voltage and deceleration voltages sufficient to achieve 1- and 1.5-keV landing energies. The results at 1-keV landing energy show an improvement of ×16.5 in signal for 3-keV column-electron energy and a ×23.6 increase in signal for 4-keV column-electron energy. The results at 1.5-keV landing energy show an improvement in backscattered signal of ×4.5 with 3-keV column-electron energy, and a ×6.5 increase in signal at 4-keV column-electron energy
The results of both detector responses as a function of gun-accelerating voltage are also shown in Fig. 6. To compare the two detectors, we kept the amplifier gains and brightness settings constant for both detectors, and the central ring of the CBS was turned off.
The results show that, depending on the gun-acceleration potential and the deceleration voltage, signal-level increases of up to 20× and 6× or greater are possible at 1- and 1.5-keV landing energies, respectively, for decelerated probe-beam electrons versus conventional SBEM. In addition, the signals in the CBS detector used with deceleration are actually higher than those obtained where deceleration was not applied. We attribute this to the bias fields helping to collect and collimate the BSEs toward the detector.
To demonstrate the increase in signal produced by the acceleration of BSEs when the reversal potential is used, we used the FEI CBS detector with the central ring turned off (the central ring becomes saturated with the SE signals and should be turned off when detecting BSEs with deceleration). The Gatan BSE detector was not used in this deceleration experiment, because the detector becomes saturated from accelerated SEs.
With the FEI CBS detector's central ring turned off, we explored the BSE signals from the gold sample with 3- and 4-keV gun-accelerating energies and deceleration appropriate to achieve 1- and 1.5-keV landing energies. Although the electrons are decelerated as they travel from the objective pole piece down to the sample, electrons leaving the sample as BSEs are re-accelerated by the same amount. Therefore, the BSEs reach the detector with nearly the same energy as the electron beam at the pole piece, minus some energy lost to inelastic collisions with the atoms of the sample. For example, a 3-keV electron will be decelerated to a 1.5-keV landing energy in a −1.5-keV bias potential, and the BSEs will be re-accelerated to an energy of almost 3 keV before striking the detector.
Figure 7 compares the images acquired with and without beam deceleration. Both images were acquired with 1.5-keV landing energy. However, in the panel of Fig. 7b, a 3-keV column energy combined with a −1.5-keV deceleration bias results in a dramatic increase in signal to noise. In principle, as the column-electron energy is increased and the deceleration fields are increased, the backscatter signal should be significantly improved.
Comparison of two images of a cerebellum block with a 30-nm-thick blank plastic overlay acquired with 1.5-keV probe-beam-landing energy. a Image acquired with 1.5-keV high tension and no deceleration. b Image acquired with 3-keV high tension and −1.5-keV deceleration potential. Both images show a near equivalent penetration depth. However, b shows a significant improvement in signal as a result of BSE re-acceleration
We explored a number of column-beam energies and deceleration potentials and found that the best results were obtained with deceleration between −1 and −2 keV and with a gun-accelerating energy of 2.5 keV and higher. For column-beam energies less than 2 keV, the beam current drops off quickly, and the image quality and stability are less than ideal. For deceleration voltages less than −1 keV, the electron re-acceleration was not as dramatic. For deceleration voltages higher than −2.5 keV, focus and astigmatism stability were decreased, and sample movements and distortions became problematic. For this geometry, we obtained our best results with moderate deceleration. However, we feel that, by careful design of the deceleration hardware and geometry, it should be possible to mitigate stability issues, enabling long runs at much higher bias voltages.
Using the biasing field to spatially differentiate SE from BSE signals
The SBEM technique relies primarily on BSE signals for image generation. The image contrast arises from differences in electron scattering from the lighter Z-number atoms (hydrogen, carbon, nitrogen and oxygen) in the tissue and the plastic-embedding media, and the high-Z elements (osmium, uranium and lead) from the stains used in the specimen-preparation protocol. SEs are also created at the sample surface but usually have energies lower than 50 eV and, thus, are not detected in a photo-diode backscatter detector. Once deceleration is applied, these same low-energy secondary electrons are accelerated by the electric fields, striking the detector with sufficient energy to produce a measurable signal.
In Fig. 8, we illustrate the various scattering mechanisms for detectable SEM signals. The primary electron-probe beam (PE) strikes the sample where it can produce secondary electrons near the surface with sufficient energy to escape the surface. The SEs produced by interaction with the primary electron are labeled SE1. A primary beam electron will typically scatter within the sample many times, producing additional SEs, which lack the energy to escape and be detected. If the PE is backscattered near the surface, it can create additional SEs that can escape the surface to be detected. These signals are labeled SE2. A third type of SE signal can be created through BSEs interacting with the chamber or pole-piece surface. In the conventional scanning electron microscopy, where the microscope is operated typically in the energy range of 5–30 keV, BSEs can scatter out of the sample from much deeper inside the substrate due to significantly longer mean free paths than the low-energy secondary electrons, and, as such, BSEs are considered to be lower resolution. In the figure, t-SE represents the depth from which 50 eV and lower-energy SEs can escape. For most materials, the depth from which SEs can escape is on the order of 5–15 nm. However, as the landing energies of the electrons are reduced to about 1 keV, the escape depth of the BSEs approaches a few tens of nanometer, similar to the SE signal [23], and, therefore, carries higher resolution information of the sample.
Relative contribution of backscattered and secondary electrons from the sample to the detection signal. Type SE1 electrons are created by interactions with the primary electron (PE) beam. Type SE2 electrons are created by interactions with the BSE. Low-energy SE electrons (<50 eV) typically have mean free paths between scattering events of 5–15 nm, depending on the sample composition, and do not escape from the sample unless they are produced extremely close to the specimen surface. BSEs can scatter out from deep inside the sample and be detected
When imaging with the monolithic Gatan BSE detector and using deceleration voltages of −1 keV and greater, the SE signals dominated the images. Using the FEI CBS detector and looking at the signals in the individual concentric rings, we observed that these low-energy SEs, which are much more easily captured by the bias field, tend to be focused toward the center of the detector. The higher energy BSEs, by contrast, tend to be focused toward the outer regions of the backscatter detector. These observations are in agreement with the calculated trajectories of BSEs and SEs under the influence of a biasing field, as shown in Fig. 9, and are consistent with data presented elsewhere [27]. Using the configurable FEI CBS detector, we could separate the SE signal from the BSE signal by reading from the outer three rings for BSE signals, and the inner-most ring for SE signals. Figure 1c illustrates this setup as used in combination with deceleration via stage biasing.
Calculation of the trajectories of the scattered electrons for BSEs (blue) and SEs (green) in a biasing field. Trajectories are plotted for electrons scattered from a point on the surface equally distributed through an angle of 0°–70° from the beam axis. a 3-keV gun-accelerating voltage with a −1.5-keV decelerating potential, resulting in a 1.5-keV beam-electron-landing energy. b 4.5-keV gun-accelerating voltage with a −3-keV decelerating potential, resulting in a 1.5-keV beam-electron-landing energy. The simulation matches well with the observed signal in the various rings of the CBS detector as a function of applied bias voltage
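The qualitative behavior in Fig. 9, with SEs pulled back close to the optical axis while BSEs land at larger radii, can be reproduced with a much cruder model: a uniform accelerating field between the biased sample and the grounded pole piece, with lens and fringe fields ignored. The sketch below evaluates the resulting parabolic trajectories analytically; it is a rough approximation under those assumptions, not the simulation used for Fig. 9, and the bias and gap values are representative rather than measured.

```python
import numpy as np

E_CHARGE, M_E = 1.602e-19, 9.109e-31  # electron charge (C) and mass (kg)

def landing_radius_mm(E_exit_eV, theta_deg, V_bias_V=1500.0, gap_mm=6.6):
    """Radius at the pole-piece/detector plane for an electron leaving the biased
    sample with kinetic energy E_exit_eV at angle theta from the beam axis,
    assuming a uniform field V_bias/gap between sample and grounded pole piece."""
    theta = np.radians(theta_deg)
    v0 = np.sqrt(2.0 * E_exit_eV * E_CHARGE / M_E)
    vz, vr = v0 * np.cos(theta), v0 * np.sin(theta)
    a = E_CHARGE * V_bias_V / (M_E * gap_mm * 1e-3)       # axial acceleration (m/s^2)
    d = gap_mm * 1e-3
    t = (-vz + np.sqrt(vz**2 + 2.0 * a * d)) / a          # time to cross the gap
    return vr * t * 1e3                                   # radial displacement (mm)

for label, E_exit in (("10 eV SE", 10.0), ("50 eV SE", 50.0), ("1.5 keV BSE", 1500.0)):
    r30, r60 = (landing_radius_mm(E_exit, th) for th in (30.0, 60.0))
    print(f"{label:12s} lands ~{r30:.1f}-{r60:.1f} mm off axis (30-60 deg emission)")
# SEs stay within ~1-2 mm of the axis (inner ring), while BSEs spread to several
# mm (outer rings), consistent with the ring-by-ring observations above.
```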
Figure 10 shows typical images collected with the CBS detector at 3-keV gun-accelerating voltage and −1.5-keV deceleration voltage. Figure 10a, using all rings of the detector, shows strong topographic contrast in addition to Z contrast, indicating that it is a mix of SE and BSE signals. In comparison, Fig. 10b, using only the outer three rings, shows only Z contrast with almost no topographic details, confirming that it is nearly a pure BSE signal image. The SE signal mixed with some BSE signal is clearly separated from the pure backscatter signal collected in the outer rings. Without this separation of BSE and SE signals, it would not be possible to use the deceleration techniques to lower electron-landing energy and simultaneously increase signal without obtaining charging type artifacts in the images, such as those shown in Fig. 10a. We attribute this charging artifact to the fact that low-energy SEs are easily affected by surface charging and the local electric fields created as a result of sample biasing. Enlarging the hole in the Gatan BSE detector or masking the center of the detector and using the deceleration fields to steer the SEs toward the center hole or mask on the detector will produce a similar separation of BSEs from SEs.
Typical images collected on the FEI CBS detector with 1.5-keV deceleration voltage. The separation of SE signals from BSE signals is possible by turning off the inner-most ring and using only the outer rings of the FEI CBS detector. Images were collected at 3-keV gun-accelerating electron energy and −1.5-keV deceleration voltage. a Signal from the central ring only. The signal is composed of a mix of BSEs and SEs, but it is dominated by SEs. b Signal from the outer three rings of the CBS detector. The signal contains almost no SE signal. With the monolithic Gatan BSE detector, separation of the BSE and SE signals with deceleration is not possible unless the SEs are blocked. The intensity has been inverted to produce TEM-like contrast
Enhanced SBEM volume acquisition using deceleration
Using our improved deceleration protocol, we collected large-scale volume data sets using the 3View-equipped FEI Quanta FEG SEM. First, we wanted to ensure that the sample could be cut reliably with a large sample bias applied and the diamond knife sitting at ground. Figure 11 shows a reconstructed volume comprised of serial block-face images collected with 3-keV electron-probe energy and 1.5-keV landing energy. The ultramicrotome cuts were 50 nm thick. The images were collected at 3200× magnification and 5-µs dwell time.
10 µm × 10 µm × 4 µm volume of rodent cerebellum collected by SBEM using sample biasing. Imaging was performed at 3.0-keV column-electron energy and a landing energy of 1.5 keV. The beam current was 129 pA at the sample, with a magnification of 3200× and a 5-µs dwell time per pixel. The cutting depth on this sample was 50 nm
Adding an aperture to the monolithic BSD enabled the discrimination of BSEs and SEs similar to that of the CBS detector
With the ability to separate BSE and SE signals by turning off the inner ring and using only the outer rings of FEI's concentric-ring BSE detector in combination with moderate deceleration, we then took the next step to determine whether a simplified setup could be used, employing a monolithic Gatan BSE detector with a blocking aperture to remove the large SE signal from our images.
Careful matching of the blocking-aperture diameter to the detector geometry and the deceleration potential is critical for BSE/SE separation. We found that the most important parameters were the gun-accelerating voltage, the decelerating voltage and landing energy of the probe beam, the distance from the sample to the detector, and the diameter of the SE-blocking aperture.
Weighing these considerations, we built a simple prototype blocking aperture by modifying a standard TEM aperture, a 5.0-mm outer-diameter ring with a central hole 0.8 mm in diameter. This custom SE-blocking aperture was centered above the diode and attached to the grounded shielding of the Gatan BSE detector. With this setup, any SEs striking the aperture should be conducted away quickly, leaving only BSEs to be detected by the BSE diode.
As proof of principle, we tested this setup with a 3-keV column energy and a −1.0-kV decelerating potential between the pole piece and the sample (8.0-mm working distance). Figure 12 shows a 2D slice through a volume collected on the 3View-equipped Zeiss Sigma using this setup. The images remained quite stable during the cutting and imaging process. A total of 150 sections were cut at 60-nm thickness, with a 1.5-µs dwell time, 55-pA beam current, 1.5K× magnification, and a 4K × 4K raster.
3D cross-section view of a 40 µm × 40 µm × 8 µm volume of rodent cerebellum collected by SBEM using a 3-keV primary beam energy and a −1-kV sample bias to achieve a 2-keV landing energy. The volume was collected on a Zeiss Sigma using the standard monolithic BSE detector from Gatan and an SE-blocking aperture with a 5-mm outer diameter and a 0.8-mm inner diameter, appropriate for blocking SEs from a 3-keV primary beam, a −1-kV bias potential, and an 8.9-mm working distance. The volume was collected at 1.5K× magnification, 1.5-µs dwell time, and a 4K × 4K raster scan at a beam current of 55 pA
The addition of new autofocus and autostigmation routines to Gatan's microscopy suite should enable stable imaging for weeks at a time using this technique. Future development of deceleration-based detection would also benefit from the fabrication of a variety of blocking apertures to better match aperture geometry to the deceleration potential employed.
This study demonstrates that using beam deceleration in SBEM results in higher-quality images. The shallower beam-penetration depth achieved with probe-beam deceleration improved axial resolution. Relatively low deceleration potentials produced a remarkable increase in total signal collection, 20-fold or higher compared with conventional SEM imaging at the same landing energies. This increase in signal allowed much lower interrogation currents on the sample, resulting in improved sample-sectioning properties.
Here, we demonstrated a procedure that mitigates the degradation of image quality caused by low-energy SEs when deceleration is used. By removing SEs from the images, using either the FEI concentric backscatter detector or a simple SE-blocking aperture over a monolithic detector, beam deceleration becomes a feasible approach to achieving limited penetration depths with sufficient sample and imaging stability to produce large volumes of data. Additional improvements in sample preparation, particularly the use of conductive epoxy resins formulated for SBEM, should further improve beam deceleration by producing uniform electric fields at the sample surface and reducing surface-charging effects.
We believe that this study demonstrates the first of many improvements that can be made to SBEM imaging. We observed drift in image position, astigmatism, and focus changes, and these artifacts still represent challenges that need to be addressed to optimize volume quality. We found, however, that they were not significant obstacles at lower deceleration voltages, and we now believe that, with improved system geometry and the addition of the autofocus software now available from Gatan, these instabilities can be addressed easily, enabling significant improvements in volumetric reconstructions.
SBEM:
serial block-face scanning electron microscopy
BSE:
backscattered electron
BSD:
backscattered electron detector
SE:
secondary electrons
Shu, X., Lev-Ram, V., Deerinck, T.J., Qi, Y., Ramko, E.B., Davidson, M.W., Jin, Y., Ellisman, M.H., Tsien, R.Y.: A genetically encoded tag for correlated light and electron microscopy of intact cells, tissues, and organisms. PLoS Biol 9(4), e1001041 (2011)
West, J.B., Fu, Z., Deerinck, T.J., Mackey, M.R., Obayashi, J.T., Ellisman, M.H.: Structure-function studies of blood and air capillaries in chicken lung using 3D electron microscopy. Respir Physiol Neurobiol 170(2), 202–209 (2010)
Williams, M.E., Wilke, S.A., Daggett, A., Davis, E., Otto, S., Ravi, D., Ripley, B., Bushong, E.A., Ellisman, M.H., Klein, G., Ghosh, A.: Cadherin-9 regulates synapse-specific differentiation in the developing hippocampus. Neuron 71(4), 640–655 (2011)
Jurrus, E., Hardy, M., Tasdizen, T., Fletcher, P.T., Koshevoy, P., Chien, C.B., Denk, W., Whitaker, R.: Axon tracking in serial block-face scanning electron microscopy. Med Image Anal 13(1), 180–188 (2009)
Nguyen, J.V., Soto, I., Kim, K.Y., Bushong, E.A., Oglesby, E., Valiente-Soriano, F.J., Yang, Z., Davis, C.H., Bedont, J.L., Son, J.L., Wei, J.O., Buchman, V.L., Zack, D.J., Vidal-Sanz, M., Ellisman, M.H., Marsh-Armstrong, N.: Myelination transition zone astrocytes are constitutively phagocytic and have synuclein dependent reactivity in glaucoma. Proc Natl Acad Sci USA 108(3), 1176–1181 (2011)
Leighton, S.B.: SEM images of block faces, cut by a miniature microtome within the SEM—A technical note. Scan. Electron Microsc. 2, 73–76 (1981)
Denk, W., Horstmann, H.: Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure. PLoS Biol 2(11), 1900–1909 (2004)
Knott, G., Marchman, H., Wall, D., Lich, B.: Serial section scanning electron microscopy of adult brain tissue using focused ion beam milling. J. Neurosci 28(12), 2959–2964 (2008)
Ohta, K., Sadayama, S., Togo, A., Higashi, R., Tanoue, R., Nakamura, K.: Beam deceleration for block-face scanning electron microscopy of embedded biological tissue. Micron 43(5), 612–620 (2012)
Bauer, E.: The resolution of the low-energy electron reflection microscope. Ultramicroscopy 17(1), 51–56 (1985)
Telieps, W., Bauer, E.: An analytical reflection and emission Uhv surface electron-microscope. Ultramicroscopy 17(1), 57–65 (1985)
Frank, L., Mullerova, I.: Strategies for low- and very-low-energy SEM. J Electron Microsc 48(3), 205–219 (1999)
Matsuda, K., Ikeno, S., Mullerova, I., Frank, L.: The potential of the scanning low energy electron microscopy for the examination of aluminum based alloys and composites. J Electron Microsc 54(2), 109–117 (2005)
Mullerova, I.: Imaging of specimens at optimized low and very low energies in scanning electron microscopes. Scanning 23(6), 379–394 (2001)
Khursheed, A., Osterberg, M.: A spectroscopic scanning electron microscope design. Scanning 26(6), 296–306 (2004)
Pluk, H., Stokes, D., Lich, B., Wieringa, B., Fransen, J.: Advantages of indium-tin oxide-coated glass slides in correlative scanning electron microscopy applications of uncoated cultured cells. J Microsc 233(3), 353–363 (2009)
Titze, B., Denk, W.: Automated in-chamber specimen coating for serial block-face electron microscopy. J Microsc 250(2), 101–110 (2013)
Kanaya, K., Okayama, S.: Penetration and energy-loss theory of electrons in solid targets. J Phys D Appl Phys 5(1), 43 (1972)
Joy, D.C.: Monte carlo modeling for electron microscopy and microanalysis. Oxford University Press, Oxford (1995)
Goldstein, J.I., Newbury, D.E., Echlin, P., Joy, D.C., Lyman, C.E., Lifshin, E., Sawyer, L., Michael, J.R.: Scanning electron microscopy and microanalysis. Springer, Berlin (2003)
Reimer, L.: Scanning electron microscopy—physics of image formation and microanalysis. In: Hawkes, P.W. (ed.) Springer series in optical sciences, vol. 45. Springer, Berlin (1998)
Agar, A.W., Alderson, R.H., Chescoe, D.: Principles and practice of electron microscope operation, In: Glauert, A.M. (ed.) American Elsevier Publishing Co, New York (1974)
Joy, D.C., Joy, C.S.: Low voltage scanning electron microscopy. Micron 27(3–4), 247–263 (1996)
Helmstaedter, M., Briggman, K.L., Denk, W.: 3D structural imaging of the brain with photons and electrons. Curr Opin Neurobiol 18(6), 633–641 (2008)
Schroeder-Reiter, E., Perez-Willard, F., Zeile, U., Wanner, G.: Focused ion beam (FIB) combined with high resolution scanning electron microscopy: a promising tool for 3D analysis of chromosome architecture. J Struct Biol 165(2), 97–106 (2009)
Assa'd, A.M.D., El Gomati, M.M.: Backscattering coefficients for low energy electrons. Scanning Microsc 12(1), 185–192 (1998)
Phifer, D., Tuma, L., Vystavel, T., Wandrol, P., Young, R.J.: Improving SEM imaging performance using beam deceleration. Microsc Today 17(4), 40–49 (2009)
JB, SP, and ME designed the experiments. JB performed the experiments and analyzed the data. TD and EB helped with sample preparation. RR performed the Monte Carlo simulations. VA helped with data processing. JB, RR, SP and ME wrote the paper. All authors read and approved the final manuscript.
This work was supported by a grant from the NIH National Institute of General Medical Sciences under award number P41GM103412 to Mark H. Ellisman, to operate the National Center for Microscopy and Imaging Research. The authors would like to acknowledge Gatan, FEI Company, and Carl Zeiss for loan of SBEM equipment used in this study.
National Center for Microscopy and Imaging Research, University of California at San Diego, BSB 1000, 9500 Gilman Dr., La Jolla, CA, 92093-0608, USA
James C. Bouwer, Thomas J. Deerinck, Eric Bushong, Vadim Astakhov, Ranjan Ramachandra, Steven T. Peltier & Mark H. Ellisman
Correspondence to Mark H. Ellisman.
Bouwer, J.C., Deerinck, T.J., Bushong, E. et al. Deceleration of probe beam by stage bias potential improves resolution of serial block-face scanning electron microscopic images. Adv Struct Chem Imag 2, 11 (2016). https://doi.org/10.1186/s40679-016-0025-y
SBEM
Volume reconstruction
Serial section
Backscatter electron detector
Semidefinite relaxation method for polynomial optimization with second-order cone complementarity constraints
Lin Zhu and Xinzhen Zhang ,
School of Mathematics, Tianjin University, 135 Yaguan Road, Tianjin 300354, China
* Corresponding author: Xinzhen Zhang
Received: May 2020. Revised: November 2020. Early access: February 2021
Fund Project: This work is supported by the National Natural Science Foundation of China, grant 11871369
Polynomial optimization with second-order cone complementarity constraints (SOCPOPCC) is a special case of mathematical programming with second-order cone complementarity constraints (SOCMPCC). In this paper, we consider how to apply Lasserre-type semidefinite relaxation methods to solve SOCPOPCC. To this end, we first reformulate SOCPOPCC equivalently as a polynomial optimization problem and then solve the reformulation with the semidefinite relaxation method. For a special case of SOCPOPCC, we present another reformulation as a polynomial optimization problem of lower degree. The SDP relaxation method is applied to solve the new polynomial optimization problem. Numerical examples are reported to show the efficiency of the proposed method.
Keywords: Polynomial optimization, second-order cone complementarity, Lasserre's hierarchy, semidefinite relaxation.
Mathematics Subject Classification: Primary: 90C23, 90C33; Secondary: 90C22.
Citation: Lin Zhu, Xinzhen Zhang. Semidefinite relaxation method for polynomial optimization with second-order cone complementarity constraints. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2021030
Inequalities in utilization of maternal and child health services in Ethiopia: the role of primary health care
Solomon Tessema Memirie1,
Stéphane Verguet2,
Ole F. Norheim1,
Carol Levin3 &
Kjell Arne Johansson1
BMC Health Services Research volume 16, Article number: 51 (2016) Cite this article
Health systems aim to narrow inequality in access to health care across socioeconomic groups and area of residency. However, in low-income countries, studies are lacking that systematically monitor and evaluate health programs with regard to their effect on specific inequalities. We aimed to measure changes in inequality in access to maternal and child health (MCH) interventions and the effect of Primary Health Care (PHC) facilities expansion on the inequality in access to care in Ethiopia.
The Demographic and Health Survey datasets from Ethiopia (2005 and 2011) were used. We calculated changes in utilization of MCH interventions and child morbidity. Concentration and horizontal inequity indices were estimated. Decomposition analysis was used to calculate the contribution of each determinant to the concentration index.
Between 2005 and 2011, improvements in aggregate coverage have been observed for MCH interventions in Ethiopia. Wealth-related inequality has remained persistently high in all surveys. Socioeconomic factors were the main predictors of differences in maternal and child health services utilization and child health outcome. Utilization of primary care facilities for selected maternal and child health interventions have shown marked pro-poor improvement over the period 2005–2011.
Our findings suggest that expansion of PHC facilities in Ethiopia might have an important role in narrowing the urban-rural and rich-poor gaps in health service utilization for selected MCH interventions.
There have been impressive increases in total coverage of essential child health services and child survival in developing countries over the last decades [1]. Even though equity has been stated as an important goal within health sectors, substantial disparities in coverage of maternal and child health services and in under-five mortality between rich and poor children have persisted in most low- and middle-income countries [2–5]. Inequalities across socioeconomic groups and by area of residence are important determinants of maternal and child health [6, 7].
Ethiopia has made substantial progress in reducing the under-five mortality rate (from 198 deaths per 1,000 live births in 1990 to 88 in 2011) [8, 9]. Despite gradual improvement in coverage of child health care services, inequalities in child mortality and access to care between urban and rural dwellers and across wealth quintiles remain large. Under-five mortality is 114 deaths per 1,000 live births in rural areas and 83 deaths per 1,000 live births in urban areas. The poorest and the richest quintiles had under-five mortality of 137 and 86 deaths per 1,000 live births, respectively. Among households with a child having symptoms of either pneumonia or diarrhea, 16 % and 22 % of households from the poorest quintile and 62 % and 53 % from the richest quintile sought care from a health care provider, respectively. The low service utilization occurred in the face of an increased risk of diarrhea and pneumonia among children from the poorest quintile [9].
The national health policy of Ethiopia gives strong emphasis to fulfilling the needs of rural residents, who constitute 84 % of the Ethiopian population. Ensuring universal access to health care is one of the main targets of the national Health Sector Development Program (HSDP) IV (2011–2015) in Ethiopia [10]. An accelerated expansion of primary health care (PHC) facilities [composed of health centers (HCs) and health posts (HPs)] has been undertaken since 2003. In nearly a decade, the number of HPs and HCs in Ethiopia grew almost sixfold, reaching 3,245 HCs and 16,048 HPs in 2012/2013. Each health post has two health extension workers (HEWs); so far, a total of 34,850 HEWs have been trained and deployed nationally, with a ratio to population of 1:2,301, surpassing the HSDP III target of 1:2,500 [10, 11]. The expansion is envisaged as the key strategy to deliver maternal, neonatal and child health interventions, especially to the rural and impoverished segments of the population [12]. According to the 5th National Health Accounts in Ethiopia, 34 % of total health expenditure was household out-of-pocket spending [13]. It is imperative that such expansions contribute to health equity, primarily by moving towards universal access. The 2010 World Health Report identified inefficient and inequitable use of resources as one of the factors that impede rapid movement towards universal health coverage (UHC) [14].
Inequalities in child health and child survival across household wealth quintiles were examined in the 2005 and 2011 Ethiopian Demographic and Health Surveys (DHS) and by Barros et al. in their survey-based analysis of inequality in maternal and child health (MCH) in 54 Countdown countries [9, 15, 16]. Skaftun et al. have also examined inequalities in child health in Ethiopia [17]. However, assessments done so far lack some critical MCH interventions (such as family planning) and morbidity outcomes (e.g. stunting) and have not been examined in light of the rapid expansion of PHC facilities in Ethiopia. Additionally, they did not take use relative to need into consideration and were therefore unable to assess inequity in MCH service utilization.
The main objectives of this study were: (1) to measure changes in degree of inequality in utilization of selected MCH interventions and child morbidities over time; (2) to determine factors associated with inequality and inequity in access to care; and (3) to assess the role of expansion of PHC facilities in Ethiopia on inequality and inequity in access to care using 2005 and 2011 DHS conducted in Ethiopia.
Data and variable definitions
We used data from the DHS conducted in Ethiopia in 2005 and 2011 [9, 15]. The 2005 and 2011 DHS were conducted on nationally representative samples of 9,861 and 11,654 households, respectively. The sampling design for both surveys was a two-stage stratified cluster design that was not self-weighted at the national level. The survey participants/households were stratified into urban or rural groups according to their area of residence. Household socioeconomic status was measured using household asset data via a principal components analysis. We used the resulting wealth quintiles as the living-standard measure in the subsequent modeling.
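As an illustration of this step, the sketch below constructs a first-principal-component asset score and splits households into quintiles. The asset indicators and their coding are hypothetical, and survey weights are omitted, so this is not the DHS wealth-index construction itself.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical household-by-asset table; real DHS indices use many more indicators.
assets = pd.DataFrame({
    "radio":         [0, 1, 1, 0, 1, 1, 0, 1, 1, 1],
    "electricity":   [0, 0, 1, 0, 1, 1, 0, 0, 1, 1],
    "improved_roof": [0, 0, 1, 1, 1, 1, 0, 1, 0, 1],
})

# The first principal component of the standardized indicators serves as the wealth score.
score = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(assets))[:, 0]
wealth = pd.Series(score, name="wealth_score")

# Households are ranked on the score and divided into five equal groups (quintiles).
quintile = pd.qcut(wealth.rank(method="first"), 5, labels=[1, 2, 3, 4, 5])
print(pd.concat([wealth, quintile.rename("quintile")], axis=1))
```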
Utilization of MCH services was selected for analysis. These were binary variables, with a value of 1 assigned if care was accessed and 0 if it was not. Both preventive and treatment services were included: medical treatment for diarrhea, skilled birth attendance (SBA), measles immunization, and modern contraceptive use. We used the prevalence of diarrhea, cough, fever and stunting in children as morbidity variables.
Inequality in outcomes was measured by calculating a concentration index, which quantifies the magnitude of wealth-related inequality and can be compared conveniently across time periods, countries, regions, or other comparators [18]. Wagstaff et al. [18] provide a detailed description of the concentration index. In our analysis, the concentration index (C) was computed as twice the (weighted) covariance between the health variable (h) and the fractional rank of the person in the living-standard distribution (r), divided by the mean of the health variable (μ) [19] as:
$$ C=\frac{2}{\mu }Cov\left(h,r\right) $$
Concentration index is restricted to values between −1 and 1 and has a value of zero where there is no income-related inequality in outcomes. If the variable reflects morbidity or mortality, the concentration index will usually be negative, showing that ill health is more prevalent among the poor. For coverage indicators, the concentration index is usually positive, as these tend to be higher among the rich [19].
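A minimal sketch of this computation is given below. It assumes individual-level records with a wealth score used only for ranking; the toy data are illustrative, and survey weights default to equal weights.

```python
import numpy as np

def concentration_index(h, wealth, weights=None):
    """C = (2 / mu) * cov_w(h, r), with r the weighted fractional rank in the
    wealth distribution (poorest first) and mu the weighted mean of h."""
    h = np.asarray(h, dtype=float)
    w = np.ones_like(h) if weights is None else np.asarray(weights, dtype=float)
    order = np.argsort(wealth)                      # rank households from poorest to richest
    h, w = h[order], w[order] / np.sum(w)
    r = np.cumsum(w) - 0.5 * w                      # weighted fractional rank
    mu = np.sum(w * h)
    cov = np.sum(w * (h - mu) * (r - np.sum(w * r)))
    return 2.0 * cov / mu

# A pro-rich pattern (use rises with wealth) gives a positive index.
use = [0, 0, 1, 0, 1, 1, 1, 1]
wealth_score = [1, 2, 3, 4, 5, 6, 7, 8]
print(concentration_index(use, wealth_score))       # approximately 0.33
```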
Even though the concentration index measures income-related inequality in health care utilization, it does not measure the degree of inequity in use, since it still includes legitimate income-related differences in use due to differences in need. Therefore, in our analysis, standardization for differences in need for health care in relation to wealth was done using the method of indirect standardization. Standardization adjusts for the need-expected distribution as opposed to the observed distribution of use [20]. To proxy need for health care, the following demographic and morbidity variables were used: age and sex of children under five years of age and age of women in the reproductive age group (as demographic variables), a recent episode of diarrhea (as a morbidity variable in children), history of birth in the past five years (as a proxy for need for SBA) and unmet need for family planning (as a need variable for modern contraceptive use). Wealth quintile, educational attainment of the household head, educational attainment of the partner, and area of residence were used as non-need correlates of health care utilization (control variables). Only 0.5 % of households had health insurance coverage; therefore, we did not use it as a control variable in our analysis [9].
After estimating need-standardized utilization, inequity can be tested by determining whether standardized use is unequally distributed across wealth quintiles. Inequity can be measured by estimating the concentration index of need-standardized health care utilization, which is denoted the health inequity index. Alternatively, the health inequity index can be calculated as the difference between the concentration indices for actual and need-expected utilization of medical care [20]. A positive (negative) value of the horizontal inequity index indicates horizontal inequity that is pro-rich (pro-poor), while an index value of zero shows absence of horizontal inequity.
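The sketch below illustrates the indirect-standardization step, reusing the concentration_index function from the previous sketch. It assumes need_vars and control_vars are arrays with one row per observation and one column per variable; it is a simplified, unweighted illustration of the approach rather than the exact routine used in the analysis.

```python
import numpy as np
import statsmodels.api as sm

def horizontal_inequity(use, wealth, need_vars, control_vars):
    """HI = C(actual use) - C(need-expected use)."""
    need_vars = np.asarray(need_vars, dtype=float)
    control_vars = np.asarray(control_vars, dtype=float)
    X = sm.add_constant(np.column_stack([need_vars, control_vars]))
    fit = sm.OLS(np.asarray(use, dtype=float), X).fit()

    # Need-expected use: predict with need variables at their actual values and
    # non-need (control) variables held at their sample means.
    X_need = X.copy()
    X_need[:, 1 + need_vars.shape[1]:] = control_vars.mean(axis=0)
    need_expected = fit.predict(X_need)

    # concentration_index(...) is the function defined in the previous sketch.
    return concentration_index(use, wealth) - concentration_index(need_expected, wealth)
```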
The decomposition of the concentration index allows the measurement and explanation of inequality in the utilization of health care services across income groups. Wagstaff et al. [21] demonstrated that, for any linear regression model of a variable such as health care use, it is possible to decompose the measured inequality into the contributions of explanatory factors. With this decomposition approach, standardization for need as well as explanation of inequity can be done in one step. Consider the following model:
$$ y_i = \alpha + \sum_j \beta_j x_{ji} + \sum_k \beta_k z_{ki} + \varepsilon_i , $$
where \( x_j \) denotes the need-standardizing variables, which include demographic and health status/morbidity factors, and \( z_k \) denotes the non-need variables, including socioeconomic status, education, and area of residence (urban vs. rural). α, β and ε are the constant, the regression coefficients and the error term, respectively. The concentration index (C) for utilization of health care can then be written as:
$$ C = \sum_j \left( \beta_j \overline{x}_j / \mu \right) C_j + \sum_k \left( \beta_k \overline{z}_k / \mu \right) C_k + GC_u / \mu , $$
where \( C_j \) and \( C_k \) are the concentration indices for the need and non-need variables, respectively, μ is the mean of our health variable of interest (y), \( \overline{x}_j \) is the mean of \( x_j \) and \( \overline{z}_k \) is the mean of \( z_k \). The components \( \beta_j \overline{x}_j / \mu \) and \( \beta_k \overline{z}_k / \mu \) are simply the elasticities of y with respect to \( x_j \) and \( z_k \), respectively, evaluated at the sample means. The last term, \( GC_u / \mu \), captures the residual component, reflecting the inequality in health that is not explained by systematic variation across income groups in the need and non-need variables.
Decomposition for non-linear models can only be applied using linear approximations, which can introduce errors and adds complexity. Therefore, even though our health variables of interest are binary, we used the linear model. It has been found elsewhere that decomposition results differ little between ordinary least squares and non-linear estimators [22].
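The sketch below illustrates the decomposition for a single outcome, again reusing the concentration_index function defined earlier. It assumes `factors` holds both need and non-need regressors as an observations-by-variables array (with column names supplied in `names`) and ignores survey weights; it is a simplified rendering of the Wagstaff approach, not the exact procedure used for Table 2.

```python
import numpy as np
import statsmodels.api as sm

def decompose_ci(use, wealth, factors, names):
    """Contribution of each regressor x_j to the concentration index of `use`:
    (beta_j * mean(x_j) / mean(use)) * C(x_j); the remainder is GC_u / mu."""
    use = np.asarray(use, dtype=float)
    factors = np.asarray(factors, dtype=float)
    beta = sm.OLS(use, sm.add_constant(factors)).fit().params[1:]   # drop the constant term
    mu = use.mean()

    contributions = {}
    for j, name in enumerate(names):
        xj = factors[:, j]
        # concentration_index(...) is the function from the earlier sketch.
        contributions[name] = beta[j] * xj.mean() / mu * concentration_index(xj, wealth)

    residual = concentration_index(use, wealth) - sum(contributions.values())
    return contributions, residual
```

Summing the contributions of the need columns and subtracting that sum from the total index recovers the health inequity index reported in Table 2.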
Time trends in mean levels of MCH service utilization were assessed using a logistic regression model, with MCH service utilization as the dependent variable and survey year as the independent variable. We computed the excess risk, expressed as a percentage, by subtracting one from the rate ratio (rate ratio − 1), where the rate ratio is the incidence in the poorest quintile divided by the incidence in the richest quintile (Q1/Q5) [23].
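A minimal sketch of these two calculations is shown below, on a simulated pooled data set; the column names (`used`, `year`, `quintile`) are hypothetical and do not correspond to DHS recode variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "used":     rng.binomial(1, 0.3, 1000),         # 0/1 utilization indicator
    "year":     rng.choice([2005, 2011], 1000),     # survey round
    "quintile": rng.choice([1, 2, 3, 4, 5], 1000),  # 1 = poorest ... 5 = richest
})

# Time trend in utilization across survey rounds via logistic regression.
trend = smf.logit("used ~ C(year)", data=df).fit(disp=0)
print(trend.params)

# Excess risk: rate ratio (poorest over richest quintile) minus one.
q1 = df.loc[df.quintile == 1, "used"].mean()
q5 = df.loc[df.quintile == 5, "used"].mean()
print("excess risk:", q1 / q5 - 1)
```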
Data were analyzed using the statistical software package STATA (version 13), taking into account the sampling design characteristics of each survey.
We performed the analyses using publicly available data from the Demographic and Health Surveys. Ethical procedures were the responsibility of the institutions that commissioned, funded, or managed the surveys. The study was approved by the Regional Committees for Medical and Health Research Ethics (REK) in Norway and the Ethiopian Health and Nutrition Research Institute (EHNRI) scientific and ethical review committee.
Utilization of measles immunization and modern contraceptive methods increased, on average, between 2005 and 2011 (Table 1). Pro-poor coverage changes with a clear dominance were observed for both interventions, demonstrated by significantly (non-overlapping 95 % CI) lower concentration indices in 2011 compared with 2005. Use of modern contraceptive methods had the widest coverage gap between the poorest and wealthiest quintiles in all surveys. In 2011, modern contraceptive use rates were 6 % and 44 % for the poorest and the wealthiest quintiles, respectively.
Table 1 Average, first and fifth quintile values and concentration indices of selected maternal and child health indicators in Ethiopia (DHS: 2005 and 2011)
The prevalence of diarrhea and stunting decreased between the 2005 and 2011 surveys (Table 1). The concentration indices for all morbidities are negative, indicating a higher burden among children from poor households. The inequality across wealth strata was highest for the prevalence of stunting. The excess risk of the poorest quintile relative to the wealthiest quintile for having acute respiratory infection (ARI), diarrhea, fever or stunting is 22 %, 43 %, 30 % and 71 %, respectively. The inequality in the rate of stunting widened over the period 2005–2011.
The last row of Table 2 shows the values of the health inequity indices, calculated as the difference between the actual concentration indices (the unstandardized concentration indices presented as "Total" in the table) and the contribution of all need factors to the concentration indices. The contribution of need factors to the concentration index is negative for SBA (−2.1 %) and modern contraceptives (−1.4 %), suggesting that if utilization of these services were determined by need alone it would be pro-poor. In our case, the contribution of need factors to the concentration index, and hence their effect on the health inequity index, is very low, highlighting the difficulty of defining need for the interventions included in the analysis.
Table 2 Decomposition of the concentration indices for access to selected maternal and child health interventions in Ethiopia, 2011
The health inequity index is positive for all interventions, indicating that, for a given need, children and women from wealthier households make greater use of available services in Ethiopia. Decomposition of the concentration index shows that 47 %, 66 %, 76 % and 85 % of the wealth-related inequality in access to SBA, medical treatment for diarrhea, modern contraceptive use, and measles vaccination, respectively, is explained by the direct effect of household economic status and by the educational attainment of parents. Area of residence contributes a large proportion (41 %) of the inequality in access to SBA, to the disadvantage of rural households. The elasticities of SBA with respect to women's age and number of births (by a woman in the last five years) were both negative, indicating that with increasing maternal age and birth order the probability of birth attendance by a skilled professional decreases. On the contrary, for women of reproductive age, the probability of using modern contraceptives on average increases with women's age.
To assess the role of PHC expansion in changes in inequality in the utilization of MCH services, we used data on the type of facility for diarrhea treatment, the source of modern contraceptives and the place of delivery. Utilization of services for diarrhea treatment, modern contraceptives and facility delivery in Ethiopia has, on average, improved over the period 2005–2011. Government PHC facilities played the major role in this improvement (Table 3). The contribution of PHC facilities as a point of care for diarrhea treatment, as a source of contraceptives and as a place of delivery rose from 67 %, 74 % and 32 % in 2005 to 74 %, 85 % and 47 % in 2011, respectively. The lower socioeconomic groups are more likely to use government PHC facilities as a source of modern contraceptives, as indicated by the negative concentration and health inequity indices (see Table 3). Even though the concentration and health inequity indices for diarrhea treatment are positive for 2005 and 2011, both showed a significant pro-poor improvement over the period 2005–2011. For all services, those with high socioeconomic status are more likely to report a visit to private facilities, and the gap in private care utilization across socioeconomic groups has widened over time.
Table 3 Wealth related inequality and inequity in health care service utilization for diarrheal treatment, modern contraceptives and place of delivery by type of facility
Despite improvements in coverage of MCH services, inequality by wealth quintile has remained persistently high in all surveys. Socioeconomic status, measured by a wealth index and parental educational attainment, was the main predictor of differences in utilization of MCH services and health outcomes in children under five years of age. Area of residence was a significant contributor to the disparity in access to SBA.
Among the health service coverage indicators (2011 DHS), use of modern contraceptive methods was the most inequitably distributed intervention, with a horizontal inequity index of 0.28. The average concentration index across 54 Countdown countries for family planning needs satisfied was 0.14 (IQR: 0.05–0.2), making Ethiopia one of the countries with the most unequal distribution of the service [16]. Wealth level and educational attainment of women are estimated to jointly contribute 75 % of this inequity in use of contraceptive methods. Several studies have demonstrated wealth and parental educational attainment to be major determinants of access to MCH services in sub-Saharan African countries [24, 25].
Despite the low coverage of measles immunization in Ethiopia, it was the most equitably distributed indicator, with a horizontal inequity index of 0.08 in the 2011 DHS, and it showed a significant pro-poor improvement in comparison with the 2005 DHS finding. The pro-poor improvement in measles immunization might be related to the "follow-up" measles vaccination campaigns conducted in Ethiopia. The low measles immunization coverage, with marked heterogeneity by geographic location, threatens the goals set out for elimination of measles at national and global levels [26].
The PHC service in Ethiopia is organized to deliver a package of basic preventive and curative health services targeting rural households. It comprises the following four health subprograms that conform to the elements of PHC as defined in the Alma-Ata Declaration [27]: hygiene and environmental sanitation; disease prevention and control; health education and communication; and family health (which includes MCH, vaccination and family planning services).
PHC facilities have played an increasingly important role as points of care for diarrhea treatment and as a source of modern contraceptives for less privileged socioeconomic groups. Several studies have documented the effect of scaling up and equitably distributing primary health care infrastructure and intervention coverage on inequality in service utilization and child health outcomes among different socioeconomic groups [23, 28, 29]. The role of PHC facilities as points of delivery care in Ethiopia is relatively limited. Public hospitals and private facilities play a major role as delivery care outlets, more so for the wealthiest quintile and urban residents. The low utilization of these services among the poor and rural residents might be related to out-of-pocket spending by families, either for services or because families need to travel to a health facility. In countries where maternity hospitals are accessible and free of charge, coverage of SBA is almost universal [16]. Quality of care is an important aspect of the utilization of delivery care services. The 2008 national baseline assessment for emergency obstetric and neonatal care identified critical gaps in the delivery of quality obstetric and neonatal care in Ethiopia [30]. A study conducted in rural Ethiopia has also shown that women strongly preferred health facility attributes indicative of good technical quality, a reliable supply of medicines, functioning equipment and respectful provider attitudes in selecting a delivery facility [31]. MCH services are among those that suffer from inadequate resource allocation, compromising the delivery of quality services [10]. Cultural factors also influence utilization of facility delivery care services. According to the 2011 Ethiopian DHS, 31 % of rural women reported that facility deliveries were not customary [9].
This study has some limitations. Recall bias is one possible problem, as the surveys are based on maternal recall. Differential reporting by rich and poor mothers and between urban and rural residents is also a concern as a possible source of bias. Another limitation is that associated with asset indices. We observed that the wealthiest quintile tends to reside in urban areas, particularly in the capital city, so that wealth inequities are closely associated with urban/rural disparities. In our analysis, the contribution of need factors to the horizontal inequity index was negligible. This could lead to a biased measurement of the horizontal inequity index if there were other need factors (which we failed to include) that vary with income. Additionally, in the computation of concentration indices for binary outcomes, we used a linear regression model, which may introduce inaccuracies.
Despite these limitations, our study adds important findings to the existing body of literature. The study included critical MCH interventions (such as family planning) and morbidity outcomes (for example, stunting) not addressed elsewhere. More importantly, we assessed whether PHC expansion had any effect on inequality and inequity in access to care. The expansion of PHC facilities seems to have contributed positively to the coverage changes and to the pro-poor and pro-rural improvements, even though other factors (such as women's education, safe water supply and food security) might have contributed as well. The 2008 World Health Report reaffirmed the role of PHC as a pathway to achieving UHC and as a core strategy for health systems strengthening [32]. The new global investment framework for Women's and Children's Health [33] has shown the substantial economic and social benefits of investing in reproductive, maternal, neonatal and child health interventions. Nearly half of the reduction in child and maternal deaths was estimated to result from greater access to contraceptives for effective family planning, which can be scaled up at relatively small cost using PHC as a delivery platform. The expected demographic dividend from the reduction in unintended pregnancies was estimated to exceed 8 % of gross domestic product by 2035 in countries with high fertility rates like Ethiopia. Further reduction in maternal and child mortality requires ensuring reliable access to integrated antenatal, intrapartum and postpartum care by skilled attendants [33, 34].
While great progress has been made in Ethiopia, this analysis demonstrates that there is continued room for improvement to address persistently high inequality across the socio-economic spectrum. Future plans should aim to sustain current successes in health system strengthening and to bring these benefits to all women and children, particularly to those socioeconomically marginalized and rural residents. In addition to continued improvements to Ethiopia's health sector, investments in women's education and implementing pro-poor policies will be critical to maximize equitable health gains and population wide benefits. Monitoring the progress of intervention implementation should have an equity perspective.
ARI:
acute respiratory infection
CI:
concentration indices
DHS:
demographic health survey
HCs:
health centers
HEWs:
health extension workers
HPs:
health posts
HSDP:
health sector development program
MCH:
maternal and child health
PHC:
primary health care
SBA:
skilled birth attendant
UHC:
universal health coverage
Victora CG, Barros AJD, Axelson H, Bhutta ZA, Chopra M, França G, et al. How changes in coverage affect equity in maternal and child health interventions in 35 countdown to 2015 countries: an analysis of national surveys. Lancet. 2012;380:1149–56.
Gwatkin D, Rutstein S, Johnson K, Suliman E, Wagstaff A, Amozou A. Initial country-level information about socioeconomic differences in health, nutrition, and population. Washington: The World Bank; 2007.
Boerma JT, Bryce J, Kinfu Y, Axelson H, Victora CG. Mind the gap: equity and trends in coverage of maternal, newborn, and child health services in 54 Countdown countries. Lancet. 2008;371:1259–67.
Moser KA, Leon DA, Gwatkin DR. How does progress towards the child mortality millennium development goal affect inequalities between the poorest and least poor? Analysis of Demographic and Health Survey data. BMJ. 2005;331(7526):1180–2.
Victora CG, Wagstaff A, Schellenberg JA, Gwatkin D, Claeson M, Habicht JP. Applying an equity lens to child health and mortality: more of the same is not enough. Lancet. 2003;362(9379):233–41.
Fotso JC. Child health inequities in developing countries: differences across urban and rural areas. Int J Equity Health. 2006;11:5–9.
Kakazo M, Lehmann D, Coakley K, Gratten H, Saleu G, Taime J, et al. Mortality rates and the utilization of health services during terminal illness in the Asaro Valley, Eastern Highlands Province, Papua New Guinea. P N G Med J. 1999;42:13–26.
UNICEF, World Health Organization, The World Bank and United Nations. Levels and Trends in Child Mortality: Report 2012. 2012. Estimates Developed by the UN Inter-agency Group for Child Mortality Estimation. UNICEF; 2012.
Central Statistical Agency [Ethiopia] and ICF International. Ethiopia Demographic Health Survey 2011. Addis Ababa and Calverton Maryland: Central Statistics Agency and ICF International; 2012.
Federal Democratic Republic of Ethiopia Ministry of Health. Health Sector Development Program IV (2010/2011–2014/2015). 2010.
Federal Ministry of Health (Ethiopia). Health and Health related indicators 2005 E.C (2012/2013). 2014.
Federal Ministry of Health of Ethiopia. Health Extension Implementation Guide. Addis Ababa: Health Extension Education Center; 2007.
Ethiopia Federal Ministry of Health. Ethiopia's Fifth National Health Accounts 2010/2011. Addis Ababa, Ethiopia; 2014.
World Health Organization. The World Health Report 2010. Health systems financing: the path to universal coverage. Geneva: World Health Organization; 2010.
Central statistical agency [Ethiopia] and ICF International. Ethiopia Demographic Health Survey 2005. Addis Ababa and Calverton, Maryland: Central Statistical Agency and ICF International; 2006.
Barros AJD, Ronsmans C, Axelson H, Loaiza E, Bertoldi A, França G, et al. Equity in maternal, newborn, and child health interventions in Countdown to 2015: a retrospective review of survey data from 54 countries. Lancet. 2012;379:1225–33.
Skaftun EK, Ali M, Norheim OF. Understanding Inequalities in Child Health in Ethiopia: Health Achievements Are Improving in the Period 2000–2011. PLoS ONE. 2014; doi:10.1371/journal.pone.0106460
Wagstaff A, Paci P, van Doorslaer E. On the measurement of inequalities in health. Soc Sci Med. 1991;33(5):545–57.
O'Donnell O, van Doorslaer E, Wagstaff A, Lindelow M. Analysing health equity using household survey data: a guide to techniques and their implementation. Washington, D.C.: The World Bank; 2008.
van Doorslaer E, Koolman X, Jones AM. Explaining income-related inequalities in doctor utilization in Europe. Health Econ. 2004;13:629–47.
Wagstaff A, van Doorslaer E, Watanabe N. On decomposing health sector inequalities, with an application to malnutrition inequalities in Vietnam. J Econometrics. 2003;112:219–27.
Van Doorslaer E, Masseria C. The OECD Health Equity Research Group. Income related inequality in the use of medical care in 21 OECD countries, In OECD, Towards high performing health systems. Paris: Head of Publication Services OECD; 2004.
Vapattanawong P, Hogan MC, Hanvoravongchai P, Gakidou E, Vos T, Lopez AD, et al. Reductions in child mortality levels and inequality in Thailand: analysis of two censuses. Lancet. 2007;369(9564):850–5.
Van Malderen C, Ogali I, Khasakhala A, Muchiri SN, Sparks C, Van Oyen H, et al. Decomposing Kenyan socioeconomic inequalities in skilled birth attendance and measles immunization. Int J Equity Health. 2013; doi:10.1186/1475-9276-12-3.
Adewemimo AW, Msuya SE, Olaniyan CT, Adegoke AA. Utilization of skilled birth attendance in Northern Nigeria: A cross-sectional survey. Midwifery. 2014;30:e7–e13.
World Health Organization. Global Vaccine Action Plan 2011-2020. Geneva: World Health Organization; 2013.
WHO (World Health Organization). Declaration of Alma-Ata. International Conference on Primary Health Care, Alma-Ata, September 6–12, 1978. http://www.who.int/publications/almaata_declaration_en.pdf. Accessed 30 Nov 2015.
Macinko J, Guanais FC, de Fatima M, de Souza M. Evaluation of the impact of the family health program on infant mortality in Brazil, 1990–2002. J Epidemiol Community Health. 2006;60(1):13–9.
Masanja H, Schellenberg JA, de Savigny D, Mshinda H, Victora CG. Impact of integrated management of childhood illness on inequalities in child health in rural Tanzania. Health Policy Plan. 2005;20 Suppl 1:i77–84.
Federal Ministry of Health [Ethiopia], UNICEF, UNFPA, WHO and AMDD. National Baseline Assessment for Emergency Obstetric & Newborn Care Ethiopia. 2008.
Kruk ME, Paczkowski MM, Tegegn A, Tessema F, Hadley C, Asefa M, et al. Women's preferences for obstetric care in rural Ethiopia: a population-based discrete choice experiment in a region with low rates of facility delivery. J Epidemiol Community Health. 2010;64:984–8.
World Health Organization. World Health Report 2008: Primary health care – Now more than ever. Geneva: World Health Organization; 2008.
Stenberg K, Axelson H, Sheehan P, Anderson I, Gülmezoglu AM, Temmeman M, et al. Advancing social and economic development by investing in women's and children's health: a new Global Investment Framework. Lancet. 2014;383:1333–54.
Goldie SJ, Sweet S, Carvalho N, Natchu UCM, Hu D. Alternative Strategies to Reduce Maternal Mortality in India: A Cost-Effectiveness Analysis. PLoSMed. 2010; doi:10.1371/journal.pmed.1000264
We thank University of Bergen for funding the project. The funding body has no role in the design, analysis, and interpretation of data; in the writing of the manuscript; and in the decision to submit the manuscript for publication.
Department of Global Public Health and Primary Care, University of Bergen, Bergen, Norway
Solomon Tessema Memirie, Ole F. Norheim & Kjell Arne Johansson
Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, MA, USA
Stéphane Verguet
Department of Global Health, University of Washington, Seattle, WA, USA
Carol Levin
Correspondence to Solomon Tessema Memirie.
STM, KAJ and OFN initiated and conceptualized the study. STM coordinated the research and did the analysis with KAJ and OFN. STM wrote the first draft of the manuscript. KAJ, OFN, SV, and CL reviewed the manuscript and provided advice and suggestions. STM had final responsibility to submit for publication. All authors read and approved the final manuscript.
Memirie, S.T., Verguet, S., Norheim, O.F. et al. Inequalities in utilization of maternal and child health services in Ethiopia: the role of primary health care. BMC Health Serv Res 16, 51 (2016). https://doi.org/10.1186/s12913-016-1296-7
Maternal and child health services
The digital subsurface water-cooler
March 12, 2019 / Matt Hall
Back in August 2016 I told you about the Software Underground, an informal, grass-roots community of people who are into rocks and computers. At its heart is a public Slack group (Slack is a bit like Yammer or Skype but much more awesome). At the time, the Underground had 130 members. This morning, we hit ten times that number: there are now 1300 enthusiasts in the Underground!
If you're one of them, you already know that it's easily the best place there is to find and chat to people who are involved in researching and applying machine learning in the subsurface — in geoscience, reservoir engineering, and anything else to do with the hard parts of the earth. And it's not just about AI… it's about data management, visualization, Python, and web applications. Here are some things that have been shared in the last 7 days:
News about the upcoming Software Underground hackathon in London.
A new Udacity course on TensorFlow.
Questions to ask when reviewing machine learning projects.
A Dockerfile to make installing Seismic Unix a snap.
Mark Zoback's new geomechanics course.
It gets better. One of the most interesting conversations recently has been about starting a new online-only, open-access journal for the geeky side of geo. Look for the #journal channel.
Another emerging feature is the 'real life' meetup. Several social+science gatherings have happened recently in Aberdeen, Houston, and Calgary… and more are planned, check #meetups for details. If you'd like to organize a meetup where you live, Software Underground will support it financially.
Find out more or sign up
We've also gained a website, softwareunderground.org, where you'll find a link to sign-up in the Slack group, some recommended reading, and fantastic Software Underground T-shirts and mugs! There are also other ways to support the community with a subscription or sponsorship.
If you've been looking for the geeks, data-heads, coders and makers in geoscience and engineering, you've found them. It's free to sign up — I hope we see you in there soon!
Slack has nice desktop, web and mobile clients. Check out all the channels — they are listed on the left:
March 12, 2019 / Matt Hall/ Comment
Fun, News
community, geoscience, programming, online, collaboration
December 20, 2018 / Matt Hall
It's almost the end of another trip around the sun. I hope it's been kind to you. I mean, I know it's sometimes hard to see the kindness for all the nonsense and nefariousness in <ahem> certain parts of the world, but I hope 2018 at least didn't poke its finger in your eye, or set fire to any of your belongings. If it did — may 2019 bring you some eye drops and a fire extinguisher.
Anyway, at this time of year, I like to take a quick look over my shoulder at the past 12 months. Since I'm the over-sharing type, I like to write down what I see and put it on the Internet. I apologize, and/or you're welcome.
Top of the posts
We've been busier than ever this year, and the blog has taken a bit of a hit. In spite of the reduced activity (only 45 posts, compared to 53 last year), traffic continues to grow and currently averages 9000 unique visitors per month. These were the most visited posts in 2018:
Big open data, or is it? — about the amazing Volve dataset released by Equinor earlier this year. Don't miss the follow-up post about its licensing: Volve: not open after all, in which I discuss the license that Equinor chose.
x lines of Python: contour maps — making nice contour maps in Python (strictly a 2017 post, but almost all of its traffic came in 2018).
What is scientific computing? — the emergence of the 'digital geoscientist'.
Results from the AAPG machine learning unsession — what happened in Salt Lake City.
Jounce, crackle and pop — derivatives of displacement you never knew you cared about.
Last December's post, No more rainbows, got more traffic this year than any of these posts. And, yet again, k is for wavenumber got more than any. What is it with that post??
Every year I take a look at where people are reading the blog from (according to Google). We've travelled more than usual this year too, so I've added our various destinations to the map… it makes me realize we're still missing most of you.
Houston (number 1 last year)
London (up from 3)
Calgary (down from 2)
Stavanger (6)
New York (—)
Bangalore (—)
Jakarta (—)
Together these cities capture at least 15% of our readership. New York might be an anomaly related to the location of cloud infrastructure there. (Boardman, Oregon, shows up for the same reason.) But who knows what any of these numbers mean…
People often ask us how we earn a living, and sometimes I wonder myself. But not this year: there was a clear role for us to play in 2018 — training the next wave of digital scientists and engineers in subsurface.
We continued the machine learning project on GPR interpretation that we started last year.
We revived Pick This and have it running on a private corporate cloud at a major oil company, as well as on the Internet.
We have spent 63 days in the classroom this year, and taught 325 geoscientists the fundamentals of Python and machine learning.
Apart from the 6 events of our own that we organized, we were involved in 3 other public hackathons and 2 in-house hackathons.
We hired awesome digital geologist Robert Leckenby (right) full time.
The large number of people we're training at the moment is especially exciting, because of what it means for the community. We spent 18 days in the classroom and trained 139 scientists in the previous four years combined — so it's clear that digital geoscience is important to people today. I cannot wait to see what these new coders do in 2019 and beyond!
The hackathon trend is similar: we hosted 310 scientists and engineers this year, compared to 183 in the four years from 2013 to 2017. Numbers are only numbers of course, but the reality is that we're seeing more mature projects, and more capable coders, at every event. I know it's corny to say so, but I feel so lucky to be a scientist today, there is just so much to do.
Agile is, as they say, only wee. And we all live in far-flung places. But the Intertubes are a marvellous thing, and every week we meet new people and have new conversations via this blog, and on Twitter, and the Software Underground. We love our community, and are grateful to be part of it. So thank you for seeking us out, cheering us on, hiring us, and just generally being a good sport about things.
From all of us at Agile, have a fantastic festive season — and may the new year bring you peace and happiness.
December 20, 2018 / Matt Hall/ 3 Comments
retrospective, work, blogging
I'm dreaming of a blueschist Christmas
November 30, 2018 / Matt Hall
The festive season is speeding towards us at the terrifying rate of 3600 seconds per hour. Have you thought about what kind of geoscientific wonders to make or buy for the most awesome kids and/or grownups in your life yet? I hope not, because otherwise this post is pretty redundant… If you have, I'm sure you can think of <AHEM> at least one more earth scientist in your life you'd like to bring a smile to this winter.
I mean, here's a bargain to start you off: a hammer and chisel for under USD 15 — an amazing deal. The fact that they are, unbelievably, made of chocolate only adds to the uses you could put them to.
If your geoscientist is on a diet or does their fieldwork in a warm country, then obviously these chocolate tools won't work. You could always get some metal ones instead (UK supplier, US supplier).
Image © The Chocolate Workshop
Before you start smashing things to bits with a hammer, especially one that melts at 34°C, it's sometimes nice to know how hard they are. Tapping them with a chocolate bar or scratching them with your fingernail are time-tested methods, but the true geologist whips out a hardness pick.
I have never actually seen one of these (I'm not a true geologist) so the chances of your geoscientist having one, especially one as nice as this, are minuscule. USD 90 at geology.com.
Image © Geology.com
Hammers can be used around the house too, of course, for knocking in nails or sampling interesting countertops. If your geoscientist is houseproud, how about some of Jane Hunter's beautiful textile artworks, many of which explore geological and geomorphological themes, especially Scottish ones. The excerpt shown here is from Faults and Folds (ca. USD 1000); there are lots of others.
If textiles aren't your thing, these hydrology maps from Muir Way are pretty cool too. From USD 80 each.
Image © Jane Hunter
Topographic maps are somehow more satisfying when they are three-dimensional. So these beautiful little wooden maps from ElevatedWoodworking on Etsy, which seem too cheap to be true, look perfect.
There's plenty more for geoscientists on Etsy, if you can look past the crass puns slapped clumsily onto mugs and T-shirts. For example, if geostatistics get you going, start at NausicaaDistribution and keep clicking. My favourites: the Chisquareatops shirt and the MCMC Hammer cross-stitch pattern.
Image © ElevatedWoodworking on Etsy
I like statistics. Sometimes, not very often, people ask me where my online handle kwinkunks comes from. It's a phonetic spelling of one of my favourite words, quincunx, which has a couple of meanings, but the most interesting one is a synonym for a Galton board or bean machine. Galton boards are awesome! Demonstrate the central limit theorem right on your desktop! From USD 10: a cheap one, and an expensive one.
Oh, and there's a really lovely/expensive one from Lightning Calculator if your geoscientist is the sort of person who likes to have the best of everything. It costs USD 1190 and it looks fantastic.
Image © Random Walker
Let's get back to rocks. You can actually just give a rock to a geologist, and they'll be happy. You just might not see much of them over the holiday, as they disappear off to look at it.
If your geologist has worked in the North Sea in their career, they will definitely, 100% enjoy these amazing things. Henk Kombrink and Kirstie Wright are distributing chunks of actual North Sea core. The best part is that you can choose the well and formation the rock comes from! We gave some resinated core slabs away as prizes at the hackathons this month, and the winners loved them.
Image © Henk Kombrink
Traditionally, I mention some books. Not that I read books anymore (reasons). If I did read books, these are the ones I'd read:
Timefulness: How Thinking Like a Geologist Can Help Save the World, by Marcia Bjornerud, Princeton University Press. (Who doesn't want to save the world?)
The Writer's Map: An Atlas of Imaginary Lands, by Huw Lewis-Jones (ed), University of Chicago Press. (I like maps.)
Beyond Weird: Why Everything You Know About Quantum Physics is Different, by Philip Ball, University of Chicago Press. (Not geoscience, but I've enjoyed Philip Ball's books before.)
That's it for this year! I hope there's something here to brighten your geoscientist's day. Have fun shopping!
PS In case there's not enough here to choose from, you can trawl through the posts from previous years too:
2017: The post of Christmas present (this pun was, I believe, underrated)
2016: St Nick's list for the geoscientist
2015: Rockin' around the Christmas tree
2014: It's the GGGG (giant geoscience gift guide)
2013: All you want for Christmas
2012: How to make a geologist happy
2011: Giftological and giftophysical goodness
Unlike most images on agilescientific.com, the ones in this post are not my property and are not open access. They are the copyright of their respective owners, and I'm using them here in accordance with typical Fair Use terms. If owners object, please let me know.
November 30, 2018 / Matt Hall/ 2 Comments
gifts, Christmas, shopping
Life lessons from a neural network
August 03, 2018 / Matt Hall
The latest Geophysical Tutorial came out this week in The Leading Edge. It's by my friend Gram Ganssle, and it's about neural networks. Although the example in the article is not, strictly speaking, a deep net (it only has one hidden layer), it concisely illustrates many of the features of deep learning.
Whilst editing the article, it struck me that some of the features of deep learning are really features of life. Maybe humans can learn a few life lessons from neural networks!
Seek nonlinearity
Activation functions are one of the most important ingredients in a neural network. They are the reason neural nets are able to learn complex, nonlinear relationships without a gigantic number of parameters.
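If you like to see that in numbers, here's a minimal sketch in plain NumPy, with made-up shapes and nothing to do with Gram's actual network. Stack two linear layers with no activation and they collapse into one linear map; slip a ReLU between them and they don't:

import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=(100, 3))        # 100 made-up samples with 3 features

W1 = rng.normal(size=(3, 8))         # 'hidden layer' weights
W2 = rng.normal(size=(8, 1))         # 'output layer' weights

# With no activation, two stacked linear layers are still just one linear map:
assert np.allclose(x @ W1 @ W2, x @ (W1 @ W2))

# Put a ReLU between the layers and the collapse no longer happens, so the
# network can represent nonlinear functions of x.
relu = lambda z: np.maximum(z, 0)
y = relu(x @ W1) @ W2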
Life lesson: look for nonlinearities in your life. Go to an event aimed at another profession. Take a new route to work. Buy a random volume at your local bookshop. Pick that ice-cream flavour you've never dared try (durian, anyone?).
Iterate
Neural networks learn by repetition. They start with random guesses about what might work, then they process each data point a hundred, maybe 100,000 times, check the answer, adjust weights, and get a little better each time.
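A toy version of that loop (a single weight fit to made-up data by gradient descent, not Gram's network) looks something like this:

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + rng.normal(0, 0.1, 200)     # the 'truth' is a slope of 3, plus noise

w = rng.normal()                          # start with a random guess
for epoch in range(100):                  # repetition is the whole game
    y_pred = w * x                        # make a prediction
    grad = np.mean(2 * (y_pred - y) * x)  # check the answer (gradient of the MSE)
    w = w - 0.1 * grad                    # adjust the weight a little

print(w)                                  # ends up very close to 3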
Life lesson: practice makes perfect. You won't get anything right the first time (if you do, celebrate!). The important thing is that you pay attention, figure out what to change, and tweak it. Then try again.
One of the things we know for sure about neural networks is that they work best when they train on a lot of data. They need to see as much of the problem domain as possible, including the edge cases and the worst cases.
Life lesson: seek data. If you're a geologist, get out into the field and see more rocks. Geophysicists: look at more seismic. Whoever you are, read more. Afterwards, share what you find with others, and listen to what they have learned.
Stretch metaphors
Yes, well, I could probably go on. Convolutional networks teach us to create new things by mixing ideas from different parts of our experience. Long training times for neural nets teach us to be patient, and invest in GPUs. Hidden layers with many units teach us to... er, expect a lot of parameters in our lives...?
Anyway, the point is that life is like a neural net. Or maybe, no less interestingly, neural nets are like life. My impression is that most of the innovations in deep learning have come from people looking at their own interpretive and discriminatory powers and asking, "What do I do here? How do I make these decisions?" — and then trying to approximate that heuristic or thought process in code.
What's the lesson here? I have no idea. Enjoy your weekend!
Thumbnail image by Flickr user latteda, licensed CC-BY. The Leading Edge cover is copyright of SEG, fair use terms.
August 03, 2018 / Matt Hall/ 2 Comments
Fun, Machine Learning
neural networks, deep learning
Productive chaos
May 25, 2018 / Matt Hall
Wednesday was a good day.
Over 150 participants came to Room 251 for all or part of the first 'unsession' at the AAPG Annual Conference and Exhibition in Salt Lake City. I was one of the hosts of the event, and emceed the afternoon.
In a nutshell, it was awesome. I have facilitated unsessions before, but this event was on a new scale. Twelve tables of 8–10 seats — covered in sticky notes, stickers, coloured pens, and large sheets of paper — quickly filled up. Together, we burned about 10 person-weeks of human productivity, raising the temperature in the room by several degrees in the process.
Diversity means good conversation
On the way in, people self-identified as mostly software (blue name tags) or mostly soft rocks (red), as a non-serious way to get a handle on how many data scientists we had vs how many people are focused on the rocks themselves — without, I hope, any kind of value judgment. The ratio was about 1:2.
As people continued to drift in, we counted people identifying with various categories, to get a very rough idea of who was in the room. The results are shown here. In addition, I counted 24 women present at the start. Part of the point here is to introduce participants to each other, but there's another purpose too. AAPG, like many scientific organizations, is grappling with diversity today. Like others, it needs to do much better. A small part of the solution is, I think, to name it and measure how we're doing at every opportunity. It's one way to pay more attention.
Harder to capture is the profound level of job diversity. People responsible for billion-dollar budgets sat with graduate students, AAPG medal winners with SEC executives. We even had a venture capitalist and a physician.
Look at all these lovely people:
Tangible and intangible output
At the start of the session, I told the room I wanted to fill the walls with things we made — with data. We easily achieved this, producing a survey of the skills geoscientists will need in the future, hundreds of high-value machine learning tasks in geoscience, a ranked list of the most interesting of these, and even some problem analysis of some of them. None of this was definitive, but I hope it will provide grist for the mill of future conversations about machine learning in geoscience.
As well as these tangible products, each person in the room walked away with new connections and new ideas — about machine learning, about collaboration, and about what scientific meetings can be like.
A lot of people contributed to making this event happen.
My unsession co-chairs, Brendon Hall and Yan Zaretskiy of Enthought, spent several hours on the phone with me over the last few weeks, shaping the content and flow of an event that was a bit, er, fuzzy.
We seeded the tables with some of the Software Underground crowd who were in town for the hackathon and AAPG. This ensures that there's no failure case: twelve people are definitely coming. And in the unlikely event that 100 people come, there are twelve allies to manage some of the chaos. Heartfelt thanks to the table hosts:
Didi Ooi of the University of Bristol
Graham Ganssle of Expero
Lisa Stright of Colorado State University
Thomas Martin of Colorado School of Mines
Tom Creech of ExxonMobil
David Holmes of Dell EMC
Steve Purves of Euclidity
Diego Castaneda of Agile
Evan Bianco of Agile
Jenny Cole of SEG came along to observe the session and I appreciated her enthusiastic help as it became clear we were in for more than the usual amount of entropy in the room. Theresa Curry of AAPG did an amazing job getting the venue set up, providing refreshments, and ensuring the photographers were there to capture some of the action. The ACE 2018 organizing committee, especially Zane Jobe and Lauren Birgenheier, did their part by agreeing to support including such a weird-sounding thing in the program.
Finally, thank you to the 100+ scientists that came to the event, not knowing at all what to expect. It was a privilege to receive your enthusiastic participation and thoughtful contributions. Let's do it again some time!
We will digitize the ideas and products of the unsession over the coming weeks. They will be released under an open license. Watch this space for updates.
If you're interested in the methodology we use for these events, check out Proceedings of an unsession in CSEG Recorder, November 2013. If you'd like help running an event like this, get in touch.
May 25, 2018 / Matt Hall/ 2 Comments
Event, Fun, News
collaboration, conferences, unsession, innovation, brainstorming, AAPG18
The geospatial sport
An orienteer leaving a control site.
If you love studying maps or solving puzzles, and you love being outside, then orienteering — the thinking runner's sport — might be the sport you've been looking for.
There are many, many flavours of orienteering (on foot, on skis, in kayaks, etc), but here's how it generally works:
Competitors make their way to an event, perhaps on a weekday evening, maybe a weekend morning.
Several courses are offered, varying in length (usually 2 to 12 km) and difficulty (from walk-in-the-park to he's-still-not-back-call-search-and-rescue).
A course consists of about 20 or so 'controls', which must be visited in order. Visits are recorded on an electronic 'dibber' carried by the orienteer, or by shapes punched on a card.
Each person chooses a course, and is allotted a start time.
You can't see your course — or the map — until you start. You have 0 seconds to prepare.
You walk or run or ski or bike around the controls, at various speeds and in various (occasionally incorrect) directions.
After making it to the finish, everyone engages in at least 30 minutes of analysis and dissection of route choices and split times, while eating everything in sight.
The catch is that your navigation system is entirely analog: you are only allowed a paper map and an analog compass, plus a whistle for safety. The only digital components are the timing system and the map-making process — which starts with LiDAR and ends in a software package like OCAD or OOM.
Orienteering maps are especially awesome. They are usually made especially for the sport, typically at 1:5000 or 1:7500, with a 2.5 m or 5 m contour interval. Many small features are mapped, for example walls and fences, small pits and mounds, and even individual trees and boulders.
The sample orienteering map from the Open Orienteering Mapper software, licensed GNU GPL. White areas correspond to open, runnable (high velocity) woodland, with darker shades of green indicating slower running. Yellow areas are open. Olive green areas are out of bounds.
Other than the contours and paths, the most salient feature is usually the vegetation, which is always carefully mapped. Geophysicists will like this: the colours correspond more to the speed with which you can run than to the type of vegetation. Orienteering maps are velocity maps!
Here's part of another map, this one from Debert, Nova Scotia:
So, sporty cartophile friends, I urge you to get out and give it a try. My family loves it because it's something we can do together — we all get to compete on our own terms, with our own peers, and there's a course for everyone. I'm coming up on 26 years in the sport, and every event is still a new adventure!
World Orienteering Day — really a whole week — is in the last week in May. It's a great time to give orienteering a try. There are events all over the world, but especially in Europe. If you can't find one nearby, track down your national organization and check its listings for events near you.
May 03, 2018 / Matt Hall/ Comment
Fun, Geospatial
geospatial, sport, maps
It's Dynamic Range Day!
April 27, 2018 / Matt Hall
OK signal processing nerds, which side are you on in the Loudness War?
If you haven't heard of the Loudness War, you have some catching up to do! This little video by Matt Mayfield is kinda low-res but it's the shortest and best explanation I've been able to find. Watch it, then choose sides >>>>
There's a similar-but-slightly-different war going on in photography: high-dynamic-range or HDR photography is, according to some purists, an existential threat to photography. I'm not going to say any more about it today, but these HDR disasters speak volumes.
True amplitudes
The ideology at the heart of the Loudness War is that music production should be 'pure'. It's analogous to the notion that amplitudes in seismic images should be 'true', and just as nuanced. For some, the idea could be to get as close as possible to a live performance, for others it might be to create a completely synthetic auditory experience; for a record company the main point is to be noticed and then purchased (or at least searched for on Spotify). It reminds me a bit of the aesthetically
For a couple of decades, mainstream producers succumbed to the misconception that driving up the loudness — by increasing the mean amplitude, in turn by reducing the peaks and boosting the quiet passages — was the solution. But this seems to be changing. Through his tireless dedication to the cause, engineer Ian Shepherd has been a key figure in unpeeling this idée fixe. As part of his campaigning, he instituted Dynamic Range Day, and tomorrow is the 8th edition.
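If you'd rather see the effect in numbers than listen for it, here's a crude sketch with a synthetic signal and a deliberately brutal limiter, nothing like what a real mastering engineer would use. Watch the crest factor (peak divided by RMS) collapse:

import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 44100)
quiet = 0.1 * np.sin(2 * np.pi * 220 * t)            # a quiet passage
peaks = 0.9 * (rng.uniform(size=t.size) > 0.999)     # a few loud transients
signal = quiet + peaks

def crest_factor(x):
    return np.max(np.abs(x)) / np.sqrt(np.mean(x**2))

# 'Win' the Loudness War: squash the peaks, then turn everything back up.
limited = np.clip(signal, -0.2, 0.2) / 0.2

print(crest_factor(signal), crest_factor(limited))   # the dynamics are gone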
If you want to hear examples of well-produced, dynamic music, check out the previous winners and runners up of the Dynamic Range Day Award — including tunes by Daft Punk, The XX, Kendrick Lamar, and at the risk of dating myself, Orbital.
The end is in sight
I'll warn you right now — this Loudness War thing is a bit of a YouTube rabbithole. But if you still haven't had enough, it's worth listening to the legendary Bob Katz talking about the weapons of war.
My takeaway: the war is not over, but battles are being won. For example, Spotify last year reduced its target output levels, encouraging producers to make more dynamic records. Katz ends his video with "2020 will be like 1980" — which is a good thing, in terms of audio engineering — and most people seem to think the Loudness War will be over.
April 27, 2018 / Matt Hall/ Comment
Event, Fun, Science
music, signal processing
Happy π day, Einstein
It's Pi Day today, and also Einstein's 139th birthday. MIT celebrates it at 6:28 pm — in honour of pi's arch enemy, tau — by sending out its admission notices.
And Stephen Hawking died today. He will leave a great, black hole in modern science. I saw him lecture in London not long after A Brief History of Time came out. It was one of the events that inspired me along my path to science. I recall he got more laughs than a lot of stand-ups I've seen.
But I can't really get behind 3/14. The weird American way of writing dates, mixed-endian style, really irks me. As a result, I have previously boycotted Pi Day, instead celebrating it on 31/4, aka 31 April, aka 1 May. Admittedly, this takes the edge off the whole experience a bit, so I've decided to go full big-endian and adopt ISO-8601 from now on, which means Pi Day is on 3141-5-9. Expect an epic blog post that day.
Anyway, I will transcend the bickering over dates (pausing only to reject 22/7 and 6/28 entirely so don't even start) to get back to pi. It so happens that Pi Day is of great interest in our house this year because my middle child, Evie (10), is a bit obsessed with pi at the moment. Obsessed enough to be writing a book about it (she writes a lot of books; some previous topics: zebras, Switzerland, octopuses, and Settlers of Catan fan fiction, if that's even a thing).
I helped her find some ways to generate pi numerically. My favourite one uses Riemann's zeta function, which we'd recently watched a Numberphile video about. It's the sum, over the natural numbers \(n\), of the reciprocals of their \(s\)-th powers:
$$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}$$
Leonhard Euler solved the Basel problem in 1734, proving that \(\zeta(2) = \pi^2 / 6\), so you can compute pi slowly with a naive implementation of the zeta function:
def zeta(s, terms=1000):
    """Naive partial sum of the Riemann zeta function."""
    z = 0
    for t in range(1, int(terms)):
        z += 1 / t**s
    return z

(6 * zeta(2, terms=1e7))**0.5
Which returns pi, correct to 6 places:
Or you can use one of the various optimized versions of the zeta function, for example this one from the floating point math library mpmath (which I got from this awesome list of 100 ways to compute pi):
>>> from mpmath import *
>>> mp.dps = 50
>>> mp.pretty = True
>>> sqrt(6*zeta(2))
3.1415926535897932384626433832795028841971693993751068
...which is correct to 50 decimal places.
Here's the bit of Evie's book where she explains a bit about transcendental numbers.
Evie's book shows the relationships between the sets of natural numbers (N), integers (Z), rationals (Q), algebraic numbers (A), and real numbers (R). Transcendental numbers are real, but not algebraic. (Some definitions also let them be complex.)
I was interested in this, because while I 'knew' that pi is transcendental, I couldn't really articulate what that really meant, and why (say) √2, which is also irrational, is not also transcendental. Succinctly, transcendental means 'non-algebraic'. (All transcendental numbers are non-constructible, but the two ideas aren't equivalent: the cube root of 2 is algebraic yet not constructible.) Since √2 is obviously the solution to \(x^2 - 2 = 0\), it is algebraic and therefore not transcendental.
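To put that a bit more formally (this is the standard definition, not something from Evie's book): a number \(\alpha\) is algebraic if it satisfies

$$ a_n \alpha^n + a_{n-1}\alpha^{n-1} + \cdots + a_1 \alpha + a_0 = 0 $$

for some integers \(a_0, \ldots, a_n\) with \(a_n \neq 0\); transcendental numbers are the ones that satisfy no such equation. Lindemann proved in 1882 that \(\pi\) is one of them, which, incidentally, is the result that finally killed off squaring the circle.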
Weirdly, although hardly any numbers are known to be transcendental, almost all real numbers are. Isn't maths awesome?
Have a transcendental pi day!
The xkcd comic is by Randall Munroe and licensed CC-BY-NC.
Science, Fun, Event
mathematics, geeky, numbers
Jounce, Crackle and Pop
I saw this T-shirt recently, and didn't get it. (The joke or the T-shirt.)
It turns out that the third derivative of displacement \(x\) with respect to time \(t\) — that is, the derivative of acceleration \(\mathbf{a}\) — is called 'jerk' (or sometimes, boringly, jolt, surge, or lurch) and is measured in units of m/s³.
So far, so hilarious, but is it useful? It turns out that it is. Since the force \(\mathbf{F}\) on a mass \(m\) is given by \(\mathbf{F} = m\mathbf{a}\), you can think of jerk as being equivalent to a change in force. The lurch you feel at the onset of a car's acceleration — that's jerk. The designers of transport systems and rollercoasters manage it daily.
$$ \mathrm{jerk,}\ \mathbf{j} = \frac{\mathrm{d}^3 x}{\mathrm{d}t^3}$$
Here's a visualization of velocity (green line) of a Tesla Model S driving in a parking lot. The coloured stripes show the acceleration (upper plot) and the jerk (lower plot). Notice that the peaks in jerk correspond to changes in acceleration.
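If you want to play with this yourself, each derivative is numerically a one-liner. Here's a minimal sketch with made-up velocity samples at a uniform time step, not the Tesla data:

import numpy as np

dt = 0.1                                                  # sample interval in seconds
v = np.array([0.0, 0.4, 1.1, 2.0, 2.8, 3.2, 3.3, 3.3])    # velocity samples, m/s

a = np.gradient(v, dt)    # acceleration, m/s^2
j = np.gradient(a, dt)    # jerk, m/s^3 (it spikes wherever the acceleration changes)

print(a)
print(j)

Under the hood, np.gradient uses central differences in the interior, so it's a little smoother than naive differencing.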
The snap you feel at the start of the lurch? That's jounce — the fourth derivative of displacement and the derivative of jerk. Eager et al (2016) wrote up a nice analysis of these quantities for the examples of a trampolinist and roller coaster passenger. Jounce is sometimes called snap... and the next two derivatives are called crackle and pop.
What about momentum?
If the momentum \(\mathbf{p}\) of a mass \(m\) moving at a velocity \(\mathbf{v}\) is \(m\mathbf{v}\) and \(\mathbf{F} = m\mathbf{a}\), what is mass times jerk? According to the physicist Philip Gibbs, who investigated the matter in 1996, it's called yank:
"Momentum equals mass times velocity.
Force equals mass times acceleration.
Yank equals mass times jerk.
Tug equals mass times snap.
Snatch equals mass times crackle.
Shake equals mass times pop."
There are jokes in there, help yourself.
What about integrating?
Clearly the integral of jerk is acceleration, and that of acceleration is velocity, the integral of which is displacement. But what is the integral of displacement with respect to time? It's called absement, and it's a pretty peculiar quantity to think about. In the same way that an object with linearly increasing displacement has constant velocity and zero acceleration, an object with linearly increasing absement has constant displacement and zero velocity. (Constant absement at zero displacement gives rise to the name 'absement': an absence of displacement.)
Integrating displacement over time might be useful: the area under the displacement curve for a throttle lever could conceivably be proportional to fuel consumption for example. So absement seems to be a potentially useful quantity, measured in metre-seconds.
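Numerically, it's just a cumulative integration. Here's a sketch with made-up displacement samples (again, not the Tesla data):

import numpy as np

dt = 0.1
x = np.array([0.0, 0.0, 0.1, 0.4, 0.9, 1.5, 2.0, 2.3, 2.4, 2.4])   # displacement, m

def cumulative_integral(y, dt):
    """Cumulative trapezoidal integration; the output has the same length as y."""
    return np.concatenate([[0.0], np.cumsum(dt * (y[1:] + y[:-1]) / 2)])

absement = cumulative_integral(x, dt)   # metre-seconds; keep integrating for the rest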
Integrate absement and you get absity (a play on 'velocity'). Keep going and you get abseleration, abserk, and absounce. Are these useful quantities? I don't think so. A quick look at them all — for the same Tesla S dataset I used before — shows that the loss of detail from multiple cumulative summations makes for rather uninformative transformations:
You can reproduce the figures in this article with the Jupyter Notebook Jerk_jounce_etc.ipynb. Or you can launch a Binder right here in your browser and play with it there, without installing a thing!
David Eager et al (2016). Beyond velocity and acceleration: jerk, snap and higher derivatives. Eur. J. Phys. 37 065008. DOI: 10.1088/0143-0807/37/6/065008
Amarashiki (2012). Derivatives of position. The Spectrum of Riemannium blog, retrieved on 4 Mar 2018.
The dataset is from Jerry Jongerius's blog post, The Tesla (Elon Musk) and New York Times (John Broder) Feud. I have no interest in the 'feud', I just wanted a dataset.
The T-shirt is from Chummy Tees; the image is their copyright and used here under Fair Use terms.
The vintage Snap, Crackle and Pop logo is copyright of Kellogg's and used here under Fair Use terms.
March 06, 2018 / Matt Hall/ 2 Comments
Fun, Science
mathematics, velocity, calculation, physics, mechanics
This year's social coding events
January 31, 2018 / Matt Hall
If you've always wondered what goes on at our hackathons, make 2018 the year you find out. There'll be plenty of opportunities. We'll be popping up in Salt Lake City, right before the AAPG annual meeting, then again in Copenhagen, before EAGE. We're also running events at the AAPG and EAGE meetings. Later, in the autumn, we'll be making some things happen around SEG too.
If you just want to go sign up right now, head to the Events page. If you want more deets first, read on.
Salt Lake City in May: machine learning and stratigraphy
This will be one of our 'traditional' hackathons. We're looking for 7 or 8 teams of four to come and dream up, then hack on, new ideas in geostatistics and machine learning, especially around the theme of stratigraphy. Not a coder? No worries! Come along to the bootcamp on Friday 18 May and acquire some new skills. Or just show up and be a brainstormer, tester, designer, or presenter.
Thank you to Earth Analytics for sponsoring this event. If you'd like to sponsor it too, check out your options. The bottom line is that these events cost about $20,000 to put on, so we appreciate all the help we can get.
It doesn't stop with the hackathon demos on Sunday. At the AAPG ACE, Matt is part of the team bringing you the Machine Learning Unsession on Wednesday afternoon. If you're interested in the future of computation and geoscience, come along and be heard. It wouldn't be the same without you.
Copenhagen in June: visualization and interaction
After events in Vienna in 2016 and Paris in 2017, we're looking forward to being back in Europe in June. The weekend before the EAGE conference, we'll be hosting the Subsurface Hackathon once again. Partnering with Dell EMC and Total E&P, as last year, we'll be gathering 60 eager geoscientists to explore data visualization, from plotting to virtual reality. I can't wait.
In the EAGE Exhibition itself, we're cooking up something else entirely. The Codeshow is a new kind of conference event, mixing coding tutorials with demos from the hackathon and even some mini-hackathon projects to get you started on your own. It's 100% experimental, just the way we like it.
Anaheim in October: something exciting
We'll be at SEG in Anaheim this year, in the middle of October. No idea what exactly we'll be up to, but there'll be a hackathon for sure (sign up for alerts here). And tacos, lots of those.
You can get tickets to most of these events on the Event page. If you have ideas for future events, or questions about them, drop us a line or leave a comment on this post!
I'll leave you with a short and belated look at the hackathon in Paris last year...
A quick look at the Subsurface Hackathon in Paris, June 2017.
January 31, 2018 / Matt Hall/ Comment
Event, Fun
hackathon, events, AAPG, EAGE, collaboration
Use the accompanying radiation levels

The recurring statistics exercise on this page reads: "Use the accompanying radiation levels (in W/kg) for 50 different cell phones. Click the icon to view the radiation levels." Depending on the variant, you are asked to find a percentile (P20, P30, P40, P50, P60, P75 or P90), a quartile (Q1 or Q3), or the percentile corresponding to a particular value such as 0.48, 0.86, 1.44 or 1.51 W/kg, giving answers as integers or decimals rounded to two decimal places (or to the nearest whole number for a percentile rank). In one worked variant, a radiation level of 0.86 W/kg corresponds to the 20th percentile. An accompanying video demonstrates how to find percentiles in a data set using StatCrunch.

Related exercises on the same page ask you to construct boxplots and five-number summaries on the same scale for two data sets (pulse rates of males and females, or the ages of best actors and best actresses) and compare them; to find the average growth factor for money compounded at annual interest rates of 14%, 6% and 3% by computing the geometric mean of 1.14, 1.06 and 1.03, where the single percentage growth rate is found by subtracting 1 from the growth factor and then multiplying by 100%; and to decide whether the tallest living man at one time (208 cm) or the shortest had the more extreme height, given the mean (169.72 cm) and standard deviation of men's heights at that time.

The page also collects assorted background notes on radiation:

The parameter used to measure phone radiation emissions is the SAR value. Published guidelines define the Specific Absorption Rate (SAR) as a measure of the rate at which body tissue absorbs radiation during cell phone use, and set a maximum of 1.6 W/kg. Samsung cell phones are among those with the lowest SAR values when used against the head. Cell phone radiation levels are rarely available at retail locations, and when buying a new smartphone the SAR level is just one of many features to check, not a deal breaker.

Radiation dose is the amount of radiation absorbed by the body. A sievert (Sv) is a unit of effective dose: 1 Sv = 100 rem, so 1 rem = 0.01 Sv and 1 mSv = 100 mrem. There are four major types of radiation: alpha particles (two protons and two neutrons), beta particles, neutrons, and electromagnetic waves such as gamma rays. All of us are exposed to radiation every day, from natural sources such as minerals in the ground and man-made sources such as medical X-rays. At sea level the average cosmic radiation dose is about 26 mrem per year, and it increases at higher elevations as the shielding atmosphere thins. A chest X-ray on plain film delivers roughly ten days of natural exposure, and a two-dimensional panoramic dental image about two or three days. A single high-level exposure (greater than 100 mSv) delivered to the whole body over a very short period of time carries potential health risks, while the short half-life of technetium-99m helps keep doses from nuclear medicine low. Radon, a natural radioactive gas found in rock formations, can pose health risks; for levels below about 100 Bq/m³ the individual risk remains relatively low, but it increases as the radon level increases.

For fallout from a fission detonation, the seven-ten rule applies: for every seven-fold increase in time after the blast (starting at or after 1 hour), the radiation intensity decreases by a factor of 10, so after 7 hours the residual fission radioactivity has declined by 90%, to one-tenth of its 1-hour level. A "high radiation area" is defined in NAC 459.042 as any area accessible to persons in which radiation exists at such levels that a person could receive a dose equivalent in excess of 0.1 rem in 1 hour at 30 cm from the source, or from any surface that the radiation penetrates. The threat zones displayed by ALOHA represent thermal radiation levels, and the accompanying text indicates the effects on people who are exposed to those levels but are able to seek shelter within one minute. The CDC's Radiation Hazard Scale provides a frame of reference for the relative hazards of radiation in emergencies. In the areas affected by the Chernobyl accident, caesium-137 levels are generally over 20 kBq/m², with local maxima up to 140 kBq/m², and radiation levels around the plant rose again after heavy fighting in the area.

Exposure to high levels of ionising radiation can result in mutation, radiation sickness, cancer, and death, but radiation also has useful applications in medicine, agriculture, archaeology (carbon dating), space exploration, law enforcement, geology (including mining), and the generation of electricity.
Using the percentile concept, it is found that a radiation level of 0. The International Medical Device EMC Standard—IEC 60601. Dec 01, 2015 · Glioblastoma is an aggressive brain cancer that remains extremely difficult to treat. w Use the accompanying radiation levels for 50 different cell phones Find the percentile P20 kg w P20 = 0. Type these out with a definition in English and keep them handy for your next written activity. A Chart to Better Understand Radiation Levels and Their Effects. 6 watts of energy absorbed per kilogram of body weight. For levels below 100 Bq m, individual risk remains relatively low and not a cause for concern. However, the risk increases as the radon level increases. Use the accompanying radiation levels the percentile P40 P40= (in W) for 50 different cell phones. Use the accompanying radiation levels in Wkg for 50 different cell phones. • Get the data Radiation exposure levels compared. Find the percentile corresponding to 0. Population exposure Exposure of the population occurs through three main pathways: inhalation of airborne material, external irradiation from material deposited on the ground, and. Math Statistics Use the accompanying radiation levels in W for 50 different cell phones. Do not Get more out of your subscription* Access to over 100 million course-specific study resources 24/7 help from Expert Tutors on 140+ subjects Full access to over 1 million Textbook Solutions. W Use the accompanying radiation levels s (in fr for 50 different cell phones. The standard readings for the area hover around. Keep a vocab list. Use the accompanying radiation levels in WY for 50 different cell phones. 44 W kg Click the icon to view the radiation levels. Use the accompanying radiation levels for 50 different cell phones Find the quarie d; 0. High absorbed dose is defined as more than about 1000 mGy. 48 kg Click the icon to view the radiation levels. The levels of caesium-137 in these areas in general are over 20 kBq/m2, with local maxima up to 140 kBq/m2 (see accompanying figure, page 28). \frac{\mathrm{W}}{\mathrm{kg}}\right) \) for 50 different cell phones. Use the accompanying radiation levels in for 50 different cell phones. The Guardian (Sources: WNA, Reuters, radiologyinfo. This is a high dose- level value, but it is a local value at a single location and at a certain point in time. Thermal radiation from hot gas can be seen in the blue 'shells' around the lobes, particularly to the south (bottom). They are easy to use and can help control levels of radiation at residence, medical facility, and work. Radiation Sources and Doses. The equipment used was a telescope of 100 mm in diameter and 900 mm in focal length, fitted with a Digital Single Lens Reflex (DSLR) camera . Solution for W Use the accompanying radiation levels for 50 different cell phones. Preview your answer before submitting! Question Help: Message instructor D Post to forum Submit Question Jump to. (W/kg) heavenlymell is waiting for your help. Geiger counter prices. The RF signal shall be modulated at either 2 Hz or 1000 Hz depending on the intended use of the equipment under test. More than 40 sites across Iraq are contaminated with high levels or radiation and dioxins, with three decades of war and neglect having left environmental ruin in large parts of the country,. 36 kg W is (Round to the nearest whole number as needed. The penetrating component of cosmic radiation is now believed to consist of mesons—particles with a mass about one-tenth of that of a proton—while the soft . 
Abstract One of the distinguishing characteristics of the atomic bombs used against Hiroshima and Nagasaki was their accompanying radiation . Use the calculator below to estimate your yearly dose Effective dose is a measure of the amount of radiation absorbed by a person that . w Use the accompanying radiation levels in kg for 50 different cell phones. Find the percentile P 60. Find the percentile Upper P 40. W The percentile corresponding to 0. 86 is the 10th element in the ordered set, hence: 10/50 = 0. These particles consist of two protons and two neutrons and are the heaviest type of radiation particle. Find the percentile P90 kg W (Type an integer or decimal rounded to two decimal places as. Click image for graphic As radiation exposure around the Fukushima nuclear power plant reach levels of 400mSv per hour (although they've. Use the accompanying radiation levels in W for 50 different cell phones. View the full answer. Longer exposure durations, even at a lower thermal radiation level, can produce serious physiological effects. The percentile corresponding to 0. The lobes only emit in the radio. Conveys meaning without using radiation measurements or units that are unfamiliar to people. Find kg the percentile Po W P90 (Type an integer or decimal rounded to two decimal places as needed) kg Get more. 0 Restricted Area Designation Procedure, Radiation Safety …. chap 3 and 4 statistic test Flashcards. in drinking-water wells increased with proximity to the nearest shallow groundwater for household and agricultural use—up to. Use the accompanying radiation levels \ ( \left (\right. Under $150, you'll find lower-end devices with limited ranges, features, and other capabilities. \frac {W} {\mathrm {~kg}}\right) \operatorname {for} 50 \) different cel phones. 51 Kg X Radiation Levels. Find the percentile corresponding to 0. The standard readings for the area hover around. If we know the ambient radiation level, we easily can calculate our radiation dose. Solution for Use the accompanying radiation levels in W for 50 different cell phones. All of us are exposed to radiation every day, from natural sources such as minerals in the ground, and man-made sources such as medical x-rays. The Centers for Disease Control and Prevention has developed the Radiation Hazard Scale as a tool for communication in emergencies. The animals were exposed for 10-minute on, 10-minute off increments, totaling just over 9 hours each day. cervical and transhiatal approach using mediastinoscope and perforation accompanying pyogenic spondylodiscitis: a case report. W Use The Accompanying Radiation Levels Kg Click The Icon To View The Radiation Levels S ( W For 50 Different Cell Phones. Use the accompanying radiation levels \( \left(\right. The dangers of radon exposure. Use the accompanying radiation levels. Quiz 1 Statistics Flashcards. Find () W (Type an integer or decimal rounded to two decimal places as needed. How Much Radiation is Emitted by Popular Smartphones?. They also detect the radiation level of items such as clothes, shoes, furniture, soil, and much more. To enter a number like 577, type 5*sqrt(7). Find the percentile \( \mathrm{P}_{60} \) - \( P_{60}=\frac{W}{\mathrm{~kg}} \) (Type an integer or decimal rounded to two decimal places as needed. Surge testing is also covered in IEC 60601-1-2, and the testing is done as per IEC 61000-4-5 ( EMC – Part 4-5: Testing and Measurement Techniques – Surge Immunity Test ). High Exposure to Radiofrequency Radiation Linked to Tumor Activity in. 
Since its initial launch, Dead By Daylight has gained a large amount of popularity and has quickly become the Monster Of The Week game with how regularly they add creeps and ghouls from across the spectrum of the Horror genre. Find the percentile \( \mathrm{P}_{60} \) - \(. Find kg the percentile Po W P90 (Type an integer or decimal rounded to two decimal places as needed) kg Get more help - Tutoring Help me solve this Sucation 0. We are using cookies on this web page. 500 kg (Round to the nearest whole number as needed) 0 19 023 0. The threat zones displayed by ALOHA represent thermal radiation levels; the accompanying text indicates the effects on people who are exposed to those thermal radiation levels, but are able to seek shelter within one minute. Listed below are the measured radiation absorption rates (in W/kg) corresponding to 11. What is maximum/permissible radiation exposure limit in India?. The gamma-rays from the annihilation of N13 positrons were used for . View the full answer Transcribed image text: Use the accompanying radiation levels ( in kgW) for 50 different cell phones. 43kgW is Radiation Levels (Round to the nearest whole number as needed. 241-2 du to be exposed to radiation above the exposure limit values, . National Weather Service Advanced Hydrologic Prediction Service (AHPS) weather. From follow-up studies of the Japanese atomic bomb survivors, we know acute exposure to very high radiation doses can increase the occurrence of cancer. Accounting Business Managerial Accounting STAT 200 6383 Answer & Explanation Unlock full access to Course Hero. More than 40 sites across Iraq are contaminated with high levels or radiation and dioxins, with three decades of war and neglect having left environmental ruin in large parts of the country,. The percentile corresponding to 1. The three levels of government are local, state and federal. THE AWARENESS OF CAREGIVERS ABOUT THEIR CHILDREN'S EXPOSURE TO IONIZING RADIATION ACCOMPANYING MEDICAL PROCEDURES: THE ASSESSMENT STUDY. Use the accompanying radiation levels (in W/kg) for 50 Last updated: 7/27/2022 Use the accompanying radiation levels (in W/kg) for 50 different cell phones. Now, let's look at the different kinds of radiation. Sources: National Council on Radiation Protection & Measurements (NCRP), Report No. Do not use a decimal approximation for square roots. According to the National Council on Radiation Protection and Measurements (NCRP), the average annual radiation dose per person in the U. Is designed for use only in radiation emergencies and is applicable for short-term exposure durations, for example, over a period of several days. Using the percentile concept, it is found that a radiation level of 0. Find the percentile P90 kg W (Type an integer or decimal rounded to two d | solutionspile. A Chart to Better Understand Radiation Levels and. At 117th min after injection, dose rates were determined as 345, 220, 140, 50 and 15 µSv h(-1), at proposed distances. Find the percentile corresponding to 0. kg W Q₁ = (Type an integer or decimal rounded to two decimal places as. Find the percentile Pas- 056 0. The radiation levels in the worst-hit areas of the reactor building, including the control room, have been estimated at 300Sv/hr, (300,000mSv/hr) providing a fatal dose in just over a minute. A single high-level radiation exposure (i. Solution for Use the accompanying radiation levels in W for 50 different cell phones. 86 (W/kg) corresponds to the 20th percentile. "Invisible Contracts" by George Mercier. 
w Use the accompanying radiation levels for 50 different cell phones Find the percentile P20 kg w P20 = 0. dcom default authentication level; real wood buffets; eternium daily quest guide hobby lobby clear charger plates. Since the set of data given are already arranged in ascending order in order to find the P50 it is also known as the median or the middle number of the dataset. For example, technetium-99m, one of the most common medical isotopes used for imaging studies, has a half-life of 6 hours. Nuclear Regulatory Commission (NRC) requires that its licensees limit human-made radiation exposure for individual members of the public to 1mSv per year, and limit occupational radiation exposure to adults working with radioactive material to 50mSv per year (3-25 uSv/hr). 50 W P75 = ka (Type an integer or a Get more out of your subscription* Access to over 100 million course-specific study resources; 24/7 help from Expert Tutors on 140+ subjects;. But, to get accurate and reliable measurements, we need to have both the right instrument and a trained operator. 57 (W/kg) Head: Samsung Galaxy S21+ (SM-G996U) 0. Find kg the percentile P30- W P30 = (Type an integer or decimal rounded to two decimal places as. 3% of their parents used "reward", while 37. 31 Explanation: The data is already arranged in ascending order. They differ in mass, energy and how deeply they penetrate people and objects. percentile of 1. 65 (W/kg) Head: Samsung Galaxy Z Fold3 (SM-F926U). Solved] W Use the accompanying radiation levels i. W Use the accompanying radiation levels in the percentile P40 for 50 different cell phones Find 0. The sievert is used for radiation dose quantities such as equivalent dose and effective dose, which represent the risk of external radiation from sources . The Samsung cell phones are among the lowest SAR values when used against the head. Use the accompanying radiation levels(in W/kg) for 50. \frac{W}{\mathrm{~kg}}\right) \) for 50 different coll phcoes. Although scientists have only known about radiation since the 1890s, they have developed a wide variety of uses for this natural force. Consequently, consumers cannot easily identify low-radiation phones. The percentile corresponding to 1. The SAR for cell phone radiation was set at a maximum of 1. Use the given data to construct a box plot and identify the 5-number summary. W Use The Accompanying Radiation Levels Kg Click The Icon To View The Radiation Levels S ( W For 50 Different Cell Phones. There are four major types of radiation: alpha, beta, neutrons, and electromagnetic waves such as gamma rays. 86 kg Type an integer or decimal rounded to two decimal places as needed. Gamma radiation has increased to 20 times its usual levels in the area. W Use The Accompanying Radiation Levels Kg Click The Icon To View The Radiation Levels S ( W For 50 Different Cell Phones. Solution for Use the accompanying radiation levels in W for 50 different cell phones. The researchers used questionnaires to gather work-related and risk were similar in the radiation and non-radiation exposure groups. STA 2023 Use the accompanying radiation levels W in kg for 50 different cell phones. 5 to 6 watts per kilogram (W/kg) in rats, and 2. The four levels of comprehension are literal, interpretive, applied and appreciative. 
Yet, despite the differential in comfort levels in the use of such semantics, I go right ahead and use this characterization anyway because its use, all by itself, enhances the important distinction between Common Law Jurisdiction and Kings Equity Jurisdiction Using a confluence of monochromatic radiation sources. 042 as follows: "High Radiation Area means any area accessible to persons in which radiation exists at such levels that a person could receive a dose equivalent in excess of 0.
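A minimal Python sketch of the percentile bookkeeping described above is given below. The SAR values are randomly generated placeholders, not the actual data set from the exercise, and the percentile convention follows the worked example (position in the ordered list divided by the sample size).

```python
import numpy as np

# Hypothetical SAR readings (W/kg) for 50 cell phones -- placeholder values,
# not the actual data set referenced in the exercise.
rng = np.random.default_rng(0)
sar = np.sort(np.round(rng.uniform(0.2, 1.6, size=50), 2))

def percentile_of(data_sorted, value):
    """Percentile of `value` using the convention of the worked example:
    (position of the value in the ordered list) / (number of values) * 100."""
    rank = np.searchsorted(data_sorted, value, side="right")  # 1-based position
    return round(100.0 * rank / data_sorted.size)

# Percentile corresponding to the 10th ordered reading (20th percentile if unique).
print("percentile of", sar[9], "=", percentile_of(sar, sar[9]))

# P50 is the median: the mean of the 25th and 26th ordered values for n = 50.
p50 = 0.5 * (sar[24] + sar[25])
print("P50 =", round(p50, 2))
```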
Application of computational experiments based on the response surface methodology for studying the recirculation zone in the Y-shaped channel
Elham Omidbakhsh Amiri
Department of Chemical Engineering, Faculty of Engineering, University of Mazandaran, Babolsar 47416-95447, Iran
[email protected]
Fluid flow in branched channels is worth studying because of its applications in many industrial and engineering systems. In this work, computational experiments are applied to understand the fluid behavior in the Y-shaped channel, with a focus on the recirculation zones and the recirculation length. Two types of Y-shaped channel are used, referred to as the straight Y-shaped channel and the diagonal Y-shaped channel. From initial considerations, the Reynolds number and the branching angle were identified as the effective parameters. Computational experiments based on the Response Surface Methodology (Central Composite Design) were used to build predictive models from Computational Fluid Dynamics data, and the effect of these parameters was studied. Results show that the recirculation length decreases with increasing angle and decreasing Reynolds number. In the diagonal Y-shaped channel, the angle is an important parameter and must be considered in such studies.
CFD, computational experiments, recirculation length, Y-shape
Splitting a fluid flow that passes through a channel into two branches is widely applicable in biomedical and engineering systems [1-3]. Different structures are used, such as the T-shape and the Y-shape. In the T-shaped structure, the conventional configuration, the flow passes through the inlet channel and then divides into two sections with an angle of 180º between them. In the Y-shaped structure this angle can differ from 180º. For engineering studies, it is necessary to better understand the flow in the branches. Investigations of fluid flow in branches have shown that the geometry of the branches can play an important role in the distribution of fluid and the velocity field. One of the early studies in this context is the work of Bramley and Dennis [4]. They studied two-dimensional flow in an angled branch. Two geometries, with a wide and a narrow daughter tube, were considered. Stream functions were used for the partial differential equations. In their work, two methods were used: an expansion derived by Moffatt for small Reynolds numbers, or the use of extra points near the sharp corners. They studied the separation of the downstream flow at different Reynolds numbers. They found that at low Reynolds numbers the flow separates at a small distance downstream. With increasing Reynolds number in the wide daughter tube, the separation point moves towards the sharp corner.
This line of research has continued, with researchers using different methods and procedures. Some studies considered different flow parameters. Singh et al. [5] studied the fluid flow in a Y-shaped branched pipe. They used Computational Fluid Dynamics (CFD) with the SolidWorks Flow Simulation package. Velocity profiles for different angles were considered, and the resistance coefficient and pressure drop were obtained. Their results show that the total secondary flow is reduced until it reaches a steady value at 90º. Also, the CFD analysis indicates that the resistance coefficient becomes zero at a bend angle of 45°. Uppin et al. [6] numerically investigated the influence of the velocity on the flow parameters in a Y-duct with a 45º branching angle. Their results showed that the inlet dynamic pressure is high. When the fluid diverges into two paths at the junction, the dynamic pressure decreases and the velocity drops at the outlets. Also, as the velocity increases, the mass distribution becomes uneven. This unevenness in the flow distribution is a function of velocity and turbulence; it is uniform for lower velocities. Kumar and Khadabadi [7] considered the flow parameters and structural analysis of a Y-duct. Their results show that the pressure drop is high for a bend angle of 45º and decreases as the bend angle increases. At 45º branching, owing to less turbulence, a uniform pressure distribution at the outlets can be achieved. Yadav et al. [8] studied the numerical simulation of shear-induced particle migration through a Y-shaped bifurcation channel. The mass, momentum and particle conservation equations were solved simultaneously with OpenFOAM, which is based on the finite volume method. The effect of the bifurcation angle and the bulk particle concentration on the velocity and concentration profiles in the bifurcation was studied. Asymmetric velocity and concentration profiles were found as a result of the migration.
Computational Fluid Dynamics (CFD) is a useful tool for analyzing systems involving momentum, heat and species transfer by means of computer simulations [9]. Because of the large number of elements, CFD runs usually take a long time; this burden can be reduced using the Design of Experiments (DOE) methodology. DOE can establish a relation between the inputs and outputs of a process [10]. It is useful for estimating the effect of the independent variables on the response, and it can reduce the number of simulations required for the analysis of process data.
In this work, computational experiments are applied to understand the effect of a geometric parameter (the branching angle) and the Reynolds number on the flow field in the Y-shaped channel. The formation of the recirculation zone and the dependence of its length on these parameters are studied with CFD models. Computational experiments based on a common DOE method (the Response Surface Methodology, RSM) are used to predict the effect of the parameters from the CFD data.
2. System Description
In this work, laminar water flow is studied in a 2-D Y-shaped channel. Two types of Y-shaped channel are used, as shown in Figure 1. The width (D) of all sections of both channels is 1 mm and is constant in all models. Both channels have three sections: an inlet channel of length L0 (equal to 10 times D) and two outlet sections with an angle α between them. In the straight Y-shaped channel (Figure 1(a)), the two outlet sections are named the main channel and the side channel, respectively. In the diagonal Y-shaped channel (Figure 1(b)), the two outlet sections are named the up channel and the down channel. Recirculation zones can be seen in both outlet sections. The recirculation length is denoted Lr. The ratio of the recirculation length to the channel width (D) is used in the analysis of the results as the non-dimensional recirculation length.
Figure 1. Geometries of two types of Y-shaped channel
3. Modeling Procedure
3.1 CFD modeling
In this model, a steady-state flow of an incompressible fluid is assumed. The governing equations for this system consist of the continuity and momentum equations, listed in Eqs. (1)-(3), respectively.
$\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0$ (1)
$u \frac{\partial u}{\partial x}+v \frac{\partial u}{\partial y}=-\frac{\partial P}{\partial x}+\mu\left[\frac{\partial^{2} u}{\partial x^{2}}+\frac{\partial^{2} u}{\partial y^{2}}\right]$ (2)
$u \frac{\partial v}{\partial x}+v \frac{\partial v}{\partial y}=-\frac{\partial P}{\partial y}+\mu\left[\frac{\partial^{2} v}{\partial x^{2}}+\frac{\partial^{2} v}{\partial y^{2}}\right]$ (3)
In the above equations, u and v are the x and y components of the velocity. A no-slip condition is applied at all walls. The inlet flow is water with a specified velocity.
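The design points used later are specified by the Reynolds number, so the inlet velocity for each CFD run has to be recovered from it. The short sketch below does that conversion; it assumes, since the text does not state it explicitly, that Re is based on the channel width D and on water properties at about 20 °C.

```python
# Convert a target Reynolds number to the inlet velocity of a CFD run.
# Assumptions (not given explicitly in the paper): Re = rho * u * D / mu,
# with D the channel width and water properties at roughly 20 degC.
RHO = 998.2      # kg/m^3, water density
MU = 1.002e-3    # kg/(m.s), water dynamic viscosity
D = 1.0e-3       # m, channel width

def inlet_velocity(reynolds: float) -> float:
    """Inlet velocity (m/s) corresponding to a given Reynolds number."""
    return reynolds * MU / (RHO * D)

for re in (200, 300, 400, 600):
    print(f"Re = {re:4d} -> u_in = {inlet_velocity(re):.4f} m/s")
```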
The governing equations (1)-(3) are solved with the finite volume method over control volumes. A first-order upwind discretization scheme was used for the governing equations except for the pressure. The SIMPLE algorithm was employed to couple pressure and velocity in the convection-diffusion equations. The convergence criterion is 10^-5 for all parameters. Quadrilateral grids were used throughout the computational domain. The optimal grid size and type were determined by testing several grids for each model, to ensure that the mesh resolution did not influence the results.
3.2 Computational experiments
In this work, the DOE methodology is used. DOE methods extract useful information from a limited number of cases, so they are well suited to CFD models that require long run times. From initial considerations, it was found that two parameters, the angle and the Reynolds number (Re), affect the recirculation length. A set of computational experiments was designed with these two parameters, based on the Response Surface Methodology (RSM) with a Central Composite Design (CCD), giving the design points listed in Table 1. The CFD model was run for each design point, and the response (the recirculation length) was obtained. Then, using analysis of variance (ANOVA), the proposed correlation was fitted. Some verification points (Table 1) were used to check the validity of the proposed correlations.
Table 1. Design points from the CCD method and verification points for checking the validity of the proposed model
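As an illustration of how a two-factor central composite design of this kind can be generated, the sketch below builds the coded CCD points and maps them to physical (Re, α) values. The factor ranges and the axial distance are illustrative assumptions; the actual design points of Table 1 are not reproduced in the text.

```python
import numpy as np

# Two-factor central composite design (CCD) in coded units:
# 4 factorial points, 4 axial points at distance `alpha_ax`, and 1 center point.
alpha_ax = np.sqrt(2.0)  # rotatable design for two factors (an assumption)
coded = np.array(
    [[-1, -1], [1, -1], [-1, 1], [1, 1],        # factorial points
     [-alpha_ax, 0], [alpha_ax, 0],             # axial points, factor 1
     [0, -alpha_ax], [0, alpha_ax],             # axial points, factor 2
     [0, 0]]                                    # center point
)

# Illustrative factor ranges: Reynolds number and branching angle (deg).
re_lo, re_hi = 200.0, 400.0
ang_lo, ang_hi = 30.0, 60.0

def decode(col, lo, hi):
    """Map coded levels (-1 .. +1) to physical values."""
    center, half = 0.5 * (hi + lo), 0.5 * (hi - lo)
    return center + half * col

design = np.column_stack(
    (decode(coded[:, 0], re_lo, re_hi), decode(coded[:, 1], ang_lo, ang_hi))
)
for re, ang in design:
    print(f"Re = {re:6.1f}, angle = {ang:5.1f} deg")
```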
3.3 Validation of CFD modeling
In this section, the model is validated against the work of Hayes et al. [11], who studied the flow characteristics of a Newtonian fluid in a 90-degree branch. The variation of the non-dimensional recirculation length with the Reynolds number is presented in Figure 2. The average percentage deviation of the values obtained in this study from those given by Hayes et al. [11] is found to be about 4%.
Figure 2. Validation of the model
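The average percentage deviation quoted above is understood here in the usual sense (the text does not spell the formula out), with the reference values taken from Hayes et al. [11]:
$\mathrm{APD}=\frac{100}{N} \sum_{i=1}^{N}\left|\frac{(L_{r}/D)_{i}^{\mathrm{CFD}}-(L_{r}/D)_{i}^{\mathrm{ref}}}{(L_{r}/D)_{i}^{\mathrm{ref}}}\right| \approx 4\%$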
4.1 Initial considerations
Firstly, the effective parameters should be determined. Figure 3 shows the recirculation zones in the straight Y-shaped channel. It can be seen that there is a recirculation zone in each outlet section, with lengths Lr1 and Lr2. Figure 4 shows these two recirculation zones for two different Reynolds numbers, while the angle is equal to 45º in both models.
Figure 3. Two recirculation zones with lengths Lr1 and Lr2 in the straight Y-shaped channel (α=45º, Re=200)
Figure 4. Effect of Reynolds number on two recirculation lengths in straight Y-shaped channel, α=45º, (a) Re=200, (b) Re=600
Another parameter that is expected to affect the recirculation length is the angle between the two outlet sections (α). So, in the next step, this parameter was considered. Figure 5 shows the recirculation zones for two different angles, while the Reynolds number is equal to 300 in both models. The angle affects the recirculation length, but not as strongly as the Reynolds number, and its effect on Lr1 is much clearer than on Lr2. It appears that the recirculation length decreases with increasing angle; however, this result must be examined further. Also, the angle seems to have less effect than the Reynolds number.
Figure 5. Effect of angle on two recirculation lengths in straight Y-shaped channel, Re=300, (a) α=30º, (b) α=60º
When fluid passes from the main channel into the side channel, it separates from the upper wall of the side branch and a recirculation region is formed, which grows with increasing Reynolds number. As the Reynolds number, and hence the fluid velocity, increases, this separation becomes stronger. Also, changing the angle of the side branch changes the angle at which the fluid meets the wall of the side branch, which can affect the recirculation length.
4.2 Consideration of Recirculation lengths in straight Y-shaped channel
In this section, the results of the CFD runs at the design points (Table 1) were used to fit the correlations. The proposed correlation is a linear model (Eq. (4)), with the coefficients listed in Tables 2 and 3 for the non-dimensional forms of Lr1 and Lr2, respectively. Figure 6 shows the Lr/D values calculated by the proposed model versus those predicted by the CFD runs. A fairly good agreement is found (within ±10% for Lr1/D and ±15% for Lr2/D). To validate the proposed correlation, the verification points (Table 1) are added to Figure 6, and they can be seen to fall within the above range.
$\frac{L_{r}}{D}=c_{0}+c_{1}\,\mathrm{Re}+c_{2}\,\alpha$ (4)
Table 2. Value of coefficients of linear proposed model for Lr1
c1 (Re term) = 6.567E-3, c2 (α term) = -2.833E-3
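As an illustration of how a linear correlation of the form of Eq. (4) can be obtained from the CFD responses, the sketch below performs an ordinary least-squares fit with numpy. The (Re, α, Lr/D) triples are invented placeholders standing in for the CFD design-point results, which are not reproduced in the text.

```python
import numpy as np

# Placeholder CFD results at the design points: (Re, angle in deg, Lr/D).
# These numbers are illustrative only, not the paper's data.
data = np.array([
    [200.0, 30.0, 2.0],
    [400.0, 30.0, 3.3],
    [200.0, 60.0, 1.9],
    [400.0, 60.0, 3.2],
    [300.0, 45.0, 2.6],
])
re, ang, lr_over_d = data[:, 0], data[:, 1], data[:, 2]

# Design matrix for the linear model Lr/D = c0 + c1*Re + c2*alpha (Eq. (4)).
X = np.column_stack((np.ones_like(re), re, ang))
coeffs, *_ = np.linalg.lstsq(X, lr_over_d, rcond=None)
c0, c1, c2 = coeffs
print(f"c0 = {c0:.4e}, c1 = {c1:.4e}, c2 = {c2:.4e}")

# Predicted values at the design points, e.g. for a parity plot like Figure 6.
prediction = X @ coeffs
print("relative error [%]:", np.round(100 * (prediction / lr_over_d - 1), 2))
```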
Figure 6. Validation of the proposed linear correlation for the recirculation lengths calculation using verification points
4.2.1 Effect of the Reynolds number and angle on the recirculation length in straight Y-shaped channel
With the proposed model, the effect of the Reynolds number and the angle on both non-dimensional recirculation lengths, Lr1/D and Lr2/D, was considered. Figure 7 shows the effect of these parameters on Lr1/D. From Figure 7(a), it can be seen that, for the three Reynolds numbers, Lr1 decreases slightly with increasing angle: when Re is 300, a 100 percent increase of the angle (from 30º to 60º) decreases Lr1/D by only 3%.
Figure 7(b) shows that, for the three angles, Lr1 increases with the Reynolds number: when the angle is 30º, a 100 percent increase of the Reynolds number (from 200 to 400) increases Lr1/D by about 67 percent.
Figure 7. Effect of (a) angle and (b) Reynolds number on the recirculation length (Lr1)
The same trends are found for Lr2, but they are more intense. Figure 8 shows that Lr2 increases with decreasing angle and increasing Reynolds number. In this case, the effect of the angle is much clearer than in the previous section: when Re is 300, a 100% increase of the angle (from 30º to 60º) decreases Lr2 by 10%. As in the previous section, the effect of the Reynolds number is clear: when the angle is 30º, a 100 percent increase of the Reynolds number (from 200 to 400) increases Lr2 by 52%.
With increasing Reynolds number, the velocity of the fluid increases, so the recirculation zones develop further. On the other hand, changing the angle in effect changes the direction of the incoming fluid with respect to the outlet section, so that the fluid is directed more towards the wall.
4.3 Consideration of Recirculation lengths in diagonal Y-shaped channel
In this section, the above-mentioned procedure was used to obtain the correlation. In the diagonal Y-shaped channel, because of the symmetry of the channel, the lengths of the two recirculation zones in the two outlet sections are equal. Therefore, in this part only one recirculation length, denoted Lr, is considered.
The proposed correlation is a quadratic model (Eq. (5)), with the coefficients listed in Table 4. A comparison of the Lr/D values calculated by the proposed model with the CFD results shows that, as in the previous section, there is a fairly good agreement (within ±10%). In this part, verification points are again used to validate the proposed correlation. To shorten the presentation and avoid repeating similar material, the corresponding figure is omitted.
$\frac{L_{r}}{D}=c_{0}+\sum_{i} c_{i} k_{i}+\sum_{i} c_{i i} k_{i}^{2}+\sum_{i \neq j} c_{i j} k_{i} k_{j}, \qquad k_{i}, k_{j} \in\{\alpha, \mathrm{Re}\}$ (5)
Table 4. Value of coefficients of Quadratic proposed model for Lr
-2.3542E-6
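The quadratic model of Eq. (5) differs from the linear fit above only by the extra squared and interaction columns in the design matrix. A minimal sketch of that extension is shown below; again the response values are placeholders, not the data behind Table 4.

```python
import numpy as np

# Placeholder (Re, angle in deg, Lr/D) responses for the diagonal channel.
data = np.array([
    [200.0, 30.0, 2.2],
    [400.0, 30.0, 2.5],
    [200.0, 60.0, 1.6],
    [400.0, 60.0, 1.8],
    [300.0, 45.0, 2.0],
    [160.0, 45.0, 1.9],
    [440.0, 45.0, 2.2],
    [300.0, 24.0, 2.4],
    [300.0, 66.0, 1.7],
])
re, ang, lr_over_d = data[:, 0], data[:, 1], data[:, 2]

# Full quadratic RSM design matrix:
# Lr/D = c0 + c1*Re + c2*a + c11*Re^2 + c22*a^2 + c12*Re*a   (cf. Eq. (5))
X = np.column_stack((np.ones_like(re), re, ang, re**2, ang**2, re * ang))
coeffs, *_ = np.linalg.lstsq(X, lr_over_d, rcond=None)
labels = ("c0", "c1 (Re)", "c2 (a)", "c11 (Re^2)", "c22 (a^2)", "c12 (Re*a)")
for name, value in zip(labels, coeffs):
    print(f"{name:12s} = {value: .4e}")
```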
4.3.1 Effect of the Reynolds number and angle on the recirculation length in diagonal Y-shaped channel
With the quadratic proposed model, the effect of the Reynolds number and the angle on the non-dimensional recirculation length Lr was considered, as shown in Figure 9. From Figure 9(a), it can be seen that, for the three Reynolds numbers, Lr decreases significantly with increasing angle; however, with increasing Reynolds number, the differences between these values of Lr become small. Figure 9(b) shows that, for the three angles, Lr increases with the Reynolds number, although this increase is smaller at higher angles: when the angle is 30º, a 100 percent increase of the Reynolds number (from 200 to 400) increases Lr by 15%, while the corresponding increase for an angle of 60º is 10%. From these results, it can be concluded that in this structure of the Y-shaped channel the angle is an important parameter and must be considered in studies.
Figure 9. Effect of (a) angle and (b) Reynolds number on the recirculation length (Lr)
The application of computational experiments to understand the fluid behavior in the Y-shaped channel was studied, with a focus on the recirculation zones and the recirculation length. According to the results of this work, the following conclusions have been made:
There are two recirculation zones, one in each outlet section.
The Reynolds number and the angle are two effective parameters that affect the recirculation lengths.
The proposed linear and quadratic correlations show fairly good agreement with the CFD results for the straight and diagonal Y-shaped channels, respectively.
In the straight Y-shaped channel, Lr1 decreases slightly with increasing angle.
In the straight Y-shaped channel, Lr1 increases with increasing Reynolds number.
In the straight Y-shaped channel, the same trends hold for Lr2, but they are more intense.
In the diagonal Y-shaped channel, because of the symmetry of the channel, the lengths of the two recirculation zones in the two outlet sections are equal.
In the diagonal Y-shaped channel, Lr increases significantly with increasing Reynolds number and decreasing angle.
In the diagonal Y-shaped channel, the angle is an important parameter and must be considered in studies.
Nomenclature
D  Width of channel, m
L0  Initial length of channel, m
Lr  Recirculation length, m
P  Pressure, Pa
Re  Reynolds number
u  x-component of velocity, m/s
v  y-component of velocity, m/s
Greek symbols
α  Angle between the two branches, deg
μ  Dynamic viscosity, kg.m-1.s-1
The application of computational experiments to understand the fluid behavior in the Y-shaped channel was studied.
The Reynolds number and the angle are effective parameters for the recirculation lengths.
Results show that the recirculation length decreases with increasing angle and decreasing Reynolds number.
[1] Khandelwal V, Dhiman A, Baranyi L. (2015). Laminar flow of non-Newtonian shear-thinning fluids in a T-channel. Computers & Fluids 108: 79-91. https://doi.org/10.1016/j.compfluid.2014.11.030
[2] Liepsch D, Moravec S, Rastoci AK, Vlachos N.S. (1982). Measurement and calculation of laminar flow in a ninety degree bifurcation. Journal of Biomechanics 15(7): 473-485. https://doi.org/10.1016/0021-9290(82)90001-X
[3] Louda P, Kozel K, Prˇíhoda J, Beneš L, Kopácˇek T. (2011). Numerical solution of incompressible flow through branched channels. Computers & Fluids 46: 318-324. https://doi.org/10.1016/j.compfluid.2010.12.003
[4] Bramley JS, Dennis CR. (1984). The numerical solution of two-dimensional flow in a branching channel. Computers & Fluids 12(4): 339-355. https://doi.org/10.1016/0045-7930(84)90014-8
[5] Singh B, Singh H, Singh Sehgal S. (2013). CFD analysis of fluid flow parameters within A Y-shaped branched pipe. International Journal of Latest Trends in Engineering and Technology 2(2): 313-317.
[6] Uppin VS, Savannanavar RN, Paschapur V. (2017). Velocity effect investigation on the flow parameters in Y-Duct. International Journal of Advance Research and Innovative Ideas in Education 2(2): C-1476: 7-12.
[7] Kumar RS, Khadabadi UB. (2017). Investigation of flow parameters and structural analysis of Y-Duct. International Research Journal of Engineering and Technology 4(6): 1520-1524.
[8] Yadav S, Mallikarjuna Reddy M, Singh A. (2015). Shear-induced particle migration in three-dimensional bifurcation channel. International Journal of Multiphase Flow 76: 1-12. https://doi.org/10.1016/j.ijmultiphaseflow.2015.06.007
[9] Versteeg HK, Malalasekera W. (1995). An Introduction to Computational Fluid Dynamics – The Finite Volume Method. John wiley & sons Inc.
[10] Taghavifar H, Jafarmadar S, Taghavifar H, Navid A. (2016). Application of DoE evaluation to introduce the optimum injection strategy-chamber geometry of diesel engine using surrogate epsilon-SVR. Applied Thermal Engineering 106: 56-66. https://doi.org/10.1016/j.applthermaleng.2016.05.194
[11] Hayes RE, Nandakumar K, Naser-El-Din H. (1989). Steady laminar flow in a 90 degree planar branch. Computers & Fluids 17(4): 537-553. https://doi.org/10.1016/0045-7930(89)90027-3
Zapiski Nauchnykh Seminarov POMI
Zap. Nauchn. Sem. POMI, 2004, Volume 312, Pages 69–85 (Mi znsl773)
The Kantorovich metric: initial history and little-known applications
A. M. Vershik
St. Petersburg Department of V. A. Steklov Institute of Mathematics, Russian Academy of Sciences
Abstract: We remind of the history of the transportation metric (Kantorovich metric) and the Monge–Kantorovich problem. We describe several little-known applications: the first one concerns the theory of decreasing sequences of partitions (tower of measures and iterated metric), the second one concerns Ornstein's theory of Bernoulli automorphisms ($\bar d$-metric), and the third one is the formulation of the strong Monge–Kantorovich problem in terms of matrix distributions.
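For reference (this standard definition is not quoted from the paper itself): the Kantorovich, or transportation, metric between two Borel probability measures $\mu$ and $\nu$ on a metric space $(X,d)$ is
$d_{K}(\mu, \nu)=\inf_{\pi \in \Pi(\mu, \nu)} \int_{X \times X} d(x, y)\, d\pi(x, y)$,
where $\Pi(\mu,\nu)$ is the set of all couplings of $\mu$ and $\nu$, i.e. measures on $X \times X$ with marginals $\mu$ and $\nu$; by Kantorovich–Rubinstein duality this equals $\sup\{\int f\, d\mu-\int f\, d\nu:\ f \text{ is 1-Lipschitz}\}$.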
Journal of Mathematical Sciences (New York), 2006, 133:4, 1410–1417
UDC: 517.987
Citation: A. M. Vershik, "The Kantorovich metric: initial history and little-known applications", Representation theory, dynamical systems. Part XI, Special issue, Zap. Nauchn. Sem. POMI, 312, POMI, St. Petersburg, 2004, 69–85; J. Math. Sci. (N. Y.), 133:4 (2006), 1410–1417
http://mi.mathnet.ru/eng/znsl773
http://mi.mathnet.ru/eng/znsl/v312/p69
ICRC2015
Jul 29, 2015, 4:00 PM → Aug 6, 2015, 6:30 PM Europe/Amsterdam
World Forum
Churchillplein 10 2517 JW Den Haag The Netherlands
Registration Onyx
9:00 AM → 10:30 AM
Opening, Prizes and Awards World Forum Theater
Welcome by the Chair of the ICRC 2015 10m
Speaker: Ad van den Berg (University of Groningen)
Address from the Chair of the IUPAP commission for Astroparticle Physics (C4). 15m
Speaker: Karl-Heinz Kampert (Universität Wuppertal)
Address from the President of the University of Groningen 15m
Speaker: Prof. Sibrand Poppema (University of Groningen)
Prizes and Awards Ceremony 50m
10:30 AM → 11:00 AM
Coffee & Tea break 30m
11:00 AM → 12:30 PM
Parallel CR01 Aniso World Forum Theater
Anisotropy in Cosmic Ray Arrival Directions Using IceCube and IceTop 15m
We provide an update on the continued observation of anisotropy in the arrival direction distribution of cosmic rays in the southern hemisphere. The IceCube neutrino observatory recorded more than 250 billion events between May 2009 and May 2014. Subtracting dipole and quadrupole fit maps, we can use these increased statistics to see significant small-scale structure that approaches our median angular resolution of 3 degrees. The expanded dataset also allows for a more detailed study of the anisotropy for various cosmic-ray median energies. The large-scale structure observed at median energies near 20 TeV appears to shift around 150 TeV, with the high-energy skymap showing a strong deficit also present in IceTop maps of similar energies.
Speaker: Stefan Westerhoff (University of Wisconsin-Madison)
Search for High Energy Neutron Point Sources in IceTop 15m
IceTop can detect an astrophysical flux of neutrons from Galactic sources as an excess of cosmic ray air showers arriving from the source direction. Neutrons are undeflected by the Galactic magnetic field and can typically travel 10 ($E$ / PeV) pc before decay. Two searches through the IceTop dataset are performed to look for a statistically significant excess of events with energies above 10 PeV ($10^{16}$ eV) arriving within a small solid angle. The blind search method covers from -90$^{\circ}$ to approximately -50$^{\circ}$ in declination. A targeted search is also performed, looking for significant correlation with candidate sources in different target sets. Flux upper limits can be set in both searches.
Speaker: Michael Sutherland
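As a cross-check of the decay-length figure quoted in the abstract above, the neutron's mean decay length follows from relativistic time dilation, using $\tau_n \approx 880$ s and $m_n c^2 \approx 0.94$ GeV; the numbers below are a back-of-the-envelope estimate, not taken from the contribution:
$L = \gamma\, c\, \tau_n = \frac{E}{m_n c^2}\, c\, \tau_n \approx \frac{10^{15}\ \mathrm{eV}}{0.94\times 10^{9}\ \mathrm{eV}} \times (3\times 10^{5}\ \mathrm{km/s}) \times 880\ \mathrm{s} \approx 2.8\times 10^{14}\ \mathrm{km} \approx 9\ \mathrm{pc}$
per PeV of energy, consistent with the quoted $\sim 10\,(E/\mathrm{PeV})$ pc.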
Full-Sky Analysis of Cosmic-Ray Anisotropy with IceCube and HAWC 15m
During the past two decades, experiments in both the Northern and Southern hemispheres have observed a small but measurable energy-dependent sidereal anisotropy in the arrival direction distribution of galactic cosmic rays. The relative amplitude of the anisotropy is $10^{−4} - 10^{−3}$. However, each of these individual measurements is restricted by limited sky coverage, and so the pseudo-power spectrum of the anisotropy obtained from any one measurement displays a systematic correlation between different multipole modes $C_\ell$. To address this issue, we present the current state of a joint analysis of the anisotropy on all angular scales using cosmic-ray data from the IceCube Neutrino Observatory located at the South Pole (90° S) and the High-Altitude Water Cherenkov (HAWC) Observatory located at Sierra Negra, Mexico (19° N). We present a combined skymap and an all-sky power spectrum in the overlapping energy range of the two experiments at ~10 TeV. We describe the methods used to combine the IceCube and HAWC data, address the individual detector systematics and study the region of overlapping field of view between the two observatories.
Speaker: Juan Carlos Diaz Velez (University of Wisconsin-Madison)
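The pseudo-power spectrum mentioned in the abstract above can be sketched with healpy as follows. This is only a schematic illustration (map, mask and resolution are invented); the actual IceCube/HAWC analysis involves detector response and likelihood machinery not shown here.

```python
import numpy as np
import healpy as hp

nside = 64                              # illustrative resolution
npix = hp.nside2npix(nside)
rng = np.random.default_rng(1)

# Toy relative-intensity map: a weak dipole plus noise (placeholder for data).
theta, phi = hp.pix2ang(nside, np.arange(npix))
relint = 1e-3 * np.cos(theta) + 1e-4 * rng.standard_normal(npix)

# Partial sky coverage: mask pixels outside the combined field of view.
mask = theta > np.radians(30.0)         # crude stand-in for the FOV cut
relint_masked = np.where(mask, relint, 0.0)

# Pseudo power spectrum; partial coverage correlates the multipoles C_l,
# which is exactly why a combined full-sky analysis is useful.
cl = hp.anafast(relint_masked, lmax=40)
for ell in range(1, 5):
    print(f"l = {ell}: C_l = {cl[ell]:.3e}")
```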
Observation of Anisotropy in the Arrival Direction Distribution of TeV Cosmic Rays With HAWC 15m
The High-Altitude Water Cherenkov (HAWC) Observatory, located 4100 m above sea level near Pico de Orizaba (19° N) in Mexico, is sensitive to gamma rays and cosmic rays at TeV energies. The arrival direction distribution of cosmic rays at these energies shows significant anisotropy on several angular scales, with a relative intensity ranging between $10^{-3}$ and $10^{-4}$. We present the results of a study of cosmic-ray anisotropy based on more than 100 billion cosmic-ray air showers recorded with HAWC since June 2013. The HAWC cosmic-ray sky map, which has a median energy of 2 TeV, exhibits several regions of significantly enhanced cosmic-ray flux. We present the energy dependence of the anisotropy and the cosmic-ray spectrum in the regions of significant excess.
Speaker: Daniel Fiorino
A study of the first harmonic of the large scale anisotropies with the KASCADE-Grande experiment 15m
In this contribution we present the results of a search for large scale anisotropies performed, using the East-West method, with the whole data set of the KASCADE-Grande experiment. The counts distribution in sidereal time intervals of 20 minutes, obtained applying the East-West analysis technique (correctly removing instrumental and atmospheric effects), is analyzed in terms of a dipole component. The amplitude obtained with the whole data set has a 3.5 sigma significance, therefore an upper limit is derived: $A<0.47\times10^{-2}$. To investigate a possible variation of the phase of the first harmonic with energy the search has been repeated in shower size intervals. The errors on the phases obtained in all energy intervals are of the order of 20-30 degrees. The phases obtained point at a sky direction that agrees with those measured at lower energies by the EAS-TOP, IceCube and IceTop experiments and at higher energy by the low energy extension of the Pierre Auger Observatory.
Speaker: Andrea Chiavassa (Universita` di Torino)
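The first-harmonic (dipole) analysis described in the abstract above boils down to a Rayleigh-type fit of the sidereal-time counting rate. A minimal, generic sketch is given below with invented data; the East-West differencing and the KASCADE-Grande corrections for instrumental and atmospheric effects are not reproduced.

```python
import numpy as np

# Toy counting rates in 72 sidereal-time bins of 20 minutes (placeholder data:
# an injected harmonic of amplitude 0.5% on top of Gaussian noise).
nbins = 72
t = (np.arange(nbins) + 0.5) / nbins * 2 * np.pi      # bin centers in radians
rng = np.random.default_rng(2)
counts = 1e6 * (1 + 0.005 * np.cos(t - 1.0)) + rng.normal(0, 1e3, nbins)

# First-harmonic (Rayleigh) analysis of the relative rate.
rate = counts / counts.mean() - 1.0
a = 2.0 / nbins * np.sum(rate * np.cos(t))
b = 2.0 / nbins * np.sum(rate * np.sin(t))
amplitude = np.hypot(a, b)
phase_hours = (np.arctan2(b, a) % (2 * np.pi)) * 24 / (2 * np.pi)
print(f"amplitude = {amplitude:.2e}, phase = {phase_hours:.1f} h sidereal time")
```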
Measurement of (p+He)-induced anisotropy in cosmic rays with ARGO-YBJ 15m
Deviations from isotropy in the cosmic ray arrival direction distribution indicate the laboratory reference frame moving with respect to the cosmic radiation. When data are ordered in sidereal time, any effect is of great importance, as it may trace potential sources of cosmic rays and probe their propagation through magnetic fields. For the same reason, to decipher results implies unfolding effects from source distribution, energy spectrum and mass composition of cosmic rays, as well as magnetic field on regular and turbulent scales. Any efficient selection of cosmic ray mass would have a major impact on this scenario, as parameters related to cosmic rays production site, acceleration and propagation mechanisms would be importantly constrained in terms of rigidity. So far, no experiment managed to implement efficient mass selections and save high statistics at the same time. The ARGO-YBJ experiment (located at the YangBaJing Cosmic Ray Observatory, Tibet, China, 4300 m asl) is the only detector able to select the cosmic ray light (p+He) component with high efficiency in the wide energy range few TeV - 10 PeV. In this contribution a preliminary measurement of the anisotropy for the p+He primary component is reported for the first time.
Speaker: Roberto Iuppa (Universita e INFN Roma Tor Vergata (IT))
Parallel CR02 Hadr Int Yangtze 2
Status of the LHCf experiment 15m
Observations of UHECRs by extensive air showers rely on the understanding of hadron interactions at very high energies. Recent LHC experiments have provided useful hadron interaction data at a collision energy that is almost equivalent to $10^{17}$ eV in the laboratory frame. Among them, the LHCf experiment is dedicated to the measurement of neutral particle production in the very forward region of LHC IP1. Its two detectors consist of a pair of compact electromagnetic sampling calorimeters installed 140 m from IP1 on either side, covering the pseudorapidity range eta from 8.6 to infinity. So far, energy spectra of gamma rays, neutral pions, and neutrons have been measured for 7 TeV and 0.9 TeV p-p collisions. LHCf has also reported neutral pion production in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV. The obtained results are compared with the existing cosmic ray interaction models SIBYLL, QGSJETII, DPMJET3, and EPOS. The measured data are well bracketed by these models, although none of them reproduces the data completely. In 2015 LHCf returns to the LHC to obtain p-p collision data at 13 TeV. The current achievements of the LHCf experiment, a first look at the 13 TeV data, and future prospects for possible very forward measurements of p-p or p-light-ion collisions at RHIC or a future LHC will be presented.
Speaker: Yoshitaka Ito (Nagoya University (JP))
LHCf-ICRC20150730-4.pdf
LHCf-ICRC20150730-4.ppt
The TOTEM experiment at LHC for proton-proton cross section measurements. 15m
The precise knowledge of the proton-proton cross section is extremely important to model the development, in the atmosphere, of the showers induced by the interaction of ultra-high-energy cosmic rays. The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross section with a luminosity-independent method, based on the optical theorem, and to study elastic and diffractive scattering at LHC energies. This method relies on the capability of measuring the inelastic and elastic rates simultaneously; in the TOTEM experiment this is possible thanks to two forward inelastic telescopes, covering the pseudorapidity range 3.1 $< |\eta| <$ 6.5, and to Roman Pot detectors, which can be inserted to within a few hundred microns of the beam centre. Thanks to dedicated runs with special beam optics, taken between 2011 and 2012, the TOTEM experiment was able to measure the elastic, inelastic and total cross sections at $\sqrt{s}=7~TeV$ and $8~TeV$ using the luminosity-independent method, along with the pseudorapidity distribution of charged particles. In this contribution the latest results of the TOTEM experiment will be described, along with its performance and the future physics program for LHC Run 2.
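As a rough illustration of the luminosity-independent method mentioned in the abstract: the optical theorem relates the forward elastic rate to the square of the total cross section, so dividing the elastic rate extrapolated to $t=0$ by the sum of the elastic and inelastic counts cancels the luminosity. The sketch below is not TOTEM code; the input counts are invented, and $\rho$ (the ratio of the real to the imaginary part of the forward amplitude) is fixed to a typical value of 0.14.

    import math

    HBARC2_MB_GEV2 = 0.389379  # (hbar*c)^2 in mb GeV^2, converts 1/GeV^2 to mb

    def sigma_tot_lumi_independent(dNel_dt0, n_el, n_inel, rho=0.14):
        """Luminosity-independent total cross section [mb].
        dNel_dt0: elastic rate extrapolated to t=0 [counts/GeV^2];
        n_el, n_inel: integrated elastic and inelastic counts."""
        return 16.0 * math.pi * HBARC2_MB_GEV2 / (1.0 + rho**2) \
               * dNel_dt0 / (n_el + n_inel)

    # Illustrative (made-up) numbers, chosen to give ~100 mb, the right order of
    # magnitude for proton-proton collisions at LHC energies:
    print(sigma_tot_lumi_independent(dNel_dt0=2.08e6, n_el=1.0e5, n_inel=3.0e5))

The same relation, inverted, also yields a luminosity estimate from the measured rates, which is why the approach is called luminosity independent.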
Speaker: Francesco Cafagna (Universita e INFN, Bari (IT))
totem_icrc2015.pdf
Study of high muon multiplicity cosmic ray events with ALICE at the CERN Large Hadron Collider 15m
ALICE is one of the four large experiments at the CERN Large Hadron Collider. Located 52 meters underground with 28 meters of overburden rock, it has also been used to detect atmospheric muons produced by cosmic ray interactions in the upper atmosphere. We present the multiplicity distribution of these cosmic ray muon events and their comparison with Monte Carlo simulation. This analysis exploits the large size and excellent tracking capability of the ALICE Time Projection Chamber. A special emphasis is given to the study of high multiplicity events containing more than 100 reconstructed muons and corresponding to a muon areal density larger than 6.7 $m^{-2}$. Similar high muon multiplicity events have been studied by previous underground experiments such as ALEPH and DELPHI at LEP. While these experiments were able to reproduce the measured muon multiplicity distribution with Monte Carlo simulation at low and intermediate multiplicities, they failed to reproduce the frequency of the highest multiplicity events. We demonstrate that the high muon multiplicity events observed in ALICE stem from primary cosmic rays with energies above $10^{16}$ eV. The frequency of these events can be successfully described by assuming a heavy mass composition of primary cosmic rays in this energy range and using the most recent hadronic interaction models to simulate the development of the resulting air showers. This observation narrows the scope for alternative, more exotic production mechanisms for these events.
Speaker: Mario Rodriguez Cahuantzi (Autonomous University of Puebla (BUAP, México))
icrc2015_slides_MarioRC_v4.pdf
Results from pion-carbon interactions measured by NA61/SHINE for better understanding of extensive air showers 15m
The interpretation of extensive air shower measurements, produced by ultra-high energy cosmic rays, relies on the correct modelling of the hadron-air interactions that occur during the shower development. The majority of hadronic particles is produced at equivalent beam energies below the TeV range. NA61/SHINE is a fixed-target experiment using secondary beams produced at the CERN SPS. Hadron-hadron interactions have been recorded at beam momenta between 13 and 350 GeV/c with a wide-acceptance spectrometer. In this talk we present measurements of the identified secondary hadron spectra and of resonance production in pion-carbon interactions, which are essential for modelling air showers.
Speaker: Dr Alexander Edward Herve (Karlsruhe Institute of Technology)
The impact of a fixed-target experiment with LHC beam for astroparticle physics 15m
There are two main points where the data from a fixed-target experiment with an LHC beam would contribute unique information: firstly, a better understanding of the inclusive flux of atmospheric neutrinos at very high (PeV) energies; secondly, the apparent over-abundance of GeV muons in ultra-high energy extensive air showers. To contribute towards answering these questions, the experimental limitations and requirements for a fixed-target experiment at the LHC are presented and discussed. The investigation of forward D-meson production at high $x_F$ is essential in order to distinguish whether PeV neutrinos are indeed astrophysical or may also be produced partly within the atmosphere. Furthermore, the production of GeV muons is deeply related to the pion cascade within air showers and the corresponding pion-air interactions. More precise fixed-target data for pion-carbon at LHC beam energies will contribute significantly to a better modelling of the muon content of air showers.
Speaker: Dr Ralf Matthias Ulrich (KIT - Karlsruhe Institute of Technology (DE))
Air Shower Development, pion interactions and modified EPOS Model 15m
In detailed air shower simulations, the uncertainty in the prediction of shower observables for different primary particles and energies is currently dominated by differences between hadronic interaction models. With the results of the first run of the LHC, the differences between post-LHC model predictions have been reduced to the same level as the experimental uncertainties of cosmic ray experiments. At the same time, new types of air shower observables, such as the muon production depth, have been measured, adding new constraints on hadronic models. Currently no model is able to reproduce consistently all mass composition measurements possible with the Pierre Auger Observatory, for instance. Using new modifications of EPOS and LHC data, we will show how air shower measurements can be used to constrain pion-air interactions in regions of kinematic phase space that cannot be tested by laboratory experiments. The goal is a model that can reproduce all primary mass composition measurements from air showers in a consistent way.
Speaker: Dr Tanguy Pierog (KIT)
MPD_EPOS.pdf
Parallel GA01 EGAL Yangtze 1
Revisiting the starburst galaxy NGC 253 with H.E.S.S. 15m
NGC 253 is one of only two starburst galaxies found to emit γ-rays from hundreds of MeV to multiple TeV energies. An accurate measurement of the GeV and TeV spectra is crucial to determine the underlying particle accelerators, to probe the dominant emission loss mechanism(s) and to assess the importance of cosmic-ray interaction and transport. The precision of the measurement of the γ-ray emission of the starburst galaxy NGC 253 published in 2012 by H.E.S.S. was dominated by the large associated systematic uncertainties. With the improved understanding of the response of the H.E.S.S. experiment, we present an evaluation of the systematic uncertainties of the measurement. We show that they are of the same order of magnitude as the statistical uncertainties. The spectral analysis is discussed for H.E.S.S. separately as well as in combination with the Fermi-LAT measurement. No significant deviation from a single power law is observed. The obtained flux parameters are found to be consistent with the previous measurement within systematic uncertainties; however, a ∼35% enhanced flux is now observed. The results of the combined spectral fit strengthen the conclusions presented in Abramowski et al. (2012).
Speaker: Clemens Hoischen (University of Potsdam)
Spectral characteristics of Mrk$\,$501 during the 2012 and 2014 flaring states 15m
The BL$\,$Lac object Mrk$\,$501 was observed at Very High Energies (E$\,$>$\,$100$\,$GeV) with H.E.S.S. (High Energy Stereoscopic System) between 2004 and 2014. The source is detected with high significance above $\sim$2$\,$TeV in $\sim$13.6$\,$h livetime. The observations include periods of low flux and active phases. This led to the detection of strong flaring events, which in 2014 showed a flux level comparable to the 1997 historical maximum. Such high flux states enabled spectral variability and flux variability studies down to a timescale of a few minutes in the 2-20$\,$TeV energy range. During the 2014 flare, the source is clearly detected in each time bin. The spectrum does not show intrinsic curvature in this energy range. Flux dependent spectral analyses are also carried out. The peculiarity of this study resides in the unprecedented combination of short timescales and an energy coverage that extends significantly above 10$\,$TeV. The high energies allow us to probe the effect of EBL absorption at low redshifts, jet physics and LIV. The multiwavelength context of these VHE observations will be presented as well.
Speaker: Mr Gabriele Cologna (LSW Heidelberg)
slides_cologna_mrk501_spectra.pdf
Discovery of very-high-energy gamma-ray emission from a hard-X-ray bright HBL RX J1136.5+6737 15m
RX J1136.5+6737 (z=0.1342) is a hard-X-ray-bright, high-frequency-peaked BL Lac object listed in the MAXI 3-year catalog as well as in the Swift-BAT catalog. The source has also been detected by Fermi-LAT with a hard photon index of $1.68\pm0.12$, and belongs to the first Fermi-LAT catalog of $>10$ GeV sources, showing bright (photon flux = $11.7\times10^{-11}$ ph cm$^{-2}$ s$^{-1}$) emission above 10 GeV. MAGIC observed the source for about 30 hours in 2014 and discovered very-high-energy (VHE) gamma-ray emission from it with $>5\sigma$ significance. The averaged flux measured by MAGIC during the 2014 observations corresponds to about 1.5% of the Crab Nebula flux at energies above 200 GeV, without significant variability. The measured spectrum shows evidence of extending into the TeV energy range, even though most extragalactic background light models predict that the distance of z=0.1342 is beyond the "Cosmic gamma-ray horizon" at 1 TeV. Along with the MAGIC observations, we coordinated simultaneous multi-band observations in the X-ray and UV bands by Swift, and in the optical-IR bands by ground-based telescopes such as Kanata and KVA. In this contribution, the first results of the MAGIC discovery of VHE emission from RX J1136.5+6737 will be reported. We will also discuss the origin of the gamma-ray emission with a broad-band spectral energy distribution, using our emission model that takes into account secondary gamma-ray photons produced in cascades induced by ultra-high-energy gamma rays or protons propagating through intergalactic space.
Speaker: Dr Masaaki Hayashida (Institute for Cosmic-Ray Research, University of Tokyo)
The Denoised, Deconvolved, and Decomposed Fermi gamma-ray sky 15m
We analyze the 6.5-year all-sky data from the Fermi Large Area Telescope restricted to gamma-ray photons with energies between 0.6 and 307.2 GeV. Raw count maps show a superposition of diffuse and point-like emission structures and are subject to shot noise and instrumental artifacts. Using the D3PO inference algorithm, we model the observed photon counts as the sum of a diffuse and a point-like photon flux, convolved with the instrumental beam and subject to Poissonian shot noise. The D3PO algorithm performs a Bayesian inference in this setting without the use of spatial or spectral templates; i.e., it removes the shot noise, deconvolves the instrumental response, and yields estimates for the two flux components separately. The non-parametric reconstruction uncovers the morphology of the diffuse photon flux up to several hundred GeV. We present an all-sky spectral index map for the diffuse component. We show that the diffuse gamma-ray flux can be described phenomenologically by only two distinct components: a soft component, presumably dominated by hadronic processes, tracing the dense, cold interstellar medium, and a hard component, presumably dominated by leptonic interactions, following the hot and dilute medium and outflows such as the Fermi bubbles. A comparison of the soft component with the Galactic dust emission indicates that the dust-to-soft-gamma ratio in the interstellar medium decreases with latitude. The spectrally hard component exists in a thick Galactic disk and tends to flow out of the Galaxy at some locations. Furthermore, we find the angular power spectrum of the diffuse flux to roughly follow a power law with an index of 2.47 on large scales, independent of energy. Our first catalog of source candidates includes 3106 candidates, of which we associate 1381 (1897) with known sources from the second (third) Fermi source catalog. We observe gamma-ray emission in the direction of a few galaxy clusters hosting known radio halos.
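A toy version of the generative model described above (observed counts = Poisson realization of the beam-convolved sum of a diffuse and a point-like flux) can be written in a few lines; D3PO performs the Bayesian inversion of exactly this kind of model. The sketch below is purely illustrative: the field size, PSF width and source counts are invented, and the real algorithm works on the sphere with the Fermi-LAT response rather than on a flat pixel grid.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    npix = 256

    # Spatially correlated, strictly positive diffuse flux (log-normal-like).
    diffuse = gaussian_filter(np.exp(rng.normal(0.0, 1.0, size=(npix, npix))), sigma=8)

    # Sparse point-like flux: a few bright sources on an otherwise empty sky.
    points = np.zeros((npix, npix))
    idx = rng.integers(0, npix, size=(50, 2))
    points[idx[:, 0], idx[:, 1]] = rng.pareto(1.5, size=50) + 1.0

    beam_sigma = 2.0                                                # toy PSF width [pixels]
    expected = gaussian_filter(diffuse + points, sigma=beam_sigma)  # instrument response
    counts = rng.poisson(expected)                                  # Poissonian shot noise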
Speaker: Valentina Vacca (Max Planck Institute for Astrophysics)
Searching for TeV gamma-ray emission associated with IceCube high-energy neutrinos using VERITAS 15m
A potential clue to finding the long-sought-after sources of cosmic rays is the recent observation of an astrophysical flux of high-energy neutrinos by the IceCube detector, since these possibly originate in hadronic interactions near cosmic-ray accelerators. While the neutrino sky map shows no indication of point sources so far, it is possible to utilize the sensitivity of TeV Cherenkov telescopes, such as VERITAS, to search for hadronic gamma-ray emission at the neutrino locations. Over the last 2 years, the positions of neutrino events detected by IceCube have been observed using the VERITAS array. Observations have been limited to muon neutrino events, since their typical angular reconstruction uncertainty is below 1°, smaller than the 3.5° diameter of the VERITAS field of view. The location of VERITAS further constrains the neutrino event positions that can be observed to those located in the northern sky, or at moderate southern declinations. The list of observed positions was selected from published results and a set of high-energy muon tracks provided by IceCube. We present the current status and some preliminary results from this program.
Speaker: Dr Marcos Santander (Barnard College, Columbia University)
Santander_gamma_nus.pdf
AMON Searches for Jointly-Emitting Neutrino + Gamma-Ray Transients 15m
We present the results of archival coincidence analyses between public neutrino data from the 40-string and 59-string configurations of IceCube (IC40 and IC59) with contemporaneous public gamma-ray data from Fermi LAT and Swift. Our analyses have the potential to discover statistically significant coincidences between high-energy neutrinos and gamma-ray signals, and hence, possible jointly-emitting neutrino/gamma-ray transients. This work is an example of more general multimessenger studies that the Astrophysical Multimessenger Observatory Network (AMON) aims to perform. AMON, currently under development at Penn State, will link multiple current and future sensitive high-energy neutrino, cosmic rays and follow-up observatories as well as gravitational wave facilities. This single network enables near real-time coincidence searches for multimessenger astrophysical transients and their electromagnetic counterparts. We will present the component high-energy neutrino and gamma-ray datasets, the statistical approaches that we used, and the results of analyses of the IC40/59+LAT and IC40/59+Swift datasets.
Speaker: Azadeh Keivani (Pennsylvania State University)
Parallel GA02 GAL Amazon
Study of the diffuse gamma ray emission from the Galactic plane with ARGO-YBJ 15m
The data recorded by ARGO-YBJ over more than 5 years have been analyzed to determine the diffuse gamma-ray emission from the Galactic plane. The spatial distribution of the diffuse gamma rays and their energy spectra at Galactic longitudes $25^\circ < l < 100^\circ$ and Galactic latitudes $|b| < 5^\circ$ have been studied. Particular attention has been given to the regions $40^\circ < l < 100^\circ$ and $65^\circ < l < 85^\circ$, where Milagro observed an excess with respect to the predictions of current models. The energy range investigated covers from ~350 GeV to ~2 TeV, connecting the region explored by Fermi-LAT with the multi-TeV energies studied by Milagro. Great care has been taken in masking the TeV point sources observed by ARGO-YBJ and other experiments. Our results are consistent with the predictions of the Fermi model and do not show the excess observed by Milagro.
Speaker: Dr Lingling Ma (Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences)
TeV Gamma-Ray Emission Observed from Geminga by HAWC 15m
Geminga is a radio-quiet pulsar ~250 parsecs from Earth that was first discovered as a GeV gamma-ray source and then identified as a pulsar. Milagro observed an extended TeV source spatially consistent with Geminga. HAWC observes a similarly extended source. Observations of Geminga's flux and extension will be presented.
Speaker: Joshua Wood (University of Maryland, College Park)
20150729-ICRC-2015-Geminga-wide-screen
TeV Observations of the Galactic Plane with HAWC and Joint Analysis of GeV Data from Fermi 15m
A number of Galactic sources emit GeV-TeV gamma rays that are produced through leptonic and/or hadronic mechanisms. Spectral analysis in this energy range is crucial in order to understand the emission mechanisms. The HAWC Gamma-Ray Observatory, with a large field of view and a location at 19° N latitude, is surveying the Galactic Plane from high Galactic longitudes down to near the Galactic Center. Data taken with the partially constructed HAWC array in 2013-2014 exhibit TeV gamma-ray emission along the Galactic Plane. A high-level likelihood analysis framework for HAWC, also presented at this meeting, has been developed concurrently with the Multi-Mission Maximum Likelihood (3ML) architecture to deconvolve the Galactic sources and to perform multi-instrument analyses. It has been tested on early HAWC data, and the same method will be applied to HAWC data taken with the full array. I will present preliminary results on Galactic sources from TeV observations with HAWC and from a joint analysis of Fermi and HAWC data in the GeV-TeV energy range.
Speaker: Mr Hao Zhou (Michigan Technological University)
RCW 86 - A shell-type supernova remnant in TeV gamma-rays 15m
RCW 86 (also known as G315.4-2.3 or MSH 14-63) is a young supernova remnant, about 1800 years old, with a shell-like structure in the optical, radio, infrared and X-ray regimes and a diameter of about 40'. We will show detailed morphological and spectral studies of the TeV gamma-ray data measured with the H.E.S.S. telescope system. These studies reveal for the first time a shell-like structure in this energy range that correlates with the non-thermal X-rays (2 keV - 5 keV) in the south-west region of the remnant. The TeV gamma-ray spectrum is best described by an exponential cutoff power law. Leptonic and hadronic gamma-ray emission scenarios are probed for RCW 86 in a multi-wavelength approach, and the implications of these studies will be discussed.
Speaker: Ira Jung-Richardt
RCW 86 an extended SNR viewed at high energy with the new Fermi-LAT Pass 8 event reconstruction 15m
Supernova remnants (SNRs) are thought to be the primary source of the Galactic cosmic rays observed on Earth. Detected in radio, infrared, X-rays and at high-energy (GeV) and very-high-energy (TeV) gamma rays, RCW 86 is a good candidate for efficient particle acceleration and might be the remnant of the historical supernova SN 185. Using more than 6 years of data acquired by the Fermi Large Area Telescope with the new Pass 8 event reconstruction, RCW 86 is now detected as a significant, extended source at GeV energies, with a radius of 0.37°. The results of our deep morphological and spectral analysis provide new constraints on the origin of the gamma-ray emission and on key parameters such as the asymmetry of the morphology, the density of the surrounding medium and the total energy in accelerated particles. These new constraints will be presented and discussed in the light of existing estimates.
Speaker: Benjamin Condon (CNRS)
Search for new supernova remnant shells in the Galactic plane with H.E.S.S. 15m
Amongst the population of TeV gamma-ray sources detected with the High Energy Stereoscopic System (H.E.S.S.) in the Galactic plane, clearly identified supernova remnant (SNR) shells constitute a small but precious source class. TeV-selected SNRs are prime candidates for efficient cosmic-ray acceleration. In this work, we present new SNR candidates that have been identified in the entire H.E.S.S.-I data set of the Galactic plane recorded over the past ten years. Identifications with known SNR shells from other wavebands are rare, but were successful in at least one case. In a few other cases, TeV-only shell candidates pose a major challenge for identification as SNR objects due to their lack of detected non-thermal emission in lower frequency bands. We will discuss how these objects may represent an important link between young and evolved SNRs, since their shell emission may be dominated by hadronic processes.
Speaker: Gerd Puehlhofer (IAAT)
icrc2015_1299_puehlhofer_hess_newshells_final.pdf
Parallel SH 01 SEP I Mississippi
Solar Energetic Particles
The Longitudinal Distribution of Solar Energetic Particles 15m
Using observations from the High Energy Telescopes on STEREO A and B and similar observations from SoHO, near-Earth, we have identified ~250 individual solar energetic particle events that include >14 MeV protons since the beginning of the STEREO mission (Richardson, et al., Solar Physics, 2014). Between the end of December 2009, when the STEREO A and B spacecraft were, respectively, ahead and behind Earth by ~ 65° in ecliptic longitude, and the end of December 2013, 43 different events were clearly detected at all three locations. The observed intensities of such an event are usually fit with a Gaussian which is a function of the longitudes of the Parker Spiral footpoints at the Sun for each observer. This neglects the fact that the interplanetary magnetic field may have large deviations from Parker Spirals, e.g. due to coronal mass ejections from prior events. Nonetheless, we have fit Gaussians to the peak intensities observed simultaneously at three spacecraft for all 43 events, taking into account particles coming around the Sun both from the east and from the west. The Gaussian peak intensity is poorly correlated with the corresponding CME speed and the FWHM is uncorrelated with the CME speed. Surprisingly, however, there appear to be distinctly non-random variations of the FWHM values from event to event. We will investigate possible causes of this effect.
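A minimal sketch of the fitting step described above: with three observers (STEREO A, STEREO B and a near-Earth spacecraft), a Gaussian in Parker-spiral footpoint longitude has exactly as many parameters as data points, so the fit returns a peak intensity, a centroid and a width (FWHM) per event. The longitudes and intensities below are invented for illustration only and are not taken from the event list of the abstract.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(phi, i0, phi0, sigma):
        """Peak intensity i0, centroid phi0 and width sigma, with phi the
        footpoint longitude relative to the flare site [deg]."""
        return i0 * np.exp(-0.5 * ((phi - phi0) / sigma) ** 2)

    phi_obs = np.array([-65.0, 0.0, 65.0])   # footpoint longitudes of the 3 observers [deg]
    i_obs = np.array([0.02, 1.1, 0.15])      # peak proton intensities [arbitrary units]

    popt, _ = curve_fit(gaussian, phi_obs, i_obs, p0=[1.0, 0.0, 40.0])
    i0, phi0, sigma = popt
    fwhm = 2.3548 * sigma                    # FWHM = 2*sqrt(2*ln 2)*sigma
    print(f"centroid = {phi0:.1f} deg, FWHM = {fwhm:.1f} deg")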
Speaker: Tycho von Rosenvinge (NASA/Goddard Space Flight Center)
Resolving multiple sources of solar relativistic particles 15m
We perform a comparative study of the time-profile morphology of solar high-energy particle emissions, including relativistic electrons in three energy channels of SOHO/EPHIN, relativistic protons as registered by the worldwide network of neutron monitors, and ~100 MeV/n protons and helium in several energy channels of SOHO/ERNE. Based on numerical modeling of the interplanetary transport, we formulate a simple method for resolving the high-energy particle sources operating in the solar corona during the first hour of a high-energy particle event. The method is applied to Ground Level Enhancement (GLE) and Solar Energetic Particle (SEP) events of solar cycle 23. We conclude that, depending on the GLE-SEP event scenario and the detector's vantage point, the observed particles originate from up to three sources. The possible nature of the sources is discussed in the framework of previous and new models of high-energy particle production associated with global coronal (EIT) waves and CME bow shocks within five solar radii from the Sun.
Speaker: Prof. Leon Kocharov (University of Oulu, Finland)
Kocharov_presentation-118.pdf
SOLAR ENERGETIC PARTICLE EVENTS: TRAJECTORY ANALYSIS AND FLUX RECONSTRUCTION WITH PAMELA 15m
The PAMELA satellite experiment is providing the first direct measurements of Solar Energetic Particles (SEPs) with energies from about 80 MeV to several GeV in near-Earth space, bridging the low-energy data from space-based instruments and the Ground Level Enhancement (GLE) data from the worldwide network of neutron monitors. Its unique observational capabilities include the possibility of measuring the flux angular distribution and thus investigating possible anisotropies related to SEP events. This work reports the analysis methods developed to estimate SEP energy spectra as a function of the particle asymptotic pitch angle. The crucial ingredient is provided by an accurate simulation of the asymptotic exposure of the PAMELA apparatus, based on a realistic reconstruction of particle trajectories in the Earth's magnetosphere. Results for the 2006 December 13 and the 2012 May 17 events are presented.
Speaker: Dr Alessandro Bruno (Department of Physics, University of Bari, I-70126 Bari, Italy)
A.Bruno_ICRC2015_085_slides.pdf
Systematic Behavior of Heavy Ion Spectra in Large Gradual Solar Energetic Particle Events 15m
Our Sun accelerates ions and electrons up to near-relativistic speeds in at least two ways: magnetic reconnection during solar flares is believed to produce the impulsive or $^3$He-rich solar energetic particles (SEPs), while diffusive shock acceleration at fast coronal mass ejection (CME)-driven shock waves is thought to produce the larger gradual SEP events. Despite recent advances in our understanding of the properties (e.g., time variations, spectral behavior, longitudinal distributions, compositional anomalies, etc.) of large SEP events, the relative roles played by many important physical processes remain poorly understood. These effects include variations in the seed populations, the geometry and speed of the shock, the presence or absence of a preceding CME from the same active region, scattering by ambient turbulence or by self-generated Alfvén waves during acceleration and transport, and the direct presence of flare-accelerated material at energies above $\sim$10 MeV/nucleon. Observations and theoretical studies have indicated that many of these effects may manifest themselves in the spectral properties of H and other heavy elements. In this paper, we present results from a survey of the energy spectra of $\sim$0.1-500 MeV/nucleon H-Fe nuclei in 46 isolated and well-connected large gradual SEP events observed by instruments onboard ACE, GOES, SAMPEX and SoHO, and determine how the spectral fit parameters, such as the break or roll-over energies, vary with the ion's charge-to-mass (Q/M) ratio. In particular, we compare our results with predictions of existing and developing models to understand why some large SEP events exhibit species-dependent spectral breaks that vary strongly with the ion's Q/M ratio while others do not.
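A common way to quantify the behaviour described above is to fit the per-species break energies with a power law in the charge-to-mass ratio, $E_B \propto (Q/M)^\alpha$. The short sketch below fits $\alpha$ in log-log space; both the $Q/M$ values and the break energies are invented for illustration and are not the survey results.

    import numpy as np

    # Roughly representative Q/M values for H, He, O, Si and Fe in large SEP
    # events, and made-up break energies [MeV/nucleon]:
    q_over_m = np.array([1.00, 0.50, 0.44, 0.40, 0.28])
    e_break = np.array([12.0, 3.5, 3.0, 2.6, 1.3])

    alpha, log_e_h = np.polyfit(np.log10(q_over_m), np.log10(e_break), 1)
    print(f"alpha = {alpha:.2f}, extrapolated proton break = {10**log_e_h:.1f} MeV/nucleon")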
Speaker: Mihir Desai (SwRI)
A statistical study of 90-MeV proton events observed with SOHO/ERNE 15m
The aim is to understand what kind of solar or interplanetary events are capable of producing solar energetic particle (SEP) events with proton energies above 90 MeV, and where and when the acceleration of such protons starts. We have selected 40 energetic proton events with intensities $> 10^{-3}$ cm$^{-2}$ sr$^{-1}$ s$^{-1}$ MeV$^{-1}$ at 93.8-94 MeV, detected by the Energetic and Relativistic Nuclei and Electrons (ERNE) instrument onboard SOHO during solar cycle 23, in 1997-2003. We have estimated the first injection times of the particles using two different methods, the fixed path length method (1.2 AU) and the velocity dispersion analysis. We evaluated the injection time results by comparing each to the estimated height of the radio type II/IV burst emission, and then compared the estimated times and heights with related flare and coronal mass ejection (CME) characteristics. We find that all the analysed proton events were associated with CMEs and 82% were associated with on-disk GOES X-ray flares (six of the seven non-associated events were concluded to show behind-the-limb flaring). The radio type II/IV burst association was 95% (of the two non-associated events, one was completely void of radio emission and one showed metric continuum and tilted type III burst lane emission). Most of the first protons were injected when the CME leading edges were below 5 solar radii, and most of the protons reached their maximum intensity while the CMEs were above 10 solar radii. The maximum proton intensities were achieved much earlier than the possible passage of an interplanetary shock, suggesting that the majority of high-energy protons at 90 MeV were accelerated as a result of earlier processes. In roughly half of the events the CME front was above the estimated type II burst location. We suggest that in these cases the type II bursts may be related to CME interaction processes and shocks at the CME flanks.
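For reference, a minimal sketch of the velocity dispersion analysis mentioned above: the onset time observed at each energy is modelled as $t_{onset}(E) = t_{inj} + L/v(E)$, so a straight-line fit of onset time versus $1/v$ returns the injection time (intercept) and the apparent path length (slope). The energies and onset times below are invented, chosen only so that the fit returns roughly 1.2 AU and a plausible injection time.

    import numpy as np

    M_P = 938.272          # proton rest energy [MeV]
    C_KM_S = 299792.458    # speed of light [km/s]
    AU_KM = 1.495978707e8  # 1 au [km]

    def beta(e_kin_mev):
        """Proton v/c from kinetic energy [MeV]."""
        gamma = 1.0 + e_kin_mev / M_P
        return np.sqrt(1.0 - 1.0 / gamma**2)

    e_kin = np.array([15.0, 25.0, 40.0, 60.0, 94.0])              # channel energies [MeV]
    t_onset = np.array([3990.0, 3250.0, 2720.0, 2350.0, 2040.0])  # onset times [s]

    inv_v = 1.0 / (beta(e_kin) * C_KM_S)          # [s/km]
    slope, intercept = np.polyfit(inv_v, t_onset, 1)
    print(f"path length ≈ {slope / AU_KM:.2f} au, injection time ≈ {intercept:.0f} s")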
Speaker: Amjad Al-Sawad (Ministry of Higher Education and Scientific Research/Iraq)
Unseen GLEs (Ground Level Events) 15m
Over the last seventy years, solar energetic particle (SEP) ground level events (GLEs) have been observed by ground-based neutron monitors and muon telescopes at a rate of slightly more than one per year. Ground-based detectors only measure secondary particles, and matching their observations with SEP in-situ measurements at lower energies from spacecraft has been difficult. Now, the Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) instrument provides in-situ measurements that also include composition and pitch-angle distribution and bridge the energy between long-term SEP monitors in space (e.g. ACE and GOES) and the ground-based observations. The PAMELA data show that there are some SEP events (e.g. 23 Jan 2012) where PAMELA sees high-energy (> 1 GeV) particles, yet these are not registered as GLEs. The PAMELA observations indicate that it is possible for the anisotropic distribution of the highest energy SEPs to miss the global network of neutron monitors.
Speaker: Eric Christian (NASA/GSFC)
12:30 PM → 2:00 PM
Lunch break 1h 30m
Large-Scale Distribution of Arrival Directions of Cosmic Rays Detected at the Pierre Auger Observatory and the Telescope Array above $10^{19}$ eV 15m
The large-scale distribution of arrival directions of high-energy cosmic rays is a key observable in attempts to understand their origin. The dipole and quadrupole moments are of special interest in revealing potential anisotropies. An unambiguous measurement of these moments, as well as of the full set of spherical harmonic coefficients, requires full-sky coverage. This can be achieved by combining data from observatories located in the northern and southern hemispheres. To this end, a joint analysis using data recorded at the Pierre Auger Observatory and the Telescope Array above $10^{19}$ eV has been performed. For the first time, thanks to the full-sky coverage, the measurement of the dipole moment reported in this study does not rely on any assumption about the underlying flux of cosmic rays. In addition, the sensitivity to the quadrupole and higher-order moments is the best obtained so far. The resulting multipolar expansion of the flux of cosmic rays allows a comprehensive description of the angular distribution and, in particular, the first angular power spectrum of cosmic rays above $10^{19}$ eV.
Speaker: Olivier Deligny (CNRS/IN2P3)
Indications of anisotropy at large angular scales in the arrival directions of cosmic rays detected at the Pierre Auger Observatory 15m
The large-scale distribution of arrival directions of high-energy cosmic rays carries major clues to understand their origin. The Pierre Auger Collaboration has implemented different analyses to search for dipolar and quadrupolar anisotropies in different energy ranges spanning four orders of magnitude. A common phase of $\approx 270^\circ$ of the first-harmonic modulation in right ascension was found in adjacent energy intervals below 1 EeV, and another common phase of $\approx 100^\circ$ above 4 EeV. A consistency of phase measurements in ordered energy intervals is expected to manifest itself with a smaller number of events than is needed for the detection of anisotropies with amplitudes standing out significantly above the background noise. This led us to design a prescribed test aimed at establishing whether this consistency in phases is real at $99\%$ CL. The test required a total independent exposure of 21,000 km$^2$ sr yr. Now that this exposure has been reached, we report the results here for the first time. We also report the results of the search for a dipole anisotropy for cosmic rays with energy above 4 EeV, including events with zenith angles between $60^\circ$ and $80^\circ$. Compared to previous analyses of events with zenith angles smaller than $60^\circ$, this extension increases the size of the data set by 30$\%$ and enlarges the fraction of exposed sky from 71$\%$ to 85$\%$. The largest departure from isotropy is found in the energy range above 8 EeV, with an amplitude for the first harmonic in right ascension $r_1^\alpha= (4.4\pm 1.0)\times 10^{-2}$, which has a chance probability $P(\ge r_1^\alpha) = 6.4\times 10^{-5}$, reinforcing the hint previously reported with vertical events alone.
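For readers unfamiliar with the quoted quantities $r_1^\alpha$ and $P(\ge r_1^\alpha)$: for a nearly uniform exposure in right ascension they follow from the standard first-harmonic (Rayleigh) analysis of the event right ascensions. The sketch below is a generic illustration with a simulated sky, not the Auger analysis chain, which additionally corrects for the exposure and detector effects.

    import numpy as np

    def first_harmonic(alpha_rad):
        """Rayleigh amplitude, phase [deg] and chance probability in RA."""
        n = alpha_rad.size
        a = 2.0 / n * np.sum(np.cos(alpha_rad))
        b = 2.0 / n * np.sum(np.sin(alpha_rad))
        r = np.hypot(a, b)                         # first-harmonic amplitude r1
        phase = np.degrees(np.arctan2(b, a)) % 360.0
        p_chance = np.exp(-n * r**2 / 4.0)         # P(>= r1) for an isotropic sky
        return r, phase, p_chance

    # Toy sky: events drawn with a few-percent dipolar modulation in RA.
    rng = np.random.default_rng(1)
    alpha = rng.uniform(0.0, 2.0 * np.pi, 20000)
    modulation = 0.5 * (1.0 + 0.04 * np.cos(alpha - np.radians(100.0)))
    accept = rng.uniform(size=alpha.size) < modulation
    r1, phi1, p = first_harmonic(alpha[accept])
    print(f"r1 = {r1:.3f}, phase = {phi1:.0f} deg, P(>= r1) = {p:.2e}")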
Speaker: Imen Al Samarai
talk_v0.1.pdf
Arrival directions of the highest-energy cosmic rays detected with the Pierre Auger Observatory 15m
We present the results of a search for small scale anisotropies in the distribution of arrival directions of ultra-high energy cosmic rays recorded at the Pierre Auger Observatory. The data set, gathered in ten years of operation, includes arrival directions with zenith angles up to $80^\circ$, and is about three times larger than that used in earlier studies. We update the test based on correlations with active galactic nuclei (AGNs) from the V\'eron-Cetty and V\'eron catalog, which does not yield a significant indication of anisotropy with the present data set. We perform a blind search for localized excess fluxes and for self-clustering of arrival directions at angular scales up to $30^\circ$ and for different energy thresholds between 40 EeV and 80 EeV. We search for correlations with the Galactic Center, the Galactic Plane and the Super-Galactic Plane. We also examine the correlation of arrival directions with relatively nearby galaxies in the 2MRS catalog, AGNs detected by Swift-BAT, a sample of radio galaxies with jets and with the Centaurus A galaxy. None of the searches shows a statistically significant evidence of anisotropy. The two largest departures from isotropy that were found have a post-trial probability $\approx 1.4$\%. One is for cosmic rays with energy above 58 EeV that arrive within $15^\circ$ of the direction toward Centaurus A. The other is for arrival directions within $18^\circ$ of Swift-BAT AGNs closer than 130 Mpc and brighter than $10^{44}$ erg/s, with the same energy threshold.
Speaker: Julien Aublin (urn:Google)
Aublin_ARDIR-1_v3.pdf
TA Anisotropy Summary 15m
The Telescope Array has collected 7 years of data and accumulated the largest UHECR data set in the Northern hemisphere. We make use of these data to search for large- and small-scale anisotropy of UHECR. At small angular scales we examine the data for clustering of events and correlations with various classes of putative sources. At large angular scales we will present a blind search for localized excesses of events anywhere on the sky, and find an excess -- the "hot spot" -- at the highest energies by oversampling using a radius of 20 degrees, centered in the constellation Ursa Major. We will estimate the statistical significance of this excess and show how it manifests itself in various other tests. Finally, we will examine the data for correlations with the large-scale structures in the nearby Universe.
Speakers: Hiroyuki Sagawa (RIKEN), Igor Tkachev (Russian Academy of Sciences (RU)), Peter Tinyakov (Universite Libre de Bruxelles (ULB))
Ultra-High-Energy Cosmic-Ray Hotspot Observed with the Telescope Array Surface Detectors 15m
The Telescope Array Experiment has observed a cluster of ultrahigh-energy cosmic rays, $E>57$ EeV, called the hotspot. This was reported in Abbasi et al., ApJ, 790, L21 (2014), and the cluster is centered in Ursa Major. Using the first five years of data collected by the TA surface detector, the chance probability of this hotspot in an isotropic cosmic-ray sky was calculated to be 3.4$\sigma$. In this work, we update this result using the latest data collected by the TA surface detector. We also discuss possible origins of the hotspot.
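The local significance of an oversampled excess such as the hotspot is commonly quoted using Eq. 17 of Li & Ma (1983), which compares the counts inside the search circle with the background expectation estimated from the rest of the sky. The sketch below shows that formula with illustrative on/off counts and exposure ratio, not the actual TA numbers.

    import math

    def li_ma_significance(n_on, n_off, alpha):
        """Li & Ma (1983, Eq. 17); alpha = ratio of on- to off-source exposure."""
        n_tot = n_on + n_off
        s2 = 2.0 * (n_on * math.log((1.0 + alpha) / alpha * n_on / n_tot)
                    + n_off * math.log((1.0 + alpha) * n_off / n_tot))
        return math.copysign(math.sqrt(s2), n_on - alpha * n_off)

    # Illustrative values: 19 events in the circle, an off-region 1/alpha times
    # larger containing 110 events (so ~5 events expected inside the circle).
    print(f"{li_ma_significance(n_on=19, n_off=110, alpha=0.045):.1f} sigma")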
Speaker: Kazumasa Kawata (ICRR, University of Tokyo)
icrc2015talk-0414-kawata.pdf
The Possible Extragalactic Source of Ultra-High-Energy Cosmic Rays at the Telescope Array Hotspot 15m
The Telescope Array (TA) collaboration has reported a hotspot, a cluster of 19 cosmic ray events with energies above $57~\rm EeV$ in a circle of $20^\circ$ radius centered at ${\rm R.A.}(\alpha)=146.^\circ7$, ${\rm Dec.}(\delta)=43.^\circ2$. We explore the hypothesis that the hotspot could originate from a single source. By considering the energy dependent deflections that are expected to affect arrival directions of cosmic rays propagating in cosmic magnetic fields, we identify the nearby starburst galaxy M82 and the bright nearby blazar Mrk 180 as two likely candidates. We discuss prospects of discriminating between the candidate sources with current and future spectral data.
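As a back-of-the-envelope guide to the energy-dependent deflections invoked above (and not the authors' propagation calculation): the Larmor radius of a nucleus of charge $Z$ and energy $E$ in a magnetic field $B$ is $r_L \approx 1.1\,{\rm kpc}\,(E/{\rm EeV})/(Z\,B/\mu{\rm G})$, and the small-angle bending accumulated over a path $L$ in a coherent field is roughly $L/r_L$. The field strength and path length below are assumptions for illustration only.

    import math

    def larmor_radius_kpc(e_eev, z, b_microgauss):
        return 1.1 * e_eev / (z * b_microgauss)   # r_L ≈ 1.1 kpc (E/EeV)/(Z B/uG)

    def deflection_deg(e_eev, z, b_microgauss, path_kpc):
        return math.degrees(path_kpc / larmor_radius_kpc(e_eev, z, b_microgauss))

    # e.g. a 60 EeV proton crossing ~5 kpc of a ~2 uG coherent Galactic field:
    print(f"{deflection_deg(e_eev=60.0, z=1, b_microgauss=2.0, path_kpc=5.0):.1f} deg")

Heavier nuclei or lower energies increase the deflection in proportion to $Z/E$, which is why the same hotspot can be compatible with different candidate sources depending on the assumed composition.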
Speaker: Dr Haoning He (UCLA)
Parallel DM 01 Yangtze 2
Recent results and status of the XENON program 15m
The XENON program aims at the direct detection of dark matter WIMPs, with liquid xenon as target and detection material. With detectors of increasing target mass and decreasing background, XENON has achieved competitive limits on WIMP-nucleon interaction couplings, but also on axions and axion-like particles. The XENON100 detector has been operating at the Laboratori Nazionali del Gran Sasso in Italy since 2009, with a dual-phase xenon Time Projection Chamber employing 161 kg of liquid xenon. The most recent results will be presented. The current run mainly focuses on additional calibration of the low-energy response of the detector and on the validation of new calibration techniques in view of the next-generation experiment, XENON1T. XENON1T will be the first experiment to use liquid xenon in a time projection chamber at the ton scale. It is designed to achieve a sensitivity two orders of magnitude better than the current best limits.
Speaker: Julien Masbou
The XMASS Experimental Program and its Current Implementation 15m
XMASS is an experimental program at the Kamioka Observatory in Japan designed for low energy, low background dark matter searches and neutrino physics. The core technology is a self shielding single-phase liquid xenon detector optimized for maximum scintillation light collection. In this talk we describe its current implementation and discuss its general performance after its 2013 refurbishment.
Speaker: Kai Martens (The University of Tokyo)
Results from the fiducial volume analysis of the XMASS-I dark matter data 15m
XMASS-I, the first phase of the XMASS project, is a direct detection dark matter experiment using 832 kg of liquid xenon. The key idea to reduce the background at low energies in XMASS is to use liquid xenon itself as a shield. In this analysis the clean core of the 832 kg liquid xenon volume is used as sensitive fiducial volume by eliminating the volume near the wall which suffers from beta and gamma rays from the outside. In this talk, we will present the physics results for our direct dark matter search using this fiducial volume of the XMASS-I detector.
Speaker: Atsushi Takeda (University of Tokyo)
The DAMIC dark matter experiment 15m
The DAMIC (Dark Matter in CCDs) experiment uses high-resistivity, scientific-grade CCDs to search for dark matter. The CCDs' low electronic noise allows an unprecedentedly low energy threshold of a few tens of eV, which makes it possible to detect silicon recoils resulting from interactions of low-mass WIMPs. In addition, the CCDs' high spatial resolution and excellent energy response result in very effective background identification techniques. The experiment has a unique sensitivity to dark matter particles with masses below 10 GeV. Previous results have demonstrated the potential of this technology, motivating the construction of DAMIC100, a 100-gram silicon target detector currently being installed at SNOLAB. In this presentation, the mode of operation and unique imaging capabilities of the CCDs, and how they may be exploited to characterize and suppress backgrounds, will be discussed, as well as the expected physics results after one year of data taking.
Speaker: Joao de Mello Neto (Federal University of Rio de Janeiro)
DAMIC_ICRC2015_v1.pdf
Search for Dark Matter annihilations in the Sun using the completed IceCube neutrino telescope. 15m
If Dark Matter consists of Weakly Interacting Massive Particles (WIMPs), these might be gravitationally captured in the Sun where they could self-annihilate into standard model particles. Terrestrial neutrino detectors such as IceCube can observe this as an enhanced neutrino flux in the direction of the Sun. Sensitivity has improved with respect to previous searches due to better analysis methods and reconstructions. In addition, improved veto techniques using the outer layers of the cubic kilometre array have been used to reduce the atmospheric muon background and thus improve sensitivity during the Austral Summer. We will present results from an analysis of 341 days of livetime of IceCube-DeepCore in the 86 string configuration.
Speaker: Mohamed Rameez (Universite de Geneve (CH))
ICRCTalk_v2.pdf
The indirect search for dark matter with the ANTARES neutrino telescope 15m
The indirect search for dark matter is a topic of utmost interest for neutrino telescopes. The ANTARES detector is located at the bottom of the Mediterranean Sea, 40 km off the southern French coast. ANTARES has been taking data since 2007, when the first half of the detector was installed. In this talk the results of the different analyses for dark matter signals from different potential sources, including the Sun and the Galactic Center, produced with different analysis methods, will be presented. The specific advantages of neutrino telescopes in general, and of ANTARES in particular, will be explained. As an example, the indirect searches for dark matter towards the Sun performed by neutrino telescopes currently lead to the best sensitivities and limits on the spin-dependent WIMP-nucleon cross section compared to existing direct detection experiments.
Speaker: Christoph Tönnis (Universitat de Valencia)
Parallel GA 04 Mississippi
Re-examination of the Expected Gamma-Ray Emission of Supernova Remnant SN 1987A 15m
A nonlinear kinetic theory of cosmic ray (CR) acceleration in supernova remnants (SNRs) is employed to re-examine the nonthermal properties of the remnant of SN 1987A for an extended evolutionary period of 5-50 yr. This spherically symmetric model is approximately applied to the different features of the SNR which consist of a Blue Supergiant (BSG) wind and bubble, and the swept-up red Supergiant (RSG) wind structures in the form of an HII region, the "Equatorial Ring" (ER) and the "hourglass" region, all of which are part of a RSG wind whose mass loss rate significantly decreases with elevation above the equatorial plane. The model adapts recent three-dimensional hydrodynamical simulations by Potter et al. (2014). The SNR shock has recently swept up the ER which is the densest region in the immediate circumstellar environment. Therefore the expected gamma-ray energy flux at TeV-energies at the current epoch has already reached its maximal value $\sim 10^{-13}$ erg cm$^{-2}$s$^{-1}$. The general nonthermal strength of the source is expected to decrease roughly by a factor of two over the next 10 yrs.
Speaker: Dr Leonid Ksenofontov (Yu.G. Shafer Institute of Cosmophysical Research and Aeronomy SB RAS)
Search for gamma-ray emission from AGNs with ultra-fast-outflows as candidate cosmic-ray accelerators 15m
Recent X-ray observations of active galactic nuclei (AGNs) have revealed the widespread existence of ultra fast outflows (UFOs), i.e. powerful outflows of baryonic material with velocities $>$10,000 km s$^{-1}$($\sim$0.03 c), seen as variable, blueshifted absorption lines of ionized heavy elements. They have been interpreted as winds driven by the accretion disk, and may be responsible for feedback onto their host galaxies that result in the observed M-sigma relation. In such outflows, various types of shocks are likely to form, either external shocks due to interaction with the ambient medium, or internal shocks due to inhomogeneities within the flow. Such shocks can accelerate electrons and protons to high energies and potentially induce nonthermal emission in various wavebands. In this context, we have searched for gamma-ray emission from AGNs with known UFOs, using Fermi-LAT data $>$100 MeV spanning more than 6 years. The AGN sample of Tombesi et al 2010 is used, with 42 radio-quiet AGNs listed as UFO candidates based on a systematic search for blueshifted Fe K absorption lines. In our current analysis, no significant gamma-ray excess is found from any object in the sample. We compute 95% confidence level gamma-ray upper limits (UL) for all analyzed sources, yielding a mean value for the integrated photon flux ($\geq$100\,MeV) UL of $\sim$3 $\times$ $10^{-9}$ photons cm$^{-2}$ s$^{-1}$, and in the range of $10^{41}$-$10^{45}$ erg s$^{-1}$ for ULs on the gamma-ray luminosity (100 MeV-100 GeV). To assess the properties of this UFO sample, we systematically compared these results with infra-red and radio observations, as well as the estimated kinetic power of the outflow. Our Fermi-LAT upper limits can constrain the ratio of gamma-ray luminosity to outflow kinetic power down to values as low as 0.001. The obtained results impose important constraints on emission models.
Speaker: Ms Yayoi Tomono (Tokai University)
ICRC88_tomono_v3.pdf
Flat Spectrum Radio Quasars through the MAGIC glasses 15m
The detection of Flat Spectrum Radio Quasars (FSRQs) in the Very High Energy (VHE, E>100 GeV) range is challenging, mainly because of their steep, soft spectra in this energy band. Up to now only four FSRQs have been detected at VHE, three of them discovered by MAGIC. Gamma-ray observations at such high energies are crucial to understand their emission, and especially to constrain the location of the emitting region within the jet, due to the absorption by their broad line region (BLR). Typically, FSRQs are detected during high flux states, which enhances the probability of detection with the current instruments' sensitivities. However, the latest observation campaigns performed with the MAGIC telescopes show emission during moderate-to-quiescent states, thus challenging our understanding of the emission mechanisms in FSRQs. In this contribution, we give an overview and present the most recent results on the three FSRQs 3C279, PKS1222+21 and PKS1510-089 in a multiwavelength context, with special focus on MAGIC and Fermi-LAT.
Speaker: Josefa Becerra Gonzalez (NASA GSFC)
Origin of cosmic rays excess in the Galactic Center 15m
The center of our Galaxy hosts a Super-Massive Black Hole (SMBH) of about $4 \times $ 10$^6$ M$_{sun}$. Since it has been argued that the SMBH might accelerate particles up to very high energies, its current and past activity could contribute to the population of Galactic cosmic rays (CRs). Additionally, the conditions in the Galactic Center (GC) are often compared with those of a starburst system. The high supernova (SN) rate associated with the strong massive star formation in the region must create a sustained CR injection in the GC via the shocks produced at the time of their explosions. Indeed, the presence of an excess of very high energy (VHE) cosmic rays in the inner 100 pc of the Galaxy was revealed in 2006 by the H.E.S.S. collaboration. On very large scales ($\approx$ 10 kpc), the non-thermal signature of the escaping GC cosmic rays may have been detected recently as the spectacular "Fermi bubbles". The origin of the CR over-abundance in the GC still remains mysterious: is it due to a single impulsive or stationary accelerator at the center, or to multiple accelerators filling the region? In order to answer these questions, we build a 3D model of CR injection and propagation with a realistic 3D gas distribution. We then compare it with existing data (H.E.S.S., Fermi). We discuss the CR injection in the region through a spectral and morphological comparison. We place constraints on the SNR rate and on the diffusion parameters.
Speaker: Mrs Lea Jouvin (APC)
presentation_ICRC.pdf
Prospects for Measuring the Positron Excess with the Cherenkov Telescope Array 15m
The excess of positrons in cosmic rays above ∼10 GeV has been a puzzle since it was discovered. Possible interpretations of the excess include acceleration of positron secondaries in local supernova remnants or pulsars, or the annihilation or decay of dark matter particles. To distinguish between these interpretations, the measurement of the positron fraction must be extended to higher energies. One technique to perform this measurement is using the Earth-Moon spectrometer: observing the deflection of positron and electron moon shadows by the Earth's magnetic field. The measurement has been attempted by previous imaging atmospheric Cherenkov telescopes without success. The Cherenkov Telescope Array (CTA) will have unprecedented sensitivity and background rejection that could make this measurement successful for the first time. In addition, the possibility of using silicon photomultipliers in some of the CTA telescopes could greatly increase the feasibility of making observations near the moon. Estimates of the capabilities of CTA to measure the positron fraction using simulated observations of the moon shadow will be presented.
Speaker: Prof. Justin Vandenbroucke (University of Wisconsin - Madison)
break 15m
Parallel GA03 Pulsars Amazon
Constraining photon dispersion relation from observations of the Vela pulsar with H.E.S.S 15m
M. Chrétien, J. Bolmont and A. Jacholkowska, for the H.E.S.S. collaboration. Some approaches to Quantum Gravity (QG) predict a modification of the photon dispersion relation, an effect also known as Lorentz Invariance Violation. The effect is expected to become measurable for photons with energies approaching an effective QG energy scale. This scale has been constrained by observing gamma rays emitted by variable astrophysical sources such as gamma-ray bursts and flaring active galactic nuclei. Pulsars are periodic transient sources with extreme variability on millisecond time scales. In 2014, the H.E.S.S. experiment reported the detection, above 30 GeV, of gamma rays emitted every 89 ms by the Vela pulsar. Using a likelihood analysis, calibrated with a dedicated Monte-Carlo procedure, we obtain the first limit on the QG energy scale from the Vela pulsar. In this talk, the method and the calibration procedure in use will be described, and the results will be discussed.
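For orientation, the size of the effect being constrained can be estimated with the linear-order formula for the energy-dependent lag, $\Delta t \approx (\Delta E/E_{QG})\,d/c$, for a Galactic source at distance $d$ (no cosmological correction is needed for Vela). The sketch below uses an assumed ~50 GeV energy lever arm, a distance of ~290 pc and the Planck energy for $E_{QG}$; the actual H.E.S.S. result comes from a maximum-likelihood fit of the phasogram, not from this simple estimate.

    PARSEC_M = 3.0857e16   # metres per parsec
    C_M_S = 2.998e8        # speed of light [m/s]

    def liv_lag_seconds(delta_e_gev, e_qg_gev, distance_pc):
        """Linear-order lag between photons separated by delta_e_gev in energy."""
        return (delta_e_gev / e_qg_gev) * distance_pc * PARSEC_M / C_M_S

    lag = liv_lag_seconds(delta_e_gev=50.0, e_qg_gev=1.2e19, distance_pc=290.0)
    print(f"expected lag ≈ {lag:.1e} s, to be compared with the 89 ms rotation period")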
Speaker: Mathieu Chrétien (LPNHE CNRS/IN2P3)
A Population of TeV Pulsar Wind Nebulae in the H.E.S.S. Galactic Plane Survey 15m
The H.E.S.S. Galactic Plane Survey (HGPS) constitutes the deepest scan of the inner Milky Way in TeV gamma rays to date. The dominant class of objects in this 10-year survey are Galactic pulsar wind nebulae (PWNe). Aside from a uniform reassessment of the observational parameters of PWNe already found in past years, the HGPS for the first time allows the extraction of flux upper limits in regions around pulsars without a detected TeV PWN. Including these limits, we systematically investigate the evolution of quantities such as the TeV luminosity and extension over $\sim 10^5$ years after the birth of the pulsar. We find that there are trends in their evolution, but also large variations around the average behaviour. This is likely due to the diversity of the surrounding medium and the intrinsic starting conditions of the systems. To put the results into context, we present a time-dependent modeling that reproduces both the general trends and the scatter found in the available data of this population.
Speaker: Stefan Klepser (DESY)
klepser_icrc2015_pwn_pop.pdf
Search for gamma rays above 100 TeV from the Crab Nebula using the Tibet air shower array and the 100 m2 muon detector 15m
The Crab Nebula is the standard calibration candle for TeV cosmic gamma-ray experiments. None of those experiments has detected gamma rays above 100 TeV from the Crab Nebula, and the best upper limits have been given by the CASA-MIA experiment. Under these circumstances, it is commonly understood that the energy spectrum of the Crab Nebula can be reproduced well by a mechanism based on the synchrotron self-Compton emission of high-energy electrons. The observation of the energy spectrum of the Crab Nebula above 100 TeV with high sensitivity is important in order to confirm the leptonic origin of the TeV gamma-ray emission from the Crab Nebula. To improve the sensitivity of the Tibet air shower array to TeV cosmic gamma rays, we are planning to add an underground 10,000 m$^2$ muon detector array to the existing Tibet air shower array. A small prototype muon detector, 100 m$^2$ in area, was constructed under the Tibet air shower array in the late fall of 2007. In this work, we search for continuous gamma-ray emission from the Crab Nebula above 100 TeV, using the data collected from March 2008 to February 2010 by the Tibet air shower array and the 100 m$^2$ muon detector. We find that our MC simulation is in good agreement with the experimental data. No significant excess is found, and the most stringent upper limit is obtained above 140 TeV.
Speaker: Dr Takashi SAKO (Institute for Cosmic Ray Research, University of Tokyo)
Observations of the Crab Nebula with Early HAWC Data 15m
The High Altitude Water Cherenkov (HAWC) Observatory is a TeV gamma-ray detector which has been completed in early 2015. HAWC started science operations in August 2013 with a fraction of the detector taking data. Several known gamma-ray sources have been already detected with the first HAWC data. Among these sources, the Crab Nebula, the brightest steady gamma-ray source at very high energies in our Galaxy, has been detected with high significance. In this contribution I will present the results of the observations of the Crab Nebula with HAWC, including time variability, and the detector performance based on early data.
Speaker: Francisco Salesa Greus (The Pennsylvania State University)
Salesa_ICRC_348.pdf
Six years of VERITAS observations of the Crab Nebula 15m
The Crab Nebula is the brightest source in the very-high-energy (VHE) gamma-ray sky and one of the best studied non-thermal objects. The dominant VHE emission mechanism is believed to be inverse Compton scattering of low energy photons on relativistic electrons. While it is unclear how the electrons are accelerated to energies of $10^{16}$ eV, it is the general consensus that the ultimate source of energy is the Crab pulsar at the center of the nebula. Studying VHE gamma-ray emission provides valuable insight into the emission mechanisms and ultimately helps to understand the remaining mysteries of the Crab, for example, how the Poynting-dominated energy flow is converted into a particle-dominated flow of energy. We report on the results of six years of Crab observations with VERITAS comprising 115 hours of data taken between 2007 and 2013. VERITAS is an array of four 12-meter imaging air Cherenkov telescopes located in southern Arizona. We report on the energy spectrum, light curve, and a study of the VHE extension of the Crab Nebula.
Speaker: Dr Kevin Meagher (Université libre de Bruxelles)
Meagher_CrabNebula_ICRC2015_v5.pdf
The most precise measurements of the Crab nebula inverse Compton spectral component 15m
The Crab pulsar wind nebula (PWN) is one of the best studied astrophysical objects. Due to its brightness at all wavelengths, precise measurements are provided by different kinds of instruments, allowing for many discoveries, later seen in other non-thermal sources, and for a detailed examination of its physics. Most of the theoretical models for PWN emission are, in fact, based on Crab Nebula measurements. The Crab Nebula shows a broad-band spectrum spanning from radio frequencies up to VHE gamma rays and consists of two components, one of synchrotron origin and the other one due to inverse Compton losses, starting at a few GeV. We will report the most precise measurements of the inverse Compton component of the Crab Nebula, obtained by combining data from the LAT detector on board the Fermi satellite (1-300 GeV) and from the stereoscopic MAGIC system (>50 GeV). At low energies, the MAGIC results, combined with the Fermi-LAT data, show a flat and broad inverse Compton peak. The overall fit to the data between 1 GeV and 30 TeV is well described by a modified log-parabola function with an exponent of 2.5, and places the position of the inverse Compton peak at around 53 GeV. The spectral measurements obtained by the MAGIC collaboration cover more than three decades in energy, allowing us to address the still-open question of the maximum energy reached by the parent electron population. The broadness of the inverse Compton peak cannot be reproduced by either the constant-B-field model or the MHD flow model. The conclusion, based on earlier data, that simple models (constant B-field, spherical symmetry) can account for the observed spectral shape has to be revisited in the light of the new MAGIC results. On the other hand, the time-dependent 1D spectral model provides a good fit to the new VHE results when considering an 80 μG magnetic field. However, it fails to match the data when the morphology of the nebula at lower wavelengths is included.
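To illustrate how a fitted spectral shape pins down the quoted inverse Compton peak position: for a plain log-parabola $dN/dE = f_0\,(E/E_0)^{a + b\log_{10}(E/E_0)}$, the peak of the SED $E^2\,dN/dE$ lies at $\log_{10}(E_{peak}/E_0) = -(a+2)/(2b)$. The MAGIC fit quoted above uses a modified log-parabola, so the sketch below is only the simplest version of the argument, with parameter values chosen by hand to put the peak near ~50 GeV.

    def sed_peak_energy_gev(a, b, e0_gev=1000.0):
        """SED peak of a log-parabola dN/dE = f0*(E/E0)**(a + b*log10(E/E0))."""
        return e0_gev * 10.0 ** (-(a + 2.0) / (2.0 * b))

    # Illustrative (hand-picked) parameters, not the published fit values:
    print(f"E_peak ≈ {sed_peak_energy_gev(a=-2.47, b=-0.18):.0f} GeV")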
Speaker: Roberta Zanin (Universitat de Barcelona)
Parallel NU 01 Yangtze 1
Photon-neutrino flux correlations from hadronic models of AGN? 15m
Neutrino production in jetted AGN is linked to hadronic processes such as photomeson production. The same interactions also predict high-energy photons, mostly via neutral pion decay. While neutrinos escape the source unattenuated, the hadronically produced high-energy photons and pairs in most cases initiate pair cascades, which redistribute their energy to lower frequencies where photons can escape the emission region. Realistic hadronic emission models of AGN jets take into account the competing energy losses of the injected/accelerated particles as well as all leptonic processes (from primary and secondary electrons). This may smear out any intrinsic correlation between the emerging photon and neutrino fluxes. The goal of this work is to investigate the degree of observable photon-neutrino flux correlation that is expected from hadronic AGN jet emission models. For this purpose, the expected neutrino spectra from a number of hadronically modeled broadband spectral energy distributions (SEDs) of powerful blazars are calculated and compared to the photon fluxes at various frequencies by means of a correlation analysis. The results have implications for the search for the photon sources that are associated with the TeV-PeV neutrino events reported by neutrino observatories.
Speaker: Anita Reimer (University of Innsbruck)
Neutrino_Photon_correlation_upload.pdf
Neutrinos from Clusters of Galaxies and Radio Constraints 15m
Cosmic-ray (CR) protons can accumulate for cosmological times in clusters of galaxies. Their hadronic interactions with protons of the intra-cluster medium (ICM) generate secondary electrons, gamma-rays and neutrinos. In light of the high-energy neutrino events recently discovered by the IceCube observatory, we estimate the contribution from galaxy clusters to the diffuse gamma-ray and neutrino backgrounds. For the first time, we consistently take into account the synchrotron emission generated by the secondary electrons and require the cluster radio counts to be respected. For a choice of parameters respecting current constraints from radio to gamma rays, and assuming a proton spectral index of -2, we find that hadronic interactions in clusters contribute less than 10% to the IceCube flux, and much less to the total extragalactic gamma-ray background observed by Fermi. They account for less than 1% for spectral indices $<-2$. The high-energy neutrino flux observed by IceCube can be reproduced without violating radio constraints only if a very hard (and speculative) spectral index $>-2$ is adopted. However, this scenario is in tension with the high-energy IceCube data, which seem to suggest a spectral energy distribution of the neutrino flux that decreases with particle energy. We stress that our results are valid for all kinds of sources injecting CR protons into the ICM and that, while IceCube can test the most optimistic scenarios for spectral indices $\ge$-2.2 by stacking a few nearby massive galaxy clusters, clusters cannot give any relevant contribution to the extragalactic gamma-ray and neutrino backgrounds in any realistic scenario.
Speaker: Fabio Zandanel (University of Amsterdam)
Zandanel_Talk.pdf
Neutrinos and the origin of the cosmic rays 15m
We discuss the interplay between the high-energy neutrino flux observed by IceCube and cosmic ray observations. One question is whether the neutrino flux can be reconciled with the paradigm that it comes from the sources of the UHECRs. Another is how many of these neutrinos can stem from cosmic ray interactions with hydrogen in the Milky Way once the chemical composition of the cosmic rays is taken into account.
Speaker: Walter Winter (DESY)
150801ICRC2015.pptx
On the neutrino emission from BL Lacs 15m
The recent IceCube discovery of 0.1-1 PeV neutrinos of astrophysical origin opens up a new era for high-energy astrophysics. There are various astrophysical candidate sources, including active galactic nuclei (AGN) and starburst galaxies. Yet, a firm association of the detected neutrinos with one (or more) of them is still lacking. This talk will focus on the possible association of IceCube neutrinos with BL Lacs, a sub-class of radio loud AGN. We present the results from leptohadronic modeling of six individual BL Lacs, including the closest to Earth, Mrk 421, that were selected as probable counterparts of the IceCube neutrinos. We also show the cumulative neutrino emission from BL Lacs, which was calculated by incorporating our results from the modeling of individual sources to Monte Carlo simulations for the blazar evolution. We finally discuss our results in the light of current IceCube limits (above 2 PeV) and a possible future detection.
Speaker: Dr Maria Petropoulou (Purdue University)
Detectability of GRB blast wave neutrinos in IceCube 15m
Ultrahigh-energy cosmic rays (UHECRs) accelerated in long-lived gamma-ray burst (GRB) blast waves are expected to interact with the X-ray to optical-infrared photons of the GRB afterglow to produce PeV-EeV neutrinos. These long-lived neutrino fluxes can last for a time scale of days to years, in contrast to the prompt neutrino fluxes of the internal shocks model, which last seconds to minutes and which have been constrained by recent IceCube GRB searches. We calculate the expected neutrino events in IceCube in the PeV–EeV range from the blast waves of long-duration GRBs, both for individual nearby GRBs and for the diffuse flux. We show that EeV neutrinos from the blast wave of an individual GRB can be detected with long-term monitoring by a future high-energy extension of IceCube for redshifts up to z ∼ 0.5. We also show that with 5 years of operation IceCube will be able to detect the diffuse GRB blast-wave neutrino flux and distinguish it from the cosmogenic GZK neutrino flux if the UHECRs are heavy nuclei.
Speaker: Dr Lili Yang (University of Nova Gorica)
A HADRONIC SCENARIO FOR THE GALACTIC RIDGE EMISSION 15m
During the last decade the innermost part of our Galaxy has been observed as a gamma-ray emitting region with a ridge-like morphology. In particular, in 2005 the H.E.S.S. collaboration reported the measurement of a power-law spectrum with an index close to -2.3, between 0.1 and 10 TeV, strongly correlated with the dense molecular clouds in that region. Last year the VERITAS collaboration confirmed that finding. Below that energy a diffuse non-thermal emission was also found by the Fermi-LAT observatory, with a spectrum in this region that can be smoothly connected to the one measured by H.E.S.S. Although several hypotheses have been proposed for the origin of that emission - e.g. flaring activity of the SgrA* supermassive black hole as well as steady leptonic and hadronic emission from freshly accelerated cosmic rays (CR) - it was recently shown that those results can be consistently interpreted in terms of hadronic emission produced by the Galactic CR population in the presence of radially dependent transport. Since the Galactic CR spectrum extends at least up to several PeV, a very-high-energy neutrino emission is expected from the Galactic Center region considered, which should exceed the atmospheric background for a kilometer-scale neutrino telescope. Here, we adopt such a scenario to estimate the expected signal in the IceCube observatory and compare it with its recent results. Moreover, we will discuss the detection prospects of neutrino telescopes in the Northern hemisphere, such as ANTARES and the future KM3NeT, which are better positioned for the observation of the Galactic Ridge.
Speaker: Dr Antonio Marinelli (Physics Institute, Pisa University)
Poster 1 CR Amazon Foyer
A branching model for hadronic air showers 1h
We introduce a simple branching model for the development of hadronic showers in the Earth's atmosphere. Based on this model, we show how the size of the pionic component, and of the muons that follow from it, can be estimated. Several aspects of the subsequent muonic component are also discussed. We focus on the energy evolution of the muon production depth. We also estimate the impact of the primary particle mass on the size of the hadronic component. Even though a precise calculation of the development of air showers must be left to complex Monte Carlo simulations, the proposed model provides qualitative insight into the physics of air showers.
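As a rough illustration of the kind of estimate such a branching picture yields, the following minimal Heitler-Matthews-style sketch (in Python) computes the muon number from a branching cascade of charged pions; the critical energy and pion multiplicity are assumed textbook values, not parameters of the model presented in this contribution.

```python
# Toy Heitler-Matthews-style estimate of the muon content of a hadronic shower.
# Assumptions (not from this contribution): pions branch with a fixed charged
# multiplicity n_ch per interaction until they reach a critical energy xi_c,
# where they decay to muons; a primary nucleus of mass A is treated with the
# superposition model.
import numpy as np

def muon_number(E0_eV, xi_c_eV=20e9, n_ch=10, A=1):
    """N_mu ~ A^(1-beta) * (E0/xi_c)^beta with beta = ln(n_ch)/ln(3/2 * n_ch)."""
    beta = np.log(n_ch) / np.log(1.5 * n_ch)   # ~0.85 for n_ch = 10
    return A**(1.0 - beta) * (E0_eV / xi_c_eV)**beta

for A, label in [(1, "proton"), (56, "iron")]:
    print(f"{label:6s}: N_mu ~ {muon_number(1e17, A=A):.1e} at E0 = 1e17 eV")
```

In this toy picture the iron-to-proton ratio of muon numbers is $A^{1-\beta} \approx 1.8$ for $A = 56$, which illustrates the sensitivity of the hadronic component to the primary mass mentioned above.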
Speaker: Vladimir Novotny (Charles University in Prague)
A Look at the Cosmic Ray Anisotropy with the Nonlocal Relativistic Transport Approach 1h
Cosmic ray anisotropy is a key element in the quest to find the origin of these enigmatic particles. A well-known problem is that, although most of the likely sources are in the Inner Galaxy, the lowest-energy particles (below about 1 PeV) arrive mostly from the direction of the Outer Galaxy. We show that this can be understood by taking into account a possible reflection of charged particles by 'walls' in the interstellar medium. This effect is too subtle to be explained by ordinary diffusion theory and becomes apparent within the framework of the nonlocal relativistic transport theory, which involves the concepts of a free-motion velocity and of path lengths with a non-exponential probability distribution, as appropriate for a turbulent interstellar medium.
Speaker: Dr Renat Sibatov (Ulyanovsk State University, Ulyanovsk, Russia)
A method for reconstructing the muon lateral distribution with an array of segmented counters with time resolution 1h
Although the nature of ultra-high-energy cosmic rays is still largely unknown, significant progress has been achieved in recent decades with the construction of the large arrays that are currently taking data. One of the most important pieces of information comes from the chemical composition of the primary particles. It is well known that the muon content of the air showers generated by the interaction of cosmic rays with the atmosphere is rather sensitive to the primary mass. Therefore, the measurement of the number of muons at ground level is an essential ingredient for inferring the cosmic ray mass composition. The energy range from $3 \times 10^{17}$ eV to $10^{20}$ eV is considered, using two triangular arrays with spacings of 750 m and 1500 m, respectively. We introduce here a novel method for reconstructing the muon lateral distribution function with an array of segmented counters. The reconstruction builds on a method we recently presented, now taking into account the time resolution of the detectors. We show that the new method improves the statistical uncertainty of the measured number of muons with respect to the previous alternative. The new reconstruction also has the advantage of estimating the uncertainties in the number of muons without bias. These improvements make a difference in composition analyses. While the increased resolution allows for a better separation between different primary masses, correct uncertainties are required for a meaningful classification of cosmic rays on an event-by-event basis.
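For background, segmented counters usually infer the number of impinging muons from the number of segments with a signal; a standard counting estimator, quoted here only as a reference point (the method of this contribution additionally exploits the time resolution of the detectors), is

$$\hat{N}_\mu = -n \,\ln\!\left(1 - \frac{k}{n}\right),$$

where $n$ is the number of segments of a counter and $k$ is the number of segments that fired, which corrects for two or more muons hitting the same segment.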
Speaker: Brian Wundheiler (Instituto de Tecnologias en Deteccion y Astroparticulas)
A new version of the event generator Sibyll 1h
The event generator Sibyll can be used for the simulation of hadronic multiparticle production up to the highest cosmic ray energies. It is optimized for providing an economic description of those aspects of the expected hadronic final states that are needed for the calculation of air showers and atmospheric lepton fluxes. New measurements from fixed target and collider experiments, in particular those at LHC, allow us to test the predictive power of the model version 2.1, which was released more than 10 years ago, and also to identify shortcomings. Based on a detailed comparison of the model predictions with the new data we revisit model assumptions and approximations to obtain an improved version of the interaction model. In addition a phenomenological model for the production of charm particles is implemented as needed for the calculation of prompt lepton fluxes in the energy range of the astrophysical neutrinos recently discovered by IceCube. After giving an overview of the new ideas implemented in Sibyll and discussing how they lead to an improved description of accelerator data, predictions for air showers and atmospheric lepton fluxes are presented.
Speaker: Ralph Richard Engel (KIT - Karlsruhe Institute of Technology (DE))
A Novel CubeSat-Sized Antiproton Detector for Space Applications 1h
Measuring cosmic antimatter fluxes probes many astrophysical processes. The abundances and energy spectra of antiparticles support the understanding of the creation and propagation mechanisms of cosmic rays in the Universe. Deviations from theoretical predictions may hint at exotic sources of antimatter or at inaccuracies in our understanding of the processes involved. Specifically, geomagnetically trapped antiprotons in Earth's inner radiation belt are directly linked to the production of secondary cosmic ray particles and their motion in Earth's magnetic field. The planned Antiproton Flux in Space (AFIS) experiment is designed to measure this antiproton flux using a novel CubeSat-sized particle detector. This active-target detector consists of 900 scintillating fibers read out by silicon photomultipliers and is sensitive to antiprotons in the energy range below 100 MeV. With its almost 4π angular acceptance, it covers a geometrical acceptance of 270 cm²$\cdot$ sr. The particle identification scheme for antiprotons relies on a combination of Bragg curve spectroscopy and the characteristics of the annihilation process. In order to verify the detection principle, a prototype detector with a reduced number of channels was tested at a stationary proton beam. Its energy resolution was found to be less than 1 MeV for stopping protons of about 50 MeV energy. We will give an overview of the AFIS mission and explain the working principle of the detector. We will also discuss the results from the beam test and the construction of the first full-scale detector. This research was supported by the DFG cluster of excellence "Origin and Structure of the Universe" (www.universe-cluster.de).
Speaker: Mr Thomas Pöschl (Technische Universität München)
final.pdf
An IceTop Module for the IceCube MasterClass 1h
The IceCube MasterClass is an outreach project of the IceCube experiment at the South Pole for 9th to 12th grade school students. The MasterClass is designed to provide an authentic astrophysics research experience by demonstrating typical elements of IceCube research. It is a full-day experience of engaging activities, educational talks, and scripted analyses, in which students can reproduce the main science results of IceCube with real data. Interactive applications, which run directly in standard web browsers and offer students intuitive insights into data processing, are a central aspect of the analysis activities. This contribution describes a new analysis module which reproduces the measurement of the energy spectrum of cosmic rays with IceTop, the surface component of IceCube. The module features a web application that allows students to interactively fit representative IceTop events to recover the direction and the size estimator S125 from the raw data. Data from the web application are processed with a simple spreadsheet application to compute the cosmic-ray flux, which can be compared to the official result.
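As an aside, here is a minimal sketch (in Python) of the kind of calculation the spreadsheet step performs, converting event counts per bin into a differential flux; every number below is a placeholder invented for the example, not a value used in the actual MasterClass module.

```python
# Differential flux from counted events: J = N / (A * Omega * T * dE).
# All detector parameters and counts below are invented placeholders.
import numpy as np

area = 1.0e6          # effective detection area in m^2 (placeholder)
solid_angle = 1.0     # accepted solid angle in sr (placeholder)
live_time = 3.15e7    # live time in s, roughly one year (placeholder)

log10_E_edges = np.array([15.0, 15.5, 16.0, 16.5])   # energy bin edges in eV (placeholder)
counts = np.array([12000, 3100, 800])                # events per bin (invented)

E_edges = 10.0**log10_E_edges
dE = np.diff(E_edges)                                # bin widths in eV
flux = counts / (area * solid_angle * live_time * dE)
for lo, hi, j in zip(log10_E_edges[:-1], log10_E_edges[1:], flux):
    print(f"10^{lo:.1f}-10^{hi:.1f} eV: J ~ {j:.2e} / (m^2 sr s eV)")
```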
Speaker: Dr Hans Peter Dembinski (Bartol Institute, Dept of Physics and Astronomy, University of Delaware)
Dembinski_ICRC2015.pdf
Anisotropy search in the Ultra High Energy Cosmic Ray Spectrum in the Northern Hemisphere using the Telescope Array surface detector 1h
The Telescope Array (TA) experiment is located in the western desert of Utah, USA, and observes ultra-high-energy cosmic rays in the northern hemisphere. In the highest part of the energy region, the shape of the cosmic ray energy spectrum carries information about the source density distribution. We search for directional differences in the shape of the energy spectrum. In this study, the observed cosmic ray energy distributions are compared between sky areas that contain nearby objects, such as the supergalactic plane, and others that do not.
Speaker: Dr Toshiyuki Nonaka (Institute for Cosmic Ray Research, University of Tokyo)
ICRC2015poster_Aniso0727.pdf
Astrophysical expectations for the variation of the UHECR composition across the sky 1h
Using an integrated propagation code that takes into account particle energy losses, nuclear photo-dissociation and deflections by Galactic and extragalactic magnetic fields, we simulate representative sky maps of ultra-high-energy cosmic rays over the entire sky, for a wide range of astrophysical scenarios, with different source density, spectrum and composition. We analyze these sky maps from the point of view of composition variations in different regions of the sky, and present a statistical analysis of the significance of such variations. In particular, we apply the study to the typical differences that might be expected between the northern and southern hemispheres.
Speaker: Simon Bacholle (APC- Paris Diderot university)
Atmospheric monitoring at the Pierre Auger Observatory using the upgraded Central Laser Facility 1h
The Fluorescence Detector (FD) at the Pierre Auger Observatory measures the intensity of the light scattered out of the laser tracks generated by the Central Laser Facility (CLF) and the eXtreme Laser Facility (XLF) to monitor and estimate the aerosol optical depth $\tau(z,t)$. These measurements are important for an unbiased and reliable FD reconstruction of the energy of the primary cosmic ray and of the depth of maximum shower development. In 2013 the CLF was upgraded substantially with the addition of a solid-state laser, a new-generation GPS, a robotic beam calibration system, better thermal and dust isolation, and improved software. The upgrade also includes a back-scatter Raman LIDAR receiver, capable of providing independent measurements of $\tau(z,t)$. We describe the new features and applications of the upgraded instrument, including an automated energy calibration system, a steered firing system used for arrival direction studies, and the atmospheric monitoring measurements. We also present the first results after the upgrade, using three different procedures to calculate $\tau(z,t)$. The first procedure compares the hourly FD response to the scattered light from the CLF (or XLF) against a reference hourly profile measured during an extremely clear night in which zero aerosol content is assumed. The second procedure measures $\tau(z,t)$ by comparing simulated FD responses under different aerosol attenuation parameters and selecting the best fit to the actual FD response. The third procedure uses the new in-situ Raman LIDAR receiver to measure the back-scattered light from the CLF laser. The comparison shows good agreement between the first and second procedures for all FDs located at similar distances from the facilities. However, we found higher values of $\tau(z,t)$ using the Raman measurements. This difference may indicate that the assumption of zero aerosol content during the selection of the reference night may not be accurate.
Speaker: carlos medina-hernandez (colorado school of mines)
mAtmosPosterICRC.pdf
AugerNext: R&D studies at the Pierre Auger Observatory for a next generation ground-based ultra-high energy cosmic ray experiment 1h
The findings so far of the Pierre Auger Observatory and those of the Telescope Array define some requirements for a possible next generation global cosmic ray observatory: it needs to be considerably increased in size, it needs good sensitivity to composition, and it has to cover the full sky. At the Pierre Auger Observatory, AugerNext aims to conduct some innovative initial research studies on a design of a sophisticated hybrid detector fulfilling these demands. Within a European supported ASPERA/APPEC (Astroparticle Physics European Consortium) project for the years 2011-2014, such R&D studies primarily focused on the following areas: i) consolidation of the detection of cosmic rays using MHz radio antennas; ii) proof-of-principle of cosmic ray microwave detection; iii) test of the large-scale application of new generation photo-sensors; iv) generalization of data communication techniques; and v) development of new schemes for muon detection with surface arrays. This contribution summarizes the achievements of these R&D studies within the AugerNext project.
Speaker: Andreas Haungs (Karlsruhe Institute of Technology)
ICRC15-AugerNext-Poster.pdf
Automated procedures for the Fluorescence Detector calibration at the Pierre Auger Observatory 1h
The quality of the physics results derived from the analysis of the data collected at the Pierre Auger Observatory depends heavily on the calibration and monitoring of the components of the detectors. It is crucial to maintain a database containing complete information on the absolute calibration of all photomultipliers and their time evolution. The low rate of physics events implies that the analysis has to be made over a long period of operation. This requirement imposes a very organized and reliable data storage and data management strategy, in order to guarantee correct data preservation and high data quality. The Fluorescence Detector (FD) consists of 27 telescopes with about 12,000 phototubes, which have to be calibrated periodically. A special absolute calibration system is used. It is based on a calibrated light source with a diffusive screen, uniformly illuminating the photomultipliers of the camera. This absolute calibration is performed every few years, as its use is not compatible with the operation of the detector. To monitor the stability and the time behavior, another light-source system operates every night of data taking. This relative calibration procedure yields more than $2{\times}10^4$ raw files each year, about 1 TByte/year. In this paper we describe a new web-interfaced database architecture to manage, store, produce and analyze FD calibration data. It contains the configuration and operating parameters of the detectors at each instant, as well as other relevant functional parameters that are needed for the analysis or to monitor possible instabilities, used for the early discovery of malfunctioning components. Based on over 10 years of operation, we present results on the long-term performance of the FD and its dependence on environmental variables. We also report on a check of the absolute calibration values obtained by analyzing the signals left by stars traversing the FD field of view.
Speaker: Gaetano Salina (Istituto Nazionale di Fisica Nucleare)
Azimuthal asymmetry in the Cherenkov radiation of EAS 1h
The reconstruction of the Cherenkov radiation produced by charged secondary particles is essential for the study of Extensive Atmospheric Showers (EAS). Recent studies have shown that, for a more accurate reconstruction of the EAS parameters, the dependence of the spatial distribution of the Cherenkov radiation on the azimuth angle must be considered, an effect due to the influence of the Earth's geomagnetic field. Taking this dependence into account could, in principle, improve the accuracy of the determination of the characteristics of the primary particles based on Cherenkov measurements. In this work, a study of the azimuthal dependence in the Tunka data is presented.
Speaker: Mr Jorge Cotzomi (FCFM BUAP)
CALET measurements with cosmic nuclei: expected performances of tracking and charge identification 1h
CALET is a space mission currently in the final phase of preparation for launch to the International Space Station (ISS), where it will be installed on the Exposed Facility of the Japanese Experiment Module (JEM-EF). In addition to high-precision measurements of the electron spectrum, CALET will also perform long-exposure observations of cosmic nuclei from protons to iron and will detect trans-iron elements with a dynamic range up to Z=40. The energy measurement relies on two calorimeter systems: a fine-grained imaging calorimeter (IMC) followed by a total absorption calorimeter (TASC), for a total thickness of 30 X$_{0}$ and 1.3 proton interaction lengths. A dedicated module (a charge detector, CHD), placed at the top of the apparatus, identifies the atomic number Z of the incoming cosmic ray, while the IMC provides tracking capabilities and redundant charge identification through multiple dE/dx measurements. In this paper, the expected performance of the tracking and charge identification systems of CALET will be discussed. The CALET mission is funded by the Japanese Space Agency (JAXA), the Italian Space Agency (ASI), and NASA.
Speaker: Paolo Brogi (Universita degli studi di Siena (IT))
CALET perspectives for calorimetric measurements of high energy electrons based on beam test results 1h
CALET is a space mission currently in the final phase of preparation for launch to the International Space Station (ISS), where it will be installed on the Exposed Facility of the Japanese Experiment Module (JEM-EF). One of the main science goals of the experiment is the measurement of the inclusive electron (+positron) spectrum. By integrating a sufficient exposure on the ISS, CALET will be able to explore the energy region above 1 TeV, where the presence of nearby acceleration sources is expected to shape the high end of the electron spectrum and leave faint, but detectable, footprints in the anisotropy. In order to meet this experimental goal, CALET has been designed to achieve a large proton rejection capability (>$10^5$) thanks to the full containment of electromagnetic showers in a 27 X$_0$ thick calorimeter (TASC) preceded by a 3 X$_0$ fine-grained pre-shower calorimeter (IMC) with imaging capabilities. In this paper the expected performance of the instrument with electrons will be discussed on the basis of the results of measurements performed during beam calibration tests at CERN-SPS at beam energies up to 290 GeV.
Speaker: Gabriele Bigongiari (Universita degli studi di Siena (IT))
BigongiariICRC2015.pdf
Calibration and sensitivity of large water-Cherenkov Detectors at the Sierra Negra site of LAGO 1h
The Latin American Giant Observatory (LAGO) is an international network of water-Cherenkov detectors (WCD) installed at different sites across Latin America. In México, on top of the Sierra Negra volcano at 4530 m a.s.l., LAGO has completed the first instrumented detector of an array: a cylindrical WCD, 7.3 m in diameter and 1 m in height, with a total detection area of $40$ m$^2$, sectioned into four equal slices. Each of these slices is instrumented with an 8" photomultiplier tube installed at the top of the detector and looking downwards. The final setup will have three WCDs like the one described, arranged in a triangle, and one WCD, 7.3 m in diameter and 5 m in height, located at the centre. Data acquisition with this first WCD started in June 2014. In this work the full calibration procedure of this detector will be discussed, together with preliminary measurements of the rate stability. The effective area and the sensitivity to gamma-ray bursts are derived from the LAGO simulation chain, based on Magnetocosmics, CORSIKA and GEANT4. From these results, we discuss the capability of this detector to separate the electromagnetic and muon components of extensive air showers.
Speaker: Dr Alberto Carramiñana Alonso (INAOE)
Calibration of a fluorescence detector using a flying standard light source for the Telescope Array observatory 1h
The main calibration items for Fluorescence Detector (FD) observations are the fluorescence yield, the atmospheric attenuation and the detector sensitivity. In 2012-2013, we conducted a joint TA-Auger calibration campaign using a flying device carrying an ultraviolet LED as a standard light source. This device, called an octocopter, was built by KIT. An octocopter has excellent portability and is suitable for calibrating FDs at a variety of remote locations. In the TA FD observations of the octocopter, the difference in the number of detected photons between measurement and simulation is ±5%, within the range of the systematic error of the light source. In TA, we have begun developing a similar flying standard light source. By mounting a high-performance GPS, the systematic error on the measured light-source position will be reduced to less than 1 m. A photodiode mounted directly next to the light source measures the relative light intensity of each pulse. We report the progress of the development of this octocopter, and the analysis results of the joint calibration campaign using the previous octocopter.
Speaker: Mr Motoki Hayashi (Shinshu University)
Calibration of the absolute amplitude scale of the Tunka Radio Extension (Tunka-Rex) 1h
The Tunka Radio Extension (Tunka-Rex) is an array of 44 radio antenna stations, constituting a radio detector for air showers. It is an extension of Tunka-133, an air-Cherenkov detector in Siberia, which is used as an external trigger for Tunka-Rex and provides a reliable reconstruction of the energy and shower maximum. Each antenna station consists of two perpendicularly aligned active antennas, called SALLAs. An antenna calibration of the SALLA with a commercial reference source enables us to reconstruct the incoming radio signal on an absolute scale. Since the same reference source was used for the calibration of LOPES and, in a calibration campaign in 2014, also for LOFAR, these three experiments now have a consistent calibration and, therefore, a common absolute scale. This was a key ingredient in resolving a long-standing contradiction between the measurements of two calibrated experiments. We will present how the calibration was performed and compare radio measurements of air showers from Tunka-Rex to model calculations and to published results from other calibrated experiments.
Speaker: Roman Hiller (KIT)
Calibration of the LOFAR antennas 1h
Extensive air showers create short, nanosecond-scale pulses at radio frequencies. These pulses have been measured successfully in the past years at the Low-Frequency Array (LOFAR). Because the signal is short and is emitted in the atmosphere, methods based on the flux calibration of known sources, as used in radio astronomical observations, cannot be applied to establish an absolute calibration. To overcome this, we present three approaches that were used to check and improve the antenna model of LOFAR and to provide an absolute calibration for air shower measurements. In future work these results can be used as an absolute scale for measurements of astronomical transients with LOFAR.
Speaker: Jörg Hörandel (Ru Nijmegen/Nikhef)
Calibration of the TA Fluorescence Detectors with Electron Light Source 1h
The Electron Light Source (ELS) is a linear accelerator used to perform the energy calibration of the fluorescence detectors (FD) of the Telescope Array experiment. The ELS shoots a beam of 40 MeV electrons into the atmosphere 100 m in front of the Black Rock Mesa FD. Fluorescence light from nitrogen molecules excited by the ELS electron beam is detected. An end-to-end calibration, from the generation of fluorescence in air to the detection of the fluorescence photons by the FD PMT camera, is achieved. We present the calibration method and the comparison between beam data and Monte Carlo simulation.
Speaker: Bokkyun Shin (Hanyang University)
Cascade showers initiated by muons in the Cherenkov water detector NEVOD 1h
Measurement of the energy spectra of the cascade showers generated by interactions of penetrating cosmic ray particles in massive water/ice detectors is one of the main methods for studying the energy characteristics of the fluxes of muons and neutrinos. In the present paper, results of investigations of cascades initiated by inclined muons in the Cherenkov water detector NEVOD, with a volume of 2000 m$^3$, located at the ground surface and equipped with a spatial lattice of 91 quasi-spherical modules (QSMs) detecting Cherenkov light from any direction with nearly equal efficiency, are discussed. A brief description of the features of the setup is given. The approaches to the reconstruction of the energy and spatial parameters of the showers registered in a dense lattice of QSMs, and questions of the absolute calibration of the QSM response, are considered. Preliminary results of the measurements of the energy spectrum of the cascades in the energy range 30 GeV – 10 TeV, based on the data accumulated in the 2013 – 2015 experimental series (about 11,000 hours of live observation time), and their comparison with the expectation for some models of the muon energy spectrum are presented.
Speaker: Prof. Rostislav Kokoulin (National Research Nuclear University MEPhI (Moscow Engineering Physics Institute))
CR-EX 0902 360 Poster 1 CR 228.pdf
CORSIKA modification for rigidity dependent primary selection based on Geomagnetic cutoff rigidity for GRAPES-3 simulations 1h
For the analysis of the GRAPES-3 muon data, large-scale Monte Carlo simulations are required. These simulations are performed using the CORSIKA simulation package developed by the KIT group. However, the geomagnetic cutoff rigidity varies with direction; therefore, a constant threshold for the selection of the primary energy results in the generation of a large number of events that are subsequently rejected because their rigidity lies below the cutoff value in some directions. We have implemented an efficient mechanism in CORSIKA to select only those primary cosmic rays that lie above the cutoff rigidity in a given direction, thereby avoiding the generation of primaries that would otherwise be rejected later. Results based on actual simulations of GRAPES-3 muon data have shown that using this rigidity-based cut reduces the computation time by a factor of two without compromising the reliability of the results.
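To illustrate the selection logic described above, here is a minimal sketch in Python; it is not the actual CORSIKA modification, and the cutoff parametrization and spectrum below are invented for the example.

```python
# Sketch of direction-dependent rigidity pre-selection of primaries.
# A real implementation would use a tabulated geomagnetic cutoff for the
# site (e.g. obtained from back-tracing codes); the smooth function below
# is a stand-in chosen only to make the example self-contained.
import numpy as np

rng = np.random.default_rng(42)

def cutoff_rigidity_GV(zenith_deg, azimuth_deg):
    # Placeholder: the cutoff rigidity varies with arrival direction.
    return 12.0 + 5.0 * np.sin(np.radians(zenith_deg)) * np.cos(np.radians(azimuth_deg))

def keep_primary(energy_GeV, Z, zenith_deg, azimuth_deg):
    rigidity_GV = energy_GeV / Z   # R ~ E/(Ze) for fully ionized nuclei at these energies
    return rigidity_GV >= cutoff_rigidity_GV(zenith_deg, azimuth_deg)

# Toy comparison of how many generated protons survive the direction-dependent cut.
n = 100_000
energy = 10.0**rng.uniform(1.0, 3.0, n)                    # GeV, placeholder spectrum
zenith = np.degrees(np.arccos(rng.uniform(0.7, 1.0, n)))   # restricted zenith range
azimuth = rng.uniform(0.0, 360.0, n)
kept = keep_primary(energy, Z=1, zenith_deg=zenith, azimuth_deg=azimuth)
print(f"fraction of generated primaries kept: {kept.mean():.2f}")
```

Applying such a cut at generation time avoids simulating showers that would be rejected later anyway, which is where the factor-of-two saving reported above comes from.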
Speaker: Mr Hari Haran Balakrishnan (HECR Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India and GRAPES-3 Experiment, Cosmic Ray Laboratory, Ooty 643 001, India)
CORSIKA_2.pdf
Cosmic Ray Shower Profile Track Finding for Telescope Array Fluorescence Detectors 1h
A simple cosmic ray track-finding pattern recognition analysis (PRA) method for fluorescence detectors (FD) has been developed which significantly improves the Xmax resolution and its dependence on energy. Events which show a clear rise and fall in the FD view contain information on Xmax that can be reliably reconstructed. For events with Xmax outside the field of view of the detector, the shower maximum must be extrapolated, which creates a systematic dependence on the fitting function. The PRA method is a model- and detector-independent approach to removing these events: shower profiles are fitted to a set of triangles and limits are applied to the allowable geometry.
Speaker: Mr Jon Paul Lundquist (Telescope Array Project)
Cosmic-ray positron measurements: on the origin of the e+ excess and limits on magnetar birthrate 1h
Positrons were discovered in cosmic rays 50 years ago. During the last 25 years, reliable magnetic spectrometer observations have consistently revealed an excess of these particles above a few GeV with respect to the expected secondary component. The most recent measurements of the positron flux and the e+/(e++e-) ratio, carried out by the PAMELA and AMS experiments, confirm the average trend of previous magnetic spectrometer observations up to 50 GeV and indicate that this excess extends up to about 500 GeV. Many different hypotheses have been suggested in the literature to explain these observations. However, when the characteristics of possible sources of e+ are taken into account, astrophysical objects, in particular pulsars and possibly magnetars, remain the most plausible candidates, even if disk formation may critically affect the actual contribution of these stars to cosmic-ray positrons. The magnetar birthrate is revised within the proposed scenario.
Speaker: Catia Grimani (University of Urbino "Carlo Bo")
Data Accessibility, Reproducibility and Trustworthiness with LAGO Data Repositories 1h
Nowadays, one of the most challenging scenarios facing scientists and scientific communities is the huge amount of data emerging from vast networks of sensors and from computational simulations performed on a diversity of computing architectures and e-infrastructures. In this work we present the strategy of the Latin American Giant Observatory (LAGO) to catalog and preserve the vast amount of data produced by the water-Cherenkov detector network and by the complete LAGO simulation chain that characterizes each site. The metadata, the permanent identifiers and the facilities of the LAGO Data Repository are described. These initiatives allow researchers to find data and use them directly in code running on a Science Gateway that provides access to different cluster, Grid and Cloud infrastructures worldwide.
Speaker: Dennis Cazar Ramírez (Universidad San Francisco de Quito)
Development of a High Altitude LAGO Site in Peru 1h
The Latin American Giant Observatory (LAGO) Project is an extended Cosmic Ray Observatory mainly oriented towards basic research in three branches: high-energy phenomena, space weather and atmospheric radiation at ground level. To observe the high-energy component (above 10 GeV) of Gamma Ray Bursts (GRBs), the LAGO Collaboration is installing Water Cherenkov Detectors (WCDs) at high-altitude sites. Extensive Air Showers (EAS) produced in the atmosphere by GRB high-energy photons could be detected by WCD arrays, given their good sensitivity to secondary photons and to the other particles in the cascades, by looking for excesses over the secondary particle flux. In this work the current developments to build and characterize a high-altitude ($>4600$ m a.s.l.) LAGO site in the central highlands of Peru are described.
Speaker: Stephany Vargas (Escuela Politécnica Nacional)
Development of a high efficient PMT Winston-cone system for fluorescence measurement of extensive air showers 1h
Fluorescence telescopes are an important technique for measuring extensive air showers initiated by ultra-high-energy cosmic rays. They detect the longitudinal profile of the energy deposited in the atmosphere via the UV light emitted in the de-excitation of nitrogen molecules. In the past years the development of photomultiplier tubes (PMT) has led to an increase of more than $30\%$ in photon detection sensitivity through the use of new super-bialkali (SBA) photocathodes. Thus, the telescopes can detect even fainter signals over a larger area, with a significant increase in aperture. To develop a telescope for a next-generation cosmic ray observatory, a camera needs to maximize the sensitive area of the focal plane. Winston cones can efficiently cover the dead area between the photocathodes of the PMTs. Such a highly efficient system, composed of an SBA PMT and a Winston cone, has been developed based on the design of the fluorescence telescopes of the Pierre Auger Observatory. This contribution presents the development of the optical detection system and first tests in one of the fluorescence telescopes.
Development of the TALE Surface Detector Array 1h
TALE, the Telescope Array Low Energy extension, is designed to lower the energy threshold to about $10^{16.5}$ eV. The TALE surface detector will include an infill array of 76 scintillation counters (40 with 400 m spacing and 36 with 600 m spacing) and an addition of 27 counters to the TA SD. We already deployed 35 counters with 400 m spacing in April 2013. For the additional 68 counters, we will use refurbished AGASA scintillation counters, each of which consists of AGASA scintillators, a new PMT and improved Telescope Array surface detector electronics. Here we report the status of the detectors and of the simulations.
Speaker: Shoichi Ogio (Osaka City University)
Development of the Waseda CALET Operations Center (WCOC) for Scientific Operations of CALET 1h
The CALET project aims at a long-duration observation of high-energy cosmic rays onboard the International Space Station (ISS). The CALET detector features a very thick calorimeter of 30 radiation lengths, which consists of imaging and total absorption calorimeters. It will directly measure the cosmic-ray electron spectrum in the energy range of 1 GeV--20 TeV with 2% energy resolution. The data obtained with CALET onboard the ISS will be transferred to JAXA using two data relay satellite systems operated by NASA and JAXA, respectively. To operate CALET onboard the ISS, the CALET Ground Support Equipment (CALET-GSE) is being prepared at JAXA. Simultaneously, the Waseda CALET Operations Center (WCOC) is being established to perform the operations and monitoring related to the scientific mission. The real-time data received by CALET-GSE are immediately transferred to WCOC. Scientific raw data are also transferred to WCOC on an hourly basis, after time-order correction and complementing with replay data. Mission operations at WCOC include (1) real-time monitoring and operations, (2) operations planning, and (3) processing of the raw, level-0 scientific data into the level-1 data that will be used for scientific analysis. In this paper we review the role of WCOC and report on its development.
Speaker: Yoichi Asaoka (Waseda University (JP))
Diffusion and Anisotropy of Cosmic Rays in the Galaxy: Beyond the Dipole 1h
The transport of Galactic cosmic rays in both turbulent and regular magnetic fields can be described in terms of diffusion and drift motions. These produce gradients in the cosmic-ray densities. The anisotropy resulting from these gradients, for an observer located anywhere in the Galaxy, is commonly described in terms of a pure dipole moment, the amplitude of which is proportional to the gradient at the observer point normalised by the density at the same point. By calculating the angular distribution on the sphere of the observer in the specific case of cosmic rays propagating diffusively from a single source, we show that this recipe to estimate the dipole moment is only an approximation, and that higher-order moments are actually also expected. Since a dipole moment is in essence a vector, it is conceivable to build configurations of sources where the global vector cancels even with a non-vanishing gradient of the cosmic-ray density. In this case, the non-vanishing gradient would show up in higher-order moments that do not add linearly, such as the moment describing a symmetric quadrupole. Although the dipole moment is expected to remain dominant for an observer located on Earth and for sources distributed in the Galactic disk, the description given in this paper of the anisotropy expected within a pure diffusion model could contribute to some extent to explaining the observed anisotropies of low-energy cosmic rays beyond the dipole.
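For reference, the "recipe" referred to above is the standard dipole amplitude of the diffusion approximation (written with generic symbols),

$$\delta = \frac{3D}{c}\,\frac{|\nabla n|}{n},$$

with $D$ the diffusion coefficient and $n$ the cosmic-ray density at the observer; the point made here is that the full angular distribution also contains higher-order multipoles beyond this term.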
Education, Outreach and Public Relations of the Pierre Auger Observatory 1h
The scale and scope of the physics studied at the Pierre Auger Observatory continue to offer significant opportunities for original outreach work. Education, outreach and public relations of the Auger Collaboration are coordinated in a dedicated task whose goals are to encourage and support a wide range of efforts that link schools and the public with the Auger scientists and the science of cosmic rays, particle physics, and associated technologies. The presentation will focus on the impact of the Collaboration in Mendoza Province, Argentina and beyond. The Auger Visitor Center in Malarg\"{u}e has hosted over 95,000 visitors since 2001, and a fifth collaboration-sponsored science fair was held on the Observatory campus in November 2014. The Rural Schools Program, which is run by Observatory staff and which brings cosmic-ray science and infrastructure improvements to remote schools, continues to broaden its reach. Numerous online resources, video documentaries, and animations of extensive air showers have been created for wide public release. Increasingly, collaborators draw on these resources to develop Auger related displays and outreach events at their institutions and in public settings to disseminate the science and successes of the Observatory worldwide. The presentation will also highlight education and outreach activities associated with the planned upgrade of the Observatory's detector systems and future physics goals.
Speaker: Dr Charles Timmermans (Nikhef/Radboud University)
Effects of Turbulent Magnetic Fields in Cosmic Ray Anisotropy 1h
Cosmic ray anisotropy has been observed over a wide energy range by a variety of experiments, such as Milagro and the IceCube Observatory. However, a satisfactory explanation has been elusive for more than fifteen years now. A possible solution for the TeV-PeV cosmic ray anisotropy is the effect of turbulent magnetic interactions on the arrival directions. We perform test-particle simulations in compressible magnetohydrodynamic turbulence to study how the arrival direction distribution of cosmic rays is perturbed when they stream along the local turbulent magnetic field. In this work, we discuss the effects arising from propagation in this inhomogeneous and turbulent interstellar magnetic field.
Speaker: Paolo Desiati (University of Wisconsin - Madison)
ELLIPTIC FLOW in nuclear interactions of astroparticles at an energy of $10^{16}$ eV 1h
107 cascades, created by secondary particles of an astroparticle interaction at $10^{16}$ eV, were detected in a stratospheric emulsion chamber. Their azimuthal distribution reveals a distinct anisotropy. An estimation of the elliptic flow coefficient $v_2$ gives a value of 0.35 $\pm$ 0.02. The cascade $p_t$ distribution is also azimuthally anisotropic, and its maximum coincides with the direction of the impact parameter.
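For context, $v_2$ is the second coefficient of the standard Fourier decomposition of the azimuthal distribution with respect to the event plane $\Psi$ (generic definition, not specific to this measurement):

$$\frac{dN}{d\varphi} \propto 1 + \sum_{k\geq 1} 2\,v_k \cos\big(k(\varphi-\Psi)\big), \qquad v_2 = \big\langle \cos 2(\varphi-\Psi) \big\rangle .$$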
Speaker: OLEG DALKAROV (P.N.Lebedev Physical Institute)
Energy Spectrum and Mass Composition of Ultra-High Energy Cosmic Rays Measured by the hybrid technique in Telescope Array 1h
The energy spectrum and mass composition of Ultra-High Energy Cosmic Rays (UHECRs) measured using a hybrid analysis will be presented. TA consists of three FD stations and 507 SDs. A hybrid analysis reconstructs the position and direction of the air shower more accurately than the monocular FD analysis and measures the longitudinal development and the calorimetric energy of the shower precisely. Information on the mass composition of UHECRs, Xmax, is obtained from the measured longitudinal development. The analysis performance, the energy spectrum and the mass composition of UHECRs obtained from the TA hybrid mode will be presented.
Speakers: Daisuke Ikeda (Institute for Cosmic Ray Research, University of Tokyo), Dr William Hanlon (University of Utah)
Energy Spectrum and Mass Composition of Ultra-High Energy Cosmic Rays Measured with the Telescope Array Fluorescence Detector Using a Monocular Analysis 1h
The Telescope Array (TA) experiment is the largest hybrid detector observing ultra-high-energy cosmic rays (UHECRs) in the northern hemisphere. We report results on the energy spectrum of UHECRs, covering a wide energy range, and on the mass composition using the depth of shower maximum, from the analysis of data collected in monocular mode by the fluorescence detectors of TA during the first seven years.
Speaker: Toshihiro FUJII (University of Chicago, University of Tokyo)
150723TAFDMonocularAnalysisICRC15.pdf
ENERGY THRESHOLD DETERMINATION FOR AMIGA MUON COUNTERS VIA GEANT4 SIMULATION 1h
One of the first improvements of the Pierre Auger Observatory is the Auger Muons and Infill for the Ground Array (AMIGA) detector, designed to measure the cosmic ray spectrum and the chemical composition in the energy range starting from $10^{17}$ eV. The muon detectors of the AMIGA infill count muons from the extensive air showers observed by the Auger Observatory, which are then reconstructed by the surface and fluorescence detectors. Muons propagating in the soil with an energy greater than or roughly equal to 1 GeV are able to reach the muon detector. Although the muonic component of an air shower is attenuated much less than the electromagnetic component, the shielding of approximately 2.25 m of soil adds 540 g/cm$^{2}$ of vertical mass (approximately 60% more than the atmosphere above the Pierre Auger Observatory). Thus, in order to better understand the attenuation mechanisms (shielding effects) for muons, a Monte Carlo simulation with Geant4 was performed to determine the muon energy threshold, i.e., the minimum kinetic energy a muon must have to go through the 2.25 m of soil and produce a signal in the AMIGA counters. The energy threshold is determined by taking into account the primary particle as well as the secondary particles produced in the soil above the detector. The information on the energy threshold is important for understanding the data analysis process. This threshold can also be used to test the Geant4 simulation, since the muon energy threshold is well calculated via the Bethe-Bloch formula. From the energy thresholds and the energy distribution at ground level of the different particles from extensive air showers, the contribution of those particles to the data recorded by the detectors can be calculated. This contribution is crucial to correctly determine the number of muons in an extensive air shower, which is one of the main aims of the AMIGA enhancement. Keywords: AMIGA detectors, Geant4 simulation, muons, energy threshold, Bethe-Bloch formula.
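A rough cross-check of the quoted ~1 GeV threshold can be made with a constant minimum-ionizing energy loss instead of the full Bethe-Bloch/Geant4 treatment; the sketch below (in Python) uses the 540 g/cm$^2$ vertical overburden from the abstract and an assumed textbook dE/dx value, so it is only an order-of-magnitude illustration.

```python
# Back-of-the-envelope muon threshold for the AMIGA soil overburden.
# dE/dx is an assumed minimum-ionizing value; the real threshold from the
# Geant4/Bethe-Bloch treatment described in the contribution will differ somewhat.
import numpy as np

overburden = 540.0     # g/cm^2 of vertical soil above the buried counters (from the abstract)
dEdx_mip = 2.0e-3      # GeV cm^2 / g, approximate muon energy loss in soil/rock (assumption)
m_mu = 0.10566         # GeV, muon mass

E_kin_vertical = overburden * dEdx_mip   # kinetic energy needed to just range out vertically
print(f"vertical kinetic-energy threshold ~ {E_kin_vertical:.2f} GeV")
print(f"corresponding total energy ~ {E_kin_vertical + m_mu:.2f} GeV")

# For inclined muons the slant overburden grows roughly as 1/cos(zenith):
for theta in (0, 30, 45):
    slant = overburden / np.cos(np.radians(theta))
    print(f"zenith {theta:2d} deg: threshold ~ {slant * dEdx_mip:.2f} GeV (kinetic)")
```

The ~1.1 GeV vertical figure obtained this way is consistent with the 1 GeV quoted above; the full simulation additionally accounts for the secondaries produced in the soil.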
Speaker: Mr Luiz Augusto Stuani Pereira (Unicamp)
Experimental method to measure the positron and electron fluxes in AMS-02 1h
The Alpha Magnetic Spectrometer AMS-02 is a high-energy particle physics detector, operational on the International Space Station since May 2011. The goal of AMS-02 is fundamental physics research in space with high-energy cosmic rays during its 20-year mission. The latest published results, based on 30 months of data, show an excess of high-energy positrons whose origin is still highly uncertain. These positrons, in addition to being produced by spallation of cosmic rays on the interstellar medium, may be produced in nearby pulsars, in the annihilation of Dark Matter particles, or in still unknown processes. In this poster, I will review the analysis technique used for measuring the positron flux and the electron flux, as well as the positron fraction. This analysis is based on three subdetectors: the Transition Radiation Detector (TRD), the silicon tracker, and the Electromagnetic Calorimeter (ECal). I will present a method which allows the combination of estimators constructed from these three subdetectors, in order to separate, first, leptons from protons and, second, positrons from electrons. I will also detail the influence and the determination of the charge confusion between positrons and electrons at high energy. The positron and electron fluxes, as well as the positron fraction, will be shown and discussed.
Speaker: Sami Caroff (Centre National de la Recherche Scientifique (FR))
FAMOUS - A fluorescence telescope using SiPMs 1h
The FAMOUS telescope is a proof-of-concept study for the use of silicon-based photosensors (SiPMs) in fluorescence telescopes. Such telescopes detect the fluorescence light induced by ultra-high-energy cosmic ray particles impinging on the Earth's atmosphere. Existing instruments, like the fluorescence telescopes of the Pierre Auger Observatory in Argentina, use photomultiplier tubes for photon detection. The FAMOUS camera aims to exploit the advantages of recent developments in photon detection with SiPM sensors, such as an increased duty cycle due to the ability to operate SiPMs under bright moonlight. Built in a 50 cm-diameter aluminum tube and employing refractive optics based on a Fresnel lens, a seven-pixel prototype camera has been developed and installed. First results look very promising. The next stage of the prototype will be equipped with a 61-pixel camera, a more lightweight tube, more efficient light concentrators, and a customized and more stable power supply. The results of the test measurements and the status of the next-stage prototype will be presented.
Speaker: Thomas Bretz (RWTH Aachen)
Heavy ion beam test at CERN-SPS with the CALET Structure Thermal Model 1h
We will report the testing and calibration of the heavy-ion energy and charge resolution of the CALET cosmic-ray instrument that will fly on the International Space Station in 2015. CALET will measure the energy spectra and arrival directions of cosmic-ray electrons to 20 TeV and hadrons to 1 PeV with exceptional resolution. It will measure the spectra of high-energy nuclei up to about Z=40. It will also measure the cosmic gamma radiation with superior resolution to search for signatures of dark matter annihilation in the gamma-ray and electron spectra. We performed beam tests at CERN-SPS in February and March 2015 to calibrate the energy, angular and charge resolution with direct primary beams and secondary fragments of Ar at 13, 19, and 150 A GeV/c. The beam tests were carried out using a test instrument that is functionally equivalent to the calorimeter (CAL) of CALET. I will present the purpose of the ion runs, the experimental method and setup, and preliminary results.
Speaker: Tadahisa Tamura (Kanagawa University (JP))
High $p_\mathrm{T}$ muons from cosmic ray air showers in IceCube 1h
Cosmic ray air showers with primary energies above $\sim 1$ TeV can produce muons with high transverse momentum ($p_\mathrm{T} > 2$ GeV). These isolated muons can have large transverse separations from the shower core, up to several hundred meters. Together with the muon bundle they form a double-track signature in km$^3$-scale neutrino telescopes such as IceCube. These muons originate from the decay of heavy hadrons, pions, and kaons produced very early in the shower development, typically in (multiple) high-$p_\mathrm{T}$ jets. If high-$p_\mathrm{T}$ muons are produced simultaneously in two jets that are oriented back-to-back, such interactions can also produce distinctive triple-track signatures in IceCube. The separation from the core is a measure of the transverse momentum of the muon's parent particle, and the muon lateral distribution depends on the composition of the incident nuclei. Hence, the composition of high-energy cosmic rays can be determined from muon separation measurements. Moreover, for $p_\mathrm{T} > 2$ GeV, particle interactions can be described in the context of perturbative quantum chromodynamics (pQCD), which can be used to calculate the muon lateral separation distribution. Thus these muons may help to test pQCD predictions of high-energy interactions involving heavy nuclei. We discuss the contributions from different components of air showers to the high-$p_\mathrm{T}$ muon flux. Based on dedicated simulations, the prospects of composition measurements using high-$p_\mathrm{T}$ muons in km$^3$-scale neutrino telescopes are studied. We present analysis methods to study laterally separated muons in IceCube with lateral separations larger than 150 m, using data taken from May 2012 to May 2013.
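The statement that the lateral separation measures the parent particle's transverse momentum follows from simple geometry: in the small-angle approximation (generic symbols, not the exact expression used in the analysis), a muon of energy $E_\mu$ carrying transverse momentum $p_\mathrm{T}$ and produced at a slant distance $H$ from the detector arrives at a lateral distance of roughly

$$d_\mathrm{T} \approx \frac{p_\mathrm{T}\, H}{E_\mu},$$

so that, for example, $p_\mathrm{T} = 2$ GeV, $H = 30$ km and $E_\mu = 1$ TeV give a separation of order 60 m.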
Speaker: Mr Dennis Soldin (University of Wuppertal)
Improving the universality reconstruction using independent measurements of water-Cherenkov detectors and additional muon counters 1h
Shower universality has been demonstrated to be a robust tool for describing the particle showers produced by primary cosmic rays. The secondary particles at the observation level can be described by a four-component model: the well-known electromagnetic and muonic components, the contribution due to the electromagnetic halo of the muons, and the electromagnetic particles originating from pion decays close to the ground, which closely follow the development of the muonic component. Due to the large number of particles produced, these distributions can be described with three parameters: the total energy $E$, the depth of shower maximum $X_{\rm max}$, and the muon content $N_{\mu}$. The energy and $X_{\rm max}$ are governed by the purely electromagnetic component, while the muon scale ($N_{\mu}$) accounts for the differences between hadronic interaction models and primary particles, affecting the three remaining components. Though predictions of these macroscopic parameters are already viable with a single detector type (e.g. an array of water-Cherenkov detectors), large correlations between the quantities are apparent and need to be taken into account when interpreting the data. To overcome this degeneracy, additional muon counters allow for an independent measurement of the muon number at ground level and at the same time reduce the systematic uncertainties due to the hadronic interaction model used. The procedure is exemplified for the case of the Pierre Auger Observatory by parameterizing the signal response of particles in the water-Cherenkov array operating with underground muon detectors. The universal parameterizations allow us to estimate $E$ and $N_{\mu}$ independently on an event-by-event basis. The incorporation of muon detectors demonstrates, e.g., the possibility of an unbiased energy estimate based only on the universality description of the shower.
Speaker: Markus Roth (KIT)
In-flight operations and status of the AMS-02 silicon tracker 1h
The AMS-02 detector is a large-acceptance magnetic spectrometer operating on the International Space Station since May 2011. More than 60 billion events have been collected by the instrument as of today. One of the key subdetectors of AMS-02 is the silicon microstrip Tracker, designed to precisely measure the trajectory and absolute charge of cosmic rays in the GeV-TeV energy range. In combination with the magnetic field, it also measures the rigidity of the particles and the sign of their charge. This report presents the Tracker online operations and calibration during the first four years of data taking in space. The track reconstruction efficiency and the resolution will also be reviewed.
Speaker: Xiaoting Qin (Universita e INFN, Perugia (IT))
Poster_690
Inelastic and diffractive cross section measurements with the CMS experiment 1h
The inelastic cross section has been measured in proton-proton and proton-lead collisions at centre-of-mass energies per nucleon up to 8 TeV at the LHC. Nuclear scaling effects play an important role in the simulation of cosmic ray interactions and are studied in collisions with lead nuclei. Furthermore, the probability of diffractive interactions influences the efficiency of the energy transport in extensive air showers and, thus, for example the depth of the shower maximum. We present an overview of the related results published by the CMS Collaboration.
Speaker: Colin Baus (KIT - Karlsruhe Institute of Technology (DE))
CrossSection_Poster.pdf
Initial results of a direct comparison between the Surface Detectors of the Pierre Auger Observatory and of the Telescope Array 1h
The Pierre Auger Observatory (Auger) in Mendoza, Argentina and the Telescope Array (TA) in Utah, USA aim at unraveling the origin and nature of Ultra-High Energy Cosmic Rays (UHECR). At present, there appear to be subtle differences between Auger and TA results and interpretations. Joint working groups have been established and have already reported preliminary findings. From an experimental standpoint, the Surface Detectors (SD) of the two experiments make use of different detection processes that are not equally sensitive to the components of the extensive air showers reaching the ground. In particular, the muonic component of the shower measured at ground level can be traced back to the primary composition, which is critical for understanding the origin of UHECRs. In order to make direct comparisons between the SD detection techniques used by Auger and TA, a two-phase approach is followed. First, one water Cherenkov detector ("Auger North" design) was deployed and operated locally at the TA Central Laser Facility. After a couple of months of operation before the summer, we expect to observe about 20 Auger SD events in coincidence with nearby TA stations. A regular Auger station and a TA station will then be added to the setup to allow for station-level comparisons. In a second phase, event-level comparisons of relatively low-energy showers with energies in the 10$^{18}$ eV range will be possible as a result of co-locating six additional Auger North stations contiguous to TA surface detector stations. In this contribution, we present the status and prospects of this joint research project, including the first Auger SD data that were recorded in coincidence with the TA SD shower triggers.
Speaker: Ryuji Takeishi (Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Chiba, Japan)
AugerSDatTA.pdf
Investigation of angular distributions in the interaction of cosmic-ray particles with a dense target and comparison with data of the Large Hadron Collider. 1h
Cosmic ray measurements are carried out at a detector station located in the Tian Shan mountains at an altitude of 3340 meters above sea level, using the complex installations "Hadron-9" and "Hadron-44". The main objective of these studies is the interaction of cosmic rays with nuclei, in particular the study of anomalous events occurring in the cores of extensive air showers (EAS). The analysis was performed for 10199 detected events, of which 2657 interacted directly in the target; 462 events with a gamma-ray multiplicity of n≥4 could be identified. For these events angular correlations were investigated using two-dimensional correlation functions of the form Δη-Δφ. Here Δη is the difference of pseudorapidities (η = -ln(tan(θ/2)), with θ the polar angle measured from the beam axis), and Δφ is the difference between the azimuthal angles of two particles. As a result, we obtained a well-defined structure in the two-particle correlation functions for pairs with 0.5 < Δη < 4.5 and 0.4 < Δφ < 2.6, very similar to the results reported in "Observation of long-range, near-side angular correlations in proton-proton collisions at the LHC". This is the first observation of such a structure in the two-particle correlation function for interactions of cosmic rays with matter.
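A hedged sketch of the pair-building step for the Δη-Δφ correlation analysis described above (generic code, not the authors' implementation); binning the returned pairs into a two-dimensional histogram and normalizing by mixed-event pairs yields the two-particle correlation function.

    import numpy as np

    def delta_eta_phi_pairs(theta, phi):
        """Return (delta-eta, delta-phi) for all unordered particle pairs in
        one event, with eta = -ln(tan(theta/2)) as defined above."""
        theta = np.asarray(theta, dtype=float)
        phi = np.asarray(phi, dtype=float)
        eta = -np.log(np.tan(theta / 2.0))
        i, j = np.triu_indices(len(eta), k=1)           # all unordered pairs
        d_eta = np.abs(eta[i] - eta[j])
        d_phi = np.abs(phi[i] - phi[j])
        d_phi = np.minimum(d_phi, 2.0 * np.pi - d_phi)  # fold into [0, pi]
        return d_eta, d_phi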
Speaker: Yernar Tautayev (Institute of Physics and Technology, Almaty, Kazakhstan)
ICRC.2015 .pdf
ГААГА_плакат__КЛ1111.pdf
Investigation of the energy deposit of inclined muon bundles in the Cherenkov water detector NEVOD 1h
An excess of multi-muon events in comparison with simulations performed in the framework of widely used hadron interaction models has been found in several cosmic ray experiments at very high and ultra-high energies of primary particles. In order to solve this so-called 'muon puzzle', investigations of the energy characteristics of the EAS muon component are required. A possible approach to such investigations is the measurement of the energy deposit of EAS muons in the detector material: the appearance of an excessive fraction of very high-energy muons should be reflected in the dependence of the energy deposit on the energy of primary particles. The experiment on the study of the energy deposit of muon bundles is being conducted at the NEVOD-DECOR experimental complex. As a measure of the energy deposit, the sum of the responses of the quasi-spherical modules of the Cherenkov calorimeter NEVOD is used. The local muon density in the event and the muon bundle arrival direction are estimated from the data of the coordinate-tracking detector DECOR. Registration of inclined muon bundles of different multiplicities at various zenith angles allows evaluation of primary particle energies and exploration of the energy interval from ~ 10^16 to 10^18 eV. Experimental data accumulated from May 2012 to April 2015 (about 17,000 hours of live observation time) have been analyzed and compared with CORSIKA-based simulations. It is found that the average specific energy deposit (i.e., the calorimeter response normalized to the local muon density in the events) appreciably increases with zenith angle, thus reflecting the increase of the muon energy in the bundles near the horizon. Evidence for an increase of the energy deposit at primary energies above 10^17 eV is seen in the measured dependence of the specific energy deposit on the muon density. Possible methodological and physical reasons for such anomalous behavior are analyzed.
Speaker: Prof. Igor Yashin (National Research Nuclear University MEPhI (Moscow Engineering Physics Institute))
Investigation of the flux of albedo muons with NEVOD-DECOR experimental complex 1h
Results of an investigation of near-horizontal muons in the zenith angle range 85 – 95 degrees are presented. In this range, so-called 'albedo' muons (atmospheric muons scattered in the soil into the upper hemisphere) are detected. Measurements have been conducted with the NEVOD-DECOR experimental complex located on the campus of MEPhI. The basis of the complex is the Cherenkov water detector NEVOD with a volume of 2000 m^3, equipped with a dense spatial lattice of quasi-spherical modules (91 in total). Each module consists of six FEU-200 PMTs with flat photocathodes directed along the axes of an orthogonal coordinate system. The coordinate detector DECOR is deployed around the NEVOD. DECOR includes eight vertically suspended eight-layer assemblies of plastic streamer tube chambers with resistive cathode coating, with a total sensitive area of 70 m^2. The chamber planes are equipped with a two-coordinate external strip readout system. The DECOR detector allows localization of the tracks of near-horizontal muons with high angular (better than 1 degree) and spatial (about 1 cm) accuracy, and determination of the muon direction by the time-of-flight technique with an error probability of the order of 10^-2. More reliably, the muon direction can be obtained from the NEVOD data using the directionality of the Cherenkov light. The combination of these two independent methods allows determination of the muon direction with an error probability of less than 10^-8. The results of measurements of the albedo muon flux for an experimental series with a duration of about 20,000 hours of 'live' time, and their comparison with different models of muon scattering in soil, are presented.
Speaker: Dr Semen Khokhlov (National Research Nuclear University MEPhI (Moscow Engineering Physics Institute))
LARGE-SCALE ANISOTROPY OF TeV-BAND COSMIC RAYS 1h
The expected anisotropy in the 1 to 10^4 TeV energy range is calculated for Galactic cosmic rays with both anisotropy in the diffusion tensor and source discreteness taken into account. We find that if the sources are distributed radially (but with azimuthal symmetry) in proportion to Galactic pulsars, the expected anisotropy almost always exceeds the observational limits by one order of magnitude in the case of isotropic diffusion. If the radial diffusion is more than an order of magnitude smaller than the azimuthal diffusion rate, the radial gradient of the sources can be accommodated about 5% of the time. If the sources are concentrated in the spiral arms, then the anisotropy depends on our location between them, but in some spatial window, roughly equidistant from adjacent spiral arms, the observational constraints on anisotropy are obeyed roughly 20%–30% of the time for extremely anisotropic diffusion. The solar system is in that window less than 10% of the time, but it may be there now. Under the assumption of isotropic diffusion, nearby supernovae are found to produce a discreteness anisotropy that is nearly two orders of magnitude in excess of the observational limit if all supernovae are assumed to contribute equally with a source rate of 1 every 100 years.
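For orientation only, the dipole anisotropy expected from diffusive transport is commonly estimated from the standard relation (a textbook expression quoted here for context, not a result of this contribution):
\[
\delta = \frac{3\,\lvert D_{ij}\,\partial_j N\rvert}{c\,N},
\]
where $D_{ij}$ is the (possibly anisotropic) diffusion tensor and $N$ the cosmic-ray density; for isotropic diffusion this reduces to $\delta = 3D\lvert\nabla N\rvert/(cN)$.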
Speaker: Rahul Kumar (Ben Gurion University)
LHAASO-KM2A PMT test 1h
To fulfill the requirements of testing the photomultiplier tubes (PMTs) of the electromagnetic detector at the Large High Altitude Air Shower Observatory, a multifunctional PMT test bench with a two-dimensional (2D) scanning system is developed. With this 2D scanning system, 16 PMTs are scanned simultaneously to test their uniformity and cathode transit time difference. The di-distance method is developed to measure the linear dynamic range of the PMTs using the test bench. The primary test results are presented.
Speaker: Sun Zhandong (Southwest Jiaotong University)
LHAASO-WFCTA Optical System Optimization for High Precision Cherenkov Shower Reconstruction 1h
The Wide Field-of-view air Cherenkov Telescope Array (WFCTA) is an essential component of the Large High Altitude Air Shower Observatory (LHAASO). WFCTA comprises 24 identical movable telescopes specialized for measuring the energy spectra of the cosmic ray components. In this paper, we describe the overall optimization of the optical system design, including the mirror segments, the camera and the Winston cone light collectors of the individual telescopes. We also evaluate the imaging performance through Monte Carlo simulation as well as spot scanning experiments. Finally, based on these properties, a high precision Cherenkov image reconstruction technique is discussed, which is implemented to improve the imaging resolution and thus enable precise Cherenkov shower reconstruction.
Speaker: Dr Chong Wang (Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences)
Lightning Detection at the Pierre Auger Observatory 1h
As part of the Auger Engineering Radio Array, an extension of the Pierre Auger Observatory with antennas in the MHz range, it is necessary to monitor the local atmospheric conditions. These have a large influence on the radio emission induced by air showers. In particular, signals amplified by up to an order of magnitude have been detected as an effect of thunderstorms. For a more detailed investigation and the detection of thunderstorms, a new lightning detection system has been installed at the Pierre Auger Observatory in Argentina. In addition, an electric field mill measures the electric field strength at ground level at the antenna array. With these measurements, data periods can be classified according to their influence by thunderstorms. Additionally, a lightning-based trigger for the water-Cherenkov detectors was developed to read out individual stations when lightning strikes nearby. With these data a possible correlation between the formation of lightning and cosmic ray showers can be investigated even at low energies of about $10^{15}$ eV. In this talk the structure and functionality of the lightning detection system are described, and first data analyses are shown.
Local density spectra of electron and muon EAS components in primary energy range from 10^14 to 10^18 eV 1h
The system of calibration telescopes (SCT) of the Cherenkov water detector (CWD) NEVOD is used as a shower array. The SCT consists of two planes (80 m^2) with 40 scintillation counters (40×20×2 cm^3) in each. One plane is located on the roof of the CWD, and the other at its bottom. The distance between the two planes is 9.45 m. Each registration channel of the SCT is able to evaluate the counter response amplitude in the range from ~1 to ~50 relativistic particles, which corresponds to electron densities up to ~500 particles/sq.m. The triggering system identifies three types of events in the SCT. The telescope trigger allows selection of muon tracks for calibration of the CWD photomultipliers and of the scintillation counters themselves. The other two triggers provide registration of multiparticle events in each plane of the SCT. The top plane is used as a detector of the electron component of EAS, and the bottom one provides registration of EAS muon bundles. The technique of EAS investigations with the SCT is based on the phenomenology of the local density of charged particles, because each plane of the setup has an area much smaller than the transverse size of an EAS. We have measured the spectrum of the charged particle local density in the range from 0.5 to 200 m^-2 with the top plane, and the spectrum of the local muon density in the range from 0.2 to 56 m^-2 with the bottom plane. Comparison with EAS simulations shows that the primary particle energy range which can be investigated with the SCT extends from 10^14 to 10^18 eV. This energy range includes the interval of 10^14−10^15 eV, which is still insufficiently studied in both satellite and EAS experiments.
Speaker: Mr Mikhail Amelchakov (National Research Nuclear University MEPhI (Moscow Engineering Physics Institute))
Measurement of the average electromagnetic longitudinal shower profile at the Pierre Auger Observatory 1h
In addition to the standard $X_\mathrm{max}$ and energy, the longitudinal profiles of extensive air showers contain further interesting information. For energies above $10^{17.8}$ eV, we present the average profiles as a function of depth, measured for the first time at the Pierre Auger Observatory. The profile shapes for different energy ranges are all well reproduced by a Gaisser-Hillas function with two parameters. A detailed analysis of the systematic uncertainties is performed using data and a full detector simulation, and the results are compared with predictions of hadronic interaction models for different primaries.
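For reference, a generic form of the Gaisser-Hillas profile mentioned above can be sketched as follows; the parameterization actually used in the analysis may differ (for example it may use two alternative shape parameters instead of $X_0$ and $\lambda$), so this is an illustrative assumption only.

    import numpy as np

    def gaisser_hillas(X, N_max, X_max, X0, lam):
        """Classic Gaisser-Hillas longitudinal profile; X0 and lam are the two
        shape parameters beyond the normalization N_max and the depth of
        shower maximum X_max (valid for X > X0)."""
        z = (X - X0) / (X_max - X0)
        return N_max * np.power(z, (X_max - X0) / lam) * np.exp((X_max - X) / lam)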
Speaker: Francisco Diogo (LIP (Lisboa))
MEASUREMENT OF THE ISOTOPIC COMPOSITION OF HYDROGEN AND HELIUM NUCLEI IN COSMIC RAYS WITH THE PAMELA-EXPERIMENT 1h
The cosmic-ray hydrogen and helium (1H, 2H, 3He, 4He) isotopic composition between 100 MeV/n and 1.4 GeV/n has been measured with the satellite-borne experiment PAMELA. The rare isotopes 2H and 3He in cosmic rays are believed to originate mainly from the interaction of high energy protons and helium with the galactic interstellar medium. The energy spectra of these components carry fundamental information regarding the propagation of cosmic rays in the Galaxy, which is competitive with that obtained from other secondary-to-primary measurements such as B/C. The isotopic composition was measured between 100 and 1100 MeV/n for hydrogen and between 100 and 1400 MeV/n for helium isotopes using two different detector systems, over the 23rd solar minimum from July 2006 to December 2007.
Speaker: Wolfgang Menn (University of Siegen)
Measurement of the water-Cherenkov detector response to inclined muons using an RPC hodoscope 1h
The Pierre Auger Observatory operates a hybrid detector composed of a Fluorescence Detector and a Surface Detector array. Water-Cherenkov detectors are the building blocks of the array and as such play a key role in the detection of secondary particles at the ground. A good knowledge of the detector response is paramount to lower systematic uncertainties and thus to increase the capability of the experiment in determining the muon content of the extensive air showers with a higher precision. In this work we report on a detailed study of the detector response to single muon traversals as a function of traversal geometry. A dedicated Resistive Plate Chambers (RPC) hodoscope was built and installed around one of the detectors. The hodoscope is formed by two stand-alone low gas flux segmented RPC detectors with the test water-Cherenkov detector placed in between. The segmentation of the RPC detectors is of the order of 10 cm. The hodoscope is used to trigger and select single muon events in different geometries. The signal recorded in the water-Cherenkov detector and performance estimators were studied as a function of the trajectories of the muons and compared with a dedicated simulation.
Speaker: Mr Pedro Assis (LIP)
201508_FINAL_icrcposter_muonHodoscope.pdf
Measuring cosmic ray ions fluxes with AMS-02 1h
One of the key characteristics of the Alpha Magnetic Spectrometer (AMS-02) is its capability to measure the relative abundances and absolute fluxes of the nuclear components of the galactic cosmic rays (CRs), from hydrogen up to iron (Z=26), in a kinetic energy range from GeV/n to TeV/n. In this contribution we discuss the methodology for the precise identification of ions with AMS-02, which is relevant for the estimation of the flux ratios of secondary-to-primary CR species, such as the boron-to-carbon ratio. This is important because a precise measurement is needed to test the different propagation models and to constrain their free parameters. The raw data are first processed to extract the relevant information for the ion study, for a more efficient handling of the entire data sample. The charge identification is a combination of Z measurements from the upper and lower time-of-flight scintillator layers and the inner and outer silicon tracker layers (2 located at the edges and 7 in the inner part of the detector). The resolution and efficiency of the charge selection process are estimated by creating independent "pure" data samples for each detection layer, exploiting the available redundancy of the charge measurement. The method for the calculation of the detector acceptance for each ion species is also described. For the correct estimation of the ion fluxes we had to properly understand the fragmentation properties in the case of interactions inside the detector (if the primary particle undergoes a charge change, it might be wrongly identified). To tackle this problem, we developed dedicated analysis tools to study the interaction properties in Monte Carlo simulated events, which allow us to estimate the location, survival probability and fragmentation branches for each species.
Speaker: Dr Tescaro Diego (Instituto de Astrofísica de Canarias, Tenerife, Spain)
Measuring the energy of cosmic-ray helium with the TRD of AMS-02 1h
Since May 2011 the AMS-02 experiment has been installed on the ISS and has been observing cosmic radiation. It consists of several state-of-the-art sub-detectors, which redundantly measure the charge and energy of traversing particles. Due to the long exposure time of AMS-02 of many years, the measurement of cosmic-ray energy spectra is limited mainly not by statistics, but by the detector response. The measurement of momentum for protons and ions, for example, is limited by the spatial resolution and magnetic field strength of the silicon tracker. The maximum detectable rigidity (MDR; rigidity is momentum per charge) for protons is about 2 TV, for helium below 4 TV (E<2.1 TeV/amu). In this contribution we investigate the possibility to extend the range of the energy measurement for heavy nuclei (Z>=2) with the transition radiation detector (TRD). The main purpose of the TRD of AMS-02 is the discrimination between light particles (electrons and positrons) and heavy particles (protons), and it was thus designed as a threshold detector. The response function of the TRD, however, shows a steep increase in signal from the level of ionization at a Lorentz factor γ of about 500 to γ ≈ 5000, where the transition radiation signal saturates. The increase of the signal over this energy range may be used to measure the Lorentz factor of very high energy cosmic-ray ions, e.g. for helium nuclei between about 500 GeV/amu and 5 TeV/amu, well beyond the limits of the silicon tracker. From the response curve and the signal fluctuations in the TRD we derive the energy resolution of the TRD and compare it to the resolution of the silicon tracker. Furthermore, the geometric acceptance available to a TRD-based measurement can be greater by an order of magnitude compared to a standard tracker-based analysis.
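As a quick consistency check of the quoted range (an illustrative calculation, not taken from the contribution), the kinetic energy per nucleon corresponding to a given Lorentz factor is E_k/A = (gamma - 1) * m_u c^2 with m_u c^2 ≈ 0.931 GeV:

    AMU_GEV = 0.931  # atomic mass unit in GeV (approximate)

    def kinetic_energy_per_nucleon(gamma):
        """Kinetic energy per nucleon (GeV/amu) for Lorentz factor gamma."""
        return (gamma - 1.0) * AMU_GEV

    print(kinetic_energy_per_nucleon(500))   # ~465 GeV/amu
    print(kinetic_energy_per_nucleon(5000))  # ~4.7 TeV/amu

This is consistent with the stated sensitivity window of roughly 500 GeV/amu to 5 TeV/amu.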
Speaker: Andreas Obermeier (Rheinisch-Westfaelische Tech. Hoch. (DE))
Measuring the Muon Production Depth in Cosmic Ray Air Showers with IceTop 1h
IceTop, the surface component of the IceCube Neutrino Observatory, detects air showers initiated by cosmic ray nuclei and gamma rays. The ground level muons are correlated with the energy and mass of the primary particle. This correlation is enhanced by resolving those muons which are produced early in the shower. The muon production depth (MPD) is reconstructed as a function of muon arrival time at ground level and distance from the shower core. This technique is most efficient when there are numerous muons that can be separated from the electromagnetic component of the shower. We use CORSIKA simulations to study the ability of IceTop to reconstruct the MPD distribution as a function of the shower's impact point, energy, and zenith angle. We explore the improvement of the measurement of the primary particle energy and mass that the reconstructed MPD can provide.
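A hedged sketch of the geometry that underlies MPD reconstruction (generic code, not the IceTop algorithm): for a muon produced on a vertical shower axis at height z and detected at lateral distance r from the core, the time delay behind a planar shower front satisfies c*dt = sqrt(z**2 + r**2) - z, which can be inverted as below; kinematic delays and shower inclination are neglected.

    C = 299792458.0  # speed of light in m/s

    def production_height(r_m, dt_s):
        """Invert c*dt = sqrt(z**2 + r**2) - z to obtain the production height
        z (m) from the lateral distance r (m) to the core and the time delay
        dt (s) behind the planar shower front."""
        return r_m**2 / (2.0 * C * dt_s) - C * dt_s / 2.0

    # Example: a muon 300 m from the core arriving 100 ns behind the front
    print(production_height(300.0, 100e-9))  # ~1.5 km above the ground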
Speaker: Hershal Pandya (University of Delaware)
Meteorological effects of muon component at the mountain muon detectors. 1h
The temperature effect of mountain muon detectors, which slightly exceeds the theoretically expected one, was studied in this work. Meteorological effects of such detectors have their own peculiarities and have practically not been investigated before. Data from the multidirectional detectors YangBaJing, Moussala, Bure, Mt. Hermon and Yerevan (2000 m) were used for the calculations, taken from the muon detector database mddb created at IZMIRAN. To exclude model dependence, the meteorological effects were studied by different methods.
Speaker: Lev Pustilnik (Israel Cosmic Ray Center, and Tel Aviv University)
Modelling muon and neutron fluxes and spectra on the Earth's ground induced by primary cosmic rays 1h
The SecondaryCR model evaluates particle fluxes and spectra of secondary e-, e+, mu+, mu-, gammas, protons, neutrons, Cherenkov light etc. at different positions, altitudes and times in the Earth's atmosphere. We developed this model of secondary cosmic ray production in the Earth's atmosphere in previous studies. It is based on existing models evaluating particle transport in the heliosphere and magnetosphere and the interactions of primary cosmic rays with the atmosphere. For the evaluation at 1 AU at the magnetopause we use results of the HelMod model. The transparency of the magnetosphere was obtained with the GeoMag model, and finally the secondary production in the Earth's atmosphere was simulated with the CORSIKA package. The fluxes and spectra of neutrons and muons propagated to the ground over the globe during the 22nd and 23rd solar cycles were simulated. The results are discussed in connection with neutron monitor measurements. The possibility of evaluating a neutron monitor response function from the SecondaryCR model simulations is discussed.
Speaker: Blahoslav Pastirčák (Institute of Experimental Physics SAS, Košice, Slovakia)
Modelling the Production of Cosmogenic Radionuclides due to Galactic and Solar Cosmic Rays 1h
Cosmogenic radionuclides such as 10Be, 14C and 36Cl are a product of the interaction of high-energy primary cosmic ray particles, in particular galactic cosmic rays (GCRs), with the Earth's atmosphere. Because GCRs are modulated on their way through the interplanetary medium, the GCR-induced production of these radionuclides is anti-correlated with the solar cycle. In addition, solar energetic particle (SEP) events also occur frequently during phases of strong solar activity. While the production due to GCRs can be seen as a background, strong SEP events, in particular so-called Ground Level Enhancement (GLE) events which can be detected at the Earth's surface, may strongly contribute to the production of 10Be, 14C and 36Cl, a topic currently much discussed in the literature. Using the energy spectra of modern GLE events, we investigate the influence of 58 out of the 71 GLEs and statistically assess the possibility of detecting such events in present ice-core and tree-ring records.
Speaker: Konstantin Herbst (Christian-Albrechts-Universität zu Kiel)
Muon Array with RPCs for Tagging Air showers (MARTA) 1h
We discuss the concept of an array with Resistive Plate Chambers (RPC) for muon detection in ultra-high energy cosmic ray (UHECR) experiments. RPCs have been used in particle physics experiments due to their fast timing properties and spatial resolution. The operation of a ground array detector poses challenging demands, as the RPCs must operate remotely in an extreme environment, with limited power and minimal maintenance. In its baseline configuration, each MARTA unit includes one 1.5x1.2 m^2 RPC with 64 pickup electrodes (pads). The DAQ system is based on an ASIC, allowing readout of the large number of channels with low power consumption. Data are recorded using a dual technique: single particle counting with a simple threshold on the signal from each pad, and charge integration for high occupancy. The RPC, DAQ, High Voltage and monitoring systems are enclosed in a sealed aluminum case, providing a compact and robust unit suited for outdoor environments, which can be easily deployed and connected. The RPCs developed at LIP-Coimbra are able to operate with a very low gas flux, which allows running them for a few years with a small gas reservoir. Several full-scale units are already installed and taking data in several locations and with different configurations, proving the viability of the MARTA concept. By shielding the detector units with enough slant mass to absorb the electromagnetic component of the air showers, a clean measurement of the muon content is possible, a concept to be implemented in a next generation of UHECR experiments. The specific features of a MARTA unit are presented, which include particle counting with high efficiency, time resolution and spatial segmentation. The potential of the MARTA concept for muon measurements in air showers is assessed, as well as tentative methods for calibration and cross-calibration with existing detectors.
Speaker: Raul Sarmento (LIP)
Neutrons produced by the Earth's crust due to Lunar and Solar tides 1h
The results presented in this report are based on measurements of the thermal neutron flux produced by the Earth's surface during an experiment carried out in the Pamir region at an altitude of 4200 m above sea level in the period from August 1 till August 14, 1994. Neutrons in the Earth's atmosphere are produced mainly in interactions of primary cosmic ray nucleons and nuclei with energies over 1 GeV with the nuclei of atmospheric elements, through the breakup of those nuclei. At these energies over 90% of the primary cosmic rays are protons, so we consider that the neutrons in the Earth's atmosphere are mainly produced in interactions of primary cosmic ray protons with energies over 1 GeV with the nuclei of atmospheric atoms. Consequently, neutron intensity variations in the atmosphere can be associated with variations of the proton flux. The geomagnetic cut-off rigidity for the experimental site (Moskvina meadow) is 9.2 GV, so the energy threshold for the primary protons is 8.3 GeV. The period from August 1 till August 14, 1994 was quiet in terms of heliophysical and geophysical conditions: no essential variations of cosmic rays in the interplanetary space or of neutrons at the ground-based neutron monitors were observed, geomagnetic conditions were quiet, and no chromospheric flares on the Sun were detected. During the period from August 1 till August 9 the Kp-index did not exceed 2, and on August 8 it stayed near 0 for a long time. At the end of August 9 the Kp-index began to increase and reached 4 by the evening of August 10; it remained at this level till August 14 and then decreased. Under quiet geomagnetic conditions and in the absence of chromospheric flares, the intensity of secondary cosmic ray neutrons at the Moskvina meadow was expected to stay almost constant. Although the spatial anisotropy of the cosmic ray intensity leads to daily variations due to the Earth's rotation, their value is small: for energies of several GeV the daily variations are less than 1%. Nevertheless, according to the measurements during the period from August 1 till August 14, 1994, the neutron counting rate changed by a factor of two or more during the day. The neutron flux increased as the Moon or the Sun approached the crossing of the local meridian, and then decreased to its former level. These circumstances exclude the possibility of explaining the variations by known extraterrestrial factors. In the present report the authors show that the observed increases of the neutron intensity are caused by lunar and solar tides.
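For reference, the quoted proton threshold follows from the stated cut-off rigidity through the standard rigidity-energy relation for a proton (Z = 1, m_p c^2 ≈ 0.938 GeV), shown here only as a consistency check:
\[
E_{k} = \sqrt{(ZeR)^{2} + (m_{p}c^{2})^{2}} - m_{p}c^{2}
      = \sqrt{(9.2\,\mathrm{GeV})^{2} + (0.938\,\mathrm{GeV})^{2}} - 0.938\,\mathrm{GeV}
      \approx 8.3\ \mathrm{GeV}.
\]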
Speaker: Dr Nikolay Volodichev (D.V.Skobeltsyn Institute of Nuclear Physics, M.V.Lomonosov Moscow State University)
New electronics for the surface detectors of the Pierre Auger Observatory 1h
The surface detector array of the Pierre Auger Observatory consists of 1660 water Cherenkov detectors that sample the charged particles and photons of air showers initiated by energetic cosmic rays at the ground. Each detector records data locally with timing obtained from GPS units and power from solar panels and batteries. In the framework of the planned upgrade of the Auger Observatory, new electronics has been designed for the surface detectors. The electronics upgrade includes better timing with up-to-date GPS receivers, higher sampling frequency, increased dynamic range, increased processing capability, and better calibration and monitoring systems. It will also process the data of the additional scintillator detectors planned for the upgrade. In this paper, the design of the new electronics will be presented and its performance will be discussed.
Speaker: James Beatty (Ohio State University)
New software package for modelling of cosmic ray transport in the atmosphere 1h
In this paper the RUSCOSMICS software package, based on the GEANT4 toolkit, and its capabilities for cosmic ray studies are considered. Energy spectra of secondary cosmic ray particles resulting from the modeling of proton transport through the Earth's atmosphere are presented. The calculation error is estimated and a comparison with experimental data is carried out. On the basis of the secondary cosmic ray flux intensity we also investigate the contribution of different particles (protons, muons, electrons, positrons) to the ionization process in the atmosphere. The altitude profiles of ionization are presented, and the absorbed radiation dose is also calculated. The obtained data are compared with the results of other authors.
Speaker: Yury Balabin (PGI)
New upper limit on strange quark matter flux with the PAMELA space experiment 1h
Speaker: Marco Ricci (Istituto Nazionale Fisica Nucleare Frascati (IT))
Nuclei charge measurement with AMS-02 Silicon Tracker 1h
The Alpha Magnetic Spectrometer (AMS-02) is an astroparticle physics detector installed on the International Space Station (ISS) on May 16th 2011 during the STS-134 NASA Endeavour Shuttle mission. The purpose of the experiment is to study with unprecedented precision and statistics charged particles and nuclei in an energy range from 0.5 GeV to a few TeV. The AMS-02 Tracker System accurately determines the trajectory and absolute charge (Z) of cosmic rays by multiple measurements of the coordinates and energy loss in nine layers of double-sided silicon micro-strip detectors. This energy loss is proportional to the square of the particle charge, thus allowing the distinction between different nuclei. The analog readout and the high dynamic range of the front-end electronics allow identification of nuclear species from hydrogen up to iron and above. The charge resolution is naturally degraded by a number of detector effects that need to be correctly accounted for. In this contribution we describe the procedure that has been used to accurately calibrate the Tracker response and optimize its performance in terms of charge resolution. We will discuss the resulting analysis methods available to identify different particle species in the tracker, and present the overall measured performance.
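Since the energy loss scales as the square of the charge, a minimal charge estimator can be sketched as follows (an illustration under that assumption only; the actual AMS-02 calibration additionally corrects for the detector effects described in the contribution):

    import numpy as np

    def charge_estimate(de_dx, de_dx_single_charge):
        """Estimate Z from the measured energy loss of a layer, assuming
        dE/dx is proportional to Z**2, with de_dx_single_charge the calibrated
        response of the same layer to a Z = 1 particle."""
        return np.sqrt(np.asarray(de_dx) / de_dx_single_charge)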
Speaker: Mrs Stefania Vitillo (Universite de Genève)
Poster_Stefania.pdf
NuMoon: Status of ultra high energy particle searches with LOFAR 1h
The lunar Askaryan technique is one of the few ways to obtain a large enough collecting area to detect ultra high energy cosmic rays and neutrinos at the highest end of the spectrum, above 10$^{21}$ eV. The flux of these particles is unknown, but if they are found they either point back to the best cosmic accelerators or may be the products of the decay of exotic particles and a step towards dark matter identification. The large collecting area is especially relevant for frequencies between 100 and 200 MHz, where the radiation is spread out over a wider angle and thus more of the lunar surface can be used for a possible detection. The NuMoon project therefore observes the Moon at these frequencies to search for nanosecond pulses. A first project with the Westerbork Synthesis Radio Telescope has placed the most stringent upper limits on the flux of ultra high energy cosmic rays and neutrinos. The next step is to observe with LOFAR, currently the most sensitive low frequency telescope. In this contribution I will present the status and plans of the project.
Speaker: Sander ter Veen (ASTRON)
On the correlation of the angular and lateral distributions of electrons after multiple scattering allowing for energy losses 1h
We calculate analytically the correlation coefficient of the scattering angle and the lateral deflection for electrons being multiply scattered by small angles while losing energy. We show that when average losses are assumed for the bremsstrahlung process the behaviour of the correlation coefficient with electron energy is completely different from that when only the ionisation losses are assumed. We also show how the correlation changes when fluctuations in the bremsstrahlung are allowed for. Based on these results, an attempt to understand the correlation for electrons in EAS is made.
Speaker: Prof. Maria Giller (University of Lodz)
PAMELA'S MEASUREMENT OF GEOMAGNETIC CUTOFF VARIATIONS DURING SOLAR ENERGETIC PARTICLE EVENTS 1h
Data from the PAMELA satellite experiment were used to measure the geomagnetic cutoff for high-energy (above 80 MeV) protons during the solar particle events on 2006 December 13 and 14. The variations of the cutoff latitude as a function of rigidity were studied on relatively short timescales, corresponding to single spacecraft orbits (about 94 minutes). Estimated cutoff values were cross-checked with those obtained by means of a trajectory tracing approach based on dynamical empirical modeling of the Earth's magnetosphere. We find significant variations in the cutoff latitude, with a maximum suppression of about 6 degrees for 80 MeV protons during the main phase of the storm. The observed reduction in the geomagnetic shielding and its temporal evolution were compared with the changes in the magnetosphere configuration, investigating the role of IMF, solar wind and geomagnetic (Kp, Dst and Sym-H indexes) variables and their correlation with PAMELA results.
A.Bruno_ICRC2015_287_poster.pdf
PAMELA'S MEASUREMENT OF GEOMAGNETICALLY TRAPPED AND ALBEDO PROTONS 1h
Data from the PAMELA satellite experiment were used to perform a detailed measurement of under-cutoff protons at low Earth orbit. On the basis of a trajectory tracing approach using a realistic description of the magnetosphere, protons were classified into geomagnetically trapped and albedo. The former includes stably-trapped protons in the South Atlantic Anomaly, which were analyzed in the framework of the adiabatic theory, investigating energy spectra, spatial and angular distributions. PAMELA data were compared with other spacecraft measurements and with predictions of recent theoretical models. The albedo protons were classified into quasi-trapped, concentrating in the magnetic equatorial region, and un-trapped, spreading over all latitudes and including both short-lived (precipitating) and long-lived (pseudo-trapped) components. Features of the penumbra region around the geomagnetic cutoff were investigated in detail. PAMELA results significantly improve the characterization of the high energy proton populations in near Earth orbits.
Parallelization schemes for AIRES's Monte Carlo 1h
In this work we introduce different parallelization schemes implemented in the AIRES (AIR-shower Extended Simulations) software in order to perform simulations without the thinning algorithm on HPC clusters. The AIRES particle stack was modified to define a new structure allowing its parallelization using the MPI library. Adopting this new structure, three different parallelization tactics were implemented according to how particles are transferred between the working nodes: 1) transfer based on the number of particles stored in the stacks; 2) transfer based on the energy of the particles stored in the stacks; 3) transfer based on the energy of the particles stored in the stacks, with decisions according to the characteristics of the particle's nucleus type. In this paper we present a comparison of the results obtained with the best-performing parallelized version of AIRES and the original version of AIRES, considering longitudinal and lateral profiles of vertical showers induced by Fe primaries of $10^{16.75}$ eV. Towards the end of this work we include an analysis of the performance of each parallelization tactic, evaluated with different simulations of vertical showers with energies between $10^{15.75}$ eV and $10^{18.75}$ eV.
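A minimal mpi4py sketch of the first tactic listed above (transfer based on the number of particles stored in the stacks); all names and thresholds are illustrative assumptions, and this is not the actual AIRES implementation.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    def rebalance(stack):
        """Collective call: every rank reports its stack size, and the most
        loaded rank ships its excess particles to the least loaded rank when
        its load exceeds 1.5 times the mean (tactic 1 above, simplified)."""
        counts = comm.allgather(len(stack))
        mean = sum(counts) / size
        donor = counts.index(max(counts))
        receiver = counts.index(min(counts))
        if counts[donor] > 1.5 * mean and donor != receiver:
            if rank == donor:
                excess = stack[int(mean):]
                del stack[int(mean):]
                comm.send(excess, dest=receiver, tag=0)
            elif rank == receiver:
                stack.extend(comm.recv(source=donor, tag=0))
        return stack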
Speaker: Leonardo Dominguez (Departamento de Computación de Alta Prestación - Comisión Nacional de Energía Atómica)
Performance and Operational Status of Muon Detectors in the Telescope Array Experiment 1h
Measurement of shower particles using scintillators at ground level, with different absorber thicknesses, enables detailed studies of the Telescope Array experiment's energy scale and of hadronic interaction models. We designed and constructed two types of such detectors. In this report, we present their performance and operational status.
Speaker: Toshiyuki Nonaka (Institute for Cosmic Ray Research, University of Tokyo)
ICRC2015poster_tamuon.pdf
Predicted CALET Measurements of Heavy and Ultra-Heavy Cosmic Ray Nuclei 1h
The CALorimetric Electron Telescope (CALET) is a Japanese-Italian-US astroparticle observatory expected to be installed on the ISS in 2015. The main calorimeter (CAL) on CALET is composed, from top to bottom, of a charge detector (CHD) with two crossed layers of scintillator paddles, an imaging calorimeter (IMC) with planes of scintillating fibers interleaved with tungsten sheets, and a total absorption calorimeter (TASC) made of lead tungstate logs. The main science objectives of CAL are to measure the combined cosmic ray electron and positron spectrum to 20 TeV, gamma rays to 10 TeV, and nuclei $1 \leq Z \leq 40$ to 1,000 TeV. In this paper we present the expected numbers and energy spectra of heavy ($26 \le Z < 30$) and ultra-heavy (UH) ($30 \le Z \le 40$) Cosmic Ray (CR) nuclei that CAL will measure in a planned 5 year mission in the full detector geometry, accounting for geomagnetic screening and interactions in the CHD. We will also present the numbers of UH CR nuclei that it will measure using the expanded acceptance permitted by utilizing the Earth's geomagnetic field to screen for events above $\sim 600$ MeV/nucleon. Above this threshold the UH charges can be resolved using the CHD with a trajectory correction from the top half of the IMC, without the need for an energy measurement in the TASC.
Speaker: Dr Brian Flint Rauch (NASA-Natl. Aeronaut. & Space Admin. (US))
Rauch_CALET_UH_Poster_ICRC2015_v7.pdf
PROTON AND LIGHT ION INTERACTIONS IN COSMIC RAY EXPERIMENT "STRATOSPHERE" IN COMPARISON WITH RECENT COLLIDER RESULTS 1h
The estimation of the physical properties of the excited fireball from the complex final pattern of produced particles is a key challenge in nucleus-nucleus collisions at high energies. An effective way to better understanding and interpretation of the results consists in analyses of the interactions of smaller systems, created in proton-proton or proton-nucleus collisions. On the basis of such an approach, interactions of cosmic ray light nuclei and protons with different targets have been studied in the experiment "Stratosphere" at energies above 10 TeV in the laboratory system [1]. The results have shown that in rare events produced by alpha particles and light nuclei, the transverse momentum spectra of secondary gamma-quanta in the soft region (up to 2 GeV/c) have an exponential character with large values of the inverse slope of the distributions: TA ~ 0.8 GeV/c. On the contrary, in proton interactions the slope is essentially smaller, Tp ~ 0.2 GeV/c. For charged secondary particles, high-order intermittency analyses have again demonstrated a large difference between events produced by protons and by nuclei. Thus, an essential system-size dependence of the forward production dynamics has been obtained with limited statistics. Similar events were observed by the JASSE and Concorde cosmic ray collaborations. A new instanton-induced interpretation has been suggested as an explanation. The obtained result is an important issue to be tested in collider experiments. The launch of the Large Hadron Collider (LHC) opened broad new possibilities for high energy physics at the TeV scale. Previous RHIC explorations of soft physics at midrapidity [2] were further developed in the work of the ALICE collaboration [3]. In the very forward region, new experiments have been performed at the LHC forward detector LHCf. In proton-proton collisions at 900 GeV and at 7 TeV the transverse momentum distribution of inclusive neutral pions was measured in 2010 [4], and in p-Pb collisions at 5.02 TeV in 2013 [5]. All proton-induced data (together with antiproton-proton collisions at 630 GeV from the UA7 experiment) have shown that there is only a weak dependence of the average value of the neutral pion PT distribution on the centre-of-mass energy [6]. The exponential fits of the spectra [4, 5] agree well enough with the corresponding estimate from our Stratosphere experiment. In the proposal [7] a new forward particle production experiment, PHENIX-RHICf, has been suggested, in which p-p, proton-Nitrogen and, as future options, Nitrogen-Nitrogen and Fe-Nitrogen collisions are considered. Realization of direct collider measurements of light ion collisions would be very important both for frontier problems of high energy heavy ion physics and for actual high energy cosmic ray problems. References: 1. A.Kh. Argynova et al., Proc. of the 27th ICRC, p. 1477-1480, Hamburg, 2001. 2. S. Esumi, Soft physics at PHENIX, Prog. Theor. Exp. Phys., 2015, 03A104. 3. B. Abelev et al. (ALICE collaboration), pp and Pb-Pb, HAL: hal-01104892, 19 Jan 2015. 4. O. Adriani et al. (LHCf collaboration), arXiv:1205.4578. 5. O. Adriani et al. (LHCf collaboration), arXiv:1403.7845. 6. M. Hiroaki (for the LHCf collaboration), ICRC 2013, Rio de Janeiro. 7. Y. Ito et al., proposal for forward particle production at RHIC, arXiv:1401.1004, 1409.4860.
ICRC2015_1.pdf
poster_icrc_Loc.pdf
R&D of EAS radio detection in China 1h
In order to study ultra-high-energy cosmic-ray (UHECR) sources, we need not only to know their direction, energy and chemical composition, but also to collect large statistics of experimental data, which requires that the detector have a large effective area and a high duty cycle. Radio antennas present some attractive aspects in this perspective, with very low unit costs, ease of deployment over large areas and a 100% duty cycle; they are therefore suitable for detecting UHECRs. In the Tianshan Mountain range (Xinjiang Autonomous Region, China), a radio interferometer named 21 CMA was deployed, which aims at studying the epoch of reionization by detecting the hydrogen 21 cm radiation. On this site, the Sino-French cooperation experiment TREND (Tianshan Radio Experiment for Neutrino Detection) has performed autonomous detection and identification of EAS with a stand-alone and self-triggered array of 50 radio antennas. This inspires us to investigate the polarization characteristics of the radio signal with a hybrid array of 21 scintillators and 35 antennas measuring the x, y and z components of the electric field emitted by air showers. This hybrid setup is expected to provide a quantitative evaluation of the EAS identification and background rejection capabilities of the radio technique. If successful, this experiment would open the door for stand-alone, giant radio arrays dedicated to the study of high energy cosmic particles, such as the GRAND project.
Speaker: Dr Zhaoyang Feng (IHEP,CAS)
Results from the Telescope Array from data collected in hybrid-trigger mode 1h
The Telescope Array is a hybrid detector which consists of a surface detector (SD) and three air fluorescence detector sites surrounding the SD array. Hybrid data collection began in May 2008, with independent triggering of the two detector systems. Since October 2010, the SD array has been triggered with an external trigger from the fluorescence detectors (called a "hybrid trigger") designed to collect SD information for events at primary energies where the standard SD trigger is inefficient. In this paper, we describe the hybrid-trigger performance and report on analysis results using this trigger. Four years of data are included in this analysis.
Speaker: Hisao Tokuno (UTokyo)
Search for energy dependent patterns in the arrival directions of cosmic rays at the Pierre Auger Observatory 1h
Energy-dependent patterns in the arrival directions of cosmic rays could arise from deflections in galactic and extragalactic magnetic fields. We report on searches for such patterns in the data of the surface detector of the Pierre Auger Observatory at energies above E = 5 EeV in regions within approximately 15° of the arrival directions of events with energy E > 60 EeV. No significant patterns are found with this analysis which can be used to constrain parameters in propagation scenarios.
Speaker: Dr Tobias Winchen (Bergische Universität Wuppertal)
Search for isotropic microwave radiation from electron beam in the atmosphere 1h
We report a search for 12.5 GHz microwave radiation from electron beams in the atmosphere. Ultrahigh-energy cosmic rays (UHECRs) are observed indirectly through extensive air showers (EASs) by particle detectors on the ground or by fluorescence detectors using a remote sensing method. If isotropic microwave radiation from EAS is detected, it could be used for future UHECR observations based on a remote sensing method, like the fluorescence technique, but with a 100% duty cycle like particle detectors. Weak attenuation in the atmosphere is another advantage of measuring microwave radiation. To study microwave radiation from EAS, we used the Electron Light Source (ELS) located at the Telescope Array Observatory in Utah, USA. The ELS emitted electron beams vertically into the atmosphere. The energy of the electrons in the beam is 40 MeV, which is similar to that of electrons in an EAS. About 600 million electrons are contained in a beam, which is equivalent to the shower maximum of an air shower created by a 10^17 eV cosmic ray. The beam is a triangular pulse with a base of 20 ns. Commercial satellite-television equipment is utilized for the microwave detection system. A 1.2 m diameter parabolic dish with a 12.5 GHz receiver, which measures vertical and horizontal polarizations, is fixed on a concrete pad located 80 m away from the electron beam. About 1500 beam shots were observed and no microwave signal has been detected. In this contribution we report details of this detector, its calibration, and the obtained upper limit on the intensity of isotropic 12.5 GHz microwave radiation.
Speaker: Prof. Tokonatsu Yamamoto (Konan Univeristy)
MbrPoster150731.pdf
MbrPoster150731.pptx
Search for UHE Photons with the Telescope Array Hybrid Detector 1h
In order to understand sources of ultra high energy cosmic rays, we search for ultra high energy photons with the Telescope Array experiment. The Telescope Array is a hybrid detector consisting of an array of scintillation detectors, which measure the lateral profile of air showers, and fluorescence detectors, which measure the longitudinal profile of air showers. This information is used to search for photon-like events. We will report on the analysis method, and the result of a photon search using five years of TA data.
Speaker: Katsuya Yamazaki (University of Tokyo)
Search for Ultra-relativistic Magnetic Monopoles with the Pierre Auger Observatory 1h
Ultra-relativistic magnetic monopoles, possibly a relic of phase transitions in the early universe, would deposit an amount of energy comparable to UHECRs in their passage through the atmosphere, producing highly distinctive air shower profiles. We have performed a search for ultra-relativistic magnetic monopoles in the sample of air showers with profiles measured by the Fluorescence Detector of the Pierre Auger Observatory. No candidate was found to satisfy our selection criteria and we establish upper limits on the isotropic flux of ultra-relativistic magnetic monopoles - the first from an UHECR detector - improving over previous results by up to an order of magnitude.
150724MonopolePosterICRC15.pdf
Seasonal variations in the intensity of muon bundles detected at the ground level 1h
Experimental data accumulated in a 3-year long series of measurements (from May 2012 to April 2015) of cosmic ray muon bundles with the coordinate-tracking detector DECOR are analyzed. It has been found that the measured rate of the events exhibits clear seasonal variations, repeated every year of observations. The amplitude of the first annual harmonic of the event rate has been estimated as (5.7 +/- 0.1) %, with the maximum intensity in January and the minimum in July. Thus, the difference between the average intensity of muon bundles recorded in winter and in summer exceeds 10 %. Taking into account that the mean energy of muons registered in the bundles is of the order of several tens of GeV, the observed difference cannot be described in the framework of the well-known mechanism of the temperature effect due to decays of low-energy particles in the atmosphere, which is typical for single muons detected at ground level. An alternative explanation, related to changes in the shape of the lateral distribution function of EAS muons in an atmosphere with a variable temperature profile, is discussed.
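A generic sketch of how the amplitude and phase of a first annual harmonic can be extracted from a daily event-rate series (illustrative code, not the DECOR analysis):

    import numpy as np

    def first_annual_harmonic(day_of_year, rate):
        """Least-squares fit of rate(t) ~ r0 + a*cos(w*t) + b*sin(w*t) with
        w = 2*pi/365.25; returns the relative amplitude of the first annual
        harmonic and the day of maximum intensity."""
        t = np.asarray(day_of_year, dtype=float)
        w = 2.0 * np.pi / 365.25
        M = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
        r0, a, b = np.linalg.lstsq(M, np.asarray(rate, dtype=float), rcond=None)[0]
        amplitude = np.hypot(a, b) / r0
        day_of_maximum = (np.arctan2(b, a) / w) % 365.25
        return amplitude, day_of_maximum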
Sidereal anisotropy of Galactic cosmic ray observed by the Tibet Air Shower experiment and the IceCube experiment 1h
The IceCube experiment presented in 2012 the declination dependence of the first and second harmonic coefficients of the sidereal cosmic-ray anisotropy at 20 TeV and 400 TeV. In this presentation, we calculate the coefficients for the cosmic ray data observed by the Tibet ASgamma experiment at median energies of 12 TeV and 300 TeV during the period between November 1999 and May 2010. By combining these coefficients, we analyze for the first time the sidereal anisotropy based on the two-hemisphere observations by the IceCube and Tibet ASgamma experiments.
Speaker: Prof. Kazuoki Munakata (Shinshu University, Nagano, Japan)
Simulations for CALET Energy Calibration Confirmed Using CERN-SPS Beam Tests 1h
CALorimetric Electron Telescope (CALET) is a detector for the precise measurement of cosmic ray electrons, gamma-rays and nuclei on the International Space Station. CALET has an imaging and a thick calorimeter, which provide excellent energy resolution and particle identification. For the on-orbit calibration, we plan to use the minimum ionizing particles of cosmic rays such as protons and helium nuclei. We have carried out MC simulations to develop an algorithm of penetrating event selection by event reconstruction and to estimate the on-orbit event rate for the calibration. We have also carried out the beam tests at the CERN-SPS to assess the detector performance and the validity of our MC simulation and calibration methods. In this paper, we present the calibration methods and expected detector performance with beam test results.
Speaker: Yosui Akaike (University of Tokyo (JP))
Status and Prospects of the Auger Engineering Radio Array 1h
The Auger Engineering Radio Array (AERA) is a low-energy extension of the Pierre Auger Observatory. It is used to detect radio emission from extensive air showers in the 30 - 80 MHz frequency band. A focus of interest is the dependence of the radio emission on shower parameters such as the energy and the distance to the shower maximum. After three phases of deployment, AERA now consists of 153 autonomous radio stations with different spacings, covering an area of about 17 km$^2$. The size, station spacings, and geographic location at the same site or near other Auger low-energy detector extensions, are all targeted at cosmic ray energies above $10^{17}$ eV. The array allows us to explore different technical schemes to measure the radio emission as well as to cross calibrate our measurements with the established baseline detectors of the Auger Observatory. We will report on the most recent technological developments and experimental results obtained with AERA.
Speaker: Mr Johannes Schulz (Department of Astrophysics/IMAPP, Radboud University Nijmegen, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands)
Poster_ICRC_final.pdf
Study of UHECR Composition Using Telescope Array's Middle Drum Detector and Surface Array in Hybrid Mode 1h
The seven year Telescope Array (TA) Middle Drum hybrid composition measurement shows agreement between Ultra-High Energy Cosmic Ray (UHECR) data and a light composition obtained with QGSJetII-03 or QGSJet-01c models. The data are incompatible with a pure iron composition, for all models examined, for energies log10(E/eV)>18.2. This is consistent with previous TA results. This analysis is presented using an updated version of the pattern recognition analysis (PRA) technique developed by TA.
Studying Cosmic Ray Composition with IceTop using Muon and Electromagnetic Lateral Distributions 1h
In this contribution we will consider the methods at our disposal to estimate the mass of primary cosmic rays on an event-by-event basis using IceTop, the surface component of the IceCube detector at the geographical South Pole. We reconstruct the events using two lateral distribution functions, one for the muon component and one for the electrons and gamma rays. This results in a few parameters that are sensitive to primary mass: the muon density at large lateral distances and the steepness of the lateral distribution of the electromagnetic component of the air shower. This approach is complementary to the technique already used in IceCube, whereby one can get a mass sensitive parameter using the air shower size in IceTop together with several observables from the deep portion of the detector. Most importantly, this approach allows the study of composition-dependent anisotropy, since the zenith angle range is not constrained by the requirement of detecting the air shower in the deep detector.
Speaker: Javier Gonzalez (Bartol Research Institute, Univ Delaware)
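To make the two-component reconstruction idea of the IceTop abstract above concrete, here is a minimal Python sketch that fits a steep electromagnetic lateral distribution plus a flatter muon component to tank densities. The functional forms, slopes and reference distance are illustrative assumptions, not the IceTop lateral distribution functions.

```python
# Minimal sketch (not the IceTop reconstruction): fitting a two-component
# lateral distribution, a steep electromagnetic part plus a flatter muon part,
# to particle densities versus core distance.
import numpy as np
from scipy.optimize import curve_fit

def ldf(r, rho_em, beta, rho_mu):
    """EM power law with slope beta plus a flatter muon component,
    both normalised at a 125 m reference distance (assumed forms)."""
    return rho_em * (r / 125.0) ** (-beta) + rho_mu * (r / 125.0) ** (-0.75)

# Synthetic "measured" densities (m^-2) at tank distances (m).
r = np.array([80., 125., 200., 320., 500., 700.])
rho_true = ldf(r, 8.0, 3.0, 0.4)
rng = np.random.default_rng(1)
rho_obs = rho_true * rng.normal(1.0, 0.1, size=r.size)

popt, _ = curve_fit(ldf, r, rho_obs, p0=[5.0, 2.5, 0.2])
rho_em, beta, rho_mu = popt
print(f"EM norm={rho_em:.2f}, EM slope beta={beta:.2f}, muon norm={rho_mu:.2f}")
print(f"muon density at 600 m ~ {rho_mu * (600/125.)**-0.75:.3f} m^-2")
```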
Taiwan Astroparticle Radiowave Observatory for Geo-synchrotron Emissions (TAROGE) 1h
TAROGE is an antenna array on the high mountains of Taiwan's east coast for the detection of ultra-high energy cosmic rays (UHECRs) with energies above 10^19 eV. The antennas point toward the ocean to detect radiowave signals emitted by UHECR-induced air showers as a result of their interaction with the geomagnetic field. Looking down from the coastal mountain, the effective area is enhanced by collecting both the direct emission and the ocean-reflected signals. The instrument also provides the capability of detecting Earth-skimming tau neutrinos through the showers induced by their subsequent tau decays. A prototype station with 12 log-periodic dipole array antennas for 110-300 MHz was successfully built at 1000 m elevation near Heping township, Taiwan in July 2014 to prove the detection concept. It has been operating smoothly for radio surveys and optimization of instrumental parameters. We plan to install another station on a higher mountain in summer 2015. In this report, we discuss the design of TAROGE, the performance of the prototype station, the expected sensitivity, and future prospects.
Speaker: Prof. Jiwoo Nam (LeCosPA and Department of Physics, National Taiwan University)
Telescope Array measurement of UHECR composition from stereoscopic fluorescence detection 1h
The chemical composition of ultra-high-energy cosmic rays (UHECRs) affects the observable distribution of air-shower $X_{\rm max}$ values, the atmospheric slant depth at which the number of secondary shower particles reaches its maximum. The observed $X_{\rm max}$ distributions at various primary UHECR energies can be compared with the distributions predicted by detailed detector simulations for any assumed composition and high-energy hadronic interaction model. In this poster, we present measurements of $X_{\rm max}$ by the Telescope Array (TA) fluorescence detectors with stereoscopic shower reconstruction. We find that for all hadronic models considered, the data collected since TA operation began in 2007 is consistent with a chiefly light UHECR composition.
Speakers: Dr Thomas Stroman (University of Utah), Dr Yuichiro Tameda (Institute for Cosmic Ray Research, University of Tokyo)
Testing for uniformity of UHECR arrival directions 1h
Arrival directions of ultra-high energy cosmic rays (UHECRs) exhibit a mainly isotropic distribution with a hint of small deviations in particular energy bins. In this paper the available UHECR data are tested for circular uniformity of arrival directions using methods developed in directional statistics.
Speaker: Anatoly Ivanov (Shafer Institute for Cosmophysical Research & Aeronomy)
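A standard tool from the directional statistics mentioned in the abstract above is the Rayleigh test for circular uniformity. The sketch below is a generic Python implementation applied to synthetic right-ascension samples; it is not the analysis of the abstract, and the p-value uses the usual moderate-sample approximation.

```python
# Generic Rayleigh test for uniformity of angles on a circle (e.g. right ascension).
import numpy as np

def rayleigh_test(phi):
    """Rayleigh test for circular uniformity of angles phi (radians).
    Returns the mean resultant length R and an approximate p-value."""
    n = len(phi)
    C, S = np.cos(phi).sum(), np.sin(phi).sum()
    R = np.sqrt(C**2 + S**2) / n
    z = n * R**2
    # First-order large-sample approximation (Mardia & Jupp style):
    p = np.exp(-z) * (1 + (2*z - z**2) / (4*n))
    return R, max(min(p, 1.0), 0.0)

rng = np.random.default_rng(2)
ra_iso = rng.uniform(0, 2*np.pi, 200)                    # isotropic sample
ra_dip = np.mod(rng.vonmises(1.0, 0.5, 200), 2*np.pi)    # dipole-like sample
for label, ra in [("isotropic", ra_iso), ("dipole-like", ra_dip)]:
    R, p = rayleigh_test(ra)
    print(f"{label}: R = {R:.3f}, p = {p:.3g}")
```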
The AMIGA Muon Counters of the Pierre Auger Observatory: Performance and Studies of the Lateral Distribution Function 1h
The AMIGA enhancement (Auger Muons and Infill for the Ground Array) of the Pierre Auger Observatory consists of a 23.5 km$^2$ infill area where air shower particles are sampled by water-Cherenkov detectors at the surface and by 30 m$^2$ scintillation counters buried 2.3 m underground. The Engineering Array of AMIGA, completed in February 2015, includes 37 scintillator modules (290 m$^2$) in a hexagonal layout. In this work, the muon counting performance of the scintillation detectors is analysed over the first 22 months of operation. A parametrisation of the detector counting resolution and the lateral trigger probability are presented. Finally, preliminary results on the observed muon lateral distribution function (LDF) are discussed.
Speaker: Dr Brian Wundheiler (Instituto de Tecnologías en Detección y Astropartículas)
The Cosmic Ray Nuclear Composition Measurement Performance of the Non-Imaging CHErenkov Array (NICHE) 1h
The Non-Imaging CHErenkov Array (NICHE) will eventually measure the flux and nuclear composition of cosmic rays from below $10^{15}$ eV to $10^{18}$ eV by using measurements of the amplitude and time-spread of the air-shower Cherenkov signal to achieve a robust event-by-event measurement of $X_{Max}$ and energy. NICHE will have sufficient area and angular acceptance to have significant overlap with TA/TALE, within which NICHE is located, in both fluorescence and Cherenkov measurements, allowing for energy cross-calibration. In order to quantify NICHE's ability to measure the cosmic ray nuclear composition, two different cosmic ray composition models, one based on the poly-gonato model of J. Hörandel (AstroPart 19, 2003) and the other based on the H4a model of T. Gaisser (Astropart 35, 2012), were used to generate simulated $X_{Max}$ distributions of the composite composition as a function of energy. These composition distributions were then unfolded into individual components via an analysis technique that included NICHE's simulated $X_{Max}$ and energy resolution performance as well as the effects of finite event statistics as a function of measured energy. In this talk, NICHE's ability to distinguish between these two CR composition evolution models and determine the individual components as a function of energy will be presented.
Speaker: John Krizmanic (USRA/CRESST/NASA/GSFC)
Krizmanic-ICRC15-Paper0562.pdf
The distribution of shower longitudinal profile widths as measured by Telescope Array in stereo mode 1h
Observing UHECR air showers in stereo mode provides a precise measurement of their longitudinal profiles. The Gaisser-Hillas function fits air shower profiles well on average. The range of shower widths can be sensitive to details of the average inelasticity and multiplicity in the early part of the shower. Such a measurement can then also be used to constrain the interaction models used in simulating UHECRs. This work can augment the conventional stereo composition measurement. The distribution of the Gaisser-Hillas function FWHM values will be made in bins of energy, matching the bins used in the stereo composition analysis. These distributions will then be compared to Monte Carlo simulations using standard interaction models (QGSJet, Sibyll, EPOS).
Speaker: Douglas Bergman (University of Utah)
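For reference, the width observable discussed in the abstract above can be computed directly from the Gaisser-Hillas parameterisation. The short Python sketch below evaluates the FWHM of one profile numerically; the parameter values are illustrative, not Telescope Array fits.

```python
# Numerical FWHM of a Gaisser-Hillas longitudinal profile (illustrative parameters).
import numpy as np

def gaisser_hillas(X, Nmax, X0, Xmax, lam):
    """Gaisser-Hillas profile N(X); defined to be zero below X0."""
    X = np.asarray(X, dtype=float)
    t = np.clip((X - X0) / (Xmax - X0), 1e-12, None)
    N = Nmax * t ** ((Xmax - X0) / lam) * np.exp((Xmax - X) / lam)
    return np.where(X > X0, N, 0.0)

def fwhm(X, N):
    """Full width at half maximum from a sampled profile."""
    half = 0.5 * N.max()
    above = X[N >= half]
    return above[-1] - above[0]

X = np.linspace(0, 2000, 4001)               # slant depth grid in g/cm^2
N = gaisser_hillas(X, Nmax=1e9, X0=0.0, Xmax=750.0, lam=70.0)
print(f"FWHM = {fwhm(X, N):.0f} g/cm^2")
```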
The effect of geomagnetic field on radio signal patterns from cosmic ray air showers 1h
Different types of mechanisms are involved in the generation and propagation of radio signals from cosmic ray air showers. The geomagnetic origin is one such mechanism, which is very important especially in low frequency band studies. Based on CORSIKA and CoREAS, we investigate the influence of the Earth's magnetic field on filtered peak radio amplitude patterns in the 32–64 MHz frequency band using a specifically designed computer code. Simulated showers are from proton and iron primary particles with 10^17 eV initial energy. It is found that the radio signal patterns depend heavily on the Earth's magnetic field, so that they change fundamentally as one moves from the southern to the northern hemisphere. We have chosen the Pierre Auger Observatory in the southern hemisphere and Tehran in the northern hemisphere for comparison purposes. Analyzing these patterns clearly shows the importance and influence of the Earth's magnetic field on the radio signal patterns from cosmic ray air showers.
Speaker: Mr Mohammad Sabouhi (Department of Physics , Semnan University, P.O. Box 35196-45399, Semnan, Iran)
The Guane Array of the LAGO Project 1h
The Space Weather program of the Latin American Giant Observatory (LAGO) is based on the installation of single or small arrays of water-Cherenkov detectors (WCD) spanned across Latin America. The Guane Array is one of the nodes of the LAGO detection network and is located in the city of Bucaramanga, Colombia, at $986$ m a.s.l. The array is composed of three autonomous LAGO WCD installed at the vertices of a $105$ m side equilateral triangle. Each WCD is locally operated by a low power consumption single board computer, and the first steps of the data analysis are done on board the detector to reduce data transfer, as a test for the operation of WCD in remote sites. The array operates with two complementary analysis modes: the counting mode, a single particle technique implementation at each individual detector, and the shower mode, which allows the offline identification of spacetime-correlated signals over the array. In this work we present the capabilities, characterization and first results of the Guane Array.
Speaker: Christian Sarmiento-Cano (Universidad Industrial de Santander)
The Influence of Magnetic Fields on UHECR Propagation from Virgo A 1h
Active galactic nuclei (AGN) are considered to be among the most plausible sources of cosmic rays with energy exceeding $\sim 10^{18}$ eV. Virgo A (M87 or NGC 4486) is the second closest active galaxy to the Milky Way. According to existing estimates it can be a prominent source of ultra high energy cosmic rays (UHECR). However, not many events have been registered in the sky region near Virgo A, possibly due to the influence of magnetic fields. In the present work we check UHECR events from recent data sets (Auger, Telescope Array, etc.) for the possibility of their origination in this AGN. We carried out simulations of UHECR motion from Virgo A, taking into account their deflections in galactic (GMF) as well as extragalactic (EGMF) magnetic fields according to several recent models. Maps of expected UHECR arrival directions have been obtained as a result. We find the following: 1) the UHECR deflection caused by the EGMF is comparable with that caused by the GMF, and the influence of the EGMF is sometimes dominant; 2) the effect of the EGMF shows an obvious asymmetry in the final distribution of expected UHECR arrival directions; 3) the results of the simulation depend on the chosen GMF model and are still open for further discussion.
Speaker: Oleh Kobzar (Institute of Nuclear Physics, Polish Academy of Sciences)
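The kind of trajectory calculation behind the study above can be sketched with a few lines of Python: the direction of an ultra-relativistic nucleus is rotated by the Lorentz force as it propagates. The snippet below uses a single uniform field and a crude fixed-step integrator purely for illustration; it is not the GMF/EGMF models or the propagation code of the abstract.

```python
# Toy deflection of an ultra-relativistic charged particle in a uniform field.
import numpy as np

E_CHARGE = 1.602176634e-19       # elementary charge, C
EV_TO_J = 1.602176634e-19        # J per eV
C_LIGHT = 2.998e8                # m/s
KPC = 3.0857e19                  # m

def propagate(direction, energy_eV, Z, B_tesla, length_kpc, step_kpc=0.05):
    """Advance the unit direction n of an ultra-relativistic particle along
    its path: dn/ds = (Z e c / E) n x B (valid for E ~ p c)."""
    n = np.asarray(direction, float)
    n /= np.linalg.norm(n)
    k = Z * E_CHARGE * C_LIGHT / (energy_eV * EV_TO_J)   # inverse gyroradius per tesla
    ds = step_kpc * KPC
    for _ in range(int(length_kpc / step_kpc)):
        n = n + ds * k * np.cross(n, B_tesla)
        n /= np.linalg.norm(n)               # renormalise after the Euler step
    return n

# 60 EeV proton crossing 20 kpc of a 1 microgauss field perpendicular to its path.
B = np.array([0.0, 0.0, 1e-10])              # tesla (1 microgauss)
n0 = np.array([1.0, 0.0, 0.0])
n1 = propagate(n0, 6e19, Z=1, B_tesla=B, length_kpc=20.0)
print(f"deflection: {np.degrees(np.arccos(np.clip(np.dot(n0, n1), -1, 1))):.1f} deg")
```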
The lunar Askaryan technique: a technical roadmap 1h
The lunar Askaryan technique, which involves searching for Askaryan radio pulses from particle cascades in the outer layers of the Moon, is a method for using the lunar surface as an extremely large detector of ultra-high-energy particles. The high time resolution required to detect these pulses, which have a duration of around a nanosecond, puts this technique in a regime quite different from other forms of radio astronomy, with a unique set of associated technical challenges which have been addressed in a series of experiments by various groups. Implementing the methods and techniques developed by these groups for detecting lunar Askaryan pulses will be important for a future experiment with the Square Kilometre Array (SKA), which is expected to have sufficient sensitivity to allow the first positive detection using this technique. Key issues include correction for ionospheric dispersion, beamforming, efficient triggering, and the exclusion of spurious events from radio-frequency interference. We review the progress in each of these areas, and consider the further progress expected for future application with the SKA.
Speaker: Justin Bray (University of Manchester)
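One of the technical issues listed in the abstract above, ionospheric dispersion, can be illustrated with a short time-domain sketch. The Python snippet below disperses and then coherently dedisperses a band-limited impulse using the standard ~1.34e9 * STEC / f^2 group-delay scaling (STEC in TEC units, f in Hz); the band, sampling rate and STEC value are arbitrary choices, and this is not SKA or lunar-experiment code.

```python
# Toy coherent dedispersion of an impulsive radio signal.
import numpy as np

FS = 1.0e9                      # sampling rate, 1 GS/s (assumption)
N = 4096

def ionospheric_delay(f_hz, stec_tecu):
    """Group delay in seconds for a slant TEC given in TEC units."""
    f = np.where(f_hz > 0, f_hz, np.inf)   # avoid division by zero at DC
    return 1.34e9 * stec_tecu / f**2

def apply_delay(signal, stec_tecu, sign=+1):
    """Shift each Fourier component by the ionospheric group delay.
    sign=+1 disperses the pulse, sign=-1 removes the dispersion again."""
    spec = np.fft.rfft(signal)
    f = np.fft.rfftfreq(len(signal), 1 / FS)
    phase = np.exp(-2j * np.pi * f * sign * ionospheric_delay(f, stec_tecu))
    return np.fft.irfft(spec * phase, n=len(signal))

# Band-limited impulse (100-350 MHz) standing in for an Askaryan pulse.
impulse = np.zeros(N); impulse[N // 2] = 1.0
spec = np.fft.rfft(impulse); f = np.fft.rfftfreq(N, 1 / FS)
spec[(f < 100e6) | (f > 350e6)] = 0.0
pulse = np.fft.irfft(spec, n=N)

dispersed = apply_delay(pulse, stec_tecu=10.0, sign=+1)
recovered = apply_delay(dispersed, stec_tecu=10.0, sign=-1)
print("peak amplitude: original %.3f, dispersed %.3f, dedispersed %.3f"
      % (pulse.max(), dispersed.max(), recovered.max()))
```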
The multi-sources M. C. collision generator GHOST for C R simulations at LHC energies 1h
GHOST (1) is an extension of HDPM (Hybrid Dual Parton Model) originally implemented in CORSIKA (2). It reproduces the charged-particle pseudo-rapidity distribution for NSD events measured by LHCb, CMS and TOTEM up to √s = 8 TeV. At this energy, two pairs of normal generators are centered symmetrically, at small rapidity 1.05 and mid rapidity 4.1 respectively, with respective widths of 0.95 and 1.8 units of rapidity. Together with the NSD and inelastic components, we detail the diffractive component (single and double). A more important rise of the central rapidity density also suggests an enhancement of the total multiplicity. The semi-inclusive data are used to evaluate the consequences of the violation of KNO scaling. The fluctuations of multiplicity are governed by the negative binomial distribution, and the possibility of an asymptotic form of the energy-dependent functions introduced by UA5 is investigated at UHE; the results in limited pseudo-rapidity intervals are used to evaluate a partial scaling, adjust the parameters of GHOST and describe the semi-inclusive pseudo-rapidity distributions expected over a large range of rapidity. The validity of the relation between transverse momentum Pt and multiplicity at very high energy is also considered. Those improvements have consequences for the simulation of EAS, suggesting a maximum depth at higher altitude and a larger muon content than with previous models, at least for $E_{0}\geq 2\times10^{16}$ eV. Comparisons are also performed with unexpected signals observed in EAS and in gamma-ray families in the energy range √s = 2-14 TeV (up to $10^{17}$ eV for EAS). (1) Proceedings ISVHECRI CERN 2014 (to be published in EPJ). (2) The simulation program CORSIKA, J. Knapp, D. Heck, J. N. Capdevielle, G. Schatz, T. Thouw.
Speaker: Jean-Noël CAPDEVIELLE (CNRS)
The muon detector prototype AMD for the determination of the muon content in UHECRs 1h
Precise measurements of the muon content of extensive air showers are essential for the identification of the chemical composition of ultra-high-energy cosmic rays. We therefore propose a new scintillator detector prototype, the Aachen Muon Detector (AMD). It can complement existing ground arrays composed of e.g. water Cherenkov detector stations. The detector consists of 64 scintillator tiles read out by silicon photomultipliers (SiPM) which are located in a steel housing which could be placed beneath the existing detector stations. SiPMs promise a photon detection efficiency which outperforms current photomultiplier tubes. In combination with their compact package, low cost per light sensor and a moderate bias voltage ($<100$ volts) a modular and robust design can be achieved. We present the current status of the AMD prototype, including first characterization measurements of the scintillator tiles and first promising simulation studies. We use a detailed detector simulation based on Geant4 to determine the efficiency of the AMD detector to reconstruct the simulated muon number in air showers.
Speaker: Christine Peters (RWTH Aachen University)
The NICHE Array: status and plans 1h
The Non-Imaging CHErenkov Array (NICHE) will be a low energy extension to Telescope Array and TALE using an array of closely spaced (~200 m) light collectors covering an area of ~2 square km. It will be deployed in the field of view of TALE and will overlap it in energy range. Showers with energies of 1-100 PeV will be reconstructed using both the Cherenkov light lateral distribution and the Cherenkov time-width lateral distribution. These two methods will allow the shower energy and Xmax to be determined. A prototype of the array, called j-NICHE, is currently being built and deployed. The design and plans for the full array are presented along with a plan to deploy the first 25 counters to get a true Cherenkov hybrid air shower measurement.
The north-south asymmetry change during solar magnetic field reversal measured by PAMELA. 1h
The north-south asymmetry of galactic cosmic rays has been measured in the PAMELA experiment during the time period 2010-2014. During this period the solar magnetic field reversed, which gave the opportunity to follow the variation of the asymmetry effect. The variation of the ratio of high-energy cosmic rays arriving from the North and from the South has been measured with the aid of the PAMELA calorimeter. The solar magnetic field polarity flip took place during part of this time interval, and the value of this ratio was found to change during the same time. The result thus confirms the connection between the North-South particle flux asymmetry and the solar magnetic field.
Speaker: Dr Alexander Karelin (NRNU MEPhI)
The Sites of the Latin American Giant Observatory 1h
The Latin American Giant Observatory (LAGO) is an extended cosmic ray observatory, which consists of a wide network of water Cherenkov detectors (WCDs) located in nine different countries. The geographic distribution of the LAGO sites, with different altitudes and geomagnetic rigidity cut-offs, combined with the new electronic system for control, atmospheric sensing and data acquisition on board each detector, allows a variety of astrophysics, space physics and atmospheric physics studies at regional scale. This work describes the LAGO sites and the capabilities of the LAGO detection network spanned across Latin America.
The study on the potential of muon measurements on the determination of the cosmic ray composition using a new fast simulation technique 1h
In this work we study the energy evolution of the number of muons in air showers. Motivated by future plans for UHECR experiments, the analysis developed here focuses on how the evolution of the moments of the shower observable distributions (Xmax and the number of muons at ground) can be used to assess the validity of a mass composition scenario, surpassing the current uncertainties in the shower description. The cosmic ray composition is an essential ingredient for an astrophysical interpretation of the data. However, the inference of composition from air shower measurements is limited by the theoretical uncertainties on the high energy hadronic interactions. Statistical analyses using the energy evolution of different observables, like the moments of the Xmax and of the number of muons distributions, can provide an efficient method to surpass the limitations imposed by the uncertainties in hadronic interaction models and provide more reliable information about the cosmic ray abundances. A new technique is presented here to generate a large set of simulated shower observables while minimizing computer processing time. Fast algorithms to simulate the longitudinal development of the shower (i.e. CONEX) have long been available. However, the number of muons is measured along the lateral development of the shower, which implies that three-dimensional simulations are needed (i.e. CORSIKA). This paper presents a parameterization of the main shower characteristics that can be used to simulate the muon lateral distribution on the ground using fast simulation algorithms. The parametrization was used in CONEX to produce a large library of showers. Xmax and the lateral distribution of muons were simulated. These showers were used to explore and discriminate among hypothetical astrophysical scenarios of mass composition.
Speaker: Mario Pimenta (LIP Laboratorio de Instrumentaco e Fisica Experimental de Particulas)
The TUS orbital detector simulation 1h
The TUS space experiment is aimed at studying the energy spectrum and arrival distribution of UHECR in the energy range above 10^20 eV by measuring the EAS fluorescence radiation in the atmosphere. The TUS mission is planned for launch at the end of 2015 on the dedicated "Lomonosov" satellite. The TUSSIM program package was developed to simulate the TUS detector performance, including the Fresnel mirror optical parameters, the light concentrator of the photo detector, and the front-end and trigger electronics. In order to investigate the detector response, we employ the ESAF software package of the JEM-EUSO experiment for the fluorescence radiation of EAS. The trigger efficiency depends crucially on the background level, which changes from ~0.2×10^6 to ~15×10^6 ph/(m^2 microsec sr) between moonless and full moon nights, respectively. The TUSSIM algorithms are described and the expected TUS statistics are presented for 5 years of data collection from a 500 km sun-synchronous orbit, taking into account the change of the background light intensity during the space flight.
Speaker: Dr Leonid Tkachev (JINR, Dubna)
Time asymmetries in the Surface Detector signals of the Pierre Auger Observatory. 1h
The asymmetry in the risetime of signals in Auger surface detector stations with respect to the direction of an incoming air shower is a source of information on shower development. The asymmetry is due to a combination of the longitudinal evolution of the shower and geometrical effects related to the angles of incidence of the particles into the detectors. The magnitude of the effect depends upon the zenith angle and state of development of the shower and thus provides a novel observable sensitive to the mass composition of cosmic rays above $4 {\times} 10^{18}$ eV. By comparing measurements with predictions from shower simulations, we find for both of our adopted models of hadronic physics (QGSJETII-04 and EPOS LHC) that the mean cosmic ray mass increases with energy, as has been inferred from other studies. However the absolute values of the mass are dependent on the shower model and on the range of distance from the shower core selected. Thus the method has uncovered further deficiencies in our understanding of shower modelling that ought to be resolved before the mass composition can be inferred from $(\sec\theta)_{max}$.
Speaker: Ignacio Minaya
Poster_ICRC2015_405.pdf
Transition radiation at radio frequencies from ultra-high energy neutrino-induced showers. 1h
Detection of transition radiation from neutrino-induced showers escaping a dense medium is a promising technique which might be employed in future generations of ultra-high energy neutrino detectors. Using the well-known Zas-Halzen-Stanev (ZHS) Monte Carlo simulation, we have computed the electric field created by showers crossing a dense medium-air interface. Our calculations show that transition radiation is sizeable in a wide solid angle range with full coherence up to $\sim$ 1 GHz. These properties could make possible the design of large aperture detectors with low signal threshold. The work reported here represents a stepping stone for future dedicated investigations of particular experimental setups.
Speaker: Pavel Motloch (University of Chicago)
Ultra-High Energy Air Shower Simulation without Thinning in CORSIKA 1h
Interpretation of EAS measurements strongly depends on detailed air shower simulations. One of the big limitations is the calculation time of Monte-Carlo programs like CORSIKA at very high energies. Thinning algorithm has been introduced in the past to reduce the computation time and disk space of the output at the price of the loss of small scale structures in simulated air showers. Thanks to the newly developed parallelization scheme and special tools to study multiple thinning level for a given shower on a limited disk space, it is now possible to compare thinned and unthinned simulation of a single shower to quantify these losses. Preliminary results will be presented together with the details of the last release of CORSIKA.
CORSIKA_ICRC2015.pdf
Understanding the anisotropy of cosmic rays at TeV and PeV energies 1h
The anisotropy in cosmic-ray arrival directions in the TeV-PeV energy range shows both large and small-scale structures. While the large-scale anisotropy may arise from diffusive propagation of cosmic rays, the origin of the small-scale structures remains unclear. We perform three-dimensional Monte-Carlo test-particle simulations, in which the particles propagate in both magnetostatic and electromagnetic turbulence derived from a three-dimensional isotropic power spectrum. However, in contrast to earlier studies, we do not use a backtracking method for the computation of the particle trajectories, and hence anisotropy must build up from a large-scale isotropic (or dipole) boundary condition. It has been recently argued that the turbulent magnetic field itself generates the small-scale structures of the anisotropy if a global cosmic-ray dipole moment is present. Our code is well suited to test that hypothesis. We also investigate the impact of a finite phase velocity of interstellar turbulence.
Speaker: Martin Pohl (DESY)
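As a toy counterpart to the turbulence used in the simulations described above, the following Python sketch synthesises a divergence-free, isotropic random magnetic field as a sum of plane-wave modes with amplitudes falling steeply with wavenumber (a simplified Kolmogorov-like weighting in the spirit of Giacalone & Jokipii). Mode count, spectral range and normalisation are arbitrary illustrative choices, not the setup of the abstract.

```python
# Synthetic isotropic, divergence-free turbulent magnetic field from plane waves.
import numpy as np

rng = np.random.default_rng(3)
N_MODES, K_MIN, K_MAX, GAMMA = 64, 2*np.pi/100.0, 2*np.pi/1.0, 11.0/6.0

# Random wave vectors with logarithmically spaced magnitudes and random directions.
k_mag = np.exp(rng.uniform(np.log(K_MIN), np.log(K_MAX), N_MODES))
k_dir = rng.normal(size=(N_MODES, 3))
k_dir /= np.linalg.norm(k_dir, axis=1, keepdims=True)
k_vec = k_mag[:, None] * k_dir

# Polarisation perpendicular to k guarantees div B = 0 for each mode.
rand = rng.normal(size=(N_MODES, 3))
pol = np.cross(k_dir, rand)
pol /= np.linalg.norm(pol, axis=1, keepdims=True)

amp = k_mag ** (-GAMMA)                 # steep amplitude fall-off (simplified weighting)
amp *= 1.0 / np.sqrt((amp**2).sum())    # normalise total variance to ~1
phase = rng.uniform(0, 2*np.pi, N_MODES)

def b_field(x):
    """Turbulent field (arbitrary units) at position x (same length unit as 1/k)."""
    arg = k_vec @ np.asarray(x, float) + phase
    return (amp[:, None] * pol * np.cos(arg)[:, None]).sum(axis=0)

print("B at origin:   ", b_field([0.0, 0.0, 0.0]))
print("B at (10,0,0): ", b_field([10.0, 0.0, 0.0]))
```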
Zenithal dependence of muon intensity 1h
The zenithal dependence of the muon intensity reaching the Earth's surface is well known to be proportional to cos^n(theta). Generally, for practical purposes and simplicity in calculations, n is taken as 2. However, compilations of measurements show a dependence on the geographical location of the experiments as well as on the muon energy range. Since analytical solutions appear to be increasingly less necessary because of the greater accessibility of low-cost computational power, an accurate and precise determination of the exponent n under different conditions can be useful in the calculations needed to estimate signals and backgrounds, both for surface and underground experiments. In this work we discuss a method for measuring n using a simple muon telescope, and the results obtained for measurements taken at Campinas (SP), Brazil (22° 54' W, -41° 03', 854 m a.s.l.) and at Fermilab, Batavia (IL), United States (41.8319° N, 88.2572° W, 220 m). After validation of the method, we intend to extend the measurements to more geographic locations, owing to the simplicity of the method, and thus collect more values of n than currently exist in compilations of general data on cosmic rays.
Speaker: Ms Monica Nunes (UNICAMP)
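The exponent extraction described in the abstract above can be reduced to a simple fit once count rates at several zenith angles are available: taking logarithms turns I(theta) = I0 cos^n(theta) into a straight line. The Python sketch below does this on synthetic counts (unweighted for brevity); the rates, angles and the true n are made up for illustration and are not the measurements of the abstract.

```python
# Fit the exponent n of I(theta) ~ cos^n(theta) from synthetic telescope counts.
import numpy as np

theta_deg = np.array([0., 15., 30., 45., 60.])
theta = np.radians(theta_deg)

rng = np.random.default_rng(4)
true_n, I0 = 2.1, 1000.0
counts = rng.poisson(I0 * np.cos(theta) ** true_n)   # counts per unit time and solid angle

# Linear fit of log(counts) vs log(cos theta): slope = n, intercept = log I0.
x = np.log(np.cos(theta))
y = np.log(counts)
n_fit, logI0 = np.polyfit(x, y, 1)
print(f"fitted n = {n_fit:.2f}, I(0) = {np.exp(logI0):.0f}")
```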
Poster 1 DM and NU Amazon Foyer Terrace
A dual-PMT optical module (D-Egg) for IceCube-Gen2 1h
The next upgrade of the IceCube Neutrino Observatory (IceCube-Gen2) will enhance the detection capability for neutrinos with energies of a few hundred TeV or greater through an increased instrumented volume in the glacial ice. Enhancing the optical sensor performance in detecting ultraviolet photons can be a key factor for IceCube-Gen2 to achieve a higher sensitivity, as more Cherenkov light is expected at short wavelengths. We have developed an optical module housing two 8" photomultiplier tubes (PMTs) in a UV-transparent, oval-shaped glass vessel. The two high-QE PMTs are installed facing up and down so that the resulting angular acceptance is more uniform. This uniformity of optical acceptance further improves the downward-going event detection and the background veto efficiency compared to the current IceCube optical sensors. In addition, the improvements in the UV transmittance of the housing glass and the inner gel lead to an improvement of the photon detection efficiency by a factor of four at wavelengths shorter than 340 nm. Here, the initial performance of the first prototype D-Egg module is reported. We also present simulation studies of the IceCube-Gen2 performance with the new dual-PMT modules.
Speaker: Lu Lu (Chiba University)
ICRC_D-Egg_Poster_Lu.pdf
A fussy revisitation of antiprotons as a tool for Dark Matter searches 1h
Antiprotons are regarded as a powerful probe for Dark Matter (DM) indirect detection, and indeed current data from PAMELA have been shown to lead to stringent constraints. However, in order to exploit their constraining/discovery power properly, and especially in anticipation of the exquisite accuracy of upcoming data from AMS, great attention must be paid to effects (linked to their propagation in the Galaxy) which may be perceived as subleading but actually prove to be quite relevant. We revisit the computation of the astrophysical background and of the DM antiproton fluxes fully including the effects of: diffusive reacceleration, energy losses including the tertiary component, and solar modulation (in a force field approximation). We show that their inclusion can somewhat modify the current bounds, even at large DM masses, and that a wrong interpretation of the data may arise if they are not taken into account. The numerical results for the astrophysical background are provided in terms of fit functions; the results for Dark Matter are incorporated in the new release of the PPPC4DMID.
Speaker: Mathieu Boudaud (LAPTh Annecy France)
A method of electromagnetic shower identification by using isolated bars with the DAMPE BGO calorimeter 1h
A method is proposed for electron/hadron discrimination with the 3D imaging BGO calorimeter of the DAMPE (DArk Matter Particle Explorer) experiment. The technique uses isolated bars, which are extracted by comparing each bar to its neighbouring bars in the same layer. We find that the energy distribution and location of isolated bars are highly sensitive to the type of interaction of the incident particle. Based on a Monte Carlo investigation of the characteristics of isolated bars, we demonstrate a particle identification algorithm that can efficiently distinguish electromagnetic showers from hadronic showers. The method is verified using beam test data taken at the CERN PS and SPS.
Speaker: Chi WANG (USTC)
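To make the isolated-bar idea of the abstract above concrete, here is a hypothetical Python sketch: a bar is flagged as isolated when it is above a hit threshold while its immediate neighbours in the same layer are essentially empty. The thresholds, the 14x22 geometry and the summary variable are assumptions for illustration only, not the DAMPE algorithm.

```python
# Hypothetical isolated-bar finder for a layered bar calorimeter.
import numpy as np

def isolated_bars(energy, hit_thr=5.0, neighbour_thr=0.5):
    """energy: 2D array (layers x bars) of deposited energy in MeV.
    Returns a boolean mask marking bars above hit_thr whose left and right
    neighbours in the same layer are both below neighbour_thr."""
    e = np.asarray(energy, float)
    padded = np.pad(e, ((0, 0), (1, 1)))               # pad bar index to handle edges
    left, right = padded[:, :-2], padded[:, 2:]
    return (e > hit_thr) & (left < neighbour_thr) & (right < neighbour_thr)

rng = np.random.default_rng(5)
energy = rng.exponential(0.2, size=(14, 22))           # diffuse hadronic-like deposits
energy[6, 3] = 40.0                                    # a lone energetic bar
energy[6, 2] = energy[6, 4] = 0.0                      # empty neighbours
mask = isolated_bars(energy)
frac = energy[mask].sum() / energy.sum()
print(f"isolated bars: {mask.sum()}, energy fraction in isolated bars: {frac:.2f}")
```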
A Precision Optical Calibration Module for IceCube-Gen2 1h
A next generation of IceCube is under design, targeting the Precision IceCube Next Generation Upgrade (PINGU) for the neutrino mass ordering and an extended array for astrophysical neutrino sources. A new level of precision is needed in order to guarantee improved performance with respect to IceCube. A better calibration system will enable a better understanding of the ice and will therefore significantly reduce systematic effects. We present a new instrument called the Precision Optical Calibration Module (POCAM). By keeping the outer topology identical to that of the IceCube Digital Optical Module (DOM), cost-effective construction and deployment are ensured. The design of the POCAM is based on the principle of an inverted integrating sphere. An appropriately placed LED, in combination with a diffusing layer on the inside of the sphere, results in an isotropic light emission from the apertures in the spherical housing. The output of the LED is monitored in situ to high precision, which ensures control over the output from the apertures. The POCAM has been simulated and tested in the framework of Geant4. A prototype POCAM is under construction. We will report on the status of the POCAM R&D.
Speaker: Kai Krings (Technische Universität München, Physik-Department)
Acoustic positioning system for KM3NeT 1h
KM3NeT is the next generation neutrino telescope in the Mediterranean Sea employing the technique of Cherenkov photon detection. The Acoustic Positioning System (APS) is a mandatory sub-system of KM3NeT that must provide the position of the telescope's mechanical structures in a geo-referenced coordinate system. The APS is important for a safe and accurate deployment of the mechanical structures and, for the sake of the science, for a precise reconstruction of neutrino-induced events. The KM3NeT APS is composed of three main sub-systems: 1) an array of acoustic receivers (hydrophones and piezos) rigidly connected to the telescope mechanical structures; 2) a Long Base-Line (LBL) of acoustic transmitters (beacons) and receivers, anchored on the seabed in known positions; 3) a farm of PCs for the acoustic data analysis, on shore. On shore, the positions of the acoustic receivers are calculated by measuring the ToF (Time Of Flight) of the LBL beacons' signals at the acoustic receivers, thus determining, via multi-lateration, the position of the acoustic receivers with respect to the geo-referenced LBL. The synchronized and syntonized electronics and the data transmission/acquisition allow the latencies of the whole data acquisition chain to be calculated with an accuracy better than 100 ns. The APS, in combination with compass and tilt, pressure, current and sound velocity data, is expected to measure the positions of the digital optical modules in the deep sea with an accuracy of about 10 cm. Since data are continuously transmitted to shore and distributed to the local data acquisition network at the shore station, acoustic data are also available for Earth and Sea science users. The KM3NeT APS is also an excellent tool to study the feasibility of a neutrino acoustic detector and a possible correlation between acoustic and optical signals.
Speaker: Piera Sapienza (INFN)
poster_ICRC_VIOLA.pdf
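The multilateration step described in the APS abstract above amounts to a small nonlinear least-squares problem: find the receiver position whose distances to the long-baseline beacons, divided by the sound speed, best reproduce the measured times of flight. The Python sketch below solves a toy version with made-up geometry, a constant 1500 m/s sound speed and 20 microsecond timing noise; it is not the KM3NeT software.

```python
# Toy time-of-flight multilateration with a handful of seabed beacons.
import numpy as np
from scipy.optimize import least_squares

C_SOUND = 1500.0                                  # m/s, nominal deep-sea value (assumption)

beacons = np.array([[0., 0., 0.], [600., 0., 5.], [0., 600., -3.], [600., 600., 2.]])
true_pos = np.array([250., 320., 400.])           # unknown receiver position (for the demo)

rng = np.random.default_rng(6)
tof = np.linalg.norm(beacons - true_pos, axis=1) / C_SOUND
tof += rng.normal(0.0, 20e-6, size=tof.size)      # 20 microsecond timing noise

def residuals(p):
    """Difference between predicted and measured times of flight."""
    return np.linalg.norm(beacons - p, axis=1) / C_SOUND - tof

fit = least_squares(residuals, x0=np.array([300., 300., 300.]))
print("reconstructed position:", np.round(fit.x, 2),
      " error [m]:", np.round(np.linalg.norm(fit.x - true_pos), 3))
```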
Boosting the boost: the effect of tidal stripping on the subhalo luminosity 1h
In the ΛCDM paradigm, structures form hierarchically, implying that large structures contain smaller substructures. These so-called subhalos can enhance the dark matter annihilation signal that one expects to see from a given host halo, an effect referred to as the boost factor. In the literature this boost factor is typically calculated assuming a density profile for the substructure, or analogously a concentration-mass relation, corresponding to that of field halos. However, since subhalos accreted in the gravitational potential of their host lose mass through tidal stripping and dynamical friction, they have a quite characteristic density profile, different from that of field halos of the same mass. In this work we attempt to quantify the effect of tidal stripping on the boost factor. We find that the boost factor increases by a factor of a few for host halos ranging from sub-galaxy to cluster masses.
Speaker: Richard Bartels (University of Amsterdam)
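Schematically, the boost factor discussed above is an integral of the subhalo mass function weighted by a per-subhalo annihilation luminosity, normalised to the smooth host emission. The Python sketch below evaluates such an integral with placeholder power laws; the indices, normalisations and mass limits are invented for illustration and do not encode the tidally stripped profiles studied in the abstract.

```python
# Schematic boost-factor integral with placeholder power laws.
import numpy as np

M_HOST = 1e12            # host halo mass in solar masses (placeholder)
L_HOST = 1.0             # smooth host luminosity, arbitrary units

def dN_dm(m, norm=1e-2, alpha=1.9):
    """Subhalo mass function dN/dm ~ m^-alpha (placeholder normalisation)."""
    return norm * (m / M_HOST) ** (-alpha) / M_HOST

def L_sub(m, beta=0.8):
    """Per-subhalo annihilation luminosity, assumed to scale as m^beta (placeholder)."""
    return L_HOST * (m / M_HOST) ** beta

def boost(m_min=1e-6, m_max=0.1 * M_HOST, n=4000):
    """1 + (1/L_host) * integral dm dN/dm L_sub(m), evaluated on a log grid."""
    m = np.logspace(np.log10(m_min), np.log10(m_max), n)
    integrand = dN_dm(m) * L_sub(m) * m        # extra factor m from integrating in ln(m)
    lnm = np.log(m)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(lnm))
    return 1.0 + integral / L_HOST

print(f"boost factor with these toy numbers: {boost():.1f}")
```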
Calibration, performances and tests of the first detection unit of the KM3NeT neutrino telescope 1h
KM3NeT is the next generation neutrino telescope being installed in the Mediterranean Sea. The first detection unit of the telescope is ready for installation in the deep Mediterranean Sea in the summer of 2015. Eighteen digital optical modules have been mounted on a vertical string for the detection of the Cherenkov light emitted by muons induced by up-going neutrinos. This paper reports on the integration and calibration of the optical modules and of the full detection unit, as well as the future installation in the deep sea and the on-shore operation. The additional information provided by the new type of photo-detection units, compared to the old generation of optical modules, is also discussed.
Speaker: Alexandre Creusot (Universite de Paris VII (FR))
Confronting recent AMS-02 positron fraction and Fermi-LAT Extragalactic $\gamma$-ray Background measurements with gravitino dark matter 1h
The positron fraction measured by the space-based detectors PAMELA, {\it Fermi}-LAT and AMS-02 presents anomalous behaviour as the energy increases. In particular, AMS-02 observations provide compelling evidence for a new source of positrons and electrons. Its origin is unknown: it could be non-exotic (e.g. pulsars), dark matter (DM), or perhaps a mixture. We test the gravitino of bilinear R-parity violating supersymmetric models as this source. As the gravitino is a spin 3/2 particle, it offers particular decay channels, $W^{\pm}l^{\mp}_i$, $Z\nu_i$, and $H\nu_i$. We compute the electron, positron and $\gamma$-ray fluxes produced by each gravitino decay channel as they would be detected at the Earth's position. Combining the flux from the different decay modes we can fit the AMS-02 measurements of the positron fraction, as well as the electron and positron fluxes, with a gravitino dark matter mass in the range $1-2$ TeV and a lifetime of $\sim 1.0-0.8\times 10^{26}$ s. The high-statistics measurement of the electron and positron fluxes, and the flattening in the behaviour of the positron fraction recently found by AMS-02, allow us to determine that the decay mode preferred by the fit is $W^{\pm}\tau^{\mp}$, unlike in previous analyses. We then study the viability of these scenarios through their implications for $\gamma$-ray observations. We set limits on the gravitino lifetime using the Extragalactic $\gamma$-ray Background recently reported by the {\it Fermi}-LAT Collaboration and a state-of-the-art model of its known contributors. These limits exclude the gravitino parameter space which provides an acceptable explanation of the AMS-02 data. Therefore, we conclude that the gravitino of bilinear R-parity violating models is ruled out as the unique primary source of electrons and positrons needed to explain the rise in the positron fraction.
Speaker: German Gomez-Vargas (Pontifical Catholic University of Chile)
Design studies for a neutrino telescope based on optical fiber hydrophones 1h
Acoustic detection may provide a way to observe ultra-high energy cosmic neutrinos, i.e. energies above 10^18 eV, and their extra-galactic sources [1, 2]. The expected flux of cosmic neutrinos with ultra-high energy is low, so that large scale neutrino telescopes are needed for this emerging field of astroparticle physics. Using the acoustic signals induced by a neutrino interaction in water (or ice) has the advantage that sound can travel for many kilometers with only small attenuation in the relevant frequency range. A hydrophone network that uses the sea as a detection medium may therefore be the solution to detect ultra-high energy neutrinos. It has been advocated that fiber optic hydrophone technology is a promising means to establish a sensitive, cost-effective and large scale sensor network [3]. In this technology several hydrophone sensors are integrated on an optical fiber. The sensors transform the acoustic pressure into strain in the fiber. Subsequently, this strain causes a wavelength shift of the light that travels through the fiber, which is sensed using an interrogator. Hydrophones based on optical fibers provide the required sensitivity to detect the small signals from neutrinos. At the same time, optical fibers form a cost-effective and straightforward way to install a large scale network. In this talk we discuss the system design for a fiber optic hydrophone network. We provide a flow-down from the scientific objectives to the instrument requirements. This has led to the design of a new and improved hydrophone sensor. Measurements to characterize the sensor and to show its performance will be presented. In addition the performance of the interrogator is discussed and measurements are shown, leading to an overall performance prediction of the technology. [1] G. A. Askaryan. Acoustic recording of neutrinos. Zemlia i Vselennaia, 1:13–16, 1979. [2] J. G. Learned. Acoustic radiation by charged atomic particles in liquids: An analysis. Phys. Rev. D, 19:3293–3307, June 1979. [3] E. J. Buis et al. Fibre laser hydrophones for cosmic ray particle detection. Journal of Instrumentation, 9(03):C03051, 2014.
Speaker: Dr Ernst-Jan Buis (TNO)
Development of an automatic test system for the PMTs used in the BGO ECAL of DAMPE 1h
An automatic system has been developed for the batch testing of the photomultiplier tubes (PMTs) of the BGO electromagnetic calorimeter (ECAL) of the Dark Matter Particle Explorer (DAMPE). There are 616 PMTs (Hamamatsu R5610A-01) used in the BGO ECAL, which are critical for the realization of a high dynamic range readout and a high precision measurement of the scintillation light from the BGO crystals. In order to cover the large dynamic range of the DAMPE energy measurement, signals are read out from three dynodes of the PMTs. The charge ratios of the dynodes are of paramount importance to the energy reconstruction of high energy incident particles, so all the PMTs must be tested and calibrated. In addition, considering the high reliability and quality requirements of a space-borne experiment, over 800 PMTs were tested during the mass production and screening procedure, for both the Qualification Model and the Flight Model. Therefore, a light-emitting diode (LED) based system was designed to test the performance of the PMTs automatically.
The test system is composed of a signal generator, an LED driver module, a dark box, and a readout system which consists of a front end electronics (FEE) board, a data acquisition (DAQ) board and data acquisition software based on LabWindows/CVI. An arbitrary waveform generator drives the LED source illuminating 22 PMTs through optical fibers in one dark box. Then 66 dynode signals are read out by a FEE board, sent to the DAQ module, stored in the computer and finally analyzed with a ROOT program. As two dark boxes can be controlled simultaneously by the readout system, it takes about 30 minutes to test 44 PMTs at a time, which reduces the workload greatly and keeps the project on schedule. The details of this system and the test results are presented in this paper.
Speaker: Jianing Dong (USTC)
poster-jndong.pdf
poster-jndong.pptx
Development of new data acquisition system at Super-Kamiokande for nearby supernova bursts 1h
Super-Kamiokande (SK) is a 50-kiloton water Cherenkov detector. It is one of the most sensitive neutrino detectors and can be used for supernova observations by detecting supernova burst neutrinos. It was recently reported that Betelgeuse (640 ly away) had shrunk by 15% in 15 years (C. H. Townes et al., 2009). Although this report does not immediately imply a supernova explosion of Betelgeuse, it drew attention to the possibility of a nearby supernova. A simulation study based on the Livermore model predicts a neutrino event rate of about 30 MHz during a burst from a supernova within a few hundred light years. The current SK data acquisition (DAQ) system can record only the first 20% of these events, and a large fraction of the data afterwards would be lost. To overcome this problem, we developed a new DAQ system to record the number of hit PMTs. This system enables us to store high-rate events and study the time profile of the number of neutrinos emitted by the supernova. The new system uses the number of hits from the existing front-end electronics modules as inputs and is synchronized with them. Therefore, we can easily correlate the data from the new system and the existing system. The data are transferred to the computers via Ethernet with SiTCP. High-frequency detailed data are stored for 1 minute in 4 GB of DDR2 memory and are transferred when a supernova burst is detected. The summarized data are constantly read out by the computers and stored on disk for a week. We will monitor the event rate with these data and pre-scale the data of the existing DAQ system. The controlled pre-scaling enables us to measure the energy spectrum. The system is now under commissioning. We will report the status of the operation.
Speaker: Asato Orii (University of Tokyo)
Development of the time domain simulation of impulsive radio signals for ARAcalTA 1h
The Askaryan effect is the coherent radio emission from the electron excess in a particle cascade. ARA (Askaryan Radio Array) is being built to observe Askaryan radiation from ultra high energy neutrino (E > 10 PeV) induced showers in the ice around the South Pole. In order to study further the characteristics of the coherent emission, and also to validate the response of the ARA detection system, we set up a replica of the ARA experiment, ARAcalTA. We used the electron linear accelerator at the Telescope Array site to shoot 40 MeV electron bunches into an ice target; the electron excess in the ice produces the coherent radiation that is detected by the ARA sensors. Because of the impulsive nature of the expected signal, we developed a simulation chain entirely in the time domain (instead of the frequency domain). We present the simulation, combining Geant4 particle tracking with a particle-by-particle radio emission calculation. These results are in turn linked to the detector calibration and simulation to obtain the final expected waveform. We demonstrate that, in the absence of other backgrounds, the coherent radiation can be observed and characterized with ARAcalTA.
Speaker: Keiichi Mase (Chiba University)
Development of TRBs for Silicon Tracker Detector of DAMPE satellite 1h
The Silicon Tungsten Tracker (STK) is a detector of the DAMPE satellite that measures the incidence direction of high energy cosmic rays. It consists of 6 layers of silicon micro-strip detectors interleaved with tungsten converter plates. The entire STK contains 73,728 readout channels and is read out on an external trigger with an average rate of 50 Hz. It is a great challenge for a space mission that all the data acquisition (DAQ) tasks of detector signal digitization, data processing and transfer must be finished within a dead time of 3 milliseconds. In order to meet these requirements, 8 identical Tracker Readout Boards (TRBs) were developed to control and read out the signals of the front-end Application Specific Integrated Circuits (ASICs). The 8 TRBs work simultaneously on every trigger. In each TRB there are 2 Field Programmable Gate Arrays (FPGAs) and 48 serial ADCs to process the 144 front-end ASICs. An SRAM is also adopted in each TRB as a data buffer. LVDS and RS422 are used for scientific data and telemetry communication with the payload DAQ. Benefiting from the FPGAs' rich resources and their ability to work in parallel, the data processing, which includes pedestal subtraction, common noise subtraction, cluster finding and data compression, is realized inside the two FPGAs. The hardware and software of the TRB readout electronics for the STK are introduced in this poster.
Speaker: zhang fei (IHEP)
Fiber laser design and measurements for fiber optical hydrophones in their application for ultra-high energy neutrino detection 1h
The detection of ultra-high energy neutrinos with energies above 10^18 eV requires a neutrino telescope that is at least an order of magnitude larger than what has been achieved today [1]. A potential technology for a large scale neutrino telescope, sensitive enough to detect the weak thermo-acoustic signals induced by cosmic rays in water, is offered by fiber optical hydrophones [2]. Optical fibers form a natural way to create a distributed sensing system in which several transducers are attached to a single fiber. The detection system in this case consists of several transducers, erbium-doped fiber lasers and an interferometric interrogator. Next to the advantage of having multiple sensors on a single fiber, this technology has a low power consumption and no electromagnetic interference with other read-out electronics. Maybe even more important, fiber optics technology provides a cost-effective and straightforward way to implement a large number of hydrophones. In this paper we show the results of investigations of one of the key components of the technology, i.e. the optical fiber laser. For the targeted application in a fiber optical hydrophone, the fiber laser technology requires development beyond the present state of the art. In this light, design studies of the various laser types and laser geometries have been carried out and trade-offs made, supported by lab measurements. Moreover, the multiplexing and cross-talk between several lasers on a single fiber have been investigated. Finally, the integration of the fiber laser into the acoustic transducer is shown. [1] E. Waxman. Neutrino astrophysics: A new tool for exploring the universe. *Science*, 315(5808):63-65, 2007. [2] E. J. Buis et al. Fibre laser hydrophones for cosmic ray particle detection. *Journal of Instrumentation*, 9(03):C03051, 2014.
Speaker: Vincent Baas (TNO)
Generation-2 IceCube Digital Optical Module and DAQ 1h
With recent exciting observations of astrophysical TeV- to PeV-energy neutrinos and new competitive measurements of GeV-energy atmospheric neutrino oscillations in the IceCube neutrino observatory at the South Pole, the design of a second generation Antarctic neutrino observatory, IceCube-Gen2, is underway. The design calls for two new instrumented volumes, one a denser in-fill array to extend the sensitivity of IceCube to energies low enough to gain sensitivity to the neutrino mass hierarchy, and one approximately ten times larger than IceCube, about 10 cubic kilometers in extent, to improve the sensitivity of IceCube to high energy astrophysical neutrinos and their sources. The detectors will share many common hardware elements and will leverage the successful hardware and software of the first generation experiment. They will feature updated data acquisition electronics using commercially available components and taking advantage of advances in embedded computing power. We will look at the status of the modernized in-ice Digital Optical Module (DOM) and the supporting surface electronics and data acquisition components.
Speaker: Michael DuVernois (University of Wisconsin)
GSL in Unified DE-DM Dominated LQC 1h
Thermodynamic study is a common approach to understanding the dark energy (DE) and dark matter (DM) riddle. This approach is still comparatively immature in loop quantum cosmology (LQC). Our present work studies the status of the generalized second law (GSL) in a unified DE-DM dominated LQC scenario.
Speaker: Dr Julie Saikia (Pub Kamrup College)
Isospin violating dark matter in Stückelberg portal scenarios 1h
In this work we study the phenomenological aspects of Stückelberg portals where the mediator between the Standard Model and the dark matter (DM) is a massive Z' boson. Those scenarios are well motivated by certain string theory constructions and naturally lead to isospin violating interactions of DM particles with nuclei. We show that within this construction the relations between the DM couplings to neutrons and protons for both spin-independent (fn/fp) and spin-dependent (an/ap) interactions are generically different from plus or minus 1 (i.e. different couplings to protons and neutrons), leading to a potentially measurable distinction from other popular portals. Finally, we perform a scan over all the parameters of the model and incorporate bounds from searches for dijet and dilepton resonances at the LHC, as well as LUX bounds on the elastic scattering of DM off nucleons, to determine the experimentally allowed values of fn/fp and an/ap. We also derive the phenomenological consequences of this kind of construction for direct and indirect detection signals.
Speaker: Victor Martin-Lozano (IFT-UAM/CSIC)
Moon shadow observation with the ANTARES neutrino telescope 1h
The ANTARES detector is the largest neutrino telescope currently in operation in the Northern Hemisphere. One of the main goals of the ANTARES telescope is the search for point-like neutrino sources. For this reason both the pointing accuracy and the angular resolution of the detector are important, and a reliable way to evaluate these performances is needed. One possibility to measure the angular resolution and the pointing accuracy is to analyse the shadow of the Moon, i.e. the deficit in the atmospheric muon flux in the direction of the Moon induced by the absorption of cosmic rays. Analysing the data taken between 2007 and 2012, the Moon shadow is detected with about 3σ significance in the ANTARES data. This is the first measurement of the ANTARES angular resolution and absolute pointing for atmospheric muons using a celestial calibration source. The presented results confirm the good pointing performance of the detector as well as the predicted angular resolution.
Speaker: Matteo Sanguineti (INFN Genova - Università di Genova)
Multi-PMT optical modules for IceCube-Gen2 1h
Following the first observation of astrophysical high-energy neutrinos by IceCube, planning for a next-generation neutrino detector at the South Pole is under way, which will expand IceCube's sensitivity both towards high and low neutrino energies. In parallel to upgrading the proven IceCube design, new optical sensor concepts are being explored which have the potential to further significantly enhance the performance of IceCube-Gen2. One concept pursued is the multi-PMT optical module which, in contrast to the "conventional" layout with a single 10" photomultiplier (PMT), features 24 3" PMTs inside a pressure vessel. This design results in several advantages such as an increased effective area, improved angular acceptance and directional sensitivity. The layout is based on the proven design of the KM3NeT optical module, which is now being adapted and enhanced for use in the deep ice. We present the current state of the hardware developments as well as first simulations investigating the impact of multi-PMT modules on the detector performance.
Speaker: Lew Classen (University Erlangen-Nuremberg)
Performance of the Read-out Electronics of the Qualification Model of DAMPE BGO Calorimeter in Environmental Tests and CERN Beam Experiment 1h
The DAMPE (DArk Matter Particle Explorer) is a scientific satellite mainly aimed at indirectly searching for dark matter in space. One critical sub-detector of the DAMPE payload is an electromagnetic calorimeter, which consists of 308 BGO (bismuth germanium oxide) crystal bars and 616 PMTs (photomultiplier tubes), for precisely measuring the energy of cosmic rays from 5 GeV to 10 TeV. The calorimeter, with 1848 readout channels and a dynamic range of 2×10^5 for each crystal bar, is equipped with a complex readout system which contains 16 front-end electronics boards (FEE) with a total power consumption of 26 W. The qualification model of the BGO calorimeter, as well as its readout electronics, has been constructed and has passed a series of environmental tests, such as an EMC (Electromagnetic Compatibility) test, a vibration test, a thermal cycling test and a thermal-vacuum test. The readout electronics system performed well and each electronics channel achieved a dynamic range of 0 to 12.5 pC with a resolution better than 3 fC and a nonlinearity of less than 1%. The test results showed that it can adapt to the harsh space environment. Later, in the fall of 2014, an accelerator beam experiment was successfully carried out at CERN with the PS and SPS facilities, which showed that the design specifications of the BGO calorimeter and its readout electronics were achieved.
Speaker: Dr Deliang Zhang (University of Science and Technology of China)
Performances and main results of the KM3NeT prototypes 1h
The KM3NeT collaboration aims to build a km3-scale neutrino telescope in the Mediterranean Sea. The first phase of construction comprises the deep-sea and onshore infrastructures at the KM3NeT-It (100 km offshore Capo Passero, Italy) and KM3NeT-Fr (40 km offshore Toulon, France) sites and the installation of 31+7 detection units. For the next step (KM3NeT 2.0), the completion of two detectors is planned as an extension of the detectors realized during the first phase of construction: ARCA for high energies (E > TeV) in Italy and ORCA for low energies (GeV range) in France. A prototype digital optical module made of 31 3'' PMTs was deployed in April 2013 inside the ANTARES neutrino telescope. This prototype, attached to an ANTARES string, has been operating since its installation. It validated the multi-PMT technology and demonstrated the capability to identify muons with a single optical module by searching for local time coincidences between PMTs inside the optical module. A prototype detection unit made of three optical modules was installed at the KM3NeT-It site. It was deployed in May 2014; it is active and taking data. More than 700 hours of data have been recorded and analyzed. The experience achieved with this prototype detection unit validates the submarine deployment procedures, the mechanics and electronics of the apparatus, and the data taking and analysis procedures. Through the study of $^{40}$K decay in sea water and dedicated data taking periods with flashing LED beacons, it is possible to time-calibrate the detector with nanosecond stability. A dedicated algorithm has been developed to select atmospheric muons and reconstruct their zenith angle with a resolution of about 8 degrees. An excellent agreement is found when comparing the detected muon signal with Monte Carlo simulations. The performance and results of the two prototypes will be presented.
Poster_PPM-DU.pdf
PINGU camera 1h
IceCube is the world's largest neutrino telescope, located at the geographic South Pole, which utilizes more than 5000 optical sensors to observe Cherenkov light from neutrino interactions. A hot water drill was used to melt holes in the ultra-pure Antarctic ice, in which strings of optical sensors were deployed at depths of 1500 m to 2500 m. The recent observation of high energy neutrinos consistent with an astrophysical origin, as well as measurements of neutrino oscillation parameters and world-leading searches for dark matter, have demonstrated the great potential of this detector type. Extensions to the IceCube detector are now being considered. Ice properties, including the refrozen hole ice, have emerged as a major source of uncertainty for event reconstruction. A camera system integrated with the optical sensor modules could be tremendously beneficial in order to better understand the ice properties and interpret calibration measurements. In this presentation we will describe the merits of the camera system and present a preliminary design. The preliminary design foresees a system of high resolution cameras located inside the DOM to study the refrozen and surrounding ice. The impact of the camera system on geometry calibration and on sensor location and orientation will be discussed.
Speakers: Carsten Rott (Sungkyunkwan University), Debanjan Bose (Sungkyunkwan University)
Progress on the development of a wavelength-shifting optical module 1h
We report on the development of a photon sensor sensitive to single photons that employs wavelength-shifting and light-guiding techniques to maximize the collection area and to minimize the dark noise rate. The sensor is tailored towards applications in ice-Cherenkov neutrino detectors using inert and cold, low-radioactivity and UV-transparent ice as a detection medium, such as IceCube-Gen2 or MICA. The goal is to decrease the energy threshold as well as to increase the energy resolution and the vetoing capability of the neutrino telescope, when compared to a setup with optical sensors similar to those used in IceCube. The detector captures photons with wavelengths between 250$\,$nm and 400$\,$nm. These photons are re-emitted at wavelengths above 400$\,$nm by a wavelength-shifter coating applied to a 90$\,$mm diameter polymer tube which guides the light towards a small-diameter PMT via total internal reflection. By scaling the results from smaller laboratory prototypes, the total efficiency of the proposed detector for a Cherenkov spectrum is estimated to exceed that of a standard IceCube optical module by a factor of 2.7. The status of the prototype development and the performance of its main components as well as the potential for future IceCube extensions will be discussed.
Speaker: Dustin Hebecker (Humboldt Universität zu Berlin / DESY)
Poster_post_ICRC_final.pdf
Self Consistent Simulation of Dark Matter Annihilation And Background 1h
Future space-based experiments such as CALET and DAMPE will measure the electron and positron cosmic-ray spectrum with better energy resolution and up to higher energies, making the detection of small features in the spectrum, which might originate from Dark Matter annihilation or decay in the galactic halo, possible. For a precise prediction of these features, the numerical cosmic-ray propagation code GALPROP is used; it was extended to calculate the flux at Earth from different Dark Matter scenarios with any given injection spectrum. The results from GALPROP for both the cosmic-ray background spectrum and the component from Dark Matter annihilation are strongly dependent on the energy bin size used in the calculation, because energy loss plays a major role in the propagation of electrons. A modification to partly compensate for the influence of the energy discretization on particles shifted in energy has been implemented in the code. The effect of this improvement is demonstrated with examples of the expected spectra for the cosmic-ray background in combination with several Dark Matter candidates, calculated at different energy binnings. http://www.crlab.wise.sci.waseda.ac.jp/eng/wp-content/uploads/downloads/2015/03/icrc.png This figure shows that the background electron flux is subject to a shift in power-law index due to the finite energy bin size, as shown by the difference between the results for a calculation with a bin size of 4% (magenta line) and 30% (orange dots) of the energy. In the result for the modified code (green dots) the change is compensated, giving results matching the finer energy binning. The AMS-02 results and a possible Dark Matter contribution (electron+positron channel, DM mass = 400 GeV, boost factor = 130) are shown in maroon and grey, respectively.
Speaker: Saptashwa Bhattacharyya (Waseda University)
Simulation studies of the expected proton rejection capabilities of CALET 1h
The CALorimetric Electron Telescope (CALET) is a Japanese-led international space mission by JAXA (Japan Aerospace Exploration Agency) in collaboration with the Italian Space Agency (ASI) and NASA. The instrument will be launched to the International Space Station in 2015. The major scientific goals for CALET are to measure the flux of cosmic-ray electrons (including positrons) from 1 GeV to 20 TeV, gamma rays to 10 TeV and nuclei with Z=1 to 40 up to 1,000 TeV. These measurements are essential to search for dark matter signatures, investigate the mechanism of cosmic-ray acceleration and propagation in the Galaxy and discover possible astrophysical sources of high-energy electrons near the Earth. The instrument consists of two layers of segmented plastic scintillators for the cosmic-ray charge identification, a 3 radiation length thick tungsten-scintillating fiber imaging calorimeter and a 27 radiation length thick lead-tungstate calorimeter. Protons are the largest source of background for the high-energy electron observation. As the ratio of protons to electrons increases at higher energies, a proton rejection power better than $10^5$ is necessary to measure the electron spectrum with a proton contamination below a few percent in the TeV energy region. In this work, a Monte Carlo based study of the proton rejection capability that CALET can achieve from GeV to TeV energies is presented. Both a standard analysis based on consecutive selection criteria and a multivariate analysis are applied to simulated samples of signal and background events. Finally, the resulting accuracy and signal-to-background ratio expected in the electron spectrum measurement are assessed.
Speaker: Roberta Sparvoli
Site Characterization and Detector Development for the Greenland Neutrino Observatory 1h
The PeV neutrinos discovered by IceCube are of astrophysical origin, and their progenitors could be any of several source classes, including active galactic nuclei, gamma-ray bursts, or pulsars. Such high-energy accelerators would produce neutrinos up to hundreds of PeV, which motivates the development of neutrino telescopes with the sensitivity, energy resolution, and pointing resolution required to distinguish among models of the IceCube neutrinos as well as cosmogenic neutrinos. Radio detection of Askaryan radiation from neutrino showers in ice is well-suited to the detection of the highest energy neutrinos, with degree-scale pointing resolution and the ability to build sparse arrays, but the energy threshold of current experiments is currently set by the temperature of the ice. The uncorrelated thermal noise can be averaged away by combining the signals from several antennas in a phased array. We report here on a June 2015 trip to Summit Station in Greenland for testing a phased array of dipoles, including the sensitivity of the array and background measurements of the site. We also discuss prospects for the Greenland Neutrino Observatory.
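The noise-averaging argument behind the phased array can be illustrated with a toy delay-and-sum beamformer: coherently summing N antenna channels grows an impulsive signal by N while uncorrelated thermal noise grows only by sqrt(N), so the voltage signal-to-noise ratio improves by roughly sqrt(N). The sketch below, with arbitrary sample rate, delays and pulse shape, illustrates the principle and is not the instrument's firmware.

import numpy as np

# Toy delay-and-sum beamformer: N antennas record the same short pulse with
# known geometric delays plus independent thermal noise.  All numbers below
# (sample rate, delays, pulse amplitude) are arbitrary illustration values.
rng = np.random.default_rng(1)
n_ant, n_samp, amp = 8, 1024, 2.0
t = np.arange(n_samp)
pulse = np.exp(-0.5 * ((t - 500) / 3.0) ** 2)         # unit-amplitude template at sample 500
delays = rng.integers(0, 40, size=n_ant)              # per-antenna delays in samples

waveforms = [amp * np.roll(pulse, d) + rng.normal(0.0, 1.0, n_samp)
             for d in delays]

# Coherent (phased) sum: undo the known delays before adding the channels.
phased = sum(np.roll(wf, -d) for wf, d in zip(waveforms, delays))

def snr_at_peak(wf, peak):
    return wf[peak] / np.std(wf[:400])                # pre-pulse region as noise estimate

print("single antenna SNR ~", round(snr_at_peak(waveforms[0], 500 + delays[0]), 1))
print("phased-array SNR  ~", round(snr_at_peak(phased, 500), 1))   # ~sqrt(8) higher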
Speaker: Stephanie Wissel (UCLA)
Software framework and reconstruction software of the DAMPE gamma-ray telescope 1h
An overview is given of the offline software framework and reconstruction software of the DAMPE (DArk Matter Particle Explorer) gamma-ray telescope. DAMPE is one of the five satellite missions in the framework of the Strategic Pioneer Research Program in Space Science of the Chinese Academy of Sciences, with a launch date scheduled for the fall of 2015. The telescope consists of a silicon-tungsten tracker-converter, comprising 6 layers of double-sided silicon-strip detectors interleaved with 3 layers of tungsten converters, a BGO calorimeter, a plastic scintillator serving as an anti-coincidence detector, and a layer of neutron detectors at the bottom of the calorimeter. The DAMPE analysis and reconstruction software is implemented on top of a custom-made software framework, where the core software is written in C++, while the management part is done in Python. We take advantage of the boost-python libraries, which provide the bridge between the core and the management part, allowing us to fully exploit the computational power of modern CPUs while keeping the framework flexible and easy to deploy. The building blocks of the framework are the algorithms, which are stacked together and configured in the job-option files. The geometry of the detector is implemented in the GDML format, through a direct conversion from the CAD drawings of the detector to the Geant4-compatible format. The data flow is handled by a dedicated input-output service based on ROOT. The simulation algorithms are implemented with the Geant4 toolkit. At the heart of the reconstruction software lies the pattern recognition for the initial track finding, which is refined further by the track-filtering algorithm, based on an adaptation of the Kalman filter technique. The software has been extensively tested during the beam-test campaigns at CERN in 2014-2015, proving that it can sustain the wide range of data-processing challenges encountered in a particle-physics experiment.
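The framework's central idea, a chain of algorithms stacked and configured from a job-option file with data passed through a shared event store, can be illustrated with a minimal Python sketch. The class names, option keys and event content below are invented for illustration and do not correspond to the actual DAMPE code.

# Minimal sketch of an algorithm-chain framework in the spirit described
# above: algorithms are stacked, configured from a job-option structure,
# and share an event store.  All names and options are invented.
class Algorithm:
    def __init__(self, **options):
        self.options = options
    def execute(self, event):          # each algorithm reads/writes the event store
        raise NotImplementedError

class HitClustering(Algorithm):
    def execute(self, event):
        event["clusters"] = [h for h in event["hits"] if h > self.options["threshold"]]

class TrackFinding(Algorithm):
    def execute(self, event):
        event["track_seed"] = sorted(event["clusters"])[-self.options["n_points"]:]

class JobManager:
    def __init__(self, job_options):
        registry = {"HitClustering": HitClustering, "TrackFinding": TrackFinding}
        self.chain = [registry[name](**opts) for name, opts in job_options]
    def run(self, events):
        for event in events:
            for alg in self.chain:
                alg.execute(event)
        return events

# "Job-option file" expressed as a plain Python structure.
job_options = [("HitClustering", {"threshold": 0.5}),
               ("TrackFinding", {"n_points": 3})]
events = [{"hits": [0.1, 0.7, 1.2, 0.9, 0.3]}]
print(JobManager(job_options).run(events)[0]["track_seed"])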
Speaker: Dr Andrii Tykhonov (Universite de Geneve (CH))
poster-ICRC-tykhonov.pdf
Space qualification of the Silicon Tungsten Tracker of DAMPE 1h
The Silicon Tungsten Tracker (STK) is one of the key payloads of the Dark Matter Particle Explorer (DAMPE), which is planned to be launched at the end of 2015. In order to verify the design of the STK, an Engineering Qualification Model (EQM) of the STK was developed in 2014 and subjected to several space-environment qualification tests, including a vibration test, shock test, thermal-vacuum test, thermal-balance test and thermal-cycling test. All the test results demonstrate the high reliability and good performance of the STK, and they have triggered the subsequent production of the flight model (FM).
Speaker: Wenxi Peng (IHEP)
Status and prospects for the Askaryan Radio Array (ARA) cosmogenic neutrino detector 1h
The Askaryan Radio Array (ARA) is an ultra-high energy >100 PeV cosmic neutrino detector which is in phased construction near the South Pole. ARA searches for radio Cherenkov-like emission from particle cascades induced by neutrino interactions in the ice using radio frequency antennas (~150-800MHz) deployed at a design depth of 200m in the Antarctic ice. A prototype ARA Testbed station was deployed at ~30m depth in the 2010-2011 season and the first three full ARA stations were deployed in the 2011-2012 and 2012-2013 seasons. We present the status of the array and plans for the near-term construction of a full ARA-37 detector with profound discovery potential for most models of cosmogenic neutrinos from 100 PeV to 100 EeV in energy.
The Calibration Units of the KM3NeT Neutrino Telescope 1h
KM3NeT is a network of deep-sea neutrino telescopes to be deployed in the Mediterranean Sea, that will perform neutrino astronomy and oscillation studies. It consists of three-dimensional arrays of thousands of optical modules that detect the Cherenkov light induced by charged particles resulting from the interaction of a neutrino with the surrounding medium. The performance of the neutrino telescope relies on the precise timing and positioning calibration of the detector elements. The exact location of optical modules (which is affected by sea currents) can be monitored through an acoustic positioning system, while external light sources are used to achieve the required sub-nanosecond time resolution and to measure water optical properties. Other environmental conditions which may affect light and sound transmission, such as water temperature, pressure and salinity, must also be continuously monitored. For these purposes, KM3NeT foresees the deployment of several dedicated Calibration Units (CUs), whose base will host the detector calibration devices (Laser beacon, acoustic emitter and hydrophone). A few of these CUs will additionally be equipped with an Instrumentation Unit with a semi-autonomous and recoverable inductive line supporting the environmental monitoring instruments. This contribution describes the technical design and construction of the first Calibration Unit, to be deployed on the French site as part of KM3NeT Phase 1, as well as the purpose and characteristics of the different instruments that it will support.
Speaker: Veronique Van Elewyck (Universite Paris Diderot)
The Dark Box instrument for fast automatic testing of the photomultipliers for KM3NeT 1h
Since the early days of experimental particle physics, photomultipliers have played an important role in detector design. In astroparticle physics research, too, photomultipliers are widely used, in particular in experiments employing the technique of the detection of Cherenkov photons. Currently, the KM3NeT Collaboration is building a water-Cherenkov neutrino telescope in the Mediterranean Sea based on next-generation optical modules with multiple low-price 3-inch photomultiplier tubes. In its final layout, the KM3NeT neutrino telescope will host several hundred thousand photomultipliers, which must be tested and calibrated during the production of the optical modules. To overcome a possible bottleneck in the production process caused by the testing and calibration of this massive number of photomultipliers for KM3NeT, we developed the Dark Box instrument to accelerate the process. The Dark Box setup is designed to provide fast, simultaneous, automatic testing of 62 photomultipliers to verify their compliance with the requirements on timing and ToT resolution and on the occurrence of spurious pulses. In addition, the Dark Box can easily be converted into a general instrument for testing and calibrating large numbers of photomultipliers other than those for KM3NeT. We report on the design and performance of the Dark Box instrument for the high-statistics measurement of the characteristics of photomultipliers and for their calibration.
Speaker: Paolo Piattelli (INFN)
The data acquisition system of the KM3NeT detector 1h
The KM3NeT neutrino telescope is part of a deep-sea research infrastructure being constructed in the Mediterranean Sea. The basic element of the detector is the Detection Unit, a 700 meter long vertical structure hosting 18 Digital Optical Modules (DOMs). The DOM comprises 31 3'' photomultiplier tubes (PMTs), various instruments to monitor environmental parameters, and the electronic boards for the digitization of the PMT signals and the management of data acquisition. Dedicated readout electronics have been developed and are installed inside each DOM, allowing the measurement of the arrival time and the duration of photon hits on each of the 31 photomultiplier tubes with a time resolution of 1 ns. Moreover, the data transmission system of the DOMs supports a data transfer rate of up to 250 Mbps, which corresponds to a photon-hit rate of 15 kHz on each PMT. Due to the extreme operating conditions of the abyssal site, the all-data-to-shore concept is used in order to minimize the complexity of the offshore detector. The processing of the data transmitted to shore is performed by the Trigger and Data Acquisition System (TriDAS). The networking infrastructure and computing resources are conceived to be modular and scalable in order to manage the full data rate from the final cubic-kilometer-scale telescope. The electronics and the DAQ system described in the poster are currently under test in the first Detection Unit deployed offshore Toulon and operated since spring 2015.
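With the all-data-to-shore concept, the per-DOM bandwidth is a simple product of PMT count, hit rate and hit size, which can be compared with the quoted 250 Mbps link budget. The sketch below does this arithmetic; the 6-byte hit size is an assumed compact (time + ToT + PMT-ID) format used only for illustration, not necessarily the real KM3NeT hit format, and the link budget also has to cover framing, monitoring data and rate excursions such as bioluminescence bursts.

# Back-of-the-envelope check of the per-DOM data rate under the
# all-data-to-shore concept.  The 6-byte hit size is an assumption used
# for illustration; the real KM3NeT hit format may differ.
N_PMT = 31                 # PMTs per DOM
HIT_RATE_HZ = 15e3         # photon-hit rate per PMT (from the text above)
BYTES_PER_HIT = 6          # assumed: e.g. coarse time + ToT + PMT id
LINK_MBPS = 250            # quoted per-DOM transfer capability

data_rate_mbps = N_PMT * HIT_RATE_HZ * BYTES_PER_HIT * 8 / 1e6
print(f"hit payload: {data_rate_mbps:.1f} Mbps of the {LINK_MBPS} Mbps link")
# Remaining capacity is taken up by protocol overhead, summary/monitoring
# data and rate fluctuations (e.g. bioluminescence bursts).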
Poster_Elec_DAQ.pdf
The electron spectrum from annihilation of Kaluza-Klein dark matter in the Galactic halo 1h
The Kaluza-Klein (KK) particles, which are the feasible candidate for the dark matter, produce electrons and positrons when they annihilate in the Galactic halo. When the electrons and positrons propagate in the Universe, their direction is randomaized by the Galactic magnetic field, and energy is reduced by some energy loss mechanisms. We calculate the electron and positron spectrum expected from KK particle annihilation to be observed at Earth, taking account of propagation effects in the Galaxy. We assume the lightest KK particle (LKP) in the mass range from 500 GeV to 1000 GeV is the dark matter consisting of the Galactic halo, and we treat the particle spectra from LKP annihilation which include electron-positron component from two-body decays and ``continuum'' emission. We calculate the effects of diffusion and energy loss in the Galaxy, and analyze the resulting spectra. These spectra strongly depend on the LKP mass and will be compared with recent observational data taking account of energy resolution of detectors. We can set some constraints for the boost factor of dark matter concentration in the Galactic halo. In addition, we will discuss the recent result on positron fraction based on our calculation.
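The main qualitative effect of the energy losses can be seen already in a toy steady-state calculation: for continuous injection Q(E) proportional to E^-p and a loss rate b(E) = b0 E^2 (synchrotron plus inverse Compton), the equilibrium spectrum is one power steeper than the injection. The sketch below verifies this numerically; the numbers are illustrative only, and the full calculation in the contribution also includes diffusion and the annihilation source distribution.

import numpy as np

# Toy steady-state spectrum under continuous injection and E^2 energy losses:
# N(E) = (1/b(E)) * integral_E^Emax Q(E') dE'.  Illustrative numbers only.
E = np.logspace(0, 4, 500)                       # GeV
p, b0 = 2.0, 1e-16                               # injection index, loss coefficient
Q = E ** (-p)
tail = np.array([np.trapz(Q[i:], E[i:]) for i in range(len(E))])
N = tail / (b0 * E ** 2)

sel = (E > 3) & (E < 100)                        # fit away from the upper cutoff
slope = np.polyfit(np.log(E[sel]), np.log(N[sel]), 1)[0]
print(round(slope, 2))                           # ~ -3: one power steeper than E^-2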
Speaker: Mr Satoshi Tsuchida (Ritsumeikan University)
The KM3NeT Multi-PMT Digital Optical Module 1h
The KM3NeT collaboration is currently constructing the first phase of a cubic-kilometer-scale neutrino detector in the Mediterranean Sea. The basic detection element, the Digital Optical Module (DOM), houses 31 three-inch PMTs inside a 17-inch glass sphere. This multi-PMT concept yields a factor of three increase in photocathode area compared to a design with a single 10-inch PMT, leading to a significant cost reduction. Moreover, this concept allows for an accurate measurement of the light intensity (photon counting) and offers directional information with an almost isotropic field of view. We will discuss these aspects and the enabling technologies, which include 3D-printed support structures and custom low-power PMT bases, which provide the HV and the digitization of the analog signal. An FPGA-based readout system transfers all sub-ns timestamped photon signals to shore via optical fibers. The DOM design has been validated and its physics potential has been proven in currently operational prototypes deployed at the French and Italian sites at 2500 m and 3500 m depth, respectively.
Speakers: Dr Daan van Eijk (Nikhef), R Bruijn (Nikhef)
The Mechanical structure and deployment procedure of the KM3NeT detection unit. 1h
In this paper we provide a detailed description of the mechanical structure of the 750 m high KM3NeT detection unit. The choices made for the different materials and their behaviour under the loads expected during deployment and during the lifetime of the experiment will be discussed, as will the motion of the unit under the influence of the sea currents. The unique method of deployment, which entails unfurling the unit from the seabed using a purpose-built launcher, will be described.
Speaker: Prof. Paul Kooijman (University of Amsterdam)
The observability of gamma-ray spectral features from Kaluza-Klein dark matter annihilation 1h
The lightest Kaluza-Klein particle (LKP), which appears in the theory of universal extra dimensions, is one of the good candidates for cold dark matter. We assume the LKP mass ranges from 500 GeV to 1000 GeV. We focus on the LKP annihilation modes which contain gamma-rays as final products. The gamma-ray spectrum from LKP annihilation has a characteristic peak structure near the LKP mass (``lines'') from two-body decays and continuum emission. Gamma rays do not lose energy during propagation after production near the galactic center, where a high dark matter concentration is expected, so they are easier to treat than electrons. We investigate the detectability of this peak structure by considering the energy resolution of near-future detectors, and calculate the expected count spectrum of the gamma-ray signal. If the LKP mass is heavy, the observed gamma-ray spectrum will show the peak clearly; in contrast, if the LKP mass is light, the constraint on the boost factor becomes strict. Detecting such a peak structure would be conclusive evidence that dark matter is made of LKPs.
The optical module of the Baikal-GVD neutrino telescope 1h
The BAIKAL-GVD neutrino telescope in Lake Baikal is intended for studying astrophysical neutrino fluxes by recording the Cherenkov radiation of the secondary muons and showers generated in neutrino interactions. The first stage of BAIKAL-GVD will be equipped with about 2400 optical modules. Each of these optical modules consists of a large area photomultiplier R7081-100 made by Hamamatsu Photonics and its associated electronics housed in a pressure resistant glass sphere. We describe the design of the optical module, the front-end electronics and the laboratory characterization and calibration before deployment.
Speaker: Bair Shaybonov (JINR)
Time and amplitude calibration of the Baikal-GVD neutrino telescope 1h
The first stage of the Baikal-GVD neutrino telescope will be composed of more than two thousand light sensors, Optical Modules (OMs), installed deep underwater in Lake Baikal. We describe the calibration methods developed, which use the OM LEDs, a calibration laser source, atmospheric muons, etc., and discuss the performance of these methods.
Time synchronization and time calibration in KM3NeT 1h
The KM3NeT neutrino telescope is a next-generation Cherenkov array containing thousands of optical modules being installed in the deep sea at depths larger than 2500 m and at more than 40 km distance from the shore. For precise event reconstruction, sub-nanosecond synchronization between modules is required. Its realization exploits the White Rabbit system to synchronize clocks between nodes through Ethernet over optical fiber. This system was modified for the KM3NeT architecture, which is designed for clock distribution based on a common broadcast line. The calibration procedure for electronics latencies, fiber-path asymmetries and the wavelength-dependent light velocity in the fiber is described. LED beacons installed on optical modules, laser beacons at the sea bottom and $^{40}$K decays are also used to monitor the detector time synchronization in situ. The application of the time calibration procedure to the first detection unit string with 18 optical modules and its performance will be presented.
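White Rabbit builds on the Precision Time Protocol, in which nodes exchange timestamped messages and solve for their clock offset and the link delay; the calibrated electronics latencies and wavelength-dependent fiber delays enter as an asymmetry term. The sketch below shows only that generic two-way arithmetic with made-up timestamps; it is not the KM3NeT broadcast implementation.

# Generic two-way time-transfer arithmetic (PTP-style), with a made-up
# asymmetry term standing in for calibrated latencies and fibre delays.
def sync_solution(t1, t2, t3, t4, asymmetry_ns=0.0):
    """t1: master send, t2: slave receive, t3: slave send, t4: master receive.
    asymmetry_ns = (master->slave delay) - (slave->master delay), from calibration."""
    round_trip = (t4 - t1) - (t3 - t2)
    delay_ms = (round_trip + asymmetry_ns) / 2.0       # master -> slave one-way delay
    offset = (t2 - t1) - delay_ms                      # slave clock minus master clock
    return offset, delay_ms

# Illustrative numbers (ns): ~200 us of fibre, slave clock 3.2 ns ahead.
print(sync_solution(t1=0.0, t2=200_004.2, t3=300_000.0, t4=499_996.3,
                    asymmetry_ns=1.5))   # -> offset ~ 3.2 ns, one-way delay ~ 200001 ns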
Speaker: Dr Mieke Bouwhuis (NIKHEF)
Poster 1 GA Mississippi Foyer
A data mining approach to recognizing source classes for unassociated gamma-ray sources 1h
The Fermi-LAT 3rd source catalog (3FGL) provides spatial, spectral, and temporal properties for 3033 gamma-ray sources. While 2041 sources in the 3FGL are associated with AGNs (58% of the total), pulsars (5%) and other classes (4%), 992 sources (33%) remain unassociated. To recognize source classes for the unassociated gamma-ray sources of the Fermi-LAT source catalogs, various data mining techniques have been applied, e.g. artificial neural networks and classification trees. As a robust alternative to these data mining techniques, we present the Mahalanobis-Taguchi (MT) method for recognizing source classes. The MT method creates a multidimensional Mahalanobis space from characteristic variables of a normal class (e.g. AGN) to separate sources of the normal class from those of the other classes using Mahalanobis distances. In this paper, we present the results of the source classification for the unassociated gamma-ray sources in the 3FGL obtained by applying the MT method.
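At its core the MT method relies on the Mahalanobis distance computed in a reference space built from the "normal" class only: sources whose distance exceeds a chosen threshold are flagged as not belonging to that class. A minimal sketch with invented feature values and threshold is shown below.

import numpy as np

# Minimal sketch of a Mahalanobis-distance classifier in the spirit of the
# MT method: the reference space is built from features of the "normal"
# class only.  Feature values and the threshold are invented for illustration.
def mahalanobis_space(reference_features):
    mean = reference_features.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference_features, rowvar=False))
    def distance(x):
        d = x - mean
        return float(np.sqrt(d @ cov_inv @ d))
    return distance

rng = np.random.default_rng(0)
agn_like = rng.normal([2.2, 0.5], [0.3, 0.1], size=(500, 2))   # e.g. (spectral index, variability)
dist = mahalanobis_space(agn_like)

candidate_agn = np.array([2.3, 0.55])
candidate_psr = np.array([1.2, 0.05])
threshold = 3.0                        # to be tuned, e.g. on a labelled subsample
for c in (candidate_agn, candidate_psr):
    print(c, "AGN-like" if dist(c) < threshold else "not AGN-like")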
Speaker: Prof. Kenji Yoshida (Shibaura Institute of Technology)
A major electronics upgrade for the H.E.S.S. Cherenkov telescopes 1-4 1h
A new time-dependent likelihood technique for detection of gamma-ray bursts with IACT arrays 1h
In imaging atmospheric Cherenkov telescope arrays (IACTs), the standard method of statistically inferring the existence of a source is based on the maximum likelihood method of Li&Ma (1983). We will present a new statistical approach, also based on maximum likelihood theory, which takes into account a priori knowledge of the source light curve. This approach is especially useful for observations of rapidly decaying gamma-ray bursts (GRBs). Using Monte Carlo simulations, the new maximum likelihood test statistic is evaluated under realistic conditions for GRBs observed by current-generation IACT arrays, and a moderate improvement in sensitivity is projected. To calculate the improvement, we conservatively assume that the Li&Ma integration time has been optimally chosen, which is not possible in reality without prior knowledge of the burst fluence. The sensitivity improvement depends on the decay index of the burst and the observing delay, but is projected to be approximately 30% for a typical observation near the threshold of detection (typical being defined as a burst observed with a 2-minute delay and decaying as a power law of index -1). An even larger improvement is projected for quickly observed, rapidly decaying GRBs. The method is shown to be relatively resilient to uncertainties in the light curve, as long as it still captures the decaying nature of the GRB flux. We will also discuss results established by using this technique to analyze VERITAS GRB observations.
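For reference, the baseline against which the new statistic is compared is the Li & Ma (1983) significance, their Eq. 17, computed from the ON and OFF counts and the exposure ratio alpha. A minimal implementation, with made-up counts, is given below.

import numpy as np

# Li & Ma (1983), Eq. 17: the standard IACT detection significance used as
# the baseline above.  The counts and alpha below are made-up example values.
def li_ma_significance(n_on, n_off, alpha):
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1.0 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

print(li_ma_significance(n_on=130, n_off=400, alpha=0.2))   # ~4.6 sigma for this example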
Speaker: Mr Ori Weiner (Columbia University)
Advanced models for AGN emission 1h
Active Galactic Nuclei have been in the focus of gamma-ray telescopes for the past years. With the ever-growing sample of AGN, the need for physically motivated, self-consistent modeling is also growing. The major questions to be answered by models are: What are the main constituents of AGN jets? What are the acceleration mechanisms? Are AGN possible accelerators of UHECR and possible sources of UHE neutrinos? We will present new modeling approaches for AGN which focus on self-consistency. Two types of models have emerged from our work: a homogeneous model containing acceleration via Fermi mechanisms, leptonic and photo-hadronic radiation mechanisms and time variability, and a spatially extended model containing the same radiation processes but a pitch-angle-resolved acceleration process. The results contain the radio signature of extended jets, with predictions for the motion of radio cores correlated with TeV emission, and also possible discrimination criteria for hadronic and leptonic radiation models.
Speaker: Felix Spanier (North-West University)
Analysis of GeV-band gamma-ray emission from SNR RX J1713.7-3946 1h
RX J1713.7-3946 is the brightest shell-type supernova remnant (SNR) of the TeV gamma-ray sky. Earlier Fermi-LAT results on the low-energy gamma-ray emission suggested that, despite large uncertainties in the background determination, the spectrum is inconsistent with a hadronic origin. We update the GeV-band spectra using improved estimates for the diffuse galactic gamma-ray emission and more than double the volume of data. We further investigate the viability of hadronic emission models for RX J1713.7-3946. We produced a high-resolution map of the diffuse Galactic gamma-ray background corrected for HI self-absorption and used it in the analysis of more than five years worth of Fermi-LAT data. We used hydrodynamic scaling relations and a kinetic transport equation to calculate the acceleration and propagation of cosmic rays in the SNR. We then determined the spectra of hadronic gamma-ray emission from RX J1713.7-3946, separately for the SNR interior and the cosmic-ray precursor region of the forward shock, and computed flux variations that would allow us to test the model with observations. We find that RX J1713.7-3946 is now detected by Fermi-LAT with very high statistical significance, and the source morphology is best described by that seen in the TeV band. The measured spectrum of RX J1713.7-3946 is hard, with index $\gamma=1.53\pm0.07$, and the integral flux above 500 MeV is $F = (5.5\pm1.1)\times10^{-9}$ photons cm$^{-2}$ s$^{-1}$. We demonstrate that scenarios based on hadronic emission from the cosmic-ray precursor region are acceptable for RX J1713.7-3946, and we predict a secular flux increase at a few hundred GeV at the level of around 15% over ten years, which may be detectable with the upcoming Cherenkov Telescope Array (CTA) observatory.
Speaker: Robert Brose (DESY)
Analysis of the first observations with the new MAGIC Sum-Trigger-II 1h
The MAGIC telescopes were built with the aim of achieving the lowest possible energy threshold among the current generation of Cherenkov telescopes. This was mandatory to detect sources with emission mainly below 100 GeV, such as distant AGNs and pulsars. In 2009, the second MAGIC telescope started operation, and in recent years a major upgrade of the system took place. One of the main improvements has been the development of a new version of the Sum-Trigger concept, valid for stereoscopic observations. This Sum-Trigger-II system was installed during winter 2013/14, and since then we have collected the first test data to characterize its scientific capabilities. In this contribution the results of the analysis of the first Crab pulsar data taken with the Sum-Trigger-II are shown, demonstrating the potential of this new system to study gamma-ray sources with high sensitivity above 40 GeV.
Speaker: Marcos López Moya (University Complutense of Madrid)
Blazar Alerts with the HAWC Online Flare Monitor 1h
The High Altitude Water Cherenkov (HAWC) Gamma Ray Observatory monitors the gamma-ray sky in the 100 GeV to 100 TeV energy range with >95% uptime and unprecedented sensitivity for a survey instrument. The HAWC Collaboration has implemented an online flare monitor that detects episodes of rapid flaring activity from extragalactic TeV sources in the declination band from -26 to 64 degrees. This allows timely alerts to be sent to multiwavelength instruments without human intervention. The preliminary configuration of the online flare monitor achieves sensitivity to flares of at least 1 hour duration that attain an average flux of 10 times that of the Crab Nebula. While flares of this magnitude are not common, several flares reaching the level of 10 Crab have been observed in the TeV band in the past decade. With its survey capabilities and high duty cycle, HAWC will expand the observational data set on these particularly extreme flares. We will discuss results from the first alerts issued by the online flare monitor and the prospects for multiwavelength studies of blazar dynamics, the extragalactic background light, and the intergalactic magnetic field using extreme blazar flares detected by HAWC. We will also highlight upcoming improvements to the flare monitor that will extend its sensitivity to weaker flares.
Speaker: Dr Thomas Weisgarber (for the HAWC Collaboration)
Constraining the properties of new gamma-ray MSPs with distance and velocity measurements 1h
The millisecond pulsar (MSP) luminosity distribution is useful for addressing, e.g., the contributions to the distribution of diffuse positrons and gamma rays within our Galaxy. Gamma-ray luminosity versus spin-down power (Edot) is also a key observable to constrain emission models. The Shklovskii effect consists of an artificial increase of the apparent period derivative (Pdot) over the intrinsic one due to the pulsar's transverse motion. Accounting for this effect can significantly change the Edot value in many cases: it depends on the MSP's distance and proper motion. In this contribution we will focus on the gamma-ray detection of four MSPs with the Fermi Large Area Telescope (LAT) and on parallax and proper motion measurements for an ensemble of gamma-ray MSPs using Nançay radio telescope data, which we use to compute the Shklovskii corrections and update the luminosity vs Edot relation, bringing new constraints on these pulsars' properties.
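The Shklovskii correction itself is a one-line formula: the kinematic contribution to the period derivative is Pdot_shk = P * mu^2 * d / c, so the intrinsic spin-down (and hence Edot) follows once the proper motion mu and distance d are measured. A minimal sketch with illustrative MSP parameters (not the measurements of any particular pulsar in the sample) is shown below.

import numpy as np

# Shklovskii correction sketch: Pdot_shk = P * mu^2 * d / c, with Edot
# recomputed from the intrinsic Pdot.  The pulsar parameters below are
# illustrative values, not measurements of a particular MSP.
C = 2.998e10                     # cm/s
I_NS = 1e45                      # g cm^2, canonical neutron-star moment of inertia
MAS_YR_TO_RAD_S = (1e-3 / 3600.0) * (np.pi / 180.0) / 3.156e7
KPC_TO_CM = 3.086e21

def shklovskii(P_s, Pdot_obs, mu_mas_yr, d_kpc):
    mu = mu_mas_yr * MAS_YR_TO_RAD_S
    d = d_kpc * KPC_TO_CM
    Pdot_shk = P_s * mu**2 * d / C
    Pdot_int = Pdot_obs - Pdot_shk
    Edot_int = 4.0 * np.pi**2 * I_NS * Pdot_int / P_s**3
    return Pdot_shk, Pdot_int, Edot_int

# 3 ms pulsar, observed Pdot = 1e-20, proper motion 15 mas/yr at 1 kpc:
print(shklovskii(P_s=3e-3, Pdot_obs=1e-20, mu_mas_yr=15.0, d_kpc=1.0))
# -> Pdot_shk ~ 1.6e-21, i.e. about 16% of the observed Pdot for these numbers.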
Speaker: Helene Laffon (CENBG)
Construction of a medium size prototype Schwarzschild-Couder telescope as candidate instrument for the Cherenkov Telescope Array: Overview of mechanical and optical sub-systems. 1h
The design of a 9.5-m prototype Schwarzschild-Couder telescope (pSCT) with an aplanatic two-mirror optical system has been developed to evaluate its capabilities for the future Cherenkov Telescope Array Observatory (CTAO). The construction of this novel imaging atmospheric Cherenkov telescope (IACT) is scheduled for early autumn of 2015 at the Fred Lawrence Whipple Observatory in Southern Arizona, USA. The pSCT is expected to verify superior performance of this instrument (high angular resolution, wide field of view, reduced focal plane plate scale, high channel density low cost camera electronics, single photon counting operation regime, etc.) as compared to the traditional Davies-Cotton IACTs constructed for the VERITAS and HESS ground based gamma-ray observatories. An array of SC telescopes operating as a possible extension of the CTA installation is expected to significantly enhance the research capabilities of the observatory for very high-energy (E>100 GeV) gamma-ray astronomy. In this contribution we present the design overview of the pSCT mechanical and optical sub-systems and the status of the telescope construction.
Speaker: Prof. Vladimir Vassiliev (University of California Los Angeles)
Cosmic ray acceleration and nonthermal emission from ultra-fast outflows in active galactic nuclei 1h
There is mounting evidence for the widespread existence of ultra-fast outflows in active galactic nuclei, which are powerful outflows of baryonic material approaching mildly relativistic velocities, observed as variable, blue-shifted X-ray absorption lines of ionized heavy elements. Occurring in both radio-loud and radio-quiet objects, they are plausibly interpreted as winds driven by the accretion disk, and their interaction with their environment may be the key cause of known correlations between the properties of supermassive black holes and their host galaxies. In such outflows, collisionless shocks are likely to form at different locations, either external shocks due to interaction with the ambient medium, or internal shocks due to inhomogeneities within the flow. We discuss the possibility of acceleration of electrons and hadrons at such shocks, including that of ultra-high-energy cosmic rays. Expectations for the consequent nonthermal emission from the radio band up to high-energy gamma-rays are also presented, and compared with existing data on selected objects of interest, such as ESO 323-G77 and 3C 120. Prospects for further observations with current and future instruments are addressed.
Speaker: Susumu Inoue (Institute for Cosmic Ray Research, University of Tokyo)
Cosmic-Ray Induced Gamma-Ray Emission From Starburst Galaxies 1h
In star-forming galaxies, gamma rays are mainly produced through the collision of high-energy protons in cosmic rays with protons in the interstellar medium (ISM), i.e. cosmic-ray-induced $\pi^0$ $\gamma$-radiation. For a "normal" star-forming galaxy like the Milky Way, most cosmic rays escape the Galaxy before such collisions, but in starburst galaxies with dense gas and huge star-formation rates, most cosmic rays do suffer these interactions. We construct a "thick-target" model for starburst galaxies, in which cosmic rays are accelerated by supernovae and escape is neglected. This model gives an upper limit to the gamma-ray emission and tests the calorimetric relation between gamma rays and cosmic rays for starbursts. Only two free parameters are involved in the model: the cosmic-ray proton acceleration energy rate from supernovae and the proton injection spectral index. We apply the model to five observed starburst galaxies: M82, NGC 253, NGC 1068, NGC 4945 and Circinus, and find that the calorimetric relation holds for most of the starbursts; for Circinus, however, other gamma-ray sources must be present to explain its GeV excess. The pionic gamma-ray emission is calculated from 10 MeV to 10 TeV, which covers the Fermi Gamma-ray Space Telescope (Fermi) energy range. We also apply the model to the extragalactic gamma-ray background emission (EGB) by assuming all star-forming galaxies are calorimetric, finding that star-forming galaxies cannot produce the entire signal; other gamma-ray sources must also exist.
Speaker: Ms Xilu Wang (University of Illinois at Urbana and Champaign)
poster_Xilu_Wang.jpeg
Creating a high-resolution picture of Cygnus with the Cherenkov Telescope Array 1h
The Cygnus region hosts one of the most remarkable star-forming regions in the Milky Way. Indeed, the total mass in molecular gas of the Cygnus X complex exceeds 10 times the total mass of all other nearby star-forming regions. Surveys at all wavelengths, from radio to gamma-rays, reveal that Cygnus contains such a wealth and variety of sources---supernova remnants (SNRs), pulsars, pulsar wind nebulae (PWNe), HII regions, Wolf-Rayet binaries, OB associations, microquasars, dense molecular clouds and superbubbles---as to practically be a galaxy in microcosm. The gamma-ray observations alone reveal a wealth of intriguing sources at energies between 1 GeV and tens of TeV. However, a complete understanding of the physical phenomena producing this gamma-ray emission first requires us to disentangle overlapping sources and reconcile discordant pictures at different energies. This task is made more challenging by the limited angular resolution of instruments such as the Fermi Large Area Telescope, ARGO-YBJ, and HAWC and the limited sensitivity and field of view of current imaging atmospheric Cherenkov telescopes (IACTs). The Cherenkov Telescope Array (CTA), with its improved angular resolution, large field of view, and order-of-magnitude gain in sensitivity over current IACTs, has the potential to finally create a coherent and well-resolved picture of the Cygnus region between a few tens of GeV and a hundred TeV. We describe a proposed strategy to study the Cygnus region using CTA data, which combines a survey of the whole region at 65° < l < 85° and -3.5° < b < 3.5° with deeper observations of two sub-regions that host rich groups of known gamma-ray sources.
Speaker: Amanda Weinstein (Iowa State University)
Development of a SiPM Camera for a Schwarzschild-Couder Cherenkov Telescope for the Cherenkov Telescope Array 1h
We present the development of a novel 11328-pixel silicon photomultiplier (SiPM) camera for use with a ground-based Cherenkov telescope with Schwarzschild-Couder optics as a possible mid-size telescope for the Cherenkov Telescope Array (CTA), the next-generation very-high-energy gamma-ray observatory. The finely pixelated camera samples air-shower images with more than twice the optical resolution of cameras that are used in current Cherenkov telescopes. Advantages of the higher resolution will be a better event reconstruction, yielding improved background suppression and angular resolution of the reconstructed gamma-ray events, which is crucial in morphology studies of, for example, Galactic particle accelerators and in the search for gamma-ray halos around extragalactic sources. Packing such a large number of pixels into an area of only half a square meter and having a fast readout directly attached to the back of the sensors is a challenging task. For the prototype camera development, SiPMs from Hamamatsu with through-silicon-via (TSV) technology are used. We give a status report of the camera design and highlight a number of technological advancements that made this development possible.
Speaker: Nepomuk Otte (Georgia Institute of Technology)
Divergent pointing with the Cherenkov Telescope Array for surveys and beyond 1h
The galactic and extragalactic surveys are two of the main proposed legacy projects of the Cherenkov Telescope Array (CTA). Considering the field of view of Cherenkov telescopes (<10°), the time needed for these projects is large. The many telescopes of CTA will allow taking full advantage of new pointing modes in which telescopes point slightly offset from one another. This divergent pointing mode leads to an increase of the array field of view (~14° or larger) with competitive performance compared to normal pointing. We present here a study of the performance of divergent pointing for different array configurations and numbers of telescopes. We show that, for a fixed survey sensitivity, using divergent pointing instead of normal pointing results in a non-negligible gain in observing time and reduced fluctuations in survey depth. We review multiple science cases benefiting from the large field of view offered by divergent pointing.
Speaker: Lucie Gerard
Exploiting the time of arrival of Cherenkov photons at the 28 m H.E.S.S. telescope for background rejection: Methods and performance 1h
In 2012, the High Energy Stereoscopic System (H.E.S.S.) was expanded by a fifth telescope (CT5). With an enormous effective mirror diameter of 28 m, CT5 is able to detect the Cherenkov light of very faint gamma-ray air showers, thereby significantly lowering the energy threshold of this telescope compared to the other four telescopes. Extracting as much information as possible from the recorded shower image is crucial for background rejection and to reach an energy threshold of a few tens of GeV. The camera of CT5 is conceived to register the time of the charge pulse maximum with respect to the beginning of the 16 ns integration window of each pixel. This information can be utilised to improve the event reconstruction. It also helps to reduce the background contamination at low energies. We present new techniques for background rejection based on CT5 timing information and evaluate their performance.
Speaker: Raphaël Chalmé-Calvet (LPNHE)
Exploring the gamma ray sky above 30 TeV with LHAASO 1h
The gamma-ray sky at energies above a few tens of TeV is almost completely unexplored. Sources of photons above ~30 TeV must however exist, because cosmic rays are accelerated in the Milky Way at least up to the knee energy. Photon emission in this energy range, with a high degree of confidence, has a hadronic origin and traces the proton and nuclei acceleration sites. Gamma-ray astronomy above 30 TeV is therefore of fundamental importance for the identification of cosmic-ray sources. LHAASO is a multi-component air-shower detector project, to be built in Sichuan, China, at an altitude of 4410 m. One element of the detector, the KM2 array, a grid of scintillators and muon detectors distributed over an area of ~1 km$^2$, will be able to monitor the northern sky at 100 TeV in one year with a sensitivity of 1% of the Crab Nebula flux. In this paper the capabilities of LHAASO in gamma-ray astronomy above 30 TeV are reviewed, and the scientific potential in identifying or constraining galactic and extragalactic cosmic-ray sources is discussed.
Speaker: Silvia Vernetto (Istituto Nazionale di Astrofisica)
FACT - Charged Cosmic Ray Particles as a Tool for Atmospheric Monitoring 1h
FACT is the first Imaging Air Cherenkov Telescope to use solid-state photosensors (G-APD/SiPM) to measure the light flashes induced by air showers. A vital part of the telescope system is the atmosphere. Typically, external devices such as LIDARs are used to quantify the quality of the atmospheric conditions. Due to the exceptional stability of the G-APD sensors, a different approach to monitoring the quality of the atmosphere can be implemented: thanks to this stability, variations of the measured charged cosmic-ray flux can be attributed to changes in the atmosphere. Trigger rates of FACT are already used to identify strong disturbances, for example clouds or Calima. In a new study, we use the data taken during the past years to investigate more subtle effects, like the difference between the summer and winter atmospheres predicted by Monte Carlo simulations.
Speaker: Dr Dorothee Hildebrand (ETH Zurich)
FACT - Performance of the First SiPM camera 1h
The First G-APD Cherenkov Telescope (FACT) is the first operational test of the performance of silicon photomultipliers (SiPM) in Cherenkov astronomy. These novel photon detectors promised to be an inexpensive and robust alternative to vacuum photomultiplier tubes, but had never before been applied in an imaging air-shower Cherenkov telescope (IACT). For more than three years FACT has operated on La Palma, Canary Islands (Spain), for the purpose of long-term monitoring of astrophysical sources. Stable performance of the photo detectors is crucial and has therefore been studied in great detail. Special care has been taken with regard to their temperature and overvoltage dependence through the implementation of a feedback method in order to keep their properties stable. Several independent long-term measurements were conducted to analyse and verify the SiPM gain stability. Dark count spectra, which also make for an excellent self-calibration mechanism, were used to study and correct for temperature dependencies. Rate scans make it possible to derive a method for quickly finding appropriate trigger thresholds by measuring pixel currents, and thus allow for a consistent data acquisition rate. Dedicated measurements with an LED flasher are used to study the correct application of the SiPM bias voltages. In this talk, the results of the long-term studies will be presented and the applicability of SiPMs in IACTs for long-term monitoring will be shown.
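The temperature/overvoltage feedback mentioned above amounts to tracking the breakdown voltage, which shifts approximately linearly with temperature, and readjusting the bias so that the overvoltage (and hence the gain) stays constant. A minimal sketch of such a correction is given below; the temperature coefficient and reference values are assumed typical numbers, not FACT's calibration constants.

# Minimal sketch of a SiPM bias feedback: keep the overvoltage constant by
# tracking the (approximately linear) temperature dependence of the breakdown
# voltage.  Coefficients and reference values are assumed typical numbers,
# not FACT's calibration constants.
V_BD_REF = 65.0        # V, breakdown voltage at the reference temperature
T_REF = 25.0           # deg C
DV_DT = 0.055          # V/K, assumed breakdown-voltage temperature coefficient
OV_TARGET = 1.4        # V, desired constant overvoltage

def bias_setpoint(temperature_c):
    v_breakdown = V_BD_REF + DV_DT * (temperature_c - T_REF)
    return v_breakdown + OV_TARGET

for t in (5.0, 15.0, 25.0):
    print(f"T = {t:4.1f} C  ->  V_bias = {bias_setpoint(t):.2f} V")
# A full system would also compensate any current-dependent voltage drop in
# the bias supply chain (e.g. under bright night-sky light), omitted here.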
Speaker: Dominik Neise (ETH Zurich)
FACT – Influence of SiPM Crosstalk on the Performance of an Operating Cherenkov Telescope 1h
The First G-APD Cherenkov Telescope (FACT) is the first operational telescope of its kind with a camera equipped with silicon photon detectors (G-APD, aka SiPM). SiPMs have a high photon detection efficiency (PDE), while being more robust to bright-light conditions than the commonly used photomultiplier tubes. This technology has allowed us to increase the duty cycle beyond that of the current generation of imaging air Cherenkov telescopes. During the last four years, the operation of FACT has proven that SiPMs are suitable photon detectors for applications in the field of ground-based gamma-ray astronomy. Nevertheless, it has been argued that crosstalk, after-pulses and dark counts are the main drawback of SiPMs, as these effects produce photon-like signals that add to the signal background. Consequently, it is necessary to understand their impact on the analysis of data from FACT. In this presentation, we will show the results of a study of the influence of different settings of crosstalk and dark counts on the performance of FACT, i.e. its energy resolution and energy threshold. For that purpose, we used Monte Carlo simulations and compared them to actual data from the SiPM camera of FACT.
Speaker: Mr Jens Buß (TU Dortmund)
FACT – Novel mirror alignment using Bokeh and enhancement of the VERITAS SCCAN alignment method 1h
Imaging Air Cherenkov Telescopes, including the First G-APD Cherenkov Telescope (FACT), use segmented reflectors. These offer large and fast apertures at modest cost. However, one challenge of segmented reflectors is the alignment of the individual mirrors to obtain a sharp image. For Cherenkov telescopes, high spatial and temporal resolution is crucial to reconstruct air-shower events induced by cosmic rays. Therefore one has to align the individual mirror positions and orientations precisely. Alignment is difficult due to the large number of degrees of freedom and because most techniques involve a star. Most current methods are limited because they have to be carried out during good-weather nights, which overlaps with observation time. In this contribution, we will present the mirror alignment of FACT, done using two methods. Firstly, we show a new method which we call Bokeh alignment. This method is simple, cheap and can even be done during daytime. Secondly, we demonstrate an enhancement of the SCCAN method by F. Arqueros et al., first implemented by the McGill VERITAS group. Using a second camera, our enhanced SCCAN is optimized for changing weather, changing zenith distance, and changing reference stars. Developed off site in the lab on a 1/10th scale model of FACT, both our alignment methods result in a highly telescope-independent procedure; e.g., both methods run without communication with the telescope's drive. We compare alignment results by using the point spread function of star images, ray-tracing simulations, and overall muon rates before and after the alignment.
Speaker: Sebastian Mueller (ETH Zuerich)
MirrorAlignmentBokehAndNamod.pdf
Fermi Gamma-ray Burst Monitor Capabilities for multi-messenger time-domain astronomy 1h
Owing to its wide sky coverage and broad energy range, the Fermi Gamma-ray Burst Monitor (GBM) is an excellent observer of the transient hard X-ray sky. GBM detects about 240 triggered Gamma-Ray Bursts (GRBs) per year, including over 30 which also trigger the Swift Burst Alert Telescope (BAT). The number of GRBs seen in common with Swift is smaller than expected from the overlap in sky coverage because GBM is not as sensitive as the BAT and the GBM GRB population is thus skewed to the brighter, closer bursts. This population includes about 45 short GRBs per year, giving GBM an excellent opportunity to observe the electromagnetic counterpart to any gravitational wave candidate resulting from the merger of compact binary members. The same characteristics make GBM an ideal partner for neutrino searches from nearby GRBs, and for the elusive Very-High Energy (VHE) counterparts to GRBs. With the deployment of the next-generation gravitational-wave detectors (Advanced LIGO/VIRGO) and VHE experiments (CTA and HAWC) within the lifetime of the Fermi Gamma-ray Space Telescope, the prospects for breakthrough observations are good.
Speaker: Valerie Connaughton
Fermi LAT observations of high energy gamma rays from the Moon 1h
We have measured the gamma-ray emission spectrum of the Moon using the data collected by the Large Area Telescope onboard the Fermi satellite during its first 77 months of operation, in an energy range from 30 MeV up to a few GeV. We have developed a full Monte Carlo simulation describing the interactions of cosmic rays with the Moon surface and the subsequent production of gamma rays, using the FLUKA code. The observations can be explained in the framework of this model, where the production of gamma rays is due to the interactions of charged cosmic rays with the surface of the Moon. From the simulation results we have also inferred the cosmic-ray proton spectrum at low energies, starting from the gamma-ray measurements. A time-evolution study of the gamma-ray emission will also be presented.
Speaker: Francesco Loparco (Universita e INFN, Bari (IT))
FIPSER a novel low cost and high performance readout for astrophysics 1h
Low-cost and low-power digitization systems are becoming increasingly important in particle-physics and particle-astrophysics experiments as the number of channels continuously rises. Specialized readout concepts have been developed in the past that aimed at lower costs and made detector systems with many tens of thousands of channels feasible. As the number of channels in experiments is still on the rise, new readout concepts are needed that meet upcoming demands. We propose a novel readout system, FIPSER (FIxed Pulse Shape Efficient Readout), that is primarily aimed at the digitization of detector signals that are a few nanoseconds long and vary in amplitude, but do not change their shape. FIPSER has the potential to lower the cost of the readout, including the front-end electronics, by an order of magnitude to less than $10 and the power consumption to less than 50 mW per channel. FIPSER will make new groundbreaking experiments possible that have previously not been feasible due to conflicting power, thermal, and performance requirements.
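The premise of FIPSER, that the pulse shape is fixed and only the amplitude varies, means a full waveform digitizer is not strictly needed: for a known template, a cheap measurement such as the time spent above a fixed discriminator threshold maps one-to-one onto amplitude. The sketch below illustrates this general principle with an assumed Gaussian-like template and a numerically tabulated inverse; it is an illustration of the underlying idea, not a description of the actual FIPSER design.

import numpy as np

# Why a fixed pulse shape helps a cheap readout: with a known template s(t),
# the time-over-threshold (ToT) of a pulse A*s(t) is a monotonic function of
# the amplitude A, so a single discriminator plus a time measurement recovers
# A via a lookup table.  Template and numbers are assumptions for illustration.
t = np.linspace(0.0, 20.0, 20001)                 # ns
template = np.exp(-0.5 * ((t - 10.0) / 1.5) ** 2) # fixed, unit-amplitude shape
THRESHOLD = 0.8                                   # discriminator level (arbitrary units)

def time_over_threshold(amplitude):
    above = amplitude * template > THRESHOLD
    return above.sum() * (t[1] - t[0])            # ns above threshold

# Tabulate ToT(A) once, then invert it to estimate amplitudes from measured ToT.
amps = np.linspace(1.0, 50.0, 500)
tots = np.array([time_over_threshold(a) for a in amps])

def amplitude_from_tot(tot_measured):
    return float(np.interp(tot_measured, tots, amps))

true_amp = 17.3
print(round(amplitude_from_tot(time_over_threshold(true_amp)), 1))   # ~17.3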
GAMERA - a new modeling package for non-thermal spectral modeling 1h
GAMERA is a new open-source C++ package which handles the spectral modelling of non-thermally emitting astrophysical sources in a simple and modular way. It allows the user to devise time-dependent models of leptonic and hadronic particle populations in a general astrophysical context (including SNRs, PWNs and AGNs) and to compute their subsequent photon emission. Moreover, this package also contains the necessary tools to create Monte-Carlo population synthesis models. In this poster, I will explain the basic design concept of GAMERA and present several examples of its implementation.
Speaker: Dr Joachim Hahn (MPIK)
Gamma-Ray and Cosmic Ray Escape in Intensely Star-Forming Systems 1h
Regions of intense star-formation naturally generate high number densities of cosmic rays and as such, they are of particular interest as potential contributors to the extragalactic gamma-ray background (EGRB) and as potential sources of very high-energy cosmic rays (VHECRs). While models of the starburst contribution to the EGRB often assume cosmic rays are confined in starbursts, cosmic rays must escape from these galaxies if they contribute to the spectrum of VHECRs as observed at Earth. The conditions in star-forming galaxies which are responsible for such high cosmic-ray injection rates also lead to large gamma-ray fluxes, except in the case of Compton thick systems where the highest energy photons are prevented from escaping. To address these contrasting ideas, we model the gamma-ray fluxes from galaxies where cosmic rays are confined and from galaxies with strong galactic winds and explore the relationship between cosmic-ray confinement and gamma-ray absorption. We present results for the nearby starburst galaxy M82 and the ultraluminous infrared galaxy Arp 220 as examples.
Speaker: Tova Yoast-Hull (University of Wisconsin-Madison)
Gamma-ray properties of low luminosity AGNs 1h
We present results of the analysis of the Fermi-LAT data from low-luminosity Seyfert galaxies, whose X-ray spectra are consistent with predictions of the hot flow (ADAF) model. We use our precise hot flow model (fully GR and with a Monte Carlo computation of radiative processes) to fit the X-ray data and then we estimate the gamma-ray flux from hadronic processes in the two-temperature plasma forming the flow. We find that the strongest gamma-ray signal may be expected from NGC 4258 and NGC 4151 and at the positions of both objects we find marginally significant signals, with sigma ~ 3. For all studied objects we derive upper limits (UL) for the gamma-ray flux. By comparing them with predictions of the ADAF model we find that the Fermi-LAT ULs strongly constrain non-thermal acceleration processes in hot flows (with the energy content in the non-thermal component of proton distribution amounting to at most ~10%) as well as the values of some crucial parameters, most significantly the magnetic field strength. We also find that the component above 4 GeV in the gamma-ray spectrum of Cen A may be due to hadronic emission from a hot accretion flow with parameters consistent with the above constraints. Under the assumption that this emission is produced by protons accelerated up to ~10^16 eV, as predicted by some acceleration models, we calculate the expected neutrino flux.
Speaker: Mr Rafal Wojaczynski (Department of Astrophysics, University of Lodz)
Gamma-rays from accretion process onto millisecond pulsars 1h
We consider a simple scenario for the accretion of matter onto a rotating, magnetised neutron star in order to understand the processes in the inner pulsar magnetosphere during the transition stage between different accretion modes. We analyse a quasi-spherical accretion process onto a rotating, magnetized compact object in order to search for radiative signatures which could accompany the accretion process onto a millisecond pulsar close to the transition stage. It is argued that different accretion modes can be present in a single object for a specific range of parameters characterising the millisecond pulsar and the surrounding medium. We show that the radiation processes characteristic for the ejecting pulsar, i.e. curvature and synchrotron radiation produced by primary electrons in the pulsar outer gap, can be accompanied by inverse Compton radiation produced by secondary leptons which up-scatter thermal radiation from the hot polar cap region caused by the matter accreting onto the neutron star surface. We conclude that during the transition from the pure ejector to the pure accretor mode (intermediate accretion state), additional components can appear in the $\gamma$-ray spectra of millisecond pulsars. This additional spectral component could allow us to constrain the particle content of the pulsar inner magnetosphere, such as the multiplicity and energies of the secondary leptons.
Speaker: Wlodek Bednarek (University of Lodz)
GRAINE project: An overview and status of the 2015 balloon-borne experiment with emulsion gamma-ray telescope 1h
The observation of high-energy cosmic gamma-rays provides us with direct information about high-energy phenomena in the universe. Currently, AGILE and Fermi-LAT are observing the gamma-ray sky and bringing us many new insights. However, past and current observations have significant limitations. The improvement of angular resolution and polarization sensitivity is one of the keys to a breakthrough beyond these limitations. We are pushing forward the GRAINE project, 10 MeV-100 GeV cosmic gamma-ray observation with a precise (0.08 deg at 1-2 GeV) and polarization-sensitive, large-aperture-area ($\sim$10 m$^2$) emulsion telescope flown on repeated long-duration balloon flights. We demonstrated the feasibility and performance of the emulsion gamma-ray telescope using accelerator beams with gamma-rays/electrons/muons and atmospheric gamma-rays at mountain altitude. In 2011, the first balloon-borne emulsion gamma-ray telescope experiment was successfully performed with a 125 cm$^2$ aperture area and a 4.3-hour flight duration. We demonstrated the operation and performance of the emulsion gamma-ray telescope in a balloon flight for the first time, and a first understanding of the background was obtained with the emulsion gamma-ray telescope in a balloon flight. Based on the experience and achievements of the 2011 balloon experiment, we are planning the next balloon experiment within the Japan-Australia scientific ballooning program at Alice Springs, with a 3600 cm$^2$ aperture area and a $\sim$1-day flight duration, in May 2015. In the next balloon experiment, we aim to detect the Vela pulsar, a well-known bright gamma-ray source, with more than 5$\sigma$ significance and to demonstrate the overall performance of the emulsion gamma-ray telescope. We will then start observations with the highest imaging resolution and polarization sensitivity. Phase-resolving of the pulsed emission from the Vela pulsar will also be attempted. An overview and the status of the 2015 balloon experiment are presented.
Speaker: Dr Satoru Takahashi (Kobe University)
takahashi.pdf
GRAINE project: Flight data analysis of balloon-borne experiment in 2015 with emulsion gamma-ray telescope 1h
GRAINE is a balloon-borne experiment to observe cosmic gamma-rays with precise angular resolution and polarization sensitivity. The main gamma-ray detector is a nuclear emulsion, which can record three-dimensional charged-particle tracks with sub-micron position accuracy. We use a multi-stage shifter technique in order to assign time information to the tracks penetrating the nuclear emulsion. The arrival direction of a gamma-ray can be reconstructed on the celestial sphere by combining attitude data from a star camera. By measuring the beginning of the electron-positron pair in the nuclear emulsion, our telescope can achieve a gamma-ray angular resolution one order of magnitude better than Fermi-LAT, as well as polarization sensitivity. The first balloon-borne experiment of GRAINE was performed in 2011 at TARF, Japan. The telescope equipment operated well, and we measured atmospheric gamma-rays, which are a background when we observe cosmic gamma-rays. The second balloon-borne experiment will be carried out in May 2015 at Alice Springs, Australia, within the JAXA international program. We aim to detect the Vela pulsar and also to demonstrate the best angular resolution of any gamma-ray telescope so far. In this experiment, we use a new type of nuclear emulsion, which is being researched and developed at Nagoya University, to improve the sensitivity to charged particles. Emulsion films were transported to the University of Sydney by plane, and emulsion handling such as resetting, drying, and packing will be performed there. For the second balloon experiment, the following equipment will be installed on a gondola with a fabric pressure vessel: the emulsion telescope, 3 star cameras, temperature meters, pressure meters, GPS systems, and batteries. After the development of all emulsion films at the University of Sydney, the emulsion films will be scanned with the fully automated readout system at Nagoya University. We will analyze the scanned data to search for gamma-ray events. The attitude data from the star cameras will also be analyzed. The flight data analysis of the second GRAINE balloon-borne experiment in 2015 is presented.
Speaker: Mr Keita OZAKI (Kobe University)
H.E.S.S. discovery of very-high-energy gamma-ray emission of PKS 1440-389 1h
Blazars are the most abundant class of known extragalactic very-high-energy (VHE, E>100 GeV) gamma-ray sources. However, one of the biggest difficulties in investigating their VHE emission resides in their limited number, since fewer than 60 of them are known to date. In this contribution we report on the H.E.S.S. observations of the BL Lac object PKS 1440-389. This source was selected as a target for H.E.S.S. based on its high-energy gamma-ray properties measured by Fermi-LAT. The extrapolation of this bright, hard-spectrum gamma-ray blazar into the VHE regime made a detection on a relatively short time scale very likely, despite its uncertain redshift. H.E.S.S. observations were carried out with the 4-telescope array from March to May 2012 and resulted in a clear detection of the source. Contemporaneous multi-wavelength data will be used to construct its spectral energy distribution, and we will discuss possible emission mechanisms explaining the observed broad-band emission of PKS 1440-389.
Speaker: Heike Prokoph (Linnaeus University)
HAWC: Design, Operation, Reconstruction and Analysis 1h
The High-Altitude Water Cherenkov (HAWC) Observatory was completed and began full operation in early 2015. The detector consists of an array of 300 water tanks, each containing ~200 tons of purified water and instrumented with 4 PMTs. Located at an elevation of 4100 m a.s.l. near the Sierra Negra volcano in central Mexico, HAWC has a threshold for gamma-ray detection well below 1 TeV and a sensitivity to TeV-scale gamma-ray sources an order of magnitude better than previous air-shower arrays. The detector operates 24 hours/day and observes the overhead sky (~2 sr), making it an ideal survey instrument. We describe the configuration of HAWC with an emphasis on how the design was optimized, including the size, depth and spacing of the water tanks, the positioning of the PMTs and the requirements of the readout system. We also describe how the data are acquired, reconstructed, and analyzed. Finally, we will demonstrate the sensitivity of the detector using the observation of the Crab plerion. This paper serves as a detailed technical description of the foundations of the numerous analyses presented at this meeting by members of the HAWC collaboration.
Speaker: Andrew Smith (University of Maryland, College Park)
HESS observations of PKS 1830-211 1h
PKS 1830-211 is a lensed blazar located at z=2.5. The recent addition of a 28 m Cherenkov telescope (CT5) to the H.E.S.S. array extended the experiment's sensitivity towards low energies, providing access to gamma-ray energies down to 30 GeV. Data on PKS 1830-211 were taken with CT5 in August 2014, following a flare alert by the Fermi collaboration at the beginning of the month. The H.E.S.S. observations were aimed at detecting a gamma-ray flare delayed by ~25 days from the Fermi flare. These H.E.S.S. data are presented and discussed.
Speaker: Jean-Francois Glicenstein (CEA)
High energy emission from extended region within the blazar jet during quiet gamma-ray state 1h
During the quiet $\gamma$-ray state of blazars, the high-energy emission is likely to be produced in the extended part of the inner jet, in which the conditions can change significantly. Therefore, a homogeneous SSC model is not expected to describe the quiet-state emission features correctly. We consider an inhomogeneous SSC model for a large part of the inner jet, in which the synchrotron and IC emission of relativistic electrons is treated self-consistently by applying the Monte Carlo method. The results of the calculations are compared with observations of some BL Lacs in the low state.
Speaker: Piotr Banasiński (University of Lodz)
High energy gamma-ray study of the microquasar 1E 1740.7-2942 with Fermi-LAT 1h
The microquasar 1E 1740.7-2942, discovered by the Einstein satellite, is located near the Galactic Center at an angular distance of 50' from Sgr A* and is the brightest X-ray source above 20 keV in the Galactic Center region. It has extended radio lobes reaching distances of up to a few parsecs, and its core radio emission is variable. In X-rays it shows spectral and timing properties similar to those of black hole candidates like Cyg X-1. GRANAT/SIGMA reported a burst of soft gamma-ray emission (300-600 keV) in the 1990s, which was interpreted as an electron-positron annihilation signal, but other satellite observations could not confirm the high-energy feature reported by SIGMA, although a high-energy tail extending up to 600 keV with a power-law photon index of $1.9\pm0.1$ has been reported by INTEGRAL, indicating a non-thermal process that might accelerate particles to even higher energies. In this paper we report the results of a gamma-ray study of 1E 1740.7-2942 above 100 MeV using six years of Fermi-LAT archival data, and discuss its implications for the particle acceleration process in microquasars.
Speaker: Masaki Mori (Ritsumeikan University)
ICRC2015poster-morim.pdf
Long term stability analysis on the MD-A under TIBET III array 1h
The underground muon detector based on the water Cherenkov technique has been constructed as an upgrade of the Tibet air shower array, aiming at a higher sensitivity for gamma-ray observation. In one of the modules (MD-A), a large fully sealed Tyvek bag is used as a closed container. As MD-A has been operated for more than one year, the long-term stability of the performance of such a detector is reported.
Speakers: Mr LIU Cheng (IHEP, CAS), Mr QIAN Xiangli (IHEP, CAS)
Long term variability study for the radio galaxy M87 with MAGIC 1h
M 87 is the closest extragalactic VHE object, located in the Virgo cluster of galaxies at a distance of ~16 Mpc (redshift z=0.00436). It is the first and brightest radio galaxy detected in the TeV regime, and is well studied from radio to X-ray energies. The structure of its relativistic plasma jet, which is misaligned with respect to our line of sight, is spatially resolved in X-ray (Chandra), optical and radio (VLA/VLBA) observations. Thus the time correlation between the TeV flux and the emission at different wavelengths provides a unique opportunity to localize the VHE emission process occurring in active galactic nuclei. In 2005, gamma-ray emission at TeV energies was detected for the first time from M 87. The very-high-energy (VHE, E>100 GeV) gamma-ray emission displays strong flux variability on timescales as short as a day. For more than 10 years, along with the X-ray, optical and radio bands, it has been monitored in the TeV band by imaging atmospheric Cherenkov telescopes such as MAGIC, H.E.S.S. and VERITAS. In 2008 and 2010, M 87 underwent several periods of TeV activity, and rapid flares with short-timescale variability were detected. MAGIC has continued to monitor M 87, but no major flares have been detected since 2010. However, the monitoring data set allows us to study the source in its quiescent flux state. Here we present the status of these studies using the data from the last 4 years of MAGIC observations.
Speaker: Ms Priyadarshini Bangale (MPI for Physics, Munich)
Low multiplicity technique for GRB observation by LHAASO-WCDA 1h
The detection of GeV photons from GRBs is crucial for understanding the most violent phenomena in our universe. Due to the limited effective area of space-borne experiments, very few GRBs have been detected with GeV photons. Large-area EAS experiments at high altitude can reach a much larger effective area around 10 GeV; for these, the single particle technique is usually used to lower the threshold energy, but its sensitivity is poor because the primary direction information is lost. To reach an energy threshold as low as 10 GeV while keeping the primary direction information, a low multiplicity trigger is required, but random coincidences rather than cosmic-ray showers then overwhelm the signals, and it is a great challenge for traditional trigger logic and reconstruction algorithms to discriminate the signals from the noise. A new method has been developed for LHAASO-WCDA to work in low multiplicity mode. With this technique, the LHAASO detector can work at a multiplicity as low as 2 while keeping the direction information. The sensitivity and expectations of LHAASO-WCDA with the low multiplicity technique for GRBs are presented.
Speakers: Prof. Hanrong Wu (Institute of High Energy Physics, CAS), Prof. Huihai He (Institute of High Energy Physics, CAS)
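As a rough illustration of the challenge outlined in the abstract above, the rate of chance k-fold coincidences among N independent channels, each with singles rate r and coincidence window $\tau$, can be estimated with the textbook formula $R_k \approx \binom{N}{k}\,k\,r^k\tau^{k-1}$, which grows prohibitively as the required multiplicity is lowered towards 2. The sketch below evaluates this estimate for purely illustrative numbers; they are assumptions, not LHAASO-WCDA parameters.

```python
from math import comb

def accidental_rate(n_channels, multiplicity, singles_rate_hz, window_s):
    """Chance k-fold coincidence rate among N independent channels.

    Textbook estimate R_k ~ C(N, k) * k * r**k * tau**(k-1), valid for r*tau << 1.
    Illustrative only; this is not the LHAASO-WCDA trigger logic.
    """
    k, r, tau = multiplicity, singles_rate_hz, window_s
    return comb(n_channels, k) * k * r**k * tau**(k - 1)

# Assumed toy numbers: 900 cells in one pond, 2 kHz singles rate per channel,
# 100 ns coincidence window.
for k in (2, 3, 4):
    print(f"multiplicity {k}: ~{accidental_rate(900, k, 2e3, 100e-9):.3g} Hz of accidentals")
```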
Multiwavelength Analyses of Long-Term Lower Flux State Observations of Intermediate-Frequency-Peaked BL Lacertae Sources: W Comae and 3C 66A 1h
Intermediate-frequency-peaked BL Lacertae objects (IBLs) are a class of blazars characterized by a spectral energy distribution (SED) with a lower-energy synchrotron peak than a majority of extragalactic sources detected by ground-based imaging atmospheric Cherenkov telescopes (IACTs). Because of this shift in the SED, the peak gamma-ray flux falls outside the very-high-energy regime (VHE, >100 GeV) covered by IACTs such as VERITAS, making IBLs difficult to detect except during infrequent times of elevated flux. However, the study of these sources in a lower flux state is essential for developing a complete understanding of the blazar paradigm. We present the results of multiwavelength analyses of long-term lower flux state observations completed for two IBL sources: W Comae and 3C 66A. For both sources, data from VERITAS were analyzed for the VHE regime. The study of W Comae extends from 2008 to 2014, resulting in a 6 standard deviation (σ) detection from ~40 observing hours. Analysis of 3C 66A from 2007 to 2015, totaling ~67 hours, resulted in a 17σ lower flux state detection. We will report on the results from these VHE analyses as well as contemporaneous multiwavelength data and comment on how these lower state IBL detections fit within the context of the blazar paradigm.
Speaker: Dr Lucy Fortson (University of Minnesota)
Naima: a Python package for inference of particle distribution properties from nonthermal spectra 1h
The ultimate goal of observing nonthermal emission from astrophysical sources is to understand the underlying particle acceleration and evolution processes, yet few tools are publicly available to infer the particle distribution properties from the observed photon spectra from X-rays to VHE gamma rays. Naima is an open-source Python package that provides models for nonthermal radiative emission from homogeneous distributions of relativistic electrons and protons. Contributions from synchrotron, inverse Compton, nonthermal bremsstrahlung, and neutral-pion decay can be computed for a series of functional shapes of the particle energy distributions, with the possibility of using user-defined particle distribution functions. In addition, Naima provides a set of functions that allow these models to be fitted to observed nonthermal spectra through an MCMC procedure, obtaining probability distribution functions for the particle distribution parameters. In this contribution I will present the models and methods available in Naima and an example of their application to the understanding of a Galactic nonthermal source.
Speaker: Victor Zabalza (University of Leicester)
proceedings.pdf
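To give a flavour of the radiative models described in the Naima abstract above, the sketch below builds an electron distribution and evaluates its inverse-Compton emission on the CMB. All parameter values are illustrative assumptions, and the exact call signatures should be checked against the Naima documentation of the release in use.

```python
import numpy as np
import astropy.units as u
from naima.models import ExponentialCutoffPowerLaw, InverseCompton

# Electron distribution: power law with exponential cutoff (values are illustrative).
electrons = ExponentialCutoffPowerLaw(amplitude=1e36 / u.eV, e_0=1 * u.TeV,
                                      alpha=2.5, e_cutoff=50 * u.TeV)

# Inverse-Compton emission of this electron population on the CMB.
ic = InverseCompton(electrons, seed_photon_fields=["CMB"])

# Spectral energy distribution between 100 MeV and 100 TeV for an assumed 2 kpc distance.
energy = np.logspace(-1, 5, 30) * u.GeV
sed = ic.sed(energy, distance=2 * u.kpc)
print(sed.max())
```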
NectarCAM : a camera for the medium size telescopes of the Cherenkov Telescope Array 1h
NectarCAM is a camera proposed for the medium-sized telescopes of the Cherenkov Telescope Array (CTA) covering the central energy range of ~100 GeV to ~30 TeV. It has a modular design and is based on the NECTAr chip, at the heart of which is a GHz sampling Switched Capacitor Array and 12-bit Analog to Digital converter. The camera will be equipped with 265 7-photomultiplier modules, covering a field of view of 8 degrees. Each module includes the photomultiplier bases, high voltage supply, pre-amplifier, trigger, readout and Ethernet transceiver. The recorded events last between a few nanoseconds and tens of nanoseconds. The camera trigger will be flexible so as to minimize the read-out dead-time of the NECTAr chips. NectarCAM can sustain a data rate of more than 4 kHz with less than 5% dead time. The camera concept, the design and tests of the various subcomponents and results of thermal and electrical prototypes are presented. The design includes the mechanical structure, cooling of the electronics, read-out, clock distribution, slow control, data-acquisition, triggering, monitoring and services.
New concepts of timing calibration systems for large-scale Cherenkov arrays in astroparticle physics experiments 1h
We present new concepts of timing calibration systems for large-scale Cherenkov arrays in astroparticle physics experiments, such as Cherenkov arrays detecting extensive air showers (EAS) and water Cherenkov neutrino arrays. The concepts are based on a fast, powerful LED light source on board a remotely controlled unmanned helicopter in the case of EAS Cherenkov arrays, and on multiple LED sources driven by a single driver. We describe the parameters of LED sources developed especially for these kinds of applications and discuss some preliminary results of laboratory and in-situ tests.
Speaker: Bayarto Lubsandorzhiev (Institute for Nuclear Research of RAS)
Observation of the $^{26}Al$ emission distribution throughout the Galaxy with INTEGRAL/SPI 1h
We present the map of the $^{26}$Al emission distribution throughout the Galaxy measured by the SPI spectrometer aboard the INTEGRAL observatory. This emission at 1.809 MeV is associated with $^{26}$Al decay and with the production of heavy elements in the Galaxy. The only $^{26}$Al map available to date was released more than fifteen years ago, thanks to the COMPTEL instrument. At the present time, however, SPI offers a unique opportunity to enrich this first result. The data accumulated between 2003 and 2013, which amount to 2$\times$10$^{8}$ s of observing time, are used to perform a dedicated analysis aiming to investigate in depth the spatial morphology of the $^{26}$Al emission. The data are first compared with several sky maps based on observations at various wavelengths to model the $^{26}$Al distribution throughout the Galaxy. For most of the distribution models, the inner Galaxy flux is compatible with a value of 3.3$\times$10$^{-4}$ ph cm$^{-2}$ s$^{-1}$, while the preferred template maps correspond to young stellar components such as core-collapse supernovae, Wolf-Rayet and massive AGB stars. To obtain more details about this emission, an image reconstruction is performed using an algorithm based on the maximum-entropy method. In addition to the inner Galaxy emission, several excesses suggest that some sites of emission are linked to the spiral-arm structure. Lastly, an estimation of the $^{60}$Fe line flux, assuming a spatial distribution similar to that of the $^{26}$Al line emission, results in a $^{60}$Fe to $^{26}$Al ratio of around 0.14, which agrees with the most recent studies and with SN explosion model predictions.
Speaker: Dr Laurent Bouchet (IRAP)
On the On-Off Problem: an Objective Bayesian Analysis 1h
The On-Off problem, also known as the Li-Ma problem, is a statistical problem where a measured rate is the sum of two parts. The first is due to a signal and the second to a background, both of which are unknown. Mostly frequentist solutions are in use, but they are only adequate for high count numbers. When the events are rare, such an approximation is not good enough; indeed, in high-energy astrophysics this is often the rule rather than the exception. I will present a universal objective Bayesian solution that depends only on the three initial parameters of the On/Off problem: the number of events in the on-source region, the number of events in the off-source region, and their ratio of exposures. With a two-step approach it is possible to infer the signal's significance, strength, uncertainty or upper limit in a unified way. The approach is valid without restrictions for any count number, including zero, and may be widely applied in particle physics, cosmic-ray physics and high-energy astrophysics. I apply the method to gamma-ray burst data.
Speaker: Max Ludwig Ahnen (ETH Zurich)
mahnen_poster_icrc.pdf
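For context, the frequentist benchmark that the Bayesian treatment above is meant to replace at low counts is the Li & Ma (1983) significance. A minimal sketch of that standard formula (not of the objective Bayesian solution presented in the contribution):

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983), Eq. 17: significance of an on-source excess.

    n_on, n_off : counts in the on- and off-source regions
    alpha       : ratio of on- to off-source exposures
    Reliable only in the large-count limit, which is exactly where the
    Bayesian treatment discussed above is not needed.
    """
    n_on, n_off = float(n_on), float(n_off)
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

# Example with assumed numbers: 120 on-source counts, 300 off-source counts,
# exposure ratio 1/3 (expected background 100 counts, i.e. an excess of 20).
print(li_ma_significance(120, 300, 1 / 3))  # ~1.7 sigma
```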
Optical Polarimetry Campaign on Markarian 421 During the 2012 Large Flaring Episodes 1h
In 2012, Fermi/LAT gamma-ray and radio observations registered the largest ever recorded flaring episodes from the blazar Markarian 421. The unprecedented activity state of the source remained high, and much above its normal behaviour, also throughout 2013, characterising a dramatic and long-lasting change in the emission behaviour of the object. This unique event was followed and showed extreme signatures in all bands in which it was observed, from radio to VHE gamma-rays. Polarisation monitoring of the source has nevertheless been somewhat scarcer, and direct observation of the peak activity in 2012 was prevented by the source's proximity to the Sun at the time. As part of our continuous monitoring programme of TeV-emitting blazars in optical polarimetry at the Liverpool Telescope, whose first phase used the RINGO2 fast polarimeter and lasted from late 2010 to early 2013, we observed Mrk 421 with regular coverage and a sub-weekly cadence for over two years. This continued monitoring allowed us to follow the polarisation behaviour of the source for over two years and up to the days preceding the dramatic flare event in 2012. In the weeks before the multi-wavelength and high-energy outbursts, Mrk 421 presented an unprecedented increase in its degree of polarisation, which rose by a factor of 5, something not witnessed in decades from this object. The source also showed a never-before-seen large rotation of its polarisation angle, by over 180 degrees. In this talk we will present our entire dataset on Mrk 421, concentrating on the unprecedented events in optical polarisation that preceded the HE outburst. The main question we ask is whether what we have seen is a polarimetric precursor of the high activity that followed. If so, what connections can we establish between them, and what remains mysterious about it?
Speaker: Ulisses Barres (Centro Brasileiro de Pesquisas Físicas)
Performance of the Mechanical Structure of the SST-2M GCT Telescope for the Cherenkov Telescope Array 1h
The Cherenkov Telescope Array (CTA) project aims to create the next-generation very-high-energy gamma-ray telescope array. It will be devoted to the observation of gamma rays over a wide band of energy, from 20 GeV to 300 TeV. Two sites are foreseen, one in the northern and the other in the southern hemisphere, allowing the whole sky to be viewed. The southern site will be equipped with about 100 telescopes, composed of three different classes, the Large, Medium and Small Size Telescopes, covering the low, intermediate and high energy regions, respectively. The energy range of the Small Size Telescopes (SSTs) extends from 1 TeV to 300 TeV. Among them, the Gamma-ray Cherenkov Telescope (GCT), a telescope based on a Schwarzschild-Couder dual-mirror optical formula, is one of the prototypes under construction proposed to be part of the southern site of the future Cherenkov Telescope Array. This contribution focuses on the mechanical structure of this telescope. It presents the mechanical design and discusses it in the context of the CTA specifications. It also describes recent developments in the assembly and installation of the opto-mechanical prototype of the GCT on the French site of the Paris Observatory.
Speaker: Dr Jean-Laurent Dournaux (GEPI. CNRS, Observatoire de Paris)
Performance studies of the new stereoscopic Sum-Trigger-II of MAGIC after one year of operation 1h
MAGIC is a stereoscopic system of two Imaging Air Cherenkov Telescopes (IACTs) located at La Palma (Canary Islands, Spain) and working in the field of very high energy gamma-ray astronomy. It makes use of a traditional digital trigger with an energy threshold of around 55 GeV. A novel trigger strategy, based on the analogue sum of signals from partially overlapped patches of pixels, leads to a lower threshold. In 2008, this principle was proven by the detection of the Crab Pulsar at 25 GeV by MAGIC in single telescope operation. During Winter 2013/14, a new system, based on this concept, was implemented for stereoscopic observations after several years of development. In this contribution the strategy of the operative stereoscopic trigger system, as well as the first performance studies, are presented. Finally, some possible future improvements to further reduce the energy threshold of this trigger are addressed.
Speaker: Dr Francesco Dazzi (Max-Planck-Institute for Physics Munich)
Photon Reconstruction for H.E.S.S. Using a Semi-Analytical Model 1h
The High Energy Stereoscopic System (H.E.S.S.) is an array of five Imaging Atmospheric Cherenkov Telescopes (IACTs) designed to detect and image cosmic gamma-rays with very high energies. Originally consisting of just four identical IACTs (CT1-4) with an effective mirror diameter of 12$\,$m each, it was expanded with a fifth IACT (CT5) with a mirror diameter of 28$\,$m in 2012. Being the largest IACT worldwide, CT5 allows the energy threshold of H.E.S.S. to be lowered, helping to close the energy gap between space-based detectors and IACTs. Events can be analysed either monoscopically (i.e. using only information from CT5) or stereoscopically (requiring at least two triggered telescopes per event). To achieve a good performance, a sophisticated event reconstruction and analysis framework is indispensable. This is particularly important for H.E.S.S. since it is now the first IACT array that consists of different telescope types. An advanced reconstruction method is based on a semi-analytical model of electromagnetic particle showers in the atmosphere ("model analysis"). The properties of the primary particle are reconstructed by comparing the image recorded by each triggered telescope with the Cherenkov emission from the shower model using a log-likelihood maximisation. Due to its performance, this method has become one of the standard analysis techniques applied to CT1-4 data. It has now been modified for use with the five-telescope array. We present the adapted model analysis and its performance in both monoscopic and stereoscopic analysis mode.
Speaker: Markus Holler (LLR - Ecole Polytechnique)
poster_Holler_Model.pdf
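The essence of the model analysis described above is a pixel-wise likelihood comparison between the recorded image and a predicted one. The toy sketch below fits a single scaling parameter of a fake template image with a Poisson likelihood; the real method uses a semi-analytical shower model and a likelihood that also accounts for night-sky background and photomultiplier resolution, so everything here is a simplified, hypothetical stand-in.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

rng = np.random.default_rng(42)
template = rng.gamma(shape=2.0, scale=3.0, size=200)   # predicted p.e. per pixel for a reference shower
observed = rng.poisson(1.7 * template)                 # fake recorded image of a 1.7x brighter shower

def neg_log_likelihood(scale):
    """Pixel-wise Poisson log-likelihood of the observed image given a scaled template."""
    expected = np.clip(scale * template, 1e-9, None)
    return -poisson.logpmf(observed, expected).sum()

fit = minimize_scalar(neg_log_likelihood, bounds=(0.1, 10.0), method="bounded")
print("best-fit scale:", round(fit.x, 2))   # should recover ~1.7
```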
Progress on the electromagnetic particle detector and the prototype array of LHAASO-KM2A 1h
A prototype array for the LHAASO-KM2A, consisting of 39 detector units, was set up at the Yangbajing cosmic ray observatory (4300 m a.s.l., Tibet, P.R. China) and has been in stable operation since October 2014. In this paper, we present the performance of the prototype electromagnetic particle detector and of the prototype array.
Rapid variability at very high energies in Mrk 501 1h
Flaring states of the BL Lac object Mrk 501 were observed by the High Energy Stereoscopic System (H.E.S.S.) during 2012 and 2014. Observations in 2014 recorded flux levels higher than one Crab unit and revealed rapid variability at very high energies ($\sim$2-20 TeV). The high statistics afforded by the flares allowed us to probe the presence of minute-timescale variability and to study its statistical characteristics at purely TeV energies, owing to the high threshold energy of approximately 2 TeV. Doubling times of a few minutes are estimated for the fluxes F(>2 TeV). Statistical tests on the lightcurves show interesting temporal structure in the variations, including deviations from a normal flux distribution similar to those found in the PKS 2155-304 flare of July 2006, at nearly an order of magnitude higher threshold energy. Rapid variations at such high energies put strong constraints on the physical mechanisms in the blazar jet.
Speaker: Dr Nachiketa Chakraborty (Max-Planck-Institut fuer Kernphysik)
Recent pulsar results from VERITAS on Geminga and the missing link binary pulsar PSR J1023+0038 1h
In recent years, the Fermi-LAT gamma-ray telescope has detected a population of over 160 gamma-ray pulsars, which has enabled the detailed study of gamma-ray emission from pulsars at energies above 100 MeV. Further, since the surprising detection of the Crab pulsar in very high-energy (VHE; E > 100 GeV) gamma rays by the MAGIC and VERITAS collaborations, there has been an ongoing effort in the gamma-ray astrophysics community to detect new pulsars in the VHE band. However, the Crab remains the only pulsar so far detected in VHE gamma rays, raising the question of whether or not the Crab is unique and also making it more difficult to constrain model predictions that attempt to explain the VHE emission. Presented here are recent VERITAS results from observational campaigns on the brightest northern-hemisphere high-energy gamma-ray pulsar Geminga and the missing link binary pulsar PSR J1023+0038, which have both resulted in upper limits on a possible gamma-ray flux. These limits are placed into context with the current theoretical framework attempting to explain the origin of VHE gamma-ray emission from pulsars. Additionally, future plans for pulsar observations with VERITAS will be briefly discussed.
Speaker: Gregory Richards (Georgia Institute of Technology)
greg_poster.pdf
Redshift measurement of Fermi Blazars for the Cherenkov Telescope Array 1h
Blazars are active galactic nuclei and the most numerous High Energy (HE) and Very High Energy (VHE) gamma-ray emitters. Their optical emission is often dominated by non-thermal, and, in the case of BL Lacs, featureless continuum radiation. This renders the determination of their redshift extremely difficult: indeed, as of today only about 50% of gamma-ray blazars have a measured spectroscopic redshift. Knowledge of the redshift is fundamental because it allows precise modeling of the VHE emission and also of its interaction with the extragalactic background light (EBL). The start of Cherenkov Telescope Array (CTA) operations in the near future will allow the detection of several hundred new BL Lacs. Using the first Fermi catalogue of sources above 10 GeV (1FHL), we performed simulations which demonstrate that at least half of the 1FHL BL Lacs detectable by CTA will not have a measured redshift. The organization of observing campaigns to measure the redshift of these blazars has therefore been recognized as necessary support for the AGN Key Science Project of CTA. Taking advantage of the recent success of an X-shooter GTO observing campaign, we have devised an observing campaign to measure the redshifts of as many of these candidates as possible. The main characteristic of this campaign with respect to previous ones will be the use of higher-resolution spectrographs and of 8-meter-class telescopes. We are starting to submit proposals for our observations. In this paper we briefly describe the selection of the candidates, the characteristics of our observations and the expected results.
Speaker: Dr Paolo Goldoni (APC/CEA-Irfu)
ROI: A Prototype Data Model for the Cherenkov Telescope Array 1h
The Cherenkov Telescope Array (CTA) will be a ground-based gamma-ray observatory with full-sky coverage in the very-high-energy (VHE) regime. It is proposed to consist of more than 100 telescopes and will produce large amounts of data, possibly exceeding the data volume of current VHE Imaging Atmospheric Cherenkov Telescopes by about two orders of magnitude. This volume of data represents a new challenge to the VHE community, which is looking for new data formats to transfer and store the CTA data. One of the prototypes currently under study is the ROI (Regions Of Interest) file format for camera images. It stores only those pixels of a camera image that are close to the shower, thus removing the major part of the night-sky background while keeping all pixels that might belong to the shower. Simple, on-the-fly compression is used to reduce the file size even further. Here, we explain the ROI prototype in detail and present preliminary results of applying it to real data and simulations.
Speaker: Mr Ramin Marx (MPIK Heidelberg)
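The ROI idea summarized above amounts to keeping only the pixels likely to belong to the shower image. A hypothetical sketch of such a selection is shown below; the actual ROI criteria and the on-the-fly compression are defined by the prototype itself, not by this code.

```python
import numpy as np

def roi_pixels(image, threshold, neighbours):
    """Keep all pixels above `threshold` plus their direct neighbours.

    image      : 1D array of pixel amplitudes (photoelectrons, assumed)
    threshold  : amplitude cut separating shower pixels from night-sky background
    neighbours : dict mapping pixel index -> list of neighbouring pixel indices
    """
    core = set(np.flatnonzero(image > threshold).tolist())
    keep = set(core)
    for pix in core:
        keep.update(neighbours.get(pix, []))
    return sorted(keep)

# Toy camera: 10 pixels in a row, each adjacent to the previous and next one.
img = np.array([1, 2, 1, 30, 45, 28, 2, 1, 0, 1], dtype=float)
nbrs = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
print(roi_pixels(img, threshold=10, neighbours=nbrs))  # -> [2, 3, 4, 5, 6]
```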
Search for Gamma-ray Production in Supernovae located in a dense interstellar medium with Fermi LAT 1h
Supernovae (SNe) exploding in a dense circumstellar medium (CSM) are hypothesized to accelerate cosmic rays in collisionless shocks and to emit GeV gamma rays and TeV neutrinos on a time scale of several months. We perform the first systematic search for gamma-ray emission in Fermi LAT data in the energy range from 100 MeV to 300 GeV from the ensemble of SNe exploding in a dense CSM. We study a sample of 147 Type IIn SNe and search for a gamma-ray excess at each SN location using the maximum likelihood method for each source in a one-year time window. In order to enhance a possible weak signal, we simultaneously study the closest and optically brightest sources of our sample in a joint likelihood analysis in three different time windows (1 year, 6 months and 3 months). We do not find a significant excess in gamma rays for any individual source or for the combined sources, and provide flux upper limits at 95% confidence level (CL) for both cases. We calculate model-independent limits on the gamma-ray flux for individual sources as well as for the combined source sample. In addition, we derive limits on the gamma-ray luminosity and the ratio of gamma-ray to optical luminosity as a function of the index of the proton injection spectrum, assuming a generic gamma-ray production model.
Speaker: Anna Franckowiak (SLAC)
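The stacking step described above boils down to summing per-source log-likelihoods as a function of a common signal parameter and reading off an upper limit from the combined profile. The sketch below does this under simplified Poisson-counting assumptions with made-up numbers; the actual analysis uses the full Fermi-LAT likelihood in each region of interest.

```python
import numpy as np
from scipy.stats import poisson

# Hypothetical per-source inputs: observed counts, expected background, and the
# counts that one unit of the common signal parameter would produce in each source.
observed   = np.array([12, 7, 15, 9])
background = np.array([11.0, 8.0, 14.5, 9.5])
signal_eff = np.array([2.0, 1.5, 3.0, 1.0])

def joint_log_likelihood(flux):
    """Sum of the Poisson log-likelihoods of all sources for a common flux."""
    mu = background + flux * signal_eff
    return poisson.logpmf(observed, mu).sum()

flux_grid = np.linspace(0.0, 5.0, 501)
logl = np.array([joint_log_likelihood(f) for f in flux_grid])

# One-sided 95% CL profile-likelihood limit: 2*(lnL_max - lnL) <= 2.71.
delta = 2.0 * (logl.max() - logl)
print("95% CL upper limit on the common flux:", flux_grid[delta <= 2.71].max())
```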
Search for VHE gamma-ray emission from the Geminga pulsar and nebula with the MAGIC telescopes 1h
The Geminga pulsar appears to be one of the most promising candidates for pulsed VHE gamma-ray emission. In order to detect a third pulsar, after the Crab and Vela, with a power-law spectral component above the measured cutoff, we analyzed 63 hours of data taken with MAGIC. To discuss the connection with HE gamma rays, 6 years of Fermi-LAT data were also analyzed. No significant pulsation was found in the MAGIC observations. The obtained flux upper limits above 50 GeV lie above the power-law extrapolation, based on Fermi-LAT data, of the spectrum above 10 GeV. We also searched for steady emission from the pulsar wind nebula in the same dataset, resulting in no significant detection.
Speaker: Marcos Lopez (Complutense University of Madrid)
Selection of AGN to study the extragalactic background light with HAWC 1h
The extragalactic background light (EBL) is all the electromagnetic energy released by resolved and unresolved extragalactic sources since the recombination era. Its intensity and spectral shape provide information about the evolution of galaxies throughout cosmic history. Since direct observations of the EBL are very difficult to perform, the study of the interaction between low-energy EBL photons and high-energy photons from distant sources becomes relevant to constrain the EBL intensity. The main goal of this study is to investigate the opacity of the EBL to gamma rays by observing a sample of active galaxies with the High Altitude Water Cherenkov (HAWC) Gamma-Ray Observatory. Current gamma-ray observations up to 20 TeV performed by Imaging Atmospheric Cherenkov Telescopes (IACTs) have constrained the EBL intensity in the $0.1-50$ $\mu$m region. HAWC, which monitors the gamma-ray sky in the 100 GeV to 100 TeV energy range, will be able to detect at least 12 active galaxies at redshifts below 0.3 and thus constrain the EBL in the poorly measured $1-100$ $\mu$m region.
Speaker: Mrs Sara Coutiño (INAOE)
Sensitivity of the LHAASO-WCDA for various Gamma ray sources 1h
The Large High Altitude Air Shower Observatory (LHAASO) will be constructed at Mt. Haizi in Sichuan Province, China. As a major component of the LHAASO project, the Water Cherenkov Detector Array (WCDA) is designed to record air showers produced by cosmic rays and gamma rays in the energy range from 100 GeV to 100 TeV. Complementing the Imaging Atmospheric Cherenkov Telescopes with its large field of view and long duty cycle, and the space-based gamma-ray detectors with its high-energy reach, WCDA is well suited to study particle acceleration in Pulsar Wind Nebulae, Supernova Remnants, Active Galactic Nuclei and Gamma-ray Bursts. Results of the sensitivity calculation of the detector for steady point sources, extended sources, transient sources and GRBs are presented in this talk.
Speakers: Dr Bo Gao (Institute of High Energy Physics,CAS), Dr Hanrong Wu (Institute of High Energy Physics,CAS), Mr Huicai Li (Naikai University), Dr Mingjun Chen (Institute of High Energy Physics,CAS), Prof. Zhiguo Yao (Institute of High Energy Physics,CAS)
Shaping the GeV-spectra of bright blazars 1h
The non-thermal spectra of jetted Active Galactic Nuclei (AGN) show a variety of shapes and degrees of curvature in their low- and high-energy components. In some of the brightest Fermi-LAT blazars, prominent spectral breaks at a few GeV have been regularly detected, which is inconsistent with conventional cooling effects. We propose that the broad variety of spectral shapes, including prominent breaks, can be understood as an impact of injection modes. We therefore present an injection model embedded in a leptonic blazar emission model for external-Compton-loss-dominated jets of AGN, which aims at bridging jet emission with acceleration models using a phenomenological approach. In our setup we consider the effects of continuous, time-dependent injection of electrons into the jet with differing rates, durations, locations and power-law spectral indices, and evaluate its impact on the ambient emitting particle spectrum observed at a given snapshot time. We find that varying the injection parameters has a notable influence on the spectral shapes, which in turn can be used to set interesting constraints on the particle injection scenario. We apply our model to the flare-state spectral energy distributions of 3C 454.3 and PKS 1510-089 to constrain the required injection parameters. Our results indicate that impulsive-like particle injection is disfavored here. With this model we provide a basis for analyzing ambient electron spectra in terms of injection requirements, with implications for particle acceleration modes.
blazar_spectra_poster.pdf
Simulation of diffusive particle propagation and related TeV $\gamma$-ray emission at the Galactic Center 1h
Observations of the Galactic Center with the H.E.S.S. instrument have led to the detection of an extended region of diffuse TeV $\gamma$-ray emission. The origin of this emission is not yet fully understood, although the spatial correlation between the density distribution of giant molecular clouds located at the center of our Galaxy and the intensity of the observed $\gamma$-ray excess points towards a hadronic production scenario. The amount of energy required to accelerate the charged hadrons producing the observed $\gamma$-ray emission could have been delivered by a single supernova explosion. Assuming that highly energetic particles have been released by a single central source, we analyzed whether the diffusion of relativistic hadrons is fast enough to produce an extended TeV emission, through interactions with ambient matter, as observed. We numerically analyzed charged-particle motion in turbulent magnetic fields with regard to the environmental conditions of the Galactic Center region. We present diffusion coefficients derived from a statistical analysis of the tracking of ensembles of particles in such a turbulent environment. The derived diffusion coefficients were used to simulate the diffuse $\gamma$-ray emission from the Galactic Center region via a discretization of the diffusion equation. The results of this modeling are presented and compared to the H.E.S.S. measurement, including both spectral and morphological analyses.
Speaker: Alexander Ziegler (ECAP, University of Erlangen-Nuremberg, Germany)
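As an illustration of the last step mentioned above, the diffusion equation $\partial n/\partial t = D\,\nabla^2 n$ can be discretized with an explicit finite-difference scheme and evolved from an impulsive central injection. The 1D sketch below uses an assumed, constant diffusion coefficient purely for illustration; the actual study is three-dimensional and uses the coefficients derived from the particle tracking.

```python
import numpy as np

D  = 1e28            # assumed diffusion coefficient in cm^2/s
L  = 3.0e20          # half-size of the box, roughly 100 pc in cm
nx = 401
dx = 2 * L / (nx - 1)
dt = 0.4 * dx**2 / (2 * D)      # respects the explicit stability limit dt <= dx^2 / (2 D)

n = np.zeros(nx)
n[nx // 2] = 1.0                # impulsive injection by a single central source

t, t_end = 0.0, 3.15e10         # evolve for about 1000 years
while t < t_end:
    lap = (np.roll(n, 1) - 2 * n + np.roll(n, -1)) / dx**2
    lap[0] = lap[-1] = 0.0      # crude boundary handling; the pulse never reaches the edges here
    n += D * dt * lap
    t += dt

x_pc = (np.arange(nx) - nx // 2) * dx / 3.086e18
rms = np.sqrt((n * x_pc**2).sum() / n.sum())
print(f"rms displacement after ~1000 yr: {rms:.1f} pc (analytic sqrt(2*D*t) ~ 8 pc)")
```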
Simulation study on a large field of view cherenkov telescope 1h
A large field of view and a low threshold energy are highly desirable properties for ground-based observations of high-energy GRBs. However, a larger field of view is difficult to achieve for current imaging atmospheric Cherenkov telescopes (IACTs), and a threshold below O(100) GeV is also challenging for current EAS arrays. An alternative solution is to adopt a refractive optics system for IACTs to enlarge the field of view while keeping a low threshold energy. In this work, simulation studies of the effective area, angular resolution and gamma-ray sensitivity for such a large-field-of-view IACT are presented.
Speaker: Dr Yi Zhang (IHEP)
Status and plans for the Array Control and Data Acquisition System of the Cherenkov Telescope Array 1h
The Cherenkov Telescope Array (CTA) is the next-generation atmospheric Cherenkov gamma-ray observatory. CTA will consist of two installations, one in each hemisphere, containing tens of telescopes of different sizes. The CTA performance requirements and the inherent complexity associated with the operation, control and monitoring of such a large distributed multi-telescope array lead to new challenges in the field of gamma-ray astronomy. The ACTL (array control and data acquisition) system will consist of the hardware and software necessary to control and monitor the CTA array, as well as to time-stamp, read out, filter and store the scientific data at aggregated rates of a few GB/s. The ACTL system must be flexible enough to permit the simultaneous automatic operation of multiple sub-arrays of telescopes with minimum personnel effort on site. One of the challenges of the system is to provide a reliable integration of the control of a large and heterogeneous set of devices. Moreover, the system is required to be ready to adapt the observation schedule, on timescales of a few tens of seconds, to account for changing environmental conditions or to prioritize incoming scientific alerts from time-critical transient phenomena such as gamma-ray bursts. This contribution provides a summary of the main design choices and plans for building the ACTL system.
Speaker: Igor Oya (DESY Zeuthen)
ACTL_ICRC2015_Poster_v3.pdf
Status of Water Cerenkov Detector Array of LHAASO project 1h
The Large High Altitude Air Shower Observatory (LHAASO) is planned to be built next year. As an important component of the LHAASO project, the Water Cherenkov Detector Array (WCDA) is a high-sensitivity gamma-ray and cosmic-ray detector, whose main purpose is to survey the northern sky for VHE gamma-ray sources. Currently, the R&D is largely finished, including a prototype water Cherenkov detector and an engineering array at 1% scale (3×3 cells). In this paper, the basic design, performance and R&D work of WCDA will be described.
Speakers: Bo Gao (Institute of High Energy Physics, Chinese Academy of Sciences), Dr Mingjun Chen (Institute of High Energy Physics, Chinese Academy of Sciences)
Study of the VHE diffuse emission in the central 200 pc of our Galaxy with H.E.S.S. 1h
The Very High Energy Galactic Center Ridge was revealed by H.E.S.S. in 2006, after subtraction of the point sources HESS J1745-290, possibly associated with Sgr A*, and HESS J1747-281, associated with the composite supernova remnant G0.09+0.1. The hard spectrum of the Ridge emission and its spatial correlation with the local gas density suggest that the emission is due to collisions of multi-TeV cosmic rays with the dense clouds of interstellar gas present in this region. The much larger H.E.S.S. dataset (250 hrs) that is now available for this region and the improved analysis method dedicated to faint emission allow us to reconsider the characterization of this gamma-ray emission in the central 200 pc of our Galaxy through a detailed morphology study and the extraction of the total energy spectrum with much better accuracy. To test the various contributions to the total gamma-ray emission, we use a 2D maximum likelihood approach that allows us to constrain a phenomenological model of the signal. We discuss the nature of the various components and their implications for the cosmic-ray distribution in the central 200 pc of our Galaxy. Finally, we will reveal an additional source in this region and discuss its potential nature.
Speaker: Dr Anne Lemière (APC)
Study on the large dimensional refractive lens for the future large field-of-view IACT 1h
The sub-100 GeV to TeV range is a crucial energy window in gamma-ray astronomy because of its important role in connecting the space experiments and the ground-based observations. Observations in this energy range are expected to provide rich information about the high-energy emission from GRBs and AGNs, with which the EBL can be measured and knowledge about galaxy formation and the evolution of the early universe can be obtained. One pursuit of the next-generation Imaging Atmospheric Cherenkov Telescopes (IACTs) is to achieve a larger field of view by using a refractive optics system as the light collector. In this work, preliminary test results on the optical properties (transmittance, angular resolution, etc.) of a prototype 0.9 m diameter water lens are presented and discussed.
Speakers: Prof. Luobu Danzeng (Tibet University), Prof. Tianlu Chen (Tibet University)
Systematically characterizing regions of the First Fermi-LAT SNR Catalog 1h
While supernova remnants (SNRs) are widely thought to be powerful cosmic-ray accelerators, indirect evidence comes from a small number of well-studied cases. Here we systematically determine the gamma-ray emission detected by the Fermi Large Area Telescope (LAT) from all known Galactic SNRs, disentangling it from the sea of cosmic-ray-generated photons in the Galactic plane. Using LAT data we have characterized the 1-100 GeV emission in 279 regions containing SNRs, accounting for systematic uncertainties caused by source confusion and instrumental response. We have also developed a method to explore some systematic effects on SNR properties caused by the modeling of the interstellar emission (IEM). The IEM contributes substantially to the gamma-ray emission in the regions where SNRs are located. To explore the systematics we consider different model construction methods and different model input parameters, and independently fit the model components to the gamma-ray data. We will describe this analysis method in detail. In the First Fermi-LAT SNR Catalog there are 30 sources classified as SNRs, using spatial overlap with the radio position. For all the remaining regions we evaluated upper limits on the SNRs' emission. In this work we present a study of the aggregate characteristics of SNRs, such as comparisons between GeV and radio sizes, fluxes and spectral indices, and with TeV measurements.
Speaker: Dr Francesco de Palma (INFN and Pegaso University)
TAIGA experiment – status, first results and perspectives 1h
The aim of TAIGA (Tunka Advanced Instrument for cosmic ray physics and Gamma Astronomy) is to construct in the Tunka Valley (50 km from Lake Baikal) a complex, hybrid array for multi-TeV gamma-ray astronomy and CR studies. The array will consist of a wide-angle Cherenkov array, Tunka-HiSCORE, with ~3 km$^2$ area, a network of IACTs, and muon detectors with a total area of up to 2000 m$^2$. We present the current status of the array construction, its sensitivity to local sources of gamma rays, and first results from the operation of the array prototype.
Speaker: Prof. Leonid Kuzmichev (SINP MSU)
TeV gamma-rays from the globular cluster NGC 6624 containing energetic millisecond pulsar J1823-3021A 1h
Recently the very energetic millisecond pulsar J1823-3021A was discovered to emit pulsed GeV gamma-rays in the globular cluster NGC 6624. Assuming that this pulsar injects relativistic leptons into its surroundings (as expected from modelling of radiative processes within the inner pulsar magnetosphere), we calculate the minimum level of TeV gamma-ray emission expected to be produced by these leptons through inverse Compton scattering of the stellar radiation from the globular cluster NGC 6624. The results of the calculations are confronted with the sensitivities of present and future Cherenkov telescopes.
The first GCT camera for the Cherenkov Telescope Array. 1h
The Gamma-ray Cherenkov Telescope (GCT) is proposed to be part of the Small Size Telescope (SST) array of CTA (the Cherenkov Telescope Array). Its dual-mirror optical design allows the use of a compact camera of roughly 0.4 m diameter, the curved focal plane of which is equipped with 2048 pixels of ~0.2° angular size, resulting in a field of view of ~9°. The GCT camera is designed to record the flashes of Cherenkov light from gamma-ray initiated electromagnetic cascades, which last only a few tens of nanoseconds. Modules based on "TARGET" ASICs provide the required fast electronics, allowing sampling at 1 GSample/s and digitization, as well as a first level of triggering using the analogue outputs of the photosensors. The GCT camera is the first fully assembled camera prototype for a dual-mirror Cherenkov telescope ever built and is currently being commissioned in the UK. On-telescope testing of its performance is expected to take place in France in September 2015. In this paper we give a detailed description of the mechanics and electronics of the camera and discuss recent progress with testing and commissioning.
Speaker: Andrea De Franco (University of Oxford)
The FRaNKIE code: a tool for calculating multi-wavelength interstellar emissions in galaxies 1h
The Fast Radiation transport Numerical Kalculation for Interstellar Emission (FRaNKIE) code is a Monte Carlo code for calculating the electromagnetic emissions in galaxies. The code is highly parallel and optimised for both CPUs and co-processor accelerators. The code takes into account the interaction of the photon field with the interstellar medium in a self-consistent way, providing a detailed model for the interstellar radiation field. I will describe the implementation details of the code and present results of its application to the problem of calculating the interstellar radiation field of the Milky Way. The radiation field is an essential input to CR propagation codes for calculating the cosmic-ray lepton energy losses from inverse Compton scattering and the resulting gamma-ray emission.
Speaker: Dr Troy Porter (Stanford University)
The H.E.S.S. multi-messenger program 1h
Based on fundamental particle physics processes, like the production and subsequent decay of pions in interactions of high-energy particles, close connections exist between the acceleration sites of high-energy cosmic rays and the emission of high-energy gamma rays, high-energy neutrinos and other messengers like gravitational waves. In most cases these connections provide both spatial and temporal correlations of the different emitted particles. The combination of the complementary information provided by these messengers makes it possible to lift ambiguities in the interpretation of the data and enables novel and very sensitive analyses. In this contribution we introduce and describe the H.E.S.S. multi-messenger program. The core of this newly installed program is the combination of high-energy neutrinos and high-energy gamma rays. We furthermore present searches for high-energy gamma-ray emission in coincidence with Fast Radio Bursts (FRBs) and gravitational waves. We provide an overview of current and planned analyses and present recent results.
Speaker: Dr Fabian Schüssler (Irfu, CEA-Saclay)
The measurement of the expansion rate of the Universe from gamma-ray attenuation 1h
The extragalactic background light (EBL) contains fundamental cosmological and galaxy evolution information. Very-high-energy observations of extragalactic sources, such as blazars, can be used to extract this information because of the pair-production interaction between gamma-ray and EBL photons. We present (almost) simultaneous broad-band data for a dozen BL Lacs that allow us to make the first statistically significant detection of the cosmic gamma-ray horizon (CGRH), which is a measure of how far gamma-ray photons of different energies can travel through the Universe before being significantly attenuated by the EBL. From a comparison of our CGRH detection with an EBL model built from multiwavelength data taken with deep galaxy surveys, we conclude that there is no significant amount of light escaping detection by galaxy surveys, at least in the low-redshift Universe. This CGRH detection also allows us to present an independent and novel technique aimed at measuring the expansion rate of the Universe from gamma-ray observations.
Speaker: Dr Alberto Dominguez (Clemson University)
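The cosmic gamma-ray horizon used above is the energy at which the EBL optical depth reaches unity for a source at redshift z, the observed flux being attenuated as $F_{\rm obs}(E)=F_{\rm int}(E)\,e^{-\tau(E,z)}$. The sketch below solves $\tau(E,z)=1$ for a crude, purely illustrative parameterization of $\tau$; real analyses use tabulated EBL models.

```python
import numpy as np
from scipy.optimize import brentq

def tau_toy(energy_tev, z, index=1.5):
    """Purely illustrative optical depth that grows with energy and redshift.
    This is an assumed toy form, not any published EBL model."""
    return 3.0 * z * energy_tev ** index

def horizon_energy(z):
    """Energy (TeV) at which tau(E, z) = 1, i.e. the cosmic gamma-ray horizon."""
    return brentq(lambda e: tau_toy(e, z) - 1.0, 1e-3, 1e3)

for z in (0.03, 0.1, 0.3):
    e_h = horizon_energy(z)
    print(f"z = {z:.2f}: E_CGRH ~ {e_h:5.2f} TeV, "
          f"surviving fraction at E_CGRH = {np.exp(-tau_toy(e_h, z)):.2f}")
```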
The stereo Topo-trigger: a new concept of stereoscopic trigger system for imaging atmospheric Cherenkov telescopes 1h
Imaging atmospheric Cherenkov telescopes such as the MAGIC telescopes are built to achieve the lowest possible energy threshold, and the trigger system of these telescopes is one of the most important parts in achieving it. The main problem when decreasing the energy triggered by an IACT is the rapid increase of accidental triggers caused by the ambient light and the afterpulses of the photomultipliers. The coincidence trigger between the telescopes strongly suppresses the accidental rate recorded by the telescopes. At lower trigger thresholds, however, it is difficult to discriminate at the trigger level between accidental triggers and real cosmic events. In this contribution we present a topological trigger, dubbed Topo-trigger, a novel technique that discriminates between events triggered by cosmic rays and accidental triggers, allowing a decrease of up to 85% in the accidental events triggering the MAGIC system in stereo. We have simulated and tested this algorithm for the MAGIC telescopes while keeping more than 99% of the triggered gamma rays. According to simulations, this trigger system increases the collection area at the analysis level by about 30% at the lowest energies and by 10-20% at the energy threshold. The decrease in the analysis energy threshold of the telescope is ~8%. The selection algorithm proposed here was tested on real MAGIC data taken with the current trigger configuration, and we find that no triggers are lost due to the proposed algorithm. A full implementation of the Topo-trigger was installed in MAGIC at the end of 2014 and the first results on its performance will also be shown.
Speaker: Ruben Lopez-Coto (Institut de Fisica d'Altes Energies - IFAE)
The TIBET AS+MD Project; progress report 2015 1h
We plan to build a large (approximately 10,000 m$^2$) water-Cherenkov-type muon detector array under the existing Tibet air shower array at 4,300 m above sea level, to observe 10-1000 TeV gamma rays from cosmic-ray accelerators in our Galaxy with a wide field of view at a very low background level. A gamma-ray-induced air shower has significantly fewer muons than a cosmic-ray-induced one. Therefore, we can effectively discriminate between primary gamma rays and cosmic-ray background events by counting the number of muons in an air shower event with the muon detector array. We will give a progress report on the project, part of which started data-taking in 2014.
Speaker: Dr Masato TAKITA (Institute for Cosmic Ray Research, the University of Tokyo)
The VHE gamma-ray periodicity of PG1553+113: a possible probe of a system of binary supermassive black hole 1h
The blazar PG 1553+113 is an active galaxy with uncertain redshift detected at very high energies (VHE; E > 100 GeV) both during high and quiescent states. We have observed PG 1553+113 at VHE with the MAGIC telescopes at La Palma since 2005, making this blazar one of the best-studied MAGIC sources. Recently, the Fermi/LAT collaboration has reported the detection of a hint of a ~2-year periodicity in the integral flux emitted by the source, both at high-energy gamma rays (E>100 MeV) and at optical wavelengths. Remarkably, this periodicity, if confirmed, might be interpreted as evidence for the presence of a binary supermassive black hole system in the nucleus of PG 1553+113. In this contribution, we present the result of our analysis of 10 years of PG 1553+113 MAGIC data. In particular, we test the hypothesis of a periodic modulation of the overall emitted flux at VHE, search for evidence of correlation with the emission detected at other wavelengths, and critically discuss our findings in the framework of the binary supermassive black hole model.
Speaker: Elisa Prandini (University of Geneva)
Time calibration for the LHAASO-WCDA project 1h
As a major component of the LHAASO project, the Water Cherenkov Detector Array (WCDA) has as its main physics goal a survey of the northern sky for VHE gamma-ray sources. One of the key issues in fulfilling this goal is the angular resolution and the pointing precision of the detector, which depend strongly on the time calibration of the whole array. In this paper, a new time calibration technique based on LEDs and plastic fibers is introduced. The test results of a prototype system of one cluster consisting of 40 fibers show that a precision of 0.1 ns, which meets the requirement of the experiment, can be achieved. This technique has advantages such as robustness, scalability and cost effectiveness, and therefore has great application potential for other large-area air shower experiments.
Speakers: Dr Bo Gao (Institute of High Energy Physics, Chinese Academy of Sciences), Dr Hanrong Wu (Institute of High Energy Physics, Chinese Academy of Sciences), Mr Huicai Li (School of Physics, Nankai University), Dr Mingjun Chen (Institute of High Energy Physics, Chinese Academy of Sciences), Ms Xiaojie Wang (Institute of High Energy Physics, Chinese Academy of Sciences), Prof. Zhiguo Yao (Institute of High Energy Physics, Chinese Academy of Sciences)
Time-dependent injection as a model for rapid blazar flares 1h
The detection of very rapid flares on the order of minutes in blazars has spawned a lot of theoretical activity. Even though many models take time-dependent effects (such as varying magnetic fields, etc.) into account, the time-dependent nature of the injection process is usually omitted. In this presentation it is shown, using the standard one-zone model, that time-dependent injection has strong effects on the resulting spectra of blazars. Due to the time dependence of the injection, the particles cannot reach an equilibrium state and the kinetic equation for the electron distribution function becomes non-linear. This leads to (i) much faster electron cooling and (ii) a change in the cooling process after some time, depending on the injection parameters. This change in the cooling process has direct and very significant effects on the spectrum of a flaring blazar.
Speaker: Michael Zacharias
Triggerless scheme and trigger pattern of the LHAASO-WCDA project 1h
The Water Cherenkov Detector Array (WCDA) of the LHAASO project is to be built in Daocheng, Sichuan Province of China. It comprises 4 neighboring ponds, each 150 m $\times$ 150 m in size and divided into 900 cells, with a PMT in each cell. A triggerless scheme is to be adopted for the data acquisition system, in which all the single-channel signals are synchronized and transferred to an online computing cluster, where they are built into events based on a dedicated trigger pattern. The trigger pattern is introduced in this paper. Its main features are noise tolerance and scalability, and it can be generalized to other air shower experiments.
Speakers: Dr Bo Gao (IHEP, Beijing), Dr Hanrong Wu (IHEP, Beijing), Mr Huicai Li (School of Physics, Univeristy of Nankai), Dr Mingjun Chen (IHEP, Beijing), Ms Xiaojie Wang (IHEP, Beijing), Prof. Zhiguo Yao (IHEP, Beijing)
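On the software side, a triggerless scheme of this kind reduces to time-sorting the single-channel hits, clustering them, and keeping only clusters that satisfy a multiplicity condition. The sketch below implements a simplified gap-based event builder with illustrative parameters; it is a stand-in for, not a description of, the dedicated WCDA trigger pattern.

```python
import random

def build_events(hits, max_gap_ns=200.0, min_multiplicity=15):
    """Cluster time-sorted single-channel hits into candidate events.

    hits : iterable of (time_ns, channel_id) pairs from all channels.
    A hit joins the current cluster if it follows the previous hit by less
    than `max_gap_ns`; clusters below `min_multiplicity` are dropped as noise.
    Parameters are illustrative, not the WCDA trigger condition.
    """
    events, current = [], []
    for t, ch in sorted(hits):
        if current and t - current[-1][0] > max_gap_ns:
            if len(current) >= min_multiplicity:
                events.append(current)
            current = []
        current.append((t, ch))
    if len(current) >= min_multiplicity:
        events.append(current)
    return events

# Toy hit stream: 30 time-correlated "shower" hits plus 200 random noise hits.
random.seed(1)
noise  = [(random.uniform(0.0, 1e6), random.randrange(900)) for _ in range(200)]
shower = [(5e5 + random.gauss(0.0, 30.0), random.randrange(900)) for _ in range(30)]
events = build_events(noise + shower)
print(len(events), [len(e) for e in events])   # expect one cluster of multiplicity ~30
```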
Updated results from VERITAS on the Crab pulsar 1h
The Crab pulsar and its plerion are among the brightest and best-studied non-thermal astrophysical sources. The recent discovery of pulsed gamma-ray emission above 100 GeV from the Crab pulsar with VERITAS (the Very Energetic Radiation Imaging Telescope Array System) challenges commonly accepted pulsar emission models and puts the gamma-ray emission region far out in the magnetosphere, close to or even beyond the light cylinder. We present updated VERITAS results from the analysis of a data set that is twice the size of the original data set published in 2011. The results are discussed in the context of discriminating between different models put forward to explain the gamma-ray emission mechanisms and acceleration regions within the Crab pulsar's magnetosphere.
Speaker: Thanh Nguyen
Upper limits on diffuse gamma-rays measured with KASCADE-Grande 1h
KASCADE-Grande was a multi-detector array measuring individual air showers of cosmic rays in the energy range from 10 PeV up to 1 EeV. Based on the full data set measured by KASCADE-Grande, an upper limit on the flux of ultra-high-energy gamma rays in the primary cosmic rays is determined. The analysis is performed by selecting air showers with low muon content, exploiting the small fraction of secondary hadrons in gamma-ray showers with respect to hadronically induced cosmic-ray showers. A preliminary result on the 90% C.L. upper limit on the relative intensity of gamma-ray-induced showers with respect to all cosmic-ray primaries will be presented and discussed in comparison with limits reported by previous measurements.
Speaker: Dr Donghwa Kang (Karlsruhe Institute of Technology)
Upper limits on the VHE $\gamma$-ray flux from the ULIRG Arp 220 and other galaxies with VERITAS 1h
The cores of Ultra-Luminous InfraRed Galaxies (ULIRGs) are very dense environments, with a high rate of star formation and hence supernova explosions. They are thought to be sites of cosmic-ray acceleration, and are predicted to emit $\gamma$-rays in the GeV to TeV range. So far, no ULIRG has been detected in $\gamma$-rays. Arp 220, the closest ULIRG to Earth, has been well studied, and detailed models of $\gamma$-ray production inside this galaxy have been derived. They predict a rather hard $\gamma$-ray spectrum up to several TeV. Due to its large rate of star formation, high gas density, and its close proximity to Earth, Arp 220 is thought to be a very good candidate for observations in very-high-energy (VHE, above 100 GeV) $\gamma$-rays. Arp 220 was observed by the VERITAS telescopes for more than 30 hours with no significant excess over the cosmic-ray background. The upper limits on the VHE $\gamma$-ray flux of Arp 220 derived from these observations are the most sensitive limits presented so far and are starting to constrain theoretical models. The observations of Arp 220 are compared to the VERITAS flux limits derived for other galaxies.
Speaker: Henrike Fleischhack (DESY)
VERITAS Discovery of Very High-Energy Gamma-Ray Emission from RGB J2243+203 1h
In this talk, we report the VERITAS discovery of very-high-energy (E > 100 GeV) gamma-ray emission from RGB J2243+204, previously detected in radio and X-ray. This source is also consistent with the Fermi-LAT gamma-ray source 1FHL J2244.0+2020. RGB J2243+204 has been classified both as an intermediate-frequency-peaked BL Lac object and as a high-frequency-peaked BL Lac object in the past. Although the source displays a featureless spectrum, its distance has been constrained through optical imaging, allowing the redshift of the source to be estimated at greater than 0.39. The source was detected by VERITAS at a statistical significance > 5.7 sigma with 4 hours of VERITAS exposure between 21 Dec 2014 and 24 Dec 2014 (UT). A preliminary flux estimate of ~4% Crab above 180 GeV was previously announced in ATel #6849 (24 Dec 2014). In this talk, the complete VERITAS observations, analysis, and spectral results of RGB J2243+204 will be summarized. Quasi-simultaneous observations with VERITAS, Fermi-LAT and Swift XRT will also be presented.
Speaker: David Kieda (University of Utah)
VERITAS long-term (2006-2014) observations of the BL Lac object 1ES 0806+524 1h
The high-frequency-peaked BL Lac object 1ES 0806+524 (z=0.138) was discovered as a source of very-high-energy (VHE, E>100 GeV) gamma-ray photons in 2008 with the VERITAS telescope array, at a level of 1.8% of the Crab Nebula flux above 300 GeV. Since then, VERITAS has continued observing the source over multiple seasons, substantially improving the significance of the detection. We report the results of the analysis of the 2006-2014 VERITAS data, corresponding to a total exposure of about 80 hours. We present the new, average VHE spectrum of the source, together with the multi-year light curve constraining long-term VHE variability.
Speaker: Matteo Cerruti (Harvard-Smithsonian Center for Astrophysics)
VERITAS Observations of HESS J1943+213 1h
HESS J1943+213 is a very-high-energy (VHE; > 100 GeV) gamma-ray point source detected during the H.E.S.S. Galactic Plane Survey. Radio, infrared, X-ray, and GeV gamma-ray counterparts have been identified for HESS J1943+213; however, the classification of the source is still uncertain. Recent publications have argued primarily in favor of either an extreme BL Lac object behind the Galactic plane or a young pulsar wind nebula. We present deep VERITAS observations of HESS J1943+213, which provide the most significant VHE detection of the source so far, with >20 sigma excess. The source is detected at ~2% Crab Nebula flux above 200 GeV, consistent with the H.E.S.S. detection. The source spectrum is well fit by a power-law function. Moreover, no significant flux variability is detected over the course of VERITAS observations. We place the VERITAS results in a multi-wavelength context to comment on the HESS J1943+213 classification.
Speaker: Karlen Shahinyan (University of Minnesota)
VERITAS Observations of M31 (the Andromeda Galaxy) 1h
Diffuse gamma rays are tracers of cosmic rays, providing information on their origin and diffusion. M 31 (the Andromeda Galaxy) is the closest spiral galaxy to the Milky Way (d = 750 kpc) and is very well studied at all wavelengths, making it a prime target for the study of diffuse gamma-ray emission. The very-high-energy (VHE, E > 100 GeV) gamma-ray observatory VERITAS has conducted 45 hours of observations of M 31, and an upper limit on the VHE flux will be presented. An updated Fermi-LAT (100 MeV < E < 300 GeV) analysis will also be presented. These observations will be compared with predictions of the gamma-ray flux derived from the inelastic scattering of VHE cosmic rays off the interstellar medium (ISM) and the interstellar radiation field. M 31 provides an ideal opportunity to probe these mechanisms. Its proximity and spatial extent, significantly larger than the VERITAS point spread function but smaller than the field-of-view, enable the star-forming ring, 10 kpc from the galaxy core, with its dense ISM and numerous supernova remnants, to be resolved.
Speaker: Ralph Bird (UCD Dublin)
Water quality monitoring and measurement for the LHAASO-WCDA with the cosmic muon signals 1h
The Large High Altitude Air Shower Observatory (LHAASO) project is to be built at Daocheng, Sichuan Province, 4400 m a.s.l., in a few years. As one of the major components of the LHAASO project, LHAASO-WCDA, a water Cherenkov detector array with an area of 90,000 m², contains around 400,000 tons of purified water. To gain full knowledge of the water Cherenkov technique and to investigate the engineering issues, a 9-cell detector array has been built at the Yang-Ba-Jing site. With the array, a method of water quality monitoring and measurement with cosmic muon signals is studied, whose results show that a precision of around ten percent can be achieved, satisfying the requirement of the experiment. The results are compared with those from a full Monte Carlo simulation. This method is proposed to be applied in the LHAASO-WCDA project.
Speakers: Dr Bo Gao (Institute of High Energy Physics, CAS), Prof. Chunxu Yu (School of Physics, Nankai University), Dr Hanrong Wu (Institute of High Energy Physics, CAS), Mr Huicai Li (School of Physics, Nankai University), Dr Mingjun Chen (Institute of High Energy Physics, CAS), Ms Xiaojie Wang (Institute of High Energy Physics, CAS), Prof. Zhiguo Yao (Institute of High Energy Physics, CAS)
Poster 1 SH Theater Foyer
3D simulations of heliospheric propagation of heavy-ion solar energetic particles 1h
In recent years, a wealth of spacecraft measurements of heavy ion solar energetic particles has become available, thanks to data from the ACE and STEREO spacecraft. Interesting features in heavy ion time intensity profiles, such as the decay of the Fe/O ratio over time in some events, have been observed. Heliospheric propagation effects have been invoked in the literature as a possible cause of Fe/O decays. Recent modelling work has shown that drifts due to the gradient and curvature of the large scale Parker spiral magnetic field are a significant source of perpendicular transport for partially ionised heavy ions. Modelling these effects requires a fully 3D description. Here we present results of 3D test particle simulations of heavy ion SEP propagation in the heliosphere, for a Parker spiral magnetic field in a variety of scattering conditions. We simulate intensity profiles of heavy ions as would be observed at 1 AU, and compare them with recent data from STEREO and ACE.
Speaker: Silvia Dalla (University of Central Lancashire)
High-level artemisinin-resistance with quinine co-resistance emerges in P. falciparum malaria under in vivo artesunate pressure
Rajeev K. Tyagi1,2,7 na1,
Patrick J. Gleeson1,2,8 na1,
Ludovic Arnold1,2 na1,
Rachida Tahar3,4,
Eric Prieur1,2,
Laurent Decosterd5,
Jean-Louis Pérignon1,2,9,
Piero Olliaro6 &
Pierre Druilhe1,2
Humanity has become largely dependent on artemisinin derivatives for both the treatment and control of malaria, with few alternatives available. A Plasmodium falciparum phenotype with delayed parasite clearance during artemisinin-based combination therapy has become established in Southeast Asia, and is emerging elsewhere. Therefore, we must know how fast, and by how much, artemisinin-resistance can strengthen.
P. falciparum was subjected to discontinuous in vivo artemisinin drug pressure by capitalizing on a novel model that allows for long-lasting, high parasite loads. Intravenous artesunate was administered, using either single flash-doses or a 2-day regimen, to P. falciparum-infected humanized NOD/SCID IL-2Rγ−/− immunocompromised mice, with progressive dose increments as parasites recovered. The parasite's response to artemisinins and other available anti-malarial compounds was characterized in vivo and in vitro.
Artemisinin resistance evolved very rapidly up to extreme, near-lethal doses of artesunate (240 mg/kg), an increase of > 3000-fold in the effective in vivo dose, far above resistance levels reported from the field. Artemisinin resistance selection was reproducible, occurring in 80% and 41% of mice treated with flash-dose and 2-day regimens, respectively, and the resistance phenotype was stable. Measuring in vitro sensitivity proved inappropriate as an early marker of resistance, as IC50 remained stable despite in vivo resistance up to 30 mg/kg (ART-S: 10.7 nM (95% CI 10.2–11.2) vs. ART-R30: 11.5 nM (6.6–16.9), F = 0.525, p = 0.47). However, when in vivo resistance strengthened further, IC50 increased 10-fold (ART-R240 100.3 nM (92.9–118.4), F = 304.8, p < 0.0001), reaching a level much higher than ever seen in clinical samples. Artemisinin resistance in this African P. falciparum strain was not associated with mutations in kelch-13, casting doubt over the universality of this genetic marker for resistance screening. Remarkably, despite exclusive exposure to artesunate, full resistance to quinine, the only other drug sufficiently fast-acting to deal with severe malaria, evolved independently in two parasite lines exposed to different artesunate regimens in vivo, and was confirmed in vitro.
P. falciparum has the potential to evolve extreme artemisinin resistance and more complex patterns of multidrug resistance than anticipated. If resistance in the field continues to advance along this trajectory, we will be left with a limited choice of suboptimal treatments for acute malaria, and no satisfactory option for severe malaria.
Artemisinin (ART) derivatives have become the keystone of malaria treatment and control [1]. ART has the advantage of killing all asexual blood stages of Plasmodium falciparum parasites, as well as affecting sexual development [2], resulting in rapid clinical and parasitological cure at an individual level, and a reduction in malaria transmission rates on a public health scale. All currently recommended first- and second-line treatments for uncomplicated malaria are a combination of ART with an unrelated antimalarial (artemisinin-based combination therapy, ACT) [1]. For severe malaria, artesunate (a type of ART; AS) is the first-line treatment, and quinine is the only available alternative [1]. Malaria control is thus highly reliant on ART, and adequate replacements are not forthcoming [3].
Historically, Southeast Asia has been the epicenter of malaria drug-resistance development – resistance to all major antimalarials has emerged there. P. falciparum resistance to ART (ART-R) given as part of ACT, was first reported from western Cambodia in 2008 [2, 4] and has already spread across the Greater Mekong subregion [5,6,7,8,9,10,11]. The ART-R phenotype is recognized clinically as a prolongation of parasitemia clearance as measured by peripheral blood smears (delayed parasite clearance time; DPCT) in patients with uncomplicated falciparum malaria. Unexplained slow parasite clearance times have been reported with high frequency among Ugandan children treated with intravenous AS for severe malaria [12] and in East Africa, where residual submicroscopic parasitemia after ACT has been reported [13].
Infections with DPCT still show some therapeutic response to ART. Frank ART-R, a situation where ART would fail to cause an appreciable reduction of parasite levels in patients' blood, has not yet been documented [5, 14]. Concerningly, reports are starting to emerge of multidrug-resistant malaria with treatment failures to ART and other key drugs, including quinine [15, 16].
Understanding ART-R has proved challenging both in the field and the laboratory [5, 6, 17,18,19,20]. In contrast to other antimalarials, no significant correlation between clinical response to ART and conventional in vitro determination of the 50% drug inhibitory concentration (IC50) is seen [5, 6]. For in vivo studies, only non-human malaria parasites that infect rodents have been available [21, 22]. Recently, however, substantial progress has been made. A series of in vitro and clinical studies have characterized the variable susceptibility of different parasite blood-stages to ART [23] and identified kelch-13 as an important P. falciparum gene associated with ART-R [10]. Besides kelch-13, these studies (including genome wide association studies; GWAS) [24], associated a number of other malaria parasite genes, such as RAD5 (which lies within 10 kb of kelch-13), ferredoxin, tetratricopeptide, and nt1, with ART-R. The altered regulation of many genes and metabolic pathways rather than a single gene polymorphism might be responsible for the ART-R phenotype [25,26,27,28]. The ring-stage survival (RSA) and trophozoite maturation inhibition assays have been developed following the observation of stage-specific susceptibility to ART, and are more sensitive at detecting decreased ART responsiveness than conventional laboratory methods [29, 30].
Despite the advances made, we have no way to foretell if P. falciparum can evolve beyond DPCT towards higher, more troublesome, levels of resistance. The successive loss of other antimalarial compounds to the rising tide of resistance, together with the remarkable potency of ART, has led to a worldwide switch to ACT. The consequences of this major shift in drug pressure on the P. falciparum genome, particularly the speed and strength with which ART-R might evolve, are difficult to gauge using available models.
Having developed a novel host that facilitates in vivo studies with P. falciparum [31, 32] – the Pf-NSG model grafted with human erythrocytes (huRBC), which allows high, long-lasting P. falciparum loads – we systematically assessed the resilience of P. falciparum in the face of defined ART exposure in vivo and characterized the resulting phenotype, particularly the drug-sensitivity profile, using both in vivo and in vitro methods concurrently.
We saw a remarkably rapid selection of very high-grade, stable resistance to ART with a delayed shift in IC50. Remarkably, despite exclusive exposure of the parasite to AS, strong co-resistance to quinine also developed in the same strain. Once again, P. falciparum has demonstrated its adaptability and proven its rank as one of humanity's greatest challenges.
Four- to six-week-old male and female NOD/SCID IL-2Rγ−/− (NSG) mice (Charles River, France) were housed in sterile isolators and supplied autoclaved tap water with a γ-irradiated pelleted diet ad libitum. They were manipulated under pathogen-free conditions using a laminar-flow hood.
Human erythrocytes (huRBC)
HuRBC were used as host-cells for all in vitro and in vivo experiments. Packed huRBC were provided by the French Blood Bank (Etablissement Français du Sang, France) and taken from donors with no history of malaria. HuRBC were suspended in SAGM (Saline, Adenine, Glucose, Mannitol solution) and kept at 4 °C for a maximum of 2 weeks. Before injection, huRBC were washed thrice in RPMI-1640 medium (Gibco-BRL, Grand Island, NY, USA) supplemented with 1 mg of hypoxanthine per liter (Sigma-Aldrich, St Louis, MO, USA) and warmed for 10 min to 37 °C.
P. falciparum parasites and culture
The P. falciparum Uganda Palo Alto Marburg strain (FUP/CB or PAM) was used for all experiments [33]. This pan-sensitive strain is used as a laboratory reference for antimalarial assays [34, 35]. Over time, strains with different levels of ART-R were cryopreserved using the glycerol/sorbitol method as described [36]. Parasites were cultured in vitro with 5% hematocrit, at 37 °C with 5% CO2, using RPMI-1640 medium (Gibco-BRL) with 35 mM HEPES (Sigma-Aldrich), 24 mM NaHCO3, 10% albumax (Gibco-BRL), and 1 mg/L of hypoxanthine (Sigma-Aldrich). When required, cultures were synchronized by either plasmagel (Roger Bellon, Neuilly-sur-Seine, France) flotation [37] or exposure to 5% sorbitol (Sigma-Aldrich) [38]. At regular intervals, cultures were tested for Mycoplasma contamination using PCR.
In vivo replication of P. falciparum in the NSG-IV model
P. falciparum was maintained in huRBC grafted in NSG immunocompromised mice undergoing additional modulation of innate defenses using clodronate-containing liposomes, as described previously [31, 32] ('Pf-NSG' model). The proportion of huRBC in mouse blood (chimerism) was measured during experiments every 6 ± 4.5 days (mean ± standard deviation (SD)) by flow cytometry (Facscalibur, BD Biosciences, Franklin Lakes, NJ, USA) using a FITC-labeled anti-human glycophorin monoclonal antibody (Dako, Denmark). Human erythrocytes were found to constitute 77.4% ± 19.9% (mean ± SD) of erythrocytes in mouse blood during periods of drug pressure. Mice were inoculated intravenously with 300 μL of 1% non-synchronized P. falciparum-infected huRBC. Follow-up of infection was performed by daily Giemsa-stained thin blood films drawn from the tail vein. In this paper, we report parasitemia as a percentage of all erythrocytes found in mouse peripheral blood; the true percentage of huRBC parasitized in the mice is higher, proportional to the level of chimerism, because murine erythrocytes cannot be infected but were included in counts.
Estimates of the total parasite biomass in each mouse were calculated based on the mean corpuscular volume of mouse erythrocytes (45 fL), the mean corpuscular volume of huRBC (86 fL), hematocrit in the mice of 0.7, weight of NSG mice (25 g), and a conservative estimate of 5.5 mL of blood per 100 g of mouse weight using the following equation:
$$ \text{Number of infected RBC} = \frac{(0.055\ \text{mL/g}) \times (25\ \text{g}) \times 0.7}{86\ \text{fL} + (\text{mouse}_{\text{chimerism}} / \text{human}_{\text{chimerism}}) \times 45\ \text{fL}} \times (\text{huRBC parasitemia}) $$
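As a worked illustration of this estimate, here is a minimal Python sketch (not the authors' code); the constants are those quoted above, while the chimerism fractions and huRBC parasitemia passed in the example call are hypothetical.

```python
def infected_rbc_count(hu_parasitemia, mouse_chimerism, human_chimerism,
                       weight_g=25.0, blood_ml_per_g=0.055, hematocrit=0.7,
                       mcv_human_fl=86.0, mcv_mouse_fl=45.0):
    """Estimate the number of P. falciparum-infected erythrocytes in one mouse.

    hu_parasitemia  -- fraction of human RBC that are parasitized (e.g. 0.30)
    mouse_chimerism -- fraction of circulating RBC that are murine
    human_chimerism -- fraction of circulating RBC that are human
    The remaining defaults are the constants quoted in the equation above.
    """
    # Total red-cell volume in femtolitres (1 mL = 1e12 fL)
    red_cell_volume_fl = weight_g * blood_ml_per_g * hematocrit * 1e12
    # Red-cell volume accounted for by each human RBC: its own volume plus the
    # proportional share of murine RBC circulating alongside it
    volume_per_human_rbc_fl = mcv_human_fl + (mouse_chimerism / human_chimerism) * mcv_mouse_fl
    number_of_human_rbc = red_cell_volume_fl / volume_per_human_rbc_fl
    return number_of_human_rbc * hu_parasitemia

# Hypothetical example: ~77% human chimerism and 30% of huRBC parasitized
print(f"{infected_rbc_count(0.30, 0.23, 0.77):.2e}")  # ~2.9e+09 infected cells
```

With these illustrative inputs the estimate is of the order of 10⁹ infected cells per mouse, the same order as the per-mouse biomasses discussed later in the paper.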
In vivo induction of drug resistance
Mice were initially infected with drug-naïve parasites from in vitro culture of cryopreserved stabilates and subsequently put under discontinuous sub-therapeutic AS drug pressure. Sodium AS (a gift from Sigma-Tau, Italy) was dissolved in 10% dimethyl sulfoxide (DMSO) in RPMI-1640 (stock solution 30 mg/mL) each day of injection, then diluted 10-fold in RPMI-1640, sterilized through a 0.22 μm Millex filter (Millipore, MA, USA), further diluted in sterile RPMI-1640 as appropriate, and delivered intravenously via the retro-orbital sinus.
For the single-dose protocol, one dose of AS (ranging from 2.4 mg/kg to 240 mg/kg) was given, then parasitemia was monitored every 24 h and allowed to recover back to pre-treatment levels (AS pressure cycle; APC) before a further dose of AS was administered. For the 2-day protocol, two doses of AS (starting at 2.4 mg/kg/injection up to 80 mg/kg/injection) were delivered 24 h apart, then parasitemia was monitored every 24 h and was allowed to recover back to pre-treatment levels (APC) before a further two doses of AS were given (i.e., for a 2-day dose of 2.4 mg/kg, the mouse was injected with a total of 4.8 mg/kg AS per APC). The length of APC varied from case to case. When parasitemia failed to drop significantly (see below) after exposure to a given dose, the concentration was increased. The parasite strain used for the 2-day protocol had already developed resistance to a single dose of 30 mg/kg AS, and was then subjected to the 2-day regimen starting at 2.4 mg/kg/injection. Parasite strains were named ART-Rx, where x is the dose of AS (in mg/kg) to which resistance was established in that strain.
To determine what should be considered a significant drop in parasitemia, the normal day-to-day fluctuation of parasitemia was calculated from 13 non-drug-exposed NSG-IV mice (geometric mean of variability ± 18.3%, 95% confidence interval (CI) 12.5–27%). On this basis, the parasite was deemed to be resistant to a given dose when parasitemia failed to drop more than 27% by the next day (all reported measures of parasite reduction are from the day after drug administration). We analyzed the drop in parasitemia seen among five mice infected with the PAM-sensitive strain the day after a single administration of intravenous AS to define a 'sensitive response' to AS in this model. The mean reduction was 78.4% with an SD of 18.2%. We conservatively chose a drop in parasitemia greater than 60.2%, corresponding to the mean minus 1 SD, as the definition of a sensitive response to guide decisions about dosing. For definitive statistical comparisons of parasitemia responses, a paired t test was used. Stability of resistance was determined when required by re-challenging the parasite strain in its new host with the dose of drug to which it had last shown resistance. The ART-R P. falciparum strain was continuously perpetuated in vivo by sub-inoculation directly from one mouse to another by the intravenous route, except where otherwise indicated.
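To make the dosing decision rule concrete, the short Python sketch below (an illustration, not the authors' code) applies the two cut-offs defined above to a pair of parasitemia readings taken immediately before a dose and the following day.

```python
def classify_response(parasitemia_before, parasitemia_after,
                      resistance_cutoff=0.27, sensitive_cutoff=0.602):
    """Classify the day-after response to a dose of artesunate.

    A drop of <= 27% (the upper 95% CI of normal day-to-day fluctuation) is
    scored as resistant; a drop of > 60.2% (mean minus 1 SD of the sensitive
    strain's response) is scored as sensitive; anything in between is
    treated as indeterminate.
    """
    drop = (parasitemia_before - parasitemia_after) / parasitemia_before
    if drop <= resistance_cutoff:
        return "resistant"
    if drop > sensitive_cutoff:
        return "sensitive"
    return "indeterminate"

print(classify_response(30.0, 28.0))  # ~7% drop  -> resistant
print(classify_response(30.0, 5.0))   # ~83% drop -> sensitive
```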
In vitro drug sensitivity assays
The primary technique used to determine IC50 was the double-site enzyme-linked pLDH immunodetection assay, as previously described [39]. The 3H-hypoxanthine isotopic method [40] was used as a secondary confirmatory assay. All in vitro results shown below come from the double-site enzyme-linked pLDH immunodetection assay.
For both methods, P. falciparum parasites at 0.05% parasitemia, synchronized at ring stage, were incubated at 2% hematocrit in 96-well microtiter plates (Nunc, Sigma-Aldrich) with serial dilutions of various anti-malarial drugs in 200 μL of complete culture medium at 37 °C and 5% CO2 for 72 h. Non-drug-exposed wells were used as positive controls, and wells containing non-infected huRBC served as negative controls.
Stock solutions of the drugs (5 mL, 1.5 mg/mL) were prepared by dissolving sodium AS (gift from Sigma-Tau), chloroquine sulphate (Rhone-Poulenc-Rorer, Vitry, France), dihydroartemisinin (DHA; Sigma-Tau), pyrimethamine (ICN Biochemicals, Aurora, Ohio), quinine hydrochloride (Sanofi, Montpellier, France), lumefantrine (Sigma-Aldrich), and mefloquine hydrochloride (Hoffman-La Roche, Basel, Switzerland) in 10% DMSO in RPMI-1640, whereas amodiaquine dihydrochloride and halofantrine hydrochloride were dissolved in 30% DMSO in RPMI-1640. Drug solutions were diluted 10-fold in RPMI-1640, sterilized by filtration through a 0.22 μm filter, and serially diluted in a 96-well incubation plate.
IC50 values were determined by performing a four-parameter, variable-slope, non-linear regression analysis using an unconstrained least-squares fit in GraphPad Prism 6 software. Comparison of IC50 values and hillslopes was performed using the extra sum-of-squares F test (GraphPad, Inc., CA, USA).
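The sketch below illustrates the same kind of four-parameter, variable-slope fit using SciPy instead of GraphPad Prism; the concentration–response values are synthetic and serve only to show how an IC50 is read off the fitted parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_conc, bottom, top, log_ic50, hillslope):
    """Four-parameter logistic (variable-slope) dose-response model."""
    return bottom + (top - bottom) / (1.0 + 10 ** ((log_ic50 - log_conc) * hillslope))

# Synthetic parasite-growth signal (% of drug-free control) vs. drug concentration (nM)
conc_nm = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
signal = np.array([99, 97, 88, 55, 18, 6, 3], dtype=float)

params, _ = curve_fit(four_pl, np.log10(conc_nm), signal,
                      p0=[0.0, 100.0, np.log10(30.0), -1.0])
bottom, top, log_ic50, hillslope = params
print(f"IC50 = {10 ** log_ic50:.1f} nM, hillslope = {hillslope:.2f}")
```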
In vivo co-resistance studies
Mice infected with the ART-R240 strain were given either single treatments or combinations of the following regimens: three doses of quinine hydrochloride 73 mg/kg every 8 h intravenously, four doses of halofantrine hydrochloride 1 mg/kg every 24 h intravenously, one dose of amodiaquine dihydrochloride 73 mg/kg orally (delivered by oro-gastric cannula), one dose of chloroquine sulphate 73 mg/kg orally, or one dose of mefloquine hydrochloride 50 mg/kg intra-peritoneally, as previously described [41]. Stock solutions were made by dissolving 150 mg of quinine, chloroquine, and mefloquine in 5 mL of 10% DMSO, 150 mg of amodiaquine in 30% DMSO, and 60 mg of halofantrine in 30% DMSO, then diluted 10-fold in RPMI-1640, and sterilized by filtration before being made up to the final concentration.
Determination of mouse plasma drug concentrations
Plasma concentrations of AS and DHA in blood samples (40–60 μL) collected from the retro-orbital sinus in four mice at 1, 2, and 4 h post intravenous drug administration were determined by reversed phase liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) using an adaptation of the previously described method [42]. Murine plasma was purified by protein precipitation with acetonitrile, evaporation, and reconstitution in 10 mM ammonium formate/methanol (1:1) adjusted to pH 3.9 with formic acid. Separations were done on a 2.1 mm × 50 mm Atlantis dC18 3 μm analytical column (Waters, Milford, MA, USA). The chromatographic system (CTC Analytics AG, Zwingen, Switzerland) was coupled to a triple stage quadrupole Thermo Quantum Discovery Max mass spectrometer equipped with an electrospray ionization interface (Thermo Fischer Scientific Inc., Waltham, MA, USA). The selected mass transitions were m/z 221.1 → 163.1, with a collision energy of 14 eV for AS and DHA, and m/z 226.2 → 168.1, with a collision energy of 20 eV for the stable isotope-labeled internal standard DHA-13CD4. Inter-assay precision obtained with plasma QC samples at 30, 300, and 3000 ng/mL of DHA and AS was 1.3, 2.1, and 11.3%, and 7.3, 4.7, and 10.8%, respectively. Mean absolute deviation from nominal values of QC samples (30, 300, and 3000 ng/mL) during the analysis was 5.4, 5.9, and 1.3% and 3.8, 9.7, and 2.1%, for DHA and AS, respectively. The lower limit of quantification was 2 ng/mL. The laboratory participates in the External Quality Control program for anti-malarial drugs (http://www.wwarn.org/).
ART-R P. falciparum DNA was isolated from parasitized blood using QIAamp DNA mini kit (Qiagen, Limburg, Netherlands). A non-synonymous point mutation of ubp1 in P. chabaudi (PCHAS020720) was reported by others [43] as being a marker of ART resistance in a rodent model. The orthologous gene in P. falciparum (PF3D7_0104300) is conserved and was amplified using the primers (500 nM) forward: 5'-TACAGGCTTTATATAGTACAGTGTC-3′, reverse: 5'-TTTTCGTTCGTACTTATAGGCACAGG-3′, and AmpliTaq DNA Polymerase (1 U) (Hoffman-La Roche). The 451 bp PCR fragment was purified using the QIAquick PCR purification kit (Qiagen). Polymorphisms in PF3D7_0104300 were assessed by digesting the PCR fragment with the restriction enzymes Mae III for V3275F and Rsa I for V3306F, corresponding to V2697F and V2728F in PCHAS 020720, respectively.
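To illustrate the RFLP logic, the sketch below counts Mae III and Rsa I recognition sites (GTNAC and GTAC, respectively, as listed in REBASE) in an amplicon; the input sequence is a short hypothetical stand-in, not the actual 451 bp fragment, and a mutation that creates or destroys a site would change the resulting fragment pattern.

```python
import re

# Recognition sequences (REBASE): Mae III = GTNAC, Rsa I = GTAC
ENZYMES = {"MaeIII": "GT[ACGT]AC", "RsaI": "GTAC"}

def cut_sites(sequence, pattern):
    """Return the 0-based start positions of every recognition site."""
    return [m.start() for m in re.finditer(pattern, sequence.upper())]

def rflp_profile(amplicon):
    """Count recognition sites per enzyme in a PCR amplicon."""
    return {name: len(cut_sites(amplicon, pat)) for name, pat in ENZYMES.items()}

# Hypothetical 40-bp stand-in for the 451-bp amplicon
print(rflp_profile("ATGGTTACCGTACAAGTAACCTTGGTCACAATGGTACGAT"))
```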
Genetic sequencing
Genes of interest in P. falciparum coding for the proteins RAD5, cNBP, RPB9, PK7, FP2A, Pfg27, Pfcrt, and Pfnhe, two fragments overlapping the kelch-13 propeller domain [44,45,46], and Pfmdr1 gene were analyzed by PCR-sequencing. Primers used for Pfmdr1 PCR and sequencing were previously described by Basco and Ringwald [47], and Pfmdr1 gene copy analysis was performed as previously described [48]. For Pfnhe, two primer couples were designed for nested PCR on the basis of the 3D7 sequence. Control samples were taken from in vitro cultures of the P. falciparum 3D7 strain, and the sensitive progenitor PAM strain prior to any ART exposure (PAMwt); for the RAD5 experiment, additional control clinical isolates collected in the late 1990s were used from Brazil, Comoro Islands, Senegal, and Thailand. Experimental samples were recovered from P. falciparum-infected mice at various points during the ART resistance induction process (NSG415, 416, 424, 433, and 440). Genomic DNA was prepared using QIAamp DNA mini kit (Qiagen), according to the manufacturer's instructions, in 50 μL of Milli-Q water; 1 μL of DNA was PCR-amplified with 500 nM of the corresponding forward and reverse primers (Additional file 1), 0.8 mM dNTPs, 1.5 mM MgCl2, 2.5 U Taq DNA polymerase (Hoffman-La Roche) in a volume of 50 μL with the following cycling program: 2 min at 94 °C, 30 cycles of 15 s at 94 °C, 30 s at 57 °C, 45 s at 72 °C, and a final extension of 2 min at 72 °C. The total contents of the reaction were electrophoresed on a 1% agarose gel and stained with ethidium bromide. The amplicons were extracted from the gel using the QIAquick® gel extraction kit (Qiagen). Concentration of the amplicons was measured by NanoDrop (Thermo Fischer Scientific Inc.) at 260 nm wavelength before sequencing of both strands was performed (Plateforme de séquençage, Institut Cochin, Paris/Eurofins MWG Operon). Sequences were analyzed with DNAstar software (DNAStar, Madison, WI, USA).
Determination of the lowest effective dose (LED) for ART-sensitive progenitors
We infected seven mice with the PAM P. falciparum strain before any drug exposure to determine the LED. Single doses of 0.6, 0.3, and 0.15 mg/kg AS each caused a significant drop in parasitemia (> 27%, i.e., the upper 95% CI of normal fluctuation). Since 0.075 mg/kg AS failed to reduce parasitemia beyond normal day-to-day fluctuations, a single dose of 0.15 mg/kg AS (0.00375 mg AS/mouse) was established as the LED in this model (Fig. 1). Effective doses of AS produced pyknotic parasites as seen in humans (Additional file 2).
Determination of the lowest effective dose (LED). Parasitemia trends from individual NSG mice that each received a unique dose of (a) 0.6 mg/kg, 0.3 mg/kg, 0.15 mg/kg, or (b) 0.075 mg/kg of artesunate (AS) are shown. We infected mice with the Uganda Palo Alto Marburg (FUP/CB or PAM) progenitor strain before it was subjected to any drug pressure. Arrows indicate day of intravenous drug delivery. In panel a, day 0 represents the fourth day post-inoculation of mice. Results were reproducible in several mice treated at each dose
Rapid induction of high level ART resistance in P. falciparum
We applied intense, discontinuous, sub-curative AS drug pressure in vivo to high P. falciparum parasitemia in NSG mice using the intravenous route. After each drug exposure, parasitemia was allowed to recover to pre-treatment levels (APC) and, once resistance was established, the AS dose was increased (Fig. 2 and Additional file 3). For the single-dose regimen, the median APC length was 4 days (range 2–14 days).
Examples of selection for single-dose artemisinin resistance. Demonstrative parasitemia trends as seen at different time points during the resistance-selection process are shown from mice that received single flash doses of (a) 15 mg/kg, (b) 120 mg/kg, or (c) 240 mg/kg artesunate. Arrows indicate day of intravenous drug delivery. Results were reproduced in several mice as indicated in Table 1 and Table S2A
During the single-dose regimen, after pre-conditioning of the drug-naïve parasites with 3 single doses of AS in one mouse, we passed the parasite line through 7 generations of mice by sub-inoculation, using 5, 9, 6, 1, 6, 10, and 6 mice in each generation, respectively (total 43).
In the first generation, we let parasites multiply to high parasitemias (25–35%) creating a pool of ~1.3 × 10¹⁰ P. falciparum-infected erythrocytes. We saw resistance to 2.4 mg/kg AS after 3 APC in 1 out of 4 mice exposed to that dose, then to 3.3 mg/kg AS after 2 APCs in 1 out of 3 mice, and to 4 mg/kg AS in 2 out of 2 mice exposed to a mean 1.5 APC.
In the second generation, resistance to 3.3 mg/kg was established in another mouse (1 APC), and to 4 mg/kg in 2 further mice (mean 1.5 APC, range 1–2). Later, we confirmed 4 mg/kg resistance in a new host. Since resistance emerged so readily, we increased drug pressure to 15 mg/kg AS, to which 4 out of 5 exposed mice indeed became resistant (mean 5 APC, range 2–9) (Additional file 4).
Resistance to 30 mg/kg AS then emerged in 2 out of 4 mice exposed to that dose (mean 1.5 APC, range 1–2). However, it was not stable and, in the third generation, an average of 3.6 APC (range 2–5) was required before it was re-established (ART-R30). Subsequently, in 1 mouse, after applying variable-intensity drug pressure, resistance to 60 mg/kg AS was obtained (5 APC).
We confirmed the stability of resistance to 60 mg/kg AS (ART-R60) immediately after sub-inoculation into the fourth generation, and after just three further exposures to 120 mg/kg AS, the strain showed the first signs of resistance to that dose.
In the fifth generation, we observed resistance to 120 mg/kg AS (ART-R120) in all 4 mice exposed after an average of 3 APC (range 2–4). Then, in 1 mouse, the parasite went on to develop resistance against 240 mg/kg AS after 4 APC (78, 44, 60, and 13% reduction in parasitemia seen with each APC, respectively).
After sub-inoculation into the sixth generation, the parasite strain established resistance to 240 mg/kg AS in 4 out of 6 mice exposed to that dose (2.75 APC, 1–7).
In the seventh generation, resistance was immediately stable, after sub-inoculation, to 240 mg/kg AS in all 6 mice (ART-R240) (mean ± SD percentage drop in parasitemia of sensitive control 78.4% ± 18.2% vs. ART-R240 9.1% ± 6.3%; p = 0.0002).
Since further dose doubling would exceed the lethal dose for 50% of mice [41, 49], 240 mg/kg was the highest dose administered. We used NSG mice infected with the sensitive progenitor PAM strain as controls, and all treatments using the above doses were found effective. This represents a 3200-fold decrease in in vivo AS sensitivity, occurring within 51 APC over a 45-week period (Table 1, Additional file 2, Additional file 3, and Additional file 5). Further, we observed gametocytes in thin blood smears from mice infected with parasites expressing the ART-R phenotype (Additional file 6).
Table 1 Number of artesunate pressure cycles (APC) used to select for single-dose resistance in individual mice
Induction of resistance to a 2-day regimen
Two doses of the same AS concentration administered 24 h apart – a double dose (DD) – caused a significant reduction in parasitemia in animals in which a single dose of the same concentration had failed.
We started with a concentration of 2.4 mg/kg/dose for the DD regimen using a parasite strain already resistant to a single dose of 30 mg/kg AS. The ART-R30 strain became resistant to DD 2.4 mg/kg AS after just 1 APC. We passed the parasite line through four generations of mice with 6, 8, 3, and 4 mice in each generation, respectively. Once resistance was seen, we increased the dose concentration 2-fold, until reproducible resistance to DD 80 mg/kg AS (i.e., 160 mg/kg total) was achieved (ART-RDD80) (Fig. 3, Additional file 7, and Additional file 8) (mean ± SD percentage drop in parasitemia of sensitive control 95.9% ± 5.7% vs. ART-RDD80 25.7% ± 0.6%; p = 0.03).
Examples of selection for double-dose artemisinin resistance. Demonstrative parasitemia trends as seen at different time points during the resistance selection process are shown from mice that received a 2-day regimen comprising two doses 24 h apart of (a) 9.6 mg/kg, (b) 38.4 mg/kg, or (c) 80 mg/kg artesunate (i.e., total of 19.2 mg/kg, 76.8 mg/kg, or 160 mg/kg AS per APC). Arrows indicate day of intravenous drug delivery. Results were reproduced in several mice as indicated in Table S2B
It was possible to select for resistance to the highest dose used in 41% of the mice that survived the 2-day protocol, in contrast with 80% of mice that underwent the single-dose protocol (Table 2).
Table 2 Number of mice used and outcome for both dosing regimens
Verification of DHA concentration in mouse plasma
We measured levels of AS and DHA at 1 and 2 h post injection of 120 mg/kg AS in four ART-R120-infected mice (Additional file 9). Serum concentrations of DHA at 1 h were 3159, 3219, 1573, and 2423 ng/mL in each mouse, respectively, and we confirmed resistance to these levels on blood films drawn the following day. The mean t½ of DHA in the infected NSG-IV model was 36 min (range 20.9–53.2 min).
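For reference, the apparent elimination half-life can be recovered from any two post-distribution plasma concentrations under the assumption of mono-exponential (first-order) decay, as in the short Python sketch below; the concentration values used in the example are illustrative, not measured data.

```python
import math

def elimination_half_life(c1, c2, dt_min):
    """Apparent half-life (min), assuming first-order decay between two
    plasma concentrations c1 and c2 sampled dt_min minutes apart."""
    k = math.log(c1 / c2) / dt_min   # elimination rate constant (1/min)
    return math.log(2) / k

# Hypothetical DHA concentrations (ng/mL) at 1 h and 2 h post-dose
print(f"t1/2 = {elimination_half_life(2400.0, 760.0, 60.0):.0f} min")
```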
Stability of the ART-resistant phenotype
Stability was assessed in three different manners:
Transmission to new animals: The parasite was found to maintain stable AS resistance after sub-inoculation into fresh mice for 60 mg/kg AS in 1 out of 1 mouse, 120 mg/kg AS in 5 out of 11 mice, and 240 mg/kg AS in 6 out of 6 mice (Additional file 3).
Cryopreservation and in vitro growth: At various points, parasites resistant to a given AS concentration were cryopreserved and stored for 1–6 months, thawed, and then cultured in vitro for 8 to 12 days. After inoculation of cultured parasites into new mice, the ART-R30, ART-R120, and ART-R240 strains maintained their pre-freezing resistant phenotype (Fig. 4a, b).
Prolonged in vivo replication in the absence of drug pressure: We infected three mice with the ART-R120 strain, and confirmed resistance by administration of 120 mg/kg AS. The parasites were then allowed to grow in vivo without any drug pressure for 1 month. Upon re-treatment of the two surviving mice with 120 mg/kg AS, they both showed the same resistant response as had been seen 1 month prior (mean ± SD percentage drop in parasitemia, start: 10% ± 14.1% vs. end: 8.4% ± 11.8%; p = 0.94). The in vitro response also remained unchanged (IC50 AS: F = 0.03, p = 0.87; IC50 DHA: F = 1.1, p = 0.3) (Fig. 4c, d).
Evidence for stability of artemisinin resistance. a Following cryopreservation of resistant parasites with unchanged IC50: parasitemia trends from mice infected with the ART-R30 strain following cryopreservation and cultivation in vitro are shown. Arrows indicate day of re-challenge with 30 mg/kg AS. b Following cryopreservation of parasites with increased IC50: parasitemia trends from animals infected with ART-R120 (orange) and ART-R240 (purple, pink) following cryopreservation and cultivation in vitro. Arrows indicate the day of re-challenge with either 120 or 240 mg/kg artesunate (AS). c In vitro response following in vivo replication without drug pressure: In vitro sensitivities for AS and dihydroartemisinin (DHA) measured for ART-R120 parasites grown ex vivo, that were sampled before and after 1 month of drug pressure-free in vivo replication (see d), are tabulated. d In vivo following drug free replication: We maintained the ART-R120 parasite in vivo for 4 weeks without drug pressure in three mice; parasitemia trends of the two mice that survived are shown (red, blue). Challenges performed before and after treatment with 120 mg/kg AS (arrows) show stability of the resistant phenotype. We employed a lower intensity huRBC grafting protocol for this experiment to increase mouse survival, which caused a drop in parasitemia in the interim
In vitro drug sensitivity profiles of ART-R parasites show a two-step pattern
We monitored IC50 values over the course of resistance development for both single-dose and 2-day regimens, and compared them to the sensitive progenitor.
The initial IC50 (95% CI) values for the sensitive strain to AS and DHA were 10.7 nM (10.2–11.2) and 13.8 nM (12.9–14.6), respectively. The ART-R30 strain did not show any increase in IC50 for AS (11.5 nM (6.6–16.9); F = 0.525, p = 0.47); however, there was a significant change in the slope of the curve compared to the sensitive control (hillslope − 4.4 (–6 to –3.6) vs. − 1.9 (–6.4 to –0.8); F = 7.5, p = 0.008). It was not until the strain became resistant to 120 mg/kg AS in vivo that the IC50 rose sharply for both AS (to 82.5 nM (69.5–95.8); F = 191.3, p < 0.0001) and DHA (to 54.6 nM (51.6–57.6); F = 300.3, p < 0.0001). The ART-R240 strain reached an IC50 of 100.3 nM (92.9–118.4) (F = 304.8, p < 0.0001) for AS.
In parasites submitted to a 2-day regimen, we saw the same pattern, with a delayed shift in IC50 (Fig. 5 and Additional file 10).
In vitro artesunate sensitivities at different levels of in vivo resistance. In vitro artesunate (AS) dose–response curves, with SD error bars, are shown for parasites resistant in vivo to (a) single dose AS 30 mg/kg (purple), 120 mg/kg (blue), and 240 mg/kg (red) or (b) 2-day regimen AS 19.2 mg/kg/dose (green) and 80 mg/kg/dose (orange), and compared to the artemisinin-sensitive progenitor strain (black). Mean IC50 values (nM) are indicated in parentheses
ART-R parasites are also resistant to quinine, amodiaquine, and halofantrine both in vivo and in vitro
Despite exclusive exposure to AS, the ART-R240 parasite strain showed markedly decreased responses to quinine, amodiaquine, and halofantrine. Indeed, the IC50 increased by 4.6-fold to quinine (49.7 nM (46.6–52.8) vs. 226.9 nM (145.8–392.1); F = 23.12, p < 0.0001), 3.8-fold to halofantrine (7.9 nM (7.3–8.6) vs. 30.4 nM (25.9–34.9); F = 159.3, p < 0.0001), and 11.7-fold to amodiaquine (11.3 nM (10.6–12.1) vs. 132.4 nM (5.5–149.3); F = 243.7, p < 0.0001); similarly, the DD ART-RDD80 strain increased its IC50 2.1-fold to quinine (F = 98.9, p < 0.0001) and 4.5-fold to amodiaquine (F = 152.5, p < 0.0001). Sensitivities to chloroquine (50.1 nM (46.5–53.7) vs. 53 nM (42.7–68.3); F = 0.39, p = 0.54), mefloquine (41.7 nM (39.1–44.4) vs. 39.1 nM (34.1–44.5); F = 0.82, p = 0.37), lumefantrine (7.5 nM (6.3–8.7) vs. 7.8 nM (6.2–9.8); F = 0.13, p = 0.72), and pyrimethamine (16.2 nM (13.8–18.8) vs. 19.9 nM (16.5–24.6); F = 4.75, p = 0.05) remained unchanged (Fig. 6 and Additional file 10).
In vitro drug sensitivity profile of the ART-R240 strain. In vitro dose–response curves, with 95% CI error bands, of the ART-R240 strain (red) to (a) dihydroartemisinin (DHA), (b) amodiaquine, (c) mefloquine, (d) chloroquine, (e) quinine, (f) halofantrine, (g) lumefantrine, and (h) pyrimethamine are shown and compared to the sensitive progenitor strain (ART-S) used as a control in each experiment (black). Results were reproducible in several independent experiments. Mean IC50 values (nM) are indicated in parentheses. The probability (p) of these IC50 values being from curves measured using the same strain of parasite, as determined by the extra sum-of-squares F test, are shown for each drug
Since the model accommodates simultaneous in vitro and in vivo studies with P. falciparum, this pattern of in vitro co-resistance to main-stream anti-malarial drugs could also be analyzed in vivo (Fig. 7). Therapeutic doses of 219 mg/kg quinine did not induce any decrease in parasitemia in vivo using ART-R240 strain (n = 4, mean ± SD percentage drop in parasitemia 4.8% ± 6.8%); the same dose was effective for the sensitive strain (n = 2, mean ± SD percentage drop in parasitemia 92.2% ± 0.01%; p = 0.03). In addition, we confirmed in vivo resistance to amodiaquine in 4 mice (mean ± SD percentage drop in parasitemia sensitive control 76.6% ± 5.2% vs. ART-R240 9.3% ± 0.14%; p = 0.03), and halofantrine in 3 mice (median, range percentage increase in parasitemia after 3 days of treatment 16.9%, 15.9–114.4%). Conversely, we observed in vivo susceptibility to treatment with mefloquine (2 mice, mean ± SD percentage drop in parasitemia 67.5% ± 7.8%; p = 0.005, compared to normal day-to-day fluctuation) and chloroquine (3 mice, mean ± SD percentage drop in parasitemia 73.3% ± 0.7%; p < 0.001, compared to normal day-to-day fluctuations).
In vivo co-resistance of ART-R240 parasites to quinine, amodiaquine, and halofantrine. The ART-R240 strain, which had shown various patterns of co-resistance to other anti-malarials in vitro, was assessed in vivo with the same compounds either alone or in combination with artesunate. a The ART-R240 parasites showed full in vivo resistance to quinine (QN) 219 mg/kg (three doses of 73 mg/kg every 8 h IV). However, the same parasites in the same mice were sensitive to either chloroquine (CQ) (73 mg/kg PO) or mefloquine (MQ) (50 mg/kg i.p.). b, c In vivo resistance to amodiaquine (AQ) (73 mg/kg PO) and halofantrine (HF) (1 mg/kg IV per day, 4 consecutive days) confirmed in vitro indications. d As expected, resistance was seen to a combination of artesunate (AS) and amodiaquine (AQ), whereas parasites in the same animal remained susceptible to chloroquine. e Susceptibility to the artesunate-mefloquine combination was seen in keeping with in vitro results
We also addressed the in vivo response of ART-R240 to two critical combinations in clinical use: AS plus amodiaquine was ineffective (mean ± SD percentage drop in parasitemia 13.1% ± 0.14% vs. 76.5% in the sensitive control), while AS plus mefloquine was effective (mean ± SD percentage drop in parasitemia 66.8% ± 33.6%; p = 0.004, compared to normal day-to-day fluctuations) (Fig. 7d, e).
Thus, in vivo findings mirrored the in vitro sensitivity profiles.
Restriction fragment length polymorphism assessment of two putative polymorphisms, V3275F and V3306F, in the P. falciparum orthologue of the ubp1 gene revealed no such mutation in the ART-R240, ART-RDD38.4, or parent PAM strain.
Genetic sequencing of PF3D7_1343400 (RAD5 homolog) encoding a putative DNA-repair protein identified the non-synonymous a3392t SNP (MAL13–1718319) in all of the ART-R P. falciparum samples recovered from experimental mice, wherein they had shown resistance to single doses of 38.4 mg/kg, 120 mg/kg, and 240 mg/kg AS, and to a 2-day regimen of 80 mg/kg/day AS. This RAD5 mutation was not identified in the wild type progenitor PAM strain prior to undergoing ART exposure (PAMwt), nor in any of four control clinical P. falciparum isolates collected from Brazil, Senegal, Comoro Islands, and Thailand in the late 1990s. We did not identify any mutation of cNBP in the PAMwt control or ART-R parasites (Additional file 11).
Sequencing of the putative Kelch-13 propeller domain in PF3D7_1343700 (kelch-13) showed no difference between control (3D7, PAMwt) and ART-R strains; it revealed none of the 20 non-synonymous SNPs that have been reported from clinical isolates, nor the SNP identified in P. falciparum that evolved in vitro ART tolerance (M476I) after being cultured for 5 years under artemisinin pressure [45] (Additional file 11). None of the other non-synonymous SNPs in RPB9, PK7, FP2a, or Pfg27 reported in association with the in vitro ART tolerance seen in that strain were found either [45]. Sequencing of exon two of PfCRT revealed the rare CVIKT haplotype [50] linked to moderate resistance to chloroquine in agreement with the in vitro response (chloroquine IC50 = 53 nM).
Pfmdr1 analysis showed a duplication of gene copy number from 1 to 2 copies, and acquisition of the N86Y mutation after in vivo artemisinin drug pressure. No sequence changes were found in the 611 bp PfNHE fragment gene, flanking the DNNND repeat, which is related to quinine resistance [51].
Our results indicate that the P. falciparum human malaria parasite can evolve levels of resistance to ART that are much higher than the DPCT phenotype currently observed, and which could carry much graver consequences both for individual patients and global public health. The mechanisms of this stronger resistance are likely distinct from those underlying DPCT.
Progressive drug pressure in this model selected for high-level, stable resistance to ART in vivo rapidly and reproducibly. Parasites were characterized both in vivo and in vitro, yielding convergent data. The most concerning findings are (1) the degree of resistance selected for and (2) co-resistance to quinine, the only alternative for severe malaria. These results justify concerns about the potential of ART-R strengthening to insurmountable levels in patients, particularly if alternative treatments do not make it through the development pipeline fast enough to offset the prevailing ART drug pressure.
The Pf-NSG model – borne out of our malaria vaccine development project [31, 32] – includes a number of key features that facilitated the selection of ART-R P. falciparum. Mice had parasite biomasses ranging from 2.5 × 10⁹ to 3.8 × 10⁹ per mouse, which is in the range seen in an uncomplicated human infection [52]. These parasites were exposed to AS and its bio-active metabolites (primarily DHA) under similar pharmacokinetics to human infection through metabolic factors that cannot be accounted for in vitro. Drug disposition in these mice (DHA t½ of 36 min) is comparable to patients with malaria [53, 54]. While our drug administration protocol was designed to hasten the evolution of ART-R in vivo with single doses, it is not unrealistic to expect ART mono-therapy [55], poor treatment compliance [56, 57], and counterfeit products [58,59,60] to lead to similarly sub-therapeutic, resistance-selective dosing schedules in the field.
Notable differences between this model and human malaria are that both sexual recombination of parasite genes in the vector and effects of host immunity are by-passed through direct sub-inoculation between mice devoid of an adaptive immune system.
The model allowed us to exert progressive AS pressure, rapidly selecting for ART-R and to characterize resistant strains by their pattern of response to a range of antimalarial drugs in vivo and in vitro. Two stages could be distinguished during the evolution of ART-R. First, parasites showed substantial resistance to AS in vivo (up to absence of response to a single dose 30 mg/kg, i.e., 400-fold decrease in sensitivity) without an associated shift in IC50. This discrepancy between early in vivo resistance and conventional in vitro assays fits with the DPCT pattern seen in humans [6, 7, 61,62,63], supporting the relevance of this model. It confirms that IC50 is not a reliable marker of ART-R.
The phenotype of the second stage of ART-R in this model is in stark contrast to the clinical manifestations of DPCT. This extreme phenotype is clearly different as (1) there is a complete absence of response to very high doses of intravenous AS (240 mg/kg, i.e., 3200-fold decrease in sensitivity), (2) a major shift in DHA-IC50 was demonstrated, and (3) the parasites demonstrated full co-resistance to quinine. The second stage was further characterized as having reproducible stability.
Only two clinical cases of ART-R with increased DHA IC50 (14.0 nM and 14.4 nM) have been reported [62]; the absolute increase of DHA IC50 that we observed (99.9 nM) is far greater, confirming that it differs substantially from DPCT. We can expect that measuring conventional IC50 in the field will continue to fail to unmask in vivo ART-R, even if resistance strengthens to considerably higher levels. The novel RSA could provide a more sensitive means for detecting the early emergence of ART-R, although it is technically challenging [29, 30]. As IC50 did increase in our model, in contrast to the more moderately resistant parasites in the field, the need to perform RSA was less evident, although this could be of interest.
Not only is the degree of resistance achieved alarming, but so is the ease with which ART-R selection occurred: 80% of attempts with single-dose and 41% with 2-day treatments. The 2-day regimen was less efficient at inducing ART-R, the shift in IC50 was lower, and co-resistance was less pronounced. This suggests that measures such as intensified schedules, higher doses, and improved compliance with anti-malarial therapy may retard the advancement of ART-R but, ultimately, are unlikely to be sufficient.
The most burning question that remains is: what point along the road to stable, high-level ART-R, as seen in this model, are we currently witnessing in humans? AS is administered at 4 mg/kg/day for uncomplicated malaria as part of a 3-day ACT course. In areas where ART-R has emerged in humans, the percentage parasite reduction rate after 24 h in patients' blood after drug treatment has decreased only modestly, from 99% to 85–91% [64]. We selected for a strain that showed no significant drop in parasitemia at 24 h (i.e., percentage parasite reduction rate after 24 h, 0–27%) after exposure to the human dose of 4 mg/kg AS. This full-resistance phenotype was maintained throughout a step-wise strengthening of the dose up to 240 mg/kg AS, leaving a frightening margin for increase in resistance in the field. Thus, if wild parasites evolve along the same trajectory as observed in our P. falciparum experimental model, we are currently only seeing the tip of the iceberg in the clinic. The absence of adaptive immunity and reduced innate immunity in these mice makes it difficult to extrapolate our findings to human hosts, particularly the speed at which similar resistance may arise.
In the search for a molecular surveillance marker, genetic studies of well-defined clinical isolates from Southeast Asia have demonstrated an association between the DPCT phenotype and non-synonymous mutations of the propeller region in kelch-13 [10, 11, 45, 46, 65, 66] and, to a lesser extent, an SNP in RAD5, which ranked first in one GWAS [44] and fourth in a meta-analysis of relevant GWAS [46]. In a recent GWAS from the China-Myanmar border, RAD5 was significantly associated with ART-R, while kelch-13 was not flagged at all [24]. In our highly ART-R strains we found no kelch-13 mutation; conversely, we found selection of the exact same RAD5 SNP identified in clinical samples [44, 46]. A limitation is that we refrained from performing whole genome sequencing, which would likely reveal numerous mutations, the roles of which would require lengthy investigation and could be the focus of future studies.
The significance of the many kelch-13 mutations is not as straightforward as was once thought [67]. In the original Southeast Asia focus of ART-R, approximately 30 different SNPs have been found in kelch-13, circa 20 of which are in the propeller region. Mutations in this region have been confirmed by four distinct GWAS to be significantly associated with DPCT in Southeast Asian parasites [26, 44, 46, 68]. However, a substantial number of isolates with the same mutations (in the locations with high DPCT prevalence) showed no sign of delayed clearance and, perhaps more importantly, a number of isolates with the wild type genotype showed DPCT [10, 45]. Data from Africa are even more puzzling – in the absence of any clear DPCT phenotype, an unexpectedly large number of kelch-13 propeller SNPs were found in parasites from 14 African sites, some at high frequency; 15 of these 24 SNPs were novel, but 3 have previously been associated with DPCT in Southeast Asia [69]. Thus, we are now faced with a number of kelch-13 mutant alleles of uncertain clinical significance. On the other hand, SNPs in RAD5 are extremely rare outside Asia [70], yet one was selected for in our parasites of African origin under ART pressure.
Our results add a further layer of complexity, showing that far stronger ART-R can exist in P. falciparum without kelch-13 propeller domain mutations, implying that other ART-R genes or mechanisms exist and will need to be characterized. The two strains and the novel in vivo model we developed provide the tools to do so. In practical terms, ART-R should no longer be considered excluded just because there is an absence of kelch-13 mutations. This has important consequences for ART-R surveillance in Africa.
Our results are in keeping with a recent study that relates ART-R to an interaction of dihydroartemisinin with phosphatidylinositol-3-phosphate kinase, and indicates that elevated phosphatidyl-inositol-3 phosphate can be associated with resistance in the absence of kelch-13 mutations [71]. kelch-13 is not a direct target of ART [27, 28]. Indirect effects of kelch-13 mutations on phosphatidyl-inositol-3 phosphate and glutathione may counteract ART [28], but it is unlikely to be the only player. Whatever the mechanism, the suggestion that far stronger resistance might yet evolve stealthily in humans calls for urgent and radical measures to monitor and contain ART-R.
We did not run a control group in parallel. During our experience with P. falciparum in successive mouse models [32], using both drug-sensitive and drug-resistant parasites [41], and more recently in the NSG model [31], we never observed a spontaneous change in drug response. These models were developed for our vaccine development project; the many animals infected by P. falciparum either contributed to understanding innate defense against malaria [31, 72] or were passively immunized to screen vaccine candidates [32, 73]. In this context, the parasite employed for the present study had already been passaged in mice for 7 months and proved to have maintained sensitivity to ART derivatives and other drugs both in vivo (Figs. 1 and 7) and in vitro (Figs. 5 and 6), where it served as the sensitive reference. However, a control parasite line should have been maintained in mice, in parallel, without drug exposure – this is a limitation of the study.
We repeatedly find ourselves on the back-foot in the campaign against malaria as there is a lack of tools to help us anticipate how the parasite will adapt to policy changes. GWAS, which have been extensively used, have major limitations. They can only characterize resistant parasites after they have emerged and merely provide circumstantial, rather than causative, evidence. One practical suggestion could be the application of novel models, such as the one presented here, to study the evolution and analyze the phenotypic adaptation of malaria parasites to drug pressure in vivo. While clinical efficacy data should remain the gold standard, the model presented here could be used as a tool to assess the phenotype of isolates with given genotypes (e.g., novel kelch-13 mutations identified in Africa). Patient isolates can readily grow in the Pf-NSG model [31], allowing in vitro and in vivo methods to be used concurrently on clinical isolates. The model may also be used to characterize in vivo responses to experimental molecules at a preclinical level, and to trial alternative drug combinations (including triple therapy) that might bridle the evolution of ART-R [3]. This will allow an estimation of the time to resistance evolution for each compound or combination, without the impractical delays seen using in vitro methods [23].
The concomitant development of full resistance to quinine, halofantrine, and amodiaquine in the ART-R240 strain, despite exclusive exposure to AS, was unforeseen. However, it is not altogether surprising, as in vitro resistance to quinine has previously been reported after exclusive exposure to ART [74]. Resistance to structurally unrelated antimalarials has been linked to changes in the Pfmdr1 gene, which encodes the P-glycoprotein pump essential for parasite detoxification [75]. In this study, the ART-R120 and ART-R240 strains had an amplified pfmdr1 gene, in agreement with the high level of resistance developed towards AS, quinine, halofantrine, and amodiaquine [48]. An association of AS-mefloquine treatment failure with increased pfmdr1 copy number has been reported in north-western Cambodia [76].
The phenomenon of multidrug resistance despite single drug exposure is well recognized in microbiology and, in some instances, is mediated by up-regulation of a pro-mutagenic DNA repair response [77]. Parasites from Cambodia have a pro-mutagenic phenotype, favoring acquisition of new mutations [78]. Intense oxidative stress caused by AS exposure could stimulate this process [79]. It remains to be seen if the mutation in RAD5, a gene encoding a DNA post-replication repair protein [80], contributes to a pro-mutagenic state and development of multidrug resistance, or if it improves DNA repair.
Co-resistance to IV quinine and to one of the most widely used ACTs (AS-amodiaquine) – two critical weapons in the anti-malaria armamentarium – was fully verified both in vivo and in vitro. Resistance to quinine also arose using the DD regimen, indicating that it is unlikely to have occurred by chance. Though high quinine IC50 values have occasionally been reported ex vivo (e.g., 829 nM and 1019 nM) [29, 41], to our knowledge, frank resistance to treatment with a 219 mg/kg dose, as seen here, has not been reported from the clinic. Given the widespread use of ACT worldwide, the suggestion that ART pressure might also favor quinine resistance is of major concern.
These results were obtained in vivo using P. falciparum maintained in huRBC. Should clinical resistance to ART and ACT evolve further along the trajectory seen here, with co-resistance to quinine and other antimalarials, we would be left abruptly with no satisfactory option for treating severe malaria and a compromised choice of treatments for uncomplicated malaria [3]. Indeed, the current dependence on ARTs for both uncomplicated and severe malaria, together with a lack of viable therapeutic alternatives, leaves decision-makers with very limited options. This would have dire consequences not only in the management of individual cases, but would cripple efforts to achieve malaria control globally.
ACT: artemisinin-based combination therapy
APC: artesunate pressure cycle
ART-R: artemisinin resistance of any level
DD: double dose
DHA: dihydroartemisinin
DMSO: dimethyl sulfoxide
DPCT: delayed parasite clearance time
GWAS: genome-wide association study
huRBC: human erythrocytes
IC50: 50% inhibitory concentration
lowest effective dose
NSG: NOD/SCID IL-2Rγ−/− mice
PAM: Uganda Palo Alto Marburg strain of P. falciparum
RSA: ring-stage survival
World Health Organization. Guidelines for the treatment of malaria. 2nd ed. Geneva: World Health Organization; 2010.
White NJ. Qinghaosu (artemisinin): the price of success. Science. 2008;320(5874):330–4.
Phyo AP, von Seidlein L. Challenges to replace ACT as first-line drug. Malar J. 2017;16(1):296.
Dondorp AM, Yeung S, White L, Nguon C, Day NP, Socheat D, von Seidlein L. Artemisinin resistance: current status and scenarios for containment. Nat Rev Microbiol. 2010;8(4):272–80.
Phyo AP, Nkhoma S, Stepniewska K, Ashley EA, Nair S, McGready R, ler Moo C, Al-Saai S, Dondorp AM, Lwin KM, et al. Emergence of artemisinin-resistant malaria on the western border of Thailand: a longitudinal study. Lancet. 2012;379(9830):1960–6.
Dondorp AM, Nosten F, Yi P, Das D, Phyo AP, Tarning J, Lwin KM, Ariey F, Hanpithakpong W, Lee SJ, et al. Artemisinin resistance in Plasmodium falciparum malaria. N Engl J Med. 2009;361(5):455–67.
Noedl H, Se Y, Schaecher K, Smith BL, Socheat D, Fukuda MM. Evidence of artemisinin-resistant malaria in western Cambodia. N Engl J Med. 2008;359(24):2619–20.
Hien TT, Thuy-Nhien NT, Phu NH, Boni MF, Thanh NV, Nha-Ca NT, Thai le H, Thai CQ, Toi PV, Thuan PD, et al. In vivo susceptibility of Plasmodium falciparum to artesunate in Binh Phuoc Province, Vietnam. Malar J. 2012;11:355.
Kyaw MP, Nyunt MH, Chit K, Aye MM, Aye KH, Aye MM, Lindegardh N, Tarning J, Imwong M, Jacob CG, et al. Reduced susceptibility of Plasmodium falciparum to artesunate in southern Myanmar. PLoS One. 2013;8(3):e57689.
Ashley EA, Dhorda M, Fairhurst RM, Amaratunga C, Lim P, Suon S, Sreng S, Anderson JM, Mao S, Sam B, et al. Spread of artemisinin resistance in Plasmodium falciparum malaria. N Engl J Med. 2014;371(5):411–23.
Menard D, Khim N, Beghain J, Adegnika AA, Shafiul-Alam M, Amodu O, Rahim-Awab G, Barnadas C, Berry A, Boum Y, et al. A Worldwide Map of Plasmodium falciparum K13-Propeller Polymorphisms. N Engl J Med. 2016;374(25):2453–64.
Hawkes M, Conroy AL, Kain KC. Spread of artemisinin resistance in malaria. N Engl J Med. 2014;371(20):1944–5.
Beshir KB, Sutherland CJ, Sawa P, Drakeley CJ, Okell L, Mweresa CK, Omar SA, Shekalaghe SA, Kaur H, Ndaro A, et al. Residual Plasmodium falciparum parasitemia in Kenyan children after artemisinin-combination therapy is associated with increased transmission to mosquitoes and parasite recurrence. J Infect Dis. 2013;208(12):2017–24.
Dondorp AM, Fairhurst RM, Slutsker L, Macarthur JR, Breman JG, Guerin PJ, Wellems TE, Ringwald P, Newman RD, Plowe CV. The threat of artemisinin-resistant malaria. N Engl J Med. 2011;365(12):1073–5.
Dell'Acqua R, Fabrizio C, Di Gennaro F, Lo Caputo S, Saracino A, Menegon M, L'Episcopia M, Severini C, Monno L, Castelli F, et al. An intricate case of multidrug resistant Plasmodium falciparum isolate imported from Cambodia. Malar J. 2017;16(1):149.
Imwong M, Hien TT, Thuy-Nhien NT, Dondorp AM, White NJ. Spread of a single multidrug resistant malaria parasite lineage (PfPailin) to Vietnam. Lancet Infect Dis. 2017;17(10):1022–3.
Kwansa-Bentum B, Ayi I, Suzuki T, Otchere J, Kumagai T, Anyan WK, Osei JH, Asahi H, Ofori MF, Akao N, et al. Plasmodium falciparum isolates from southern Ghana exhibit polymorphisms in the SERCA-type PfATPase6 though sensitive to artesunate in vitro. Malar J. 2011;10:187.
Phompradit P, Wisedpanichkij R, Muhamad P, Chaijaroenkul W, Na-Bangchang K. Molecular analysis of pfatp6 and pfmdr1 polymorphisms and their association with in vitro sensitivity in Plasmodium falciparum isolates from the Thai-Myanmar border. Acta Trop. 2011;120(1-2):130–5.
Pillai DR, Lau R, Khairnar K, Lepore R, Via A, Staines HM, Krishna S. Artemether resistance in vitro is linked to mutations in PfATP6 that also interact with mutations in PfMDR1 in travellers returning with Plasmodium falciparum infections. Malar J. 2012;11:131.
Jambou R, Legrand E, Niang M, Khim N, Lim P, Volney B, Ekala MT, Bouchier C, Esterre P, Fandeur T, et al. Resistance of Plasmodium falciparum field isolates to in-vitro artemether and point mutations of the SERCA-type PfATPase6. Lancet. 2005;366(9501):1960–3.
Afonso A, Hunt P, Cheesman S, Alves AC, Cunha CV, do Rosario V, Cravo P. Malaria parasites can develop stable resistance to artemisinin but lack mutations in candidate genes atp6 (encoding the sarcoplasmic and endoplasmic reticulum Ca2+ ATPase), tctp, mdr1, and cg10. Antimicrob Agents Chemother. 2006;50(2):480–9.
Puri SK, Chandra R. Plasmodium vinckei: selection of a strain exhibiting stable resistance to arteether. Exp Parasitol. 2006;114(2):129–32.
Witkowski B, Lelievre J, Barragan MJ, Laurent V, Su XZ, Berry A, Benoit-Vical F. Increased tolerance to artemisinin in Plasmodium falciparum is mediated by a quiescence mechanism. Antimicrob Agents Chemother. 2010;54(5):1872–7.
Wang Z, Cabrera M, Yang J, Yuan L, Gupta B, Liang X, Kemirembe K, Shrestha S, Brashear A, Li X, et al. Genome-wide association analysis identifies genetic loci associated with resistance to multiple antimalarials in Plasmodium falciparum from China-Myanmar border. Sci Rep. 2016;6:33891.
Mok S, Ashley EA, Ferreira PE, Zhu L, Lin Z, Yeo T, Chotivanich K, Imwong M, Pukrittayakamee S, Dhorda M, et al. Population transcriptomics of human malaria parasites reveals the mechanism of artemisinin resistance. Science. 2015;347(6220):431–5.
Cheeseman IH, Miller BA, Nair S, Nkhoma S, Tan A, Tan JC, Al Saai S, Phyo AP, Moo CL, Lwin KM, et al. A major genome region underlying artemisinin resistance in malaria. Science. 2012;336(6077):79–82.
Wang J, Zhang CJ, Chia WN, Loh CC, Li Z, Lee YM, He Y, Yuan LX, Lim TK, Liu M, et al. Haem-activated promiscuous targeting of artemisinin in Plasmodium falciparum. Nat Commun. 2015;6:10111.
Siddiqui G, Srivastava A, Russell AS, Creek DJ. Multi-omics Based Identification of Specific Biochemical Changes Associated With PfKelch13-Mutant Artemisinin-Resistant Plasmodium falciparum. J Infect Dis. 2017;215(9):1435–44.
Witkowski B, Amaratunga C, Khim N, Sreng S, Chim P, Kim S, Lim P, Mao S, Sopha C, Sam B, et al. Novel phenotypic assays for the detection of artemisinin-resistant Plasmodium falciparum malaria in Cambodia: in-vitro and ex-vivo drug-response studies. Lancet Infect Dis. 2013;13(12):1043–9.
Chotivanich K, Tripura R, Das D, Yi P, Day NP, Pukrittayakamee S, Chuor CM, Socheat D, Dondorp AM, White NJ. Laboratory detection of artemisinin-resistant Plasmodium falciparum. Antimicrob Agents Chemother. 2014;58(6):3157–61.
Arnold L, Tyagi RK, Meija P, Swetman C, Gleeson J, Perignon JL, Druilhe P. Further improvements of the P. falciparum humanized mouse model. PLoS One. 2011;6(3):e18045.
Badell E, Oeuvray C, Moreno A, Soe S, van Rooijen N, Bouzidi A, Druilhe P. Human malaria in immunocompromised mice: an in vivo model to study defense mechanisms against Plasmodium falciparum. J Exp Med. 2000;192(11):1653–60.
Fandeur T, Bonnefoy S, Mercereau-Puijalon O. In vivo and in vitro derived Palo Alto lines of Plasmodium falciparum are genetically unrelated. Mol Biochem Parasitol. 1991;47(2):167–78.
Siddiqui WA, Schnell JV, Geiman QM. A model in vitro system to test the susceptibility of human malarial parasites to antimalarial drugs. The American journal of tropical medicine and hygiene. 1972;21(4):393–9.
De Lucia S, Tsamesidis I, Pau MC, Kesely KR, Pantaleo A, Turrini F. Induction of high tolerance to artemisinin by sub-lethal administration: A new in vitro model of P. falciparum. PloS One. 2018;13(1):e0191084.
Rowe AW, Eyster E, Kellner A. Liquid nitrogen preservation of red blood cells for transfusion; a low glycerol-rapid freeze procedure. Cryobiology. 1968;5(2):119–28.
Lambros C, Vanderberg JP. Synchronization of Plasmodium falciparum erythrocytic stages in culture. J Parasitol. 1979;65(3):418–20.
Reese RT, Langreth SG, Trager W. Isolation of stages of the human parasite Plasmodium falciparum from culture and from animal blood. Bull World Health Organ. 1979;57(Suppl 1):53–61.
Druilhe P, Moreno A, Blanc C, Brasseur PH, Jacquier P. A colorimetric in vitro drug sensitivity assay for Plasmodium falciparum based on a highly sensitive double-site lactate dehydrogenase antigen-capture enzyme-linked immunosorbent assay. The American journal of tropical medicine and hygiene. 2001;64(5-6):233–41.
Desjardins RE, Canfield CJ, Haynes JD, Chulay JD. Quantitative assessment of antimalarial activity in vitro by a semiautomated microdilution technique. Antimicrob Agents Chemother. 1979;16(6):710–8.
Moreno A, Badell E, Van Rooijen N, Druilhe P. Human malaria in immunocompromised mice: new in vivo model for chemotherapy studies. Antimicrob Agents Chemother. 2001;45(6):1847–53.
Hodel EM, Zanolari B, Mercier T, Biollaz J, Keiser J, Olliaro P, Genton B, Decosterd LA. A single LC-tandem mass spectrometry method for the simultaneous determination of 14 antimalarial drugs and their metabolites in human plasma. J Chromatogr B Anal Technol Biomed Life Sci. 2009;877(10):867–86.
Hunt P, Afonso A, Creasey A, Culleton R, Sidhu AB, Logan J, Valderramos SG, McNae I, Cheesman S, do Rosario V, et al. Gene encoding a deubiquitinating enzyme is mutated in artesunate- and chloroquine-resistant rodent malaria parasites. Mol Microbiol. 2007;65(1):27–40.
Takala-Harrison S, Clark TG, Jacob CG, Cummings MP, Miotto O, Dondorp AM, Fukuda MM, Nosten F, Noedl H, Imwong M, et al. Genetic loci associated with delayed clearance of Plasmodium falciparum following artemisinin treatment in Southeast Asia. Proc Natl Acad Sci U S A. 2013;110(1):240–5.
Ariey F, Witkowski B, Amaratunga C, Beghain J, Langlois AC, Khim N, Kim S, Duru V, Bouchier C, Ma L, et al. A molecular marker of artemisinin-resistant Plasmodium falciparum malaria. Nature. 2014;505(7481):50–5.
Takala-Harrison S, Jacob CG, Arze C, Cummings MP, Silva JC, Dondorp AM, Fukuda MM, Hien TT, Mayxay M, Noedl H, et al. Independent emergence of artemisinin resistance mutations among Plasmodium falciparum in Southeast Asia. J Infect Dis. 2015;211(5):670–9.
Basco LK, Ringwald P. Molecular epidemiology of malaria in Yaounde, Cameroon. III. Analysis of chloroquine resistance and point mutations in the multidrug resistance 1 (pfmdr 1) gene of Plasmodium falciparum. The American journal of tropical medicine and hygiene. 1998;59(4):577–81.
Price RN, Uhlemann AC, Brockman A, McGready R, Ashley E, Phaipun L, Patel R, Laing K, Looareesuwan S, White NJ, et al. Mefloquine resistance in Plasmodium falciparum and increased pfmdr1 gene copy number. Lancet. 2004;364(9432):438–47.
Nontprasert A, Pukrittayakamee S, Nosten-Bertrand M, Vanijanonta S, White NJ. Studies of the neurotoxicity of oral artemisinin derivatives in mice. The American journal of tropical medicine and hygiene. 2000;62(3):409–12.
Nagesha HS, Casey GJ, Rieckmann KH, Fryauff DJ, Laksana BS, Reeder JC, Maguire JD, Baird JK. New haplotypes of the Plasmodium falciparum chloroquine resistance transporter (pfcrt) gene among chloroquine-resistant parasite isolates. The American journal of tropical medicine and hygiene. 2003;68(4):398–402.
Menard D, Andriantsoanirina V, Khim N, Ratsimbasoa A, Witkowski B, Benedet C, Canier L, Mercereau-Puijalon O, Durand R. Global analysis of Plasmodium falciparum Na(+)/H(+) exchanger (pfnhe-1) allele polymorphism and its usefulness as a marker of in vitro resistance to quinine. Int J Parasitol Drugs Drug Resist. 2013;3:8–19.
White NJ, Pongtavornpinyo W, Maude RJ, Saralamba S, Aguas R, Stepniewska K, Lee SJ, Dondorp AM, White LJ, Day NP. Hyperparasitaemia and low dosing are an important source of anti-malarial drug resistance. Malar J. 2009;8:253.
Melendez V. Metabolic profile of artesunate and DHA using human plasma, liver, and hepatocytes. In: Annual progress report. Pennsylvania. USA: Armed Forces Research Institute of Medical Sciences; 2003. p. 214.
Davis TM, Phuong HL, Ilett KF, Hung NC, Batty KT, Phuong VD, Powell SM, Thien HV, Binh TQ. Pharmacokinetics and pharmacodynamics of intravenous artesunate in severe falciparum malaria. Antimicrob Agents Chemother. 2001;45(1):181–6.
Yeung S, Van Damme W, Socheat D, White NJ, Mills A. Access to artemisinin combination therapy for malaria in remote areas of Cambodia. Malar J. 2008;7:96.
Cohen JL, Yavuz E, Morris A, Arkedis J, Sabot O. Do patients adhere to over-the-counter artemisinin combination therapy for malaria? evidence from an intervention study in Uganda. Malar J. 2012;11:83.
Lemma H, Lofgren C, San Sebastian M. Adherence to a six-dose regimen of artemether-lumefantrine among uncomplicated Plasmodium falciparum patients in the Tigray Region, Ethiopia. Malar J. 2011;10:349.
Newton PN, McGready R, Fernandez F, Green MD, Sunjio M, Bruneton C, Phanouvong S, Millet P, Whitty CJ, Talisuna AO, et al. Manslaughter by fake artesunate in Asia--will Africa be next? PLoS Med. 2006;3(6):e197.
Sengaloundeth S, Green MD, Fernandez FM, Manolin O, Phommavong K, Insixiengmay V, Hampton CY, Nyadong L, Mildenhall DC, Hostetler D, et al. A stratified random survey of the proportion of poor quality oral artesunate sold at medicine outlets in the Lao PDR - implications for therapeutic failure and drug resistance. Malar J. 2009;8:172.
El-Duah M, Ofori-Kwakye K. Substandard artemisinin-based antimalarial medicines in licensed retail pharmaceutical outlets in Ghana. Journal of vector borne diseases. 2012;49(3):131–9.
Amaratunga C, Sreng S, Suon S, Phelps ES, Stepniewska K, Lim P, Zhou C, Mao S, Anderson JM, Lindegardh N, et al. Artemisinin-resistant Plasmodium falciparum in Pursat province, western Cambodia: a parasite clearance rate study. Lancet Infect Dis. 2012;12(11):851–8.
Noedl H, Se Y, Sriwichai S, Schaecher K, Teja-Isavadharm P, Smith B, Rutvisuttinunt W, Bethell D, Surasri S, Fukuda MM, et al. Artemisinin resistance in Cambodia: a clinical trial designed to address an emerging problem in Southeast Asia. Clinical infectious diseases : an official publication of the Infectious Diseases Society of America. 2010;51(11):e82–9.
Leang R, Barrette A, Bouth DM, Menard D, Abdur R, Duong S, Ringwald P. Efficacy of dihydroartemisinin-piperaquine for treatment of uncomplicated Plasmodium falciparum and Plasmodium vivax in Cambodia, 2008 to 2010. Antimicrob Agents Chemother. 2013;57(2):818–26.
Das D, Tripura R, Phyo AP, Lwin KM, Tarning J, Lee SJ, Hanpithakpong W, Stepniewska K, Menard D, Ringwald P, et al. Effect of high-dose or split-dose artesunate on parasite clearance in artemisinin-resistant falciparum malaria. Clinical infectious diseases : an official publication of the Infectious Diseases Society of America. 2013;56(5):e48–58.
Ghorbal M, Gorman M, Macpherson CR, Martins RM, Scherf A, Lopez-Rubio JJ. Genome editing in the human malaria parasite Plasmodium falciparum using the CRISPR-Cas9 system. Nat Biotechnol. 2014;32(8):819–21.
Straimer J, Gnadig NF, Witkowski B, Amaratunga C, Duru V, Ramadani AP, Dacheux M, Khim N, Zhang L, Lam S, et al. Drug resistance. K13-propeller mutations confer artemisinin resistance in Plasmodium falciparum clinical isolates. Science. 2015;347(6220):428–31.
Sibley C. Artemisinin resistance: the more we know, the more complicated it appears. J Infect Dis. 2015;211(5):667–9.
Miotto O, Amato R, Ashley EA, MacInnis B, Almagro-Garcia J, Amaratunga C, Lim P, Mead D, Oyola SO, Dhorda M, et al. Genetic architecture of artemisinin-resistant Plasmodium falciparum. Nat Genet. 2015;47(3):226–34.
Taylor SM, Parobek CM, DeConti DK, Kayentao K, Coulibaly SO, Greenwood BM, Tagbor H, Williams J, Bojang K, Njie F, et al. Absence of putative artemisinin resistance mutations among Plasmodium falciparum in Sub-Saharan Africa: a molecular epidemiologic study. J Infect Dis. 2015;211(5):680–8.
Murai K, Culleton R, Hisaoka T, Endo H, Mita T. Global distribution of polymorphisms associated with delayed Plasmodium falciparum parasite clearance following artemisinin treatment: genotyping of archive blood samples. Parasitol Int. 2015;64(3):267–73.
Mbengue A, Bhattacharjee S, Pandharkar T, Liu H, Estiu G, Stahelin RV, Rizk SS, Njimoh DL, Ryan Y, Chotivanich K, et al. A molecular mechanism of artemisinin resistance in Plasmodium falciparum malaria. Nature. 2015;520(7549):683–7.
Arnold L, Tyagi RK, Mejia P, Van Rooijen N, Perignon JL, Druilhe P. Analysis of innate defences against Plasmodium falciparum in immunodeficient mice. Malar J. 2010;9:197.
Druilhe P, Spertini F, Soesoe D, Corradin G, Mejia P, Singh S, Audran R, Bouzidi A, Oeuvray C, Roussilhon C. A malaria vaccine that elicits in humans antibodies able to kill Plasmodium falciparum. PLoS Med. 2005;2(11):e344.
Chavchich M, Gerena L, Peters J, Chen N, Cheng Q, Kyle DE. Role of pfmdr1 amplification and expression in induction of resistance to artemisinin derivatives in Plasmodium falciparum. Antimicrob Agents Chemother. 2010;54(6):2455–64.
Reed MB, Saliba KJ, Caruana SR, Kirk K, Cowman AF. Pgh1 modulates sensitivity and resistance to multiple antimalarials in Plasmodium falciparum. Nature. 2000;403(6772):906–9.
Lim P, Alker AP, Khim N, Shah NK, Incardona S, Doung S, Yi P, Bouth DM, Bouchier C, Puijalon OM, et al. Pfmdr1 copy number and arteminisin derivatives combination therapy failure in falciparum malaria in Cambodia. Malar J. 2009;8:11.
Kohanski MA, DePristo MA, Collins JJ. Sublethal antibiotic treatment leads to multidrug resistance via radical-induced mutagenesis. Mol Cell. 2010;37(3):311–20.
Lee AH, Fidock DA. Evidence of a Mild Mutator Phenotype in Cambodian Plasmodium falciparum Malaria Parasites. PLoS One. 2016;11(4):e0154166.
Gupta DK, Patra AT, Zhu L, Gupta AP, Bozdech Z. DNA damage regulation and its role in drug-related phenotypes in the malaria parasites. Sci Rep. 2016;6:23603.
Ginsburg H. Progress in in silico functional genomics: the malaria Metabolic Pathways database. Trends Parasitol. 2006;22(6):238–40.
This work would not have been possible without the financial contributions received from the Vac4All Initiative. We are indebted to Nicolas Widmer for performing pharmacokinetic calculations. We thank Christian Roussilhon, Geneviève Milon, Karima Brahimi, and Edgar Badell for their advice and contributions. We are grateful to Dr. Nico van Rooijen, Antion Longo (Sigma-Tau), and Philippe Brasseur for their generous gifts of essential materials.
This work was funded by the Vac4All Initiative. The initial mouse model employed was developed by the Bio-Medical Parasitology Unit at Institut Pasteur. Vac4all thereafter covered the expenses of personnel, reagents, animals, molecular studies, rent, and sundries required to gather the results presented.
The datasets supporting the conclusions of this article are included within the article and its additional files. The resistant strains are available on request to the corresponding author.
Rajeev K. Tyagi, Patrick J. Gleeson and Ludovic Arnold contributed equally to this work.
The Vac4All Initiative, 26 Rue Lecourbe, 75015, Paris, France
Rajeev K. Tyagi, Patrick J. Gleeson, Ludovic Arnold, Eric Prieur, Jean-Louis Pérignon & Pierre Druilhe
Biomedical Parasitology Unit, Institut Pasteur, Paris, France
Faculté de Pharmacie, Université Paris Descartes, COMUE Sorbonne Paris Cité, Paris, France
Rachida Tahar
Institut de Recherche pour le Développement, UMR MERIT 216, Paris, France
Division of Clinical Pharmacology, Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland
Laurent Decosterd
Centre for Tropical Medicine and Global Health, Nuffield Department of Medicine, University of Oxford, Oxford, UK
Piero Olliaro
Present Address: Amity Institute of Microbial Technology, Amity University, Noida, Uttar Pradesh, India
Present Address: Centre de Recherche sur l'Inflammation, INSERM U1149, Faculté de Médecine, Université Diderot-Site Bichat, 16 rue Henri Huchard, 75018, Paris, France
Patrick J. Gleeson
Present Address: Laboratoire de Biochimie, Hôpital Necker-Enfants Malades, Paris, France
Jean-Louis Pérignon
RKT performed in vivo and in vitro experiments. PJG performed in vitro experiments, analyzed data, and contributed to writing the manuscript. LA performed in vivo experiments and designed methods. LD determined plasma DHA concentrations. RT and EP performed molecular studies. JLP and PO analyzed data and contributed to writing the manuscript. PD designed experiments, analyzed results, and supervised the writing of the manuscript. All authors read and approved the final manuscript.
Correspondence to Pierre Druilhe.
All procedures were carried out in line with European Community Council Directive, 24th November 1986 (86/609/EEC), and European Union guidelines. All procedures were reviewed and approved by the Pasteur Institute Animal ethical committee (approval number A 75 15–27).
Details of genes, regions and primers used in genetic sequencing of P. falciparum artemisinin-resistant and control strains. (PDF 83 kb)
Morphological changes of artemisinin-resistant parasites under treatment. (PDF 181 kb)
Selection schema for single-dose resistant strain. (PDF 160 kb)
Schematic representation of use of drug pressure to select for single-dose resistance in one mouse. (PDF 204 kb)
Number and intensity of artesunate drug pressure cycles required to select for artemisinin resistance using single doses of artesunate. (PDF 97 kb)
Gametocytes developing from artemisinin resistant parasites. (PDF 108 kb)
Selection schema for 2-day dose resistant strain. (PDF 140 kb)
Number and intensity of artesunate drug pressure cycles required to select for artemisinin resistance using a 2-day artesunate regimen. (PDF 93 kb)
Mouse plasma dihydroartemisinin (DHA) concentrations measured after intravenous administration of artesunate. (PDF 108 kb)
Additional file 10: In vitro IC50 (95% CI) values at different stages of artemisinin resistance (ART-R). (PDF 69 kb)
Genetic sequencing of RAD5, cNBP, and K-13. (PDF 1419 kb)
Tyagi, R.K., Gleeson, P.J., Arnold, L. et al. High-level artemisinin-resistance with quinine co-resistance emerges in P. falciparum malaria under in vivo artesunate pressure. BMC Med 16, 181 (2018) doi:10.1186/s12916-018-1156-x
DOI: https://0-doi-org.brum.beds.ac.uk/10.1186/s12916-018-1156-x
P. falciparum
NSG mice
|
CommonCrawl
|
December 2011, 4(4): 901-918. doi: 10.3934/krm.2011.4.901
Kinetic formulation and global existence for the Hall-Magneto-hydrodynamics system
Marion Acheritogaray 1, Pierre Degond 1, Amic Frouvelle 1, and Jian-Guo Liu 2
1. Université de Toulouse; UPS, INSA, UT1, UTM; Institut de Mathématiques de Toulouse, F-31062 Toulouse, France
2. Department of Physics and Department of Mathematics, Duke University, Durham, NC 27708, United States
Received July 2011 Revised August 2011 Published November 2011
This paper deals with the derivation and analysis of the Hall Magneto-Hydrodynamic equations. We first provide a derivation of this system from a two-fluid Euler-Maxwell system for electrons and ions, through a set of scaling limits. We also propose a kinetic formulation for the Hall-MHD equations which contains as fluid closure different variants of the Hall-MHD model. Then, we prove the existence of global weak solutions for the incompressible viscous resistive Hall-MHD model. We use the particular structure of the Hall term which has zero contribution to the energy identity. Finally, we discuss particular solutions in the form of axisymmetric purely swirling magnetic fields and propose some regularization of the Hall equation.
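For orientation, the incompressible viscous resistive Hall-MHD system referred to in the abstract can be written schematically as follows (all physical coefficients are normalized to one here; this is an illustrative form, not necessarily the exact scaling used in the paper):
$$ \begin{aligned} &\partial_t u + u\cdot\nabla u + \nabla p = (\nabla\times B)\times B + \Delta u, \qquad \nabla\cdot u = 0,\\ &\partial_t B - \nabla\times(u\times B) + \nabla\times\big((\nabla\times B)\times B\big) = \Delta B, \qquad \nabla\cdot B = 0. \end{aligned} $$
Testing the induction equation with B and integrating by parts (assuming periodic or sufficiently decaying fields, so that boundary terms vanish) shows why the Hall term makes zero contribution to the energy identity:
$$ \int \nabla\times\big((\nabla\times B)\times B\big)\cdot B\,dx = \int \big((\nabla\times B)\times B\big)\cdot(\nabla\times B)\,dx = 0. $$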
Keywords: incompressible viscous flow, Hall-MHD, global weak solutions, KMC waves, entropy dissipation, kinetic formulation, generalized Ohm's law, resistivity.
Mathematics Subject Classification: Primary: 35L60; Secondary: 35K55, 35Q8.
Citation: Marion Acheritogaray, Pierre Degond, Amic Frouvelle, Jian-Guo Liu. Kinetic formulation and global existence for the Hall-Magneto-hydrodynamics system. Kinetic & Related Models, 2011, 4 (4) : 901-918. doi: 10.3934/krm.2011.4.901
|
CommonCrawl
|
A virtual grid-based real-time data collection algorithm for industrial wireless sensor networks
Chuan Zhu, Xiaohan Long, Guangjie Han (ORCID: orcid.org/0000-0002-6921-7369), Jinfang Jiang & Sai Zhang
Industrial wireless sensor networks (IWSNs) have been widely used in many application scenarios, and data collection is an extremely significant part of IWSNs. Moreover, a mobile sink is widely used in industrial wireless sensor networks to collect sensory data and alleviate the "hot spot" problem effectively. However, usage of a mobile sink introduces some challenges, such as updating of a mobile sink's location and planning of a mobile sink's trajectory. Meanwhile, the impact of different distribution types of events on data collection has not yet been sufficiently considered in the design of data collection algorithms for IWSNs. To overcome these challenges, a virtual grid-based real-time data collection algorithm for applications with centrally distributed events (VGDCA-C) is proposed in this paper to achieve reliable data gathering in IWSNs. In the target application scenarios, the events are distributed centrally, so we mainly focus on how to shorten the routing paths and decrease the transmission delay. In our VGDCA-C, a mobile sink can adjust its movement dynamically according to the changes in event areas. The adjustment of the mobile sink's movement strategy includes two aspects: the first is the dynamic adjustment of the mobile sink's parking time, and the second is the movement of the mobile sink toward the event area. Thus, the mobile sink adjusts its location so that it gets closer to the event area; hence, the total routing length becomes shorter and source nodes can upload sensory data faster. Analysis and simulation results show that, compared with the existing work, our VGDCA-C increases the network lifetime and decreases transmission delay.
With the development of industrial wireless communication technologies, microelectronics, sensors, distributed information processing, and embedded computers, industrial wireless sensor networks (IWSNs) have been widely used in many application scenarios such as poisonous gas boundary detection [1, 2], pollution monitoring [3–5], and production monitoring [6, 7]. Data collected by sensor nodes need to be uploaded to a sink quickly and accurately via data routing to the sink. The stability and accuracy of data collection are the guarantees of IWSNs' normal operation. Therefore, data collection and routing [8, 9] play an important role in IWSNs. In traditional IWSNs, a static sink or a base station is used to collect sensory data from sensor nodes, whose batteries can be charged in some scenarios [10], and all source nodes deliver sensory data to the static sink via multi-hop transmission [11]. This way of data collection often leads to the "hot spot" problem [12], which means that nodes near the sink or base station run out of energy very fast, degrading network performance. Because of this, a mobile sink is introduced to solve the problem. A mobile sink can alleviate the "hot spot" problem efficiently, and plenty of studies have shown that a mobile sink can make data collection energy efficient [13–27]. In these studies, an intelligent unmanned vehicle or unmanned aerial vehicle is appointed as the mobile sink, and these vehicles are equipped with industrial wireless communication devices and data processors. A mobile sink can move to the locations of source nodes and collect sensory data directly from them.
Moreover, a mobile sink can balance the energy consumption and prolong the lifetime of the network. However, the use of a mobile sink introduces two new challenges in the data collection of IWSNs. The first one is the way the mobile sink's latest location is updated. The traditional way of this updating is that a mobile sink broadcasts the updated information on its location to the entire network. However, a frequent broadcast may lead to high overheads and shorten the lifetime of a network. Hence, it is challenging to find a suitable way to update the information of mobile sink with lower overheads. Another challenge is the way the trajectory of a mobile sink is planned [28]. As the sensory data are delivered to a mobile sink through multi-hops, the sensor nodes around the mobile sink run out of energy very fast. Moreover, the number of transmission hops affects the transmission delay. Hence, the trajectory of a mobile sink affects the balance of energy consumption and transmission delay greatly.
Meanwhile, the impact of different distribution types of events on data collection has not been sufficiently considered in the design of data collection algorithms for IWSNs. In current studies, source nodes are always distributed evenly in the network. However, in some application scenarios such as industrial fire detection, the monitored targets are distributed in a local area. Therefore, here, we focus on the scenarios wherein the source nodes are distributed centrally. Taking into consideration the challenges mentioned above and the distribution types of source nodes, we propose a virtual grid-based real-time data collection algorithm for applications with centrally distributed events for industrial wireless sensor networks.
The contributions of this paper are summarized as follows. A real-time mobile data collection algorithm based on a virtual grid structure, VGDCA-C, is proposed. By constructing a virtual grid structure in the network, the information on a mobile sink's location can be updated locally, so the energy consumption of a mobile sink and the data transmission delay can be reduced and the network lifetime can be extended. The algorithm proposed in this paper is suitable for scenarios wherein events are centrally distributed, such as the monitoring of a fire in industrial factories or malfunction monitoring of industrial equipment.
The remainder of this paper is organized as follows. Firstly, the related works of data collection algorithms with a mobile sink are presented in Section 2. The details of the VGDCA-C are described in Section 3. In Section 4, the simulation experiments and performance evaluations are provided. A brief conclusion is given in Section 5, and abbreviations used in the manuscript are listed in the "Abbreviation" section.
The VGDCA-C is a data collection algorithm based on a virtual structure, which makes up for the shortcomings of non-virtual-structure-based data collection algorithms. However, the trajectory of a mobile sink and the corresponding location updating also need to be taken into consideration. Recently proposed related algorithms can be classified into two categories: (1) non-virtual-structure-based data collection algorithms and (2) virtual-structure-based data collection algorithms.
Non-virtual structure-based data collection algorithm
A non-virtual structure-based data collection means that there is no auxiliary structure to assist the data collection, such as virtual grid, virtual honeycomb structure, and virtual ring structure. In this kind of algorithms, mobile sink either moves randomly or along a pre-determined trajectory. When mobile sink moves randomly and beyond the communication range of its previous neighbor sensor, a new neighbor sensor node of a mobile sink will be appointed as an agent node. These agent nodes can help the routing of sensory data. If the mobile sink moves along a pre-determined trajectory, the sensory data will always be routed to the sensor nodes near the trajectory.
In [15], Han et al. proposed the minimum Wiener index spanning tree (MWST), which is designed for IWSNs with a mobile sink. According to the characteristics of the Wiener index, the MWST can provide efficient transmission paths for sensor nodes. However, finding a spanning tree with a minimal Wiener index from a weighted graph is a non-deterministic polynomial-time hard (NP-hard) problem. Therefore, the authors proposed a new way to solve this problem; namely, through extensive experiments, they found that the Wiener index of a minimum spanning tree (MST) is similar to the Wiener index of MWST and that the time complexity of finding the MST is low. The authors used the Wiener index of MST as an initial upper bound. On this basis, the authors proposed two algorithms according to the network size. The first is a branch and bound algorithm for small-scale sensor networks, and the second is a simulated annealing algorithm for large-scale sensor networks. These algorithms provide a brand new idea for data transmission. However, the method to find the location of mobile sinks was ignored.
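As a point of reference, the Wiener index of a tree is simply the sum of hop distances over all unordered node pairs; the short Python sketch below (illustrative only, not the branch-and-bound or simulated-annealing procedures of [15]) computes it by breadth-first search from every node:

```python
from collections import deque

def wiener_index(tree_adj):
    """Sum of shortest-path (hop) distances over all unordered node pairs.

    tree_adj: dict mapping each node to a list of its neighbours in the tree.
    """
    total = 0
    for src in tree_adj:
        # BFS from src gives hop distances to every other node of the tree.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in tree_adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2  # each pair was counted twice

# Example: the path 0-1-2-3 has Wiener index 1+2+3 + 1+2 + 1 = 10.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert wiener_index(path) == 10
```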
In [16], Shin and Kim proposed a milestone-based predictive routing protocol that can solve the problem of finding a spanning tree with the minimum Wiener index from a weighted graph presented in [15]. The proposed protocol consists of two main parts: estimation of mobile sink future location (namely, when a mobile sink finds some new sensor nodes entering its communication range, it broadcasts its updated location to them) and the establishment of milestone nodes and update of mobile sink's location. The milestone nodes have to spread the estimated future location of a mobile sink to nodes located near the recent trail of a mobile sink. If the direction of a mobile sink is changed, it chooses a new neighbor sensor node as the next milestone node. All the sensory data are delivered by these milestone nodes. The milestone nodes are the tools of the source nodes to find the location of a mobile sink. However, too many milestone nodes are established if a mobile sink moves for a long time, which leads to a longer routing path. Consequently, the control packets among milestone nodes consume more energy. Thus, the presented way to find the mobile sink is inefficient.
A strategy of double cross to collect data for industrial wireless sensor networks was proposed by Shi et al. [17]. In this strategy, the authors introduced the double-blind concept where the source nodes and mobile sink do not know each other's location. The authors provided the scheme of a random line walk (RLW), which is used to transmit the sensory data. When a source node needs to upload the sensory data, it randomly chooses a direction to establish the baseline. The source nodes transmit sensory data in two directions in that baseline. On the basis of the RLW, the authors proposed the strategy of double cross. This strategy enables source nodes to be associated with a mobile sink. When a source node detects an event, it selects one direction randomly and transfers the sensory data along that direction and its vertical direction by the RLW mechanism. All sensor nodes on the transmission path cache the sensory data. When mobile sink needs sensory data, it transmits the query information. The query information is also transmitted in two directions, which are perpendicular to each other, according to the RLW mechanism. The query path and the sensory data routing path intersect at a sensor node. If the intersection sensor node receives both the query information and sensory data, the sensory data will be transmitted to mobile sink. In this strategy, too many sensor nodes need cache and relay sensory data to find the mobile sink; thus, energy is wasted, and transmission delay is large.
Virtual structure-based data collection algorithm
Data collection based on a virtual structure means that there is some auxiliary structure to assist the data gathering. The virtual structure contains the virtual grid, virtual honeycomb structure, virtual ring structure, and so on. The establishment of a virtual structure can simplify the location update of a mobile sink, data upload, path planning of a mobile sink, and so on. In this kind of algorithms, the movements of mobile sinks can be divided into three categories: random movement, fixed-trajectory movement, and dynamically adjusted trajectory movement.
Random movement
Random movement of a mobile sink means that mobile sink can move in any direction at any speed during data collection.
In [18], Singh et al. proposed the EEGBDD algorithm. In this algorithm, every source node establishes its own virtual grid. Mobile sink moves randomly in the network, and sink initiates a query request when it needs data from the source node. The source node sends its data to the sink via the virtual grid. All query request and data are transferred through the dissemination nodes, and dissemination nodes are selected based on node residual energy and the distance from the node to the intersection of the grid. This algorithm reduces the length of the transmission path so that it reduces the energy consumption of nodes. However, some nodes may belong to more than one path so they could consume too much energy and die early.
Tunca et al. proposed an energy-efficient routing protocol to improve the network performance [19]. In this protocol, a ring structure is used to maintain of mobile sink location. After the network is deployed, a virtual ring structure consisted of nodes is built. Sink sends its location information to the nodes on the ring. When a node needs to send data, it first requests the location of a sink from the nodes on the ring and then sends data to the sink. To balance energy consumption, the nodes on the ring are periodically replaced.
In [20], Wang et al. proposed a grid-based data dissemination routing protocol. In this protocol, source node builds a grid to transmit data when an event is detected. Source node obtains the information on eight neighbor grid vertices near the source node and then sends a packet to all its neighbors. Every neighbor node replies a packet containing the information on node remaining energy. Source node selects the best neighbor as a relay node based on the residual energy of these neighbor nodes and distance from the sink to the neighbor nodes. The relay node repeats this process until the data arrive at the sink. When the mobile sink needs data, it initiates a query request. The request arrives at the source node through the relay nodes, and the source node sends data to the sink via the query path.
Fixed-trajectory movement
When a mobile sink moves along a fixed trajectory in data collection, then, sink always moves along a pre-defined trajectory regardless the network status and application environment. Usually, the pre-defined trajectory contains the straight line, rectangle, cycle, and so on.
Mottaghi et al. proposed the O-LEACH algorithm [21] that represents an improved version of the LEACH algorithm. In the proposed algorithm, the mobile sink moves along a fixed line, and the area near the linear trajectory is called the convergence area. Nodes in the convergence area are set as rendezvous nodes (RNs). The network is divided into different clusters, and cluster head nodes are selected for each cluster. Data from the sensor nodes are sent to the cluster head nodes. A cluster head node transmits the data to the sink if it is near the sink; otherwise, the cluster head node sends data to the nearest rendezvous node, and the rendezvous node transmits data to the sink. This algorithm reduces the energy consumption of nodes, but the fixed trajectory of the mobile sink leads to high energy consumption of the nodes near the trajectory because these nodes bear more of the data forwarding task.
In [22], Konstantopoulos et al. proposed a data collection algorithm intended for an urban environment. In this algorithm, the sink moves along a fixed trajectory and collects data from the nodes near the trajectory. A virtual structure called the cluster is established in the network. A cluster head node, which is responsible for data fusion and data forwarding, is selected in each cluster. Sensor nodes transmit data to their own cluster head nodes. Cluster head nodes send their data to the nodes near the trajectory of the mobile sink. This algorithm reduces the energy consumption of nodes. However, nodes near the trajectory of the mobile sink bear more of the forwarding task and expend their energy early.
Khan et al. proposed a data collection algorithm called the VGDRA [23]. In this algorithm, the network is divided into virtual grids, and the number of grids is only related to the number of nodes. Each grid selects a head node, and the head node is responsible for collecting data from nodes in its grid. The mobile sink moves along the network boundary, and during that movement, the routing of data is adjusted dynamically. Compared with the fixed-line trajectory, the rectangular path of the mobile sink balances the energy consumption of the network, but nodes near the trajectory also consume a lot of energy.
Dynamically adjusted trajectory movement
In this classification, a mobile sink is neither moving randomly nor moving on the pre-defined trajectory. However, the sink adjusts the trajectory dynamically according to the collected data, distribution of nodes, remaining energy of sensor nodes, and the number of travelling times in a region.
In [24], Kinalis et al. proposed an algorithm named the Biased Sink Mobility with Adaptive Stop Times. In this algorithm, the virtual grid structure is established in the network, and the intersection point of the grid is a stopping point of a mobile sink. If there is a higher density of nearby sensor nodes near the point, a mobile sink will stay longer at that point. The adaptive stop time of a mobile sink can balance the energy consumption of nodes, but a long trajectory of a mobile sink increases network delay.
Ghafoor et al. proposed the efficient trajectory design for mobile sink algorithm [25]. In this algorithm, the trajectory of a sink is based on a Hilbert curve, and trajectory of a mobile sink is adjusted dynamically according to the density of nodes and network size. The order of Hilbert curve is smaller when a mobile sink is moving toward the region with a smaller node density and vice versa. This algorithm can dynamically adjust the trajectory of mobile sink according to the network state so that it can balance energy consumption of nodes. However, this algorithm can only dynamically adjust the trajectory in a large range, but it is not suitable for data collection under the condition of uneven node density.
To solve the problem presented in [25], Yang et al. proposed the adjustable trajectory design based on node density for mobile sink algorithm [26]. In this algorithm, different orders of Hilbert curves are combined. By refining the established virtual grid structure and considering the density of nodes in a smaller region, Hilbert curves with different orders are constructed according to the different densities of nodes. Besides the fact that this algorithm solves the problem stated in [25], it also makes the data collection algorithm suitable for the case of uneven node deployment.
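To make the Hilbert-curve trajectories of [25, 26] concrete, the sketch below (an illustrative Python helper under our own naming, not code from either paper) maps a step index along a Hilbert curve of a chosen order to grid coordinates; a mobile sink could visit these cells in sequence, with higher orders used in denser regions:

```python
def hilbert_point(order, d):
    """Return the (x, y) cell visited at step d of a Hilbert curve of the given order.

    The curve fills a (2**order) x (2**order) grid; d ranges over 0 .. 4**order - 1.
    """
    x = y = 0
    t = d
    s = 1
    side = 2 ** order
    while s < side:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:              # rotate the quadrant when needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Waypoints of an order-2 curve covering a 4 x 4 grid of cells.
waypoints = [hilbert_point(2, d) for d in range(16)]
```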
Network model
Mobile data collection can prolong the network lifetime of IWSNs. In most mobile data collection algorithms, the application scenarios assume evenly distributed events. However, in some real application scenarios, source nodes are distributed centrally in a local region, and the distributed region may change over time. For instance, when emergent events are monitored, such as monitoring of a fire in an industrial factory or monitoring the malfunction of industrial equipment, the source nodes are always distributed in a local region. As illustrated in Fig. 1, the sensor nodes within the event area are source nodes.
Source nodes located in a local region with emerging event. The gray circle in the figure indicates the area where the emerging event occurred
In the following, the network model and relevant assumptions are described in detail. The network is a rectangular area of size L×D. The network consists of N sensor nodes, and these sensor nodes are deployed densely and randomly. All the sensor nodes are well connected, and the network is fully covered. The sensor nodes are static and location-aware (i.e., equipped with GPS-capable antennas). Sensor nodes have the same initial energy, sensing radius $r_s$, and communication radius R. Sensory data are delivered to the mobile sink by multi-hop transmission. There is only one mobile sink in the network, and it is not constrained by energy, memory space, or computing ability. The speed of the mobile sink v is fixed while it is moving, and the mobile sink may park at the center point of a virtual grid cell for a while. The sensor nodes are densely deployed, and there are no obstacles in the network, so we can assume that there are sensor nodes in each grid cell. The notation and the corresponding definitions used in our algorithm are given in Table 1.
Table 1 Notations and definitions
Network initialization
The first phase after the network is deployed is network initialization. This phase includes three sub-phases: establishment of a grid-based virtual structure, election of head nodes in the virtual grid cells, and establishment of the neighbor tables. Since the deployment area is a rectangle, we adopt the Cartesian coordinate system for convenience. The origin of this coordinate system is located at the lower left corner of the network.
The establishment of grid-based virtual structure
The establishment of a grid-based virtual structure includes three steps. The first step is to calculate the length of a virtual grid cell. The grid cell is a square in our algorithm, and calculation of a grid cell's length is the foundation of a virtual structure. To set an ID number for each grid cell, we calculate the row-column number (RCN) for each grid cell in step 2. To reduce the updating range of a mobile sink's location, we introduce the direction number (DN) in step 3.
1) Calculation of side length of grid cells
As illustrated in Fig. 2, each grid cell has up to eight neighbor grid cells, and the sensor nodes in neighbor grid cells are called neighbor nodes. To enable nodes to communicate with their neighbor nodes, the relation between the communication radius R and the side length a of a grid cell needs to satisfy Eq. (1). Once the side length a is calculated, the mobile sink broadcasts information consisting of a, L, and D in the network to establish the virtual structure.
$$ a \leq \frac{\sqrt{2}R}{4} $$
The relationship between communication radius and a side length of a grid cell. The distance between A and B is R, and it is easy to calculate the value of a
2) Calculation of RCN of grid cells
After receiving the information broadcasted by a mobile sink, sensor nodes calculate the number of grid cells in both horizontal direction and vertical direction. The calculating process is defined by Eqs. (2) and (3).
$$ n_{L} = \left\lceil \frac{L}{a} \right\rceil $$
$$ n_{D} = \left\lceil \frac{D}{a} \right\rceil $$
Based on the received information and their own coordinates, nodes can get an RCN, which indicates the location of a grid cell that the node belongs to; the calculating process is given in Eqs. (4) and (5).
$$ M_{r} = \left\lceil \frac{y_{i}}{a} \right\rceil $$
$$ M_{c} = \left\lceil \frac{x_{i}}{a} \right\rceil $$
Sensor nodes in the same grid cell have the same RCN value, and the RCN will not be modified. When RCN is calculated, the network is as shown in Fig. 3.
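As an illustration of Eqs. (2)–(5), the sketch below computes the grid counts and a node's RCN from its coordinates; the function names and example values are ours and not part of the algorithm specification.

```python
import math

def grid_counts(L, D, a):
    """Number of grid cells in the horizontal (n_L) and vertical (n_D) directions, Eqs. (2)-(3)."""
    return math.ceil(L / a), math.ceil(D / a)

def rcn(x_i, y_i, a):
    """Row-column number (M_r, M_c) of the grid cell containing node (x_i, y_i), Eqs. (4)-(5)."""
    return math.ceil(y_i / a), math.ceil(x_i / a)

# Hypothetical example: a 200 m x 200 m field with R = 40 m, so a <= sqrt(2)*R/4 by Eq. (1)
a = math.sqrt(2) * 40 / 4
n_L, n_D = grid_counts(200, 200, a)
M_r, M_c = rcn(35.0, 87.5, a)
```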
The RCN of virtual grid cells. It shows the RCN of each virtual grid cell
3) Calculation of DN of grid cells
Since the initial location of the mobile sink is the center grid cell of the network, sensor nodes can compute the RCN of that grid cell from the information they received. The calculating process is shown in Eqs. (6) and (7).
$$ M_{r\_\text{sink}} = \left\lceil \frac{n_{D}}{2} \right\rceil = \left\lceil \frac{\frac{D}{a}}{2} \right\rceil $$
$$ M_{c\_\text{sink}} = \left\lceil \frac{n_{L}}{2} \right\rceil = \left\lceil \frac{\frac{L}{a}}{2} \right\rceil $$
After calculating the initial RCN of the mobile sink, sensor nodes obtain the DN of the grid cell they belong to according to the relation between ($M_{r\_\text{sink}}$, $M_{c\_\text{sink}}$) and their own RCN ($M_{r}$, $M_{c}$). The comparing process is given in Eq. (8).
The grid cell where the mobile sink is located is denoted the grid of sink (GS), and the relation between grid cell DN and GS is shown in Table 2 and Fig. 4.
$$ \text{DN} = \begin{cases} 0, & \left(M_{r} = M_{r\_\text{sink}}\right) \& \left(M_{c} = M_{c\_\text{sink}}\right) \\ 1, & \left(M_{r} > M_{r\_\text{sink}}\right) \& \left(M_{c} > M_{c\_\text{sink}}\right) \\ 2, & \left(M_{r} > M_{r\_\text{sink}}\right) \& \left(M_{c} < M_{c\_\text{sink}}\right) \\ 3, & \left(M_{r} < M_{r\_\text{sink}}\right) \& \left(M_{c} < M_{c\_\text{sink}}\right) \\ 4, & \left(M_{r} < M_{r\_\text{sink}}\right) \& \left(M_{c} > M_{c\_\text{sink}}\right) \\ 5, & \left(M_{r} > M_{r\_\text{sink}}\right) \& \left(M_{c} = M_{c\_\text{sink}}\right) \\ 6, & \left(M_{r} = M_{r\_\text{sink}}\right) \& \left(M_{c} < M_{c\_\text{sink}}\right) \\ 7, & \left(M_{r} < M_{r\_\text{sink}}\right) \& \left(M_{c} = M_{c\_\text{sink}}\right) \\ 8, & \left(M_{r} = M_{r\_\text{sink}}\right) \& \left(M_{c} > M_{c\_\text{sink}}\right) \end{cases} $$
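Equation (8) translates directly into a lookup routine; a minimal sketch (with an illustrative function name) is:

```python
def direction_number(M_r, M_c, M_r_sink, M_c_sink):
    """Direction number (DN) of a grid cell relative to the grid of sink (GS), Eq. (8)."""
    if M_r == M_r_sink and M_c == M_c_sink:
        return 0
    if M_r > M_r_sink and M_c > M_c_sink:
        return 1
    if M_r > M_r_sink and M_c < M_c_sink:
        return 2
    if M_r < M_r_sink and M_c < M_c_sink:
        return 3
    if M_r < M_r_sink and M_c > M_c_sink:
        return 4
    if M_r > M_r_sink and M_c == M_c_sink:
        return 5
    if M_r == M_r_sink and M_c < M_c_sink:
        return 6
    if M_r < M_r_sink and M_c == M_c_sink:
        return 7
    return 8  # M_r == M_r_sink and M_c > M_c_sink
```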
The DN of virtual grid cell. The grid cell 0 is the cell wherein a mobile sink is located
Table 2 The relation between grid cells DN and GS
The election of head nodes in virtual grid cells
In our VGDCA-C, the head node of a virtual grid cell collects sensory data from its own grid cell, delivers sensory data, and maintains the DN of the grid cell.
In the first election, all sensor nodes broadcast their coordinates and RCNs with a broadcast radius of $\sqrt{2}a$, so each sensor node can receive information from its neighbor nodes. A receiving node first compares its own RCN with the RCN in the received information. If the received RCN equals its own, the node adds the sender's coordinates and RCN to its election list; otherwise, it drops the received information. In this way, each sensor node collects all nodes of its own grid cell in its election list.
After the operation of adding, sensor nodes sort their election lists. The election lists are sorted in ascending order by the abscissa, and if some sensor nodes have the same abscissas, the lists are sorted in ascending order by the ordinate. After the sorting operation, the sensor nodes in the same grid cell have the same election list. If a sensor node finds itself at the top of the election list, the sensor node broadcasts information of CellHead within the radius of $\sqrt {2}a$. The broadcasted CellHead consists of the indexes of sensor node in the election list. The other sensor nodes in the same grid cell receive the CellHead and mark sending node as a head node of that cell. According to the sorted election list, the nodes in the grid are successively selected as the head nodes.
If a sensor node cannot broadcast CellHead, a waiting mechanism is introduced: if the other nodes in the same grid cell do not receive CellHead within a threshold period Th_time, the next sensor node in the election list tries to broadcast CellHead.
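The election rule—sort the candidates of a grid cell by ascending abscissa, break ties by ascending ordinate, and let the first entry broadcast CellHead—can be sketched as follows; the dictionary-based node representation is only an assumption for illustration.

```python
def sort_election_list(candidates):
    """Sort same-cell candidates by ascending abscissa, ties broken by ascending ordinate.
    Each candidate is assumed to be a dict with keys 'x', 'y' and 'rcn'."""
    return sorted(candidates, key=lambda node: (node['x'], node['y']))

# The node at index 0 of the sorted list broadcasts CellHead first; later re-elections
# simply walk down the list (Index + 1, Index + 2, ...).
```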
The establishment of neighbor tables
As each grid cell has up to eight neighbor grid cells, one head node has up to eight neighbor head nodes. In the process of establishing the neighbor tables, each head node broadcasts its coordinate and RCN within its communication range. Meanwhile, each head node can receive information from other head nodes. Each head node compares its RCN ($M_{r}$, $M_{c}$) with the RCN of another head node $\left(M_{r}^{\prime}, M_{c}^{\prime}\right)$. According to the relation between ($M_{r}$, $M_{c}$) and $\left(M_{r}^{\prime}, M_{c}^{\prime}\right)$, neighbor head nodes are added to the corresponding rows of the neighbor table. The neighbor table and the corresponding criterion are shown in Table 3 and Fig. 5, respectively.
The neighbor grid cells and corresponding rows in neighbor table. Node A is the head node
Table 3 The neighbor table and corresponding criterion
Data routing
After network initialization, sensor nodes rely on the virtual grid structure to upload sensory data to the mobile sink. When a source node has collected sensory data, it encapsulates its RCN and the sensory data into a data packet and transmits it to the head node of its grid cell; the head node then forwards the data packet to a neighbor head node according to its own DN. The data packet also includes other information, which is described in the following sections. The relation between the DN of a head node and the corresponding transmission direction is shown in Fig. 6 and Table 4.
The DN of current head node and corresponding transmission direction. Node A is the current head node, and the others are neighbor head nodes
Table 4 The rules of selecting next hop head node
Trajectory planning of mobile sink
In our VGDCA-C, the moving region of a mobile sink is composed of a transverse moving belt and a longitudinal moving belt. The two moving belts are presented as a gray region in Fig. 7. The two moving belts may switch to another column or row when target events change. Before the switching of a moving belt, the mobile sink will complete Num cycle moving cycles in the moving belt. In Fig. 7, the grid cells in the gray region are called the moving belt grids. In one moving cycle, the mobile sink parks in different moving belt grids for different durations according to the collected sensory data. The mobile sink assigns weight values to the moving belt grids, and the weight values affect the parking time of a mobile sink. When the mobile sink completes Num cycle moving cycles in the current moving belts, it estimates the location of the event region. Subsequently, the moving belts switch according to the corresponding strategy.
The moving region of mobile sink in a moving cycle. The dashed line with arrow is the trajectory of sink
Trajectory of mobile sink in moving cycle
In our target application scenarios, the event region appears randomly. To adapt to this kind of application scenarios, we design a "cross" moving trajectory for a mobile sink. As illustrated in Fig. 7, the grid cell intersected by two moving belts is called the intersecting grid (IG). In one moving cycle, the mobile sink starts from IG and moves toward the direction whose DN is 5. After the mobile sink arrives at the grid cell whose M r is (n D −1), it starts to move in the opposite direction. The mobile sink can move back to the IG, and then, the moving direction switches to the direction whose DN is 8. When the mobile sink arrives at the grid cell whose M c is (n L −1), the mobile sink switches the moving direction to the opposite direction. The mobile sink moves in the direction whose DN is 7 after moving back to the IG. When the mobile sink arrives at the grid cell whose M r is 2, it starts to move toward the IG again. After moving back to the IG, the mobile sink starts to move to the direction whose DN is 6 until it reaches the grid cell whose M c is 2, then the mobile sink starts to move to the IG. When the mobile sink arrives at the IG, one moving cycle is completed.
Calculation of weight value for moving belt grids
According to Section 3.3, all sensory data are routed to the moving belts. Different moving belt grids are responsible for different areas. As illustrated in Fig. 8, grid A is responsible for 13 grid cells, whereas grid B is responsible for 6 grid cells; hence, different moving belt grids have different workloads. When the mobile sink starts the first moving cycle in the current moving belts, it calculates a weight value, denoted by W, for each moving belt grid. The RCN of the IG is $(M_{r\_{\text{IG}}}, M_{c\_{\text{IG}}})$, and the RCN of a moving belt grid is ($M_{r\_m}$, $M_{c\_m}$). The weight is computed case by case as follows. If $\left(M_{r\_m} > M_{r\_{\text{IG}}}\right) \&\& \left(M_{c\_m} = M_{c\_{\text{IG}}}\right)$:
$$ \begin{aligned} W\left(M_{r\_m},M_{c\_m}\right) &= \min\left\{ \left(M_{c\_m} - 1\right), \left(\left\lceil \frac{D}{a} \right\rceil - M_{r\_m}\right) \right\} \\ &\quad+ \min\left\{ \left(\left\lceil \frac{L}{a} \right\rceil - M_{c\_m}\right), \left(\left\lceil \frac{D}{a} \right\rceil - M_{r\_m}\right) \right\} \end{aligned} $$
The moving belt grids and corresponding responsible areas. S, A, and B are the moving regions of mobile sink in one moving cycle. A′ represents the region which grid A is responsible for and B′ represents the region which grid B is responsible for
$\left (M_{r\_m} = M_{r\_{\text {IG}}} \right) \&\& \left (M_{c\_m} > M_{c\_{\text {IG}}}\right)$ :
$$ \begin{aligned} W\left(M_{r\_m},M_{c\_m}\right) &= \min\left\{ \left(\left\lceil \frac{D}{a} \right\rceil - M_{r\_m}\right), \left(\left\lceil \frac{L}{a} \right\rceil - M_{c\_m}\right) \right\} \\ &\quad+ \min\left\{ \left(\left\lceil \frac{L}{a} \right\rceil - M_{c\_m}\right), \left(M_{r\_m} - 1\right) \right\} \end{aligned} $$
$\left (M_{r\_m} < M_{r\_{\text {IG}}} \right) \&\& \left (M_{c\_m} = M_{c\_{\text {IG}}}\right)$ :
$$ {}\begin{aligned} W\left(M_{r\_m},M_{c\_m} \right) &= \min \left\{ \left(\left\lceil \frac{L}{a} \right\rceil - M_{c\_{\text{IG}}}\right),\left(M_{r\_m} - 1 \right) \right\} \\ &\qquad+\min \left\{ \left(M_{c\_m} - 1 \right),\left(M_{r\_m} - 1 \right) \right\} \end{aligned} $$
$\left (M_{r\_m} = M_{r\_{\text {IG}}} \right) \&\& \left (M_{c\_m} < M_{c\_{\text {IG}}}\right)$ :
$$ {}\begin{aligned} W\left(M_{r\_m},M_{c\_m} \right) &= \min \left\{ \left(\left\lceil \frac{D}{a} \right\rceil - M_{r\_m} \right),\left(M_{c\_m} - 1 \right) \right\} \\ &\quad+\min \left\{ \left(M_{r\_m} - 1 \right),\left(M_{c\_m} - 1 \right) \right\} \end{aligned} $$
If the moving belt grid is the IG itself, i.e., $\left(M_{r\_m} = M_{r\_{\text{IG}}}\right) \&\& \left(M_{c\_m} = M_{c\_{\text{IG}}}\right)$:
$$ \begin{aligned} W\left(M_{r\_m},M_{c\_m}\right) &= \min\left\{ \left(M_{c\_{\text{IG}}} - 1\right), \left(\left\lceil \frac{D}{a} \right\rceil - M_{r\_{\text{IG}}}\right) \right\} \\ &\quad+ \min\left\{ \left(\left\lceil \frac{D}{a} \right\rceil - M_{r\_{\text{IG}}}\right), \left(\left\lceil \frac{L}{a} \right\rceil - M_{c\_{\text{IG}}}\right) \right\} \\ &\quad+ \min\left\{ \left(\left\lceil \frac{L}{a} \right\rceil - M_{c\_{\text{IG}}}\right), \left(M_{r\_{\text{IG}}} - 1\right) \right\} \\ &\quad+ \min\left\{ \left(M_{r\_{\text{IG}}} - 1\right), \left(M_{c\_{\text{IG}}} - 1\right) \right\} \end{aligned} $$
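A sketch of the weight computation is given below. The case ordering mirrors Eqs. (9)–(13), with the last branch corresponding to the IG; the function signature is ours and only illustrative.

```python
import math

def belt_grid_weight(M_r_m, M_c_m, M_r_IG, M_c_IG, L, D, a):
    """Weight W of a moving belt grid, Eqs. (9)-(13); indices follow the RCN convention."""
    nL, nD = math.ceil(L / a), math.ceil(D / a)
    if M_r_m > M_r_IG and M_c_m == M_c_IG:      # upper arm of the longitudinal belt
        return min(M_c_m - 1, nD - M_r_m) + min(nL - M_c_m, nD - M_r_m)
    if M_r_m == M_r_IG and M_c_m > M_c_IG:      # right arm of the transverse belt
        return min(nD - M_r_m, nL - M_c_m) + min(nL - M_c_m, M_r_m - 1)
    if M_r_m < M_r_IG and M_c_m == M_c_IG:      # lower arm of the longitudinal belt
        return min(nL - M_c_IG, M_r_m - 1) + min(M_c_m - 1, M_r_m - 1)
    if M_r_m == M_r_IG and M_c_m < M_c_IG:      # left arm of the transverse belt
        return min(nD - M_r_m, M_c_m - 1) + min(M_r_m - 1, M_c_m - 1)
    # intersecting grid (IG): row and column both coincide with the IG
    return (min(M_c_IG - 1, nD - M_r_IG) + min(nD - M_r_IG, nL - M_c_IG)
            + min(nL - M_c_IG, M_r_IG - 1) + min(M_r_IG - 1, M_c_IG - 1))
```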
Allocation of parking time for mobile sink in the first moving cycle
When the mobile sink starts the first moving cycle in the current moving belts, it allocates parking time to the moving belt grids according to the corresponding weight values. In one moving cycle, the moving time of the mobile sink is calculated by Eq. (14). We assume that in a moving cycle the total parking time $t_{\text{pa}}$ equals the moving time $t_{\text{moving\_sink}}$. Therefore, the total duration of a moving cycle is given by Eq. (15).
$$ t_{\text{moving\_sink}} = \frac{2 \times a \times \left(n_{L} + n_{D} - 6 \right)}{v} $$
$$ T = t_{\text{moving\_sink}} + t_{\text{pa}} = \frac{4 \times a \times \left(n_{L} + n_{D} - 6 \right)}{v} $$
We allocate $t_{\text{pa}}$ to the moving belt grids in proportion to their weight values, and the allocated time is the corresponding parking time. Thus, we need the sum of the weight values, which is defined in Eq. (16).
$$ {}\begin{aligned} W_{\text{all}} &= \sum_{i=2}^{n_{D} -1} W \left(i,M_{c\_{\text{IG}}} \right) \\ &\quad+ \sum_{j=2}^{n_{L} -1} W \left(M_{r\_{\text{IG}}},j \right) - W\left(M_{r\_{\text{IG}}},M_{c\_{\text{IG}}} \right) \end{aligned} $$
In the allocation process, the moving belt grids are classified into three categories. The first category is the top grids, i.e., the moving belt grids adjacent to the boundary grids. The second category is the IG, and the third category consists of the other moving belt grids. The reason for this classification is that the number of times the mobile sink parks in a grid cell during one moving cycle differs between categories; if a grid cell is a top grid, the mobile sink parks in it only once per cycle. We assume that the RCN of a moving belt grid is (i,j). The parking time per stop for the different types of moving belt grids is calculated as follows:
If a grid (i,j) is a top grid, then:
$$ t_{p}\left(i,j\right) = \frac{W\left(i,j\right)}{W_{\text{all}}} \times t_{\text{pa}} $$
If a grid (i,j) is IG, then:
$$ t_{p}\left(i,j\right) = \frac{\frac{W\left(i,j \right) }{W_{\text{all}}} \times t_{\text{pa}} }{4} $$
If a grid (i,j) is one of the other moving belt grids, then
$$ t_{p}\left(i,j\right) = \frac{\frac{W\left(i,j\right)}{W_{\text{all}}} \times t_{\text{pa}}}{2} $$
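The per-stop parking time in the first cycle thus follows from the weight share of a grid and its number of stops per cycle (one for a top grid, four for the IG, two otherwise); a compact sketch, assuming the three grid categories are encoded as strings, is:

```python
def first_cycle_parking_time(W, W_all, n_L, n_D, a, v, grid_type):
    """Parking time per stop in the first moving cycle, Eqs. (17)-(19)."""
    t_pa = 2 * a * (n_L + n_D - 6) / v      # total parking time per cycle, equal to the moving time
    share = W / W_all * t_pa                # time share of this moving belt grid
    visits = {'top': 1, 'IG': 4, 'other': 2}[grid_type]
    return share / visits
```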
In the first moving cycle, the mobile sink collects sensory data in the current moving belts according to the parking times given above. While collecting sensory data, the mobile sink updates the corresponding counters, which are described next.
Counters of mobile sink
In the process of sensory data routing to the mobile sink, the sensory data carry the information on source nodes and routing. According to that information, the mobile sink counts the corresponding counters. When the sensory data are sensed, the source nodes attach their RCNs to the sensory data. When the sensory data are routed to the moving belt grid for the first time, then RCN of this moving belt grid is attached to the sensory data. The counters the mobile sink has are as follows:
Cgm(i,j): If the first moving belt grid that a piece of sensory data meets has RCN (i,j), the amount of that sensory data is added to this counter.
Cgr(i,j): This counter records the moving belt grids that the sensory data pass through; the amount of sensory data is added to the counters of all passed moving belt grids.
Chorizontal(i): The value of i ranges from 1 to 3. As the RCN of the source node is contained in the sensory data, M_r of the source node can be obtained. If M_r of the source node is larger than that of the IG, i is 1, and the amount of sensory data is added to Chorizontal(1). If M_r of the source node equals that of the IG, i is 2. If M_r of the source node is smaller than that of the IG, i is 3.
Cvertical(i): The value of i ranges from 1 to 3. As the RCN of the source node is contained in the sensory data, M_c of the source node can be obtained. If M_c of the source node is larger than that of the IG, i is 1, and the amount of sensory data is added to Cvertical(1). If M_c of the source node equals that of the IG, i is 2. If M_c of the source node is smaller than that of the IG, i is 3.
Dynamic adjustment of parking time
After completing the first moving cycle and returning to the IG, the mobile sink readjusts the parking times for the next moving cycle according to its counters. Because different grid cells in the moving belts handle different amounts of sensory data, the mobile sink needs to park longer in the moving belt grids that handle more sensory data. We divide the total parking time into two equal parts: one part is allocated according to Cgm(i,j), and the other according to Cgr(i,j). The sum of Cgm(i,j) is defined by Eq. (20), and the sum of Cgr(i,j) is defined by Eq. (21).
$$ \begin{aligned} C_{\text{gma}}&= \sum_{j=2}^{n_{L} -1} C_{\text{gm}}\left(M_{r\_{\text{IG}}},j \right) \\ &\quad+ \sum_{i=2}^{n_{D} -1} C_{\text{gm}}\left(i,M_{c\_{\text{IG}}} \right) \\ &\quad- C_{\text{gm}}\left(M_{r\_{\text{IG}}},M_{c\_{\text{IG}}} \right) \end{aligned} $$
$$ \begin{aligned} C_{\text{gra}}&= \sum_{j=2}^{n_{L} -1} C_{\text{gr}}\left(M_{r\_{\text{IG}}},j\right) \\ &\quad+ \sum_{i=2}^{n_{D} -1} C_{\text{gr}}\left(i,M_{c\_{\text{IG}}}\right) \\ &\quad- C_{\text{gr}}\left(M_{r\_{\text{IG}}},M_{c\_{\text{IG}}} \right) \end{aligned} $$
For the recalculated parking time, the moving belt grids are again classified into three categories: top grids, the IG, and the other moving belt grids. We assume that the RCN of a moving belt grid is (i,j). The time the mobile sink parks in the different types of moving belt grids is calculated as follows (the calculated time is the duration of a single stop in the grid):
When a grid (i,j) is a top grid, the time is defined by
$$ \begin{aligned} t_{p}\left(i,j\right) &= \frac{C_{\text{gm}}\left(i,j\right)}{C_{\text{gma}}} \times \frac{t_{\text{pa}}}{2} + \frac{C_{\text{gr}}\left(i,j\right)}{C_{\text{gra}}} \times \frac{t_{\text{pa}}}{2} \\ &= \left[ \frac{C_{\text{gm}}\left(i,j\right)}{C_{\text{gma}}} + \frac{C_{\text{gr}}\left(i,j\right)}{C_{\text{gra}}} \right] \times \frac{a \times \left(n_{L} + n_{D} - 6\right)}{v} \end{aligned} $$
When a grid (i,j) is the IG, the time is defined by
$$ \begin{aligned} t_{p}\left(i,j\right) &= \frac{C_{\text{gm}}\left(i,j\right)}{C_{\text{gma}}} \times \frac{t_{\text{pa}}}{2} + \frac{C_{\text{gr}}\left(i,j\right)}{C_{\text{gra}}} \times \frac{t_{\text{pa}}}{2} \\ &= \left[ \frac{C_{\text{gm}}\left(i,j\right)}{C_{\text{gma}}} + \frac{C_{\text{gr}}\left(i,j\right)}{C_{\text{gra}}} \right] \times \frac{a \times \left(n_{L} + n_{D} - 6\right)}{4 \times v} \end{aligned} $$
When a grid (i,j) is one of the other moving belt grids, the time is defined by
$$ \begin{aligned} t_{p}\left(i,j\right) &= \frac{C_{\text{gm}}\left(i,j\right)}{C_{\text{gma}}} \times \frac{t_{\text{pa}}}{2} + \frac{C_{\text{gr}}\left(i,j\right)}{C_{\text{gra}}} \times \frac{t_{\text{pa}}}{2} \\ &= \left[ \frac{C_{\text{gm}}\left(i,j\right)}{C_{\text{gma}}} + \frac{C_{\text{gr}}\left(i,j\right)}{C_{\text{gra}}} \right] \times \frac{a \times \left(n_{L} + n_{D} - 6\right)}{2 \times v} \end{aligned} $$
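The counter-driven readjustment of Eqs. (22)–(24) differs from the first-cycle allocation only in how the time share is computed; a sketch under the same assumptions as before is:

```python
def adjusted_parking_time(C_gm_ij, C_gma, C_gr_ij, C_gra, n_L, n_D, a, v, grid_type):
    """Parking time per stop after the first cycle, Eqs. (22)-(24), driven by the sink's counters."""
    t_pa = 2 * a * (n_L + n_D - 6) / v
    share = (C_gm_ij / C_gma + C_gr_ij / C_gra) * t_pa / 2
    visits = {'top': 1, 'IG': 4, 'other': 2}[grid_type]
    return share / visits
```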
After the redistribution of parking time, the mobile sink starts a new moving cycle according to the allocated parking times. Before the start of each new moving cycle, all counters of the mobile sink are reset to zero. After the mobile sink completes Num cycle moving cycles, it estimates the location of the event area according to the counters, and the moving belts then move toward the event area.
Moving trajectory of moving belts
As the source nodes are distributed centrally, the mobile sink needs to move toward the event area as close as possible such that the total length of the sensory data transmission is reduced and the energy consumption is decreased. Before the moving belts switch to another column or row, the mobile sink first compares Chorizontal(1), Chorizontal(2), and Chorizontal(3). If Chorizontal(1) has the largest value, the transverse moving belt switches to a neighbor row above. If Chorizontal(3) has the largest value, the transverse moving belt switches to a neighbor row below. If Chorizontal(2) is the largest, or two of three counters are equal, or all three counters are equal, the transverse moving belt does not switch to another row. Similarly, the mobile sink compares Cvertical(1), Cvertical(2) and Cvertical(3). If Cvertical(1) has the largest value, the longitudinal moving belt switches to a neighbor column on the right. If Cvertical(3) is the largest, the longitudinal moving belt switches to a neighbor column on the left. In other cases, the longitudinal moving belt does not switch. According to this strategy, the mobile sink can get closer to the event area.
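The switching decision can be summarized as below; the sign convention (+1 for a shift up or to the right, −1 for down or to the left) and the zero-based indexing of the three counters are assumptions made for the sketch.

```python
def switch_belts(C_h, C_v):
    """Decide how the moving belts shift toward the event area.
    C_h and C_v are three-element lists: index 0, 1, 2 correspond to Chorizontal(1..3) / Cvertical(1..3).
    Returns (row_shift, col_shift) with values -1, 0 or +1."""
    def shift(c):
        if c[0] > c[1] and c[0] > c[2]:
            return +1    # counter 1 strictly largest: shift toward the upper row / right column
        if c[2] > c[0] and c[2] > c[1]:
            return -1    # counter 3 strictly largest: shift toward the lower row / left column
        return 0         # counter 2 largest, or ties: keep the current belt
    return shift(C_h), shift(C_v)
```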
Local updating of mobile sink
When a mobile sink moves from one grid cell to another, the DN of grid cells needs to be updated to maintain a reliable data routing. According to Section 3.3, the sensory data are routed to the mobile sink according to DN of grid cells. Hence, the updating of DN of grid cells denotes the updating of the location information of a mobile sink. For further discussion, the updating operations need to be classified into two categories.
The first category refers to the situation in which the moving belts do not switch. In this situation, the DNs of two grid cells need to be updated when the mobile sink enters a new grid cell: the grid cell the mobile sink was in before entering the new one, and the newly entered grid cell, which sets its DN to 0. The former grid cell sets its DN according to the moving direction: if the mobile sink moves in the direction whose DN is 5, the DN of the former grid cell is set to 7; if the direction DN is 8, the former cell's DN is set to 6; if the direction DN is 7, the former cell's DN is set to 5; and if the direction DN is 6, the former cell's DN is set to 8.
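For this first updating category, the DN assigned to the cell the sink has just left is simply the DN opposite to the moving direction; a minimal sketch of this mapping is:

```python
# When the sink leaves a grid cell while moving within the current belts, the new cell
# takes DN = 0 and the cell just left takes the DN opposite to the moving direction.
OPPOSITE_DN = {5: 7, 8: 6, 7: 5, 6: 8}

def update_former_cell_dn(moving_direction_dn):
    """DN assigned to the grid cell the sink has just left (first updating category)."""
    return OPPOSITE_DN[moving_direction_dn]
```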
The second category refers to the situation in which the moving belts switch to another column or row. When the transverse moving belt switches to another row, both the previous row and the current row need to update their DNs; when the longitudinal moving belt switches to another column, both the previous column and the current column need to update their DNs. The updating depends on the grid cell where the mobile sink is located before moving; the relationship between grid cell DN and GS is shown in Fig. 4. When the mobile sink updates a grid cell's DN, it only needs to update the head node of that cell. Thus, the local updating reduces both the update area and the number of updated sensor nodes.
Re-election of head nodes
To prolong the network lifetime and balance the energy consumption of sensor nodes in each grid cell, the head nodes need to be re-elected regularly. First, the ratio of the residual energy of the current head node to its energy at the time it was elected is calculated. If this ratio falls below a threshold Th, the head node broadcasts Re-Election to start the re-election in the corresponding grid cell. The Re-Election message includes the ordinal number Index of the current head node in the election list. The sensor nodes in the same grid cell receive Re-Election and query the sensor node whose ordinal number in the election list is (Index+1). If a sensor node A finds that it is the queried node, it sends CellHead_01 to the current head node. After receiving CellHead_01, the current head node sends its neighbor list to node A. When node A receives the neighbor list of the current head node, it broadcasts CellHead_02. When the other sensor nodes receive CellHead_02, node A becomes the new head node of the grid cell, and the previous head node retires. According to Eq. (1), all eight neighbor head nodes can receive CellHead_02 and then add node A to their neighbor lists. Until the new head node is ready, sensory data are still transmitted to the previous head node. If the other sensor nodes wait for CellHead_02 longer than the threshold Th_time, they query the sensor node whose ordinal number is (Index+2). If the current head node is the last one on the election list, the sensor nodes query the first sensor node on the election list in the next election.
To evaluate the performance of our proposed algorithm, we designed simulation experiments on the MATLAB platform. The simulation parameters are listed in Table 5, and their values are listed in Table 6. In our simulation, the target monitoring area was a rectangle of size L×D. The network consisted of N sensor nodes with the same communication radius r_c and initial energy E. The size of a control packet was l_c, and the size of a sensory data packet was l_s. We adopted the following energy model: if a sensor node receives n_b bits of sensory data, the consumed energy is (n_b × E_elec) J; if a sensor node sends n_b bits of sensory data over a transmission distance d with distance threshold d_0, the consumed energy is [n_b × (E_elec + E_fs × d^2)] J when d < d_0 and [n_b × (E_elec + E_mp × d^4)] J when d ≥ d_0.
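This is the standard first-order radio model; a minimal sketch, with the symbols named as in the text above, is:

```python
def tx_energy(n_b, d, E_elec, E_fs, E_mp, d0):
    """Energy (J) to transmit n_b bits over distance d under the free-space/multipath model."""
    if d < d0:
        return n_b * (E_elec + E_fs * d ** 2)
    return n_b * (E_elec + E_mp * d ** 4)

def rx_energy(n_b, E_elec):
    """Energy (J) to receive n_b bits."""
    return n_b * E_elec
```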
Table 5 Simulation parameters
Table 6 Values of simulation parameters
To simulate a centrally distributed event, we assumed the event area is circular and all sensor nodes in this area act as source nodes. The network lifetime was defined as the time when the first node runs out of energy. In our VGDCA-C, real-time data collection means that source nodes upload sensory data immediately after they are sensed. In this context, we mainly focused on balancing energy consumption to prolong the network lifetime; thus, we evaluated the lifetime performance and the balance of energy consumption, while the delay of sensory data transmission was not considered. We used the following performance metrics in the simulation experiments:
Network lifetime: defined by the time when the first node runs out of its energy
Average residual energy: the mean value of residual energies of all the sensor nodes
Variance of residual energy: the variance of the residual energies of all the sensor nodes
Average number of transmission hops: the mean value of transmission hops from a source node to the mobile sink.
Performance analysis under different system parameters
In the simulation experiments of VGDCA-C, we first studied the impact of system parameters Th and Num cycle on network performances, where Th denotes the threshold of the re-election of head nodes, which affects the equilibrium of energy consumption in a single grid cell, and Num cycle affects the energy consumption in the entire network and network lifetime. In these experiments, the number of sensor nodes was 1000, and the location of event area changed every 1000 s. The values of Th were 10, 30, 50, 70, and 90%, and the values of Num cycle were 2, 4, 6, 8, and 10.
Network lifetime
As illustrated in Fig. 9, the network lifetime was minimal at Th of 10%, and the lifetime increased with the increase of Th. In the case of Th of 10%, the head node could start re-election when the residual energy became 10% of the energy at the election time. It means that sensor nodes that had been head nodes had little residual energy. However, the sensor nodes that had not been elected had higher residual energy. Thus, the energy consumption within the grid cell was unbalanced, and the first sensor node that ran out of energy appeared soon. Therefore, the network lifetime was short when Th was 10%. With the increase of Th, the sensor nodes in the grid cell assumed the task of the head node more frequently, and the energy consumption within grid cell became more and more balanced. We found that the network lifetime had the highest growth rate when Th varied between 70 and 90% because in that case, energy consumption within grid cell was the most balanced.
The network lifetime under different system parameters (Th and Num cycle ). When Th was 90% and Num cycle was 8, the network lifetime could reach the maximum value
When we analyzed the situation from the vertical axis, we found that for the same Th, network lifetime was minimal when Num cycle was 2. When Num cycle was 2, the mobile sink needed to determine whether to switch the moving belts after every two completed moving cycles. According to the distribution of event area and the strategy of switching the moving belts, two moving belts switched in general circumstances. Hence, the grid cells of two rows and two columns needed to be updated. Therefore, much energy was consumed and network lifetime was shortened. With the increase of Num cycle , the network lifetime also increased. When Num cycle reached 8, the network lifetime had the maximum value because the updating operation did not consume much energy when the moving belts did not switch frequently. When Num cycle was 10, the network lifetime began to decrease because all the sensory data were routed to the moving belts. If the moving belts were not switched for a long time, the sensor nodes in the moving belts could cause the "hot spot" problems, which would shorten the network lifetime. Thus, when Th was 90% and Num cycle was 8, the network lifetime could reach the maximum value.
Average residual energy
The average residual energy indicates the energy utilization rate of the network. As illustrated in Fig. 10, with the increase of Th, the average residual energy decreased continuously. According to Fig. 9, the network lifetime increase was caused by the increase of Th. Thus, the average residual energy decreased. When Th was unchanged, the average residual energy was the lowest at Num cycle of 8, which means that the energy utilization rate was the best in that case. In Fig. 10, it can be seen that when Num cycle was equal to 2, the average residual energy was lower than the average residual energy at Num cycle of 4. However, when Num cycle was 2, the network lifetime was shorter than that when Num cycle was 4. This is because the moving belts needed to be constantly switched, and much energy was consumed to update the corresponding grid cells. Thus, the energy utilization rate was not high when Num cycle was 2 because much energy was wasted. On the other hand, when Num cycle was 8, the energy utilization rate was the highest.
The average residual energy under different Th and Num cycle . When Num cycle was 8, the energy utilization rate was the highest
Variance of residual energy
The variance of residual energy indicates the balance of energy consumption. As illustrated in Fig. 11, the variance was minimal at Th of 10% because only a small number of sensor nodes had consumed energy when the network ended. Most sensor nodes had almost the same residual energy. With the increase of Th, most sensor nodes began to consume energy, and the variance increased. When the Num cycle was 2 and 4, the network lifetime was short, and the variance was low. Although the energy consumption was balanced in that state, that was not an excellent situation. When the Num cycle was 10, the variance reached the maximum value because in that case, the moving belts stayed in the same area for a long time, and the sensory data converged in that area. Hence, the sensor nodes of that area consumed all energy very fast. The energy consumption of network was unbalanced. When Num cycle was 6 and 8, the variance values were close, and the network had a better performance regarding the lifetime and average residual energy.
The variance of residual energy under different system parameters (Th and Num cycle ). When Num cycle was 6 and 8, the variance values were close, and the network had a better performance regarding the lifetime and average residual energy
According to the analysis of network lifetime, average residual energy, and the variance of residual energy, the network had the best performance at Th of 90% and Num cycle of 8. Therefore, we set Th to 90% and Num cycle to 8 and compared the performance of the VGDCA-C with that of the VGDD [27].
Comparison with VGDD
In this section, we compare the performance of the VGDCA-C and VGDD for different numbers of sensor nodes. The numbers of sensor nodes were 800, 1000, 1200, 1400, and 1600, respectively, and the time interval of event area change was 1000 s.
As illustrated in Fig. 12, we compared the network lifetime for different numbers of sensor nodes. With the increase of the number of sensor nodes, the network lifetime of the VGDCA-C was always longer than that of the VGDD. In application scenarios with centrally distributed events, the VGDCA-C allocates parking time dynamically according to the counters of the mobile sink, so the mobile sink parks longer in the virtual grids with more sensory data. This reduces energy consumption and the total transmission length. Meanwhile, the two moving belts switch dynamically according to the counters of the mobile sink; these counters indicate the location of the event area, and the moving belts switch toward it. In the VGDD, by contrast, the mobile sink moves along a predefined trajectory and cannot adjust it when the event area changes. Hence, the network lifetime of the VGDCA-C was longer. When the number of sensor nodes increased, the number of source nodes also increased; nevertheless, there was no major fluctuation in the network lifetime.
The comparison of VGDD and VGDCA-C in network lifetime. It shows that the network lifetime of the VGDCA-C was longer than that of the VGDD in our simulation model
As mentioned above, the average residual energy indicates the energy utilization ratio. As illustrated in Fig. 13, with the increase of the number of sensor nodes, the average residual energy of the VGDCA-C was slightly lower than that of the VGDD, which indicates that the energy utilization ratio of the VGDCA-C was higher. In Fig. 12, it can be seen that the network lifetime of the VGDCA-C was two times greater than that of the VGDD, whereas its average residual energy was only slightly below that of the VGDD; this means the VGDCA-C could work longer with the same energy consumption, i.e., it had a higher energy utilization ratio.
The comparison of VGDD and VGDCA-C in average residual energy. It indicates that the VGDCA-C could work longer when the same energy was consumed, which further means that the VGDCA-C had higher energy utilization ratio
As illustrated in Fig. 14, when the number of sensor nodes was 800 and 1600, the variance of the VGDCA-C was slightly larger than that of the VGDD. However, when the number of sensor nodes was 1000, 1200, and 1400, the variance of the VGDCA-C was slightly lower than that of the VGDD. The performances of the two algorithms regarding the balance of energy consumption were thus similar. Considering that the network lifetime of the VGDCA-C was longer than that of the VGDD, the energy consumption balance of the VGDCA-C was slightly better.
The comparison of VGDD and VGDCA-C in a variance of residual energy. The energy consumption balance of the VGDCA-C was slightly better than that of the VGDD
Average number of transmission hops
As illustrated in Fig. 15, when the number of sensor nodes varied from 800 to 1600, the average number of transmission hops in the VGDCA-C was about 4, whereas the corresponding number for the VGDD was greater than 7. In other words, when a source node sent data to the mobile sink, the VGDCA-C reduced the number of hops by about 3 compared to the VGDD. This is because the trajectory of the mobile sink is adjusted dynamically toward the event area; moreover, when the mobile sink allocates parking time, it parks longer in the grids that process more sensory data. By constantly adjusting the movement trajectory and the parking time, the mobile sink gets closer to the source nodes, and the sensory data can be uploaded to the mobile sink faster. In the VGDD, the mobile sink follows a predetermined trajectory and cannot adjust its movement to changes in the event area. Thus, the VGDCA-C had better real-time performance than the VGDD.
The comparison of VGDD and VGDCA-C in an average number of transmission hops. The VGDCA-C decreases transmission hops in the applications with centrally distributed events
In this paper, an algorithm for real-time data collection in applications with centrally distributed events, called the VGDCA-C, is proposed and analyzed. Firstly, a virtual grid structure is introduced to initialize the network. The virtual grid structure divides the network into several virtual square areas of the same size, and virtual grid cells of different areas have different RCNs and DNs. This structure is the basis of sensory data routing. Then, the routing of sensory data is discussed: with the help of the virtual grid structure, the sensory data can be routed to the mobile sink easily and automatically. Afterward, the trajectory planning of the mobile sink is proposed such that the mobile sink can move closer to the event area and park longer in the virtual grids that process more sensory data. Using the proposed algorithm, the total length of routing paths and the transmission delay are decreased. To reduce the energy consumption caused by updating the mobile sink's location, we propose local updating. Finally, we propose the re-election of head nodes in the virtual grid cells to balance energy consumption. Compared with the VGDD algorithm, the VGDCA-C algorithm prolongs network lifetime and decreases transmission delay in applications with centrally distributed events.
IWSNs:
Industrial wireless sensor networks
VGDCA-C:
Virtual grid-based real-time data collection algorithm for applications with centrally distributed events
MWST:
Minimum Wiener index spanning tree
MST:
Minimum spanning tree
RLW:
Random line walk
EEGBDD:
Energy efficient grid-based data dissemination routing mechanism
O-LEACH:
Optimizing LEACH clustering algorithm
RNs:
Convergence nodes
VGDRA:
Virtual grid-based dynamic routes adjustment scheme
VGDD:
Virtual grid-based data dissemination scheme
RCN:
Row column number
DN:
Direction number
GS:
Grid of sink
Head node in the upper left corner
Head node in the upper right corner
Head node in the lower left corner
RD:
Head node in the lower right corner
UG:
Head node in the grid above
DG:
Head node in the grid below
LG:
Head node in the left grid
Head node in the right grid
IG:
Intersecting grid
L Shu, M Mukherjee, X Wu, Toxic gas boundary area detection in large-scale petrochemical plants with industrial wireless sensor networks. IEEE Commun. Mag.54:, 22–28 (2016).
L Shu, M Mukherjee, X Xu, K Wang, X Wu, A survey on gas leakage source detection and boundary tracking with wireless sensor networks. IEEE Access. 4:, 1700–1715 (2016).
B Ahmed, B Walid, R Herve, Optimal WSN deployment models for air pollution monitoring. IEEE Trans. Wireless Commun.16:, 2723–2735 (2017).
M Amjad, L Jaime, S Sandra, A secure and low-energy zone-based wireless sensor networks routing protocol for pollution monitoring. Wireless Commun. Mobile Comput.16:, 2869–2883 (2016).
V Carlos, D Yezid, Delay/disruption tolerant network-based message forwarding for a river pollution monitoring wireless sensor network application. Sensors. 16:, 436–460 (2016).
CZ Zulkifli, HN Hassan, W Ismail, et al., Embedded RFID and wireless mesh sensor network materializing automated production line monitoring. ACTA Phys. Polonica A. 128(B86-B89) (2015).
G Han, L Liu, S Chan, R Yu, Y Yang, HySense: a hybrid mobile CrowdSensing framework for sensing opportunities compensation under dynamic coverage constraint. IEEE Commun. Mag.55:, 93–99 (2017).
G Han, L Zhou, H Wang, W Zhang, S Chan, A source location protection protocol based on dynamic routing in WSNs for social internet of things. Futur. Gener. Comput. Syst.82:, 689–697 (2017).
X Tian, Y Zhu, K Chi, J Liu, D Zhang, Reliable and energy-efficient data forwarding in industrial wireless sensor networks. IEEE Syst. J.11:, 1424–1434 (2015).
G Han, X Yang, L Liu, W Zhang, A joint energy replenishing and data collection algorithm in wireless rechargeable sensor networks. IEEE Internet Thing J. (2017).
E Lee, S Park, J Lee, Novel service protocol for supporting remote and mobile users in wireless sensor networks with multiple static sinks. Wireless Netw. 17:, 861–875 (2011).
S Jannu, P Janam, A grid based clustering and routing algorithm for solving hot spot problem in wireless sensor networks. Wireless Netw.22:, 1901–1916 (2016).
C Zhu, G Han, H Zhang, A honeycomb structure based data gathering scheme with a mobile sink for wireless sensor networks. Peer-to-Peer Netw. Appl.10:, 1–16 (2016).
C Zhu, S Zhang, G Han, A greedy scanning data collection strategy for large-scale wireless sensor networks with a mobile sink. Sensors. 16:, 1432–1459 (2016).
S Han, I Jeong, S Kang, Low latency and energy efficient routing tree for wireless sensor networks with multiple mobile sinks. J. Netw. Comput. Appl. 36:, 156–166 (2013).
K Shin, S Kim, Predictive routing for mobile sinks in wireless sensor networks: a milestone-based approach. J. Supercomput.62:, 1519–1536 (2012).
G Shi, J Zheng, J Yang, Z Zhao, Double-blind data discovery using double cross for large-scale wireless sensor networks with mobile sinks. IEEE Trans. Vehicular Technol.61:, 2294–2304 (2012).
P Singh, R Kumar, V Kumar, An energy efficient grid based data dissemination routing mechanism to mobile sinks in Wireless Sensor Network, International Conference on Issues and Challenges in Intelligent Computing Techniques, 401–409 (2014).
C Tunca, M Dönmez, S Isik, C Ersoy, Ring routing: an energy-efficient routing protocol for wireless sensor networks with a mobile sink. IEEE Trans. Mobile Comput.14:, 1947–1960 (2015).
N Wang, Y Chiang, Power-aware data dissemination protocol for grid based wireless sensor networks with mobile sinks. IET Commun.5:, 2684–2691 (2011).
S Mottaghi, M Zahabi, Optimizing LEACH clustering algorithm with mobile sink and rendezvous nodes, AEU-Int. J. Electron. Commun.69:, 507–514 (2015).
C Konstantopoulos, G Pantziou, D Gavalas, et al, A rendezvous-based approach enabling energy-efficient sensory data collection with mobile sinks. IEEE Trans. Parallel Distributed Syst.23:, 809–817 (2012).
A Khan, A Abdullah, M Razzaque, VGDRA: a virtual grid-based dynamic routes adjustment scheme for mobile sink-based wireless sensor networks. IEEE Sensors J.15:, 526–534 (2015).
A Kinalis, S Nikoletseas, D Patroumpa, Biased sink mobility with adaptive stop times for low latency data collection in sensor networks. Inf. Fusion. 15:, 56–63 (2009).
S Ghafoor, M Rehmani, S Cho, An efficient trajectory design for mobile sink in a wireless sensor network. Comput. Electr. Eng.40:, 2089–2100 (2014).
G Yang, S Liu, X He, N Xiong, Adjustable trajectory design based on node density for mobile sink in WSNs. Sensors. 16:, 2091–2114 (2016).
A Khan, A Abdullah, M Razzaque, VGDD: a virtual grid based data dissemination scheme for wireless sensor networks with mobile sink. International J. Distributed Sensor Netw.11:, 1–17 (2015).
G Han, X Yang, L Liu, W Zhang, M Guizani, A disaster management-oriented path planning for mobile anchor-based localization in wireless sensor networks. IEEE Trans. Emerging Topics Comput. (2017).
The work is supported by "the Fundamental Research Funds for the Central Universities, no. 2017B14714", supported by "the National Natural Science Foundation of China under grant no. 61572172", and supported by "Changzhou Sciences and Technology Program, no. CE-20165023 and no. CE20160014" and "six talent peaks project in Jiangsu Province, no. XYDXXJ-S-007".
The values of simulation parameters are listed in Table 6.
Department of Internet of Things Engineering, Hohai University, Changzhou, China
Chuan Zhu
, Xiaohan Long
, Guangjie Han
, Jinfang Jiang
& Sai Zhang
CZ, XL, GH, JJ-N, and SZ designed the study, performed the research, analyzed the data, and wrote the paper. All authors read and approved the final manuscript.
Correspondence to Guangjie Han.
The authors declared that they have no competing interests.
Chuan Zhu received the Ph.D. degree from the Department of Computer Science, Northeastern University, Shenyang, China, in 2009. And in December 2017, he finished his work as a Postdoctoral Researcher with Hohai University. He is currently a Lecturer in the Department of Information and Communication System, Hohai University, China. He has authored over ten papers in related international conferences and journals. His current research interests are sensor networks, cloud computing, and computer networks.
Xiaohan Long is a Master degree candidate of the Department of Internet of things and its Application at Hohai University, China. His current research interests are wireless sensor networks, underwater wireless sensor networks, cloud computing, and Android security software development.
Guanjie Han is currently a Professor in the Department of Information and Communication System, Hohai University, Changzhou, China. In 2004, he received the Ph.D. degree from Northeastern University, Shenyang, China. From 2004 to 2006, he was a Product Manager for the ZTE Company. In February 2008, he finished his work as a Postdoctoral Researcher in the Department of Computer Science, Chonnam National University, Gwangju, Korea. From October 2010 to 2011, he was a Visiting Research Scholar in the Osaka University, Suita, Japan. He is the author of over 230 papers published in related international conference proceedings and journals and is the holder of 100 patents. His current research interests include sensor networks, computer communications, mobile cloud computing, and multimedia communication and security. Dr. Han has served as a Co-chair for more than 50 international conferences/workshops and as a Technical Program Committee member of more than 150 conferences. He had been awarded the ComManTel 2014, ComComAP 2014, Chinacom 2014, and Qshine 2016 Best Paper Awards. He is a member of IEEE and ACM.
Jinfang Jiang is currently a Lecturer in the Department of Information and Communication System at Hohai University, China. She received her Ph.D degree in Information and Communication Engineering from Hohai University, China, in 2015. Her current research interests are security and localization for sensor networks.
Sai Zhang received the Master degree from the Department of Information and Communication System at Hohai University, China, 2017. He has published 1 paper in related international conferences and journals. He is working at Huawei Technologies Co., Ltd. currently.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Mobile sink
Centrally distributed events
Algorithms and Architectures for Industrial Wireless Sensor Networks
Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs
Junjun Zhao1,
Menggang Yu2 &
Xi-Ping Feng3
Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in oncology, to reduce the number of patients placed on ineffective experimental therapies. Recently, Koyama and Chen (2008) discussed how to conduct proper inference for such studies because they found that the inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies in which the actual second stage sample sizes differ from the planned ones.
We consider an alternative inference method based on likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihood. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis.
In addition to providing inference for a couple of scenarios where Koyama and Chen's method can be difficult to apply, the estimate based on our method appears to have certain advantages in terms of inference properties in many numerical simulations: it generally led to smaller biases and narrower confidence intervals while maintaining similar coverage. We also illustrate the two methods in a real data setting.
Inference procedures used with Simon's designs almost always ignore the actual sampling plan. Reported P-values, point estimates and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. Proper statistical inference procedures should be used.
Simon's two-stage designs [1] are commonly used in phase II clinical trials, especially in cancer clinical trials. In a study with a Simon's design, the null hypothesis is concerned with a response rate, \(H_0: \pi \le \pi_0\). The power is calculated at some \(\pi_1 > \pi_0\). A Simon's design is usually indexed by four numbers that represent the stage 1 sample size (\(n_1\)), stage 1 critical value (\(r_1\)), final sample size (\(n_t\)) and final critical value (\(r_t\)). In stage 1, a sample of size \(n_1\) is taken. If the number of successes \(X_1\) in stage 1 satisfies \(X_1 \le r_1\), the trial is stopped for futility; otherwise, an additional sample of size \(n_2 = n_t - n_1\) is taken. Let \(X_2\) be the number of successes in stage 2, and let \(X_t = X_1 + X_2\). If \(X_t \le r_t\), futility is concluded; otherwise efficacy is concluded by rejecting \(H_0\). Software for calculating Simon's two-stage designs is available, for example, from a website at the National Cancer Institute: http://linus.nci.nih.gov/brb/samplesize/otsd.html, from a website at the Department of Biostatistics of the Vanderbilt University: http://biostat.mc.vanderbilt.edu/wiki/Main/TwoStageInference, and from the NCSS/PASS package: http://www.ncss.com/.
Koyama and Chen [2] (hereafter KC) pointed out that the inference procedures used with Simon's designs almost always ignore the actual sampling plan. Reported P-values, point estimates and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. They outlined proper statistical inference procedures for studies based on the Simon's two-stage designs.
Because the actual sample size of stage 2 may frequently differ from the planned one due to various reasons, KC also proposed a way to conduct a hypothesis testing when the stage 2 sample size is changed in a Simon's design. They focused on the case of non-informative sample size change at the second stage. In other words, the actual stage 1 sample size always equals to the planned stage 1 sample size but the actual stage 2 sample size can differ from the planned stage 2 sample size. In addition, the decision to use a different sample size must be independent of the observed outcome data. Inference then needs to be made based on the actual data. This is in contrast to adaptive designs that can alter the sample size based on interim results. We restrict our attention to the same setting as KC although we believe our method can be extended.
The scenarios of non-informative sample size change or protocol deviation can arise quite frequently in practice. Shortening of stage 2 can occur in cases of early termination of study due to lack of funding, slow accrual, non-informative drop-outs, accrual of ineligible subjects, etc. Such shortening of stage 2 sample size can be reasonably assumed to be independent of the outcomes of the study. Extension of stage 2 can occur in cases of sites coordination error, over compensation for unevaluable or dropout patients, or administrative reasons.
In applying KC's method, we found computational difficulties for certain scenarios due to the discrete nature of the binomial distribution, in particular when the number of responders \(x_1\) at the first stage exceeds the final boundary \(r_t\) because the treatment is (unexpectedly) efficacious. Because Simon's two-stage design does not stop for early efficacy [1], the study would continue to the second stage; in this case, KC's method breaks down. Another possible problem arises when there are no responders at the second stage, that is, \(x_2 = 0\). We give a detailed explanation after we review their method in the next section. We therefore introduce a different method for inference based on conditional likelihood. Besides the ability to make proper inference in settings where KC's method may be difficult to apply, our method is also seen to improve on statistical properties for many settings we have investigated.
Porcher and Desseaux [3] considered different approaches for point and confidence intervals estimation, as well as computation of p-values for the same setting as KC. In their methods, the rankings used for computing p-values were based on estimators instead of likelihood. They recommended the uniformly minimum variance unbiased estimator (UMVUE) as it exhibited good properties. In particular, when the second stage sample size is unaltered, they pointed out that the method based on UMVUE is equivalent to KC [3]. For this reason, our method should also improve on their methods.
In addition to [2, 3], other related works exist. Green and Dahlberg [4] were among the first who considered settings that accommodate a modified sample size in both stages even though the proposed analysis method was ad hoc. Masaki et al. [5] considered designs for a range of possible stage I and total sample size deviations from planned study. Li et al. [6] formulated a Bayesian approach with a modified sample size. Their method can have desirable frequentist properties under certain types of priors. Recently, Zeng et al. [7] considered computation improvement and proposed a normal approximation that is accurate even under small sample sizes.
Review of Koyama and Chen (2008)
The KC method centers mainly on the calculation of p-values. Throughout, we use P π (E) to represent the probability of the event E at a specific π. Denote by x 1 and x 2 the actual observed numbers of responders at stages 1 and 2 of a study based on Simon's two-stage design.
If x 1≤r 1, the trial is stopped early at the first stage due to futility. In this case, the p-value is given by \(P_{\pi _{0}}[X_{1} \ge x_{1}|n_{1}]\), which can be easily computed from the binomial distribution with size n 1 and success probability π 0.
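To make this calculation concrete, the sketch below (not the authors' code; the design values in the example call are hypothetical) computes the early-termination p-value as an upper binomial tail with SciPy.

```python
# Minimal sketch: p-value when the trial stops at stage 1 (x1 <= r1).
# The design parameters in the example call are hypothetical, chosen only for illustration.
from scipy.stats import binom

def stage1_pvalue(x1, n1, pi0):
    """P_{pi0}[X1 >= x1 | n1], the upper binomial tail."""
    return binom.sf(x1 - 1, n1, pi0)  # sf(k) = P[X > k], so sf(x1 - 1) = P[X >= x1]

print(stage1_pvalue(x1=2, n1=19, pi0=0.15))
```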
If x 1>r 1, the trial continues to the second stage. In this case, the p-value calculation is based on observed sample paths, given by
$$ \sum_{x=r_{1}+1}^{n_{1}}P_{\pi_{0}}[X_{1}=x|n_{1}]P_{\pi_{0}}[X_{2}\geq x_{1}+x_{2}-x|n_{2}], $$
(1)
where \(P_{\pi _{0}}[X_{2}\geq x_{1}+x_{2}-x|n_{2}]\) accounts for sample paths at least as 'extreme' as the observed one given that x>r 1 responses are observed at stage 1. The actual type I error and power are evaluated through
$$ P_{\pi}[\text{Reject}\ H_{0}] = \sum_{x = r_{1}+1}^{n_{1}} P_{\pi}[X_{1}=x|n_{1}]\, P_{\pi}[X_{2} > r_{t}-x \,|\, n_{2}] $$
under H 0 and H 1, respectively. Let A(x,n 2,π)≡P π [X 2>r t −x | n 2] be the conditional rejection rate of H 0 at the end of stage 2 given X 1=x. Then, the rejection rule at the end of stage 2, x 1+x 2>r t , is equivalent to
$$P_{\pi_{0}}[X_{2}\geq x_{2}|n_{2}] \le A(x_{1},n_{2},\pi_{0}), $$
where A(x 1,n 2,π 0) serves as a conditional critical value.
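The sketch below implements the quantities just reviewed, namely the p-value in (1) and the conditional rejection rate A(x 1,n 2,π). It is an illustrative reimplementation rather than the authors' code, and the values in the example calls are hypothetical.

```python
# Sketch of the second-stage quantities reviewed above; numbers in the example
# calls are hypothetical, Simon-design-like values.
from scipy.stats import binom

def A(x1, n2, pi, r_t):
    """Conditional rejection rate A(x1, n2, pi) = P_pi[X2 > r_t - x1 | n2]."""
    return binom.sf(r_t - x1, n2, pi)

def stage2_pvalue(x1, x2, n1, n2, r1, pi0):
    """Equation (1): p-value based on observed sample paths when x1 > r1."""
    total = 0.0
    for x in range(r1 + 1, n1 + 1):
        # P[X2 >= x1 + x2 - x | n2]; sf(k - 1) = P[X2 >= k], and sf of a negative
        # argument correctly returns 1 when x1 + x2 - x <= 0.
        total += binom.pmf(x, n1, pi0) * binom.sf(x1 + x2 - x - 1, n2, pi0)
    return total

print(stage2_pvalue(x1=5, x2=6, n1=19, n2=20, r1=3, pi0=0.15))
print(A(5, 20, 0.15, r_t=8))
```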
When the actual sample size of stage 2, denoted by \(n_{2}^{*}\), deviates from n 2, A(x 1,n 2,π) can still be used as a conditional criterion for decision making. That is, H 0 is rejected when
$$P_{\pi_{0}}[X_{2}\geq x_{2}|n_{2}^{*}] \le A(x_{1},n_{2},\pi_{0}). $$
However, with the presence of the second stage sample size deviation, the p-value cannot be directly extended from (1) because the observed total number of responses x 1+x 2 is not a good ranking determinant of 'extremeness' any more. In particular, KC gave a concrete example in which two different sample paths (x 1,x 2) and \((x_{1}^{*}, x_{2}^{*})\) with the same total number of responses (\(x_{1}^{*}+x_{2}^{*}=x_{1}+x_{2}\)) and the same deviated sample size \(n_{2}^{*}\) of stage 2 may lead to different conclusions about the hypothesis. Therefore, Koyama and Chen [2] proposed the following way of calculating p-value.
Find π ∗ such that \(A(x_{1},n_{2},\pi ^{*})=P_{\pi _{0}}[X_{2}\geq x_{2}|n_{2}^{*}]\).
Compute the p-value by
$$\begin{array}{@{}rcl@{}} \sum_{x = r_{1}+1}^{n_{1}} P_{\pi_{0}}[X_{1}=x|n_{1}]A(x,n_{2},\pi^{*}). \end{array} $$
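A minimal sketch of this two-step procedure is given below (an illustrative reimplementation rather than the authors' code; the design and data values are hypothetical, mirroring a 19/20-patient design). The root-finding step makes explicit why the procedure can fail when A(x 1,n 2,·)≡1, as discussed next.

```python
# Sketch of the Koyama-Chen two-step calculation (steps (a) and (b) above).
# Illustrative reimplementation; the design and data values are hypothetical.
from scipy.stats import binom
from scipy.optimize import brentq

def A(x1, n2, pi, r_t):
    return binom.sf(r_t - x1, n2, pi)

def kc_pvalue(x1, x2, n1, n2, n2_star, r1, r_t, pi0):
    # Step (a): find pi* with A(x1, n2, pi*) = P_{pi0}[X2 >= x2 | n2*].
    target = binom.sf(x2 - 1, n2_star, pi0)
    # A(x1, n2, .) is monotone in pi but identically 1 when x1 > r_t, in which
    # case no root exists; this is exactly the breakdown discussed below.
    pi_star = brentq(lambda p: A(x1, n2, p, r_t) - target, 1e-9, 1 - 1e-9)
    # Step (b): average the conditional rejection rate over first-stage outcomes.
    return sum(binom.pmf(x, n1, pi0) * A(x, n2, pi_star, r_t)
               for x in range(r1 + 1, n1 + 1))

print(kc_pvalue(x1=8, x2=4, n1=19, n2=20, n2_star=6, r1=3, r_t=8, pi0=0.15))
```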
One difficulty with this calculation arises when x 1>r t . Although infrequent, this happens when the investigational treatment is unexpectedly efficacious. Because Simon's two-stage designs do not stop for early efficacy [1], the study continues to the second stage. In this case, we have A(x 1,n 2,π)≡1 for any π. Therefore π ∗ cannot be determined from step (a) above and the algorithm breaks down.
Another possible problem arises when x 2=0. In this case, \(P_{\pi _{0}}[X_{2}\geq x_{2}|n_{2}^{*}] \equiv 1\) for any \(n_{2}^{*}\). When x 1≤r t , this corresponds to the solution π ∗=1. The corresponding p-value is therefore independent of \(n_{2}^{*}\) and equals \(\sum _{x = r_{1}+1}^{n_{1}} P_{\pi _{0}}[X_{1}=x] = P_{\pi _{0}}[X_{1}> r_{1}]\). This may not be sensible, as it is independent of both the observed number of responses x 1 and the actual second stage sample size \(n_{2}^{*}\). We therefore introduce a different method for inference based on likelihood.
Likelihood based construction of confidence intervals
We extend the existing likelihood based inference for two-stage and multiple stage trials [8–12] to our setting for construction of p-values and confidence intervals. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihood. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under H 0.
Let M denote the stopping stage, and let S M denote the total number of responders accumulated up to the stopping stage. That is, S M =X 1 when M=1 and S M =X 1+X 2 when M=2. Similarly, let N M be the total sample size of the study. The probability mass function of the random vector (M,S M ) is given by
$$\begin{array}{@{}rcl@{}} f(m, s_{m} |\pi)\,=\, \left\{ \begin{array}{ll} {n_{1}\choose s_{m}} \pi^{s_{m}}(1 - \pi)^{n_{1}-s_{m}} & \quad m=1 \\ \sum_{x_{1}=(r_{1}+1)\vee (s_{m}-n_{2})}^{s_{m} \wedge n_{1}} {n_{1}\choose x_{1}} {n_{2}\choose s_{m}-x_{1}} \pi^{s_{m}}(1 - \pi)^{n_{1}+n_{2}-s_{m}} & \quad m=2 \end{array} \right. \end{array} $$
where ∧ takes the minimum and ∨ takes the maximum of its arguments. Jung and Kim [8] showed that (M,S M ) is complete and sufficient for π. The MLE of π is therefore \(\hat {\pi }=S_{M}/N_{M}\). However, the MLE is biased [11, 13]. Based on the fact that X 1/n 1 is always an unbiased estimator of the true probability π, Jung and Kim [8] derived the UMVUE of π to be
$$\begin{array}{@{}rcl@{}} \tilde{\pi}= \left\{ \begin{array}{ll} \frac{x_{1}}{n_{1}} & m=1\\ \\ \frac{\sum_{x_{1}=(r_{1}+1)\vee (s_{m}-n_{2})}^{s_{m}\wedge n_{1}}{n_{1}-1 \choose x_{1}-1}{n_{2} \choose s_{m}-x_{1}}}{\sum_{x_{1}=(r_{1}+1)\vee (s_{m}-n_{2})}^{s_{m}\wedge n_{1}}{n_{1} \choose x_{1}}{n_{2} \choose s_{m}-x_{1}}}& m=2 \end{array} \right. \end{array} $$
The existence of the UMVUE \(\tilde {\pi }\) also facilitates the determination of confidence intervals. In particular, an exact (1−α)% confidence interval (π L ,π U ) for π is given by
$$\begin{array}{@{}rcl@{}} Pr(\tilde{\pi}(M, S_{M}) \ge \tilde{\pi}(m,s_{m}) |\pi=\pi_{L}) = \alpha/2 \end{array} $$
$$\begin{array}{@{}rcl@{}} Pr(\tilde{\pi}(M, S_{M}) \ge \tilde{\pi}(m,s_{m}) |\pi=\pi_{U}) = 1-\alpha/2. \end{array} $$
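The sketch below illustrates this machinery: the pmf of (M,S M ), the UMVUE, and the exact confidence limits obtained by inverting the UMVUE ordering. It is an illustrative reimplementation under hypothetical design values; as discussed below, the actual second-stage size \(n_{2}^{*}\) simply replaces n 2 when the study is extended or shortened.

```python
# Sketch of the (M, S_M) pmf, the UMVUE, and the UMVUE-ordered exact CI.
# Illustrative only; design values in the example call are hypothetical.
from scipy.special import comb
from scipy.optimize import brentq

def pmf(m, s, pi, n1, n2, r1):
    """f(m, s_m | pi) for Simon's two-stage design."""
    if m == 1:                       # stopped at stage 1, s <= r1
        return comb(n1, s) * pi**s * (1 - pi)**(n1 - s)
    lo, hi = max(r1 + 1, s - n2), min(s, n1)
    c = sum(comb(n1, x1) * comb(n2, s - x1) for x1 in range(lo, hi + 1))
    return c * pi**s * (1 - pi)**(n1 + n2 - s)

def umvue(m, s, n1, n2, r1):
    """Jung-Kim UMVUE of pi based on (M, S_M)."""
    if m == 1:
        return s / n1
    lo, hi = max(r1 + 1, s - n2), min(s, n1)
    num = sum(comb(n1 - 1, x1 - 1) * comb(n2, s - x1) for x1 in range(lo, hi + 1))
    den = sum(comb(n1, x1) * comb(n2, s - x1) for x1 in range(lo, hi + 1))
    return num / den

def sample_space(n1, n2, r1):
    paths = [(1, s) for s in range(0, r1 + 1)]             # early stopping
    paths += [(2, s) for s in range(r1 + 1, n1 + n2 + 1)]  # continued to stage 2
    return paths

def tail_prob(pi, obs, n1, n2, r1):
    """Pr( umvue(M, S_M) >= umvue(observed) | pi )."""
    t_obs = umvue(*obs, n1, n2, r1)
    return sum(pmf(m, s, pi, n1, n2, r1)
               for (m, s) in sample_space(n1, n2, r1)
               if umvue(m, s, n1, n2, r1) >= t_obs)

def exact_ci(obs, n1, n2, r1, alpha=0.10):
    """(pi_L, pi_U) solving the two tail-probability equations above."""
    lo = brentq(lambda p: tail_prob(p, obs, n1, n2, r1) - alpha / 2, 1e-6, 1 - 1e-6)
    hi = brentq(lambda p: tail_prob(p, obs, n1, n2, r1) - (1 - alpha / 2), 1e-6, 1 - 1e-6)
    return lo, hi

# Hypothetical design (n1=19, r1=3) with an actual second-stage size of 6:
print(umvue(2, 12, 19, 6, 3))                               # about 0.48
print(exact_ci((2, 12), n1=19, n2=6, r1=3, alpha=0.10))
```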
Jung and Kim [8] showed that such an ordering of the sample space by the UMVUE is the same as that of Jennison and Turnbull [14]. Chang and O'Brien [12] showed that a likelihood ratio based construction is more efficient and leads to smaller average CI length.
When there is study extension or shortening, the second stage sample size n 2 becomes a random variable. The likelihood can depend on the probability that n 2 takes a specific value \(n_{2}^{*}\). However, when such a change of sample size is not related to π, the above likelihood can be viewed as the conditional likelihood given the observed value of \(n_{2}^{*}\) and can therefore be used to make inference. The UMVUE takes the same form as above, except with \(n_{2}^{*}\) in place of n 2.
The likelihood ratio test of H 0:π=π 0 vs. H 1:π≠π 0 is based on
$$\begin{array}{@{}rcl@{}} T(M, S_{M}, \pi_{0}) = \frac{\hat{\pi}^{S_{M}}(1-\hat{\pi})^{N_{M}-S_{M}}}{\pi_{0}^{S_{M}}(1-\pi_{0})^{N_{M}-S_{M}}}, \end{array} $$
where \(\hat {\pi }=S_{M}/N_{M}\). Under H 0, any path (m,s m ) that has a larger likelihood ratio is considered more 'extreme' against H 0. Therefore, the probability of observing (M,S M ) or more extreme paths is
$$\sum_{\{(m,s_{m}): T(m, s_{m}, \pi_{0}) > T(M, S_{M}, \pi_{0})\}} f(m,s_{m}|\pi_{0}). $$
After correcting for the discreteness of the binomial distribution by including only half of the probability of the observed path (M,S M ), the p-value is proposed to be
$$ P_{\pi_{0}} \equiv \sum_{\{(m,s_{m}):\, T(m, s_{m}, \pi_{0}) > T(M, S_{M}, \pi_{0})\}} f(m,s_{m}|\pi_{0}) + 0.5\,f(M, S_{M}|\pi_{0}). $$
The acceptance region defined as \(\{\pi _{0}: P_{\pi _{0}} \ge \alpha \}\) can be used to form the limits of a (1−α)% confidence interval for π. Note that it is possible for such a region not to be an interval. However, such cases are rare and have minimal impact on the confidence interval performance [12].
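The sketch below illustrates the likelihood-ratio-ordered mid-p-value and the confidence limits obtained by inverting this acceptance region. It is an illustrative reimplementation with hypothetical design values; the pmf helper is re-defined so the sketch is self-contained, and n 2 would be replaced by the actual \(n_{2}^{*}\) under a deviation.

```python
# Sketch of the likelihood-ratio-ordered mid-p-value and the CI obtained by
# inverting the acceptance region {pi0 : P_{pi0} >= alpha}. Illustrative only;
# the example design values are hypothetical.
import numpy as np
from scipy.special import comb

def pmf(m, s, pi, n1, n2, r1):
    if m == 1:
        return comb(n1, s) * pi**s * (1 - pi)**(n1 - s)
    lo, hi = max(r1 + 1, s - n2), min(s, n1)
    c = sum(comb(n1, x1) * comb(n2, s - x1) for x1 in range(lo, hi + 1))
    return c * pi**s * (1 - pi)**(n1 + n2 - s)

def lr_stat(m, s, pi0, n1, n2):
    """T(m, s_m, pi0) with the MLE pi_hat = s / N_m."""
    N = n1 if m == 1 else n1 + n2
    pi_hat = s / N
    return (pi_hat**s * (1 - pi_hat)**(N - s)) / (pi0**s * (1 - pi0)**(N - s))

def lr_mid_pvalue(obs, pi0, n1, n2, r1):
    paths = [(1, s) for s in range(0, r1 + 1)] + \
            [(2, s) for s in range(r1 + 1, n1 + n2 + 1)]
    t_obs = lr_stat(*obs, pi0, n1, n2)
    p = sum(pmf(m, s, pi0, n1, n2, r1) for (m, s) in paths
            if lr_stat(m, s, pi0, n1, n2) > t_obs)
    return p + 0.5 * pmf(*obs, pi0, n1, n2, r1)

def acceptance_ci(obs, n1, n2, r1, alpha=0.10, grid=2001):
    """Scan a pi0 grid and keep the values whose mid-p exceeds alpha."""
    pis = np.linspace(0.001, 0.999, grid)
    accepted = [p for p in pis if lr_mid_pvalue(obs, p, n1, n2, r1) >= alpha]
    return (min(accepted), max(accepted)) if accepted else (None, None)

print(acceptance_ci((2, 12), n1=19, n2=6, r1=3, alpha=0.10))
```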
We conduct simulation studies to evaluate the likelihood ratio test based CI construction and the conditional likelihood based UMVUE, and compare their performance with the approach of Koyama and Chen [2]. In particular, we selected the designs from Tables one and two in Simon's paper [1] and simulated 5,000 data sets based on various values of π. If a simulated study continues to the 2nd stage under the specified design, the actual sample size at the second stage, \(n_{2}^{*}\), is generated via an equal-probability multinomial distribution over values ranging from n 2/3 to 1.5n 2. We have also examined other possible ranges of \(n_{2}^{*}\) and found similar results. We report 90 % CI widths and coverage as well as the actual power from the two methods in Tables 1, 2 and 3 and visualize the comparison of the corresponding CI widths, CI coverage, and bias in Figs. 1, 2, 3 and 4. Since the two methods yield the same CIs in the first stage, we only present the CI width comparison for studies that proceed to the 2nd stage in our simulation. From the tables, we see that the average CI widths based on conditional likelihood are either similar to or smaller than those based on Koyama and Chen [2] in most cases. In some cases, the improvement can be quite substantial (Figs. 1, 2, 3 and 4).
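A sketch of this data-generating scheme is given below; the uniform draw of \(n_{2}^{*}\) over a grid between n 2/3 and 1.5n 2 and the particular design values are assumptions made only for illustration.

```python
# Sketch of the simulation data-generating scheme described above: the planned
# design is fixed, and when a simulated trial reaches stage 2 its actual size
# n2* is drawn with equal probability from a grid between n2/3 and 1.5*n2.
# Illustrative only; the design values and grid construction are assumptions.
import numpy as np

rng = np.random.default_rng(2015)

def simulate_trial(pi, n1, n2, r1):
    x1 = rng.binomial(n1, pi)
    if x1 <= r1:                                   # stopped for futility
        return dict(m=1, x1=x1, x2=None, n2_star=None)
    n2_star = rng.choice(np.arange(int(np.ceil(n2 / 3)), int(1.5 * n2) + 1))
    x2 = rng.binomial(n2_star, pi)
    return dict(m=2, x1=x1, x2=x2, n2_star=n2_star)

# One hypothetical Simon-design-like configuration: n1=19, r1=3, n2=20.
trials = [simulate_trial(pi=0.30, n1=19, n2=20, r1=3) for _ in range(5000)]
# CI width, coverage and bias would then be computed for each simulated trial
# with the CI routines sketched earlier.
```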
Confidence interval width comparison is based on studies made to the second stage; Coverage is to be compared with 90 %; Bias is the absolute value of difference between the estimate and true probability of response
Table 1 Ninety percent CI width and actual power based on studies made to the 2nd stage (α=0.05, β=0.1)
Table 2 Ninety percent CI width and actual power based on studies made to the 2nd stage (α=0.1, β=0.1)
We also compare CI coverage and bias based on all simulated studies, including those stopped after the first stage. We see that the CI coverages are similar between the two methods. The conditional likelihood UMVUE has uniformly smaller bias than the estimate based on Koyama and Chen [2], especially when the underlying true probability is large.
Real example
Advanced hepatobiliary cancers have a poor prognosis, in part complicated by underlying liver dysfunction. Although surgical resection and liver transplantation can be curative for select patients, those with advanced disease have few treatment options, with survival of 6-12 months. GI06-101 was a multi-institutional study conducted by the Hoosier Oncology Group that aimed to assess the efficacy of erlotinib (Tarceva, OSI-774; OSI Pharmaceuticals, Melville, NY) in combination with docetaxel in refractory hepatobiliary cancers [15]. Due to similarly poor outcomes and few existing treatment options for refractory disease at the time of this study's design in 2006, both hepatocellular cancers and biliary tract cancers were included.
The primary end point of this trial was the rate of progression free survival (PFS) at 16 weeks. PFS was defined as time from the start of treatment until disease progression or death of any cause, whichever occurred first. A Simon optimal two-stage design tested the hypothesis that the 16-week PFS is π 0≤15 % (clinically inactive) versus the alternative of π 1≥30 % (warranting further study). The design used 0.10 as the level of significance and 80 % as power. This led to n 1=19, r 1=3, n t =39, and r t =8.
Among the 19 patients of the first stage, 8 were progression free at 16 weeks. The study went on to the second stage and was terminated due to lack of funding after recruiting 6 patients. Among these 6 patients, 4 were progression free at 16 weeks. Therefore we have \(n_{2}^{*}=6\), x 1=8, and x 2=4. The resulting estimate of the 16-week PFS rate is 0.435 with 90 % confidence interval (0.271,0.605) based on Koyama and Chen's method, compared with 0.48 with 90 % confidence interval (0.322,0.646) based on the conditional likelihood method. The conditional likelihood based estimate is larger and has a shorter CI width.
Koyama and Chen [2] considered the statistical inference problem for phase II studies based on Simon's two-stage designs when there are study deviations at the second stage. We propose an alternative method for this problem based on the likelihood principle. In addition to providing inference for scenarios where Koyama and Chen's method breaks down, the resulting estimate appears to have certain advantages in terms of bias magnitude and confidence interval width in many cases.
Sample size change can also happen in the first stage [4, 16]. Our method of inference should be applicable if such a change is not related to the actual outcome. There is also recent research on adaptive Simon's two-stage designs [17] where the second stage sample size is decided at the end of stage 1 based on the observed responses. The decision can be to extend the study because there are fewer positive responses than expected or to shorten the study simply because there are more positive responses than expected. Our method should also be applicable. However, the full likelihood, which incorporates the mechanism of the second stage sample size determination, would need to be used.
Simon R. Optimal two-stage designs for phase II clinical trials. Controlled Clinical Trials. 1989; 10:1–10.
Koyama T, Chen H. Proper inference from Simon's two-stage designs. Stat Med. 2008; 27:3145–3154.
Porcher R, Desseaux K. What inference for two-stage phase II trials? BMC Med Res Methodol. 2012; 12:117.
Green S, Dahlberg S. Planned versus attained design in phase II clinical trials. Stat Med. 1992; 11:853–862.
Masaki N, Koyama T, Yoshimura I, Hamada C. Optimal two-stage designs allowing flexibility in number of subjects for phase II clinical trials. J Biopharm Stat. 2009; 19:721–731.
Li Y, Mick R, Heitjan D. A Bayesian approach for unplanned sample sizes in phase II cancer clinical trials. Clin Trials. 2012; 9:293–302.
Zeng D, Gao F, Hu K, Jia C, Ibrahim J. Hypothesis testing for two-stage designs with over or under enrollment. Stat Med. 2015. In press.
Jung S, Kim K. On the estimation of the binomial probability in multistage clinical trials. Stat Med. 2004; 23:881–896.
Emerson S, Fleming T. Parameter estimation following group sequential hypothesis testing. Biometrika. 1990; 77:875–892.
Rosner G, Tsiatis A. Exact confidence intervals following a group sequential trial: a comparison of methods. Biometrika. 1988; 75:723–729.
Whitehead J. On the bias of maximum likelihood estimation following a sequential test. Biometrika. 1986; 73:573–581.
Chang M, O'Brien P. Confidence intervals following group sequential tests. Controlled Clin Trials. 1986; 7:18–26.
Chang M, Wieand H, Chang V. The bias of the sample proportion following a group sequential phase II clinical trial. Stat Med. 1989; 8:563–570.
Jennison C, Turnbull B. Confidence intervals for a binomial parameter following a multistage test with application to MIL-STD 105D and medical trials. Technometrics. 1983; 25:49–58.
Hoosier Cancer Research Network. Erlotinib in combination with docetaxel in advanced hepatocellular and biliary tract carcinomas. https://clinicaltrials.gov/ct2/show/NCT00532441.
Chen T, Ng T. Optimal flexible designs in phase II clinical trials. Stat Med. 1998; 17:2301–2312.
Banerjee A, Tsiatis A. Adaptive two-stage designs in phase II clinical trials. Stat Med. 2006; 25:3382–3395.
This work was supported by Chinese National Science Foundation Projects 81470737, 81400496, and 81300911.
Department of General Dentistry, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, 639 Zhi Zao Ju Road, Shanghai, 200011, P.R. China
Junjun Zhao
Department of Biostatistics & Medical Informatics, University of Wisconsin, K6/446 CSC 600 Highland Ave., Madison, Wisconsin, USA
Menggang Yu
Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, 639 Zhi Zao Ju Road, Shanghai, 200011, P.R. China
Xi-Ping Feng
Correspondence to Xi-Ping Feng.
JZ analysed the data and conducted simulations. MY motivated the idea of the manuscript and drafted the manuscript. XPF analysed the data, drafted the manuscript and interpreted the results. All authors read and approved the final manuscript.
Zhao, J., Yu, M. & Feng, X. Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs. BMC Med Res Methodol 15, 48 (2015). https://doi.org/10.1186/s12874-015-0039-5
Simon's two-stage designs
Phase II studies
A two-solar-mass neutron star measured using Shapiro delay
P. B. Demorest1,
T. Pennucci2,
S. M. Ransom1,
M. S. E. Roberts3 &
J. W. T. Hessels4,5
Nature volume 467, pages 1081–1083 (2010)
Neutron stars are composed of the densest form of matter known to exist in our Universe, the composition and properties of which are still theoretically uncertain. Measurements of the masses or radii of these objects can strongly constrain the neutron star matter equation of state and rule out theoretical models of their composition [1,2]. The observed range of neutron star masses, however, has hitherto been too narrow to rule out many predictions of 'exotic' non-nucleonic components [3,4,5,6]. The Shapiro delay is a general-relativistic increase in light travel time through the curved space-time near a massive body [7]. For highly inclined (nearly edge-on) binary millisecond radio pulsar systems, this effect allows us to infer the masses of both the neutron star and its binary companion to high precision [8,9]. Here we present radio timing observations of the binary millisecond pulsar J1614-2230 [10,11] that show a strong Shapiro delay signature. We calculate the pulsar mass to be (1.97 ± 0.04)M⊙, which rules out almost all currently proposed [2,3,4,5] hyperon or boson condensate equations of state (M⊙, solar mass). Quark matter can support a star this massive only if the quarks are strongly interacting and are therefore not 'free' quarks [12].
Figure 1: Shapiro delay measurement for PSR J1614-2230.
Figure 2: Results of the MCMC error analysis.
Figure 3: Neutron star mass–radius diagram.
Lattimer, J. M. & Prakash, M. The physics of neutron stars. Science 304, 536–542 (2004)
Lattimer, J. M. & Prakash, M. Neutron star observations: prognosis for equation of state constraints. Phys. Rep. 442, 109–165 (2007)
Glendenning, N. K. & Schaffner-Bielich, J. Kaon condensation and dynamical nucleons in neutron stars. Phys. Rev. Lett. 81, 4564–4567 (1998)
Lackey, B. D., Nayyar, M. & Owen, B. J. Observational constraints on hyperons in neutron stars. Phys. Rev. D 73, 024021 (2006)
Schulze, H., Polls, A., Ramos, A. & Vidaña, I. Maximum mass of neutron stars. Phys. Rev. C 73, 058801 (2006)
Kurkela, A., Romatschke, P. & Vuorinen, A. Cold quark matter. Phys. Rev. D 81, 105021 (2010)
Shapiro, I. I. Fourth test of general relativity. Phys. Rev. Lett. 13, 789–791 (1964)
Jacoby, B. A., Hotan, A., Bailes, M., Ord, S. & Kulkarni, S. R. The mass of a millisecond pulsar. Astrophys. J. 629, L113–L116 (2005)
Verbiest, J. P. W. et al. Precision timing of PSR J0437–4715: an accurate pulsar distance, a high pulsar mass, and a limit on the variation of Newton's gravitational constant. Astrophys. J. 679, 675–680 (2008)
Hessels, J. et al. in Binary Radio Pulsars (eds Rasio, F. A. & Stairs, I. H.) 395 (ASP Conf. Ser. 328, Astronomical Society of the Pacific, 2005)
Crawford, F. et al. A survey of 56 midlatitude EGRET error boxes for radio pulsars. Astrophys. J. 652, 1499–1507 (2006)
Özel, F., Psaltis, D., Ransom, S., Demorest, P. & Alford, M. The massive pulsar PSR J1614−2230: linking quantum chromodynamics, gamma-ray bursts, and gravitational wave astronomy. Astrophys. J. (in the press)
Hobbs, G. B., Edwards, R. T. & Manchester, R. N. TEMPO2, a new pulsar-timing package - I. An overview. Mon. Not. R. Astron. Soc. 369, 655–672 (2006)
Damour, T. & Deruelle, N. General relativistic celestial mechanics of binary systems. II. The post-Newtonian timing formula. Ann. Inst. Henri Poincaré Phys. Théor. 44, 263–292 (1986)
Freire, P. C. C. & Wex, N. The orthometric parameterisation of the Shapiro delay and an improved test of general relativity with binary pulsars. Mon. Not. R. Astron. Soc (in the press)
Iben, I., Jr & Tutukov, A. V. On the evolution of close binaries with components of initial mass between 3 solar masses and 12 solar masses. Astrophys. J Suppl. Ser. 58, 661–710 (1985)
Özel, F. Soft equations of state for neutron-star matter ruled out by EXO 0748 - 676. Nature 441, 1115–1117 (2006)
Ransom, S. M. et al. Twenty-one millisecond pulsars in Terzan 5 using the Green Bank Telescope. Science 307, 892–896 (2005)
Freire, P. C. C. et al. Eight new millisecond pulsars in NGC 6440 and NGC 6441. Astrophys. J. 675, 670–682 (2008)
Freire, P. C. C., Wolszczan, A., van den Berg, M. & Hessels, J. W. T. A massive neutron star in the globular cluster M5. Astrophys. J. 679, 1433–1442 (2008)
Alford, M. et al. Astrophysics: quark matter in compact stars? Nature 445, E7–E8 (2007)
Lattimer, J. M. & Prakash, M. Ultimate energy density of observable cold baryonic matter. Phys. Rev. Lett. 94, 111101 (2005)
Podsiadlowski, P., Rappaport, S. & Pfahl, E. D. Evolutionary sequences for low- and intermediate-mass X-ray binaries. Astrophys. J. 565, 1107–1133 (2002)
Podsiadlowski, P. & Rappaport, S. Cygnus X-2: the descendant of an intermediate-mass X-Ray binary. Astrophys. J. 529, 946–951 (2000)
Hotan, A. W., van Straten, W. & Manchester, R. N. PSRCHIVE and PSRFITS: an open approach to radio pulsar data storage and analysis. Publ. Astron. Soc. Aust. 21, 302–309 (2004)
Cordes, J. M. & Lazio, T. J. W. NE2001.I. A new model for the Galactic distribution of free electrons and its fluctuations. Preprint at 〈http://arxiv.org/abs/astro-ph/0207156〉 (2002)
Lattimer, J. M. & Prakash, M. Neutron star structure and the equation of state. Astrophys. J. 550, 426–442 (2001)
Champion, D. J. et al. An eccentric binary millisecond pulsar in the Galactic plane. Science 320, 1309–1312 (2008)
Berti, E., White, F., Maniopoulou, A. & Bruni, M. Rotating neutron stars: an invariant comparison of approximate and numerical space-time models. Mon. Not. R. Astron. Soc. 358, 923–938 (2005)
P.B.D. is a Jansky Fellow of the National Radio Astronomy Observatory. J.W.T.H. is a Veni Fellow of The Netherlands Organisation for Scientific Research. We thank J. Lattimer for providing the EOS data plotted in Fig. 3, and P. Freire, F. Özel and D. Psaltis for discussions. The National Radio Astronomy Observatory is a facility of the US National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.
National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, Virginia 22093, USA,
P. B. Demorest & S. M. Ransom
Astronomy Department, University of Virginia, Charlottesville, 22094-4325, Virginia, USA
T. Pennucci
Eureka Scientific, Inc., Oakland, 94602, California, USA
M. S. E. Roberts
Netherlands Institute for Radio Astronomy (ASTRON), Postbus 2, 7990 AA Dwingeloo, The Netherlands,
J. W. T. Hessels
Astronomical Institute "Anton Pannekoek", University of Amsterdam, 1098 SJ Amsterdam, The Netherlands
All authors contributed to collecting data, discussed the results and edited the manuscript. In addition, P.B.D. developed the MCMC code, reduced and analysed data, and wrote the manuscript. T.P. wrote the observing proposal and created Fig. 3. J.W.T.H. originally discovered the pulsar. M.S.E.R. initiated the survey that found the pulsar. S.M.R. initiated the high-precision timing proposal.
Correspondence to P. B. Demorest.
Demorest, P., Pennucci, T., Ransom, S. et al. A two-solar-mass neutron star measured using Shapiro delay. Nature 467, 1081–1083 (2010). https://doi.org/10.1038/nature09466
Issue Date: 28 October 2010
Editorial Summary
Record neutron star mass rules out exotics
New observations of the binary millisecond pulsar J1614-2230 have identified one of its components as the most massive neutron star for which a precise mass is known — nearly 20% greater than previous highest values. Neutron stars are composed of the densest form of matter known, and millisecond pulsars are rotating neutron stars. The observed range of neutron star masses has hitherto been too narrow to rule out many predictions of 'exotic' non-nucleonic components, but this pulsar weighs in at around two solar masses, ruling out almost all currently proposed equations of state involving exotic hyperon or boson condensates.
Journal of Engineering and Applied Science
Granger causality analysis of deviation in total electron content during geomagnetic storms in the equatorial region
Sumitra Iyer ORCID: orcid.org/0000-0003-4846-080X1 &
Alka Mahajan2
Journal of Engineering and Applied Science volume 68, Article number: 4 (2021)
The total electron content (TEC) in the ionosphere widely influences Global Navigation Satellite Systems (GNSS), especially for critical applications, by inducing localized positional errors in the GNSS measurements. These errors can be mitigated by measuring TEC from stations located around the world at various temporal and spatial scales and using these measurements for advanced forecasting of TEC. The TEC can be used as a tool in understanding space weather phenomena such as geomagnetic storms, which cause disruptions in the ionosphere. This paper examines the causal relationship between perturbations in TEC and geomagnetic storms. The causality between two geomagnetic indices, the auroral electrojet (AE) index and the disturbance storm time (Dst) index, and TEC is investigated using Granger causality at two low-latitude stations, Bangalore and Hyderabad. The outcomes of this study strengthen the regional understanding and modeling of ionospheric parameters, which can contribute towards the global efforts for modeling and reducing the ionospheric effects on trans-ionospheric communication and navigation. The causal inferences combined with data-driven models can be useful in identifying the correct and informative physical quantities to improve forecasting models.
Changes in the physical conditions of the solar wind and the interplanetary medium due to solar activity result in several space weather phenomena, such as geomagnetic storms and substorms, which cause large magnetic field perturbations and disturbances in the near-Earth environment [1]. Technologies such as the Global Positioning System (GPS), which play an important role in navigation, are severely affected by these disturbances. Therefore, it is important to mitigate the damage and errors caused by these phenomena. It is also necessary to gain a deeper knowledge of the physical processes responsible for generating such disturbances in the near-Earth environment and to model and forecast their complex behavior. This is attempted by investigating dependencies between the parameters defining geomagnetic storms, namely the disturbance storm time index (Dst) and the auroral electrojet index (AE), and the TEC, which describes the dynamics of the ionosphere and impacts positional accuracy. The causality between these variables is evaluated in this study. The details of the ionospheric TEC and the geomagnetic indices are explained in the next section.
Ionosphere
The ionosphere is a region of the upper atmosphere that extends from around 50 to 1000 km in height and is characterized by partially ionized plasma [2]. The state of the ionosphere is described by the TEC in the layer. The TEC is the total number of electrons present along a path between a radio transmitter and receiver. It is measured in electrons per square meter. By convention, 1 TEC unit or 1 TECU = 10¹⁶ electrons/m².
The TEC is estimated from Global Navigation Satellite System (GNSS) observables and is an important tool in studying space weather impacts. The TEC in the ionosphere depends on solar activity such as solar flares, coronal mass ejections, high-speed solar wind, the solar cycle, and solar maxima and minima. As these solar activities vary with time and have different impacts at different locations, the TEC also varies with local time, latitude, longitude, season, geomagnetic conditions, and solar cycle, and exhibits temporal and spatial variation. The TEC is found to be maximum near the equator and tapers towards the poles. Seasonal effects are also observed in TEC due to the movement of the Earth around the Sun. Furthermore, the electron density is linked to the 11-year solar cycle; around solar maximum the ionosphere is more likely to be disturbed, and the electron density is much higher and less predictable than on a quiet day [3]. The daily distribution of TEC also frequently gets affected by geomagnetic storms during the high solar activity period.
The ionosphere contributes one of the largest errors in GPS positioning. Apart from positional error, the ionosphere also causes Faraday rotation and bending of the radio waves of the GPS signal. Irregularities in the ionosphere also lead to rapid fluctuations in signal amplitude/phase, or scintillations. The dispersive nature of the ionosphere adds to the complexity and makes the positional error dependent on the frequency of the incoming signal [4]. This dependence is described in Eq. 1 and is obtained from the Appleton-Hartree equation for the ionosphere's refractive index [5].
$$ n_{\mathrm{iono}} = 1-\frac{1}{2}\left(\frac{f_{\mathrm{plasma}}}{f_{\mathrm{signal}}}\right)^{2} = 1-40.3\,\frac{N}{f_{\mathrm{signal}}^{2}} \qquad (1) $$
where n_iono is the refractive index, f_plasma the plasma frequency, N the electron density, and f_signal the frequency of the incoming signal. Equation 1 is modified and used to evaluate the group delay of a ray path crossing the ionosphere, which is given by Eq. 2
$$ y_{1} = 40.3\,\frac{\mathrm{TEC}}{f_{\mathrm{signal}}^{2}} \qquad (2) $$
For a single-frequency receiver on the L1 frequency, the positional error due to group delay is minimized using a correction model whose coefficients are broadcast to the receivers and which emulates the spatial and temporal variations. Several models have been proposed, and the Klobuchar model is currently used in GPS receivers. In a dual-frequency receiver, however, the TEC is computed from measurements at two different frequencies and the error is eliminated [6]. The TEC estimation for a dual-frequency receiver at the L1 and L2 frequencies is shown in Eq. 3, where L1 is 1575.42 MHz, L2 is 1227.6 MHz, and P1 and P2 are the group path lengths.
$$ \mathrm{TEC} = \frac{1}{40.3}\left[\frac{L1^{2}\,L2^{2}}{L1^{2}-L2^{2}}\right]\left(P_{1}-P_{2}\right) \qquad (3) $$
Thus, TEC is an important parameter to understand the dynamics of the ionosphere. The TEC has a linear relationship with the positional error and 1 TECU of electron content produces a range error of 0.16 m at L1 frequency [7].
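As a quick numerical check of Eqs. 2 and 3 (a minimal sketch, not taken from the paper), the snippet below reproduces the roughly 0.16 m range error per TECU at L1 and converts a differential pseudorange into TEC; the pseudorange values are placeholders.

```python
# Worked numerical check of Eqs. 2-3 (illustrative sketch).
f_L1 = 1575.42e6          # Hz
f_L2 = 1227.60e6          # Hz
TECU = 1.0e16             # electrons / m^2

# Eq. 2: group delay (in metres) for 1 TECU at L1
delay_L1 = 40.3 * TECU / f_L1**2
print(f"Range error for 1 TECU at L1: {delay_L1:.3f} m")   # about 0.16 m

# Eq. 3: TEC from the pseudorange difference (sign convention as written in
# the text; some references write the difference as P2 - P1).
def tec_from_pseudoranges(P1, P2):
    factor = (f_L1**2 * f_L2**2) / (f_L1**2 - f_L2**2)
    return factor * (P1 - P2) / 40.3          # electrons / m^2

# Placeholder pseudoranges differing by 0.5 m:
print(tec_from_pseudoranges(P1=2.05e7, P2=2.05e7 - 0.5) / 1e16, "TECU")
```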
The ionosphere is one of the largest obstacles preventing the Global Positioning System (GPS) from becoming the primary navigational aid for critical applications, and it can cause positioning errors that may exceed 50 m. As seen above, these errors can be eliminated in dual-frequency receivers. For single-frequency receivers, however, these errors can only be reduced by applying fixed corrections based on GNSS observables. In equatorial regions, the problem is compounded by the equatorial anomaly and the complex spatio-temporal variations in TEC; furthermore, space weather phenomena like geomagnetic storms cause unpredictable irregularities in the ionosphere, leading to deviations in the TEC pattern. Although several geomagnetic indices are available to describe the strength of geomagnetic storms, they are of little use in describing the deviation in the TEC pattern directly. Hence, there is a need to devise a method that can explain the impact of geomagnetic storms on TEC. This paper investigates a causality method to study the impact of a geomagnetic storm on TEC.
Geomagnetic storms and geomagnetic indices
A geomagnetic storm is one of the major space weather activities that affect the TEC and cause deviations in it. A geomagnetic storm is a disturbance in the magnetosphere that may cause a sudden change in electron density. The Earth's magnetosphere, thermosphere, and ionosphere are driven by the energy emitted from the Sun. The solar wind transfers its energy to the Earth's magnetosphere through magnetic reconnection, which leads to geomagnetic storms. Slow and fast solar wind streams from the coronal region are also associated with powerful solar events like coronal mass ejections (CMEs) from the Sun [8] and corotating interaction regions (CIRs). The CMEs are the result of plasma outbursts from the Sun's active regions [9]. The CMEs interact with the solar wind and the interplanetary magnetic field of the Earth. A southward-directed solar magnetic field interacts strongly with the oppositely oriented magnetic field of the Earth and results in geomagnetic storms. Severe geomagnetic storms lead to anomalous changes in the ionospheric TEC, resulting in frequent amplitude and phase fluctuations. They may also cause cycle slips, amplitude and phase scintillations, or even loss of lock. Such events affect not only the determination of the position of the receiver but also the velocity and time estimates of GPS receivers.
A geomagnetic storm may lead to an increase or decrease in the electron density as compared to quiet days when solar and geomagnetic activities are low. Thus, a geomagnetic disturbance may cause a positive ionospheric storm or a negative ionospheric storm. The impact of the geomagnetic storm on TEC depends on the phase and origin of the storm. A positive ionospheric storm is seen during the main phase, while in the recovery phase, negative storms are pronounced at all latitudes [10]. Positive storm effects with enhanced TEC are observed at geomagnetically low and mid-latitudes in the daytime, and negative storm effects are observed near the geomagnetic equator [11].
The TEC in the equatorial region is also impacted by the equatorial anomaly, which causes TEC accumulation at certain latitudes due to the formation of crests. This is primarily due to the equatorial electrojet (EEJ), which is caused by the vertical E×B drift leading to the fountain effect. The entire phenomenon depends on the EEJ and is found to be more pronounced during high solar activity periods or equinox months. Hence, geomagnetic storm effects are far more pronounced in the equatorial regions.
The strength and impact of geomagnetic disturbances are estimated using geomagnetic indices like Kp, Dst, and AE, to name a few [12]. In this study, the two indices AE and Dst are selected. Both indices are available at a 1-h interval, while the Kp index is a 3-hourly index. Furthermore, there is a good correlation between Dst and AE; hence, AE and Dst are selected for the study. The magnitude of these indices is determined using the horizontal H component of the geomagnetic field. These indices have characteristic patterns during quiet and disturbed conditions.
The AE index characterizes the intensity of the auroral zone currents, or auroral electrojet. It is the difference between the largest negative and positive H component variations, the AL and AU indices. The AE index uses magnetograms of the H component collected from twelve observatories distributed in longitude in the northern hemisphere at auroral or subauroral latitudes [13]. In quiet times, the value of this index is tens of nT, and during storms and substorms it increases to several hundred or more than a thousand nano-Tesla (nT).
The Dst index is the globally averaged value of the horizontal component of the Earth's magnetic field at the magnetic equator from a few magnetometer stations [14]. The Dst is computed once per hour and reported in near-real-time. During quiet times, the Dst value is between + 20 and − 20 nT. Based on a geomagnetic storm's strength, it can be classified as a moderate storm for Dst between − 50 and − 100 nT, intense for Dst between − 100 and − 200 nT, and severe or super-storm for Dst less than − 200 nT [15].
In the proposed work, an attempt is made to see if TEC can be used to study and understand the impact of space weather phenomena. This study of the dependency of TEC on the AE and Dst indices can be helpful in understanding the impact of space weather phenomena on satellite-based systems. The advantage of using TEC is its high temporal resolution compared with other indices used for measuring geomagnetic storms, such as Dst and AE, which are available at 1-h intervals, or Kp, which is available at 3-h intervals. Furthermore, the equatorial ionosphere is characterized by large ionospheric gradients (even within a 5° × 5° latitude-longitude cell). The deviations and perturbations in the TEC at different latitudes due to geomagnetic storms are also different. Thus, investigating the causality between geomagnetic storms and TEC at the regional level can be useful in improving the existing methods used for correcting positional errors. This can be achieved with the high-spatial-resolution regional TEC data available from GNSS receivers, which have wide global coverage. As causal inferences can result in the selection of physical quantities that are more informative, the proposed study can be further combined with data-driven models for improved forecasting of positional errors in the propagating signal.
Granger causality test
Causality refers to the dependency between variables and is different from correlation. Although there is a well-known correlation between variations in TEC and the occurrence of geomagnetic storms and substorms, there is no clear, direct cause-and-effect relation between them that can be modeled to forecast TEC.
Several attempts to forecast TEC using geomagnetic disturbances (in terms of both geomagnetic indices AE and Dst measurements) during magnetic storms and substorms have been developed using Artificial Neural Networks and linear or nonlinear regression models [16, 17]. However, most of these models are based on using a large historical dataset of these physical quantities. Many feature selection methods have also been combined with these models to identify the most relevant physical quantity. However, there is little work done in the area to identify the most informative physical quantities. Most of the studies are based on the correlation between the physical quantities which may not be very indicative due to the nonlinear and abrupt nature of these variations [18].
In a stochastic system, Granger causality between the variables can be established if it is possible for variable Xt to cause Yt + 1 or for Yt to cause Xt + 1 where t is the time variable [19]. This paper investigates the causality between the variables—deviation in TEC, Dst, and AE. The Granger causality test or G test method proposed by the Nobel Economics Prize recipient Clive W. J. Granger is used to analyze the causality between the variables.
For a time series, Granger causality is said to exist between two variables, X and Y, if variable X helps explain the future values of Y, provided both time series are stationary. Therefore, before conducting the Granger causality test, it is necessary to conduct a unit root test to verify the stationarity of the time series. The Augmented Dickey-Fuller test (ADF test) is generally used to conduct this unit root test. The Granger causality test is sensitive to the lag period, and under different lag periods completely different test results can be obtained if the precondition of stationarity is not satisfied. Thus, a series of pretests must be performed on the data before the G test.
In the present study, the deviation in TEC, denoted by DTEC, is taken as Y, the dependent variable, and Dst and AE are the explanatory variables X1 and X2. The causality test is performed to check whether AE/Dst can cause deviation in TEC. Hence, if X1/X2 does not help predict the variable Y, which is DTEC, then X1/X2 is not the cause of the deviation in TEC. On the contrary, if Dst or AE is the cause of DTEC, then AE and Dst should be able to predict the variable DTEC. A statistical hypothesis is tested to establish the causality. This can be explained with a mathematical formulation of the test based on vector autoregression (VAR) modeling of stochastic processes using the past values of the variables Y, X1, and X2 [20]. The regression equations for the three variables can be expressed as shown below:
$$ \begin{aligned} Y(t) &= \sum_{j=1}^{p} A_{11,j}\,Y(t-j) + \sum_{j=1}^{p} A_{12,j}\,X_{1}(t-j) + \sum_{j=1}^{p} A_{13,j}\,X_{2}(t-j) + E_{1}(t)\\ X_{1}(t) &= \sum_{j=1}^{p} A_{21,j}\,Y(t-j) + \sum_{j=1}^{p} A_{22,j}\,X_{1}(t-j) + \sum_{j=1}^{p} A_{23,j}\,X_{2}(t-j) + E_{2}(t)\\ X_{2}(t) &= \sum_{j=1}^{p} A_{31,j}\,Y(t-j) + \sum_{j=1}^{p} A_{32,j}\,X_{1}(t-j) + \sum_{j=1}^{p} A_{33,j}\,X_{2}(t-j) + E_{3}(t) \end{aligned} $$
where p is the maximum number of lagged observations; the coefficients of the model are the contributions of each lagged observation to the predicted values of X1(t), X2(t), and Y(t); and E1, E2, and E3 are the residuals (prediction errors) for each time series. If the variance of E1 (the prediction error for Y) is reduced by the inclusion of the X1 (or X2) terms in the equation for Y, then X1 (or X2) is said to Granger-(G)-cause Y, and analogously for the other equations. In other words, X1 G-causes Y if the coefficients A_{12,j} on the lagged X1 terms in the equation for Y are jointly significantly different from zero. This is tested by performing an F-test or chi-squared test of the null hypothesis that these coefficients are zero, given assumptions of covariance stationarity on X and Y.
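A minimal sketch of how this trivariate test can be run in practice with statsmodels is shown below; the series used here are random placeholders standing in for aligned, stationary DTEC, Dst, and AE data, and the lag order is chosen by AIC for illustration.

```python
# Minimal sketch of the VAR-based Granger causality test described above.
# The DataFrame here is a placeholder: in practice DTEC, Dst and AE must first
# be aligned to a common time grid and made stationary (see the ADF step below).
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
data = pd.DataFrame({                      # placeholder series for illustration
    "DTEC": rng.standard_normal(500),
    "Dst":  rng.standard_normal(500),
    "AE":   rng.standard_normal(500),
})

model = VAR(data)
res = model.fit(maxlags=12, ic="aic")      # lag order p chosen by AIC

# Does Dst (or AE) Granger-cause DTEC?  Null: the corresponding lag coefficients are 0.
print(res.test_causality("DTEC", ["Dst"], kind="f").summary())
print(res.test_causality("DTEC", ["AE"], kind="f").summary())
```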
Cointegration
The cointegration test is done to establish the presence of a statistically significant connection between two or more time series. It is seen that if two variables are cointegrated, then there exists causality between variables in at least one direction [21]. Thus, a cointegration test can be viewed as an indirect test of long-run dependence. It occurs when two or more non-stationary time series have a long-run equilibrium and move together so that their linear combination of variables results in a stationary time series. There is a linear combination of the variables with an order of integration less than that of the individual series. In this context, cointegration can help understand if there is a long-run equilibrium between deviation in TEC during the disturbed condition and Dst/AE. The cointegration test establishes a stationary linear combination of time series that are not themselves stationary.
Thus, the cointegration test indicates a long-run equilibrium relationship between variables, while the Granger causality test indicates a unidirectional causality. The results of the cointegration test determine the type of regression model to be implemented for the causality test. Regression results with non-stationary variables can be spurious unless the variables are cointegrated. Furthermore, regression with first-differenced variables captures only the short-run relationship and hence cannot capture the long-run information. In such cases, the causality is investigated through a vector error correction model (VECM). It is an extension of the VAR model that includes cointegrated variables and balances the short-term dynamics of a process with the long-term dependencies. The VECM expresses the long-run dynamics of the process by including error correction terms that measure the deviation from the stationary mean at time (t−1). Thus, linear Granger causality based on a VAR can be applied only to time series that are stationary. If the data are not stationary and not cointegrated, then a VAR can be fitted to the differenced time series. For cointegrated non-stationary time series with a long-term equilibrium relationship, a VECM has to be fitted to evaluate the short-run properties of the cointegrated time series.
In this paper, three variables namely deviation in TEC, Dst, and AE are investigated under different storm conditions and for two different locations in the equatorial region. The primary aim is to identify the extent of causality and identify causal variables that can cause a state transition. As per the Granger causality principles [22], forecasting is related to identifying causal variables responsible for state transitions. Therefore, Granger causality inferences between variables can be combined with forecasting and can improve forecasting.
The data used for the study are from the year 2015, which falls in the descending phase of the 24th solar cycle. The year 2015 was characterized by 56 geomagnetic storms, of which the storm on March 17, 2015, was the most severe one (the St. Patrick's Day storm) of the solar cycle [23]. This storm had major adverse effects on communication and navigation systems on and above the Earth. The present study is conducted with thirty geomagnetic storms. In this paper, twelve geomagnetic storms are presented, which are used to verify the causal effect of geomagnetic indices on ionospheric TEC measured at two different GPS stations, Bangalore and Hyderabad. The geographic and geomagnetic coordinates (latitudes and longitudes) of the GPS stations are given in Table 1.
Table 1 Locational coordinates of GPS stations
The storms considered for this study are of different intensities and of different types and origins (recurrent and sporadic). The details of the storm durations, storm type, geomagnetic indices (Dst and AE), and TEC characteristics are listed in Table 2. The maximum TEC values for all storm days are higher than the quiet day maximum TEC value. Furthermore, the maximum value of TEC is observed to be higher at Hyderabad due to the crest formation around noon time.
Table 2 Details of storm days
For this study, three parameters namely deviation in TEC denoted by "DTEC" and the geomagnetic storm indices "Dst" and AE" are used. Both Dst and AE are available at 1-h time interval and DTEC is available at 2.5-minute interval. The DTEC is calculated from the vertical total electron content (VTEC) for which the calculation is shown in the next section. The descending phase of the twenty-fourth cycle is chosen and the VTEC data of geomagnetic storm days occurring in this period is considered for this study. The storm days are selected based on the Dst index.
Calculation of VTEC
The VTEC is the vertical total electron content and is computed from the receiver independent exchange (RINEX) observation files of the International GNSS Service (IGS) receiver stations at Bangalore (IISC) and Hyderabad (HYDE). The data is processed using GPS-TEC online application software, developed by Ionolab [24]. The desired TEC is the combination of calculated TEC and receiver and satellite biases in TEC units. The TEC is computed using the standard procedure to compute the absolute total electron content on the slant ray path (STEC) from the satellite to the receiver and is calculated from the difference of pseudo ranges P1 and P2 at L1 and L2 frequencies respectively.
The computed slant TEC is projected to the local zenith direction to obtain the vertical TEC through a mapping function, M (E, h), assuming a thin shell model of the ionosphere. The receiver and satellite biases are also added to compute the VTEC values. The VTEC value is computed as shown in Eq. 5:
$$ \mathrm{STEC} = M(E,h)\times \mathrm{VTEC}, \qquad M(E,h) = \left[1-\left(\frac{R\cos E}{R+h}\right)^{2}\right]^{-1/2} \qquad (5) $$
In the above formula, R is the radius of the Earth, E is the elevation angle, and h is the height of the ionospheric pierce point.
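A minimal sketch of the thin-shell mapping in Eq. 5 is given below; the shell height of 350 km is an assumed typical value, since the text only states that h is the height of the ionospheric pierce point.

```python
# Minimal sketch of the thin-shell mapping in Eq. 5: convert slant TEC to
# vertical TEC. The 350 km shell height is an assumed typical value.
import numpy as np

R_EARTH = 6371.0     # km
H_SHELL = 350.0      # km (assumed)

def mapping_function(elev_deg, h=H_SHELL, R=R_EARTH):
    E = np.radians(elev_deg)
    return 1.0 / np.sqrt(1.0 - (R * np.cos(E) / (R + h))**2)

def vtec_from_stec(stec, elev_deg):
    return stec / mapping_function(elev_deg)

print(mapping_function(30.0))          # about 1.75 at 30 degrees elevation
print(vtec_from_stec(54.0, 30.0))      # TECU
```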
Calculation of deviation in TEC (DTEC)
The DTEC is the deviation in TEC on a geomagnetic storm day w.r.t. the TEC on a quiet day. For this calculation, the quietest day of the month is selected to be used as the reference TEC pattern for that month. Monthly selection is done to take care of seasonal variations in TEC. The quiet day is selected based on the Dst and Kp indices. The quiet days selected for every month have low solar and geomagnetic activities such that Dst variation does not exceed 5 to 10 nT over the entire day and the absolute value is also within −15 to 15nT. The DTEC is computed as a deviation in TEC, as shown in Eq. 6
$$ \mathrm{DTEC} = \mathrm{TEC}_{\mathrm{storm}} - \mathrm{TEC}_{\mathrm{quiet}} \qquad (6) $$
The DTEC is the measure of the deviation in TEC caused by disturbed geomagnetic conditions over the entire day. Figure 1 shows the comparison of the VTEC variation pattern for a quiet day on March 10, 2015, and a moderate geomagnetic storm day on March 2, 2015, at Hyderabad. Similarly, Fig. 2 shows the variation for the Bangalore station for the quiet day on March 10, 2015, and the geomagnetic storm day on March 2, 2015. Figures 3 and 4 show the plots of DTEC for March 2, 2015, for Hyderabad and Bangalore, respectively. In the equatorial region, the TEC pattern shows latitudinal variation, which can be seen in Figs. 1 and 2. Hence, DTEC is also different for the two latitudes. The geomagnetic storm that occurred on March 17, 2015, is considered the most intense storm of the solar cycle. Accordingly, the TEC pattern shows a steep rise in TEC around 05:00 UTC. This can be seen in Figs. 5 and 6, which represent the comparison of the variation pattern between the quiet day (10-3-2015) and the severe geomagnetic storm day of March 17, 2015, for the Hyderabad and Bangalore stations, respectively. Figures 7 and 8 show the DTEC for March 17, 2015, for Hyderabad and Bangalore, respectively, which differs from the DTEC on March 2, 2015. Major variation in the TEC pattern is generally seen on severe storm days, while a minor variation pattern is observed on moderate storm days.
VTEC quiet (10-3-2015) and moderate storm day (2-3-2015) Hyderabad
VTEC quiet (10-3-2015) and moderate storm day (2-3-2015) Bangalore
Deviation on moderate storm day (2-3-2015) w.r.t to quiet day (10-3-2015) Hyderabad
Deviation on moderate storm day (2-3-2015) w.r.t to quiet day (10-3-2015) Bangalore
VTEC quiet (10-3-2015) and severe storm day (17-3-2015) Hyderabad
VTEC quiet (10-3-2015) and severe storm day (17-3-2015) Bangalore
Deviation on severe storm day (17-3-2015) w.r.t to quiet day (10-3-2015) Hyderabad
Deviation on severe storm day (17-3-2015) w.r.t to quiet day (10-3-2015) Bangalore
The figures clearly indicate that the deviation pattern also varies with latitude. The DTEC is more abrupt on days containing the main phase of intense/severe storms (17-3-2015). This is further verified from Figs. 9 and 10, which show the deviation in TEC for another severe storm day that occurred on June 22, 2015. Figures 11 and 12 show the DTEC variation on a moderate storm day, Dec 31, 2015, where rapid variations of smaller magnitude are seen.
Deviation on moderate storm day (31-12-2015) w.r.t to quiet day (10-3-2015) Hyderabad
Deviation on moderate storm day (31-12-2015) w.r.t to quiet day (10-3-2015) Bangalore
Data for Dst and AE indices
The data for the Dst index and the AE index are downloaded from the World Data Center for Geomagnetism, Kyoto website (http://wdc.kugi.kyoto-u.ac.jp). Both indices describe the intensity of the storm. The variation pattern of Dst is also useful for identifying the phase of the storm. The type of the storm, sudden or recurrent, can also be deduced from the Dst variation pattern.
A geomagnetic storm is defined by changes in the Dst index. Dst is computed once per hour and reported in near-real-time. During quiet times, Dst is between + 20 and − 20 nano-Tesla (nT). Figure 13 shows the Dst variation on the quiet days 10-3-2015 and 19-1-2015. These days have also been used as reference days in this study. The variation is between 1 and 10 nT on 10-3-2015 and between −10 and 10 nT on 19-1-2015.
Diurnal pattern of Dst variation on a quiet day
Most of the geomagnetic storms have three phases: initial, main, and recovery. The initial phase is characterized by an increase in Dst by 20 to 50 nT in a short time. The initial phase is also referred to as a storm sudden commencement (SSC). This is followed by the main phase characterized by Dst decreasing to less than −50 nT. The minimum value during a storm can range from −50 to −600nT in extreme cases. The duration of the main phase is typically 2–8 h. The recovery phase is when Dst changes back from its minimum value to its quiet time value. The recovery phase may range from 8 h to 7 days. However, not all geomagnetic storms have an initial phase and not all sudden increases are followed by a geomagnetic storm. Figure 14 shows the Dst variation of severe geomagnetic storm day on 17-3-2015 and a moderate storm day on 31-12-2015 considered during this study. The Dst variation on 17-3-2015 indicates the rising Dst in the initial phase and Dst reaching its minimum (−220 nT) in the main phase of the storm. The moderate storms are many times recurrent and periodic in nature with a gradual decrease in Dst. The Dst pattern is more predictable. Sometimes, substorms are triggered during the recovery phase of intense storms.
Diurnal pattern of Dst variation on a moderate storm day and severe storm day on 17-3-2015
The AE index represents the auroral zone magnetic activity produced by enhanced ionospheric currents flowing below and within the auroral oval. The equatorward expansion of the auroral electrojet influences the TEC in the low-latitude ionosphere. The AE index has been useful in studying magnetic substorms. An enhancement in AE can be seen on days with high geomagnetic activity. Figure 15 shows the AE index for quiet and storm days. The quiet day 10-3-2015 shows two peaks of about 200 nT, and for most of the day the value is not more than 50 nT. The AE index ranges between 50 and 200 nT on 19-1-2015. Two storm days, one severe (17-3-2015) and one moderate (31-12-2015), are shown in Fig. 15. On both days, the AE index is enhanced to around 1500 nT, and for most of the day it is more than 500 nT.
Diurnal pattern of AE variation on quiet day and enhancement on moderate storm day (31-12-2015) and severe storm day (17-3-2015)
Data scaling
All the geomagnetic indices and DTEC are normalized using the Min-Max scaler method. The Min-Max scaler is chosen as it preserves the original distribution's shape and does not change the meaning of the information embedded in the original data.
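A minimal sketch of this normalization step is shown below; the three series are random placeholders standing in for the aligned DTEC, Dst, and AE data.

```python
# Minimal sketch of the Min-Max normalization step; the columns are random
# placeholders for the aligned DTEC, Dst and AE series.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(1)
X = np.column_stack([rng.normal(5, 2, 240),       # DTEC (placeholder)
                     rng.normal(-40, 30, 240),    # Dst  (placeholder)
                     rng.normal(300, 150, 240)])  # AE   (placeholder)
X_scaled = MinMaxScaler().fit_transform(X)        # each column rescaled to [0, 1]
print(X_scaled.min(axis=0), X_scaled.max(axis=0))
```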
In this section, the results for twelve different geomagnetic storms of different origins and types are presented. The year 2015 had a total of 56 storms, of which 52 were of low or moderate intensity, 2 were intense, and 2 were severe. In this paper, the test results for two severe, two intense, and eight moderate storms are presented. The tests have been conducted for thirty storm days at both the Bangalore and Hyderabad stations, as TEC data were not available for some of the disturbed days. Three tests, namely the Augmented Dickey-Fuller test, the Granger causality test, and the cointegration test, are carried out. All tests are performed on normalized data. First, the stationarity of the raw data is tested using the Augmented Dickey-Fuller test (ADF test). The Granger causality test is then performed on the stationary data; it tests both the causality between the variables and its direction. The long-term equilibrium relationships between variables are tested using the cointegration test. The detailed observations are presented below.
Stationary test of variables
All three time series are first tested for trend and stationarity. A differencing technique is used to make the data stationary, and stationarity is retested after each (single or, if required, double) differencing using the Augmented Dickey-Fuller (ADF) test. The null hypothesis "the data has a unit root and is non-stationary" is tested for different lag values at a 0.05 significance level, and the p value is used to decide whether there is evidence to reject it. This test is repeated for all the data variables.
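A minimal sketch of this procedure is given below, assuming Python's statsmodels adfuller routine; the helper name make_stationary and the cap of two differencing steps are illustrative choices, not the authors' implementation.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def make_stationary(series, alpha=0.05, max_diff=2):
    """Difference the series until the ADF test rejects the unit-root null."""
    x = np.asarray(series, dtype=float)
    for d in range(max_diff + 1):
        stat, pvalue, *_ = adfuller(x, autolag='AIC')
        if pvalue < alpha:          # reject "unit root / non-stationary"
            return x, d, pvalue
        x = np.diff(x)              # difference once more and retest
    return x, max_diff, pvalue
```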
Cointegration test
The Johansen cointegration test is performed on the data to establish the presence of a statistically significant relationship between the time series. The Johansen method is based on the relationship between the rank of the matrix Π and the size of its eigenvalues; the rank of Π determines the long-term dynamics. If Π has full rank, the process Y_t is stationary in mean. If the rank of Π is zero, the error correction term disappears and the system is stationary in differences (a VAR model in differences can be used). If the rank of Π is r with 0 < r < K, there are r independent cointegrating relations among the variables in Y_t. For a given r, the maximum likelihood estimator of β defines the combination of Y_{t−1} that yields the r largest canonical correlations of ΔY_t with Y_{t−1}.
The null hypothesis "There are no cointegrating equations" is tested for a 95% confidence level. If the trace statistics is greater than the critical value, then the null hypothesis is rejected, establishing linear relation between the variables. Hence, for cointegration to exist, the null hypothesis must be rejected. The cointegration test is carried out between DTEC, Dst, and AE for all storm days. The cointegration is true for most of the cases, and hence, there exists a long-run dependency between the variables and DTEC, AE, and Dst.
Results of the cointegration test for Bangalore
To test the null hypothesis that the variables are not cointegrated, trace and maximum eigenvalue statistics are computed. For most of the cases, the null hypothesis is rejected by both the eigenvalue and trace tests. All tests are conducted at the 5% significance level; since the maximum eigenvalue and trace test statistics are higher than the 5% critical values, the alternative of one or more cointegrating vectors is accepted.
Johansen cointegration with the hypothesis of the reduced rank of a regression coefficient matrix, estimated consistently from vector regression equations, is also tested. Here, the maximum eigenvalue statistic and the trace statistic test the number of cointegrating relations between variables. The trace test is a joint test where the null hypothesis is "number of cointegrating vectors is less than or equal to r," against a general alternative that "there are more than r vectors," whereas the maximum eigenvalue test conducts separate tests on the individual eigenvalues, where the null hypothesis is that the "number of cointegrating vectors is r," against an alternative of (r + 1).
Table 3 shows the cointegration results for twelve geomagnetic storm days in the year 2015. The test is done pairwise between DTEC and Dst and between DTEC and AE. For the DTEC–Dst pair at the Bangalore location, the null hypothesis is not rejected for the storm on March 19. For most of the other storm cases, the test statistic is higher than the critical value for the DTEC–Dst pair at Bangalore, so the null hypothesis can be rejected. This is also in line with the fact that geomagnetic storms can be well explained by the Dst index in equatorial and low-latitude regions. Both tests (eigenvalue and trace) confirm that most of the storm days have cointegrating vectors, indicating that the Dst index and the deviation in TEC have a long-run linkage at the Bangalore station.
Table 3 Johansen cointegration rank test results for DTEC and Dst for the Bangalore location
Table 4 shows the cointegration test results for the DTEC–AE pair. For most of the storm cases listed, the null hypothesis is rejected and the test statistic is higher than the critical value, except for the storms that occurred on March 19, 2015, April 16, 2015, and June 22, 2015. The storm on June 22 has its main phase around 20 UT, and the AE index shows enhancement only after 15 UT, so for most of the day the AE is low. Similarly, the storm on April 16 is a moderate storm and the AE is almost constant at 800 nT for the entire day. The AE index is a suitable measure of substorms, and for the substorms on 21-3-2015 and 22-3-2015 the DTEC–AE pair rejects the null hypothesis.
Table 4 Johansen cointegration rank test results for DTEC and AE pair for the Bangalore location
Results of the cointegration test for Hyderabad
The results of Johansen's maximum eigenvalue and trace tests for Hyderabad are given in Tables 5 and 6. All tests are conducted at the 5% significance level; since the maximum eigenvalue and trace test statistics are higher than the 5% critical values, the alternative of one or more cointegrating vectors is accepted. Table 5 shows the results for twelve geomagnetic storm days in 2015. The test is done pairwise between DTEC and Dst and between DTEC and AE. Both tests confirm that most of the storm days under consideration have cointegrating vectors, indicating that the geomagnetic indices and the deviation in TEC have a long-run linkage. Most of the storm cases have a test statistic higher than the critical value for the DTEC–Dst pair at Hyderabad; hence, for most of the cases the null hypothesis can be rejected and long-run cointegration exists.
Table 5 Johansen cointegration rank test results for DTEC and Dst for the Hyderabad location
Table 6 Johansen cointegration rank test results for DTEC and AE pair for the Hyderabad location
Table 6 shows the cointegration test results for the DTEC–AE pair. For most of the storm cases, the null hypothesis is rejected. The eigenvalue test statistic is lower than the critical value for the storms on March 22, 2015, April 16, 2015, and June 22, 2015; the null hypothesis cannot be rejected for these days.
The causality tests are based on the null hypothesis "the coefficients of the past values in the regression equation are zero." The significance level is set to p = 0.05, and if the p value obtained from the test is less than 0.05, the null hypothesis is rejected, i.e. the hypothesis that past values of time series X do not cause series Y is rejected. The hypothesis is tested using the F test, and causality is tested pairwise; for causality to exist between two time series, the null hypothesis must be rejected. For testing causality between DTEC and the indices Dst and AE, the null hypothesis is stated as "past values of the Dst index (X1) and AE index (X2) do not cause a deviation in VTEC (DTEC)."
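A sketch of this pairwise F test, assuming statsmodels' grangercausalitytests, is shown below; note that this routine tests whether the second column Granger-causes the first, so the DTEC series is placed in the first column. The helper name and the maximum lag of 4 are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def granger_f_test(dtec, driver, maxlag=4, alpha=0.05):
    """Test whether `driver` (Dst or AE) Granger-causes the DTEC series."""
    # Stationary (differenced) series, DTEC in the first column.
    stacked = np.column_stack([dtec, driver])
    results = grangercausalitytests(stacked, maxlag=maxlag)
    for lag, res in results.items():
        f_stat, p_value, _, _ = res[0]['ssr_ftest']
        print(f"lag={lag}: F={f_stat:.2f}, p={p_value:.3f}, "
              f"{'causality' if p_value < alpha else 'no causality'}")
```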
Furthermore, since all the time series under consideration are non-stationary, the cointegration test results are considered before performing the Granger causality test: whenever the series are cointegrated, the causality test based on a VECM is carried out; otherwise they are fitted with a VAR as mentioned in the "Granger causality test" section. The vector error correction model (VECM) expresses the long-run dynamics of the process including the error correction term (αβ′Y_{t−1}), which measures the deviation from the stationary mean at time t−1, and is given as:
$$ \Delta Y_t = c + \Pi Y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \Delta Y_{t-i} + \epsilon_t $$
where \( \Pi = \alpha\beta' \), \( c \) is the drift coefficient, \( \Pi = \sum_{i=1}^{p} A_i - I \), and \( \Gamma_i = -\sum_{j=i+1}^{p} A_j \).
If the variables in Yt have differencing order of one (I (1)), then the terms involving differences are stationary, and the error correction term in the VEC model introduces long-term stochastic trends between the variables.
The appropriate lag order p is selected using the Akaike Information Criterion (AIC). The causality result is represented as a matrix based on the p value. The Granger causality test results on different storm days are discussed in the next section.
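A hedged sketch of this model-selection step is given below, assuming the statsmodels VECM/VAR implementations; the helper name fit_causality_model and the deterministic-term specification are assumptions, not the authors' exact configuration. Whether a VECM or a VAR is fitted mirrors the rule stated above.

```python
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM, select_order, select_coint_rank

def fit_causality_model(data, maxlags=6):
    """data: T x k array (e.g. DTEC, Dst, AE). Chooses the lag by AIC and fits
    a VECM when a cointegrating relation is found, otherwise a VAR."""
    lag = select_order(data, maxlags=maxlags, deterministic="ci").aic
    rank = select_coint_rank(data, det_order=0, k_ar_diff=lag,
                             method="trace", signif=0.05).rank
    if rank > 0:
        return VECM(data, k_ar_diff=lag, coint_rank=rank,
                    deterministic="ci").fit()
    return VAR(data).fit(maxlags=lag, ic="aic")
```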
Results of the Granger causality test
After finding cointegration among the data series, the Granger causality is estimated between the selected pairs DTEC and Dst and DTEC and AE. The results of the Granger causality tests are presented in Table 7 which shows F statistics for the causality tests between variables DTEC and Dst and DTEC and AE for Bangalore. The null hypothesis of Granger causality is rejected for most of the cases of storms and substorms. This indicates that most of the geomagnetic storms can be explained by either Dst or AE. The same procedure is repeated for Hyderabad. Table 8 summarizes the results of the F test done for Granger causality at Hyderabad between pairs DTEC–Dst and DTEC–AE, respectively.
Table 7 F statistics results of the Granger causality test at location Bangalore with variables DTEC and Dst pair and DTEC and AE pair at 5% significance level
Table 8 F statistics results of the Granger causality test at location Hyderabad with variables DTEC and Dst pair and DTEC and AE pair at 5% significance level
For both locations, the deviation in TEC could be explained by either Dst or AE depending on the nature of the storm or substorm. The difference in the significance (p value) between the latitudes also clearly indicates that the impact of a storm differs between the two locations; hence, this test can help in providing results at the regional level.
This work probes the relationship between the GPS-derived VTEC at a location and the geomagnetic indices Dst and AE through causality analysis. Geomagnetic storms can cause considerable irregularities in the ionospheric TEC, leading to positional errors; hence, estimating the amount of deviation in TEC during a geomagnetic storm can improve positional accuracy. The causal inference provides intuitive ways of detecting an anomaly in the TEC variation during disturbed ionospheric conditions. It is well known that most geomagnetic storms can be well explained with the Dst or AE index, and in this study, Granger causality between the geomagnetic indices and TEC could be established for most cases. As per the causality test results, causality between the deviation in TEC and both geomagnetic indices Dst and AE could not be established simultaneously for some storms, primarily due to differences in their origin and type. However, causality could be established with either Dst or AE for most of the storm cases tested for the year 2015. In this paper, storms of different intensities, types, and origins are presented; some storms were in the main phase while others were in the recovery phase. Furthermore, causality is tested for both recurrent and sudden commencement storms, and for most of the cases causality could be established with suitable lag values. The storms on March 1 and 2 have a similar origin, and their causality results are well aligned with Dst. For most of the storm days, all three variables DTEC, AE, and Dst are found to be cointegrated at both latitudes, indicating long-run dependence in at least one direction. The causality method can further be used for predicting short-term TEC irregularities using VAR or VECM models; however, further investigation with more variables and different lag values is required. Advance prediction of TEC can help mitigate ionospheric effects on trans-ionospheric communication and improve navigation systems used for critical applications, especially in equatorial regions.
TEC:
Total Electron Content
GNSS:
Global Navigation Satellite System
AE:
Auroral Electrojet
Dst:
Disturbance Storm Time
CMEs:
Coronal Mass Ejection
CIRs:
Corotating Interaction Region
IMF:
Interplanetary Magnetic Field
EEJ:
Equatorial Electrojet
ADF:
Augmented Dickey Fuller
This study received no funding from any source.
Department of Electronics and Communication, Nirma Institute of Technology, Ahmedabad, Gujarat, India
Sumitra Iyer
Mukesh Patel School of Technology Management & Engineering, Vile Parle (West), Mumbai, India
Alka Mahajan
The idea of the causality analysis was suggested by AM, and SI implemented the idea and verified the results. All authors have read and approved the final manuscript.
Correspondence to Sumitra Iyer.
Iyer, S., Mahajan, A. Granger causality analysis of deviation in total electron content during geomagnetic storms in the equatorial region. J. Eng. Appl. Sci. 68, 4 (2021). https://doi.org/10.1186/s44147-021-00007-x
Geomagnetic storm
Granger causality
Inference of spatiotemporal effects on cellular state transitions from time-lapse microscopy
Michael K. Strasser1,
Justin Feigelman1,2,
Fabian J. Theis1,2 &
Carsten Marr1
Time-lapse microscopy allows one to monitor cell state transitions in a spatiotemporal context. Combined with single cell tracking and appropriate cell state markers, transition events can be observed within the genealogical relationship of a proliferating population. However, to infer the correlations between the spatiotemporal context and cell state transitions, statistical analysis with an appropriately large number of samples is required.
Here, we present a method to infer spatiotemporal features predictive of the state transition events observed in time-lapse microscopy data. We first formulate a generative model, simulate different scenarios, such as time-dependent or local cell density-dependent transitions, and illustrate how to estimate univariate transition rates. Second, we formulate the problem in a machine-learning language using regularized linear models. This allows for a multivariate analysis and disentangles indirect dependencies via feature selection. We find that our method can accurately recover the relevant features and reconstruct the underlying interaction kernels if a critical number of samples is available. Finally, we explicitly use the tree structure of the data to validate whether the estimated model is sufficient to explain correlated transition events of sister cells.
Using synthetic cellular genealogies, we show that our method is able to correctly identify features predictive of state transitions, and we moreover validate the chosen model. Our approach allows us to estimate the number of cellular genealogies required for the proposed spatiotemporal statistical analysis, and we thus provide an important tool for the experimental design of challenging single cell time-lapse microscopy assays.
Cellular plasticity is the key property essential for multi-cellular development [1], tissue maintenance [2] and regeneration [3]. While the notion of state transitions from multipotent stem cells to mature functional cells is established, the breakthrough findings on transdifferentiation [4] and reprogramming [5] have sparked renewed interest into mechanisms driving cellular lineage choice with the prospect of therapeutic application [6].
To understand differentiation kinetics and thus the origins of stem cell population heterogeneity, one has to observe the transition of cells between states of different lineage potential. However, observing cell state transitions is impossible in data obtained from a single time point, emerging from e.g. flow cytometry, transcriptome or immunofluorescence analyses. For example, a clonal colony of differentiated cells may have originated from a single differentiated cell following multiple divisions, or from the simultaneous differentiation of multiple cells after a few divisions. Continuous time information and the tracking of individual cells is necessary to distinguish the two possibilities.
Live cell imaging allows one to observe state transitions, e.g. via cell surface markers or cell morphology (Fig. 1 a), but it cannot immediately provide a mechanistic explanation of why the transition occurs. For example, the differentiation rate of a stem cell towards a more mature cell type may depend on time [7] (Fig. 1 c), cell density [8] (Fig. 1 d), the makeup of surrounding niche cells [9], or a combination of these features. While it is possible to quantify the emergence of cellular patterns in colonies [10, 11], it is impossible to tell from the mere observation whether the simultaneous differentiation of multiple cells is a random event or whether it is triggered by, e.g., the increased density in the colony. The inference of features predictive of this state transition rate requires robust statistical analysis, and thus a large amount of time-lapse microscopy data, which is, in particular for mammalian systems, still challenging and labor intensive to obtain [12, 13].
State transitions observed via time-lapse microscopy can be explained by different mechanisms. a During a time-lapse microscopy experiment cells are imaged over multiple time points. From these images, spatial configuration, cell proliferation and changes in cell state, e.g. via surface markers (we consider only two states, indicated by black and cyan) can be obtained. However, these observations do not inform about the underlying mechanisms that caused the transition in cell state. For example, the state transition could be entirely random (b), where cells spontaneously undergo state transitions (indicated by dice), it could depend on time (c), such that the transition rate changes in the course of the experiment (indicated by clocks). Alternatively, the transition could depend on local cell density (d), e.g. cells with higher local cell density preferentially transit from one state to the other
Here, we present a model and analysis framework that can infer the spatiotemporal features predictive of state transitions and also allows us to estimate the number of samples required for this analysis. To validate the performance of our framework, we first simulate cellular genealogies from a generative spatiotemporal model for different scenarios of transition rate dependencies. We then develop an inference method based on generalized linear models (GLM) and feature selection with L 1 regularization. We show that our method is able to correctly identify the transition rate as a multi-feature function and determine the number of required genealogies and tolerable tracking errors for different scenarios. Finally, we use the correlations between cell siblings to validate the chosen approach and detect shortcomings – either due to non-considered features, or due to cell-internal effects that drive cell state transitions.
A generative model for spatiotemporal cellular genealogies
Throughout this paper, we use a simple model of cell state transition with two cellular states I and II (Fig. 2 a). A single cell in state I (black circle in Fig. 2 a) can divide into two cells in state I, or transition into another state II (cyan circle), where it can only divide. This unidirectional state transition could for example model cell differentiation, where a progenitor transforms into a more differentiated cell type, but the reverse transition does not occur naturally. The transition rate λ(t,F i (t)) of a cell i from state I to state II depends on the features F i of the cell. Notably, the features F, like time, cell cycle state, position or local cell density, can change over time. Specific examples of the function λ(t,F i (t)) are introduced later on (see section "Cell state transition scenarios").
Spatiotemporal simulation and analysis of cell state transitions. a In our model, a cell in state I (black) can divide or transition into state II (cyan). The transition is governed by the transition rate λ, which can depend on features like time, position, cell cycle, or the local cell density. This unidirectional transition model is inspired by cellular differentiation where an undifferentiated progenitor cell irreversibly transitions into a more differentiated cell type. b Visualization of a cellular genealogy in space and time with cells in state I (black to gray) and state II (cyan to blue). c Tree view of the genealogy depicted in b (coloring as in a). d Local cell density is modeled via a set of annular basis functions ϕ k with inner radii k Δ r and constant thickness Δ r (green circles). Cells are indicated as crosses. e Linear combinations of the ϕ k can approximate any density dependence (e.g. a tophat kernel, upper panel, or a Gaussian kernel, lower panel). f The tree structured data is transformed into a data matrix by discretizing time (t 0,…,t 4 in this example) and creating one sample (i.e. one column) for each cell at each time interval, simulating a measurement process. For each cell i and each timepoint t, we record different features (i.e. rows), e.g. cell coordinates x i(t) and y i(t), the spatial features \({\phi ^{i}_{0}}(t), {\phi ^{i}_{1}}(t),\ldots \) (illustrated in d) and state transition events Y within the time interval
Mathematically, in our model a single cell is defined by its 2D spatial coordinates \(x \in \mathbb {R}^{2}\) (assuming an in vitro experimental setting where cells are imaged on a cover-slip), its state Y ∈ {0,1}, where Y=0 (Y=1) if the cell is in state I (II), and its age \(\tau \in \mathbb {R}^{+}\), i.e. the time since the last division. The cell division rate γ(τ) is age dependent to account for non-exponential lifetime of cells (constant γ would yield unrealistic exponential lifetimes). This system of dividing cells that undergo state transitions evolves probabilistically in time and has to be described by a Master Equation (accounting not only for changes in Y and x but also considering cell divisions), whose derivation is sketched in Additional file 1: Section 1. Instead of solving the intractable Master Equation, we simulated realizations of the underlying stochastic process (Fig. 2 b): Since the system has continuous (space x, age τ) and discrete (cell state Y) variables, a standard stochastic simulation algorithm cannot be applied and a hybrid simulation method must be used (see e.g. Haseltine et al. [14]). Cell position is treated as Brownian motion (movement speed resembles agile cells, e.g. hematopoietic stem and progenitor cells) and is updated via an Euler-Maruyama scheme [15].
To evolve the cell state in time for a single cell in state I, the simulation proceeds in small time steps Δ t, during which a state transition event takes place with probability (see Additional file 1: Section 5)
$$\begin{array}{*{20}l} P_{i}(t)&=1-e^{-\int_{t}^{t + \Delta t} \lambda(t', F_{i}(t')) dt'} \approx 1-e^{-\lambda(t, F_{i}(t))\cdot \Delta t} \end{array} $$
for some arbitrary, state and time-dependent transition rate λ(t,F i (t)). The rate λ is evaluated at the beginning of each iteration, and the time step Δ t is chosen sufficiently small (such that no appreciable change in cell locations occurs and the rate λ is approximately constant). The cell divides after 12 hours on average, corresponding to the typical lifetime of mammalian stem and progenitor cells [16, 17] (for simplicity, but without loss of generality, we assumed cell lifetime to be uniformly distributed in the interval [10 h,14 h]). The cell division replaces the dividing cell by two daughter cells, with positions close to that of the mother cell and with the same cell state: e.g. a mother cell in state I gives rise to two daughters in state I. These cells are then simulated in parallel. Over the course of the simulation, a cellular genealogy with a distinct cell state pattern emerges (Fig. 2 c). Genealogies are simulated for 100 hours (8−9 generations of cells) corresponding to the typical observation periods of long term time-lapse microscopy [17–19].
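For illustration, a single step of this hybrid simulation might be sketched as follows; the Brownian motility constant, the signature of rate_fn, and the random seed are placeholder assumptions rather than the parameters used for the published genealogies.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_step(x, state, t, dt, rate_fn, diffusion=5.0):
    """One hybrid simulation step for all cells.

    x        : (n, 2) array of cell positions in micrometres
    state    : (n,) array, 0 = state I, 1 = state II
    rate_fn  : callable returning the transition rate lambda for each cell
    diffusion: Brownian motility constant (placeholder value)
    """
    # Euler-Maruyama update of the Brownian cell positions.
    x = x + np.sqrt(2.0 * diffusion * dt) * rng.standard_normal(x.shape)
    # Transition probability within dt for cells still in state I.
    lam = rate_fn(t, x, state)
    p = 1.0 - np.exp(-lam * dt)
    transitions = (state == 0) & (rng.random(len(state)) < p)
    state = np.where(transitions, 1, state)
    return x, state
```

Cell divisions would be handled in the same loop by duplicating a cell once its sampled lifetime has elapsed.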
Local cell density
Local cell density for a single cell is estimated using a kernel f that determines how much each cell contributes to the local density at a certain point x in space as a function of intercellular distance. We define the local cell density \({\rho _{i}^{f}}(t)\) of cell i at time t with respect to a kernel \(f:\mathbb {R}\rightarrow [0,\infty ]\):
$$\begin{array}{*{20}l} {\rho_{i}^{f}}(t) = \sum_{j\ne i} f\left[ d(x_{i}(t),x_{j}(t)) \right]\;, \end{array} $$
where x i (t) is the spatial coordinate of cell i at time t and d(x i ,x j ) denotes Euclidean distance. We use either a tophat kernel (Fig. 2 e, upper panel, black line) with
$$ f(r) = I(r<R)\;, $$
where I(…) is the indicator function, equal to 1 if the condition holds and 0 otherwise, or a Gaussian kernel (Fig. 2 e, lower panel, black line) with
$$ f(r) = \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{r^{2}}{2 \sigma^{2}}}\;. $$
For the tophat kernel each cell within distance R contributes equally to the local density experienced by cell i, whereas cells with distance larger than R do not contribute at all. For the Gaussian kernel the contribution to the local cell density decreases smoothly with distance.
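The two kernels could be evaluated as in the following sketch (an assumed NumPy/SciPy implementation, not the simulation code used here); the defaults R = 300 μm and σ = 130 μm follow the values quoted in the scenarios below.

```python
import numpy as np
from scipy.spatial.distance import cdist

def local_density(x, kernel="tophat", R=300.0, sigma=130.0):
    """Local cell density rho_i for all cells (positions x, shape (n, 2))."""
    d = cdist(x, x)                       # pairwise Euclidean distances
    if kernel == "tophat":
        f = (d < R).astype(float)         # each neighbour within R counts as 1
    else:                                 # Gaussian kernel
        f = np.exp(-d**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    np.fill_diagonal(f, 0.0)              # exclude the cell itself (j != i)
    return f.sum(axis=1)
```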
Local cell density as a linear combination of basis functions
In order to model and estimate any (radially symmetric) density kernel f, we approximate f as a linear combination of basis functions ϕ k , k=0,1,…
$$\begin{array}{*{20}l} f \approx \sum_{k} \omega_{k} \cdot \phi_{k}\;, \end{array} $$
where the ϕ k are defined as
$$\begin{array}{*{20}l} \phi_{k}(r) = I\left[ k \Delta r < r \le (k+1)\Delta r \right]\;, \end{array} $$
and I(…) denotes the indicator function. ϕ k resembles a ring of inner radius k Δ r and thickness Δ r (Fig. 2 d). For example, we can recover the tophat kernel with radius R (Eq. 2) by choosing the coefficients ω k as
$$\begin{array}{*{20}l} \omega_{k} = \begin{cases} 1,& k\Delta r< R\\ 0,& k\Delta r\ge R \end{cases}\;. \end{array} $$
For our analysis, we choose Δ r=40 μ m, which allows to resolve short range interactions on the order of eukaryotic cell diameter (≈20 μ m) but also long range interactions due to diffusive signaling molecules (max. 25 cell diameters or 500 μ m [20]).
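Computed per cell, the features ϕ_k reduce to neighbour counts in concentric rings. A sketch under the choices stated above (Δr = 40 μm) is shown below; the number of rings and the helper name are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def annular_features(x, n_rings=10, dr=40.0):
    """phi_k features: number of neighbours with k*dr < distance <= (k+1)*dr."""
    d = cdist(x, x)
    np.fill_diagonal(d, np.inf)           # exclude the cell itself
    phi = np.zeros((len(x), n_rings))
    for ring in range(n_rings):
        inner, outer = ring * dr, (ring + 1) * dr
        phi[:, ring] = ((d > inner) & (d <= outer)).sum(axis=1)
    return phi                            # any kernel f ~ sum_k w_k * phi_k
```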
Cell state transition scenarios
We create four datasets corresponding to different scenarios of cell state transition:
1. We consider a scenario where the transition rate is constant (λ constant), resembling spontaneous transitions independent of other effects:
$$ \lambda(t,F_{i}(t)) = c\;, $$
with c=0.01 h −1. Thus, a state transition in a cell with a typical 12 h lifetime will occur with probability p=0.11.
2. For a time-dependent scenario (λ∝ time), the transition rate is chosen as
$$ \lambda(t,F_{i}(t)) = a \cdot t\;, $$
i.e. linearly increasing with time (a=3·10−4 h −2). Note that λ does not depend on any other feature F of the cell. A time-dependent transition rate might for example be encountered in an in vitro stem cell system, where primary stem cells are isolated, separated from the stem cell niche. Over time the stem cells are depleted of crucial signaling molecules previously supplied by the niche cells and start transitioning into more mature cells.
3. For a density-dependent scenario (λ∝ density), the local density of a cell i at time t is mediated by a tophat kernel (Eq. 2) with R=300 μ m, which is roughly the distance a cell can move in its lifetime (we assume agile, non-adherent cells in our simulations). The transition rate λ is then defined by
$$ \lambda\left(t, \rho_{i}^{\text{tophat}}(t)\right) = b \cdot \rho^{\text{tophat}}_{i}(t)\;, $$
with b=0.002 h −1. Density-dependent transition rates might be relevant for in vitro cultures of embryonic stem cells, which are known to differentiate when cell density becomes too large [8]. Another example for density-dependent transitions are bacteria that use quorum sensing to estimate local cell density and base their fate decision on that, e.g. by becoming virulent [21].
4. For a time and density-dependent scenario (λ∝ density + time), the contributions of the previous two factors are summed, using a Gaussian kernel (Eq. 3 with σ=130 μ m) to define cell density:
$$ \lambda\left(t, \rho^{\text{Gauss}}_{i}(t)\right) = a \cdot t + b \cdot \rho^{\text{Gauss}}_{i}(t)\;. $$
Non-parametric estimation of the transition rate
Given a dataset as described above, we now delineate two methods to estimate the transition rate from the data. First, the transition rate λ can be estimated non-parametrically by considering the definition of the rate as the probability of a transition in an infinitesimal time dt:
$$ P(t,t+dt| F_{i}(t)) = \lambda(t,F_{i}(t)) \cdot dt\;, $$
where P(t,t+d t|F i (t)) is the probability for a transition in the interval [t,t+d t] in the presence of the features F. We estimate the probability P(t,t+d t|F) of a state transition in [t,t+d t] given features F as
$$\begin{array}{*{20}l} \hat P(t,t+dt| F) = \frac{\text{Number of transition events}| (t,F)}{\text{Number of cells in state I}| (t,F)}\;, \end{array} $$
which is the fraction of candidate cells (in state I) that transit into state II in [t,t+d t] having features F. After rearranging Eq. 9, we obtain
$$\begin{array}{*{20}l} \hat \lambda(t,F)=\frac{1}{\Delta t} \cdot \frac{\text{Number of transition events}| (t,F)}{\text{Number of cells in state I}| (t,F)} \end{array} $$
To measure the uncertainty of the estimates, we calculate Bayesian credibility intervals (see Additional file 1: Section 2).
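A minimal sketch of this binned estimator is given below; the Jeffreys Beta(1/2, 1/2) posterior used here for the 95% credibility band is an assumption standing in for the prior detailed in Additional file 1: Section 2, and the helper name and bin count are illustrative.

```python
import numpy as np
from scipy.stats import beta

def hazard_estimate(feature, event, dt, bins=20):
    """Binned estimate of lambda(F) = (transitions) / (cells at risk) / dt.

    feature : feature value (e.g. time or local density) of each cell-timepoint
    event   : 1 if a transition occurred in that interval, else 0
    """
    feature = np.asarray(feature, dtype=float)
    event = np.asarray(event, dtype=float)
    edges = np.linspace(feature.min(), feature.max(), bins + 1)
    idx = np.clip(np.digitize(feature, edges) - 1, 0, bins - 1)
    n_risk = np.bincount(idx, minlength=bins)               # cells in state I
    n_event = np.bincount(idx, weights=event, minlength=bins)
    lam = np.where(n_risk > 0, n_event / np.maximum(n_risk, 1) / dt, np.nan)
    # Assumed Jeffreys Beta(1/2, 1/2) posterior for the 95% credibility band.
    lo = beta.ppf(0.025, n_event + 0.5, n_risk - n_event + 0.5) / dt
    hi = beta.ppf(0.975, n_event + 0.5, n_risk - n_event + 0.5) / dt
    return edges, lam, lo, hi
```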
Estimating the transition rate via generalized linear models
The transition rate can be inferred systematically using a machine-learning framework. We consider every timepoint of each cell as an observed sample (F^{(i)}, Y^{(i)}), where F^{(i)} is a set of features measured for this sample (absolute time, time since last division, absolute spatial coordinates, and different measures of local cell density ϕ_k). We use superscripts to index the samples to clearly distinguish them from the per-cell indexing via subscripts used previously. Y^{(i)} ∈ {0,1} denotes the class label of the sample being either "state I" (Y^{(i)}=0) or "transition into II" (Y^{(i)}=1). A sample is considered as Y^{(i)}=1 if a state transition occurred in the time interval of the sample. Timepoints after the state transition (either of the cell itself or its progeny) are discarded (Fig. 2 f), since we are interested in what actually triggers the transition of cells, not in the state of the cell itself. Counter-intuitively, all samples (F^{(i)}, Y^{(i)}) can be treated as independent, even though, e.g., adjacent samples typically are strongly correlated with respect to their features (Additional file 1: Section 3).
We use generalized linear models (GLMs, [22]) to learn the relation between features F^{(i)} and class labels Y^{(i)} as
$$\begin{array}{*{20}l} \mathbb{E}(Y^{(i)}|F^{(i)},w) = \mu^{(i)} = g^{-1}(w^{T}F^{(i)})\;, \end{array} $$
where μ^{(i)} is the expected value of an exponential family distribution, g^{-1} is called the mean function, and w is the weights vector that has to be learned from the data. Choosing a Bernoulli distribution and an exponential mean function would exactly correspond to our data generating process (Additional file 1: Section 4). However, this specific GLM has unfavorable numerical properties leading to convergence issues [23]. Therefore, we resort to a GLM that has the desired exponential mean function but a Poisson instead of a Bernoulli distribution (also known as Poisson regression) and has better numerical properties. Note that Poisson regression is generally used to model count data (where \(Y^{(i)} \in \mathbb {N}_{0}\)), but is a good approximation to binary data (Y^{(i)} ∈ {0,1}) in the case of rare events (see Additional file 1: Section 4). Thus, we obtain the following log-likelihood (see Additional file 1: Section 4 for a derivation):
$$\begin{array}{*{20}l}{} \log p(Y|F,w)=\sum_{i} \left[Y^{(i)} w^{T} F^{(i)} -e^{w^{T} F^{(i)}}-\log(Y^{(i)}!)\right]\;. \end{array} $$
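For illustration, this Poisson regression (without the regularization introduced in the next section) could be fitted as sketched below, assuming statsmodels' GLM with its default log link, i.e. the exponential mean function described above; the helper name and feature layout are assumptions.

```python
import numpy as np
import statsmodels.api as sm

def fit_poisson_glm(F, Y):
    """F: (n_samples, n_features) matrix with columns such as time, cell cycle,
    x, y and the annular densities phi_0, ..., phi_9; Y: binary labels."""
    X = sm.add_constant(F)                               # intercept term
    model = sm.GLM(Y, X, family=sm.families.Poisson())   # log link by default
    return model.fit()                                   # exposes .params, .bse, ...
```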
Feature selection via L 1 regularization
To determine the relevant features of the transition rate and to exclude features that only indirectly influence the state transition (as e.g. for scenario 3 with a density dependent λ, where however λ also indirectly depends on time; see Fig. 3 c, d and Results), we apply L 1 regularization to the GLM, also known as Lasso (least absolute shrinkage and selection operator) [24]. Here one minimizes the following function with respect to the weights w:
$$\begin{array}{*{20}l} g(w) &= -\log p(Y|F,w) + \kappa \cdot \|w\|_{1}\;, \end{array} $$
Features regulating the transition rate λ can be estimated non-parametrically from cellular genealogies with annotated state transition events. a The transition rate estimated from 100 genealogies (posterior mean, black line) agrees well with the true constant transition rate (red line). Gray areas indicate the 95 % credibility region of the estimate. b The transition rate estimated from 100 genealogies simulated with linear time-dependent rate agrees well with the true rate (red solid line). c The transition rate as a function of local cell density ρ for 100 genealogies simulated with density-dependent rate. The estimated transition rate seems to depend on both local density ρ (in line with the simulated form λ=b·ρ) and time (see inset). d The estimated transition rate \(\hat \lambda \) as a function of both density and time reveals that the time-dependence observed in the inset in c is an indirect influence (density increases with time, see inset). Instead, the transition rate depends only on local cell density ρ (as seen by the predominantly uniform pattern of \(\hat \lambda \) in time for fixed ρ, indicated by arrow)
with \(\|w\|_{1} = \sum _{i} |w_{i}|\). This regularization is equivalent to placing a Laplace prior with location parameter m=0 and scale parameter b=κ −1 on the weights [25], resembling our knowledge that most of the weights should be zero and the resulting model should be sparse. Depending on the chosen regularization strength κ, one obtains models of differing sparsity (Fig. 4 a). We follow the standard approach to determine the optimal regularization parameter κ ∗: for each κ, we perform ten-fold cross validation using the deviance of the model as the error criterion and choose κ ∗ based on the 1SE rule [26]: We select the largest κ (hence the simplest model) that in terms of its deviance is still within one standard error of the best κ. Optimization and cross validation of Lasso is performed using the function lassoglm() from the Matlab Statistics Toolbox.
Regularized generalized linear models (GLMs) select the relevant features predictive of cell state transitions. a Regularization path of the GLMs applied to the density dependent dataset. The means (lines) and standard deviations (shaded regions, shown only for the relevant features) of the regression weights w are plotted against the regularization strength κ across 50 bootstrap samples (see Methods for details). The mean of the optimal regularization strength κ ∗ determined by cross validation is shown as a vertical black line. Solid (dashed) lines correspond to relevant (irrelevant) features in the respective scenario. b Percentage of bootstrap samples that included the respective features. Included features were determined as those with non zero weights at κ ∗. Enforcing a 90 % threshold (gray area) on the inclusion probability for each feature, we select the relevant features of the model. The features ϕ 0,ϕ 1 are not included as their effect is too weak to be detected by the GLM at the current sample size (see main text). c Reconstructed kernel of local cell density (bars) from the selected features in b. The true underlying tophat kernel shape is shown in black. As in b, the features ϕ 0,ϕ 1 are not included because their effect is to weak. d-f Analogous to a-c, but for a dataset where the transition rate λ depends on time and local cell density with a Gaussian kernel. Both features are correctly identified and the density kernel is correctly estimated
Additionally, we have to account for the fact that the classes in our dataset are severely imbalanced with more non-events than events (at a ratio of 1:200 in our simulations). Such class imbalance can lead to problems for machine learning algorithms [27]. Therefore, we down-sample the majority class (Y (i)=0) to achieve a ratio of 1:3, yielding a good tradeoff between class balance and number of overall samples. Feature selection using Lasso is applied to this down-sampled dataset via Eq. (12). Since down-sampling intentionally discards data and Lasso feature selection is sensitive to data perturbation [25], we repeat the procedure N=50 times, each time using a different sample of the majority class, combining it with the minority class and fitting the Lasso to this dataset. This approach is adapted from rare event logistic regression with replication [28] and is reminiscent of bootstrap Lasso [29]. Finally, for each feature, we record the probability of inclusion in the model, i.e. the percentage of the N iterations that included the feature into the model at the optimal regularization strength κ ∗. We consider those features to be relevant that have an inclusion probability larger than 90 % [29]. This yields the final set of features for our model. We now fit this sparse model to the full data without the L 1 penalty (a process called "debiasing" [25]), since L 1 regularization is biased towards too small weights. We thus obtain our final model, its associated weights \(\hat w\) and the corresponding transition rate \(\hat \lambda (t,F) = -\hat w^{T} F \cdot \Delta t\).
The inclusion probability threshold (0.9) controls the probability α of a type I error, i.e. including a feature even though it is irrelevant. In addition, it is also important to assess the probability β of type II errors, i.e. the chance that a relevant feature is not included in the model, or equivalently, the statistical power = 1−β, which is the probability of discovering the feature if it is indeed relevant. The power is a function of sample size and effect size (the parameters a and b in Eqs. 6–8): the more samples are available and the larger the effect size, the higher the probability of discovering a relevant feature. Since no analytical expressions are available, we estimate the statistical power of our model with respect to a certain feature through repeated simulation: given a fixed sample size and effect size, we generate M independent datasets and apply the above GLM with bootstrapping-based feature selection to each dataset, resulting in M different models, which might have selected different features. We then approximate the statistical power as the fraction of the M models that correctly selected the feature of interest. Since the computations become demanding (sample size and effect size/tracking error have to be varied, see Fig. 5), we choose M=10 (Fig. 5) and M=20 (Fig. 6).
The method's performance is robust for different sample size and effect size. a The statistical power for each feature (the probability of including the feature into the model) plotted against sample size (other parameters as in Fig. 4 a-c). Relevant features have high probability of being included in the model (power ≥0.8) when 2000 or more transition events (corresponding to approximately 44 genealogies) are used for the analysis. Solid (dashed) lines correspond to relevant (irrelevant) features in the respective scenario. b The power for the density feature ϕ 2 shown as a function of sample size and relative effect size. The red line indicates the section corresponding to a with a relative effect size of 1. As expected, power increases with sample size and relative effect size. c,d The power as a function of sample size and relative effect size for all c relevant and irrelevant d features of the scenario. Colorbar as in b. e-h Analogous to a-d, but for a dataset used in Fig. 4 d-f, where the transition rate λ depends on time and local cell density with a Gaussian kernel
The method's performance is robust for moderate amount of tracking error. a Statistical power plotted against the amount of tracking error for the density dependent scenario from Fig. 4 a-c (4500 onsets). Solid (dashed) lines correspond to relevant (irrelevant) features in the respective scenario. The correct features are identified reliably (power ≥0.8) up to a tracking error of 3 %. For larger tracking error, there is a high probability to include time (blue curve) into the model even though it is only an indirect influence. Note that tracking error seems to some extent facilitate the detection of ϕ 0,ϕ 1 (see main text for details). b Analogous to a, but for the dataset where the transition rate λ depends on time and local cell density with a Gaussian kernel (4500 onsets)
Expected frequencies of subtree patterns in cellular genealogies
Having estimated the transition rate \(\hat \lambda \) via the regularized GLM, we calculate the number of subtree patterns expected under this transition rate. In the following we consider only subtrees of one generation, i.e. a mother and its two daughter cells, but the approach is easily extendable to larger subtree patterns. The expected frequencies of sister cell pairs in which a state transition occurs in both cells, in one cell, or in neither of the two cells can be used to validate the inferred transition rate (see Fig. 7 a and Additional file 1: Figure S1). We define the random variable C_i to indicate whether cell i underwent a state transition within its lifetime (C_i=1) or stayed in state I (C_i=0). Note that the C_i describe the state of a cell over its entire lifetime, as opposed to the Y^{(i)} used in the previous section, which denote the state of a cell at a small time interval Δt. Using the estimated transition rate \(\hat \lambda \), we calculate the probability of a state transition in a single cell i as
$$\begin{array}{*{20}l} P(C_{i}=1) = p_{i} = 1-\exp\left[- \int_{\zeta_{i}}^{\eta_{i}} ds\hat\lambda(s,F_{i}(s))\right] \end{array} $$
Expected frequencies of sister pairs reveal if the model can account for the observed genealogical correlations. a Comparison of the observed and expected frequencies of sister pairs (both, one, or none undergoing a state transition) of the dataset used throughout Fig. 4 a-c shows no significant difference (p=0.21, χ 2-test, see Methods). Fitting the same data, but not accounting for the ϕ 5,ϕ 6 features causes significant deviations from the expected frequencies (p=1.3·10−6). b P-values of the χ 2-test (average and standard deviation over 10 replicates) to compare the observed and expected frequencies of sister pairs against amount of tracking error for the density dependent scenario. For tracking errors <5 %, the method correctly concludes that the frequencies of observed sister pairs are in agreement with the model (applying a significance threshold of α=0.05, red dashed line)
where \(\hat \lambda (s,F_{i}(s))\) is the estimate of the transition rate the cell experiences throughout its lifetime [ζ i ,η i ] based on its features F i (s) (Additional file 1: Figure S1). Similarly, we derive the probability of a state transition in its sister cell i ′ as \(P(C_{i^{\prime }}=1)\phantom {\dot {i}\!}\). Considering the whole dataset containing M pairs of sister cells (i,i ′),i=1…M, the expected number of pairs where both sister undergo a state transition is:
$$\begin{array}{*{20}l} E_{2} = \sum_{i=1}^{M} P(C_{i}=1, C_{i'}=1) \;, \end{array} $$
where \(P(C_{i}=1, C_{i^{\prime }}=1)\) is the joint probability of these events. However, assuming independence between sisters, this factorizes to
$$\begin{array}{*{20}l} E_{2} &= \sum_{i=1}^{M} P(C_{i}=1) \cdot P(C_{i'}=1)= \sum_{i=1}^{M} p_{i} \cdot p_{i'}\;. \end{array} $$
The expected number of pairs where a state transition occurs in only one sister (E 1) and in none of the sisters (E 0) are:
$$\begin{array}{*{20}l} E_{0} &= \sum_{i=1}^{M} (1-p_{i}) \cdot (1-p_{i'})\\ E_{1} &= \sum_{i=1}^{M} (1-p_{i}) \cdot p_{i'} + p_{i} \cdot (1-p_{i'})\;. \end{array} $$
Applying Eq. 13, we can evaluate (E 0,E 1,E 2) in terms of the estimated transition rate \(\hat \lambda \).
In order to test whether our observed data matches these expected frequencies (E 0,E 1,E 2) we count the observed frequencies (O 0,O 1,O 2) in the data and perform a χ 2 test with two degrees of freedom and
$$ \chi^{2} = \sum_{j=1}^{3} \frac{(E_{j}-O_{j})^{2}}{E_{j}}\;. $$
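The expected pattern frequencies and the χ² statistic could be computed as sketched below (an assumed NumPy/SciPy implementation); p and p_sister are the per-pair transition probabilities p_i and p_i′ obtained from Eq. 13.

```python
import numpy as np
from scipy.stats import chi2

def sister_pair_test(p, p_sister, observed):
    """Chi-square test of the observed sister-pair pattern frequencies.

    p, p_sister : per-pair transition probabilities p_i, p_i' from Eq. 13
    observed    : (O_0, O_1, O_2) counts of pairs with 0, 1 or 2 transitions
    """
    e0 = np.sum((1 - p) * (1 - p_sister))
    e1 = np.sum((1 - p) * p_sister + p * (1 - p_sister))
    e2 = np.sum(p * p_sister)
    expected = np.array([e0, e1, e2])
    stat = np.sum((expected - np.asarray(observed)) ** 2 / expected)
    return stat, chi2.sf(stat, df=2)              # p-value with 2 d.o.f.
```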
In the following, we use our generative model to simulate datasets from the simple cell state transition model (Fig. 2 a) according to four different scenarios, where the transition rate λ depends on different features (e.g. a time dependence or cell density dependence). Subsequently, we apply our proposed inference methods to the data from different scenarios, assuming the data generating scenario is unknown. We show how the dependence of λ can be recovered from the data, e.g. allowing us to distinguish density- and time-dependent scenarios. Furthermore, we analyze the impact of sample size and tracking error on our results in order to assess the required experimental design.
Estimating the transition rate non-parametrically from simulated data
In the simplest scenario the rate λ is constant during the whole time of observation (λ constant, Eq. 5). This corresponds to state transitions occurring spontaneously independent of other influences. Using our generative model for cellular genealogies (see Methods for details), we generate a sample of 100 genealogies with constant rate λ. We then reconstruct the rate \(\hat \lambda \) from the data via Eq. 10 (black curve in Fig. 3 a). The underlying true rate λ (red curve in Fig. 3 a) is well contained within the Bayesian 95 % credibility intervals of our estimate (gray areas in Fig. 3 a).
Next, we simulate 100 genealogies with a linear time-dependent transition rate (λ∝ time, Eq. 6). With the same approach we estimate \(\hat \lambda \) (see Fig. 3 b) and again, we observe good agreement between the estimated (black curve in Fig. 3 b) and the true transition rate (red curve in Fig. 3 b).
We now account for cell-cell communication and consider a transition rate depending on local cell density (λ∝ density, Eq. 7): the more cells present in the vicinity of the cell of interest, the more likely it is that a state transition occurs. We estimate the density dependent rate from 100 simulated genealogies, assuming we already know the underlying density kernel (this assumption will be relaxed later on). The estimated rate \(\hat \lambda (\rho)\) (black curve in Fig. 3 c) linearly increases with local cell density and the true rate is well contained within the credibility intervals (gray area in Fig. 3 c), showing that one can identify the influence of local cell density on the transition rate. Note that the estimates of the transition rates at high density (ρ>35 in Fig. 3 c) carry large statistical uncertainty (indicated by the broad credibility intervals) simply because very few cells are observed in those high local cell densities.
However, if we instead estimate the rate as a function of time from the same dataset, we would conclude that it is time-dependent, since the rate strongly increases over time (see Fig. 3 c, inset). This is an indirect dependence: as time increases, local cell density grows exponentially and as a result, cells are more prone to undergo a state transition (see Fig. 3 d inset). We can resolve this by estimating the rate simultaneously as a function of time and local density, \(\hat \lambda (t,\rho)\) (Fig. 3 d). For fixed local density ρ, the rate is almost constant across different times (black arrow in Fig. 3 d). However, the transition rate changes considerably if the local density changes. Therefore, we can conclude that the true transition rate depends only on local cell density. Notice however that this conclusion relies on having sufficiently many samples, yielding a good coverage of the (t,ρ) space, and knowledge of the range (R) and nature of the spatial interaction. If R is chosen too small, any dependence of λ on the local cell density is hidden by the dominating indirect time-dependence. Moreover, analyzing \(\hat \lambda \) visually becomes infeasible for higher feature dimensions.
Estimating the transition rate with generalized linear models
To approach the aforementioned issues, we infer the transition rate more systematically using the machine-learning framework of generalized linear models (GLMs, see Methods for details). Instead of considering only one feature at a time, we include all features at once and apply feature selection to determine the relevant ones. An additional advantage of this approach is that it is not necessary to assume any density kernel a priori (as in the previous section). Instead, we use a set of spatial features ϕ k , whose linear combination can approximate any kernel (Eq. 4). We then use the proposed GLM equipped with L 1 regularization to learn the relationship between features and class label and to obtain those features that directly influence the state transition rate.
We apply this approach to the density-dependent dataset (λ∝ density, Eq. 7). Starting with strong regularization (that is, a large κ and consequently a sparse model) only the most relevant features have non-zero weights and are included (Fig. 4 a). By decreasing the regularization parameter, the weights of the features gradually increase, making the model more complex. The optimal regularization κ ∗ (the black line in Fig. 4 a corresponds to the mean of κ ∗ across the 50 bootstraps) is determined by cross validation (see Methods for details). All features with non-zero weights at κ ∗ are included in the model. The ground truth of features used to simulate the dataset is indicated by solid (relevant) and dashed (irrelevant) lines in Fig. 4 a.
We estimate the inclusion probability of a feature as the fraction of the 50 bootstraps that selected the feature (Fig. 4 b). For example, the features ϕ 2,…,ϕ 6 (representing local cell densities at different radii, see Fig. 2 d) are present in all bootstraps, ϕ 8 is present in 70 % of the bootstraps, and all other features have low inclusion probabilities. In particular, time is included in only 18 % of the bootstraps and spatial location (x,y) and time since last division (cell cycle) have zero inclusion probability. Choosing a cutoff at 90 % (gray area in Fig. 4 b) for a feature to be included in the final model, we recover all features (except ϕ 0,ϕ 1) that were used to generate that dataset. We miss ϕ 0 and ϕ 1 since their contribution to the overall transition rate is effectively very low: the average number of cells within ϕ 1 is approximately 0.2, whereas the average number of cells within e.g. ϕ 7 is approximately 1. Hence, leaving out ϕ 1 will not change the overall result, and the algorithm chooses to neglect the feature in favor of sparsity.
After feature selection, we can reconstruct the density kernel as a weighted sum of the basis functions ϕ k via Eq. 4 (shown as green bars in Fig. 4 c). Here, we observe that the reconstructed kernel closely resembles the true underlying tophat kernel that was used to simulate the data (shown as a black curve in Fig. 4 c). To demonstrate that the method can faithfully report the range of the spatial interaction, we performed the same analysis on a dataset with a density dependence mediated via a short range tophat kernel (R=40 μ m), which indeed can be recovered from the data (Additional file 1: Figure S2).
We extend the set of relevant features and now consider a scenario where the transition rate depends on time and on local cell density (λ∝ density + time, Eq. 8), this time modeled via a Gaussian kernel (with σ=130 μ m) instead of a tophat kernel to illustrate the versatility of our method. Since the Gaussian kernel has infinite support, a priori there is no clear definition which ϕ i are relevant. In the following, we define all ϕ i inside the 95 % quantile of the Gaussian distribution as relevant. This results in ϕ 0,…,ϕ 4 considered relevant while ϕ 5,…,ϕ 9 are deemed irrelevant.
The regularization path and the feature inclusion probabilities (Fig. 4 d, e) show that the GLM correctly selects both time and local cell density (ϕ 1,…,ϕ 4) with inclusion probabilities close to 1. Finally, using the weights associated with the selected density features we reconstruct the kernel of local cell density and find that it indeed matches a Gaussian kernel (Fig. 4 f). As before (Fig. 4 a-c), the feature selection procedure misses ϕ 0 due to its relatively small contribution to the overall transition rate. We conclude that our proposed method is capable of identifying the features that are most predictive of the transition rate and faithfully filters out indirect influences. Furthermore, we can estimate the shape of the density kernel from the data.
Sample size, effect size and statistical power
Accurate single-cell identification and tracking in time-lapse movies is still a challenging task and requires, at least in mammalian systems, manual data curation [12, 13]. Thus, estimating the required sample size for any given effect size is necessary for efficient experimental design.
To assess the impact of sample size on the performance of the feature selection, we systematically reduce the number of observed state transition events (by reducing the number of genealogies) and calculate the statistical power of our method, i.e. the probability of detecting a certain effect if it is present in the data (Fig. 5 a, e). Starting at the original sample size of 4500 onsets (using all 100 genealogies), we find that the power is 1 for the features detected previously (Fig. 4 b, e), suggesting these features can reliably be detected. Similarly, the model's power with respect to features ϕ 0 and ϕ 1 is 0, hence those features are not detectable at the given sample size. As the sample size decreases, the power for certain features gradually drops (e.g. ϕ 2 in Fig. 5 a): the data no longer contains sufficient statistical information to identify the feature as relevant. At a sample size below 1000 events, the power for all features is considerably smaller than one, such that none of the features can reliably be identified. However, a sample size of 2000 onsets (corresponding to 44 genealogies) is sufficient (based on the established threshold of power >0.8) to faithfully detect the most important features influencing the transition rate and to distinguish a direct time-dependence (Fig. 5 a) from an indirect one (Fig. 5 e).
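A hypothetical sketch of this power estimate: genealogies are subsampled repeatedly and the fraction of repetitions in which each feature is selected is recorded; the names genealogies (per-tree arrays of sample indices) and select_features (the bootstrap-Lasso step sketched above, returning a boolean selection vector) are assumptions.

import numpy as np

rng = np.random.default_rng(1)

def power_at(n_trees, n_rep=20):
    hits = np.zeros(X.shape[1])
    for _ in range(n_rep):
        trees = rng.choice(len(genealogies), size=n_trees, replace=False)
        idx = np.concatenate([genealogies[t] for t in trees])
        hits += select_features(X[idx], Y[idx])      # boolean vector of selected features
    return hits / n_rep                              # estimated power per feature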
The statistical power does not only depend on the available sample size but also on the strength of the effect, i.e. at a fixed sample size a small effect will be harder to detect than a strong one. We therefore vary the effect strength by changing the parameters a and b in Eqs. 7, 8 within one order of magnitude and estimate the power for each feature not only as a function of sample size but also of effect strength (relative to our baseline scenarios used in Fig. 4). As expected, the power increases with increasing sample size and effect strength. For example, in the density dependent scenario, for a large relative effect size of 10, 1500 samples are sufficient to yield a power of 0.8 for feature ϕ 2, while for a small effect size (0.1) more than 4000 samples are needed to achieve the same power (Fig. 5 b). Furthermore, features ϕ 3,…,ϕ 6 can reliably be identified (power >0.8) with more than 2000 onsets, almost independent of the effect strength considered (Fig. 5 c). In contrast, ϕ 0 cannot be detected (power =0) for any of the given effect strengths and sample sizes, and ϕ 1,ϕ 2 are only detectable for both large effect size and sample size (Fig. 5 c).
For the irrelevant features (Fig. 5 d), the probability of being detected as relevant is mostly zero, and they are correctly eliminated from the model. Only for a large effect size does 'Time' have a non-zero probability of being contained in the model: due to the large effect size, state transitions happen after the first cell division (the transition rate increases strongly as soon as one daughter senses the presence of the other) and hence time- and density-dependence cannot be distinguished.
For the dataset used in Fig. 4 d-f, where the transition rate λ depends on time and local cell density with a Gaussian kernel, similar patterns are observed (Fig. 5 f-h). Time is identified reliably (power >0.8) for a sample size larger than 1500 onsets, while for the other relevant features (ϕ 0,…,ϕ 4) more samples or a larger effect size are needed (Fig. 5 g). Interestingly, a larger effect size seems to decrease the power for features ϕ 3,ϕ 4 (Fig. 5 g). If the effect size is very large, most state transitions happen before the cells have spread out enough in space to populate the outer density features. Therefore their effect cannot be inferred from the data.
Influence of tracking error
To obtain genealogies from time-lapse microscopy data, manual [Schwarzfischer et al., in revision] or automatic tracking (for an overview of current methods, see [30]) is required. Neither automatic nor manual tracking produces perfect genealogies; both will introduce errors, especially when local cell density is high or cells move fast compared to the time resolution of the imaging. To test the influence of tracking errors on our method, we introduce artificial tracking errors into the simulated datasets by interchanging the identities of randomly selected cells of the same generation and hence swapping entire subtrees of the genealogies. The amount of tracking error is defined as the percentage of all cells in the dataset where an artificial tracking error was introduced. We simulate different amounts of tracking error, with up to 10 % of all cells in the experiment containing a tracking error. Note that tracking errors impact our analysis only by the creation of spurious state transitions (a cell in state I is at some point accidentally interchanged with a cell in state II). We now evaluate the previous results on these erroneous datasets.
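Purely for illustration, such artificial tracking errors could be injected as in the following Python sketch; the tree interface (depth, cells per generation, swappable children) is an assumption and not the data structure used in the paper:

import random

def add_tracking_errors(tree, fraction, rng=random.Random(0)):
    # swap the subtrees of randomly chosen same-generation cell pairs
    cells = [c for g in range(tree.depth) for c in tree.cells_at_generation(g)]
    n_swaps = int(fraction * len(cells))
    for _ in range(n_swaps):
        g = rng.randrange(1, tree.depth)
        candidates = tree.cells_at_generation(g)
        if len(candidates) < 2:
            continue
        a, b = rng.sample(candidates, 2)
        a.children, b.children = b.children, a.children   # identity interchange = tracking error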
We find that for both the density-dependent (λ∝ density, Fig. 6 a) and the time- and density-dependent scenarios (λ∝ density + time, Fig. 6 b) our method reliably identifies the underlying features (power ≥0.8) for up to 3 % of tracking error. For higher amounts of tracking error, we erroneously identify time as a relevant feature and fail to identify ϕ 2 as a relevant feature in the first scenario (blue line in Fig. 6 a). Note that the wrong inclusion of time is due to the fact that tracking errors and the spurious state transitions created by those errors are more likely at later timepoints where more cells are present. Hence, those spurious onsets at late timepoints lead to the inclusion of time into the model.
For the second scenario (Fig. 6 b), identification of the relevant features seems to be very robust with respect to tracking error, as all of them have power >0.8 even for 10 % tracking error.
In both scenarios, tracking error seems to facilitate the detection of ϕ 0 (and, for the density dependent scenario, ϕ 1), which were previously not detectable or detectable only for large effect sizes (see Fig. 5 c, g). As discussed before, ϕ 0 is removed by the Lasso in favor of sparsity as the other features are sufficient to explain the data. Tracking error increases the noise level, i.e. the correlation between relevant features and class labels Y (i) becomes weaker. Since the other features are no longer sufficient to explain the transition events, the Lasso includes ϕ 0, which now significantly improves the model. However, at some point the tracking error and hence the noise level will become so large that relevant features become decorrelated from the events and the Lasso removes them again in favor of sparsity.
Model validation using sister correlations
As shown above, our method is able to infer state transition mechanisms by identifying relevant features even in the presence of moderate tracking errors. However, what if we fail to include relevant features in the GLM, e.g. unobserved influences like nutrient concentrations? In this section, we show how to use the tree structure – if available – to validate the chosen model. We investigate whether the transition rate λ estimated by the GLM is capable of explaining the observed correlated transition events within the cellular genealogies. We focus here on correlations between sister cells, but the approach easily generalizes to higher order relationships within a genealogy, like cousin-quartets. Suppose that we obtained a reasonable estimate \(\hat \lambda \) of the transition rate. Then, the state transition of one sister cell is independent of the other and is determined solely by the transition rate, which may differ between the two cells due to their spatial context. With this independence assumption, we can calculate the probability of observing each sister subtree pattern (both, one or none of the sister cells change state) simply as the product of the individual probabilities (see Methods). Note that these probabilities are calculated over the entire lifetime of each cell, finally resulting in the expected number of sister subtree patterns for the entire dataset.
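A hypothetical sketch of this computation (the per-interval rate estimate lam_hat, the interval length dt, the list sister_pairs and the observed pattern counts are assumed names; the paper's Eq. 13 is not reproduced here):

import numpy as np
from scipy.stats import chisquare

def p_transition(cell):
    # probability of at least one transition over the cell's lifetime
    return 1.0 - np.prod(1.0 - lam_hat(cell) * dt)

expected = np.zeros(3)                               # pattern counts: [both, one, none]
for a, b in sister_pairs:
    pa, pb = p_transition(a), p_transition(b)
    expected += [pa * pb, pa * (1 - pb) + (1 - pa) * pb, (1 - pa) * (1 - pb)]

observed = np.asarray([n_both, n_one, n_none])       # observed pattern counts (assumed names)
expected *= observed.sum() / expected.sum()          # rescale to a common total for the test
stat, pval = chisquare(observed, f_exp=expected)     # chi-square comparison as in Fig. 7 a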
Using these frequencies, we assess if the transition rate estimated by the GLM (agnostic of the tree structure) is capable of explaining the observed correlations in the genealogies and therefore is an adequate description of the data. For the dataset where the state transition depends only on local cell density (λ∝ density, Eq. 7), we calculate the expected frequencies of sister subtrees given the previously estimated transition rate (Fig. 7 a, gray bars) and compare these to the observed frequencies in the data (Fig. 7 a, black bars). No significant differences are observed (p=0.21, χ 2-test, see Methods), and hence, there is no indication of correlations beyond what we expect from the density dependent transition rate, in agreement with the generative model.
Next, we show how this idea can be used to determine if all relevant features have been included in the GLM. To that end, we now deliberately neglect the spatial features ϕ 5,ϕ 6 when fitting the transition rate via the GLM. Since these two features influence the transition rate in the chosen scenario, fitting the impaired GLM yields a different \(\hat \lambda \) and hence also different expected frequencies of sister subtrees (Fig. 7 a white bars). The frequencies are significantly different (p=1.3·10−6), indicating the model is inappropriate, as there is more correlation in the trees than the model can account for (due to the missing ϕ 5,ϕ 6). This difference is most pronounced for the pattern where both sister cells change their state.
Furthermore, we performed this analysis for a smaller sample size with 2000 onsets (which are sufficient to recover the most relevant features, see Fig. 5 a) and obtain a similar result (see Additional file 1: Figure S3): While observed and expected frequencies of sister subtrees are not significantly different, impairing the model leads to significant differences in the sister subtree frequencies.
Our approach to validate the model using sister correlations (Fig. 7 a) relies on entirely correct trackings of both sister cells, as we integrate over the entire lifetime of these cells in Eq. 13. We therefore test how the validation behaves in the presence of tracking errors: analogous to Fig. 7 a, we evaluate whether the observed frequencies of sister subtrees match the expectations of the model (which was also fitted to the dataset containing the tracking errors) via a χ 2-test for different amounts of tracking error. For the density dependent scenario, we find that up to 5 % of tracking error, we do not observe significant differences between observed and expected frequencies (α=0.05), correctly indicating that the density dependent transition rate can explain the observed frequencies (Fig. 7 b). However, more than 5 % of tracking error results in substantial changes of the sister correlations, which cannot be explained by the model of the transition rate (shown by the significant differences in frequencies).
In this paper, we have presented a method to investigate mechanisms driving cell state transition events observed in cellular genealogies. As two features explicitly regulating the transition rate, we have here considered time and local cell density. Our method is complementary to the approach by Snijder et al. [31], who showed that the response of a cell to a certain stimulus (in their case, a virus infection) strongly depends on each cell's "population context", that is, its localization within the colony, its cell density and cell cycle stage. This approach, which has been applied to the analysis of high-content screens by Knapp et al. [32], is designed for static data and a single, controlled perturbation. The cells are subject to a treatment at a defined timepoint and their response is recorded by a single image. For our purpose, a static approach, where the timepoint of the event is predetermined, is not applicable. Instead, we assume that cells undergo state transitions spontaneously; hence, transition events can happen at any point in time, but their probability changes over time due to the changing environment the cells experience.
Our method currently assumes a linear relationship between features and the transition rate (see Eq. 11). Hence, it is necessary to discuss whether the model can recover relevant features in the presence of nonlinearities or how it can be adapted. In general, it is difficult to predict the outcome when fitting a GLM that assumes a linear transition rate to data generated with a nonlinear transition rate. On the one hand, performance might suffer as the model cannot capture the nonlinearities and might potentially select the wrong features. On the other hand, nonlinearities might simplify the task of identifying relevant features. For example, if the transition rate is a steep, sigmoid function of cell density, this influence will be easier to detect than a linear one: in feature space, the samples with transition events (Y (i)=1) will be clearly separated from the samples without events (Y (i)=0) in the nonlinear case, while in the linear case there is a continuum and no clear separation between those two classes. We simulated a scenario where the transition rate is a sigmoid function of cell density (Additional file 1: Figure S4). Here, our method can still deduce the relevant features despite the nonlinear relation. More generally, one can extend the presented method to handle nonlinearities: the set of features F can be augmented by nonlinear transformations, e.g. by including quadratic or interaction terms (e.g. \({\phi _{i}^{2}}, \phi _{i}\phi _{j}\)) in the data matrix, and feature selection is then performed on this extended set. Alternatively, the GLM can be replaced by nonlinear classifiers, e.g. random forests [33]. While those methods can handle nonlinear relationships in the data, they lack the built-in feature selection of the Lasso and will in general not be sparse. For random forests, one can instead use the calculated feature importance measures to perform feature selection.
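For instance, a hypothetical sketch of the two extensions mentioned above, assuming X and Y as before:

from sklearn.preprocessing import PolynomialFeatures
from sklearn.ensemble import RandomForestClassifier

# nonlinear terms: augment the data matrix with squares and pairwise interactions
X_aug = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
# ... feature selection as before, now on the extended feature set X_aug

# alternative: a nonlinear classifier whose importances can guide feature selection
rf = RandomForestClassifier(n_estimators=500, class_weight="balanced").fit(X, Y)
importance = rf.feature_importances_                 # no built-in sparsity, unlike the Lasso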
In our model, we assume that cells can undergo just a single fate transition (black cells turn into blue cells, Fig. 2 a), while for example in stem cells, fate decisions are often binary, i.e. cells have to choose between two mutually exclusive follow-up states. For illustration, let us assume that black cells turn either into blue or red cells. The proposed method can easily be adapted to this setting. Two different scenarios should be distinguished: 1) The two transitions are entirely independent, i.e. there exist two separate transition rates λ 1,λ 2 and whatever fate is chosen first determines the resulting cell state [34]. In this case, one can simply split the dataset into the cells undergoing the one transition and those cells undergoing the other transition and fit the model to both sets separately. 2) The transition time is determined by a single transition rate λ, and the outcome (blue vs red cell) is determined by a probability p, which might again be a function of features such as cell density. Here, one would first build a model for the transition rate λ (simply treating red and blue cells alike) and in a second step build a model of p (considering now only blue and red cells). If it is unknown which setting applies a priori, both have to be analyzed and later compared to determine which one best explains the data.
Our model assumes a homogeneous cell population, i.e. all cells in state I (Fig. 2 a) are equivalent and share the same transition rate. In reality, however, apparently homogeneous cell populations often contain subpopulations that behave differently but cannot be distinguished a priori by e.g. surface markers. In our context, one could imagine two subpopulations of cells in state I: one subpopulation undergoes state transitions in response to cell density (i.e. its transition rate is a function of cell density), while the other obeys a transition rate that is a function of time. For our model, which is unaware of the subpopulations, two potential outcomes can be imagined: the model might identify time and cell density as relevant features but miss the fact that each cell responds to only one of them. Alternatively, the model might consider both density and time irrelevant, as neither of them is capable of explaining all of the observed data, but only a fraction of it. Here, one has to use more flexible models than a GLM. A natural choice are "Mixtures of generalized linear models" [35], where instead of fitting a single GLM to the data, multiple GLMs are fitted, each responsible for a different part of the data. Ideally, this would result in a mixture model with two GLMs, one containing only the density features and the other containing only time as relevant variables.
In time-lapse microscopy, the cell's state is usually read out via surface markers. We here assume that a change in such surface marker expression reports a cell state transition immediately. However, the marker might not be perfect, i.e. the cell may undergo the transition while the marker changes only several hours later, causing a delay between the event and its observation. If this delay is short relative to the autocorrelation time of the relevant features (e.g. if the cell density a few hours after the state change is still comparable to the density at the transition), our proposed method is still capable of detecting the effect. However, delays become more difficult to handle in the same way that tracking error degrades the performance: the noise level increases and decorrelates predictors and response variables. A much longer delay (e.g. several generations) might be caused by cell intrinsic processes, e.g. a new gene expression program is initiated after the state transition and a change in phenotype (the upregulation of a marker gene) is observed only once this program has been completed. This causes correlations between related cells that cannot be explained by the observed features (see 'Model validation using sister correlations'). Here, one has to model the delay explicitly, exploiting the fact that the particular correlations between the cells inform about the delay length: for example, if one observes strong correlations between sister cells, but no correlations between cousins, this would indicate a delay of about one generation.
Note that our approach shares certain aspects with Cox proportional hazards models [36]. The standard Cox proportional hazards model uses fixed covariates measured once per individual to predict the time to an event for that individual. However, it can be extended to account for time-varying covariates (measured several times per individual) using the counting process reformulation by Andersen and Gill [37]. Analogous to our approach, one then considers small time intervals where the covariates are constant and builds a model that predicts whether an event happened in these small intervals. This reformulation is also crucial for our approach as it allows us to handle the tree structure of the data by dissecting it into small intervals. The main difference between our model and proportional hazards is the form of the hazard rate (in our context the transition rate), which in our case is linear in the covariates (see Eqs. 6–8), while in the Cox proportional hazards model it is multiplicative in the covariates. This choice is motivated by the form of the transition rate in an earlier study [7]. In general, our assumption is as strong as the proportional hazards assumption; however, not relying on the proportional hazards machinery is beneficial, as one can then easily exchange the GLM framework for alternative machine learning techniques if required.
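A minimal sketch of this dissection into small intervals, with a hypothetical cells collection whose elements expose birth time, event time (None if censored), end of observation and a feature lookup; dt is the interval length:

import numpy as np

rows, labels = [], []
for cell in cells:
    t_stop = cell.t_event if cell.t_event is not None else cell.t_end
    for t in np.arange(cell.t_birth, t_stop, dt):
        rows.append(cell.features(t))                # covariates F_i(t), constant within the interval
        labels.append(int(cell.t_event is not None and t + dt >= t_stop))
X, Y = np.asarray(rows), np.asarray(labels)          # design matrix and event indicators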
In the current formulation, we assume that the transition rate of a cell i at time t is a function only of the features F i (t) at time t (the transition event is a point process) and not a function of the history F i (s),s<t of the cell. On the one hand, this is advantageous because no extensive tracking of cells over multiple generations is required; an accurate cell segmentation at time t suffices to assess all observable features F i (t) such as cell density. On the other hand, the method cannot identify a history-dependence of the transition rate, e.g. in a scenario where a cell integrates over the previously experienced cell densities via some internal mechanism. However, given reliable single cell tracking data and hence reliable timecourses of the features F i (t) for single cells i instead of snapshots, the presented approach can be extended to also detect history dependent transition rates. To that end, one has to augment the feature vectors entering the GLM by time-shifted versions of those features, i.e. including not only the present cell density, but also densities at previous timepoints, and fit the model as proposed. This is analogous to applying the idea of Granger causality [38] to feature timecourses F i (t) and binary cell state timecourses Y i (t), i.e. inferring what features (including their history) contain information to predict the state timecourses. Accounting for potential history-dependence with time-shifted versions of the original features complicates the inference problem due to the increasing feature space, potentially requiring a larger sample size. However, the following reasons might still allow for a faithful inference: (i) The Lasso regularization with bootstrapping is known to work well even for high dimensional problems [29]. (ii) Since the features have considerable autocorrelation (local cell density does not change rapidly over time for typical cell speeds), one only has to include the time-shifted features at intervals greater than the autocorrelation time. This leads to fewer features than considering every possible time-shift for each original feature. Still, if the transition rate depends on a very long history (e.g. several generations) this approach might become infeasible due to the increasingly large feature space. Here one has to augment the problem with some additional regularization, e.g. enforcing the weights of neighboring time-lags to be similar [39].
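A hypothetical sketch of such an augmentation, assuming per-cell feature timecourses F[cell] of shape (T, n_features) and lags chosen in units of dt, spaced beyond the autocorrelation time:

import numpy as np

lags = [0, 5, 10]                                    # assumed lags, in units of dt

def lagged_features(F_cell, t):
    # concatenate present and past feature vectors of one cell at time index t
    return np.concatenate([F_cell[max(t - l, 0)] for l in lags])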
We showed how the kernel for spatial interactions can be learned from the data using a set of concentric basis functions with width Δ r controlling the resolution of the kernel. From a biological perspective, the most interesting quantity of the interaction kernel is its range, i.e. the distance on which cells communicate and influence each other. A kernel range on the order of a typical cell size indicates that state transitions are initiated or inhibited due to cell-cell contact (e.g. Delta-Notch signaling [40]). A large kernel range suggests communication via signaling molecules, e.g. cytokines that are able to instruct cell fate choice in stem- and progenitor cells [17, 41]. This range can be inferred using a relatively coarse kernel resolution (large Δ r, also see Additional file 1: Figure S2). A fine resolution (small Δ r) has to be chosen if the precise shape of the kernel is of interest. From the exact shape of the kernel one could learn about the signaling mechanism, e.g. how the signal is integrated by the receiving cells. For example, a long range tophat kernel would indicate a threshold response in the signal-receiving cell, i.e. the cell's surface receptor transfers the signal into the cell only if the signaling molecule exceeds a certain concentration. A long range Gaussian kernel would instead indicate that the receiver responds gradually to the signal. However, a fine kernel resolution comes at the expense of a more challenging inference problem: not only does the number of features ϕ i increase (if the same overall range of interactions has to be covered) but, more importantly, the contributions of the individual ϕ i become weaker such that they are more likely eliminated by the Lasso regularization (analogous to ϕ 0,ϕ 1 in Fig. 4 a-c). This can be compensated either by increasing the sample size or by putting constraints on the weights of the ϕ i , e.g. enforcing the weights of neighboring ϕ i to be similar, resulting in smooth kernels (similar to the fused lasso [39]).
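For illustration, the density features ϕ k themselves can be computed as cell counts in concentric annuli of width Δ r around a focal cell; the positions, annulus width and number of rings below are assumptions:

import numpy as np

def density_features(pos, i, dr=20.0, n_rings=10):
    # pos: (n_cells, 2) array of cell centroids at one timepoint; i: index of the focal cell
    d = np.delete(np.linalg.norm(pos - pos[i], axis=1), i)      # distances, focal cell excluded
    counts, _ = np.histogram(d, bins=np.arange(0, (n_rings + 1) * dr, dr))
    return counts                                               # phi_0, ..., phi_{n_rings-1}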
With respect to regulating features, our method can be extended to any other parameter that is experimentally accessible. In tumor growth, for example, the presence (or local density) of distinct cancer cell subtypes might influence transitions between states of different proliferative potential [42]. This could be analyzed by introducing cell-type-specific density features \({\phi _{i}^{c}}\) that take into account only a certain cell type c when calculating local cell density. For blood progenitor cells, including the expression levels of Pu.1 [43], a pivotal fate determining factor [44], as a feature will allow comparing extrinsic and intrinsic [45] effects on cellular plasticity.
Our approach is designed for dynamic data provided by time-lapse microscopy, which allows the observation of state transitions in their spatiotemporal and genealogical context. The requirements for an appropriate dataset are (i) single-cell genealogies obtained from automatic or manual cell tracking, (ii) at least as many annotated state transitions as determined by our analysis, and (iii) the identification of all cells surrounding a transition event in a sufficiently large radius. To the best of our knowledge, no such dataset exists to date, but manual and automated tracking tools are steadily gaining accuracy and efficiency ([13, 46]; [Schwarzfischer et al., in revision]). Moreover, our method relies only on short trackings of one cell cycle to quantify sister correlations (Fig. 7). Since fluorescent fate markers exist for various systems, morphological quantification has been shown to be usable for fate recognition [47], and robust cell segmentation algorithms work on full time-lapse movies [16], we believe that adequate datasets from various cell systems will emerge in the near future. Due to the method's generality, many different types of cell state transitions can be investigated in their spatiotemporal context: for example, one can study the influence of cytokine signaling between differentiating blood stem- and progenitor cells [17], i.e. whether the presence of one cell type (potentially secreting the cytokine) promotes specific differentiation decisions. In mouse embryonic stem cells, the impact of cell-cell interactions on transitions between Nanog-high and Nanog-low cells [48, 49], or on the cell fate decision between epiblast and primitive endoderm [50], could be analyzed with our proposed method. Similarly, transitions between cancer stem cells and non-tumorigenic cells [51], or the epithelial-mesenchymal transition, which is thought to initiate tumor metastases [52], can be analyzed in their spatiotemporal context.
Waddington CH. Principles of Embryology. New York: New York, Macmillan; 1956, p. 528.
Orkin SH, Zon LI. Hematopoiesis: an evolving paradigm for stem cell biology. Cell. 2008; 132(4):631–44.
Gage FH, Temple S. Neural stem cells: Generating and regenerating the brain. Neuron. 2013; 80(3):588–601.
Davis RL, Weintraub H, Lassar AB. Expression of a single transfected cDNA converts fibroblasts to myoblasts. Cell. 1987; 51(6):987–1000.
Takahashi K, Yamanaka S. Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell. 2006; 126(4):663–76.
Yamanaka S. A fresh look at iPS cells. Cell. 2009; 137(1):13–7.
Marr C, Strasser M, Schwarzfischer M, Schroeder T, Theis FJ. Multi-scale modeling of GMP differentiation based on single-cell genealogies. FEBS J. 2012; 279(18):3488–500.
Lorincz MT. Optimized Neuronal Differentiation of Murine Embryonic Stem Cells: Role of Cell Density In: Turksen K, editor. Embryonic Stem Cell Protocols. New Jersey: Humana Press: 2006. p. 55–69.
Morrison SJ, Spradling AC. Stem cells and niches: mechanisms that promote stem cell maintenance throughout life. Cell. 2008; 132(4):598–611.
Scherf N, Herberg M, Thierbach K, Zerjatke T, Kalkan T, Humphreys P, et al. Imaging, quantification and visualization of spatio-temporal patterning in mESC colonies under different culture conditions. Bioinformatics. 2012; 28(18):556–61.
Shivanandan A, Radenovic A, Sbalzarini IF. MosaicIA: an ImageJ/Fiji plugin for spatial pattern and interaction analysis. BMC Bioinformatics. 2013; 14:349.
Schroeder T. Imaging stem-cell-driven regeneration in mammals. Nature. 2008; 453(7193):345–51.
Amat F, Lemon W, Mossing D, McDole K. Fast, accurate reconstruction of cell lineages from large-scale fluorescence microscopy data. Nat Methods. 2014; 11(9):951–8.
Haseltine EL, Rawlings JB. Approximate simulation of coupled fast and slow reactions for stochastic chemical kinetics. J Chem Phys. 2002; 117(15):6959.
Fuchs C. Inference for Diffusion Processes: With Applications in Life Sciences. Berlin: Springer; 2013.
Buggenthin F, Marr C, Schwarzfischer M, Hoppe PS, Hilsenbeck O, Schroeder T, et al. An automatic method for robust and fast cell detection in bright field images from high-throughput microscopy. BMC Bioinformatics. 2013; 14(1):297.
Rieger MA, Hoppe PS, Smejkal B, Eitelhuber AC, Schroeder T. Hematopoietic cytokines can instruct lineage choice. Science. 2009; 325:217–8.
Costa MR, Ortega F, Brill MS, Beckervordersandforth R, Petrone C, Schroeder T, et al. Continuous live imaging of adult neural stem cell division and lineage progression in vitro. Development. 2011; 138(6):1057–68.
Eilken HM, Nishikawa SI, Schroeder T. Continuous single-cell imaging of blood generation from haemogenic endothelium. Nature. 2009; 457(7231):896–900.
Francis K, Palsson BO. Effective intercellular communication distances are determined by the relative time constants for cyto/chemokine secretion and diffusion. Proc Natl Acad Sci U S A. 1997; 94(23):12258–62.
Williams P, Camara M, Hardman A, Swift S, Milton D, Hope VJ, et al. Quorum sensing and the population-dependent control of virulence. Philos Trans R Soc Lond Series B Biol Sci. 2000; 355(1397):667–80.
McCullagh P, Nelder JA. Generalized Linear Models. London: Chapman and Hall/CRC; 1989.
Zou G. A Modified Poisson Regression Approach to Prospective Studies with Binary Data. Am J Epidemiol. 2004; 159(7):702–6.
Tibshirani R. Regression shrinkage and selection via the lasso. J R Stat Soc Ser B. 1996; 58(1):267–88.
Murphy KP. Machine Learning: a Probabilistic Perspective. Cambridge, Massachusetts: The MIT Press; 2012.
Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning. New York: Springer; 2009.
He H, Garcia E. Learning from Imbalanced Data. IEEE Trans Knowl Data Eng. 2009; 21(9):1263–84.
Guns M, Vanacker V, Glade T. Logistic regression applied to natural hazards: rare event logistic regression with replications. Nat Hazards Earth Syst Sci. 2012; 12:1937–47.
Bach F. Bolasso: model consistent lasso estimation through the bootstrap. In: Proceedings of the 25th International Conference on Machine Learning. Madison, Wisconsin: Omnipress: 2008. p. 33–40.
Maska M, Ulman V, Svoboda D, Matula P, Matula P, Ederra C, et al. A benchmark for comparison of cell tracking algorithms. Bioinformatics. 2014; 30(11):1–8.
Snijder B, Sacher R, Rämö P, Damm EM, Liberali P, Pelkmans L. Population context determines cell-to-cell variability in endocytosis and virus infection. Nature. 2009; 461(7263):520–3.
Knapp B, Rebhan I, Kumar A, Matula P, Kiani NA, Binder M, et al. Normalizing for individual cell population context in the analysis of high-content cellular screens. BMC Bioinformatics. 2011; 12(1):485.
Breiman L. Random forests. Mach Learn. 2001; 45:5–32.
Kuchina A, Espinar L, Çagatay T, Balbin AO, Zhang F, Alvarado A, et al. Temporal competition between differentiation programs determines cell fate choice. Mol Syst Biol. 2011; 7(557):1–11.
Grün B, Leisch F. Finite Mixtures of Generalized Linear Regression Models. In: Recent Advances in Linear Models and Related Areas SE - 11. Heidelberg, Germany: Physica Verlag: 2008. p. 205–30.
Cox D. Regression models and life tables. JR stat soc B. 1972; 34(2):187–220.
Andersen PK, Gill RD. Cox's Regression Model for Counting Processes: A Large Sample Study. Ann Stat. 1982; 10:1100–20.
Granger CWJ. Investigating Causal Relations by Econometric Models and Cross-spectral Methods. Econometrica. 1969; 37(3):424–38.
Tibshirani R, Saunders M, Rosset S, Zhu J, Knight K. Sparsity and smoothness via the fused lasso. J R Stat Soc Ser B: Stat Methodol. 2005; 67(1):91–108.
Appel B, Givan LA, Eisen JS. Delta-Notch signaling and lateral inhibition in zebrafish spinal cord development. BMC Dev Biol. 2001; 1:13.
Metcalf D. Hematopoietic cytokines. Blood. 2008; 111(2):485–91.
Stingl J, Caldas C. Molecular heterogeneity of breast carcinomas and the cancer stem cell hypothesis. Nat Rev Cancer. 2007; 7(10):791–9.
Kueh HY, Champhekhar A, Nutt SL, Elowitz MB, Rothenberg EV. Positive Feedback Between PU.1 and the Cell Cycle Controls Myeloid Differentiation. Science. 2013; 341(6146):670–3.
Krumsiek J, Marr C, Schroeder T, Theis FJ. Hierarchical differentiation of Myeloid Progenitors is encoded in the transcription factor network. PLoS ONE. 2011; 6(8):22649.
Strasser M, Theis FJ, Marr C. Stability and multiattractor dynamics of a toggle switch based on a two-stage model of stochastic gene expression. Biophys J. 2012; 102(1):19–29.
Chenouard N, Smal I, de Chaumont F, Maška M, Sbalzarini IF, Meijering E. Objective comparison of particle tracking methods. Nat Methods. 2014; 11(3):281–9.
Cohen AR, Gomes FLAF, Roysam B, Cayouette M. Computational prediction of neural progenitor cell fates. Nat Methods. 2010; 7(3):213–8.
Chambers I, Silva J, Colby D, Nichols J, Nijmeijer B, Robertson M, et al. Nanog safeguards pluripotency and mediates germline development. Nature. 2007; 450:1230–4.
Herberg M, Kalkan T, Glauche I, Smith A, Roeder I. A model-based analysis of culture-dependent phenotypes of mESCs. PloS ONE. 2014; 9(3):92496.
Schröter C, Rué P, Mackenzie JP, Martinez Arias A. FGF/MAPK signaling sets the switching threshold of a bistable circuit controlling cell fate decisions in ES cells. bioRxiv. 2015. http://biorxiv.org/content/early/2015/04/21/015404.
Gupta PB, Fillmore CM, Jiang G, Shapira SD, Tao K, Kuperwasser C, Lander ES. Stochastic state transitions give rise to phenotypic equilibrium in populations of cancer cells. Cell. 2011; 146(4):633–44.
Magee JA, Piskounova E, Morrison SJ. Cancer stem cells: impact, heterogeneity, and uncertainty. Cancer Cell. 2012; 21(3):283–96.
We thank Felix Buggenthin, Florian Büttner, Bettina Knapp, Michael Laimighofer, Michael Rieger, and Timm Schroeder for helpful discussions on the manuscript. Moreover, we acknowledge the comments of the unknown reviewers, who improved the quality of the manuscript. This work was supported by the German Science Foundation DFG (project 'Inference of Differentiation Decision Times from Blood Stem Cell Genealogies' and SPP 1356) and by the European Research Council (ERC Starting Grant - Latent Causes).
Institute of Computational Biology, Helmholtz Zentrum München, German Research Center for Environmental Health, Ingolstädter Landstr. 1, Neuherberg, 85764, Germany: Michael K. Strasser, Justin Feigelman, Fabian J. Theis & Carsten Marr
Department of Mathematics, Technische Universität München, Boltzmannstr. 3, Garching, 85747, Germany: Justin Feigelman & Fabian J. Theis
Correspondence to Carsten Marr.
MKS developed the method and conducted the simulation study. JF provided the simulation data. FJT critically commented on the study and the manuscript. MKS wrote the manuscript with CM. CM and MKS designed the study. CM supervised the study. All authors read and approved the final manuscript.
MATLAB code is provided at https://github.com/QSCD.
Additional file 1
Supplementary Text. This document provides a detailed description of the full probabilistic model, Bayesian credibility intervals, the proof of sample independence, and the relation of log-binomial and Poisson regression. (PDF 345 kb)
Strasser, M.K., Feigelman, J., Theis, F.J. et al. Inference of spatiotemporal effects on cellular state transitions from time-lapse microscopy. BMC Syst Biol 9, 61 (2015) doi:10.1186/s12918-015-0208-5
Cell state transition
Time-lapse microscopy
Spatial interaction
It's not clear that there is much of an effect at all. This makes it hard to design a self-experiment - how big an effect on, say, dual n-back should I be expecting? Do I need an arduous long trial or an easy short one? This would principally determine the value of information too; chocolate seems like a net benefit even if it does not affect the mind, but it's also fairly costly, especially if one likes (as I do) dark chocolate. Given the mixed research, I don't think cocoa powder is worth investigating further as a nootropic.
Learning how products have worked for other users can help you feel more confident in your purchase. Similarly, your opinion may help others find a good quality supplement. After you have started using a particular supplement and experienced the benefits of nootropics for memory, concentration, and focus, we encourage you to come back and write your own review to share your experience with others.
Ethical issues also arise with the use of drugs to boost brain power. Their use as cognitive enhancers isn't currently regulated. But should it be, just as the use of certain performance-enhancing drugs is regulated for professional athletes? Should universities consider dope testing to check that students aren't gaining an unfair advantage through drug use?
There are hundreds of cognitive enhancing pills (so called smart pills) on the market that simply do NOT work! With each of them claiming they are the best, how can you find the brain enhancing supplements that are both safe and effective? Our top brain enhancing pills have been picked by sorting and ranking the top brain enhancing products ourselves. Our ratings are based on the following criteria.
For obvious reasons, it's difficult for researchers to know just how common the "smart drug" or "neuro-enhancing" lifestyle is. However, a few recent studies suggest cognition hacking is appealing to a growing number of people. A survey conducted in 2016 found that 15% of University of Oxford students were popping pills to stay competitive, a rate that mirrored findings from other national surveys of UK university students. In the US, a 2014 study found that 18% of sophomores, juniors, and seniors at Ivy League colleges had knowingly used a stimulant at least once during their academic career, and among those who had ever used uppers, 24% said they had popped a little helper on eight or more occasions. Anecdotal evidence suggests that pharmacological enhancement is also on the rise within the workplace, where modafinil, which treats sleep disorders, has become particularly popular.
Price discrimination is aided by barriers such as ignorance and oligopolies. An example of the former would be when I went to a Food Lion grocery store in search of spices, and noticed that there was a second selection of spices in the Hispanic/Latino ethnic food aisle, with unit prices perhaps a fourth of the regular McCormick-brand spices; I rather doubt that regular cinnamon varies that much in quality. An example of the latter would be using veterinary drugs on humans - any doctor to do so would probably be guilty of medical malpractice even if the drugs were manufactured in the same factories (as well they might be, considering economies of scale). Similarly, we can predict that whenever there is a veterinary drug which is chemically identical to a human drug, the veterinary drug will be much cheaper, regardless of actual manufacturing cost, than the human drug because pet owners do not value their pets more than themselves. Human drugs are ostensibly held to a higher standard than veterinary drugs; so if veterinary prices are higher, then there will be an arbitrage incentive to simply buy the cheaper human version and downgrade them to veterinary drugs.
My answer is that this is not a lot of research or very good research (not nearly as good as the research on nicotine, eg.), and assuming it's true, I don't value long-term memory that much because LTM is something that is easily assisted or replaced (personal archives, and spaced repetition). For me, my problems tend to be more about akrasia and energy and not getting things done, so even if a stimulant comes with a little cost to long-term memory, it's still useful for me. I'm going to continue to use the caffeine. It's not so bad in conjunction with tea, is very cheap, and I'm already addicted, so why not? Caffeine is extremely cheap, addictive, has minimal effects on health (and may be beneficial, from the various epidemiological associations with tea/coffee/chocolate & longevity), and costs extra to remove from drinks popular regardless of their caffeine content (coffee and tea again). What would be the point of carefully investigating it? Suppose there was conclusive evidence on the topic, the value of this evidence to me would be roughly $0 or, since ignorance is bliss, negative money - because unless the negative effects were drastic (which current studies rule out, although tea has other issues like fluoride or metal contents), I would not change anything about my life. Why? I enjoy my tea too much. My usual tea seller doesn't even have decaffeinated oolong in general, much less various varieties I might want to drink, apparently because de-caffeinating is so expensive it's not worthwhile. What am I supposed to do, give up my tea and caffeine just to save on the cost of caffeine? Buy de-caffeinating machines (which I couldn't even find any prices for, googling)? This also holds true for people who drink coffee or caffeinated soda. (As opposed to a drug like modafinil which is expensive, and so the value of a definitive answer is substantial and would justify some more extensive calculating of cost-benefit.)
The leadership position in the market is held by the Americas. The region has favorable reimbursement policies and a high incidence of chronic and lifestyle diseases, which has impacted the market significantly. Moreover, the region's developed economies have a strong affinity toward the adoption of highly advanced technology. This falls in line with these countries' well-developed healthcare sectors.
This continued up to 1 AM, at which point I decided not to take a second armodafinil (why spend a second pill to gain what would likely be an unproductive set of 8 hours?) and finish up the experiment with some n-backing. My 5 rounds: 60/38/62/44/50. This was surprising. Compare those scores with scores from several previous days: 39/42/44/40/20/28/36. I had estimated before the n-backing that my scores would be in the low-end of my usual performance (20-30%) since I had not slept for the past 41 hours, and instead, the lowest score was 38%. If one did not know the context, one might think I had discovered a good nootropic! Interesting evidence that armodafinil preserves at least one kind of mental performance.
Results: Women with high caffeine intakes had significantly higher rates of bone loss at the spine than did those with low intakes (−1.90 ± 0.97% compared with 1.19 ± 1.08%; P = 0.038). When the data were analyzed according to VDR genotype and caffeine intake, women with the tt genotype had significantly (P = 0.054) higher rates of bone loss at the spine (−8.14 ± 2.62%) than did women with the TT genotype (−0.34 ± 1.42%) when their caffeine intake was >300 mg/d…In 1994, Morrison et al (22) first reported an association between vitamin D receptor gene (VDR) polymorphism and BMD of the spine and hip in adults. After this initial report, the relation between VDR polymorphism and BMD, bone turnover, and bone loss has been extensively evaluated. The results of some studies support an association between VDR polymorphism and BMD (23-,25), whereas other studies showed no evidence for this association (26,27)…At baseline, no significant differences existed in serum parathyroid hormone, serum 25-hydroxyvitamin D, serum osteocalcin, and urinary N-telopeptide between the low- and high-caffeine groups (Table 1⇑). In the longitudinal study, the percentage of change in serum parathyroid hormone concentrations was significantly lower in the high-caffeine group than in the low-caffeine group (Table 2⇑). However, no significant differences existed in the percentage of change in serum 25-hydroxyvitamin D
Cognitive control is a broad concept that refers to guidance of cognitive processes in situations where the most natural, automatic, or available action is not necessarily the correct one. Such situations typically evoke a strong inclination to respond but require people to resist responding, or they evoke a strong inclination to carry out one type of action but require a different type of action. The sources of these inclinations that must be overridden are various and include overlearning (e.g., the overlearned tendency to read printed words in the Stroop task), priming by recent practice (e.g., the tendency to respond in the go/no-go task when the majority of the trials are go trials, or the tendency to continue sorting cards according to the previously correct dimension in the Wisconsin Card Sorting Test [WCST]; Grant & Berg, 1948) and perceptual salience (e.g., the tendency to respond to the numerous flanker stimuli as opposed to the single target stimulus in the flanker task). For the sake of inclusiveness, we also consider the results of studies of reward processing in this section, in which the response tendency to be overridden comes from the desire to have the reward immediately.
For Malcolm Gladwell, "the thing with doping is that it allows you to train harder than you would have done otherwise." He argues that we cannot easily call someone a cheater on the basis of having used a drug for this purpose. The equivalent, he explains, would be a student who steals an exam paper from the teacher, and then instead of going home and not studying at all, goes to a library and studies five times harder.
There is a similar substance which can be purchased legally almost anywhere in the world called adrafinil. This is a prodrug for modafinil. You can take it, and then the body will metabolize it into modafinil, providing similar beneficial effects. Unfortunately, it takes longer for adrafinil to kick in—about an hour—rather than a matter of minutes. In addition, there are more potential side-effects to taking the prodrug as compared to the actual drug.
Table 4 lists the results of 27 tasks from 23 articles on the effects of d-AMP or MPH on working memory. The oldest and most commonly used type of working memory task in this literature is the Sternberg short-term memory scanning paradigm (Sternberg, 1966), in which subjects hold a set of items (typically letters or numbers) in working memory and are then presented with probe items, to which they must respond "yes" (in the set) or "no" (not in the set). The size of the set, and hence the working memory demand, is sometimes varied, and the set itself may be varied from trial to trial to maximize working memory demands or may remain fixed over a block of trials. Taken together, the studies that have used a version of this task to test the effects of MPH and d-AMP on working memory have found mixed and somewhat ambiguous results. No pattern is apparent concerning the specific version of the task or the specific drug. Four studies found no effect (Callaway, 1983; Kennedy, Odenheimer, Baltzley, Dunlap, & Wood, 1990; Mintzer & Griffiths, 2007; Tipper et al., 2005), three found faster responses with the drugs (Fitzpatrick, Klorman, Brumaghim, & Keefover, 1988; Ward et al., 1997; D. E. Wilson et al., 1971), and one found higher accuracy in some testing sessions at some dosages, but no main effect of drug (Makris et al., 2007). The meaningfulness of the increased speed of responding is uncertain, given that it could reflect speeding of general response processes rather than working memory–related processes. Aspects of the results of two studies suggest that the effects are likely due to processes other than working memory: D. E. Wilson et al. (1971) reported comparable speeding in a simple task without working memory demands, and Tipper et al. (2005) reported comparable speeding across set sizes.
And there are other uses that may make us uncomfortable. The military is interested in modafinil as a drug to maintain combat alertness. A drug such as propranolol could be used to protect soldiers from the horrors of war. That could be considered a good thing – post-traumatic stress disorder is common in soldiers. But the notion of troops being unaffected by their experiences makes many feel uneasy.
Popular among computer programmers, oxiracetam, another racetam, has been shown to be effective in recovery from neurological trauma and improvement to long-term memory. It is believed to effective in improving attention span, memory, learning capacity, focus, sensory perception, and logical thinking. It also acts as a stimulant, increasing mental energy, alertness, and motivation.
This world is a competitive place. If you're not seeking an advantage, you'll get passed by those who do. Whether you're studying for a final exam or trying to secure a big business deal, you need a definitive mental edge. Are smart drugs and brain-boosting pills the answer for cognitive enhancement in 2019? If you're not cheating, you're not trying, right? Bad advice for some scenarios, but there is a grain of truth to every saying—even this one.
The next morning, four giant pills' worth of the popular piracetam-and-choline stack made me... a smidge more alert, maybe? (Or maybe that was just the fact that I had slept pretty well the night before. It was hard to tell.) Modafinil, which many militaries use as their "fatigue management" pill of choice, boasts glowing reviews from satisfied users. But in the United States, civilians need a prescription to get it; without one, they are stuck using adrafinil, a precursor substance that the body metabolizes into modafinil after ingestion. Taking adrafinil in lieu of coffee just made me keenly aware that I hadn't had coffee.
The blood half-life is 12-36 hours; hence two or three days ought to be enough to build up and wash out. A week-long block is reasonable since that gives 5 days for effects to manifest, although month-long blocks would not be a bad choice either. (I prefer blocks which fit in round periods because it makes self-experiments easier to run if the blocks fit in normal time-cycles like day/week/month. The most useless self-experiment is the one abandoned halfway.)
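(Not part of the original reasoning, just a toy Python sketch of what such a blinded, randomized week-long block assignment could look like; the number of weeks is an arbitrary assumption.)

import random

n_weeks = 8                                          # assumed experiment length
blocks = ["active", "placebo"] * (n_weeks // 2)
random.Random(2015).shuffle(blocks)                  # fixed seed so the schedule is reproducible
for week, condition in enumerate(blocks, start=1):
    print(week, condition)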
"I have a bachelors degree in Nutrition Science. Cavin's Balaster's How to Feed a Brain is one the best written health nutrition books that I have ever read. It is evident that through his personal journey with a TBI and many years of research Cavin has gained a great depth of understanding on the biomechanics of nutrition has how it relates to the structure of the brain and nervous system, as well as how all of the body systems intercommunicate with one another. He then takes this complicated knowledge and breaks it down into a concise and comprehensive book. If you or your loved one is suffering from ANY neurological disorder or TBI please read this book."
The chemicals he takes, dubbed nootropics from the Greek "noos" for "mind", are intended to safely improve cognitive functioning. They must not be harmful, have significant side-effects or be addictive. That means well-known "smart drugs" such as the prescription-only stimulants Adderall and Ritalin, popular with swotting university students, are out. What's left under the nootropic umbrella is a dizzying array of over-the-counter supplements, prescription drugs and unclassified research chemicals, some of which are being trialled in older people with fading cognition.
Ngo has experimented with piracetam himself ("The first time I tried it, I thought, 'Wow, this is pretty strong for a supplement.' I had a little bit of reflux, heartburn, but in general it was a cognitive enhancer. . . . I found it helpful") and the neurotransmitter DMEA ("You have an idea, it helps you finish the thought. It's for when people have difficulty finishing that last connection in the brain").
Of course, there are drugs out there with more transformative powers. "I think it's very clear that some do work," says Andrew Huberman, a neuroscientist based at Stanford University. In fact, there's one category of smart drugs which has received more attention from scientists and biohackers – those looking to alter their own biology and abilities – than any other. These are the stimulants.
So what's the catch? Well, it's potentially addictive for one. Anything that messes with your dopamine levels can be. And Patel says there are few long-term studies on it yet, so we don't know how it will affect your brain chemistry down the road, or after prolonged, regular use. Also, you can't get it very easily, or legally for that matter, if you live in the U.S. It's classified as a schedule IV controlled substance. That's where Adrafinil comes in.
Two studies investigated the effects of MPH on reversal learning in simple two-choice tasks (Clatworthy et al., 2009; Dodds et al., 2008). In these tasks, participants begin by choosing one of two stimuli and, after repeated trials with these stimuli, learn that one is usually rewarded and the other is usually not. The rewarded and nonrewarded stimuli are then reversed, and participants must then learn to choose the new rewarded stimulus. Although each of these studies found functional neuroimaging correlates of the effects of MPH on task-related brain activity (increased blood oxygenation level-dependent signal in frontal and striatal regions associated with task performance found by Dodds et al., 2008, using fMRI and increased dopamine release in the striatum as measured by increased raclopride displacement by Clatworthy et al., 2009, using PET), neither found reliable effects on behavioral performance in these tasks. The one significant result concerning purely behavioral measures was Clatworthy et al.'s (2009) finding that participants who scored higher on a self-report personality measure of impulsivity showed more performance enhancement with MPH. MPH's effect on performance in individuals was also related to its effects on individuals' dopamine activity in specific regions of the caudate nucleus.
Each nootropic comes with a recommended amount to take. This is almost always based on a healthy adult male with an average weight and 'normal' metabolism. Nootropics (and many other drugs) are almost exclusively tested on healthy men. If you are a woman, older, smaller or in any other way not the 'average' man, always take into account that the quantity could be different for you.
More than once I have seen results indicating that high-IQ types benefit the least from random nootropics; nutritional deficits are the premier example, because high-IQ types almost by definition suffer from no major deficiencies like iodine. But a stimulant modafinil may be another such nootropic (see Cognitive effects of modafinil in student volunteers may depend on IQ, Randall et al 2005), which mentions:
Swanson J, Arnold LE, Kraemer H, Hechtman L, Molina B, Hinshaw S, Wigal T. Evidence, interpretation and qualification from multiple reports of long-term outcomes in the Multimodal Treatment Study of Children With ADHD (MTA): Part II. Supporting details. Journal of Attention Disorders. 2008;12:15–43. doi: 10.1177/1087054708319525. [PubMed] [CrossRef]
If stimulants truly enhance cognition but do so to only a small degree, this raises the question of whether small effects are of practical use in the real world. Under some circumstances, the answer would undoubtedly be yes. Success in academic and occupational competitions often hinges on the difference between being at the top or merely near the top. A scholarship or a promotion that can go to only one person will not benefit the runner-up at all. Hence, even a small edge in the competition can be important.
Next, if these theorized safe and effective pills don't just get you through a test or the day's daily brain task but also make you smarter, whatever smarter means, then what? Where's the boundary between genius and madness? If Einstein had taken such drugs, would he have created a better theory of gravity? Or would he have become delusional, chasing quantum ghosts with no practical application, or worse yet, string theory. (Please use "string theory" in your subject line for easy sorting of hate mail.)
My first dose on 1 March 2017, at the recommended 0.5ml/1.5mg was miserable, as I felt like I had the flu and had to nap for several hours before I felt well again, requiring 6h to return to normal; after waiting a month, I tried again, but after a week of daily dosing in May, I noticed no benefits; I tried increasing to 3x1.5mg but this immediately caused another afternoon crash/nap on 18 May. So I scrapped my cytisine. Oh well.
Adrafinil is Modafinil's predecessor: first produced in 1974, it was originally tested as a potential narcolepsy drug and immediately showed promise as a wakefulness-promoting compound. Further research showed that Adrafinil is metabolized in the liver into modafinil and inactive modafinil acid, and Modafinil has since been recognized as the primary active compound in Adrafinil.
There is evidence to suggest that modafinil, methylphenidate, and amphetamine enhance cognitive processes such as learning and working memory...at least on certain laboratory tasks. One study found that modafinil improved cognitive task performance in sleep-deprived doctors. Even in non-sleep deprived healthy volunteers, modafinil improved planning and accuracy on certain cognitive tasks. Similarly, methylphenidate and amphetamine also enhanced performance of healthy subjects in certain cognitive tasks.
One of the most widely known classes of smart drugs on the market, Racetams, have a long history of use and a lot of evidence of their effectiveness. They hasten the chemical exchange between brain cells, directly benefiting our mental clarity and learning process. They are generally not controlled substances and can be purchased without a prescription in a lot of locations globally.
But how, exactly, does he do it? Sure, Cruz typically eats well, exercises regularly and tries to get sufficient sleep, and he's no stranger to coffee. But he has another tool in his toolkit that he finds makes a noticeable difference in his ability to efficiently and effectively conquer all manner of tasks: Alpha Brain, a supplement marketed to improve memory, focus and mental quickness.
There is no shortage of nootropics available for purchase online that can be shipped to you nearly anywhere in the world. Yet many of these supplements and drugs have very few studies, particularly human studies, confirming their effects. While this lack of research may not scare away more adventurous neurohackers, many people would prefer to […]
The choline-based class of smart drugs play important cognitive roles in memory, attention, and mood regulation. Acetylcholine (ACh) is one of the brain's primary neurotransmitters, and also vital in the proper functioning of the peripheral nervous system. Studies with rats have shown that certain forms of learning and neural plasticity seem to be impossible in acetylcholine-depleted areas of the brain. This is particularly worth mentioning because (as noted above under the Racetams section), the Racetam class of smart drugs tends to deplete cholines from the brain, so one of the classic "supplement stacks" – chemical supplements that are used together – are Piracetam and Choline Bitartrate. Cholines can also be found in normal food sources, like egg yolks and soybeans.
(If I am not deficient, then supplementation ought to have no effect.) The previous material on modern trends suggests a prior >25%, and higher than that if I were female. However, I was raised on a low-salt diet because my father has high blood pressure, and while I like seafood, I doubt I eat it more often than weekly. I suspect I am somewhat iodine-deficient, although I don't believe as confidently as I did that I had a vitamin D deficiency. Let's call this one 75%.
Finally, it's not clear that caffeine results in performance gains after long-term use; homeostasis/tolerance is a concern for all stimulants, but especially for caffeine. It is plausible that all caffeine consumption does for the long-term chronic user is restore performance to baseline. (Imagine someone waking up and drinking coffee, and their performance improves - well, so would the performance of a non-addict who is also slowly waking up!) See for example, James & Rogers 2005, Sigmon et al 2009, and Rogers et al 2010. A cross-section of thousands of participants in the Cambridge brain-training study found caffeine intake showed negligible effect sizes for mean and component scores (participants were not told to use caffeine, but the training was recreational & difficult, so one expects some difference).
On the plus side:
- I noticed the less-fatigue thing to a greater extent, getting out of my classes much less tired than usual. (Caveat: my sleep schedule recently changed for the saner, so it's possible that's responsible. I think it's more the piracetam+choline, though.)
- One thing I wasn't expecting was a decrease in my appetite - nobody had mentioned that in their reports. I don't like being bothered by my appetite (I know how to eat fine without it reminding me), so I count this as a plus.
- Fidgeting was reduced further
Clarke and Sokoloff (1998) remarked that although [a] common view equates concentrated mental effort with mental work…there appears to be no increased energy utilization by the brain during such processes (p. 664), and …the areas that participate in the processes of such reasoning represent too small a fraction of the brain for changes in their functional and metabolic activities to be reflected in the energy metabolism of the brain… (p. 675).
The effect? 3 or 4 weeks later, I'm not sure. When I began putting all of my nootropic powders into pill-form, I put half a lithium pill in each, and nevertheless ran out of lithium fairly quickly (3kg of piracetam makes for >4000 OO-size pills); those capsules were buried at the bottom of the bucket under lithium-less pills. So I suddenly went cold-turkey on lithium. Reflecting on the past 2 weeks, I seem to have been less optimistic and productive, with items now lingering on my To-Do list which I didn't expect to. An effect? Possibly.
Neuroplasticity, or the brain's ability to change and reorganize itself in response to intrinsic and extrinsic factors, indicates great potential for us to enhance brain function by medical or other interventions. Psychotherapy has been shown to induce structural changes in the brain. Other interventions that positively influence neuroplasticity include meditation, mindfulness, and compassion.
Even though smart drugs come with a long list of benefits, their misuse can cause negative side effects. Excess use can cause anxiety, fear, headaches, increased blood pressure, and more. Considering this, it is imperative to study usage instructions: how often can you take the pill, the correct dosage and interaction with other medication/supplements.
On the other metric, suppose we removed the creatine? Dropping 4 grams of material means we only need to consume 5.75 grams a day, covered by 8 pills (compared to 13 pills). We save 5,000 pills, which would have cost $45 and also don't spend the $68 for the creatine; assuming a modafinil formulation, that drops our $1761 down to $1648 or $1.65 a day. Or we could remove both the creatine and modafinil, for a grand total of $848 or $0.85 a day, which is pretty reasonable.
As discussed in my iodine essay (FDA adverse events), iodine is a powerful health intervention as it eliminates cretinism and improves average IQ by a shocking magnitude. If this effect were possible for non-fetuses in general, it would be the best nootropic ever discovered, and so I looked at it very closely. Unfortunately, after going through ~20 experiments looking for ones which intervened with iodine post-birth and took measures of cognitive function, my meta-analysis concludes that: the effect is small and driven mostly by one outlier study. Once you are born, it's too late. But the results could be wrong, and iodine might be cheap enough to take anyway, or take for non-IQ reasons. (This possibility was further weakened for me by an August 2013 blood test of TSH which put me at 3.71 uIU/ml, comfortably within the reference range of 0.27-4.20.)
l-theanine (Examine.com) is occasionally mentioned on Reddit or Imminst or LessWrong but is rarely a top-level post or article; this is probably because theanine was discovered a very long time ago (>61 years ago), and it's a pretty straightforward substance. It's a weak relaxant/anxiolytic (Google Scholar) which is possibly responsible for a few of the health benefits of tea, and which works synergistically with caffeine (and is probably why caffeine delivered through coffee feels different from the same amount consumed in tea - in one study, separate caffeine and theanine were a mixed bag, but the combination beat placebo on all measurements). The half-life in humans seems to be pretty short, with van der Pijl 2010 putting it at ~60 minutes. This suggests to me that regular tea consumption over a day is best, or at least that one should lower caffeine use - combining caffeine and theanine into a single-dose pill has the problem of caffeine's half-life being much longer, so the caffeine will be acting after the theanine has been largely eliminated. The problem with getting it via tea is that teas can vary widely in their theanine levels and the variations don't seem to be consistent either, nor is it clear how to estimate them. (If you take a large dose of theanine like 400mg in water, you can taste the sweetness, but it's subtle enough I doubt anyone can actually distinguish the theanine levels of tea; incidentally, r-theanine - the useless racemic other version - anecdotally tastes weaker and less sweet than l-theanine.)
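A small illustration of the half-life point (a sketch only; the ~1 h theanine half-life is from the text, while the ~5 h caffeine half-life is an assumed, commonly cited figure):

```python
def remaining_fraction(hours, half_life_h):
    """Fraction of a dose still present after first-order elimination."""
    return 0.5 ** (hours / half_life_h)

# Four hours after a combined single-dose pill (illustrative numbers only):
theanine_left = remaining_fraction(4, 1.0)   # ~0.06 -> mostly eliminated
caffeine_left = remaining_fraction(4, 5.0)   # ~0.57 -> still acting
```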
Now, what is the expected value (EV) of simply taking iodine, without the additional work of the experiment? 4 cans of 0.15mg x 200 is $20 for 2.1 years' worth, or ~$10 a year, or an NPV cost of $205 ($\frac{10}{\ln 1.05}$), versus a 20% chance of $2000, i.e. $400. So the expected value is greater than the NPV cost of taking it, so I should start taking iodine.
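A quick check of the arithmetic above (a sketch; the $10/year cost, the 20% chance of a $2000 benefit, and the 5% discount rate implied by the $\ln 1.05$ term are all taken from the text):

```python
import math

annual_cost = 10.0        # ~$10/year of iodine, per the text
discount_rate = 0.05      # implied by the ln(1.05) term

# Net present value of paying ~$10/year indefinitely at a 5% discount rate
npv_cost = annual_cost / math.log(1 + discount_rate)   # ~205

# Expected value of the hoped-for benefit: 20% chance of a $2000 gain
ev_benefit = 0.20 * 2000                                # 400

print(round(npv_cost), ev_benefit)   # 205 400.0 -> EV exceeds the NPV cost
```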
Smart Pill is formulated with herbs, amino acids, vitamins and co-factors to provide nourishment for the brain, which may enhance memory, cognitive function, and clarity. In a natural base containing potent standardized extract 24% flavonoid glycosides. Fast acting super potent formula. A unique formulation containing a blend of essential nutrients, herbs and co-factors.
Does little alone, but absolutely necessary in conjunction with piracetam. (Bought from Smart Powders.) When turning my 3kg of piracetam into pills, I decided to avoid the fishy-smelling choline and go with 500g of DMAE (Examine.com); it seemed to work well when I used it before with oxiracetam & piracetam, since I had no piracetam headaches, and be considerably less bulky.
White, Becker-Blease, & Grace-Bishop (2006); data collected 2002; large university sample of undergraduates and graduates (N = 1,025); lifetime prevalence 16.2%; reasons for use: 68.9% improve attention, 65.2% partying, 54.3% improve study habits, 20% improve grades, 9.1% reduce hyperactivity; frequency of use: 15.5% 2–3 times per week, 33.9% 2–3 times per month, 50.6% 2–3 times per year; availability: 58% easy or somewhat easy to obtain; write-in comments indicated many obtaining stimulants from friends with prescriptions
Christopher Wanjek is the Bad Medicine columnist for Live Science and a health and science writer based near Washington, D.C. He is the author of two health books, "Food at Work" (2005) and "Bad Medicine" (2003), and a comical science novel, "Hey Einstein" (2012). For Live Science, Christopher covers public health, nutrition and biology, and he occasionally opines with a great deal of healthy skepticism. His "Food at Work" book and project, commissioned by the U.N.'s International Labor Organization, concerns workers health, safety and productivity. Christopher has presented this book in more than 20 countries and has inspired the passage of laws to support worker meal programs in numerous countries. Christopher holds a Master of Health degree from Harvard School of Public Health and a degree in journalism from Temple University. He has two Twitter handles, @wanjek (for science) and @lostlenowriter (for jokes).
"I think you can and you will," says Sarter, but crucially, only for very specific tasks. For example, one of cognitive psychology's most famous findings is that people can typically hold seven items of information in their working memory. Could a drug push the figure up to nine or 10? "Yes. If you're asked to do nothing else, why not? That's a fairly simple function."
However, normally when you hear the term nootropic kicked around, people really mean a "cognitive enhancer" — something that does benefit thinking in some way (improved memory, faster speed-of-processing, increased concentration, or a combination of these, etc.), but might not meet the more rigorous definition above. "Smart drugs" is another largely-interchangeable term.
The stimulant now most popular in news articles as a legitimate "smart drug" is Modafinil, which came to market as an anti-narcolepsy drug, but gained a following within the military, doctors on long shifts, and college students pulling all-nighters who needed a drug to improve alertness without the "wired" feeling associated with caffeine. Modafinil is a relatively new smart drug, having gained widespread use only in the past 15 years. More research is needed before scientists understand this drug's function within the brain – but the increase in alertness it provides is uncontested.
Low level laser therapy (LLLT) is a curious treatment based on the application of a few minutes of weak light in specific near-infrared wavelengths (the name is a bit of a misnomer as LEDs seem to be employed more these days, due to the laser aspect being unnecessary and LEDs much cheaper). Unlike most kinds of light therapy, it doesn't seem to have anything to do with circadian rhythms or zeitgebers. Proponents claim efficacy in treating physical injuries, back pain, and numerous other ailments, recently extending it to case studies of mental issues like brain fog. (It's applied to injured parts; for the brain, it's typically applied to points on the skull like F3 or F4.) And LLLT is, naturally, completely safe without any side effects or risk of injury.
When it comes to coping with exam stress or meeting that looming deadline, the prospect of a "smart drug" that could help you focus, learn and think faster is very seductive. At least this is what current trends on university campuses suggest. Just as you might drink a cup of coffee to help you stay alert, an increasing number of students and academics are turning to prescription drugs to boost academic performance.
Two additional studies used other spatial working memory tasks. Barch and Carter (2005) required subjects to maintain one of 18 locations on the perimeter of a circle in working memory and then report the name of the letter that appeared there in a similarly arranged circle of letters. d-AMP caused a speeding of responses but no change in accuracy. Fleming et al. (1995) referred to a spatial delay response task, with no further description or citation. They reported no effect of d-AMP in the task except in the zero-delay condition (which presumably places minimal demand on working memory).
I'm wary of others, though. The trouble with using a blanket term like "nootropics" is that you lump all kinds of substances in together. Technically, you could argue that caffeine and cocaine are both nootropics, but they're hardly equal. With so many ways to enhance your brain function, many of which have significant risks, it's most valuable to look at nootropics on a case-by-case basis. Here's a list of 9 nootropics, along with my thoughts on each.
Numerical validation of pressure and flow characteristics across a control valve in a feed line
Nikhil Suri (ORCID: orcid.org/0000-0002-1470-3693), Venkateswaran K. S. & Ramesh T.
Journal of Engineering and Applied Science volume 68, Article number: 49 (2021)
This work is intended to characterize the variation of pressure and flow at the pump inlet of a liquid rocket engine. The opening and closure of the valve upstream of the pump involve complex phenomena: they cause pressure and flow variations at the pump inlet which may lead to combustion instabilities in the engine's combustion chamber, hydraulic transients in the feed lines, and off-design operation of the turbo-pumps, all of which matter for the efficient testing and operation of the engine. A numerical model to predict the pressure and flow transients across a control valve for different rates of opening in fluid feed systems has been developed using a first-order finite difference technique. For flow in pipes, the velocity and pressure are governed by the momentum and continuity equations. A computer code for the prediction of fluid transients, based on the method of characteristics for one-dimensional flow in pipelines, is developed and compared with test data for validation. The control valve is considered in-line with the feed line and is modeled from its valve coefficient vs. percent opening characteristic. The model can subsequently be used to predict the effect of the opening/closing time of the valve on pressure surges across the valve and on the corresponding flow rate in the feed line for different valve openings.
Characterization of pressure and flow transients is an area of immense interest to researchers in the field of fluid dynamics, since flow transients may result in performance degradation due to variation in critical parameters such as flow or pressure. The most familiar manifestations of this phenomenon are combustion instability in liquid propellant engines and the bursting of city water supply lines due to sudden pressure surges, which can be attributed to valve opening or closure, pump starts/stoppages, and abnormal conditions such as power failure.
Several numerical methods have been reported in the literature for hydraulic hammer effects in fluid lines when a valve is subjected to closure. The algebraic method proposed by Allievi [1] solved the differential equation for non-viscous flow, but the results were found to be inaccurate. The graphical method proposed by Löwy [2], and later extended by Bergeron [3] and Parmakian [4], assumes a quasi-steady friction model; its results were found to be accurate only for the first wave period. In 1950, Gray [5] proposed an algorithm to solve the transient problem using the method of characteristics, and this work was later improved by Streeter and Lai [6]. Many studies have been carried out to solve transient-state problems, but the method of characteristics remains the most widely used approach.
Streeter [7] derived the equations for the numerical solution of transient liquid flow in a conduit using method of characteristics. He presented some applications to show the general adaptability of equations and boundary conditions. He assumed some closing characteristics of a valve as a function of time and validated the data with his experimental work.
Lohrasbi and Attarnejad [8] presented a study on pressure oscillations in water networks. They developed a mathematical model for the hydraulic circuits and studied the effect of valve opening/closing on pressure oscillations. They modeled various pipe networks and observed the effects of water hammer. They concluded that slow opening/closing of valve results in less pressure surge.
Shani et al. [9] summarized the work of many researchers over the past years on the solution of transient-state problems in fluids, outlining the purpose and importance of previous works as they evolved over time. The paper also presented a mathematical model for the transient-state problem, starting with the assumptions involved in classical transient theory and present in all the previously formulated mathematical models. The results of a numerical model based on the method of characteristics were compared for validation with experimental data published by M. S. Ghidaoui et al. [10] and were found to agree within an average relative error of 1.5% of the maximum pressure head.
Berrier, Jr. [11] presented a thesis on the dynamics of propellant lines and developed a simple code that solves the equations of motion by a finite difference method for the pressure and velocity fields within a pipe. Simulations were run for known boundary conditions, and the results were analyzed and compared with published data. The model was able to predict the flow field with sufficient accuracy for different sets of conditions and for complex systems such as multi-pipe models, valve closure, cavitation, reservoirs, and accumulators. The model also enforced the Courant condition to ensure stability of the solution and was able to verify the discrete vapor model for cavitation problems.
Sirvole [12] presented a thesis on transient analysis in pipe networks to study pressure and velocity variations caused by sudden valve closure. The maximum pressure heads were found to be comparable with those given by Joukowski's equation. Further simulations were run to analyze gaseous cavitation in pipes using a 2D numerical model developed in MATLAB. The results of the study indicate a significant increase in fluid temperature along with high pressures during transients; as a result, some of the fluid vaporizes and pockets of air form in distribution systems.
Most of the previously mentioned studies focused on hydraulic transients in a conduit due to valve closure expressed as a function of time only. Very few studies have examined the perturbations in flow parameters expressed as a function of the percentage opening and response time of the valve. The transients encountered in fluid transfer lines due to the installed characteristics of the valve, represented as a function of percentage opening and time, therefore remain a subject to be studied in more detail. The present work highlights the effect of the percentage opening, response time, and installed characteristics of the valve on flow and pressure fluctuations in transfer lines.
Methods/experimental
This research focuses primarily on the development of a numerical model of a feed line with a single control valve in operation. The control valve is modeled using its installed characteristics, followed by validation against test data. Water is used as the working medium owing to its easy availability, non-toxicity, stability at room temperature, and lower susceptibility to cavitation.
The current analysis aims at minimizing the variation in flow rates and pressure upstream of the test article to ensure the required flow parameters remain within a permissible range of variation. A mathematical model of a feed line is developed and presented in this paper which includes flow components such as a constant pressure tank, feed line, control valve, elbows, and fittings. This model, upon successful validation, will be used to simulate predetermined process test conditions such as the pressure and flow rate at the inlet of the turbo-pumps of a rocket engine during its testing. Based on the agreement of test results and the numerical model discussed in this paper, the model shall be used further to model the pressure and flow characteristics during flow transition from one tank to the other using simultaneous operation of two valves. It is expected that the change-over of flow from one feed circuit to another will cause pressure and flow transients at the engine inlet which are detrimental to the performance of the engine.
Modeling principles and classical assumptions
The analysis is carried out by solving the 1-D equations obtained by applying the conservation of mass and momentum principles to a control volume. The solution involves the following steps:
Derivation of the governing partial differential equations.
Identification of the parameters affecting flow transients in a pipeline.
Solution of the partial differential equations using the method of characteristics.
Characterization of the boundary conditions.
Modeling the feed line circuit with control valve and assigning proper boundary conditions.
Analysis of the results derived from above steps.
The existing fundamental theory for transient flows in pipelines is drawn from the following classical assumptions [13, 14]:
One-dimensional flow, with pipe full of incompressible fluid at all times.
Dynamic liquid-pipe interactions are neglected and a quasi-steady interaction [15] between pipe and liquid is assumed; friction factors obtained under steady conditions are applied to unsteady flows.
Liquid velocity is much less than wave velocity.
Rigid feed lines, which is merely an approximation as the feed line has expansion joints and flexible hoses.
Governing equations and numerical solution [16]
Applying Newton's second law of motion to an infinitesimal mass of fluid in an elastic pipe, for the case of unsteady flow of a compressible fluid as illustrated in Fig. 1, yields the momentum equation:
Control volume used to derive continuity and momentum equations [16]
$$ \rho A\,dx\frac{dV}{dt}=pA-\left(pA+\frac{\partial p}{\partial x}A\,dx\right)+\rho gA\,dx\,\sin\theta-\tau\pi D\,dx $$
Since the wall shear stress can be written as $\tau =\frac{\rho fV\left|V\right|}{8}$, Eq. (1) can be re-written as:
$$ L_1:\ \frac{\partial V}{\partial t}+\frac{1}{\rho}\frac{\partial p}{\partial x}+g\frac{dz}{dx}+\frac{fV\left|V\right|}{2D}=0 $$
Considering conservation of mass, the continuity equation can be obtained from the flow element and can be written as:
$$ L_2:\ \frac{dp}{dt}+\rho a^{2}\frac{\partial V}{\partial x}+v\,\frac{\partial p}{\partial x}=0 $$
The method of characteristics proceeds by making a linear combination of Eqs. (2) and (3) using a Lagrangian multiplier λ which, by appropriate substitution, results in two characteristic equations [17]
C+ equation:
$$ \frac{dV}{dt}+\frac{1}{\rho a}\frac{dp}{dt}+g\frac{dz}{dx}+\frac{fV\left|V\right|}{2D}=0 $$
Equation (4) is applicable if $\frac{dx}{dt}=v+a$
C− equation:
$$ \frac{dV}{dt}-\frac{1}{\rho a}\frac{dp}{dt}+g\frac{dz}{dx}+\frac{fV\left|V\right|}{2D}=0 $$
Equation (5) is applicable if $\frac{dx}{dt}=v-a$
The above equations form the basis for the finite difference approach used to compute the numerical solution of the transient characteristics in fluid feed lines.
$\frac{dx}{dt}=v\pm a$ represents a pair of straight lines in the $x$-$t$ plane on which the C+ and C− equations are valid, as shown in Fig. 2.
Representation of characteristic lines
Equations (4) and (5) are coupled non-linear equations in flow velocity and pressure; therefore, a first-order finite difference method is used to solve the system and compute the numerical solution.
A pipe of length L is discretized into N elements, giving N + 1 nodes. At every time step Δt, the pressure and flow velocity are computed at each node. The time step is determined by the element length and the wave speed according to Δt = Δx/a.
The MOC solution for viscous flow in pipes must satisfy the Courant condition in Eq. (6). The Courant number (Co) is defined as the ratio of the actual wave speed to the numerical wave speed (Δx/Δt).
$$ \mathrm{Co}=\frac{a}{\Delta x/\Delta t} $$
Based on the analytical studies and procedures proposed by O'Brien [18] and considering the linearized equations, Perkins [19] proved that for the process to be stable, the time step Δt should be chosen such that the Courant number is less than or equal to 1, which also gives faster convergence of the model. Equivalently, the characteristics through point C in Fig. 2 must not fall outside segment AB [20].
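As a minimal sketch of the discretization and stability check described above (the length, node count, and wave speed below are placeholders, not the paper's test case):

```python
import numpy as np

L = 320.0           # pipe length [m] (illustrative)
N = 120             # number of elements -> N + 1 nodes
a = 1000.0          # assumed pressure wave speed [m/s]

dx = L / N          # element length
dt = dx / a         # time step chosen from dt = dx / a

Co = a / (dx / dt)  # Courant number, Eq. (6); equals 1.0 by construction
assert Co <= 1.0, "Courant condition violated: reduce dt or refine the grid"

x = np.linspace(0.0, L, N + 1)   # node locations along the pipe
```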
The C+ equation, hereafter denoted CP, is valid upstream, i.e., it uses information from the previous node at the previous time step and is represented by the characteristic line with positive slope in Fig. 2. The C− equation, hereafter denoted CM, is valid downstream, i.e., it uses information from the next node at the previous time step and is represented by the characteristic line with negative slope in Fig. 2. Integration of the equations followed by a first-order approximation results in the following equations.
$$ \left(V_C-V_A\right)+\frac{P_C-P_A}{\rho a}+g\frac{dz}{dx}\left(t_C-t_A\right)+\frac{fV_A\left|V_A\right|\left(t_C-t_A\right)}{2D}=0 $$
$$ x_C-x_A=\left(V_A+a\right)\left(t_C-t_A\right) $$
$$ \left(V_C-V_B\right)-\frac{P_C-P_B}{\rho a}+g\frac{dz}{dx}\left(t_C-t_B\right)+\frac{fV_B\left|V_B\right|\left(t_C-t_B\right)}{2D}=0 $$
$$ x_C-x_B=\left(V_B-a\right)\left(t_C-t_B\right) $$
Rewriting the above equations in terms of the flow rate Q and solving Eqs. (7) and (9) for $Q_C$ and $P_C$:
$$ Q_C=\mathrm{CP}-BP_C $$
$$ Q_C=\mathrm{CM}+BP_C $$
$$ \mathrm{CP}=Q_A+BP_A-R_A-\mathrm{FF}\,Q_A\left|Q_A\right| $$
$$ \mathrm{CM}=Q_B-BP_B-R_B-\mathrm{FF}\,Q_B\left|Q_B\right| $$
$$ B=\frac{A}{\rho a} $$
$$ R=gA\,\Delta t\,\frac{dz}{dx} $$
$$ \mathrm{FF}=\frac{f\,\Delta t}{2DA} $$
Solution to Eqs. (11) and (12) gives
$$ Q_C=\frac{\mathrm{CM}+\mathrm{CP}}{2} $$
Equation (18) can be substituted into Eq. (11) or (12) to get the pressure $P_C$ at that node.
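A minimal sketch of the interior-node update implied by Eqs. (11)–(18), assuming the lists Q and P hold the flow rate and pressure at every node from the previous time step and that the constants B, R, and FF have been pre-computed as in Eqs. (15)–(17):

```python
def interior_update(Q, P, B, R, FF):
    """One MOC time step for the interior nodes of a single pipe.

    Q, P : flow rate and pressure at each node, previous time step
    B    : A / (rho * a)            (Eq. 15)
    R    : g * A * dt * dz/dx       (Eq. 16)
    FF   : f * dt / (2 * D * A)     (Eq. 17)
    """
    Qn, Pn = list(Q), list(P)
    for i in range(1, len(Q) - 1):
        CP = Q[i - 1] + B * P[i - 1] - R - FF * Q[i - 1] * abs(Q[i - 1])  # Eq. (13)
        CM = Q[i + 1] - B * P[i + 1] - R - FF * Q[i + 1] * abs(Q[i + 1])  # Eq. (14)
        Qn[i] = 0.5 * (CP + CM)          # Eq. (18)
        Pn[i] = (CP - Qn[i]) / B         # back-substituted into Eq. (11)
    return Qn, Pn
```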
Flowchart for computation of solution [11]
This section describes the steps involved in calculating the pressure and flow rate for the pipeline transient problem. The corresponding steps of the modeling process are shown in the form of a flowchart in Fig. 3.
Flowchart to solve transient problem
Boundary conditions [20]
The finite difference equations defined above are applicable as long as the end boundary conditions remain the same, which is usually the case with steady-state flows. As soon as the boundary conditions start varying, they begin to influence the interior points in the subsequent time steps. Thus, in order to compute the pressure and flow at every time step, proper boundary conditions must be defined.
At the inlet and outlet of the pipe, only one characteristic equation is available at either end. Thus, with respect to the previous time step, Eq. (14) is used to compute the negative characteristic at the upstream end of the pipe, while Eq. (13) gives the positive characteristic at the downstream end, as shown in Fig. 4. With a known pressure or flow condition at the inlet or outlet of the pipe, Eqs. (11) and (12) are then used to calculate the remaining flow parameter at either end.
Characteristic lines at the pipe ends
The simplest boundary conditions are specified values of the relevant variables, the pressure or flow in this case. At the inlet,
Pc = Preservoir = constant.
Thus, using Eq. (14), the negative characteristic equation is derived.
With the pressure known at the inlet, Eq. (12) is used to compute the flow there.
Similarly, Eqs. (11) and (13) can be used to derive the pressure/flow boundary condition at the outlet, provided either the flow or the pressure is known there.
It is to be noted that Eq. (18) and its associated equations are valid for interior points of the solution space, 2 ≤ i ≤ N.
Boundary conditions, valid for all values of t ≥ 0, must be imposed on the two points at either end. Initial conditions must be specified at t = 0.
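A sketch of the two constant-pressure reservoir boundaries described above, using the same characteristic constants as the interior update (the function names are illustrative, not from the original code):

```python
def upstream_reservoir(Q, P, B, R, FF, P_tank):
    """Inlet node: pressure is fixed, flow follows the C- line (Eqs. 14 and 12)."""
    CM = Q[1] - B * P[1] - R - FF * Q[1] * abs(Q[1])
    return CM + B * P_tank, P_tank          # Q from Eq. (12), P = reservoir pressure

def downstream_reservoir(Q, P, B, R, FF, P_tank):
    """Outlet node: pressure is fixed, flow follows the C+ line (Eqs. 13 and 11)."""
    CP = Q[-2] + B * P[-2] - R - FF * Q[-2] * abs(Q[-2])
    return CP - B * P_tank, P_tank          # Q from Eq. (11), P = reservoir pressure
```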
Modeling of control valve [11]
The valve modeling defined in the present and subsequent sections is a variation of the work by Berrier, Jr. [11], wherein the valve characteristics are defined as a function of valve response time and percentage lift. In Berrier's work, the valve pressure-discharge characteristic follows an exponential law and is a function of time only, not of the percentage lift of the valve. The present work instead uses the installed characteristics of the valve, evaluating the flow and pressure transients across the valve from the variation of the flow coefficient with percentage lift, which is in turn a function of time.
Any flow component such as an in-line valve, as shown in Fig. 5, can be considered as two pipes connected by the valve. In this case, Eqs. (11) and (12) remain valid at the inlet and outlet of the two pipes. The additional variables at the valve call for additional auxiliary equations to obtain the complete solution. The continuity equation provides one such equation:
$$ \left.Q_{\mathrm{P}}\right|_{J,N+1}=\left.Q_{\mathrm{P}}\right|_{J+1,1} $$
Modeling of valve in-line with pipe
while the fourth equation is deduced from the valve characteristic curve representing its flow coefficient vs. % lift of the valve.
Writing Eqs. (11) and (12) across the valve at nodes (J,N + 1) and (J + 1,1),
$$ \left.Q_{\mathrm{P}}\right|_{\left(J,N+1\right)}=\left.C_{\mathrm{P}}\right|_{\left(J,N+1\right)}-B_1\left.P_{\mathrm{P}}\right|_{\left(J,N+1\right)} $$
$$ \left.Q_{\mathrm{P}}\right|_{\left(J+1,1\right)}=\left.C_{\mathrm{M}}\right|_{\left(J+1,1\right)}+B_2\left.P_{\mathrm{P}}\right|_{\left(J+1,1\right)} $$
$$ \left.C_{\mathrm{P}}\right|_{\left(J,N+1\right)}=Q_{\mathrm{a}}+B_1P_{\mathrm{a}}-\mathrm{FF}\,Q_{\mathrm{a}}\left|Q_{\mathrm{a}}\right| $$
$$ \left.C_{\mathrm{M}}\right|_{\left(J+1,1\right)}=Q_{\mathrm{b}}-B_2P_{\mathrm{b}}-\mathrm{FF}\,Q_{\mathrm{b}}\left|Q_{\mathrm{b}}\right| $$
When the valve discharges into the downstream line at some pressure, then for steady-state, fully open conditions the flow rate through the valve can be written as:
$$ Q_0=C_{VN}^{\ast}\sqrt{\Delta P_{\mathrm{o}}} $$
The flow rate and pressure drop across the valve during the transient state conditions can be written as
$$ Q_{\mathrm{P}}=\tau\,C_{VN}^{\ast}\sqrt{P_{\mathrm{P}1}-P_{\mathrm{P}2}} $$
Dividing Eqs. (24) by (25), we get
$$ \tau=\frac{Q_{\mathrm{P}}}{Q_0}\sqrt{\frac{\Delta P_{\mathrm{o}}}{P_{\mathrm{P}1}-P_{\mathrm{P}2}}}=\frac{C_{V}^{\ast}}{C_{VN}^{\ast}}=\sqrt{\frac{C_{V}}{C_{VN}}} $$
Rearranging the terms in Eq. (26),
$$ \left.Q_{\mathrm{P}}\right|_{J,N+1}=\tau\,Q_0\sqrt{\frac{P_{\mathrm{P}1}-P_{\mathrm{P}2}}{\Delta P_{\mathrm{o}}}} $$
Squaring both sides and writing $C_V$ as a function of $C_{VN}$:
$$ C_{V}=\tau^{2}\,C_{VN} $$
$$ C_{VN}=\frac{Q_{\mathrm{o}}^{2}}{2\,\Delta P_{\mathrm{o}}} $$
$$ \left.Q_{\mathrm{P}}\right|_{J,N+1}^{2}=2\,\frac{\tau^{2}Q_{\mathrm{o}}^{2}}{2\,\Delta P_{\mathrm{o}}}\left\{\frac{\left.C_{\mathrm{P}}\right|_{\left(J,N+1\right)}+\left.Q_{\mathrm{P}}\right|_{\left(J,N+1\right)}}{B_1}-\frac{\left.C_{\mathrm{M}}\right|_{\left(J+1,1\right)}-\left.Q_{\mathrm{P}}\right|_{\left(J+1,1\right)}}{B_2}\right\} $$
Rewriting,
$$ \left.Q_{\mathrm{P}}\right|_{J,N+1}^{2}-2C_{V}\left.Q_{\mathrm{P}}\right|_{J,N+1}\left(\frac{1}{B_1}+\frac{1}{B_2}\right)-2C_{V}\left\{\frac{C_{\mathrm{P}}}{B_1}-\frac{C_{\mathrm{M}}}{B_2}\right\}=0 $$
The solution of the above quadratic equation depends on the direction of flow, which is decided by whether $\frac{C_P}{B_1}-\frac{C_M}{B_2}$ is greater or less than 0.
Making the substitutions and solving the above equation for $\left.Q_{\mathrm{P}}\right|_{J,N+1}$ gives
$$ \left.Q_{\mathrm{P}}\right|_{J,N+1}=-C_{V}\left(\frac{1}{B_1}+\frac{1}{B_2}\right)+\sqrt{C_{V}^{2}\left(\frac{1}{B_1}+\frac{1}{B_2}\right)^{2}+2C_{V}\left(\frac{C_{\mathrm{P}}}{B_1}-\frac{C_{\mathrm{M}}}{B_2}\right)} $$
The above equation is applicable for positive flow, where Eq. (26) holds and $\frac{C_{P}}{B_1}-\frac{C_{M}}{B_2}>0$.
For negative flow, Eq. (27) can be written as
$$ \tau=-\frac{Q_{\mathrm{P}}}{Q_0}\sqrt{\frac{\Delta P_{\mathrm{o}}}{P_{\mathrm{P}2}-P_{\mathrm{P}1}}}\quad\text{and}\quad\frac{C_{\mathrm{P}}}{B_1}-\frac{C_{\mathrm{M}}}{B_2}<0 $$
In this case, Eq. (27) can be written as
$$ \left.Q_{\mathrm{P}}\right|_{J,N+1}=C_{V}\left(\frac{1}{B_1}+\frac{1}{B_2}\right)-\sqrt{C_{V}^{2}\left(\frac{1}{B_1}+\frac{1}{B_2}\right)^{2}-2C_{V}\left(\frac{C_{\mathrm{P}}}{B_1}-\frac{C_{\mathrm{M}}}{B_2}\right)} $$
With $\left.Q_{\mathrm{P}}\right|_{J,N+1}$ known, it can be substituted into Eqs. (19), (20), and (21) to compute $\left.Q_{\mathrm{P}}\right|_{J+1,1}$, $P_{\mathrm{P}1}$, and $P_{\mathrm{P}2}$.
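A sketch that transcribes the valve boundary above into code: given the characteristics approaching the valve from both pipes and the opening-dependent coefficient $C_V=\tau^2 C_{VN}$ (Eq. 28), it solves Eq. (31) or Eq. (33) for the flow and recovers the pressures from Eqs. (20) and (21).

```python
import math

def valve_flow(CP, CM, B1, B2, Cv):
    """Flow through the in-line valve and the pressures on either side."""
    s = 1.0 / B1 + 1.0 / B2
    d = CP / B1 - CM / B2
    if Cv == 0.0:                    # valve fully closed (tau = 0): no flow
        Q = 0.0
    elif d > 0.0:                    # positive flow, Eq. (31)
        Q = -Cv * s + math.sqrt((Cv * s) ** 2 + 2.0 * Cv * d)
    else:                            # reverse flow, Eq. (33)
        Q = Cv * s - math.sqrt((Cv * s) ** 2 - 2.0 * Cv * d)
    P_up = (CP - Q) / B1             # valve inlet pressure, from Eq. (20)
    P_dn = (Q - CM) / B2             # valve outlet pressure, from Eq. (21)
    return Q, P_up, P_dn
```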
In this section, the equations developed above by the method of characteristics, together with the relevant boundary conditions, are used to model flow components such as a control valve in a feed line. Pressure and flow parameters are calculated along the feed line at each time step. The results of the mathematical model are subsequently verified against the experimental data.
Valve characteristics
The system used for the analysis is shown in Fig. 8. For the current investigation, water is used as the working fluid. In the given system, a control valve has been introduced in the feed line for the calibration of the flow component FLC (see Fig. 8). The system consists of a constant pressure reservoir at 26 bar. Flow components such as a flow meter, pressure transmitters, an electro-pneumatic valve, and a control valve are installed in the feed line, in addition to the control valve IVC-101. The flow component FLC has been kept at a constant opening of 24%, which makes it a source of high pressure drop. The outlet of the feed circuit is connected to a collection tank kept at a constant pressure of 1.6 bar. Based on the inlet and outlet boundary conditions, the technical specifications of control valve IVC-104 are given in Table 1, and its installed characteristics are given in Table 2 and Fig. 6.
Table 1 Specifications of control valve IVC-104
Table 2 Installed Cv characteristics for control valve IVC-104
Installed characteristic curve for IVC-104
Figure 6 shows the variation of the flow coefficient of the valve with percentage opening. These are the installed characteristics of the valve specific to the system, with a maximum inlet pressure of 30 bar. The maximum theoretical $C_v^{\ast}$ corresponds to $C_{VN}^{\ast}=38$. The valve pressure-discharge coefficient,
$$ \tau=\frac{C_{V}^{\ast}}{C_{VN}^{\ast}} $$
is defined by the above equation. Expressing the coefficient as a fraction makes the system independent of the units of $C_v^{\ast}$.
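In code, the installed characteristic can simply be interpolated from Table 2; a sketch with placeholder $C_v^{\ast}$ values (only the end point $C_{VN}^{\ast}=38$ at 100% opening is taken from the text):

```python
import numpy as np

# Placeholder installed characteristic: percent opening vs. Cv* (illustrative
# values; the real numbers are those of Table 2, with CVN* = 38 at 100 %).
opening_pct = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
cv_star     = np.array([0, 2, 5, 9, 14, 19, 24, 29, 33, 36, 38])

CVN_star = 38.0

def tau_of_opening(pct):
    """tau = Cv*/CVN*, with Cv* linearly interpolated over percent opening."""
    return np.interp(pct, opening_pct, cv_star) / CVN_star
```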
Mathematical modeling and numerical analysis
The pipeline is modeled as two pipes of equal diameter in series, with both the upstream and downstream ends of the system connected to constant pressure reservoirs. The pressure drop across the system caused by the various flow components is taken into account by modeling feed lines of appropriate equivalent length upstream and downstream of the valve.
$$ \Delta P_{\mathrm{flow\ components}}=\frac{\rho\,\left(f\,l\right)_{\mathrm{equivalent}}\,V^{2}}{2\,d} $$
Rather than modeling every flow component explicitly, a line of suitable equivalent length is used to account for the same steady-state pressure drop, which reduces the computational effort of the numerical simulation. Under steady-state conditions with the valve fully open, the equivalent lengths of pipe upstream and downstream of the valve were evaluated to be around 320 m and 20 m, respectively, for a friction factor of approximately 0.015.
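The equivalent length follows by inverting the relation above; a small sketch (the 1 bar component drop is illustrative, while the diameter, friction factor, and ~3.3 m/s velocity correspond to the 0.0669 m line and the 11.75 L/s steady flow quoted later):

```python
def equivalent_length(dP, rho, V, d, f):
    """Pipe length giving the same steady-state loss as a flow component,
    from dP = rho * (f * L_eq) * V**2 / (2 * d)."""
    return 2.0 * d * dP / (rho * f * V ** 2)

# Illustrative check: a 1 bar component drop in the 0.0669 m water line
# at ~3.3 m/s with f = 0.015 -> roughly 80 m of equivalent pipe.
L_eq = equivalent_length(dP=1.0e5, rho=1000.0, V=3.3, d=0.0669, f=0.015)
```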
The solution begins from a steady state until t = 0.1 s, when the control valve begins to open in steps. The pressure wave velocity a is calculated as
$$ a=\sqrt{\frac{K^{\prime}}{\rho}} $$
$$ \frac{1}{K^{\prime}}=\frac{1}{K}+\frac{D}{E\,t} $$
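A small sketch of the wave-speed calculation above; the fluid and pipe properties and the wall thickness are assumed values for illustration, not taken from the paper:

```python
import math

def wave_speed(K, rho, D, E, t):
    """Pressure wave speed with pipe elasticity: 1/K' = 1/K + D/(E*t), a = sqrt(K'/rho)."""
    K_eff = 1.0 / (1.0 / K + D / (E * t))
    return math.sqrt(K_eff / rho)

# Water (K ~ 2.2 GPa) in a steel line (E ~ 200 GPa), D = 0.0669 m, ~3 mm wall:
a = wave_speed(K=2.2e9, rho=1000.0, D=0.0669, E=200e9, t=3.0e-3)   # ~1330 m/s
```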
Initialization of parameters and boundary conditions
Fluid medium: Water
Diameter of pipe: 2.5" 10S, or 0.0669 m
Inlet pressure: 26 bar
Outlet pressure: ambient 1.6 bar
Length of feed line from ITK-01 to valve: 320 m
Length of feed line from valve to ITK-02: 20 m
Number of nodes considered in feed line: 122 (120 on feed line + 2 on valve inlet and outlet)
Duration of analysis: 13 s
Steady-state flow rate Q o: 11.75 L/s
Time step (s): 3.62e−04 (based on Courant condition)
Methodology for numerical analysis
Under steady flow, the flow rate across the system is evaluated. The pressure drop ∆Po across the valve IVC-104 is also calculated and substituted into Eq. (29) to evaluate CVN for the given system.
Starting at the upstream end of the pipeline, a constant pressure reservoir is modeled, allowing the volumetric flow rate to be calculated by Eq. (12). Moving downstream, the next boundary condition encountered is the control valve IVC-104. Here the flow rate is calculated in accordance with Eq. (31), with the valve pressure-discharge characteristic obtained from Table 2. The pressures upstream and downstream of the valve can then be calculated from Eqs. (11) and (12).
The control valve IVC-104 is initially closed until t = 0.1 s; correspondingly, τ is equal to 0. The valve begins to open at t = 0.1 s, when the opening increases to 10% in 0.25 s and is held at this position for 1 s. The interval of 1 s between two successive openings is chosen so that there is sufficient time for the flow to stabilize and there are no noticeable fluctuations in pressure and flow during the experimental trials. If this interval is too short, the flow may not stabilize before the valve opens further; if it is too long, more computational effort is required to run the simulation. Moreover, the average of the stabilized flow and pressure data from the experimental trials is used to calculate the valve flow coefficient Cv*. After t = 1.35 s, the valve opens to 20% in 0.25 s and holds that position for 1 s. This continues until the valve is completely open, as shown in Fig. 7. At each time step, the value of τ is evaluated and used to obtain the required value of Cv in accordance with Eq. 28; this value of Cv is then substituted into Eq. (31). The final boundary condition occurs at the pipe outlet, which is a pressure reservoir at 1.6 bar. The flow condition there is evaluated by substituting the pressure into Eq. (11).
Variation of percent opening of IVC-104 vs. time
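The stepped opening sequence of Fig. 7 can be expressed as a simple schedule; a sketch assuming the 0.25 s ramp and 1 s hold pattern described above (the function and its defaults are illustrative, not taken from the original code):

```python
def percent_opening(t, t_start=0.1, ramp=0.25, hold=1.0, step=10.0):
    """Stepped opening of Fig. 7: closed until t_start, then ramp up by
    `step` percent over `ramp` seconds and hold for `hold` seconds, to 100 %."""
    if t < t_start:
        return 0.0
    elapsed = t - t_start
    cycle = ramp + hold
    completed = int(elapsed // cycle)        # finished ramp+hold cycles
    frac = elapsed - completed * cycle       # time into the current cycle
    opening = completed * step + min(frac / ramp, 1.0) * step
    return min(opening, 100.0)
```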
The results of the numerical analysis performed for a particular case of opening the control valve with a given set of boundary conditions are compared with experimental data obtained from the Cold Flow Test Facility, IPRC. The numerical analysis is done for the following experimental setup.
Experimental setup
The numerical model is based on the experimental setup shown in Fig. 8, which consists of two tanks: the DM water run tank ITK-01, maintained at a constant pressure of 2.6 MPa, and the DM water collection tank ITK-02 at ambient pressure. Flow meter FFQ-101 is connected in-line in closed loop with IVC-103. All other flow components, including IVP-103 and IVC-103, are kept fully open. The flow component being modeled is IVC-104, which is opened in steps as shown in Fig. 7. The complete setup is part of the calibration of the flow component FLC (classified) at different openings. One such set of experimental data, at an opening of 24%, is considered for the numerical validation of the model; this opening is also a source of high pressure drop in the feed line. Pressure transducers/transmitters are installed at various points along the feed line to capture the variation of pressure during the opening of IVC-104 for comparison with the numerical data.
Experimental setup for model validation
The results of the numerical simulations are compared with the data measured during the calibration of the flow component FLC, when the valve IVC-104 is opened from 0 to 100% in steps. As the initial boundary conditions of the experimental setup are known, they are used as input for the numerical analysis. The sampling interval of the data acquisition system is 500 ms, while the time step of the numerical model is 3.6e−04 s. The sequence of opening of valve IVC-104 is given in Fig. 7.
Figure 9 shows the comparison of the numerical data (blue) with the experimental data (red) for a given set of boundary conditions across the control valve IVC-104 in the feed line.
Comparison of numerical and experimental results. a: Comparison of flow rate vs. time. b: Comparison of flow vs. valve opening. c: Comparison of inlet pressure of IVC-104. d: Comparison of outlet pressure of IVC-104. e: Comparison of pressure drop across IVC-104
As the valve is fully closed until t = 0.1 s, the monitored flow rate (Fig. 9a, b) is 0. As the valve begins to open at t = 0.1 s, the flow rate across the flow meter FFQ-101, and hence through the control valve IVC-101, gradually increases with time, as shown in Fig. 9a. The flow rate begins to stabilize after 0.35 s and remains steady until t = 1.35 s, when the flow increases again until t = 1.6 s as the valve opens from 10 to 20%. It can be seen from Fig. 9a that the model accurately captures the variation in flow rate for the different openings of the control valve in the feed line. Although the model does not predict the initial flow rate to sufficient accuracy, the numerical results match the experimental results closely beyond 10% opening.
The flow across any valve is a function of the valve coefficient Cv* and increases as Cv* increases. The variation of Cv* with percentage opening is shown in Fig. 6. As the control valve continues to open as described in Fig. 7, the flow rate continues to increase in line with the increase in Cv* (or percentage opening), as shown in Fig. 9b.
While valve IVC-104 is closed, until t = 0.1 s, the pressure at the inlet of IVC-104 is the same as the tank pressure of ITK-101, as shown in Fig. 9c. As the valve begins to open at t = 0.1 s, the pressure initially fluctuates considerably. Since the flow increases with valve opening, the inlet pressure of the control valve IVC-104 decreases due to the pressure drop in the system.
The pressure of the collection tank ITK-102 is constant and equal to 1.6 bar. As the flow rate increases with valve opening, and in order to maintain a constant pressure at the tank inlet, the pressure at the outlet of IVC-104 continues to increase, as shown in Fig. 9d. The magnitude of the fluctuations also matches well, as can be seen from the plot.
As the valve continues to open, as depicted in Figs. 6 and 7, the inlet pressure continues to decrease, as shown in Fig. 9c. Also, since the valve coefficient Cv* is inversely related to the pressure drop across the valve, the pressure drop across the valve continues to decrease. This is clearly evident from the plot in Fig. 9e.
It may be noted that the numerical and experimental pressure and flow histories in Fig. 9 do not exactly match during the initial transients. The numerical model shows a pressure and flow variation during the initial opening of the control valve when compared with the experimental data. This is the result of a sudden change in velocity, and hence in momentum, from 0 to a finite value, causing a temporary separation of the liquid column at the valve. Moreover, the larger pressure gradients during the initial opening of the valve cause variations in pressure and flow.
A numerical model is developed to analyze fluid transients in order to predict the variation of pressure and flow in the feed line for different openings of the control valve. The derivation of the equations of motion used in the numerical analysis is based on the method of characteristics. The model is used to determine the effect of opening the valve in steps on the pressure and flow transients. The present work adopts a more realistic approach in which the change in flow pattern due to the valve plug design is incorporated into the mathematical model: the model uses the actual flow coefficient vs. percent opening characteristic of the control valve as an input. The numerical predictions of the model are validated against experimental data, and a close agreement between the numerical and experimental results is observed; the predictions of flow and pressure match very closely when compared point by point. The numerical results also predict the variation of pressure and flow for valve openings of less than 10%, where the controllability of control valves is quite limited. The model can subsequently be used to analyze real-time situations involving propellants such as liquid oxygen and kerosene at higher flow rates and pressures. The effects of the opening time of the control valve on the downstream transients are studied. These studies are important for maintaining the required engine inlet conditions under special circumstances, when the flow from one propellant feed line has to be switched over to the other feed line while keeping the flow rate and pressure within permissible limits.
The datasets (experimental as well as numerical results) used and/or analyzed during the current study are available from the corresponding author on reasonable request.
MOC: Method of characteristics
Co: Courant number
$C_{VN}$: $\frac{Q_o^2}{2\Delta P_o}$ (known constant in the valve equation at 100% valve opening, based on the steady-state value)
$C_V$: Theoretically derived valve coefficient for a given valve opening
$C_{VN}^{\ast}$: $\frac{Q_o}{\sqrt{\Delta P_o}}$ (maximum flow coefficient of the valve at 100% opening); in this case, $C_{VN}^{\ast}$ = 38
$C_V^{\ast}$: Experimentally derived valve coefficient for a given valve opening
ρ: Density of fluid
A: Cross-sectional area of fluid element (control volume)
dx: Length of fluid element (control volume)
dt: Time increment
z: Elevation of fluid element from datum
V: Velocity of flow in control volume of pipe
Q: Flow rate through control volume of pipe
D: Diameter of pipe
g: Acceleration due to gravity
a: Wave velocity
K: Bulk modulus of fluid
E: Young's modulus of feed line
f: Coefficient of friction
B, FF: Pipeline constants
N: Number of nodes in a pipe
J: Counter to indicate the pipe
τ: Dimensionless valve pressure-discharge characteristic
$Q_o$: Steady-state flow rate through the fully open valve
$\Delta P_o$: Pressure difference across the valve in steady-state conditions at full opening
o (subscript): Steady-state condition
Allievi L (1902) Teoria generale del moto perturbato dell'acqua nei tubi in pressione (colpo d'ariete). ("General theory of the variable motion of water in pressure conduits"). Annali della Società degli Ingegneri ed Architetti Italiani 17(5):285–325
Löwy R (1928) Druckschwankungen in Druckrohrleitungen. Mit 45 Abb. Cham: Springer
Bergeron L (1932) Variations in flow in water conduits. Soc Hydrotechnique de France 47:605
Parmakian J (1955) Waterhammer analysis. Prentice-Hall, Inc., Englewood Cliffs
Gray CAM (1953) The analysis of the dissipation of energy in water hammer. In: Proc. ASCE, vol 119, pp 1176–1194
Streeter VL, Lai C (1962) Water-hammer analysis including fluid friction. J Hydraulics Div 88(3):79–112
Streeter VL (1962) Water hammer analysis with nonlinear frictional resistance, Proeedings of 1st Australasian Conf. on Hydraulics and Fluid Mechanics, vol 1963. Pergamon Press, Oxford, p 431. https://doi.org/10.1016/B978-0-08-010291-7.50032-X
Lohrasbi AR, Attarnejad R (2008) Water hammer analysis by characteristic method. Am J Eng Appl Sci. 1(4):287–94.
Tushar S, Tinish G, Nitish (2017) Hydraulic transient flow analysis using method of characteristics. Int J Innov Res Sci Eng Technol 6(7):14813–14827
Ghidaoui MS, Mansour S (2002) Efficient treatment of the vardy–brown unsteady shear in pipe transients. J Hydraulic Eng 128(1):102–112
Berrier WF Jr (1987) First Lieutenant, USAF, "Dynamics of Propellant Feedline Systems", Rept. AFIT/GA/AA/85D-2
Sirvole K (2007) Transient analysis in pipe networks, Master of Science Dissertation, Department of Civil Engineering, Virginia Polytechnic Institute & State University
Chaudhry MH (1979) Applied hydraulic transients. Springer-Verlag, New York
Wylie EB, Streeter VL (1978) Fluid transients, vol 401. McGraw-HillInternational Book Co., 1978, New York, p 1
Pezzinga G (1999) Quasi-2d model for unsteady flow in pipe networks. J Hydraulic Eng 125(7):676–685
Sellens R. Water hammer: differential equations. URL: http://www.aq.upm.es/Departamentos/Fisica/agmartin/webpublico/docencia/amplifis/fluidos-queens/hammer3.htm
Std 1207-2004 (2004) IEEE Guide for the Application of Turbine Governing Systems for Hydroelectric Generating Units. IEEE, New York
O'Brien GG, Hyman MA, Kaplan S (1951) A study of the numerical solution of partial differential equation. J Math Physics 29:223–251
Perkins FE, Tedrow AC, Eagleson P.S, Ippen AT (1964) Hydro-Power Plant Transients, Part II; Report No. 71. Cambridge: Department of Civil Engineering, School of Engineering, Massachusetts Institute of Technology.
Chaudhry MH (1968) Boundary conditions for analysis of water hammer in pipe systems. Master of Applied Science Dissertation, Department of Civil Engineering, The University of British Columbia
I wish to express my deep sense of gratitude and profound thanks to my supervisors Mr. Rahul Chaurasia, Mr. Ashish Shukla, Mr. Venkateswaran K.S., and Dr. Ramesh T. from ISRO Propulsion Complex for their interest, guidance, and for providing me with all the necessary resources to successfully conduct the experiment. I am also thankful to Mr. Sanu Meena, Asst. Professor, M.B.M. Engineering College, Jodhpur, for providing his valuable inputs for successfully writing this paper.
It is declared that all the equipment and resources were provided by ISRO Propulsion Complex.
ISRO Propulsion Complex, Mahendragiri, Tirunelveli, Tamil Nadu, 627133, India
Nikhil Suri, Venkateswaran K. S. & Ramesh T.
Nikhil Suri
Venkateswaran K. S.
Ramesh T.
VKS conceived of this study. NS carried out the experimental trials, numerical simulation, and data analysis under the guidance and supervision of VKS and TR. The author(s) read and approved the final manuscript.
Correspondence to Nikhil Suri.
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests:
That equipment and writing assistance were provided by ISRO Propulsion Complex.
That authors/co-authors have a relationship with ISRO Propulsion Complex that includes employment.
Suri, N., K. S., V. & T., R. Numerical validation of pressure and flow characteristics across a control valve in a feed line. J. Eng. Appl. Sci. 68, 49 (2021). https://doi.org/10.1186/s44147-021-00033-9
Hydraulic transients
Pressure surge
Water hammer
Unsteady friction
Are the $L$-functions of $X_0(N)$ automorphic?
This question, like all of my previous questions regarding Langlands, is very naive.
All $g\geq 1$ curves come from quotients of the upper half plane. The curves $X_0(N)$ come from quotients of special subgroups of the group of automorphisms of the upper half plane. This might imply that they are easier to work with.
$\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ acts on the Tate module of $X_0(N)$, which leads to a motivic $L$-function. Can one prove that the $L$-functions arising from these $X_0(N)$'s are $L$-functions coming from automorphic forms?
Furthermore, is this the motivation for these curves to begin with? If this is true, is this the reason that the modularity theorem (Taniyama-Shimura) is often phrased in terms of parametrizing elliptic curves via $X_0(N)$'s? If not, then why do these curves come up in the formulation of Taniyama-Shimura?
nt.number-theory langlands-conjectures modular-forms
James D. Taylor
$\begingroup$ The study of modular curves arose from the theory of elliptic integrals and elliptic functions, and the resulting theory of modular equations. The connections with arithmetic came later (and are an outgrowth of the work of Ramanujan and Hecke, among others). $\endgroup$ – Emerton Sep 4 '11 at 3:47
Langlands, in his Antwerp II article, was the first to show that the zeta function of a modular curve is exactly the product (well, some of the $L$-functions are in the denominator) of $L$-functions of modular forms (previous results of Eichler, Shimura, Kuga, Sato, Ihara showed the equality up to finitely many factors). He used a comparison of the Lefschetz trace formula and the Arthur–Selberg trace formula to accomplish this. This set up a basic approach to proving that zeta functions of Shimura varieties are products of automorphic $L$-functions which Langlands spent a few papers developing (check out the section on Shimura varieties of his "complete works" website here). This approach involves knowing something specific about the structure of the points mod $p$ of a Shimura variety. The paper of Langlands and Rapoport at the above link is where the Langlands–Rapoport conjecture on the points mod $p$ is first spelled out carefully, but there are other places to read about it (in English! and improved/simplified) such as several of Milne's papers such as his article in Motives II or his article in the Montréal proceedings (which, incidentally, are the proceedings of a conference pretty much whose sole purpose was to prove the zeta function of the Shimura variety associated to a unitary group in three variables (a Picard modular surface) is a product of automorphic $L$-functions) (the book is called The zeta functions of Picard modular surfaces, edited by Langlands and Ramakrishnan), and Kottwitz's JAMS article which begins with a historical overview.
The modularity theorem, as first suggested by Taniyama, was in terms of $L$-functions. Basically, he said that if Hasse was correct and the $L$-function of an elliptic curve had analytic continuation and satisfied a functional equation then the inverse Mellin transform of the $L$-function of an elliptic curve could very well be a weight 2 modular form (see Shimura's article on Taniyama). The formulation in terms of a modular parametrization came from Shimura's work in the late 50s and 60s on constructing quotients of Jacobians of modular curves attached to modular forms, since some of those quotients were indeed elliptic curves over Q (whose $L$-functions matched up as they should). So, perhaps one could say that modular curves come up in Shimura–Taniyama because, if Hasse's conjecture that the $L$-function of an elliptic curve has analytic continuation and functional equation is true, then the inverse Mellin transform of it is a differential form on a modular curve.
Modular curves/forms were interesting to mathematicians way before the 1950s. Poincaré, for one, studied them, but that's a bit far back in time to be my area of expertise.
Rob Harron
The zeta function of the modular curve $X_0(N)$ is the product of the $L$-functions of a basis of cusp forms of weight 2 for $\Gamma_0(N)$ (the basis taken to be normalized eigenforms for the Hecke operators prime to $N$), up to a finite number of factors. See, e.g., Milne's notes on modular forms, Theorem 11.14 (p. 108).
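For concreteness, the statement above can be written schematically (suppressing the finitely many Euler factors at bad primes) as
$$\zeta\bigl(X_0(N),s\bigr)\;=\;\frac{\zeta(s)\,\zeta(s-1)}{\displaystyle\prod_{f} L(s,f)},$$
with the product running over a basis of normalized weight-2 Hecke eigenforms for $\Gamma_0(N)$; as noted in the other answer, the eigenform $L$-functions sit in the denominator of the Hasse-Weil zeta function.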
Modular curves are the (or at least one of the) simplest examples of Shimura varieties (See Milne's notes on Shimura varieties). One of the main motivations for the study of Shimura varieties is showing that their Hasse-Weil zeta functions are products (allowing positive and negative powers) of automorphic $L$-functions (as part of a broader program to prove the same thing for general algebraic varieties, i.e. that motivic $L$-functions are automorphic). There are plenty of other reasons to study Shimura varieties, though (e.g. they are the most powerful tool for proving results about special values of automorphic $L$-functions, more advanced versions of $\zeta(2n)\in(2\pi)^{2n}{\mathbb Q}$)
The original version of Taniyama-Shimura-Weil is "for any elliptic curve $E$, there exists a non-constant map from some $X_0(N)$ to $E$ (defined over $\mathbb Q$)". So, there are historical reasons for phrasing it that way.
B R
Transcription apparatus: A dancer on a rope
Wang Yaolai, Liu Feng
Acta Physica Sinica 2020, 69(24): 248702
Laws of physics govern all forms of matter movement. However, living systems, though composed of chemical elements that everyone is familiar with, still largely elude the physical descriptions available. This is because the construction of life is not the same as that of ordinary matter, so it remains unclear how the laws of physics are utilized. In this paper, we present our thinking on the transcriptional apparatus (TA). The TA is a huge molecular machine acting to sense regulatory signals and initiate transcripts at the right time and at the right rate. The operation of the TA is fundamental to almost all forms of life. Although great progress has been made in recent years, one often has to face contradictory conclusions from different studies. Additionally, the studies of transcription are divided into several fields, and the different fields are increasingly separate and independent. Focusing on eukaryotic transcription, in this review we briefly describe major advances in the various fields and present the newly emerging conflicting viewpoints. Although structural studies have revealed the main components and architecture of the TA, it is still unclear how the Mediator complex transmits signals from activators to the core transcriptional machinery at the promoter. It is believed that the Mediator functions to recruit RNA polymerase II onto the promoter and to promote the entry into transcriptional elongation, which fails to explain how the signal transduction is achieved. On the other hand, the allosteric effect of the Mediator allows for signal transmission but is not supported by structural studies. It is reported that enhancers, especially super-enhancers, act to recruit activators via the formation of so-called liquid droplets through phase separation. By contrast, it is suggested that enhancers should cooperate delicately to orchestrate transcription. Results on the kinetics of protein-promoter interaction also contrast with each other, leading to a paradox called the "transcriptional clock". It is then concluded that proteins interact frequently and transiently with promoters and that different proteins interact with the promoter at different stages of transcriptional progression. The phenomenon of transcriptional bursting raises the question of how cellular signaling is achieved in such a noisy manner. While the burst frequency, the burst size, or both are potentially modulated by transcriptional activators, more evidence supports the mode of frequency modulation. The technical difficulties in investigating the mechanism of transcription include 1) structural characterization of flexible and/or unstable proteins or protein complexes, 2) measurement of intermolecular kinetics, 3) tracking of single-molecule movement, and 4) lack of methodology in theoretical research. We further propose a research strategy based on the ensemble statistical method, and introduce a model for how the TA dynamically operates. The model may act as a benchmark for further investigations. The operating mechanism of the TA should reflect an optimal use of physics laws as a result of long-term biological evolution.
Research progress of applications of acoustic-vortex information
Guo Zhong-Yi, Liu Hong-Jun, Li Jing-Jing, Zhou Hong-Ping, Guo Kai, Gao Jun
The orbital angular momentum (OAM) carried by an acoustic vortex beam can be transferred to objects, which gives it a good application prospect in particle manipulation. In addition, the acoustic vortex beam also has great potential in acoustic communication. Acoustic vortex beams with different OAM modes are orthogonal to each other, so the OAM mode can be introduced into traditional acoustic communication, which provides a potential route to high-speed, large-capacity and high-spectral-efficiency underwater acoustic communication in the future. In this paper, we summarize the research progress of acoustic vortex beams: we mainly introduce the generation and detection schemes of acoustic vortex beams, their transmission characteristics, and typical research cases in communication. Finally, the future development trends and outlook of acoustic vortex beams are also analyzed.
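The orthogonality invoked above is just that of the azimuthal phase factor $\mathrm{e}^{\mathrm{i}l\phi}$ carried by a vortex beam of topological charge $l$:
$$\int_0^{2\pi} \mathrm{e}^{\mathrm{i}l_1\phi}\,\mathrm{e}^{-\mathrm{i}l_2\phi}\,\mathrm{d}\phi \;=\; 2\pi\,\delta_{l_1 l_2},$$
which is why data streams carried on different OAM modes can, in principle, be demultiplexed at the receiver.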
Research progress of two-dimensional transition metal dichalcogenide phase transition methods
Zhang Hao-Zhe, Xu Chun-Yan, Nan Hai-Yan, Xiao Shao-Qing, Gu Xiao-Feng
Following traditional semiconductors such as silicon and GaAs, in recent years the two-dimensional materials have attracted attention in the field of optoelectronic devices, thermoelectric devices and energy storage and conversion due to their many peculiar properties. However, the normal two-dimensional materials such as graphene, cannot be well used in the field of optoelectronics due to the lack of a band gap, and the black phosphorus is also greatly limited in practical applications due to its instability in the air. The two-dimensional transition metal dichalcogenides have attracted more attention due to the different atomic structures, adjustable energy band and excellent photoelectric properties. There are different crystal phases in transition metal dichalcogenides, some of which are stable in the ground state, and others are instable. Different phases exhibit different characteristics, some of which have semiconductor properties and others have like metal in property. These stable and metastable phases of transition metal dichalcogenides can be transformed into each other under some conditions. In order to obtain these metastable phases, thereby modulating their photoelectric performance and improving the mobility of the devices, it is essential to obtain a phase transition method that enables the crystal phase transition of the transition metal dichalcogenides. In this article, first of all, we summarize the different crystal structures of transition metal dichalcogenides and their electrical, mechanical, and optical properties. Next, the eight phase transition methods of transition metal dichalcogenides are listed, these being chemical vapor deposition, doping, ion intercalation, strain, high temperature thermal treatment, laser inducing, plasma treatment, and electric field inducing. After that, the research progress of these phase transition methods and their advantages and disadvantages are introduced. Finally, we sum up all the phase transition methods mentioned in this article and then list some of the problems that have not been solved so far. This review elaborates all of the presently existing different phase transition methods of transition metal dichalcogenides in detail, which provides a good reference for studying the phase transition of transition metal dichalcogenides in the future, the electrical performance regulated by different phases, and the applications of memory devices and electrode manufacturing.
Research progress of piezoelectrets based micro-energy harvesting
Zhang Mi, Zuo Xi, Yang Tong-Qing, Zhang Xiao-Qing
In this paper, the progress of micro-energy harvesters by using piezoelectret-based transducers as a core element is reviewed, including basic physical principle and properties of piezoelectrets, and their applications in micro-energy harvesting. Piezoelectret is electret-based piezoelectric polymer with a foamed structure. The piezoelectric effect of such material is a synergistic effect of the electret property of the matrix polymer and the foam mechanical structure in the material. Piezoelectret, featuring strong piezoelectric effect, flexibility, low density, very small acoustic impedance and film form, is an ideal electromechanical material for lightweight flexible sensors and mechanical energy harvesters. The piezoelectret prepared by means of grid, template patterning, supercritical CO2 assisted low-temperature assembly, lithography mold combined with rotary coating and hot pressing has regular voids and good piezoelectric properties. Piezoelectret has been used to harvest vibrational energy, human motion energy and sound energy. According to the stress direction applied to the piezoelectrets, operating modes of energy harvesters can be divided into 33 and 31 modes. The vibrational energy harvesters based on piezoelectret are utilized to harvest medium frequency vibrational energy generated by factory machines, aircrafts, automobiles, etc. Such energy harvesters can generate considerable power even in a small size. Human motion energy harvesters are generally used to power wearable sensors. The high sensitivity, lightweight, and flexibility of the piezoelectret make such a material a promising candidate for harvesting human motion energy. Owing to very small acoustic impedance, high figure-of-merit, flat response in audio and low-frequency ultrasonic range, the piezoelectrets are more appropriate for acoustic energy harvesting in air medium than conventional PZT and ferroelectric polymer PVDF.In the future, specific micro-energy harvesters using piezoelectrets as transduction material can be designed and fabricated according to the practical application environment, and their performance can be enhanced by using flexible connections of transduction elements.
Research advances in intervening opportunity class models for predicting human mobility
Liu Er-Jian, Yan Xiao-Yong
Predicting human mobility between locations is of great significance for investigating the population migration, traffic forecasting, epidemic spreading, commodity trade, social interaction and other relevant areas. The intervening opportunity (IO) model is the model established earliest from the perspective of individual choice behavior to predict human mobility. The IO model takes the total number of opportunities between the origin location and the destination as a key factor in determining human mobility, which has inspired researchers to propose many new IO class models. In this paper, we first review the research advances in the IO class models, including the IO model, radiation class models, population-weighted opportunity class models, exploratory IO class models and universal opportunity model. Among them, although the IO model has an important theoretical value, it contains parameters and has low prediction accuracy, so it is rarely used in practice. The radiation class models are built on the basis of the IO model on the assumption that the individual will choose the closest destination whose benefit is higher than the best one available in origin location. The radiation class models can better predict the commuting behavior between locations. The population-weighted opportunity class models are established on the assumption that when seeking a destination, the individual will not only consider the nearest locations with relatively large benefits, but also consider all locations in the range of alternative space. The population-weighted opportunity class models can better predict intracity trips and intercity travels. The exploratory IO class models are built on condition that the destination selected by the individual presents a higher benefit than the benefit of the origin and the benefits of the intervening opportunities. The exploratory IO class models can better predict the social interaction between individuals, intracity trips and intercity travels. The universal opportunity model is developed on the assumption that when an individual selects a destination, she/he will comprehensively compare the benefits between the origin and the destination and their intervening opportunity. The universal opportunity model presents a new universal framework for IO class models and can accurately predict the movements on different spatiotemporal scales. The IO class models have also been widely used in many fields, including predicting trip distribution in transportation science, modeling the purchasing behaviors of consumers in economics, detecting complex network communities in network science, measuring spatial interaction in economic geography and predicting infectious disease transmission in epidemiology. This paper focuses on the applications of IO class models in spatial interaction and epidemic spreading, and finally presents the discussion on the possible future research directions of these models.
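As a concrete example of the radiation class models surveyed above, the original radiation model is usually written (in the standard notation of that literature, not necessarily that of this review) as
$$\langle T_{ij}\rangle \;=\; T_i\,\frac{m_i\,n_j}{(m_i+s_{ij})\,(m_i+n_j+s_{ij})},$$
where $T_i$ is the total number of trips leaving location $i$, $m_i$ and $n_j$ are the numbers of opportunities (e.g. populations) at the origin and destination, and $s_{ij}$ is the total number of intervening opportunities within a circle of radius $r_{ij}$ centered at $i$, excluding $m_i$ and $n_j$.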
Analysis of COVID-19 spreading and prevention strategy in schools based on continuous infection model
Sun Hao-Chen, Liu Xiao-Fan, Xu Xiao-Ke, Wu Ye
After the COVID-19 epidemic leveled off in China, many provinces started to resume schooling. Long-term contact between students and teachers in such a closed environment can increase the possibility of an outbreak. Although school closure can effectively alleviate the epidemic, large-scale isolation of students not only causes social panic but also brings a huge social and economic burden, so before a school epidemic emerges, more scientific prevention and control measures should be selected and adopted. In this study, according to the virus excretion of COVID-19 patients over the course of the disease, the infectious capacity of patients is redefined. After introducing it into the traditional susceptible-exposed-infected-removed (SEIR) model, a continuous infection model that is more consistent with the actual transmission of COVID-19 is proposed. Secondly, the effective distance between students is calculated from real contact data. Based on the analysis of the effective distance, three types of isolation-area prevention and control measures are proposed and compared with the recently proposed digital contact tracing measures. We simulate the spread of COVID-19 in schools through real student contact data and the continuous infection model in order to compare the prevention and control effects of various measures in a school epidemic, and we evaluate the social impact of each measure by the cumulative number of quarantines when it is adopted. We find that COVID-19 can lead to larger-scale outbreaks in the continuous infection model than in the traditional SEIR model, so prevention and control measures verified in the continuous infection model are more convincing. Using digital contact tracing in schools can achieve results similar to those of closing schools, with the smallest number of quarantines. The research in this paper can help schools choose appropriate prevention and control measures, and the proposed continuous infection model can help researchers simulate the spread of COVID-19 more accurately.
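A minimal sketch of the modeling idea described above is given below, assuming a hypothetical time-since-infection infectivity profile; it illustrates a "continuous infection" SEIR-type simulation on random contacts, and is not the authors' calibrated model or their real contact network.

```python
# Toy "continuous infection" simulation: an infected individual's per-contact
# infection probability beta(tau) varies with time since infection (mimicking
# viral shedding), instead of being a single constant as in classical SEIR.
# beta_profile and all parameter values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

N = 500                       # school population
T = 60                        # days simulated
beta_profile = np.array(      # infection probability vs. day since infection
    [0.0, 0.02, 0.08, 0.12, 0.12, 0.08, 0.04, 0.02, 0.01, 0.0])
contacts_per_day = 8

# state: -1 = susceptible, otherwise the day on which infection occurred
state = np.full(N, -1)
state[0] = 0                  # one index case

for day in range(1, T):
    infected = np.where(state >= 0)[0]
    susceptible = np.where(state < 0)[0]
    for i in infected:
        age = day - state[i]
        if age >= len(beta_profile):
            continue          # no longer shedding (removed)
        if susceptible.size == 0:
            break
        p = beta_profile[age]
        partners = rng.choice(susceptible,
                              size=min(contacts_per_day, susceptible.size),
                              replace=False)
        newly = partners[rng.random(partners.size) < p]
        state[newly] = day
        susceptible = np.setdiff1d(susceptible, newly)
    print(day, int(np.sum(state >= 0)))   # cumulative infections per day
```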
Multi-frequency sinusoidal chaotic neural network and its complex dynamics
Li Ru-Yi, Wang Guang-Yi, Dong Yu-Jiao, Zhou Wei
A large number of animal experiments show that there is irregular chaos in the biological nervous systems. An artificial chaotic neural network is a highly nonlinear dynamic system, which can realize a series of complex dynamic behaviors, optimize global search and neural computation, and generate pseudo-random sequences for information encryption. According to the superposition theory of sinusoidal signals with different frequencies of brain waves, a non-monotone activation function based on the multifrequency-frequency conversion sinusoidal function and a piecewise function is proposed to make a neural network more consistent with the biological characteristics. The analysis shows that by adjusting the parameters, the activation function can exhibit the EEG signals in its different states, which can simulate the rich and varying brain activities when the brain waves of different frequencies and types work at the same time. According to the activation function we design a new chaotic cellular neural network. The complexity of the chaotic neural network is analyzed by the structural complexity based SE algorithm and C0 algorithm. By means of Lyapunov exponential spectrum, bifurcation diagram and basin of attraction, the effects of the activation function's parameters on its dynamic characteristics are analyzed in detail, and it is found that a series of complex phenomena appears in the chaotic neural network, such as many different types of chaotic attractors, coexistent chaotic attractors and coexistence limit cycles, which improves the performance of the chaotic neural network, and proves that the multi-frequency sinusoidal chaotic neural network has rich dynamic characteristics, so it has a good prospect in information processing, information encryption and other aspects.
General image encryption algorithm based on deep learning compressed sensing and compound chaotic system
Chen Wei, Guo Yuan, Jing Shi-Wei
Many image compression and encryption algorithms based on traditional compressed sensing and chaotic systems are time-consuming, have low reconstruction quality, and are suitable only for grayscale images. In this paper, we propose a general image compression encryption algorithm based on a deep learning compressed sensing and compound chaotic system, which is suitable for grayscale images and RGB format color images. Color images can be directly compressed and encrypted, but grayscale images need copying from 1 channel to 3 channels. First, the original image is divided into multiple 3 × 33 × 33 non-overlapping image blocks and the bilinear interpolation Bilinear and convolutional neural network are used to compress the image, so that the compression network has no restriction on the sampling rate and can obtain high-quality compression of image. Then a composite chaotic system composed of a two-dimensional cloud model and Logistic is used to encrypt and decrypt the compressed image (sliding scrambling and vector decomposition), and finally the decrypted image is reconstructed. In the reconstruction network, the convolutional neural network and bilinear interpolation Bilinear are mainly responsible for reconstructing the contour structure information, and the fully connected layer is mainly responsible for reconstructing and combining the color information to reconstruct a high-quality image. For grayscale images, we also need to calculate the average value of the corresponding positions of the 3 channels of the reconstructed image, and change the 3 channels into 1 channel. The experimental results show that the general image encryption algorithm based on deep learning compressed sensing and compound chaos system has great advantages in data processing quality and computational complexity. Although in the network the color images are used for training, the quality of grayscale image reconstruction is still better than that of other algorithms. The image encryption algorithm has a large enough key space and associates the plaintext hash value with the key, which realizes the encryption effect of one image corresponding to one key, thus being able to effectively resist brute force attacks and selective plaintext attacks. Compared with it in the comparison literature, the correlation coefficient is close to an ideal value, and the information entropy and the clear text sensitivity index are also within a critical range, which enhances the confidentiality of the image.
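The chaotic-scrambling ingredient can be illustrated with a small sketch; the code below uses only a logistic map for pixel permutation, and is not the authors' compound two-dimensional cloud model plus Logistic cipher (their sliding scrambling and vector decomposition steps are omitted, and the key values are arbitrary examples).

```python
# Logistic-map-driven pixel permutation: the chaotic sequence is sorted and
# the resulting index order is used to scramble (and later unscramble) pixels.
import numpy as np

def logistic_sequence(x0, mu, n, burn_in=1000):
    x = x0
    for _ in range(burn_in):          # discard transient iterations
        x = mu * x * (1.0 - x)
    seq = np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        seq[i] = x
    return seq

def scramble(img, x0=0.3456, mu=3.99):
    flat = img.reshape(-1)
    perm = np.argsort(logistic_sequence(x0, mu, flat.size))
    return flat[perm].reshape(img.shape), perm

def unscramble(scrambled, perm):
    flat = np.empty_like(scrambled.reshape(-1))
    flat[perm] = scrambled.reshape(-1)  # invert the permutation
    return flat.reshape(scrambled.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
enc, perm = scramble(img)
assert np.array_equal(unscramble(enc, perm), img)
```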
Thermoacoustic imaging based on noise suppression of multi-channel amplifier and additive circuit
Tang Yong-Hui, Zheng Zhu, Xie Shi-Meng, Huang Lin, Jiang Hua-Bei
Thermoacoustic imaging (TAI) is an emerging biomedical imaging method in which microwaves are used as an excitation source to generate acoustic signals. TAI combines the high contrast of microwave imaging with the high resolution of ultrasound imaging, and it is also noninvasive. However, the signal-to-noise ratio (SNR) of TAI is often very low, and the thermoacoustic signal usually has to be averaged many times to improve the SNR. Averaging the signal in this way significantly reduces the time resolution of TAI, which hinders the development of rapid TAI. In this paper, we propose to reduce the cost and improve the time resolution of TAI by using a multi-channel amplifier and an additive circuit. The received thermoacoustic signal is divided into 4 channels and fed into 4 amplifiers simultaneously. After being amplified, the signals are summed and collected by the data acquisition system for reconstructing the image. The phantom results indicate that, with the multi-channel amplifier and additive circuit adopted, the time resolution of TAI increases 5 times and the SNR rises from 6 dB to 12 dB. The method proposed in this paper is helpful in promoting the development and clinical application of TAI, and is of particular significance for developing ultra-fast TAI.
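The reported gain is consistent with simple summation arithmetic, assuming identical signals and independent, equal-variance noise in the four channels:
$$\mathrm{SNR}_{\rm sum}\;=\;\frac{4A}{\sqrt{4}\,\sigma}\;=\;2\,\frac{A}{\sigma},\qquad 20\log_{10}2\;\approx\;6\ \mathrm{dB},$$
which matches the observed rise from 6 dB to 12 dB.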
Hadron-quark deconfinement phase transition in hybrid stars
Gong Wu-Kun, Guo Wen-Jun
Astronomical statistics shows that the mass of neutron star is of the order of the solar mass, but the radius is only about ten kilometers. Therefore, the neutron star is highly condensed and there may be a variety of competing material phases inside the compact star. Hadron-quark deconfinement phase transition that is poorly understood at high density can be studied by the matter properties of hybrid star. The hybrid star contains many kinds of material phases, which cannot be described uniformly by one theory. So, different material phases are described by different theories. The hadronic phase is described by the relativistic mean-field theory with parameter set FSUGold including ω2ρ2 interaction term, and the quark phase is described by an effective mass bag model in which the quark mass is density-dependent. The hadron-quark mixed phase is constructed by the Gibbs phase transition, and the properties of hybrid star in β equilibrium is studied in this model. It is found that the bag constant B has a great influence on the starting point and ending point of the hadron-quark deconfinement phase transition and the particle composition in the hybrid star. Comparing with the starting point of phase transition, the influence of B on the ending point of phase transition is very obvious. For the hybrid star, the equation of state of matter becomes stiffer at low density and softer at high density as B increases. The overall effect is that the slope of the mass-radius curve increases with B increasing. The calculated results show that the maximum mass of hybrid star is between 1.3 solar mass and 1.4 solar mass (M☉), and the radius is between 9 km and 12 km. In addition, the influence of attractive and repulsive Σ potential on the properties of hybrid stars are studied. The results show that the Σ potential has a great influence on the particle composition in the hybrid star. We also find that the repulsive Σ potential makes the hybrid star have a greater maximum mass then an attractive Σ potential. For the attractive Σ potential, the maximum mass of hybrid star is 1.38M☉, while for the repulsive Σ potential, the maximum mass of hybrid stars is 1.41M☉.
A coding metasurface antenna array with low radar cross section
Hao Biao, Yang Bin-Feng, Gao Jun, Cao Xiang-Yu, Yang Huan-Huan, Li Tong
An aperiodic metasurface antenna array with low radar cross section (RCS) is designed. The upper patches of the two antenna elements have the same shape and are placed at an orthogonal position, which can effectively reduce the workload of simulating the reflection characteristics of the patch. As antenna elements, they have identical operational band and polarization mode, and as metasurfaces, they can form an effective phase difference of 180° ± 37°. The RCS of the array is reduced mainly by phase cancellation under the x polarization and by absorption under the y polarization. According to the coding metamaterial theory, the two elements can be coded aperiodically by using the programming software. Regarding element A and element B as "0" and "1", respectively, the coding matrix can be solved by a genetic algorithm. Element A and element B are arranged according to positions "0" and "1" to obtain a proposed array. The scattering field of proposed array is diffusive, and the peak RCS is effectively reduced. In order to highlight the characteristics of the proposed array, the chessboard-type array is designed for comparison. The simulation results show that the radiation performance of proposed array is good. Comparing with the metal board of the same size, the 6 dB reduction bandwidth of the monostatic RCS is 4.8-7.4 GHz (relative bandwidth is 42.6%) under the x polarization and 4.6-7.8 GHz (relative bandwidth is 51.6%) under the y polarization. Comparing with the chessboard type array, the scattering energy distribution of the designed antenna array is very uniform and the peak RCS in space reduces obviously. When a 4.8 GHz electromagnetic wave is incident with different incident angles and polarization modes, the scattering field is diffusive. Compared with other similar arrays, the proposed array has advantages of simple design process and even scattering field. The experimental results are in good agreement with the simulation results. This work makes full use of the scattering characteristics of the antenna element itself to solve the problem that the array antenna possesses both good radiation characteristics and low scattering characteristics at the same time, and improves the design process of the antenna patch. This design method has certain universality and reference significance for designing the low RCS antenna array.
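The quantity such a coding optimization works on can be illustrated with a far-field array-factor sketch. The code below is an assumption-laden toy (it ignores the element pattern, mutual coupling and the antenna's radiation mode, and is not the authors' full-wave model): it evaluates the scattering array factor of a binary 0/π coding matrix, whose spatial peak a genetic algorithm would minimize.

```python
# Array factor of a planar 0/pi coding aperture over direction cosines (u, v).
import numpy as np

def array_factor(coding, d=0.5, n_angles=181):
    """coding: 2D array of 0/1; d: element spacing in wavelengths."""
    M, N = coding.shape
    phase = np.pi * coding                      # "0" -> 0 rad, "1" -> pi rad
    u = np.linspace(-1.0, 1.0, n_angles)        # u = sin(theta) cos(phi)
    v = np.linspace(-1.0, 1.0, n_angles)        # v = sin(theta) sin(phi)
    m = np.arange(M)[:, None, None, None]
    n = np.arange(N)[None, :, None, None]
    k = 2 * np.pi                               # wavenumber, lambda = 1
    expo = np.exp(1j * (phase[:, :, None, None]
                        + k * d * (m * u[None, None, :, None]
                                   + n * v[None, None, None, :])))
    return np.abs(expo.sum(axis=(0, 1)))        # |AF(u, v)|

chessboard = np.indices((8, 8)).sum(axis=0) % 2
rng = np.random.default_rng(1)
random_code = rng.integers(0, 2, size=(8, 8))
print("peak AF, chessboard code:", array_factor(chessboard).max())
print("peak AF, random code:   ", array_factor(random_code).max())
```

A genetic algorithm would treat the 0/1 matrix as the chromosome and the peak of the array factor as the fitness to minimize, which is the diffusion-type scattering behaviour described above.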
Broadband efficient focusing on-chip integrated nano-lens
Tian Zi-Cong, Guo Yi-Min, Hu Chen-Yan, Wang Hui-Qin, Lu Cui-Cui
As a basic optical element, optical lens is widely used for realizing the focusing, imaging and optical communication systems. Light of different wavelengths will propagate at different speeds. A beam of polychromatic light will produce chromatic dispersion after passing through a single optical device, which prevents the ordinary lenses from focusing the light of different wavelengths into a point. This means that the light of different wavelengths cannot be focused ideally. Traditional focusing systems can solve this problem by superimposing multiple lenses, but this is at the expense of increasing the complexity, weight, and cost of the system, and is not suitable for highly integrated nano-optical systems. At present, a better solution is to use the plane metalens, that is, using the metasurface to control the amplitude, phase and polarization at each point in space. However, the plane metalens is difficult to directly integrate on the chip. An intelligent algorithm developed by combining finite element method with genetic algorithm is used to optimize the design of multi-channel on-chip wavelength router devices and polarization router devices. In this paper, combining with years' research results of the theory of multiple scattering coherent superposition of disordered media, the use of intelligent algorithm to design an on-chip integrated nano-lens that can achieve efficient focusing from the visible to the near infrared band. In the lens structure SiO2 serves as a substrate, and the arrangement structure of SiC rectangular column is designed. The substrate size is only 2 μm × 2 μm. The lens achieves low-dispersion focusing in the band from 470 nm to 1734 nm, with a focusing efficiency of over 55% at the highest level and 30% at the lowest level, and an average focusing efficiency of 42.1%. A 200-nm waveguide is added behind the focusing region. After refocusing through the waveguide, the laser beam with a size of 2 μm can be focused by the coupling of the lens and the waveguide into a beam below 200 nm in size. The focusing efficiency goes up to 80%. At the same time, the intelligent algorithm can be applied to different types of structures. The focusing lens structures composed of triangle, diamond, or circular nano columns are designed, which can achieve an approximate focusing effect and efficient coupling propagation efficiency. This work provides important ideas for developing broadband and efficient focusing nano-lens, as well as a new way to achieve the high-density integrated nanophotonic devices.
Design and analysis of polarization imaging lidar and short wave infrared composite optical receiving system
Feng Shuai, Chang Jun, Hu Yao-Yao, Wu Hao, Liu Xin
The basic principle of three-dimensional (3D) imaging lidar-an active imaging technology, is parallel laser ranging. Compared with traditional passive sensor imaging and microwave radar, the 3D imaging lidar has obvious advantages, so it promises to possess a wide application prospect. Non-scanning 3D imaging lidar has seven modulation modes. Among them, the 3D imaging lidar based on polarization modulation has the advantages of large measurement range, high measurement accuracy, fast imaging speed, and no motion artifacts. At the same time, it is not limited by other modulation methods, such as intensified charge coupled device and avalanche photodiode array detectors, and its process is complex but easy to saturate and damage. However, its disadvantage is that it requires two cameras, electro-optic crystal limits the imaging field of view, and is easily affected by atmospheric conditions such as incident angle and cloud and mist. In order to overcome the above shortcomings, in this paper we propose to use polarization imaging lidar and short-wave infrared zoom optical system to construct a dual-mode target detection imaging system by means of common aperture, which can not only reduce the volume of the two systems and solve the coaxial problem of the two systems, but also solve the problems such as the influence of atmospheric conditions (small viewing angle, incident angle and cloud and mist) on imaging quality of polarization modulation imaging lidar and the limitation of low energy of short-wave infrared imaging targets. According to the above ideas, the design and research of polarization imaging lidar and shortwave infrared composite optical system are carried out. The optical design software is used to complete the optical design of the telescope group, shortwave infrared imaging lens group, polarization modulation lens group and the system as a whole. In the telescope group the off-axis three-mirror structure is used to solve the blocking problem of the center of the field of view, and in the shortwave infrared lens group the type of mobile zoom compensation group is used to realize zooming. Analysis of the image quality of the optical system shows that the designed optical system has high imaging quality and its optical design meets the requirements for system design. The optical simulation software is used to simulate the imaging process of the optical system. The results show below. The polarization imaging lidar and shortwave infrared imaging have high quality, the stray light has little influence on the imaging of the system, the target edge imaging is clear, and the independent square targets with a 1-m in diameter can be distinguished. The field of view of the short-wave infrared short-focus mode is 9 times that of the long-focus mode. The shortwave infrared telescopic mode is basically consistent with the field of view of polarization imaging lidar. The received illuminance value of polarization imaging lidar is about 2.4 times that of short-wave infrared long focal length mode. The overall energy distribution of polarization imaging lidar is more balanced, and the imaging effect is better. The method adopted in this paper provides a new idea for studying the polarization modulated imaging lidar. The next step in experimental research is to complete the physical processing, assembly and adjustment, and selection of suitable targets.
Aberration correction for ellipsoidal window optical system based on Zernike mode coefficient optimization
Liang Dian-Ming, Wang Chao, Shi Hao-Dong, Liu Zhuang, Fu Qiang, Zhang Su, Zhan Jun-Tong, Yu Yi-Xin, Li Ying-Chao, Jiang Hui-Lin
The traditional window of high-speed aircraft is hemispherical, and the aberration produced by such a window is constant. However, the hemispherical window is difficult to meet the requirements of a high speed flight of aircraft. Aspheric windows are usually used to replace hemispherical windows to increase the aerodynamic performance. However, the aspheric window will introduce dynamic aberrations that fluctuate with the change of scanning field-of-view (FOV), which becomes the key issue of the development of optoelectronic imaging systems for high-speed aircraft. For the ellipsoidal window optical system with scanning FOV of ±60°, an aberration correction method in large FOV combined with the static correction and non-wavefront-sensor adaptive optical correction is studied. In the initial optical structure design, the types of system aberration are reduced and the fifth-order Zernike aberration is eliminated during initial aberration correction, thus, the number of the subsequent adaptive optimization control variables is reduced. According to the characteristics of the deformable mirror, the driving voltage of the driver is generally taken as a variable of the genetic algorithm. However, when the deformable mirror used has many units, too many variables will directly lead the optimization speed of the algorithm to greatly decrease. So, according to the aberration characteristics of the ellipsoidal optical window, using the conversion matrix between the Zernike polynomial coefficients and the voltages of the deformable mirror driver, the optimization variable is reduced from 140 driver voltages to 2−9 Zernike stripe polynomial coefficients in number. Finally, the genetic algorithm based on Zernike model is used to control the shape of the deformable mirror and correct the residual aberration. Taking 2−9 Zernike mode coefficients, 2−16 Zernike mode coefficients and 140 driver voltages as the variables of genetic algorithm respectively, the optimization generations of genetic algorithm under different variables are obtained. The simulation results show that the optimization speed of each typical scanning field of view is increased more than 95% by changing the variable from 140 driver voltages to 2−9 Zernike mode coefficients, and the imaging quality is close to the diffraction limit. This optimization method can not only correct the aberrations caused by the special-shaped optical window, but also compensate for the error caused by processing and aligning the optical system.
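A minimal sketch of the variable reduction described above follows, with placeholder (randomly generated) influence and Zernike matrices standing in for real calibration data: once the deformable mirror's influence matrix is known, the 140 actuator voltages follow from the handful of Zernike coefficients by a least-squares fit, so only the Zernike coefficients need to be searched by the genetic algorithm.

```python
# Map Zernike mode coefficients to deformable-mirror actuator voltages
# through an (assumed) influence matrix, via least squares.
import numpy as np

n_act, n_pix, n_modes = 140, 2000, 8          # 8 modes ~ Zernike terms 2-9

rng = np.random.default_rng(0)
influence = rng.normal(size=(n_pix, n_act))        # placeholder: wavefront per volt
zernike_basis = rng.normal(size=(n_pix, n_modes))  # placeholder: sampled Zernike modes

def voltages_from_zernike(coeffs):
    """Target wavefront = zernike_basis @ coeffs; fit actuator voltages to it."""
    target = zernike_basis @ coeffs
    v, *_ = np.linalg.lstsq(influence, target, rcond=None)
    return v

coeffs = np.zeros(n_modes)
coeffs[0] = 0.5                               # e.g. one low-order aberration term
v = voltages_from_zernike(coeffs)
print(v.shape)                                # (140,) actuator commands
```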
Deep-learning-based denoising method for a joint transform correlator optical image encryption system
Lang Li-Ying, Lu Jia-Lei, Yu Na-Na, Xi Si-Xing, Wang Xue-Guang, Zhang Lei, Jiao Xiao-Xue
There is serious noise interference in the decryption process of the joint transform correlator (JTC) optical encryption system, so the quality of the decrypted image cannot meet the accuracy requirements in most cases. The quality of decrypted image can be improved to a certain extent when the phase key is designed by the Gerchberg-Saxton algorithm and the iterative algorithm fuzzy control algorithm, but the complexity of the design process is inevitable and the quality of the decrypted image still needs improving. Recently, the in depth learning technology has attracted the attention of scholars in the fields of computer vision, natural language processing and optical information processing. In order to deal with the noise interference in the JTC optical encryption system, combining the current deep learning method, in this paper we propose a new denoising method for JTC optical image encryption system based on in depth learning, the dense modules are added into the generated network to enhance the reuse of feature information and improve the performance of the network. The latest self-attention mechanism area is added into the network to distinguish the weights of different channels and learn the relationship between channel and channel, so that the network can selectively strengthen the useful feature information but suppress useless feature information. The density module and the channel attention module are integrated into a DCAB synthesis module, which can effectively extract the image feature information and improve the performance of the network. The receptive field of the convolution kernel is enlarged by two down-sampling and the feature map is restored to its original size by two up-sampling. The VGG-19 is used to extract high-frequency details and texture features of images, meanwhile, the non-adversarial loss and mean-square error (MSE) loss are added into the loss function to reduce the difference among the image samples. The quality of noise-reduced images in this method are obviously better than that of the existing denoising algorithms by evaluating intuitive visual observation or SSIM (structural similarity), PSNR (peak signal to noise ratio) and MSE. The results of numerical calculation and simulation experiments show that this method can greatly eliminate the influence of noise on the JTC optical image encryption system, and effectively improve the effectiveness and feasibility of JTC optical image encryption system for high-quality image encryption.
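The channel-attention idea mentioned above can be sketched with a common squeeze-and-excitation style block; this is offered as an assumed stand-in for the paper's attention module, not its exact layer (the DCAB module additionally contains dense connections, which are omitted here).

```python
# Squeeze-and-excitation style channel attention: global-average-pool each
# channel, learn per-channel weights through a small bottleneck MLP, and
# reweight the feature maps channel by channel.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze
        self.fc = nn.Sequential(                     # excitation
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # channel-wise reweighting

feats = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feats).shape)             # torch.Size([2, 64, 32, 32])
```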
Self-similarity transformation and two-dimensional rogue wave construction of non-autonomous Kadomtsev-Petviashvili equation
Zhang Jie-Fang, Jin Mei-Zhen, Hu Wen-Cheng
Rogue wave is a kind of natural phenomenon that is fascinating, rare, and extreme. It has become a frontier of academic research. The rogue wave is considered as a spatiotemporal local rational function solution of nonlinear wave model. There are still very few (2 + 1)-dimensional nonlinear wave models which have rogue wave solutions, in comparison with soliton and Lump waves that are found in almost all (2 + 1)-dimensional nonlinear wave models and can be solved by different methods, such as inverse scattering method, Hirota bilinear method, Darboux transform method, Riemann-Hilbert method, and homoclinic test method. The structure and evolution characteristics of the obtained (2 + 1)-dimensional rogue waves are quite different from the prototypes of the (1 + 1)-dimensional nonlinear Schrödinger equation. Therefore, it is of great value to study two-dimensional rogue waves.In this paper, the non-autonomous Kadomtsev-Petviashvili equation is first converted into the Kadomtsev-Petviashvili equation with the aid of a similar transformation, then two-dimensional rogue wave solutions represented by the rational functions of the non-autonomous Kadomtsev-Petviashvili equation are constructed based on the Lump solution of the first kind of Kadomtsev-Petviashvili equation, and their evolutionary characteristics are illustrated by images through appropriately selecting the variable parameters and the dynamic stability of two-dimensional single rogue waves is numerically simulated by the fast Fourier transform algorithm. The obtained two-dimensional rogue waves, which are localized in both space and time, can be viewed as a two-dimensional analogue to the Peregrine soliton and thus are a natural candidate for describing the rogue wave phenomena. The method presented here provides enlightenment for searching for rogue wave excitation of (2 + 1)-dimensional nonlinear wave models.We show that two-dimensional rogue waves are localized in both space and time which arise from the zero background and then disappear into the zero background again. These rogue-wave solutions to the non-autonomous Kadomtsev-Petviashvili equation generalize the rogue waves of the nonlinear Schrödinger equation into two spatial dimensions, and they could play a role in physically understanding the rogue water waves in the ocean.
Anti-plane fracture problem of four nano-cracks emanating from a regular 4n-polygon nano-hole in magnetoelectroelastic materials
Yang Dong-Sheng, Liu Guan-Ting
According to the conformal mapping from the exterior region of the regular n-polygon hole to the exterior region of a unit circle and from the exterior region of four cracks emanating from a circle to the interior region of a unit circle, a new conformal mapping is constructed to map the exterior region of four cracks emanating from a regular 4n-polygon hole to the interior of a unit circle. Then, based on the Gurtin-Murdoch surface/interface model and complex method, the anti-plane fracture of four nano-cracks emanating from a regular 4n-polygon nano-hole in magnetoelectroelastic material is studied. The exact solutions of stress intensity factor, electric displacement intensity factor, magnetic induction intensity factor, and energy release rate are obtained under the boundary condition of magnetoelectrically impermeable with considering the surface effect. Without considering the effect of the surface effect, the exact solution of four cracks emanating from a regular 4n-polygon hole in a magnetoelectroelastic material can be obtained. The numerical results show the influences of surface effect and the size of defect on the stress intensity factor, electric displacement intensity factor, magnetic induction intensity factor and energy release rate under the magnetoelectrically impermeable boundary condition. It can be seen that the stress intensity factor, electric displacement intensity factor, and magnetic induction intensity factor are significantly size-dependent when considering the surface effects of the nanoscale defects. And when the size of defect increases to a certain extent, the influence of surface effect begins to decrease and finally tends to follow the classical elasticity theory. When the distance between the center and the vertex of the regular 4n-polygon nano-hole is constant, the dimensionless field intensity factor decreases gradually with the increase of the number of edges, and approaches to the conclusion of a circular hole with four cracks. With the increase of the relative size of the crack, the dimensionless field intensity factor increases gradually. The dimensionless energy release rate of the nanoscale cracked hole has a significant size effect. The increase of mechanical load will increase the normalized energy release rate. The normalized energy release rate first decreases and then increases with electrical load increasing. The normalized energy release rate decreases with magnetic load increasing.
Mechanism of bubble sinking in vertically vibrating water
Zhao Xiao-Gang, Yang Hao-Ran, Zhang Qi, Cheng Lin, Zhang Xiang-Yu, Wang Feng-Long, Duan Cheng-Bo, Zhuo Wei, Xu Chun-Long, Hou Zhao-Yang
When a container filled with water is subjected to vertical vibration, bubbles in the water may sink. This phenomenon exists widely in the field of engineering, and has a non-negligible influence on aerospace engineering and ship engineering. Therefore, it is of great significance to study the movement of bubble sinking in order to reduce the adverse effect caused by bubble sinking in the project. In previous papers, the effect of Basset force on bubble motion was usually ignored. In this paper, the bubble motion model based on the ideal gas equation is built for spherical bubbles, and the influence of the Basset force on the bubble motion is considered in the model. In the process of solving Basset force, the motion is directly separated and the convergence factor is introduced in theoretical solution. The equal step composite trapezoid formula is applied to the numerical solution. The results of numerical calculation show that the added mass force is important for bubble sinking. We find that the Basset force has no effect on the stable oscillation position of bubble, but it can accelerate the later trajectory of bubble motion. Importantly, we demonstrate that the bubble is hindered by the following component forces: buoyancy, viscous resistance, and flow thrust (which are ordered from large to small value). The movement of the bubble is observed to be in the form of oscillation, and there exists a depth, i.e. a critical depth: the bubble oscillate steadily at this depth, specifically, the bubble rises above this depth and sinks below this depth. When the vibration pressure changes, the location of the bubble's stable oscillation will also be affected. The origin can be ascribed to the change of added mass force caused by the change of vibration pressure. Meanwhile, on the basis of digital image processing method, denoising, filtering, local stretching, image binarization and image filling are used to extract the characteristic dimension of bubbles. The theoretical value of the critical depth of bubble sinking matches the experimental result and their relative error is less than 5%. These new findings enrich the understanding of the moving bubbles in liquid materials used in nuclear reactors, rocket propulsion fuels and chemical experiments.
Two-dimensional numerical study of effect of magnetic field on laser-driven Kelvin-Helmholtz instability
Sun Wei, An Wei-Ming, Zhong Jia-Yong
Kelvin-Helmholtz instability is the basic physical process of fluids and plasmas. It is widely present in natural, astrophysical, and high energy density physical phenomena. With the construction of strong laser facilities, the research on high energy density physics has gained new impetus. However, in recent years the magnetized Kelvin-Helmholtz instability was rarely studied experimentally. In this work, we propose a new experimental scheme, in which a long-pulsed nanosecond laser beam is generated by a domestic starlight III laser facility. The whole target consists of two parts: the upper part that is the CH modulation layer with lower density, and the lower part that is the Al modulation layer with higher density. The laser beam is injected from one side of the CH modulation layer and generates a CH plasma outflow at the back of the target. During the transmission of the CH plasma outflow, the Al modulation layer is radiated and ionized, which makes the Al modulation layer generate an Al plasma outflow. The interaction between the Al plasma outflow and the CH plasma outflow produces a velocity shear layer, and then Kelvin-Helmholtz instability will gradually form near the Al modulation layer. In this paper, the open-source FLASH simulation program is used to conduct a two-dimensional numerical simulation of the Kelvin-Helmholtz instability generated by the laser-driven modulation target. We use the FLASH code, which is an adaptive mesh refinement program, developed by the Flash Center at the University of Chicago, and is well-known in astrophysics and space geophysics, to create a reference to the magnetohydrodynamic solution in our experiment. At present, this code introduces a complete high-energy-density physical modeling module, which is especially suitable for simulating intense laser ablation experiments. The equation of state and opacity tables of targets are based on the IONMIX4 database. The evolution of Kelvin-Helmholtz vortices, separately, in the Biermann self-generated magnetic field, the external magnetic field, and no magnetic field are investigated and compared with each other. It is found that the self-generated magnetic field hardly changes the morphology of the Kelvin-Helmholtz vortex during the evolution of Kelvin-Helmholtz instability. The external magnetic field parallel to the fluid direction can stabilize the shear flow. The magnetic field mainly stabilizes the long wave disturbance. The study results in this work can provide theoretical guidance for the next step of the Kelvin-Helmholtz experiment under a strong magnetic environment in the high energy density laser facility.
Gas-liquid two-phase flow of liquid film breaking process under shock wave
Peng Xu, Li Bin, Wang Shun-Yao, Rao Guo-Ning, Chen Wang-Hua
The gas-liquid two-phase flow of liquid dispersing and breaking under the action of shock wave includes complex physical phenomena, such as turbulent mixing of gas-liquid two-phase, instability and breakage of liquid interface, and formation of internal cavity structure after atomization. In order to investigate the shock-wave-caused breaking process of the liquid film, a three-dimensional numerical simulation of the gas-liquid two-phase flow process is performed by using the computational fluid dynamics method. In the simulation, the Mach number of shock wave is 1.5 and the thickness of liquid film is 2 mm. The finite volume method is used to solve the three-dimensional Navier-Stokes equation. The volume of fluid model is applied to the gas-liquid two-phase flow. The k-ε double equation turbulence model is selected for the turbulence calculation. The evolution process of the wave system structure of the shock wave and the deformation, breakage and atomization characteristics of the liquid film are obtained, and compared with the experimental results. The results show that the incidence, reflection, and transmission phenomena occur during the interaction between the shock wave and the liquid film, and the intensity of the transmitted shock wave and the liquid surface tension have an important effect on the breaking process of the liquid film. The transmitted shock wave affects the shape of the broken cloud cluster on the left of the liquid film, while the incident shock wave and reflected shock wave affect the shape of the broken cloud cluster on the right side of the liquid film. The volume of the atomized cloud formed in the breaking process of the liquid film increases rapidly, first reaching 6.7 dm3 within 2.5 ms, then keeping stable basically. After the shock wave exits from the tube, a long narrow jet is formed. The maximum velocity reaches 519 m/s and appears in the interior of the jet, and then decreases continuously. Under the action of the jet, an expanding three-dimensional cavity structure is formed inside the atomizing cloud, and an annular vortex with negative pressure in the core area occurs in the cavity structure. Finally, the annular vortex continuously entrains the surrounding fluid in the process of forward movement, the strength of the vortex decreases and gradually dissipates in the space. This work is conducive to further understanding the interaction process of gas-liquid two-phase flow.
In situ observation of phase transition in polycrystalline under high-pressure high-strain-rate shock compression by X-ray diffraction
Chen Xiao-Hui, Tan Bo-Zhong, Xue Tao, Ma Yun-Can, Jin Sai, Li Zhi-Jun, Xin Yue-Feng, Li Xiao-Ya, Li Jun
The knowledge of phase transition of materials under dynamic loading is an important area of research in inertial confinement fusion and material science. Though the shock-induced phase transitions of various materials over a broad pressure range have been studied for decades, the loading strain rates in most of these experiments are not more than $10^{6}\;{\rm s}^{-1}$. However, in contrast with the strain rate range where the phase diagram is a good predictor of the crystal structure of a material, at higher strain rates ($> 10^{6}\;{\rm s}^{-1}$) the measured phase diagram can be quite different, not only in shifting the boundary line between various phases, but also in giving a different sequence of crystal structures. A high-power laser facility can drive a shock wave and simultaneously provide a precisely synchronized ultra-short and ultra-intense X-ray source. Here, based on the Prototype laser facility, an in situ X-ray diffraction platform for diagnosing shock-induced phase transition of polycrystalline material is established. The in situ observation of material phase transition under high-strain-rate shock loading is carried out with the typical metals vanadium and iron. Diffraction results are consistent with vanadium remaining in the body-centered-cubic structure up to 69 GPa, while iron transforms from the body-centered-cubic structure into the hexagonal-close-packed structure at 159 GPa. The compressive properties of vanadium and iron obtained in the in situ X-ray diffraction experiment are in good agreement with their macroscopic Hugoniot curves. The decrease in the lattice volume over the pressure step period yields a strain rate on the order of $10^{8}-10^{9}\;{\rm s}^{-1}$. The availability of the presented in situ X-ray diffraction platform offers the potential to extend our understanding of the kinetics of phase transition in polycrystalline materials under high-pressure high-strain-rate shock compression.
Isotope effect of carrier transport in organic semiconductors
Liu Xuan, Gao Teng, Xie Shi-Jie
Isotopic substitution can effectively tune the device performances of organic semiconductors. According to the experimental results of isotope effects in electric, light and magnetic process in organic semiconductors, we adopt the tight-binding model with strong electron-phonon coupling to study the isotope effects on carrier transport. We try to give a quantificational explanation and show the physical origin of isotope effects on mobility in organic semiconductors in this work. Using polaron transport dynamics with diabatic approach, we simulate the carrier transport in an array of small molecule crystals under weak bias. Because of strong electron-phonon coupling in organic materials, an injected electron will induce lattice distortion, and the carriers are no longer free electrons or holes, but elementary excitations such as solitons, polarons or bipolarons. Our simulation results indicate that the existence of deuterium and 13C element will reduce the mobility of organic material, which means that the isotopic substitution can be utilized to manifest organic device performance. Besides, we also find that the isotope effect on mobility will increase with electron-phonon coupling increasing. This suggests that both the mass of lattice groups and electron-phonon coupling should be taken into account to understand the isotope effects in organic semiconductors. With the consideration of that, we derive the effective mass of polaron based on the continuum model, and verify that effective mass can successfully describe the isotope effect on mobility. The effective mass of carrier can be measured to represent the property of a material, which can tell us whether we need the isotopic substitution in organic layer to improve the device performance. Then we present the microcosmic movement of a polaron at the moment when it encounters isotopic substituted molecules. We come to the conclusion that the isotopic distribution will affect the instantaneous speed of the carrier, but has little effect on the mobility of the whole device when the substituted concentration remains constant. In conclusion, after simulating various possible isotope effects in materials, analyzing its physical mechanism and comparing calculation results in experiment, we provide a theoretical foundation for describing the isotope effects on mobility, which can be a basis of improving the performances of organic semiconductor devices.
Interface performance of PbTe-based thermoelectric joints
Wang Ya-Ning, Chen Shao-Ping, Fan Wen-Hao, Guo Jing-Yun, Wu Yu-Cheng, Wang Wen-Xian
The conversion efficiency of thermoelectric material PbTe is high. A high-quality and high-conversion-efficiency PbTe thermoelectric connector is investigated systematically. Excess Pb in composition can increase the carrier concentration and improve the thermoelectric performance of PbTe. The composite electrode can improve the interface barrier and reduce the contact resistance. Traditional processes of making contacts onto bulk crystalline PbTe-based materials do not work for reducing the contact resistance by inhibiting element diffusion and increasing the shear strength at the same time. In this study, we consider a composite electrode which can form an intermediate layer to suppress the diffusion of the Pb element on the PbTe side. This work not only reduces the contact resistance, but also increases the shear strength. The sample Pb50.01Te49.99 is obtained by adjusting the stoichiometric ratio of PbTe; Te and Pb are mixed in the Fe electrode. The composite electrode and Pb50.01Te49.99 are hot-pressed and sintered in one step to obtain the required PbTe thermoelectric electrode joint. We find that the contact resistance of the composite electrode is reduced by nearly 75% compared with that of metallization layer (Fe) connection. The smallest value is 26.610 μΩ·cm2 which is closer to the lowest 10 μΩ·cm2 reported in the literature than the counterpart of pure Fe electrode, and the shear strength is also greatly improved simultaneously. This work provides a new idea for obtaining PbTe thermoelectric connectors with excellent performance.
First-principles study of atomic bond nature of one-dimensional carbyne chain under different strains
Hou Lu, Tong Xin, Ouyang Gang
One-dimensional (1D) carbyne chain has the potential applications in the nanoelectronic devices due to its unique properties. Although some progress of the mechanical and thermal properties of 1D carbyne chain has been made, the physical mechanism of the strain modulation of atomic bond nature remains unclear. In order to explore the strain effects on the mechanical and related physical properties of 1D carbyne chain, we systematically investigate the strain-dependent bond nature of 1D carbyne chain based on the first-principles calculations of density functional theory and generalized gradient approximation. It is found that when the compressive strain is 16%, the bonding nature of 1D carbyne chain is changed, and the bond length alternation of single and triple bonds in 1D carbyne chain tends to zero, which originates from the difference in bond strength between single bond and triple bond. Moreover, 1D carbyne chain can change from semiconductor into metal when the compressive strain is 16% indicated by analyzing the band structure and related differential charge density. When the strain is 17%, the phonon spectrum has an imaginary frequency. Besides, when the ambient temperature is less than 510 K, the heat capacity of 1D carbyne chain decreases with strain increasing. However, more phonon modes will be activated at larger strains when the temperature is higher than 510 K, and the heat capacity is enhanced gradually with strain increasing. Also, the stiffness coefficient of 1D carbyne chain is larger than that of graphene and carbon nanotube. These results conduce to the fundamental understanding of atomic bond nature in 1D carbyne chain under different strains.
Half-metallic magnetism and electronic structures of CrPSe3 monolayers with multiple Dirac cones
Yang Jun-Tao, Xiong Yong-Chen, Huang Hai-Ming, Luo Shi-Jun
According to first-principles calculations within the PBE+U method and a tight-binding model, the magnetic properties and electronic structures of the two-dimensional (2D) CrPSe3 monolayer were investigated. Constructed from a Cr-honeycomb hexagonal lattice, 2D CrPSe3 was predicted to be in a half-metallic ferromagnetic state with dynamic stability, confirmed by a phonon spectrum with no imaginary dispersion. The Curie temperature was estimated as 224 K by Monte Carlo simulation within the Metropolis algorithm under periodic boundary conditions. The thermal stability of the CrPSe3 monolayer was estimated at 300 K by a first-principles molecular dynamics simulation. It is found that the magnetic ground state of the CrPSe3 monolayer is determined by a competition between the antiferromagnetic d-d direct exchange interactions and the Se-p orbital mediated ferromagnetic p-d superexchange interactions. Most interestingly, in the half-metallic state the band structure exhibits multiple Dirac cones in the first Brillouin zone: two cones at the K point showing a very high Fermi velocity ${v_{\rm F}}(K) = 15.8 \times 10^5\;{\rm m\cdot s^{-1}}$, about twice the $v_{\rm F}$ of graphene in the vicinity of the Fermi level, and six cones at the $K'/2$ points with ${v_{\rm F}}(K'/2) = 10.1 \times 10^5\;{\rm m\cdot s^{-1}}$, close to the graphene value. These spin-polarized Dirac cones are mostly composed of Cr ${\rm d}_{xz}$ and ${\rm d}_{yz}$ orbitals. The novel electronic structure of the CrPSe3 monolayer is also confirmed by the HSE06 functional. A tight-binding model was built based on the Cr-honeycomb structure with two Cr-d orbitals as the basis, with the first, second and third nearest-neighboring interactions, further demonstrating that the multiple Dirac cones are protected by the Cr-honeycomb lattice symmetry. Our findings indicate that the 2D CrPSe3 monolayer is a candidate with potential applications in low-dimensional, high-speed and high-temperature spintronics.
Thermoelectric properties of Cu-doped Cu2SnSe4 compounds
Zheng Li-Xian, Hu Jian-Feng, Luo Jun
Cu2SnSe4 compound, as a non-toxic, inexpensive thermoelectric material, has low thermal conductivity and adjustable conductivity, which promises a high-efficiency thermoelectric application in a medium-temperature range. The Cu-doped bulk samples of Cu2+xSnSe4 (0 ≤ x ≤ 1) compounds are synthesized by a fast method, i.e. by combining high energy ball milling with spark plasma sintering. In this work, the thermoelectric properties of Cu-doped Cu2SnSe4 compound are investigated. The experimental results reveal that the intrinsic vacancy at the Cu/Sn site of Cu2SnSe4 can be completely filled by Cu (i.e. x = 1 in Cu2+xSnSe4). The crystal structures of all Cu2+xSnSe4 samples have the same space group F3m as that of the undoped Cu2SnSe4. The electrical conductivity of Cu2+xSnSe4 increases rapidly with increasing content of Cu doped at the intrinsic vacancy; concretely, it increases by two orders of magnitude and reaches a maximum value at x = 0.8. The increase in electrical conductivity results in a significant improvement in power factor. The observed results show that the increase in electrical conductivity has a nonlinear relationship with Cu-doping content in the range 0 < x < 0.1, but is linearly related to the Cu-doping content in the range 0.1 ≤ x ≤ 0.8. Meanwhile, the carrier (hole) concentration is observed to reach a maximum value at x = 0.2 and then slightly decrease toward x = 0.8. The rapid increase in electrical conductivity with increasing Cu-doping content may be attributed to the intensifying of the Cu-Se bond network, which plays a dominant role in controlling hole transport in Cu2SnSe4. The carrier mobility also increases with increasing Cu-doping content in the range 0 ≤ x ≤ 0.8, which is in contrast to the common scenario in thermoelectric materials in which the carrier mobility decreases with the increase in carrier concentration. Furthermore, the carrier transport mechanism of the Cu2+xSnSe4 samples is revealed to be describable by the small polaron hopping model, which implies strong coupling between electrons and phonons. The analysis of the thermal conductivities of the Cu2+xSnSe4 samples reveals that the relationship between the electronic thermal conductivity and the electrical conductivity cannot be described by the classical Wiedemann-Franz law, which may be attributed to the formation of electron-phonon coupled small polarons. Therefore, the coupling between electrons and phonons inside the Cu2+xSnSe4 structure strongly influences the behaviors of carrier transmission and thermal conductivity.
Voltage induced phase transition of polyethene glycol composite film filled with VO2 nanoparticles
Sun Xiao-Ning, Qu Zhao-Ming, Wang Qing-Guo, Yuan Yang
In this paper, the voltage induced metal-insulator phase transition (MIT) of polyethene glycol (PEG) composite film is investigated based on VO2 nanoparticles prepared by the hydrothermal method and a vacuum annealing process. High purity VO2 (B) nanoparticles are obtained after being treated in a hydrothermal reactor at 180 ℃ for 12 h by using vanadium pentoxide (V2O5) and oxalic acid (H2C2O4·2H2O) as raw materials. The X-ray diffraction (XRD) pattern shows that the prepared nano-powders are free of impurities, and the scanning electron microscope (SEM) pictures confirm that the micro-morphology is a band-shaped nano-structure. Next, these products are heated in a vacuum quartz tube at 500 ℃ for different times. The XRD and differential scanning calorimeter (DSC) curves of the annealed samples prove that VO2 (M) with MIT performance is successfully prepared. The content of the M phase in the sample increases with increasing preparation time. When the annealing time is longer than 60 min, all the samples are converted into materials with the M phase. The SEM images show that the average length of the nano-powders decreases as the annealing time increases from 10 min to 300 min. Then a PEG coating containing VO2 (M) nanoparticles is applied between two electrodes with a pitch of 1 mm on a printed circuit board (PCB). The V-I test is carried out after a 20 kΩ resistor has been connected in the circuit. The results display repeatable non-linear V-I curves, indicating that the composite film undergoes an MIT phase transition under voltage. After it is activated in the first test, the MIT voltage and non-linear coefficient increase exponentially as the length of VO2 decreases. Besides, it is also found that the voltage across the material is maintained at around 10 V after the resistance has changed suddenly, which is similar to the behavior of a diode clamping voltage. We believe that the phase transition voltage and non-linear coefficient of the VO2 composite film are influenced by the intra-particle potential barrier and the inter-layer potential barrier. The longer the average length of the nanoparticles, the higher the potential barrier between the interfaces in the conductive channels, and thus the higher the phase transition voltage and phase transition coefficient. The activation phenomenon of the thin film is caused by reducing the barrier between particles during the first test. Furthermore, the results indicate that the electric field is the determinant of the phase transition during the electric-field-induced MIT of the VO2 composite film. However, after the phase transition, Joule heat plays a significant role in maintaining the low resistance state.
Preparation and properties of multi-effect potassium sodium niobate based transparent ferroelectric ceramics
Liu Yong, Xu Zhi-Jun, Fan Li-Qun, Yi Wen-Tao, Yan Chun-Yan, Ma Jie, Wang Kun-Peng
Traditional transparent materials, including glasses and polymers, are chemically unstable and mechanically weak. Single crystals of some inorganic materials are also optically transparent, which are more stable than glasses and polymers. The fabrication of crystals, however, is relatively slow. Fortunately, transparent ceramics emerge as a promising candidate. Transparent ferroelectric ceramic is a kind of transparent ceramic with electro-optic effect, which also has excellent characteristics of conventional ceramics with excellent mechanical properties, resistance to high temperature, resistance against corrosion, and high hardness. Lead based transparent ferroelectric ceramic dominates this field for many years due to its superior electro-optic effect. Owing to the high toxicity of lead oxide, however, its development is significantly hampered. Therefore, it is greatly urgent to develop the lead-free transparent ferroelectric ceramics with excellent properties to replace the traditional lead based ceramics. In this paper, (K0.5Na0.5)0.94–3xLi0.06LaxNb0.95Ta0.05O3 (KNLTN-Lax; x = 0, 0.01, 0.015, 0.02) lead-free transparent ferroelectric materials are fabricated by the conventional solid state reaction method and ordinary sintering process. The dependence of microstructure, phase structure, optical transmittance and electrical properties of the ceramic on composition are systemically investigated. The transparent ferroelectric ceramic with relaxor-behavior is obtained at x = 0.02. The optical transmittance of the ceramic near infrared region is as high as 60%. Meanwhile, the electrical properties of the ceramic at x = 0.01 still maintains a relatively high level (d33 = 110 pC/N, kp = 0.267). In addition, the Curie temperature for each of all the samples is higher than 400 ℃. These results suggest that this material might be a novel and promising lead-free material that could be used in a large variety of electro-optical devices.
NO2 sensing properties of porous Fe-doped indium oxide
Liu Zhi-Fu, Li Pei, Cheng Tie-Dong, Huang Wen
It is of great significance to study the characteristics and working mechanism of NO2 sensor material for monitoring air pollution and protecting human health. As a metal oxide semiconductor material with simple preparation, low cost and good long-term stability, In2O3 has been widely studied in the detection of NO2. In order to explore the influence of Fe content on the gas sensing properties of porous In2O3 material, porous Fe-doped In2O3 nanoparticles are synthesized by the hydrothermal method, and the NO2 sensor is fabricated by using the above nanoparticles. The X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy and specific surface area measurement are used to characterize the micro morphology of the prepared nanoparticles in this paper, while the sensor performance is studied, including temperature, response recovery, selectivity and stability. In most samples, Fe atoms are completely doped into the In2O3 lattice as indicated by the XRD results. The SEM results show that the Fe-doped In2O3 nanoparticles prepared with Span-40 as activators are square in size of 50–200 nm, and a large number of small pores are distributed in it, which are also observed in the N2 adsorption/desorption experiment, this is one of the main reasons for the large specific surface area and high sensitivity of the nano materials. Studying the performance of the sensor, we find that when the molar ratio of In∶Fe is 9∶1, the sensor made of porous Fe-doped In2O3 nanoparticles has an excellent selectivity and short response recovery time for NO2 gas. The sensitivity of the sensor to 50-ppm-concentration (1 ppm = 1 mg/L) NO2 can reach 960.5 at 260 ℃, and the response time and recovery time are 5 s and 6 s respectively. Based on the theory of space charge and the knowledge of built-in barrier and energy band change before and after doping, the mechanism of the sensor is analyzed.
Effects of current density on fracture behaviors for micron-sized crystalline silicon electrodes
Zhang Xing-Yu
The large volume change during lithiation/delithiation leads the silicon electrodes in lithium-ion batteries to severely degrade the mechanical performance and the silicon electrodes in lithium-ion batteries to further deteriorate electrochemical properties, which limits the commercial applications of silicon electrodes. After several year's studies, the whole process of fracture for crystalline silicon anodes has been almost understood. However, the relationship between fracture behaviors and the lithiation depth has not been sufficiently studied. In this work, the in-situ observations of morphological changes (e.g., volume expansion, crack initiation, propagation, and debonding of lithiated silicon) during lithiation at the different current densities are reported for silicon micropillars fabricated by standard photolithography and a deep reactive ion etching process. Also, this work focuses on the relative depth of lithiation of silicon electrodes at the moment of crack initiation, which is one of the crucial parameters representing the utilization of active materials with no crack. The results show that the silicon micropillars are broken faster (i.e., crack initiation and pulverization in a shorter lithiation time) and more seriously at a large current density, exhibiting more prominent symmetry of morphology. However, the relative depths of lithiation at the different current densities have just a slight difference (i.e., 18%–22%), when cracks are initiated. Here in this work, a silicon micropillar fracture is confirmed by the optical observation, while the relative depth of lithiation is calculated according to the capacity data recorded by the charge/discharge battery test system. The small fluctuation of the relative depth of lithiation with the large wave of current density can be ascribed to the dominant role of local stress concentration caused by anisotropic volume change in fracture behavior, which is validated by the results obtained by the finite element model (i.e., the depth of lithiation predicted by numerical simulations is ~ 22.6%). Therefore, the relationship between fracture behavior and the lithiation kinetics is established, providing an effective strategy for estimating the utilization of active materials under crack-free operation. With the help of the theoretical mechanics model considering both volume change and concurrent movement of reaction front, the stress state in the lithiated silicon at the moment of crack initiation is given, showing the tensile hoop stress near the reaction front. Consequently, these results suggest that the fracture behaviors depend on the current density, but the position of crack initiation (i.e., the depth of lithiation with no crack) is unrelated to current density (at least in a relatively broad range) for large micron-sized crystalline silicon electrodes, thereby shedding light on the fracture mechanisms and the design of alloy anodes (e.g., size and structure) in lithium-ion batteries.
Coverage and transmission bandwidth analyses of undersea-to-air magnetic induction communication with relay transmission
Zhang Xin, Tong Yu-Ze, Tian Zhi-Ying, Wang Jin-Hong, Yao Ze
The transboundary information transmission across the air-and-sea interface is of great practical significance. No matter from the perspective of scientific research or from the view of applications, transboundary communication is a hot and challenging field. Magnetic induction communication has the unique advantages of two-way transboundary transmission, insusceptible to complex hydro-logical environment, and especially suitable for shallow water channel and other environments with harsh propagation characteristics, providing a promising solution for transboundary information transmission. However, the rapid attenuation of magnetic field component with the increase of distance and frequency limits the coverage and transmission rate of the transboundary magnetic induction communication. Therefore, enhancing magnetic field component at a distance has become a focus of magnetic induction communication research. An undersea-to-air transboundary magnetic induction communication scheme based on relay transmission is proposed in this paper, in which a virtual distributed antenna array is formed by processing and relaying the received signals performed at the relay terminals, and the distributed spatial diversity gain can be obtained which is used to enhance the underwater magnetic field component, expand the magnetic induction propagation range, and increase the transmission bandwidth and improve the receiving signal-to-noise ratio as well. Moreover, even in a dynamic marine environment, the relay transmission can be effectively realized and the communication performance can be guaranteed. In this paper, the propagation model of relay transmission based undersea-to-air transboundary magnetic induction communication is established by using the magnetic dipole model in layered conductive media. The effective communication range of direct and relay communication are defined by using their receiving thresholds, and the basic methods and steps to determine the relay location are presented. The communication coverage and available transmission bandwidth of undersea-to-air transboundary magnetic induction communication under different relay scenarios are analyzed and compared by calculating the underwater magnetic induction strength distribution. The numerical results indicate that the underwater coverage and available bandwidth of transboundary magnetic induction communication can be simultaneously doubled under the appropriate number and location of relays. The research in this paper suggests that the relay transmission scheme for magnetic induction communication is suitable for the application in dynamic environment with high propagation loss, which greatly increases the feasibility and effectiveness of the magnetic induction communication as a transboundary communication technology.
Effects of oxygen adsorption on spin transport properties of single anthracene molecular devices
Cui Xing-Qian, Liu Qian, Fan Zhi-Qiang, Zhang Zhen-Hua
With the miniaturization of molecular devices, high-performance nano devices can be fabricated by controlling the spin states of electrons. Because of their advantages such as low energy consumption, easy integration and long decoherence time, more and more attention has been paid to them. So far, the spin filtration efficiency of molecular device with graphene electrode is not very stable, which will decrease with the increase of voltage, and thus affecting its applications. Therefore, how to enhance the spin filtration efficiency of molecular device with graphene electrode becomes a scientific research problem. Using the first principle calculations based on density functional theory combined with non-equilibrium Green's function, the physical mechanism of regulating the spin polarization transport properties of single anthracene molecule device with graphene nanoribon as electrode is investigated by molecular oxygen adsorption. In order to explore the effect of the change of the connection mode between single anthracene molecule and zigzag graphene nanoribbon electrode on the spin transport properties of the device, we establish two models. The first model is the model M1, which is the single anthracene molecule longitudinal connection, and the second model is the model M2, which is the single anthracene molecule lateral connection. The adsorption model of single oxygen molecule is denoted by M1O and M2O respectively. The results show that when none of oxygen molecules is adsorbed, the spin filtering effect of single anthracene molecule connecting graphene nanoribbons laterally (M2) is better than that of single anthracene molecule connecting graphene nanoribbons longitudinally (M1). After oxygen molecules are adsorbed on single anthracene molecule, the enhanced localized degree of transport eigenstate will make the spin current of the two kinds of devices decrease by nearly two orders of magnitude. However, molecular oxygen adsorption significantly improves the spin filtering efficiency of the device and enhances the application performance of the device. The maximal spin filtering efficiency of single anthracene molecule connecting graphene nanoribbons longitudinal (M1O) can be increased from 72% to 80%. More importantly, the device with single anthracene molecule connecting graphene nanoribbons laterally (M2) maintains nearly 100% spin filtering efficiency in a bias range from –0.5 V to +0.5 V. These results provide more theoretical guidance for practically fabricating spin molecular devices and regulating their spin transport properties.
Bi2O2Se photoconductive detector with low power consumption and high sensitivity
Li Dan-Yang, Han Xu, Xu Guang-Yuan, Liu Xiao, Zhao Xiao-Jun, Li Geng-Wei, Hao Hui-Ying, Dong Jing-Jing, Liu Hao, Xing Jie
With the advent of graphene, atomically thin two-dimensional materials receive great attention in both science and technology. However, the characterization of zero-band gap of graphene hinders its applications in semiconductor logic and memory devices. To make up for the imperfection of graphene, one has made efforts to search for other two-dimensional layered materials. The Bi2O2Se is an emerging material with very high electron mobility, modest bandgap, and excellent thermal and chemical stability. In this work, high-quality Bi2O2Se thin films are synthesized through chemical vapor deposition. The effect of temperature on the morphology and size distribution of Bi2O2Se thin film are discussed in detail experimentally. Under an optimized experimental condition, the Bi2O2Se thin films with a lateral size of 100 μm are achieved. Interestingly, Bi2O2Se nanowires are obtained at a lower growth temperature (620–640 ℃). The photoelectric performances of Bi2O2Se on mica and silicon oxide substrate are examined based on a photoconductive mode. At a small bias of 0.5 V, the responsivity and specific detectivity of the rectangular Bi2O2Se thin film on the mica substrate reach 45800 A/W and 2.65 × 1012 Jones, respectively, and the corresponding photoelectric gain is greater than 105. The photoelectric performance of our device is comparable to the best results achieved by other research groups, which may be related to the higher quality and appropriate absorption thickness. The Bi2O2Se nanowire and Bi2O2Se thin film transferred to Si/SiO2 by a polystyrene-assisted method also exhibit a good photoresponse under the illumination of a 532 nm laser with a high optical power density (127.4 mW/cm2). The experimental results demonstrate that the Bi2O2Se has great potential applications in the optoelectronic devices with low power consumption and high sensitivity.
Morphologies of self-assembled gold nanorod-surfactant-lipid complexes at molecular level
Yang Ying, Song Jun-Jie, Wan Ming-Wei, Gao Liang-Hui, Fang Wei-Hai
Gold nanorods (GNRs) have aroused the extensive interest of many researchers in recent years due to their unique physicochemical properties. However, the toxic cetyltrimethylammonium bromide (CTAB) is often introduced into the process of synthesizing GNRs, which hinders the wide-range applications of GNRs in clinical practice. To reduce the toxicity, the CTAB molecules coated on the surface of GNRs should be replaced by nontoxic and biocompatible agents such as phospholipid. Thus the component and morphology of the mixed coating agents on the surface of GNRs affect the physicochemical properties of GNRs. To study the morphology and properties of the coated GNRs at a molecular level, we investigate the self-assembly of GNRs, CTAB, and dimyristoyl phosphatidylcholine (DMPC) by using solvent-free dissipative particle dynamics simulations. Our results show that the morphology of the assembled complex mainly depends on the CTAB/DMPC molar ratio, while neither of the interaction strength between GNRs and the coating agents nor the diameter of GNRs has significant effect on the morphology. At a certain combination of GNRs-coating agent interaction strength with GNRs diameter, the mixture of CTAB and DMPC on the surface of GNRs undergoes a gradual change in morphology as the CTAB/DMPC molar ratio increases, including the forming of intact bilayer membrane, cracked bilayer membrane, long patches of micelles, and short wormlike micelles winding GNRs in spiral shape. The morphology of intact bilayer membrane verifies the experimental guess, while the other three morphologies are brand-new discoveries. We also find that when the GNR's diameter becomes smaller, or the CTAB/DMPC molar ratio is larger, or the interaction strength is greater, the agents cap the ends of GNRs, meanwhile the membrane thickness becomes thinner. The multiple morphologies of the assembled complexes can be qualitatively explained by the shape energy of a membrane adsorbed on a solid surface. When the surface tension of the membrane (which is proportional to the spontaneous curvature of the membrane) exceeds a critical value (which is equal to the adhesion energy density of the membrane), the membrane dissociates from the solid surface and its shape changes. The change trend is related to the spontaneous curvature of the free membrane. As a result of the synergy and competition among the inherent curvatures of GNRs, the spontaneous curvature of CTAB/DMPC membrane or micelle, as well as the adhesion energy, various interesting morphologies are produced. Our simulations and analyses directly characterize the morphological structures of CTAB and lipid coated GNRs, which allow us to in depth understand the self-assembling behaviors of GNRs at a molecular level. This is also conductive to achieving the controlled assemblies of GNRs.
Nonlinear feature extraction and chaos analysis of flow field
Xu Zi-Fei, Miao Wei-Pao, Li Chun, Jin Jiang-Tao, Li Shu-Jun
A novel signal processing method named adaptive variational mode decomposition with the fractal (AFVMD), which is based on variational mode decomposition and fractal theory, is proposed in this paper for solving a problem that it is easy to misjudge the working conditions of the centrifugal compressor. The measured signal of a compressor is unstable, so a traditional method is used to analyze the nonlinear phenomenon of the stall flutter. Owing to the fact that the robustness of VMD method is strong and its combination with the fractal dimension can accurately describe self-similarity and fractal characteristics of a measured signal, the proposed AFVMD method can not only achieve noise reduction, but also extract nonlinear feature from a complex signal. Taking the dynamic pressure data of the impeller during the instability of a centrifugal compressor as an object to verify the effectiveness and superiority of the proposed AFVMD method, the results are obtained as follows. Firstly, compared with the wavelet noise reduction method, the proposed AFVMD method has both noise reduction and feature extraction functions, and the compressor pressure pulsation spectrum has more significant stall characteristics. Secondly, none of the traditional nonlinear analysis methods can reflect the stall process, so the chaotic phase space attractor is used to visualize the flow field changes. Due to the reasonable choice of the delay time and the embedding dimension, the physical information originally mixed in the signal is separated, so that the attractor phase diagram method has a better process of judging the flow stall than the frequency spectrum method. The results show that the proposed AFVMD method can judge the compressor about to enter into the deep surge earlier. Thirdly, In order to quantify the superiority of the proposed method, if the process of surging and the occurrence of deep wheezing can be predicted in advance, the largest Lyapunov exponent is used as an evaluation index. The above results show that the largest Lyapunov exponent of the proposed AFVMD is smallest for illustrating that the signal has more accurate flow field nonlinear information, which improves the predictability of the signal.
3.1: Some Applications Leading to Differential Equations
[ "stage:draft", "article:topic", "calcplot:yes", "license:ccbyncsa", "showtoc:yes", "transcluded:yes" ]
MATH 2200: Calculus for Scientists II
3 Introduction to Differential Equations
Population Growth and Decay
Newton's Law of Cooling
Glucose Absorption by the Body
Spread of Epidemics
Newton's Second Law of Motion
Interacting Species: Competition
Much of calculus is devoted to learning mathematical techniques that are applied in later courses in mathematics and the sciences; you wouldn't have time to learn much calculus if you insisted on seeing a specific application of every topic covered in the course. Similarly, much of this section is devoted to methods that can be applied in later courses. Only a relatively small part of the section is devoted to the derivation of specific differential equations from mathematical models or relating the differential equations that we study to specific applications. In this section, we mention a few such applications.
The mathematical model for an applied problem is almost always simpler than the actual situation being studied, since simplifying assumptions are usually required to obtain a mathematical problem that can be solved. For example, in modelling the motion of a falling object, we might neglect air resistance and the gravitational pull of celestial bodies other than Earth, or in modeling population growth we might assume that the population grows continuously rather than in discrete steps.
A good mathematical model has two important properties:
1. It's sufficiently simple so that the mathematical problem can be solved.
2. It represents the actual situation sufficiently well so that the solution to the mathematical problem predicts the outcome of the real problem to within a useful degree of accuracy. If results predicted by the model don't agree with physical observations, the underlying assumptions of the model must be revised until satisfactory agreement is obtained.
We'll now give examples of mathematical models involving differential equations. We'll return to these problems at the appropriate times, as we learn how to solve the various types of differential equations that occur in the models.
All the examples in this section deal with functions of time, which we denote by \(t\). If \(y\) is a function of \(t\), \(y'\) denotes the derivative of \(y\) with respect to \(t\); thus,
$$ y'={dy\over dt}. $$
Although the number of members of a population (people in a given country, bacteria in a laboratory culture, wildflowers in a forest, etc.) at any given time \(t\) is necessarily an integer, models that use differential equations to describe the growth and decay of populations usually rest on the simplifying assumption that the number of members of the population can be regarded as a differentiable function \(P=P(t)\). In most models, it is assumed that the differential equation takes the form
$$ P'=a(P)P, $$
where \(a\) is a continuous function of \(P\) that represents the rate of change of population per unit time per individual. In the \( \textcolor{blue}{\mbox{Malthusian model}} \) (https://en.wikipedia.org/wiki/Thomas_Robert_Malthus), it is assumed that \(a(P)\) is a constant, so equation \( (1) \) becomes
$$ P'=aP. $$
This model assumes that the numbers of births and deaths per unit time are both proportional to the population. The constants of proportionality are the \( \textcolor{blue}{\mbox{birth rate}} \) (births per unit time per individual) and the \( \textcolor{blue}{\mbox{death rate}} \) (deaths per unit time per individual); \(a\) is the birth rate minus the death rate.
You learned in calculus that if \(c\) is any constant then
$$ P=ce^{at} $$
satisfies equation \( (2) \), so equation \( (2) \) has infinitely many solutions. To select the solution of the specific problem that we're considering, we must know the population \(P_0\) at an initial time, say \(t=0\). Setting \(t=0\) in equation \( (3) \) yields \(c=P(0)=P_0\), so the applicable solution is
$$ P(t)=P_0e^{at}. $$
This implies that
$$ \lim_{t\to\infty}P(t)=\left\{\begin{array}{cl}\infty&\mbox{ if }a>0,\\ 0&\mbox{ if }a<0; \end{array}\right. $$
that is, the population approaches infinity if the birth rate exceeds the death rate, or zero if the death rate exceeds the birth rate.
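A brief numerical illustration of the two behaviors (this snippet is not part of the original text; the rates and the initial population are arbitrary):

import numpy as np

def malthusian(t, P0, a):
    # Closed-form solution P(t) = P0*exp(a*t) of P' = a*P with P(0) = P0.
    return P0 * np.exp(a * t)

t = np.linspace(0.0, 10.0, 6)
print(malthusian(t, P0=100.0, a=0.3))   # a > 0: the population grows without bound
print(malthusian(t, P0=100.0, a=-0.3))  # a < 0: the population decays toward zero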
To see the limitations of the Malthusian model, suppose we're modeling the population of a country, starting from a time \(t=0\) when the birth rate exceeds the death rate (so \(a>0\)), and the country's resources in terms of space, food supply, and other necessities of life can support the existing population. Then the prediction \(P=P_0e^{at}\) may be reasonably accurate as long as it remains within limits that the country's resources can support. However, the model must inevitably lose validity when the prediction exceeds these limits. (If nothing else, eventually there won't be enough space for the predicted population!)
This flaw in the Malthusian model suggests the need for a model that accounts for limitations of space and resources that tend to oppose the rate of population growth as the population increases. Perhaps the most famous model of this kind is the \( \textcolor{blue}{\mbox{Verhulst model}} \) (http://www-history.mcs.st-and.ac.uk/.../Verhulst.html) where equation \( (2) \) is replaced by
$$ P'=aP(1-\alpha P), $$
where \(\alpha\) is a positive constant. As long as \(P\) is small compared to \(1/\alpha\), the ratio \(P'/P\) is approximately equal to \(a\). Therefore the growth is approximately exponential; however, as \(P\) increases, the ratio \(P'/P\) decreases as opposing factors become significant.
Equation \( (4) \) is the \( \textcolor{blue}{\mbox{logistic equation}} \). The solution is
$$ P={P_0\over\alpha P_0+(1-\alpha P_0)e^{-at}}, $$
where \(P_0=P(0)>0\). Therefore \(\lim_{t\to\infty}P(t)=1/\alpha\), independent of \(P_0\).
Figure \(1.1.1\) shows typical graphs of \(P\) versus \(t\) for various values of \(P_0\).
\( \textcolor{blue}{\mbox{Figure \(1.1.1\): Solutions of the logistic equation}} \)
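A small Python sketch (illustrative only; the parameters are arbitrary) that evaluates this solution for several initial populations and exhibits the common limit \(1/\alpha\), as in Figure \(1.1.1\):

import numpy as np

def logistic(t, P0, a, alpha):
    # Closed-form solution of P' = a*P*(1 - alpha*P) with P(0) = P0.
    return P0 / (alpha * P0 + (1.0 - alpha * P0) * np.exp(-a * t))

a, alpha = 0.5, 0.01                      # limiting population 1/alpha = 100
t_end = np.array([30.0])
for P0 in (10.0, 50.0, 150.0):            # start below or above 1/alpha
    print(P0, logistic(t_end, P0, a, alpha))  # every case is close to 100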
According to \( \textcolor{blue}{\mbox{Newton's law of cooling}} \) (http://www-history.mcs.st-and.ac.uk/...es/Newton.html) , the temperature of a body changes at a rate proportional to the difference between the temperature of the body and the temperature of the surrounding medium. Thus, if \(T_m\) is the temperature of the medium and \(T=T(t)\) is the temperature of the body at time \(t\), then
$$ T' = -k(T-T_m), $$
where \(k\) is a positive constant and the minus sign indicates that the temperature of the body increases with time if it's less than the temperature of the medium, or decreases if it's greater. We'll see in Section 4.2 that if \(T_m\) is constant then the solution of equation \( (5) \) is
$$ T=T_m+(T_0-T_m)e^{-kt}, $$
where \(T_0\) is the temperature of the body when \(t=0\). Therefore \(\lim_{t\to\infty}T(t)=T_m\), independent of \(T_0\). (Common sense suggests this. Why?)
Figure \(1.1.2\) shows typical graphs of \(T\) versus \(t\) for various values of \(T_0\).
\( \textcolor{blue}{\mbox{Figure \(1.1.2\): Temperature according to Newton's Law of Cooling}} \)
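A short sketch (not part of the original text; the parameter values are arbitrary) reproducing the behavior shown in Figure \(1.1.2\):

import numpy as np

def newton_cooling(t, T0, Tm, k):
    # Closed-form solution T(t) = Tm + (T0 - Tm)*exp(-k*t) of T' = -k*(T - Tm).
    return Tm + (T0 - Tm) * np.exp(-k * t)

k, Tm = 0.2, 20.0
t = np.linspace(0.0, 40.0, 5)
for T0 in (100.0, 60.0, 0.0):
    print(T0, newton_cooling(t, T0, Tm, k))   # each curve tends to Tm = 20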
Assuming that the medium remains at constant temperature seems reasonable if we're considering a cup of coffee cooling in a room, but not if we're cooling a huge cauldron of molten metal in the same room. The difference between the two situations is that the heat lost by the coffee isn't likely to raise the temperature of the room appreciably, but the heat lost by the cooling metal is. In this second situation we must use a model that accounts for the heat exchanged between the object and the medium. Let \(T=T(t)\) and \(T_m=T_m(t)\) be the temperatures of the object and the medium respectively, and let \(T_0\) and \(T_{m0}\) be their initial values. Again, we assume that \(T\) and \(T_m\) are related by equation \( (5) \). We also assume that the change in heat of the object as its temperature changes from \(T_0\) to \(T\) is \(a(T-T_0)\) and the change in heat of the medium as its temperature changes from \(T_{m0}\) to \(T_m\) is \(a_m(T_m-T_{m0})\), where \(a\) and \(a_m\) are positive constants depending upon the masses and thermal properties of the object and medium respectively. If we assume that the total heat in the object and the medium remains constant (that is, energy is conserved), then
$$ a(T-T_0)+a_m(T_m-T_{m0})=0. $$
Solving this for \(T_m\) and substituting the result into equation \( (5) \) yields the differential equation
$$ T'=-k\left(1+{a\over a_m}\right)T +k\left(T_{m0}+{a\over a_m}T_0\right) $$
for the temperature of the object. After learning to solve linear first order equations, you'll be able to show that
$$ T={aT_0+a_mT_{m0}\over a+a_m}+{a_m(T_0-T_{m0})\over a+a_m}e^{-k(1+a/a_m)t}. $$
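The following sketch (an illustration with arbitrary constants, not part of the original text) evaluates this formula and shows that the object starts at \(T_0\) and approaches the weighted average \((aT_0+a_mT_{m0})/(a+a_m)\):

import numpy as np

def coupled_cooling(t, T0, Tm0, k, a, am):
    # Object temperature when object and medium exchange heat with total heat conserved.
    T_inf = (a * T0 + am * Tm0) / (a + am)        # common limiting temperature
    return T_inf + am * (T0 - Tm0) / (a + am) * np.exp(-k * (1.0 + a / am) * t)

t = np.array([0.0, 10.0, 50.0])
print(coupled_cooling(t, T0=90.0, Tm0=20.0, k=0.1, a=2.0, am=6.0))
# the first entry is T0 = 90; the later entries approach (2*90 + 6*20)/8 = 37.5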
Glucose is absorbed by the body at a rate proportional to the amount of glucose present in the bloodstream. Let \(\lambda\) denote the (positive) constant of proportionality. Suppose there are \(G_0\) units of glucose in the bloodstream when \(t=0\), and let \(G=G(t)\) be the number of units in the bloodstream at time \(t>0\). Then, since the glucose being absorbed by the body is leaving the bloodstream, \(G\) satisfies the equation
$$ G'=-\lambda G. $$
From calculus you know that if \(c\) is any constant then
$$ G=ce^{-\lambda t} $$
satisfies equation \( (7) \) , so equation \( (7) \) has infinitely many solutions. Setting \(t=0\) in equation \( (8) \) and requiring that \(G(0)=G_0\) yields \(c=G_0\), so
$$ G(t)=G_0e^{-\lambda t}. $$
Now let's complicate matters by injecting glucose intravenously at a constant rate of \(r\) units of glucose per unit of time. Then the rate of change of the amount of glucose in the bloodstream per unit time is
$$ G'=-\lambda G+r, $$
where the first term on the right is due to the absorption of the glucose by the body and the second term is due to the injection. After you've studied Section 2.1, you'll be able to show (Exercise 2.1.43) that the solution of equation \( (9) \) that satisfies \(G(0)=G_0\) is
$$ G={r\over\lambda}+\left(G_0-{r\over\lambda}\right)e^{-\lambda t}. $$
Graphs of this function are similar to those in Figure \(1.1.2\). (Why?)
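A quick numerical check (arbitrary parameter values; not part of the original text) that every solution levels off at \(r/\lambda\):

import numpy as np

def glucose(t, G0, lam, r):
    # Closed-form solution of G' = -lam*G + r with G(0) = G0.
    return r / lam + (G0 - r / lam) * np.exp(-lam * t)

lam, r = 0.05, 2.0                        # steady state r/lam = 40
t = np.linspace(0.0, 200.0, 5)
for G0 in (10.0, 40.0, 80.0):
    print(G0, glucose(t, G0, lam, r))     # each curve approaches 40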
One model for the spread of epidemics assumes that the number of people infected changes at a rate proportional to the product of the number of people already infected and the number of people who are susceptible, but not yet infected. Therefore, if \(S\) denotes the total population of susceptible people and \(I=I(t)\) denotes the number of infected people at time \(t\), then \(S-I\) is the number of people who are susceptible, but not yet infected. Thus,
$$ I'=rI(S-I), $$
where \(r\) is a positive constant. Assuming that \(I(0)=I_0\), the solution of this equation is
$$ I={SI_0\over I_0+(S-I_0)e^{-rSt}} $$
(Exercise 2.2.29).
Since \(\lim_{t\to\infty}I(t)=S\), this model predicts that all the susceptible people eventually become infected.
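A short sketch of the resulting S-shaped epidemic curve (illustrative parameters only; not part of the original text):

import numpy as np

def infected(t, I0, S, r):
    # Closed-form solution of I' = r*I*(S - I) with I(0) = I0.
    return S * I0 / (I0 + (S - I0) * np.exp(-r * S * t))

S, r, I0 = 1000.0, 1.0e-4, 5.0
t = np.linspace(0.0, 200.0, 6)
print(infected(t, I0, S, r))              # rises from 5 toward S = 1000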
According to \( \textcolor{blue}{\mbox{Newton's second law of motion}} \) (http://www-history.mcs.st-and.ac.uk/...es/Newton.html), the instantaneous acceleration \(a\) of an object with constant mass \(m\) is related to the force \(F\) acting on the object by the equation \(F=ma\). For simplicity, let's assume that \(m=1\) and the motion of the object is along a vertical line. Let \(y\) be the displacement of the object from some reference point on Earth's surface, measured positive upward. In many applications, there are three kinds of forces that may act on the object:
(a) A force such as gravity that depends only on the position \(y\), which we write as \(-p(y)\), where \(p(y)>0\) if \(y\ge0\).
(b) A force such as atmospheric resistance that depends on the position and velocity of the object, which we write as \(-q(y,y')y'\), where \(q\) is a nonnegative function and we've put \(y'\) "outside" to indicate that the resistive force is always in the direction opposite to the velocity.
(c) A force \(f=f(t)\), exerted from an external source (such as a towline from a helicopter) that depends only on \(t\).
In this case, Newton's second law implies that
$$ y''=-q(y,y')y'-p(y)+f(t), $$
which is usually rewritten as
$$ y''+q(y,y')y'+p(y)=f(t). $$
Since the second (and no higher) order derivative of \(y\) occurs in this equation, we say that it is a \(\textcolor{blue}{\mbox{second order differential equation}} \).
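Even when such an equation cannot be solved analytically, it can be integrated numerically by rewriting it as a first order system for \(y\) and \(y'\). The sketch below (not from the original text; the force functions are arbitrary placeholders and SciPy is assumed to be available) does this for a falling object subject to constant gravity and linear resistance:

import numpy as np
from scipy.integrate import solve_ivp

p = lambda y: 9.8              # position-dependent force per unit mass (here constant gravity)
q = lambda y, v: 0.5           # nonnegative resistance coefficient
f = lambda t: 0.0              # external force

def rhs(t, state):
    y, v = state               # v = y'
    return [v, -q(y, v) * v - p(y) + f(t)]

sol = solve_ivp(rhs, (0.0, 10.0), [100.0, 0.0])   # y(0) = 100, y'(0) = 0
print(sol.y[0, -1], sol.y[1, -1])   # position and velocity at t = 10; v approaches -p/q = -19.6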
Let \(P=P(t)\) and \(Q=Q(t)\) be the populations of two species at time \(t\), and assume that each population would grow exponentially if the other didn't exist; that is, in the absence of competition we would have
$$ P'=aP \mbox{\quad and \quad}Q'=bQ, $$
where \(a\) and \(b\) are positive constants. One way to model the effect of competition is to assume that the growth rate per individual of each population is reduced by an amount proportional to the other population, so equation \( (10) \) is replaced by
$$ P'=aP-\alpha Q, \qquad Q'=-\beta P+bQ, $$
where \(\alpha\) and \(\beta\) are positive constants. (Since negative population doesn't make sense, this system works only while \(P\) and \(Q\) are both positive.) Now suppose \(P(0)=P_0>0\) and \(Q(0)=Q_0>0\). It can be shown (Exercise 10.4.42) that there's a positive constant \(\rho\) such that if \((P_0,Q_0)\) is above the line \(L\) through the origin with slope \(\rho\), then the species with population \(P\) becomes extinct in finite time, but if \((P_0,Q_0)\) is below \(L\), the species with population \(Q\) becomes extinct in finite time. Figure \(1.1.3\) illustrates this. The curves shown there are given parametrically by \(P=P(t), Q=Q(t),\ t>0\). The arrows indicate direction along the curves with increasing \(t\).
\( \textcolor{blue}{\mbox{ Figure \(1.1.3\): Population of competing species }} \)
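The sketch below (not part of the original text; the constants are arbitrary and SciPy is assumed to be available) integrates the system numerically and stops when one population reaches zero. For this symmetric choice of constants the critical line has slope \(\rho=1\), so initial points on opposite sides of it drive opposite species to extinction:

import numpy as np
from scipy.integrate import solve_ivp

a, b, alpha, beta = 1.0, 1.0, 0.5, 0.5    # arbitrary positive constants

def rhs(t, state):
    P, Q = state
    return [a * P - alpha * Q, -beta * P + b * Q]

def hit_zero(t, state):
    return min(state)                     # zero when either population dies out
hit_zero.terminal = True

for P0, Q0 in ((10.0, 4.0), (4.0, 10.0)):
    sol = solve_ivp(rhs, (0.0, 20.0), [P0, Q0], events=hit_zero, max_step=0.01)
    print((P0, Q0), "->", np.round(sol.y[:, -1], 2))   # one component is driven to 0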
Trench, William F., "Elementary Differential Equations" (2013). Faculty Authored and Edited Books & CDs. 8.
https://digitalcommons.trinity.edu/mono/8
GetEasySolution.com
(5(2m-7)+8m)/(5)+(17)/(2) - adding of fractions
(5(2m-7)+8m)/(5)+(17)/(2) - step by step solution for the given fractions. Adding of fractions, full explanation.
Solution for the given fractions
$ \frac{(5*(2*m-7)+8*m)}{5 }+\frac{ 17}{2 }=? $
The common denominator of the two fractions is: 10
$ \frac{(5*(2*m-7)+8*m)}{5 }= \frac{(2*(5*(2*m-7)+8*m))}{(2*5)} = \frac{(2*(5*(2*m-7)+8*m))}{10} $
$ \frac{17}{2 }= \frac{(5*17)}{(2*5)} =\frac{ 85}{10} $
Fractions adjusted to a common denominator
$ \frac{(5*(2*m-7)+8*m)}{5 }+\frac{ 17}{2 }= \frac{(2*(5*(2*m-7)+8*m))}{10 }+\frac{ 85}{10} $
$ \frac{(2*(5*(2*m-7)+8*m))}{10 }+\frac{ 85}{10 }= \frac{(2*(5*(2*m-7)+8*m)+85)}{10} $
$ \frac{(2*(5*(2*m-7)+8*m)+85)}{10 }= \frac{(2*(5*(2*m-7)+8*m)+85)}{10} $
see mathematical notation
You can always share this solution
See similar equations:
| (3)/(4)+(1)/(24) - add fractions | | (7y)/(8)/(-9)/(5y) - divide fractions | | (-11a)/(4)+(11)/(8a) - adding of fractions | | (7)/(2)/(8)/(7) - dividing of fractions | | (-8)/(-5)/(7)/(5) - divide fractions | | (11)/(12)/(11)/(4) - dividing of fractions | | (a)/(7)-(5)/(7) - subtract fractions | | (13)/(56)+(5)/(7) - adding of fractions | | (5)/(16)-(1)/(20) - subtraction of fractions | | (1)/(5)+(3)/(20) - adding of fractions | | (5-2)/(3)+(2-2)/(4) - adding of fractions | | (x-2)/(3)+(x-2)/(4) - adding of fractions | | (7)/(1/4)*(1)/(18) - multiplication of fractions | | (13)/(56)+(5)/(7) - add fractions | | (2)/(15)-(1)/(20) - subtraction of fractions | | (35)/(36)*(6)/(7) - multiplying of fractions | | (150)/(45)+(180)/(45) - adding of fractions | | (150)/(45)*(180)/(45) - multiplication of fractions | | (2)/(5)+(1)/(9) - adding of fractions | | (10)/(81)+(25)/(72) - adding of fractions | | (2a)/(15)-(1)/(3) - subtraction of fractions | | (7)/(9)-(14)/(45) - subtraction of fractions | | (12)/(15)/(6)/(5) - dividing of fractions | | (2)/(7)/(10)/(3) - dividing of fractions | | (9)/(1)*(1)/(5) - multiplying of fractions | | (12)/(7)*(9)/(8) - multiply fractions | | (1)/(4)*(2)/(1) - multiplication of fractions | | (4)/(1)*(1)/(16) - multiplication of fractions | | (1)/(16)*(1)/(16) - multiplication of fractions | | (1)/(3)+(7a)/(8) - adding of fractions | | (16)/(1)*(25)/(36) - multiplication of fractions | | (12)/(1)*(4)/(3) - multiplication of fractions | | (12)/(1)+(4)/(3) - addition of fractions | | (12)/(1)*(25)/(36) - multiplication of fractions | | (15(2-x))/(1)+(13(3-x))/(1) - adding of fractions | | (7)/(15)+(13)/(18) - add fractions | | (4)/(9)+(3)/(8) - addition of fractions | | (11)/(15)*(3)/(22) - multiplication of fractions | | (9)/(14)+(10)/(21) - adding of fractions | | (3)/(4)+(1)/(9) - adding of fractions | | (7)/(30)+(5)/(18) - add fractions | | (1)/(10)-(2)/(5) - subtract fractions | | (10)/(9)*(3)/(2) - multiplying of fractions | | (4)/(49)+(1)/(7) - adding of fractions | | (5)/(3)*(6)/(7) - multiplication of fractions | | (15)/(8)+(7)/(8) - add fractions | | (4)/(3)*(2)/(5) - multiplying of fractions | | (4)/(7)*(11)/(6) - multiplying of fractions | | (5)/(21)*(7)/(15) - multiply fractions | | (3)/(14)+(10)/(7) - adding of fractions | | (3)/(14)+(7)/(10) - add fractions | | (4)/(3)*(4)/(5) - multiply fractions | | (9)/((7)x)-(1)/(x) - subtract fractions | | (3x+14)/(5)+(x+54)/(7) - add fractions | | (10)/(1)/(9)/(5) - dividing of fractions | | (5)/(4)/(1)/(8) - dividing of fractions | | (9)/(7x)-(1)/(x) - subtraction of fractions | | (x)/(5)+(2)/(7) - addition of fractions | | (4)/(10)/(4)/(4) - divide fractions | | (3)/(4)+(1)/(7) - adding of fractions | | (75)/(100)/(75)/(75) - divide fractions | | (4)/(10)/(4)/(4) - dividing of fractions | | (4)/(10)*(4)/(4) - multiplication of fractions | | (4n^2+15)/(7m)-(2n^2+4)/(7m) - subtraction of fractions | | (-2)/(9)*(5)/(8) - multiplying of fractions |
Equations solver categories
Copyright © 2011 Get Easy Solution
2x-2=8 x-3=5 3x+2=18 2x+10=12 6x-2=14 3x=12 4x-2=12 9x-3=6 12+x=5 x+8=13 all equations
|
CommonCrawl
|
Talk:Tetration/Archive 1
Template:Talkarchive
2 Thoughts on the "Extension to low values of the second operand"
3 Notation
4 Questions about super-exponentation
5 Conjecture about super-exponentiation and cosines
6 Inverse of super-exponentiation
7 True or false??
8 Categorizing this article
9 Soft hyphens in long numbers
10 Negative "super-exponents"?
11 Calculation errors?
12 Page title
13 Carets
14 Tetration number names
15 Anon minor formula edit
16 The other successor of exponentiation
17 Infinite power towers
18 New tetration and slog software
19 A thought about addition
20 Moved Comment
21 n ^^ 1/2
22 y=xx
22.1 Name?
23 Real Extension
23.1 real b{\displaystyle ~b~}
24 some small edits without author-notification
25 Pentation
26 Dubious
26.1 Discussion about "UXP"
26.2 Ultra exponential function
27 Tetration#Approaches to inverse functions
28 Interesting pattern
29 Ultra exponential
30 Analytic extention to real heights
31 Decremented tetration
32 Decimal digits of power towers
33 Derivatives
34 Naming?
35 2^^4
This page should have some "love" just like the hyper operator page. I have done a lot of work on tetration, and its many connections, so I think I can shed some clarity on this very confusing subject, I'm planning on adding two pages: a page about the Superlogarithm, and a page about Abel's Linearization Equation, does anyone know how to make a new page? Andrew Robbins 23:25, 13 December 2005 (UTC)
Its been awhile since I said this, but I'm serious about it this time. AJRobbins (talk) 02:53, 20 November 2007 (UTC)
Creating Pages
When you insert a Wikipedia link into a page, the link will either take you to the Wikipedia page if it exists or it will allow you to edit and save a new page named by the link used to create it.
Page Names
When assigning page names use might want to consider that Abel equation has 160,000 hits on google, Abel's equation is under 10,000 hits, and Abel's Linearization Equation doesn't appear in either the web or the book searches. This is odd because I immediately understood what you were referring to. You might what to create an Abel equation page and then set a redirect from Abel's linearization equation to Abel equation so that people could find both pages with the different search engines.
Different types of articles
A page on the superlogarithm would be interesting, but where it would belong would be a matter of its content. Ideally expositions of subjects like mathematics in Wikipedia are based on peer-reviewed material. I suspect that would constrain the article to a review of Ioannis Galidakis' published work.
WikiProject_Mathematics is a good place to review Wikipedia standards while Wikipedia talk:WikiProject Mathematics provides good feedback and discusses proposed projects like WikiScience. PlanetMath.Org is another possibility for math web publishing.
When publishing non–peer-reviewed material on Wikipedia, mileage may vary. I watched with horror as my own wonderfully crafted pages were banished from Wikipedia while other pages I saw as being sketchy underwent serious debate and survived. You can see on own user talk page User talk:Daniel Geisler that folks were concerned about the appropriateness of my addition of a link to my own web site on tetration.
On a humorous note
Doing a search on yahoo for superlogarithm is really entertaining. I don't know which site I like better, Free WebCam Tetration or Sex Pictures Logarithm. I couldn't even begin to tell you what might constitute a peer-review of these sites. I can see a cocky (pardon the pun) marketing effort in newpenisenlargement.com pedaling Tetration Penis Enlargement products. Or has some kinky webcrawling AI seen the pictures and their captions from the Wikipedia Tetration entry and gone nuts? And finally, how can my tetration website make money from this? Daniel Geisler 07:19, 14 December 2005 (UTC)
The reason why I want to write a page about the Superlogarithm, is that I found a way to define it over all real numbers, which naturally extends tetration to all real numbers. This method, in the limit produces an infinitely differentiable function, and the method is very simple. My paper describing the method is complete, and I only need to find a place to publish it now. Where do I publish? and can I at least reference my paper in a wikipedia article on the superlogarithm? or do I need to wait until others read my paper and let someone else write the wikipedia article? Andrew 10:23, 21 December 2005 (UTC)
I think I remember reading somebody on the internet (was it Daniel Geisler?) who had some doubts about the superlogarithm extension to real numbers that you mentioned, which goes to show the wisdom of the "no original research" policy. --Whiteknox 22:13, 28 November 2006 (UTC)
The applicable policy is Wikipedia:No original research. Research that is published in a reputable place (which for mathematics means peer-reviewed journals and academic publishers, for instance) can be included. Research reported in unpublished papers and personal website will be treated with suspicion. If you are wondering in what journal to publish your paper, a good guess is to try journals that published papers you refer to.
There is no hard rule forbidding you to reference your own papers, but you need to be sensitive if you do so and expect to be asked for clarification (the relevant document is Wikipedia:Autobiography). -- Jitse Niesen (talk) 20:59, 21 December 2005 (UTC)
Publishing works on tetration I would suggest submitting your paper to Complex Variables Theory and Appl. where Ioannis Galidakis has published his work. I wrote a paper defining tetration for complex numbers in 1990 and an much more advanced work in 2001, but I was unsucessful in getting the papers published or even obtaining any feedback on my work. I did copyright my 1990 work and believe that the work is valid with the exceptions of where the fixed points have a Lypanouv characteristic that is a root of unity. The dynamics of the real numbers 1, (1/e)^e = 0.0659880358 and e^(1/e) = 1.44466786 are different from the dynamics of all other numbers under iterated exponentiation. Even if someone correctly defines tetration for what appears to be all the real numbers, I'm still going to expect the attempt to break down at 0.0659880358 and 1.44466786. I did try and validate your value for e^^pi using the Mathematica software I've written for such tasks and concluded that your value for e^^pi is much too large. If you publish your work and I can't find a fundamental flaw with it I will gladly write an article myself. I only qualify this offer because I've done much work over the last twenty years trying to find ways of showing that a proposed definition for extending tetration is inconsistent. Daniel Geisler 04:32, 22 December 2005 (UTC)
How exactly did you define tetration in your 1990 paper? Andrew 06:11, 8 February 2006 (UTC)
Take Faà di Bruno's formula and substitute g(x)=fn−1(p){\displaystyle g(x)=f^{\ n-1}(p)} where p{\displaystyle p} is a fixed point f(p)=p{\displaystyle f(p)=p} . This gives the derivatives of an iterated function at a fixed point that leads to the Taylor series of an iterated function in the complex plane. All that is assumed is that f(z) has a derivative and a fixed point not at infinity. The iterated Faà di Bruno's formula works for tetratrion, pentation, hexation and so on. Dmfn(p)=∑k=0∞ckf′(p)k+Dmfn−1(p){\displaystyle D^{m}f^{n}(p)=\sum _{k=0}^{\infty }{c_{k}f'(p)^{k}+D^{m}f^{n-1}(p)}} which can be solved by noting the difference equation results in a geometrical progression. In the 1990 paper I took the natural step of simplify the geometrical progressions which introduces the terms throughout the results of 1/1−f′(p)m{\displaystyle 1/1-f'(p)^{m}} . In my 1990 paper I didn't treat the case where f′(p)m=1{\displaystyle f'(p)^{m}=1} which causes my formula for iterated functions to blow up; for tetration in the real numbers this happens at e 1 / e {\displaystyle e^{1/e}} and e − e {\displaystyle e^{-e}} . So I gave a definition for hyperbolic tetration with a fixed point. Daniel Geisler 01:08, 13 February 2006 (UTC)
Thoughts on the "Extension to low values of the second operand"
Using the relation n↑↑k=logn(n↑↑(k+1)){\displaystyle n\uparrow \uparrow k=\log _{n}\left(n\uparrow \uparrow (k+1)\right)} (which follows from the definition of tetration), one can derive (or define) values for n↑↑k{\displaystyle n\uparrow \uparrow k} where k∈−1,0,1{\displaystyle k\in {-1,0,1}} .
n↑↑1=logn(n↑↑2)=logn(nn)=nlognn=nn↑↑0=logn(n↑↑1)=lognn=1n↑↑−1=logn(n↑↑0)=logn1=0{\displaystyle {\begin{matrix}n\uparrow \uparrow 1&=&\log _{n}\left(n\uparrow \uparrow 2\right)&=&\log _{n}\left(n^{n}\right)&=&n\log _{n}n&=&n\\n\uparrow \uparrow 0&=&\log _{n}\left(n\uparrow \uparrow 1\right)&=&\log _{n}n&&&=&1\\n\uparrow \uparrow -1&=&\log _{n}\left(n\uparrow \uparrow 0\right)&=&\log _{n}1&&&=&0\end{matrix}}} Daniel Geisler
Tetration just like another mathematics needs to have an axiomatic basis. The trick is to properly enumerate the list of possible consistent systems which extend tetration beyond the positive integers. Negative integers, rational, real and complex numbers are possible examples. n↑↑k,{\displaystyle n\uparrow \uparrow k,} k>0{\displaystyle k>0} is computed by iterated exponentiation is standard. n↑↑k,{\displaystyle n\uparrow \uparrow k,} k<0{\displaystyle k<0} can extend k into the negative integers by using iterated logarithms instead of iterated exponentiation, but this means that n↑↑k{\displaystyle n\uparrow \uparrow k} becomes multivalued.
This confirms the intuitive definition of n↑↑1{\displaystyle n\uparrow \uparrow 1} as simply being n{\displaystyle n} . However, no further values can be derived by further iteration in this fashion, as logn0{\displaystyle \log _{n}0} is undefined.
Articles on arithmetic including treat n↑↑1{\displaystyle n\uparrow \uparrow 1} as an axiom, not something capable of confirming an intuition. Furthermore, the dynamics of the Riemann sphere have no problem with dealing with logn0{\displaystyle \log _{n}0} or the further logarithmic iterations from zero.
Similarly, since log11{\displaystyle \log _{1}1} is also undefined (log11=ln1/ln1=0/0{\displaystyle \log _{1}1=\ln 1{/}\ln 1=0/0} ), the derivation above does not hold when n=1{\displaystyle n=1} . Therefore, 1↑↑−1{\displaystyle 1\uparrow \uparrow {-1}} must remain an undefined quantity as well. (The figure 1↑↑0{\displaystyle 1\uparrow \uparrow {0}} can safely be defined as 1, however.)
Again, 00{\displaystyle 0^{0}} is an undefined quantity, so values for 0↑↑k{\displaystyle 0\uparrow \uparrow {k}} cannot be defined directly. However, limn→0n↑↑k{\displaystyle \lim _{n\rightarrow 0}n\uparrow \uparrow {k}} is well defined, and exists:
limn→0n↑↑k={1,k even0,k odd{\displaystyle \lim _{n\rightarrow 0}n\uparrow \uparrow k={\begin{cases}1,&k{\mbox{ even}}\\0,&k{\mbox{ odd}}\end{cases}}}
This limit holds for negative n{\displaystyle n} , as well. 0↑↑k{\displaystyle 0\uparrow \uparrow {k}} could be defined in terms of this limit, but 0↑↑2=0{\displaystyle 0\uparrow \uparrow 2=0} would conflict with the standard undefinedness of 00{\displaystyle 0^{0}} .
File:Tetration period.gif
Tetration by period
I've seen text where 00=1{\displaystyle 0^{0}=1} and where 00{\displaystyle 0^{0}} is undefined. Take a look at the enlarged version of the tetration by period fractal. Zero is in the yellow circle immediately to the left of the large red area. The yellow area is period two, it doesn't converge to a single value but oscillates between two values. The following shows that defining 00=1{\displaystyle 0^{0}=1} is consistent with the period two behavior in the neighborhood of zero.
0 = 0 {\displaystyle 0=0}
00=1{\displaystyle 0^{0}=1}
000=01=0{\displaystyle 0^{0^{0}}=0^{1}=0}
0000=00=1{\displaystyle 0^{0^{0^{0}}}=0^{0}=1}
Tetration base −10−6,{\displaystyle -{10}^{-6},}
1.,−1.10−6,{\displaystyle 1.,-1.\,{10}^{-6},} 1.00001−3.1416410−6i,−9.9981910−7−8.6790610−11i,{\displaystyle 1.00001-3.14164\,{10}^{-6}\,i,-9.99819\,{10}^{-7}-8.67906\,{10}^{-11}\,i,} 1.00001−3.1398710−6i,−9.9981910−7−8.6759210−11i,{\displaystyle 1.00001-3.13987\,{10}^{-6}\,i,-9.99819\,{10}^{-7}-8.67592\,{10}^{-11}\,i,} 1.00001−3.1398710−6i,−9.9981910−7−8.6759210−11i,{\displaystyle 1.00001-3.13987\,{10}^{-6}\,i,-9.99819\,{10}^{-7}-8.67592\,{10}^{-11}\,i,}
Tetration base 10−6,{\displaystyle {10}^{-6},} 1.,1.10−6,0.999986,1.0001910−6,0.999986,{\displaystyle 1.,1.\,{10}^{-6},0.999986,1.00019\,{10}^{-6},0.999986,} Daniel Geisler 19:32, 6 May 2005 (UTC)
> How many different names and notations are there for tetration? I've seen them called super-exponents and hyper-powers; the operation tetration and hyper-4, and at least three different symbolologies (Knuth's up arrows, the related ^^ notation, and the horrid left-superscript). Should we cross-reference any of these?
> Hi I don't know if this is the place, but I've been working on tetration for years, and i've had my share of describing it to the passer-by, so I've weeded out some of the more cumbersome notations and pronunciations...
Most intersections between iterated exponentials, tetration, and hyper4 can be described with what I call "auxilliary tetration" (c`b`a) == c b a, where c, and a are superscripts. Great care should be taken to use parentheses with auxilliary notation for tetration, because (E^x)^^y != (y`E`x) so be careful! (2`b`a) == b^b^a. I also found that Ioannis' [[1]] uses auxiliary tetration in the notation c(b, a) == (c`b`a), which I suppose I might start using, because HE's published it, i have not. Normal tetration is expressed as (c`b`1) == b^^c, and exponentiation is (1`b`a) == b^a, and multiple iterations of exp(x) n times is (n`e`x). One of the benefits of auxilliary tetration is that it has a few more axioms for moving things around: (c`b`a) == b^(c-1`b`a) == (c-1`b`(b^a)). Some interesting things you can express with auxilliary tetration are for example: googol=(2`10`2), and googolplex=(3`10`2). Oh i just remembered seeing an ASCII notation for auxilliary tetration somewhere online, i don't remember where, they had (c`b`a) == "a@b^^c" to make it look like scientific notation. The arguments of normal tetration I've heard unanimously called "base" and "order", while the final exponent in auxilliary tetration I call the "auxilliary". For the inverse functions I named them before I found anything online about them, and since I knew that it was called "tetration" i called the inverses "tetra-root" and "tetra-log" (tlog()) where the tetra-root finds the base b in (c`b`a), and tetra-log finds c, the order. I saw that bit about slog(), and I really don't think thats an appropriate function name, using that system, the inverse of pentation with respect to the second argument would be sslog(), and that just gets silly. The tetra-root/tetra-log terminology allows for the corresponding inverses of pentation (hyper5) to also have names: penta-root and penta-log (plog()). The pronunciation of the hyper function is pretty straight forward: b(d)c is pronounced "b hyper-d c", but I've had much more trouble finding a proper pronunciation for tetration and auxilliary tetration. The way its said currently without reference to tetration is "b to the b, c times", or when its about iterated exponentials, "the c-th iteration of b to the a", or a "c-th order power tower base b". The terminology I perfer the most is "the c-th tetration of b" or "b tetrated by c". When combined with an auxilliary, I've found the best way to pronounce (c`b`a) is "tetrate b by c to the a" and in a crunch, you could even leave off the tetrate if the context is clear and just say "b by c to the a", or without auxilliary: "b by c". -- Andrew Robbins and_j_rob(at)yahoo(dot)com
PS, I've also compiled a short list of the different continuous extensions of tetration I've come across, and tried myself, along with a bunch of Mathematica code for interpolation and turning the 0<order<1 strip into a whole function. Anyone think this would be of use to Wikipedia?
Questions about super-exponentation
1. Does anyone know how to define a@b when b is not an integer?? If a is not an integer, it is almost as easy as it is if both are integers. 1.5@2 = 1.5^1.5, but how about if b is not an integer??
> This is an open problem in mathematics. I've seen attempts to define them using fractals, combinatorics, and dynamics, but there's not that much progress.
2. Is e@2 known to be irrational?? The first few decimal places are 15.154262241479.
3. How about e@3, which is around 3814279.1047602??
I do not think the "subject" has been sufficiently studied to give an answer to those questions. Question 1 may either be senseless or be trivial via logarithms and exponentials. Just my 2c. Pfortuny 19:06, 7 Mar 2004 (UTC)
In one of Rudy Rucker's books, he calls this idea "tetration". -- Tarquin 19:11, 7 Mar 2004 (UTC)
But Wikipedia doesn't have a page titled tetration.
I don't understand your point. I said there are other names for this concept that we should perhaps mention in the article -- Tarquin 14:13, 8 Mar 2004 (UTC)
I finally made tetration a re-direct page. But, is there a corresponding "pentation"?? If so, what is its symbol??
Who wrote the last remark? Please sign your ideas so the conversation makes sense?
Theoretically, an unlimited number of binary operations can be methodically built upon one another via iteration. Practically, there is very little compelling justification to do so, however.
On rare occasion, you will discover an equation in an area of applied math which can be expressed more concisely via tetration than involution ("exponentiation"). With successively higher binary operations, though, I do not know of any more advantageous expressions of equations that exist.
This leaves only one practical usage for higher binary operations which I can think of: a more concise expression of extremely large, combinatoric values such as Graham's number. OmegaMan
Yes, there is a pentation. The article isn't very good yet, so can someone help make it better? Hexation also exists and suffers from the same problems, although it's slightly better than the "pentation" article. --116.14.26.124 (talk) 10:26, 23 June 2009 (UTC)
Conjecture about super-exponentiation and cosines
Using Microsoft Works Spreadsheet, I found the following properties:
1. If n is even, the limit of x@n as x approaches 0 is 1.
2. If n is odd, the limit of x@n as x approaches 0 is 0.
From this, I have conjectured, but not proven, that defining a@b when b is not an integer can be done in terms of a function containing cosine in it, because these limits are the same as (cos (pi*(x/2)))^2. User 66.32.73.125
Inverse of super-exponentiation
I would like to create an article about the inverse operation of super-exponentiation, used by the same source that has the @ symbol for super-exponentiation using the & symbol. However, I don't know what to call it, and I feel afraid someone will very likely put it on vfd. Do you know?? 66.32.82.95 17:56, 2 Apr 2004 (UTC)
All inverse binary operations express what can also, of course, be expressed using ordinary, non-inverse or straightforward binary operations. So, they should only be used wherever comparatively convenient.
Consequently, subtraction is used less often than addition; evolution is used less often than involution (often awkwardly called "exponentiation"). Division is the only inverse binary operation which rivals multiplication in its commonplace usage.
In my opinion, there is no hope for an inverse binary operation of tetration (often awkwardly called "super-exponentiation") being used at all since it could only be communicated clearly in terms of tetration. Moreover, agreed standards in mathematical language and notation would become a problem. OmegaMan
> Remember that there isn't an inverse for tetration; there are two inverses. Addition and multiplication are commutative, so they have only one inverse each -- but exponentiation* has two, roots and logs. In the same way, there are 'hyperlogs' and 'hyperroots' that undo tetration.
I couldn't find a dictionary entry that links "involution" with exponentiation, although it does have a different mathematical meaning. I'm not sure it's wise to use such a term.
True or false??
True or false: there are plenty of Wikipedia links that change in the following category:
Originally, the link at Article A was a direct link to Article C, but later, someone modifies it and makes Article A link to Article B where Article B re-directs to Article C.
There appear to be plenty in the case of this article being C, but I want to know if there are plenty with no particular Article C. 66.245.23.108 22:56, 12 Jul 2004 (UTC)
Categorizing this article
Can anyone think of a category for this article to go into?? 66.245.77.90 00:36, 26 Aug 2004 (UTC)
Soft hyphens in long numbers
I've placed soft hyphens (­) into the 155-digit (206-character) value of 4↑↑3 because it was making the page super-wide in my browser. - dcljr 05:08, 29 Aug 2004 (UTC)
Negative "super-exponents"?
The last three entries on the page read:
n↑↑(-1) = 0 for all real numbers not equal to 1
n↑↑(-2) = negative infinity for all real numbers greater than 1
n↑↑(-2) = infinity for all real numbers between 0 and 1
Are these by definition? (Whose?) Can someone explain to me (and in the article itself) what a negative "super-exponent" (or whatever you'd call it) would even mean? - dcljr 05:22, 29 Aug 2004 (UTC)
Take the sequence negative infinity, 0, 1, 2, 4, 16, 65536. Does it make sense?? What would make more sense to you?? 66.245.127.199 21:19, 30 Aug 2004 (UTC)
What would make more sense to me -- and probably to dcljr -- is what I've replaced that list of identities with. Even if my TeX skills leave much to be desired. --Aponar Kestrel (talk) 20:24, 2004 Sep 15 (UTC)
But ln (x) approaches –∞ as x approaches 0. --116.14.26.124 (talk) 10:31, 23 June 2009 (UTC)
Calculation errors?
I could be wrong, but the values I get for numbers as simple as 3↑↑3{\displaystyle 3\uparrow \uparrow 3} differ from those on the page. The values I get would be:
1↑↑3{\displaystyle 1\uparrow \uparrow 3} = 111{\displaystyle \,\!1^{1^{1}}} = 1
2↑↑3{\displaystyle 2\uparrow \uparrow 3} = 222{\displaystyle \,\!2^{2^{2}}} = 16
3↑↑3{\displaystyle 3\uparrow \uparrow 3} = 333{\displaystyle \,\!3^{3^{3}}} = 19,683
4↑↑3{\displaystyle 4\uparrow \uparrow 3} = 444{\displaystyle \,\!4^{4^{4}}} = 4,294,967,296
5↑↑3{\displaystyle 5\uparrow \uparrow 3} = 555{\displaystyle \,\!5^{5^{5}}} = 298,023,223,876,953,125
6↑↑3{\displaystyle 6\uparrow \uparrow 3} = 666{\displaystyle \,\!6^{6^{6}}} = 10,314,424,798,490,535,546,171,949,056
... with similar divergences for the next rows. Have I missed something? These values are easily checkable with a pocket calculator, if anyone would care to back me up. Aydee
Hmm. You're doing (3^3)^3 (= 19683), the page does 3^(3^3) (= 7.62559748×1012).
If I go to a linear calculator (like Google Calculator) and ask it for "3^3^3", it converts it into 3^(3^3).
But the definition of this function on the page is x↑↑y=x raised to its own power y times{\displaystyle x\uparrow \uparrow y=x{\mbox{ raised to its own power }}y{\mbox{ times}}} . Does that mean x^x, then the result of that raised to x, etc.? Or does it mean the resolution of the symbol xxx{\displaystyle \,\!x^{x^{x}}} ?
The definition at Knuth's up-arrow notation indicates that it is the latter. But if so, then the definition of "iterated exponentiation" is not accurate. It would seem more accurate to say something like "the xth power of x, y times".
- KeithTyler 21:33, Sep 15, 2004 (UTC)
Good point. It'd probably be best to use the definition on Knuth's up-arrow notation in some form to clarify this, but I'm not sure what the best policy in these cases may be; my gut feeling is that this article needs the definition and Knuth's up-arrow notation should refer here, as the up-arrow notation is merely a method of representing tetration. On the other hand, I could be wrong. Any other ideas? Aydee 01:41, 2004 Sep 16 (UTC)
I just added a quick example of iterated expectation to the page. Perhaps that's sufficient to clear up any misunderstandings? - dcljr 06:37, 10 Oct 2004 (UTC)
There was a note in this article near the beginning: Note that when evaluating multiple-level exponentiation, the exponentiation is done at the deepest level first (in the notation, at the highest level). In other words:
2222=2(2(22))=2(24)=216=65,536{\displaystyle \,\!2^{2^{2^{2}}}=2^{\left(2^{\left(2^{2}\right)}\right)}=2^{\left(2^{4}\right)}=2^{16}=65,\!536}
2222{\displaystyle \,\!2^{2^{2^{2}}}} is not equal to ((22)2)2=256{\displaystyle \,\!\left({\left(2^{2}\right)}^{2}\right)^{2}=256}
It means that
4↑↑3{\displaystyle 4\uparrow \uparrow 3} =4(44){\displaystyle 4^{\left(4^{4}\right)}} and not (44)4{\displaystyle \,\!\left(4^{4}\right)^{4}} , which is how you computed for the value. --Kevin_philippines 20:56, Sep 8, 2006 (UTC)
On July 4, 2004, User:Gdr asked this page to be renamed, but it hasn't been renamed. What happened?? 66.245.71.98 15:45, 16 Sep 2004 (UTC)
Your helpful wizards have finally woken up and noticed! Noel 20:01, 17 Sep 2004 (UTC)
Given that, as pointed out in the article, the up-arrow symbol (when present on a computer keyboard) is used similarly to the caret symbol (indicating superscripts), couldn't we just replace all the double up-arrows on the page (except the ones next to Knuth's name) to double carets (^^)? This would make the page readable in all browsers. Just a suggestion... - dcljr 06:56, 10 Oct 2004 (UTC)
Tetration number names
Please read very slowly and carefully:
Can anyone come up with a way to name numbers that tetration can help visualize the magnitude of?? One way that I came up with is a building with 103 floors, each of which is for numbers of various sizes; the higher the floor, the larger the numbers. The Tetrational System (the numbering system that this uses) has the following number names:
For numbers <= 10^30, whose magnitudes are easy to visualize without tetration, it uses the same number names as Rowlett. On the first 2 floors, we have:
First floor: Two, Three, Six, Nine, Twelve, Fifteen, Eighteen, Twenty-one, Twenty-four, Twenty-seven, Thirty
Second floor: Hundred, Thousand, Million, Gillion, Tetrillion, Pentillion, Hexillion, Heptillion, Oktillion, Ennillion, Dekillion
Now, let's go onto the third floor. This is where the numbers get large enough that tetration is a useful way; each term is 10 to the power of the previous term:
Third floor: Googol, Froogol; the remaining words on this floor are the same as those on the second floor only that they use -illoogol instead of -illion (that is, Milloogol to Dekilloogol are 10^(10^6) through 10^(10^30.) (Froogol is a back-formation on Froogle on the model of Googol/Google.)
Fourth throught 103rd floors: Simply take the number names on the third floor and add "-plex" for those on the fourth floor, "-duplex" for the fifth floor, "-triplex" for the sixth floor, and so on all the way to "-centuplex" for the 103rd floor. This makes the largest number in the building (dekilloogolcentuplex) 10^10^10^...10^10^10^30 with a total of 103 numbers (102 tens and a 30) between exponent signs.
Any numbers too large for this?? At this moment, I know of only one number too large for this building, Graham's number. 66.245.98.219 23:12, 16 Nov 2004 (UTC)
Jonathan Bowers created an extendtion to tetration called array notation which he explains on his Home Page. He also has many examples of Infinity Scrapers, which compared to the numbers in this building, are like Alpha Centura!--SurrealWarrior 20:23, 11 August 2005 (UTC)
Anon minor formula edit
Line 33, was: 2↑↑(n−3){\displaystyle 2\uparrow \uparrow (n-3)} − 3
Now is: 2↑↑(n+3){\displaystyle 2\uparrow \uparrow (n+3)} − 3
As I am no expert, please check if this revision is valid. -- AllyUnion (talk) 10:33, 10 Dec 2004 (UTC)
(The following discussion was movied from the Super-exponentiation entry on Wikipedia:Pages needing attention/Mathematics. Paul August ☎ 20:06, Feb 7, 2005 (UTC)")
Super-exponentiation -- is this notation genuine? The source given is a mathforum link to where the notation is "invented" by a enthusiastic student. It seems like a dup of Knuth's up-arrow notation. Motor 19:48, 18 Mar 2004 (UTC)
No, Knuth's up-arrow notation is for all operations from super-exponentiation upwards, and super-exponentiation is just for that operation itself. 66.32.89.242 23:33, 2 Apr 2004 (UTC)
I rewrote this page using Knuth's up-arrow notation and the standard term "tetration". Really this page should be moved and redirected to Tetration. Gdr 13:23, 2004 Jul 4 (UTC)
Done. Noel 20:06, 17 Sep 2004 (UTC)
I currently put this article in Category:Mathematics, but I want to see if anyone can give a more specific category for this article. 66.245.77.90 00:44, 26 Aug 2004 (UTC)
http://mathworld.wolfram.com/PowerTower.html This is topic is discussed at Mathworld under the heading "Power Tower" GulDan 17:48, Sep 16, 2004 (UTC)
I have added information about the standard notation with supporting references. I strongly recommend using the published notation and nomenclature. MathWorld's entry on Power Tower is great, but they do have a sub page for tetration that directs to their Power Tower entry. Technically these are not the same thing; tetration is much more comprehensive than Power Tower, but almost all published research on tetration has been confined to the Power Tower due to the profound difficulty of the subject. I have changed the category to arithmetic. User:Daniel Geisler 12:56, Dec 28, 2004 (UTC)
The other successor of exponentiation
The article seems to suggest that tetration is the logical extension of the sequence of addition, multiplication, and exponentiation. However, what about ((nn)n...) for m occurences of n? It seems to me this operation is equally an extension: Since exponentiation is non-commutative, the sequence bifurcates at this point. This "inferior super-exponentiation", if you will, has interesting properties of its own. For instance, a negative value for m corresponds to taking the nth root m times (since it can remove exponents), so m=0 yields nn{\displaystyle {\sqrt[{n}]{n}}} .
Anyway, shouldn't there be an article about this operation also? Who here knows its standard name(s) and notation(s), or does it have any? (I thought "iterated exponentiation" was reasonable, but the article's (and discussion's) usage conflicts.) --Ddawson 12:08, 20 Mar 2005 (UTC)
This is simply nnm nnm{\displaystyle n^{n^{m}}} (NESTED SUBS/SUPS ARE BROKEN ON WIKIPEDIA >:() , there is nothing especially "new" about it relative to the sense that tetration is some new operation. Dysprosia 22:25, 20 Mar 2005 (UTC)
Good point. (That should be nnm−1{\displaystyle n^{n^{m-1}}} , though, since I count the base n.) I see what you mean: this can be defined by a fixed expression, whereas tetration must be defined recursively in a formal sense. Ddawson 15:06, 21 Mar 2005 (UTC)
Maybe there could be a quick mention of this, somewhere soon after the definition of tetration, with the explanation of why it's not nearly as interesting. My reasoning is that it's still a valid way of iterating exponentiation. Ddawson 15:25, 21 Mar 2005 (UTC)
Infinite power towers
Someone made a strange removal on the basis that "2 is greater than e" (??). I have reverted that, but then I noticed that the reason that the article stipulates r <= e, is to ensure "x{\displaystyle x} is not more than e 1 / e {\displaystyle e^{1/e}} ". But investigation shows that r^(1/r) is never greater than e^(1/e). The article misleadingly implies otherwise when r > e. Therefore I think that that phrase should be removed or revised. Eric119 02:28, 2 August 2005 (UTC)
I rephrased it.--Patrick 07:40, 2 August 2005 (UTC)
I added a precise definition of infinite power towers, since I've had arguments with people before who insisted xxx...{\displaystyle x^{x^{x^{.^{.^{.}}}}}} is not well-defined. And I can see their point, since the result of raising x to an already existing tower depends on the value the tower already has. For instance, while the sequence 2222{\displaystyle {\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{\sqrt {2}}}}} , 22222{\displaystyle {\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{\sqrt {2}}}}}} , 222222{\displaystyle {\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{\sqrt {2}}}}}}} , ... clearly converges to 2, 2224{\displaystyle {\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{4}}}} , 22224{\displaystyle {\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{4}}}}} , 222224{\displaystyle {\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{4}}}}}} , ... just stays at 4 every step of the way, even though it would seem to eventually lead to the same infinite tower. Gmalivuk 20:45, 6 July 2007 (UTC)
New tetration and slog software
A posting on Wolfram's NKS Forum from C. A. Rubtsov and G. F. Romerio has a link to their tetration and slog software. I can't say that I support their conclusions, but then I can't really name anyone who support my own conclusions, so it's all good. They have released a tetration and slog calculator that does give a much larger value for e^^pi that I can justify, but it is more in line with Andrew Robins' estimate. I do share their interest in S. C. Woon's work.Daniel Geisler 17:50, 27 December 2005 (UTC)
I've checked out their calculators, and I like the interface, but the method they use for calculating superlog is the "linear" extension that uses the s(z) = (z-1) critical function, which ammounts to a 1-st degree solution using my method. The method is described in:
Solving for the Analytic Piecewise Extension of Tetration and the Super-logarithm,
which should now be available. Andrew 22:15, 2 February 2006 (UTC)
A thought about addition
a+b=a+1+⋯+1⏟b{\displaystyle {{a+b=a+} \atop {\ }}{{\underbrace {1+\cdots +1} } \atop b}}
Or, a incremented b times. Or, applying b times the Successor function to a (See Peano axioms, it's all there)
Locoluis̊ 01:06, 31 March 2006 (UTC)
Just my tought. If tetration is "level 4", exponentiation is "level 3". multiplication is "level 2" and addition is "level 1" when the addition with 1 is "level 0" and "ground level". //Fabben
I have been trying to find a solution of "level 0 binary operator" for a few years. The hypothesis of increment by 1 proposed above, looks quite reasonable, however it leaves some blank spots in the whole picture.
Let's accept that a(1)b = a+b, a(2)b = ab, a(3)b = a^b etc. (just for further convenience).
Now we are discussing the problem of a(0)b. As Locoluis proposes, a(0)b = a+1. There are two logical conclusions from that:
1) The "level 0" operator is not commutative: a(0)b = a+1 != b(0)a = b+1
2) The function f(x)=a(0)x is a constant, and it does not depend on its "x" argument.
From the other hand, let's examine the following sequence:
a(1)a = a(2)2, i.e. a+a = a*2;
a(2)a = a(3)2, i.e. a*a = a^2;
a(3)a = a(4)2, i.e. a^a = a^^2 etc.
and let's try to extrapolate that sequence below "level 1". We will get:
a(0)a = a(1)2, i.e. a(0)a = a+2,
which comes into a contradiction with the assumption that a(0)b = a+1 (!)
In my opinion, the a(0)b operator definition should meet the following requirements:
1) a(0)a = a+2 (see explanation above);
2) a(0)m < a(0)n, if m<n;
3) m(0)a < n(0)a, if m<n;
4) a < a(0)b < a+b for any positive integer "a" and "b".
A very special case is when b=0. Accepting our assumption, we get
0(0)0 = 0+2 = 2, which is larger than 0(1)0 = 0+0 = 0 (!!!)
--Beloturkin 09:30, 8 September 2006 (UTC)
So far, welcome to continue discussing the Zero-Level Binary Operator... -- Unsigned
Addition can never be expressed in the same terms as further iterated operators. Note that per definition, n(2)1=n, n(3)1=n, and in general n(m+1)1=m (since you're doing n(m)n(m)n(m)...(m)n, except with only one m, leading to the degenerate case where no (m) operations actually take place). However, n(1)1 is not equal to n, even though it would be if addition were defined in terms of an iteration of a (0) operator - regardless of what that operator was. Thus, addition cannot be defined in that sense. -- Milo
Moved Comment
Isn't this page violating the third paragraph of "What Wikipedia is not"?
This should be added as a comment below. Apparently this is refering to the section on original research ([Wikipedia is not, 1.3]). I personally think it can be deleted because nobody signed to it. --Whiteknox 19:20, 28 December 2006 (UTC)
n ^^ 1/2
Would it help in extending tetration to the reals to extend it first to n^^(1/2) (and by extension n^^(k+1/2))? If so, what useful identities can be used to solve the equation x = n^^(1/2) for n? At first I thought about saying x = n^^(1/2) implies n = x^^2, but this really doesn't make sense, since (n^^a)^^b doesn't equal n^^(a*b) or n^^(a+b) or anything else simple. ((n^^2)^^2 = n^(n^(n+1)) = (n^^3)^n.)
Using the symbol "=/=" to denote "not equal to";
It sounds OK, the problem is though that in general n^^(1/2) =/= n^^(2/4), n^^(3/6), n^^(4/8) etc. which would be necessary for this extension to be rigorous (and mathematics is always rigorous). Other oddities crop up as well, such as the fact that 2^^(1/3) < 2^^(2/7) even though 1/3 > 2/7.
This and other problems in extending tetration to the reals is a consequence of the fact that exponentiation is neither commutative nor associative, so that in general a^b =/= b^a and a^(b^c) =/= (a^b)^c.
Meltingpot (talk) 09:19, 15 April 2008 (UTC)
It seems that there should be some identities along the lines of n^a*n^b = n^(a+b) and (n^a)^b = n^(a*b). Actually, one identity that may eventually be helpful is (n^^a)^^2 = (n^^(a+1))^(n^^(a-1)), although I don't know how helpful.
Also, in the extension to real numbers section, the slog examples seem wrong to me. It looks like they are arbitrarily equated to log. (slog10 3 = log10 3) The real inverse would be x in 10^^x = 3 which is currently undefined. (Both of the other examples make similar mistakes.) MagiMaster 07:47, 26 September 2006 (UTC)
Actually, does it make sense to define x^^(1/n) = y such that y^^n = x? If so, I might have a good definition for x^^y based on continued fractions. Basically, n in the above equation doesn't have to be an integer. Just apply that definition, for 0 < y < 1, and the definition of x^^y, for y >= 1, recursively. For any rational n, it's a finite process. Using those definitions, 4^^(3/5) ~= 2.2266. Does anyone see any problems with this definition? (Also, I have no idea how to prove monotonicity or continuity...) Hmm... I was just reading Andrew's paper (see above) and my values don't agree with his. (His approach looks a lot more rigorous than mine.) MagiMaster 05:31, 28 September 2006 (UTC)
Yes, unfortunately; for consistency 4^^(3/5) would have to equal 4^^(6/10), 4^^(9/15), 4^^(12/20) as well, which it almost certainly doesn't (as I explained above).
y=xx
Can xx be differentiated? And if not, is there proof? I think dxxdx=0{\displaystyle {\frac {dx^{x}}{dx}}=0} when 1e=x{\displaystyle {\frac {1}{e}}=x} and it appears that dxxdx=1{\displaystyle {\frac {dx^{x}}{dx}}=1} when 1=x{\displaystyle 1=x} . But I don't think I can find the derivative at first glance.--Steven Weston 12:33, 10 May 2007 (UTC)
dxxdx=dexlnxdx=exlnxd(xlnx)dx=exlnx(1+lnx)=xx(1+lnx).{\displaystyle {\frac {dx^{x}}{dx}}={\frac {de^{x\ln x}}{dx}}=e^{x\ln x}{\frac {d(x\ln x)}{dx}}=e^{x\ln x}(1+\ln x)=x^{x}(1+\ln x).}
This result is also mentioned in the table of derivatives. If you fill in x=1{\displaystyle x=1} , you'll see that the derivative is indeed 1, as you say. Similarly, the derivative is zero at x=1e{\displaystyle x={\frac {1}{e}}} . -- Jitse Niesen (talk) 13:47, 10 May 2007 (UTC)
Cheers. It looks so simple now... I've looked through the table of derivatives in the past, but I can't say that it's ever caught my eye. I've been interested in tetration for a while now. What's the antiderivative then, or indeed is there one? I couldn't immediately find it in the lists of integrals.--Steven Weston 19:28, 10 May 2007 (UTC)
I don't know how to find the antiderivative. I tried it in Maple and it did not give an answer. This probably means that the antiderivative cannot be written in terms of elementary functions like exponentials, sine, etc. Nevertheless, the antiderivative does exist; it's just that we cannot find a nice formula for it. -- Jitse Niesen (talk) 01:17, 11 May 2007 (UTC)
You said that it definitely exists: is there proof of this?--Steven Weston 09:28, 11 May 2007 (UTC)
The function f(x)=xx{\displaystyle f(x)=x^{x}} is a continuous function. There is a theorem that says that every continuous function has an antiderivative (this result is mentioned in antiderivative). -- Jitse Niesen (talk) 14:44, 11 May 2007 (UTC)
Just out of interest, can xx{\displaystyle ^{x}x} be differentiated?--Steven Weston 10:33, 15 May 2007 (UTC)
That depends on how you define xx{\displaystyle {}^{x}x} when x is not an integer. -- Jitse Niesen (talk) 01:39, 16 May 2007 (UTC)
Has this special case got a name? It is barely mentioned either here or on Exponentiation. —DIV (128.250.80.15 (talk) 07:43, 1 May 2008 (UTC))
Real Extension
Not my area of expertise, but the following two sentences in this section appear to be in direct conflict:
Extending x↑↑b{\displaystyle x\uparrow \uparrow b} to real numbers x>0{\displaystyle x>0} is straightforward...
At this time there is no commonly accepted solution to the general problem of extending tetration to the real or complex numbers...
Can someone clarify this? Ultraviolet777 22:58, 24 July 2007 (UTC)
The latter bit is talking about real or complex values of "b", the number of iterated exponents. I changed the wording a little in the article to improve this. Doctormatt 23:18, 24 July 2007 (UTC)
real b{\displaystyle ~b~}
Well, Doctormatt, on the base of your update of the "real extension", could you please, name, for example, the first terms of the expansion of the tetration in the Tailor series at samall values of the number b{\displaystyle ~b~} of exponentiations, ?
In other words, if
be=1+∑n=1∞cnbn{\displaystyle ~{^{b}{\rm {e}}}=1+\sum _{n=1}^{\infty }c_{n}b^{n}}
then, what are coefficients c{\displaystyle ~c~} ?
dima (talk) 02:24, 21 January 2008 (UTC) P.S. What is radius of convergence of the series?dima (talk) 02:27, 21 January 2008 (UTC)
It appears that you haven't carefully read Doctormatt's changes or comments. ba{\displaystyle ^{b}a} is analytic in a for fixed (integer) b. He removed (IMHO) any indication that there is a simple extension to real b of any sort, even if uxp is offered as such an extension. — Arthur Rubin | (talk) 07:46, 21 January 2008 (UTC)
Dear Artur. Sorry if my typing above is not clever enough. I understand that at fixed integer b the tetration is analytic function of a. Doctormatt typed about that case. Perhaps, his typing cannot be applied to the case of real b. The extension to the real heighs has two examples. First of them seems to be just uxp. The second one has smooth first derrivative, but it is still not analytic, and the series above in not valid. May I repeat the question: How about the analytic extension for real and for complex b? I suggest that somebody gives the reference to such an extension; overwice we should type that <no such extension is yet reported>. dima (talk) 09:14, 21 January 2008 (UTC)
A paper describing a real-analytic solution for the infra log (base e) was reported to me in a USENet post in the 90s, and (I believe) here in Wikipedia:Reference desk/Mathematics. I never looked up the paper, so I can't confirm it. — Arthur Rubin | (talk) 13:54, 21 January 2008 (UTC)
some small edits without author-notification
I'd like to excuse for doing edits without being logged in (usually I automatically logged in), so there is no author at my (small) series of edits just before. Don't remember, how to correct this - sorry Gottfried Helms Gotti 08:22, 2 November 2007 (UTC)
Pentation
Why not bother to add a Pentation article? Even though it's a redirect, I think it should have its own article. Pilover819 (talk) 11:37, 23 December 2007 (UTC)
Actually, it would be nice to have a page on pentation, as there is at least one result that has been found about it. For example Jay D. Fox notes in the Tetration Forum that the limit limn→(−∞)x↑↑↑n{\displaystyle \lim _{n\rightarrow (-\infty )}x{\uparrow }{\uparrow }{\uparrow }n} is the lower fixed point of real-valued tetration, for example: e↑↑(−1.85)=−1.85{\displaystyle e{\uparrow }{\uparrow }(-1.85)=-1.85} so e↑↑↑(−∞)=−1.85{\displaystyle e{\uparrow }{\uparrow }{\uparrow }(-\infty )=-1.85} . Aside from this result, though, not much is known about pentation. AJRobbins (talk) 16:08, 28 December 2007 (UTC)
There now is a page on pentation. Don't know how long it will last though. --116.14.26.124 (talk) 03:45, 23 June 2009 (UTC)
I am not satisfied with the paper. The definition of tetration is almost absent. The speculation about axtimesa...a{\displaystyle a_{^{~x{\rm {times}}}}^{a^{...^{a}}}} looks as property of xa{\displaystyle ~^{x}a~} in the specific case of real a{\displaystyle ~a~} and natural x{\displaystyle ~x~} . The definition should allow to see that this function is analytic with respect to each of its arguments, (at least in some range of values of arguments), reveal its singularities, and so on. Overwice we should write at the preamble that Tetration is some mathematical function of two variables which is believed to be definable as analytical in some (still unknown) set including natural numbers. P.S. Look at the definitions of other special functions; try to do the same for tetration. Then we can keep tetration among special funcitons. dima (talk) 04:39, 13 January 2008 (UTC)
I put the subsection about uxp as special case of tetration. I tried to make it short; only definition. I suppress the doubtful statement about analiticity at a=e. Now my dissatisfaction is rather about present state of the theory of tetration, not about the article. However, the article ultra exponential function should be improved; perhaps, including the proof of the Theorem about discontinuity of derivative at integer values of argument. It seems, there is no way to define tetration as analytical function and no regular way for the precise evaluation at non-integer points. Why, for example, factorial can be generalized to analytic gamma function, but the tetration looks so ugly at the complex plane? dima (talk) 23:17, 14 January 2008 (UTC)
Discussion about "UXP"
I have moved this section from the main article, as its claims are unclear, and if I interpret them correctly, contradictory. You cannot have differentiability at one integer value (Claim C), and have non-differentiability at another integer value (Claim E) while you have a functional equation that relates these two values (Claim A). That is simply impossible. If anyone can explain to me how these claims are not in contradiction with themselves, please do so. As for you Domitori/dima and Ultra.Power, there is an excelent forum for discussing these things, called the Tetration Forum. I encourage both of you to join and learn more about tetration, and also discuss this ultra-exponential-function you are writing about. It is a great place to discuss tetration so that you can get feedback on your theories and gaps in your knowledge. AJRobbins (talk) 06:43, 19 January 2008 (UTC)
I have since come to an understanding that (Claim E) has a major typo in it (should be (-1)+ not 0+, which makes a big difference, as noted by Arthur Rubin). This means that the extension called "UXP" is nothing more than the method described in the "linear approximation" section of this article. This indicates that a new article ultra exponential function is unnecessary. I also agree with most of the comments in the WikiProject Mathematics discussion about "UXP", and that the "UXP" article uses non-standard terminology with non-standard notation, to introduce a standard extension of tetration in a way that makes it seem different when it really isn't. AJRobbins (talk) 02:25, 20 January 2008 (UTC)
This could have been avoided if Dima and Ultra had found the Tetration Forum before Wikipedia. The purpose of the forum was to have a place to talk about terminology, notation, and methods so that things like this don't happen. The Tetration Forum is also host to many discussions about series expansions, and extensions to the complex plane. AJRobbins (talk) 02:25, 20 January 2008 (UTC)
Ultra exponential function
The functional approach to the tetration leads to so-called ultra exponential function f(b)=uxpa(b){\displaystyle ~{\rm {f}}(b)={\rm {uxp}}_{a}(b)~} , determined with following condiitons:
(A) f(x)=af(x−1)for allx>−1{\displaystyle ~{\rm {f}}(x)=a^{{\rm {f}}(x-1)}~~{\mbox{for all}}~~~x>-1~}
(B) f(0)=1{\displaystyle ~{\rm {f}}(0)=1}
(C) f{\displaystyle ~{\rm {f~}}} is differentiable on (−1,0){\displaystyle ~(-1,0)~}
(D) f′{\displaystyle ~{\rm {f}}'~} is non-increasing or non-decreasing at (−1,0){\displaystyle ~(-1,0)~}
(E) limx→0+f′(x)=ln(a)limx→0−f′(x),{\displaystyle \lim _{x\rightarrow 0^{+}}{\rm {f}}'(x)=\ln(a)\lim _{x\rightarrow 0^{-}}{\rm {f}}'(x),}
It happens that for real values of a{\displaystyle ~a~} , the conditions (A-E) unambiguously determine the funciton uxpa{\displaystyle ~{\rm {uxp}}_{a}~} [1], {{ safesubst:#invoke:Unsubst||$N=Dubious |date=__DATE__ |$B= {{#invoke:Category handler|main}}[dubious – discuss] }} and, for integer values of the argument, the uxp{\displaystyle ~{\rm {uxp}}~} function coincides with tetration defined above . See the specific article Ultra exponential function for the details.
At a≠e{\displaystyle ~a\neq {\rm {e}}~} the function uxpa{\displaystyle ~{\rm {uxp}}_{a}~} is not smooth, and the alternative approaches to the extension of the tetration exist, which lead to the smooth functions.
I see, some users dislike the theorem above, but please, will you return to the article at least the reference? dima (talk) 03:26, 24 January 2008 (UTC)
I consider the theorem uninteresting, thereby making the paper uninteresting. Others (including the author) may disagree.... — Arthur Rubin | (talk) 03:40, 24 January 2008 (UTC)
Tetration#Approaches to inverse functions
ssrt(x)=1limn→∞2n+1(1x){\displaystyle ssrt(x)={\frac {1}{\lim _{n\to \infty }{\text{ }}^{2n+1}\left({\frac {1}{x}}\right)}}} —Preceding unsigned comment added by Cʘʅʃʘɔ (talk • contribs) 08:15, 4 March 2008 (UTC)
y = x^x
inverse is:
x = y^y
y = x^(1/y)
y = x^(1/x^(1/y))
y = x^(1/x^(1/x^(…)
y = x^(1/x)^(1/x)^(…)
y = x^(1/x)^^∞
y = (1/(1/x)) ^ (1/x)^^∞)
y = 1/(1/x)^(1/x)^^∞)
y = 1/(1/x)^^(∞+1)
y = 1/(1/x)^^ ∞.
Cʘʅʃʘɔ (talk) 15:34, 3 March 2008 (UTC)
After careful study, the equations are correct, but convergence seems sufficiently questionable that it needs a reference that the ssrt expression in terms of the W function and the ∞z expression (also in terms of the W function) have the same or similar domains of correctness or convergence. — Arthur Rubin | (talk) 19:26, 3 March 2008 (UTC)
The domain of y = lim [n -> ∞] x^^(2n+1) (not x^^n - must be odd) is:
0 < x =< e^(1/e) (s. Images: and ),
so the domain of y = lim [n -> ∞] (1/x)^^(2n+1) is:
(1/e)^(1/e) =< x < ∞.
so the domain of y = lim [n -> ∞] (1/(1/x)^^(2n+1)) is:
The minimum of x^x is x0 = 1/e; the value is y0(x0) = (1/e)^(1/e);
so (1/e)^(1/e) =< y < ∞.
Invert function has a domain:
The value of lim [n -> ∞] (1/(1/e)^(1/e) ^^(2n+1)) = 1/e,
The value of invert of y = x^x for (1/e)^(1/e) is y((1/e)^(1/e)) = 1/e.
For both functions: y(1) = 1.
And: lim [x -> ∞] (x^x)' = ∞ and lim [x -> ∞] (1/(1/x)^^(2n+1))' = 0.
Interesting pattern
I discovered this pattern to do with xx{\displaystyle x^{x}} the other day and I thought it was quite interesting.
dxxdx=xx(lnx+1){\displaystyle {\frac {dx^{x}}{dx}}=x^{x}(\ln x+1)}
d2xxdx2=xx(ln2x+2lnx+x+1x){\displaystyle {\frac {d^{2}x^{x}}{dx^{2}}}=x^{x}(\ln ^{2}x+2\ln x+{\frac {x+1}{x}})}
d3xxdx3=xx(ln3x+3ln2x+3x+1xlnx+x2+3x−1x2){\displaystyle {\frac {d^{3}x^{x}}{dx^{3}}}=x^{x}(\ln ^{3}x+3\ln ^{2}x+3{\frac {x+1}{x}}\ln x+{\frac {x^{2}+3x-1}{x^{2}}})}
d4xxdx4=xx(ln4x+4ln3x+6x+1xln2x+4x2+3x−1x2lnx+x3+6x−x+2x3){\displaystyle {\frac {d^{4}x^{x}}{dx^{4}}}=x^{x}(\ln ^{4}x+4\ln ^{3}x+6{\frac {x+1}{x}}\ln ^{2}x+4{\frac {x^{2}+3x-1}{x^{2}}}\ln x+{\frac {x^{3}+6x-x+2}{x^{3}}})}
The nth derivative takes a similar form, with a pattern taken from Pascal's triangle and another pattern of the polynomials/(their degree) continuing down in the same positions relative to the greatest power of lnx. Lol, the only mildly tedious thing was proving the conjecture I made that this pattern was true for all natural n. If anyone has seen these polynomials before, please let me know. I'd like to know if they have a specific name. If you need to know any more of them, I wrote a program that generated the first 172 of them, before the floating point numbers stopped working.--Steven Weston (talk) 23:25, 6 July 2008 (UTC)
Yes, the coefficients of those polynomials are Lehmer-Comtet numbers Sloane's A008296. Also for more expansions of this nature, please see section 5.3.2 (page 42) of the Tetration Reference. AJRobbins (talk) 18:15, 24 September 2008 (UTC)
Ultra exponential
As proposed earlier at Talk:ultra exponential function#This is nothing new., I'm proposing that ultra exponential function be merged into Tetration#Extension to real heights. — Arthur Rubin (talk) 02:58, 17 August 2008 (UTC)
Agree.--Dojarca (talk) 15:20, 31 August 2008 (UTC)
Agreed AJRobbins (talk) 02:32, 17 September 2008 (UTC)
Agree. Maybe even Tetration#Piecewise extension to real heights. dima (talk) 08:41, 27 September 2008 (UTC)
Analytic extension to real heights
Since any analytic function is infinitely differentiable, it should be mentioned that there is only one analytic extension of tetration to real heights, namely the one based on the infinite differentiability requirement.--Dojarca (talk) 15:55, 31 August 2008 (UTC)
That's not true. Well, it may be true about analytic, but I'm sure I could construct a large family of infinitely differentiable solutions. — Arthur Rubin (talk) 18:31, 31 August 2008 (UTC)
That satisfy all the integer values and functional equations? Only one.--Dojarca (talk) 09:56, 1 September 2008 (UTC)
{{cn}}? For a fixed base, you can add a small <math>C^{\infty}</math> bridge (i.e., a <math>C^{\infty}</math> function which is 0 outside the specified range) to <math>{}^{x}a</math> between (say) x=.2 and x=.3, and extend it by the functional equation for all real values of x. I think it can be made <math>C^{\infty}</math> in both x and a, but the definitions are unclear. — Arthur Rubin (talk) 17:06, 1 September 2008 (UTC)
Such an addition will break analyticity: the added function is clearly not analytic if it is 0 in one range and non-zero in another range.--Dojarca (talk) 17:23, 1 September 2008 (UTC)
I said <math>C^{\infty}</math> (infinitely differentiable) not <math>C^{\omega}</math> (analytic). I don't know if there is an analytic solution, although I've been pointed to an article on fluid flows which suggests that there are at most 3 analytic solutions. — Arthur Rubin (talk) 18:04, 1 September 2008 (UTC)
Infifnitely differeintiable in all points function cannot be zero in one range and non-zero in another range.--Dojarca (talk) 19:13, 1 September 2008 (UTC)
(unindent) Umm...
<math>f(x)={\begin{cases}e^{-1/x^{2}}&x>0\\0&x\leq 0\end{cases}}</math>
Next! siℓℓy rabbit (talk) 19:24, 1 September 2008 (UTC)
This function is not analytic: it is not equal to its Taylor series in the neighbourhood of zero.--Dojarca (talk) 19:43, 1 September 2008 (UTC)
Good job! But it is "Infifnitely differeintiableTemplate:Sic in all points function" and yet is "zero in one range and non-zero in another range", contrary to your above post. siℓℓy rabbit (talk) 19:46, 1 September 2008 (UTC)
Well, I meant an analytic function. The main point here is that there can be only one analytic extension of tetration to real heights.--Dojarca (talk) 19:53, 1 September 2008 (UTC)
Anyway, the point is moot. You can just add an (analytic) periodic function like <math>\sin(\pi x)</math>. This ambiguity necessarily invalidates uniqueness, unless further restrictions are imposed. siℓℓy rabbit (talk) 19:51, 1 September 2008 (UTC)
Then it simply will not satisfy the main functional equation for tetration.--Dojarca (talk) 19:54, 1 September 2008 (UTC)
Ah... I was thinking that <math>{}^{b+1}a=a^{({}^{b}a)}</math> only needs to hold for integer values of b. siℓℓy rabbit (talk) 19:59, 1 September 2008 (UTC)
Of course if you only fix the integer values, you can easily find multiple analytic solutions.--Dojarca (talk) 20:01, 1 September 2008 (UTC)
It's not obvious that there is any (real-)analytic solution, or that the solution is necessarily unique. Hmmm. Actually, thinking it over, the solution cannot be unique, with the following argument:
Let h be a (real-)analytic function satisfying:
<math>h(x+1)=h(x),</math>
<math>h(0)=0,</math>
<math>h'(x)>-1.</math>
(The third condition is required to retain the obvious monotonicity requirement for a > 1.)
For example, <math>h(x)=\epsilon \sin(2n\pi x)</math>, where n is an integer and ε is sufficiently small.
Let <math>f^{*}(x)=f(x+h(x))</math>.
Then f* also satisfies the equations, does it not? — Arthur Rubin
Oops. Perhaps my "3 solutions" above was an incorrect memory. I guess some other sort of regularity requirement is necessary to obtain uniqueness. I don't know why I didn't think of that 34 years ago when I first looked at the problem. — Arthur Rubin (talk) 21:24, 1 September 2008 (UTC)
Good one. siℓℓy rabbit (talk) 00:05, 2 September 2008 (UTC)
Um, how does <math>f^{*}(x)=f(x+h(x))</math> satisfy #1? I get <math>f^{*}(x+1)=f(x+1+h(x+1))=f(x+1+h(x))</math>. How does this equal <math>f(x+h(x))</math> (i.e. <math>f^{*}(x)</math>)? Furthermore, how is eq. 1 equivalent to the tetrational equation <math>f(x+1)=b^{f(x)}</math>?
As for another stronger requirement, how about requiring that the function have exactly one inflection point for <math>x\in[-2,\infty)</math>? (I'm also thinking of real bases b >= 1 only as well -- bases 0 < b < 1 seem to have a peculiar oscillating character that would necessitate a different requirement, and bases less than 0 yield complex values and all bets are off.) mike4ty4 (talk) 08:06, 17 November 2008 (UTC)
If <math>f(x+1)=b^{f(x)}</math> (the relevant equation), then <math>f^{*}(x+1)=b^{f^{*}(x)}</math>. — Arthur Rubin (talk) 14:53, 17 November 2008 (UTC)
Ah, I get it now. Thank you. mike4ty4 (talk) 06:52, 23 November 2008 (UTC)
I am surprised to admit my own ignorance, but presumably there are theorems that give sufficient conditions for the unique analytic solution of a functional equation, once appropriate boundary conditions are set. My own question is: (1) what are these theorems, and (2) which hypotheses are violated by the functional equation of the tetration. siℓℓy rabbit (talk) 10:33, 2 September 2008 (UTC)
I think there are simply not enough functional equations. For example, you need three functional equations to define the gamma function.--Dojarca (talk) 10:39, 2 September 2008 (UTC)
I think there should be a functional equation that connects tetration with the super-root function. That connection would greatly simplify dealing with tetration. There is also no formula to change the base.--Dojarca (talk) 20:43, 2 September 2008 (UTC)
Silly rabbit, this is a known case for functional equation:[2]. It is known to have multiple solutions.--Dojarca (talk) 02:26, 7 September 2008 (UTC)
The Citizendium article seems to claim that there is a unique holomorphic solution in <math>\Re(x)>-2</math>. If true and sourced, it should be here. This does not (obviously (to me, anyway)) contradict my proof above that the real-analytic solution cannot be unique, as it requires properties of the function at points of arbitrarily large imaginary part. — Arthur Rubin (talk) 21:34, 9 November 2008 (UTC)
This is already mentioned, in the section "Extension to complex heights". The CZ article doesn't have any formulas either. mike4ty4 (talk) 08:14, 17 November 2008 (UTC)
Decremented tetration
I am speculating about another criterion, which may make fractional iteration unique. Assume b = sqrt(2), t2=2, t4=4 such that b = t2^(1/t2) = t4^(1/t4). Assume also Tb(x) = b^x and Tb(x,h) the h'th iterate, and also the "decremented tetration" Ut(x) = t^x - 1 and Ut(x,h) the h'th iterate. Then power towers T_b(x,h) of b can be expressed by "decremented power towers"
T_b(x,h) = (U_t2(x/t2-1,h)+1)*t2 = (U_t4(x/t4-1,h)+1)*t4
at least for integer heights. This construction relates tetrational functions of different bases with each other and then introduces new requirements for the fractional iteration as well. Although this restriction so far occurs only for "decremented" tetration, I think this idea has some importance and may be taken further... —Preceding unsigned comment added by Druseltal2005 (talk • contribs) Gotti 17:50, 13 September 2008 (UTC)
I don't think this is correct. Do you wish to further comment? — Arthur Rubin (talk) 21:34, 9 November 2008 (UTC)
?? Please, what do you mean is not correct? The formula? The idea to derive from it some requirements/restrictions?
--Gotti 19:15, 1 February 2009 (UTC) —Preceding unsigned comment added by Druseltal2005 (talk • contribs)
Yes, the formula seems incorrect, as well as not being obviously related to the tetration concept(s) described here. — Arthur Rubin (talk) 06:50, 6 February 2009 (UTC)
@Formula. Assume T-tetration
T°0(x) = x, T°1(x)= b^x, T°2(x)=b^b^x,...
Assume U-tetration
U°0(x) = x, U°1(x)= t^x -1, U°2(x) = t^(t^x -1)-1, ...
Use for an example a suitable base
b = sqrt(2) = 2^(1/2).
t = 2, u = log(2) , so b=t^(1/t) = exp(u/t).
Write '-operation as x' = x/t-1 and the inverse "-operation as x"=(x+1)*t , so (x')" = x
Then T-tetration and U-tetration are connected in the following way:
T(x) = b^x
= (t^(1/t))^x
= t^(x/t)
= t^(x/t-1)*t
= (t^x' )*t
= ( (t^x'-1) +1)*t
= (t^x'-1)"
= U(x')"
T°2(x) = b^b^x
= b^(U(x')")
= t^(U(x')"/t)
= t^(U(x')+1)
= t^(U(x'))*t
= ((t^(U(x'))-1)+1)*t
= ( U(U(x'))+1)*t
= U(U(x'))"
= U°2(x')"
Generally for any integer height
T°h(x) = U°h(x')"
where the base for U(x) is t and for T(x) is b=t^(1/t)
The importance of this is, that U-tetration (or dxp() or "decremented exponentiation") can be "regularly" iterated to fractional heights (powerseries has no constant term), for instance using a triangular matrix and its (well defined) fractional powers. This is then exploited as an easy interpolation-possibility for fractional iterates for T itself.
Now if b=sqrt(2), then the base for the according U-tetration can be either t= 2 or t=4 . It was said, that the different shifting using t=2 and t=4 were numerically different when *fractional* iterates are computed, although for *integer* iterates the above transfer works well.
For the relation to the tetration-concept here set x=1 (keep the powerseries in terms of x!) and have actually "b^b" for "b^b^x"
--Gotti 22:01, 13 February 2009 (UTC) —Preceding unsigned comment added by Druseltal2005 (talk • contribs) --Gotti 22:01, 13 February 2009 (UTC)
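A small numerical check of the integer-height relation above, with t = 2 and b = sqrt(2) (an illustrative sketch; the function names are not from the discussion):

<pre>
t = 2.0
b = t ** (1.0 / t)                  # b = sqrt(2) = t^(1/t)

def T_iter(x, h):                   # T°h(x), base b: x -> b^x
    for _ in range(h):
        x = b ** x
    return x

def U_iter(x, h):                   # U°h(x), base t: x -> t^x - 1
    for _ in range(h):
        x = t ** x - 1
    return x

x = 1.0
for h in range(1, 6):
    lhs = T_iter(x, h)
    rhs = (U_iter(x / t - 1, h) + 1) * t   # U°h(x')" with x' = x/t - 1
    print(h, lhs, rhs)              # the two columns should agree
</pre>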
Seems to work for b < e1/e, where there are real fixed points available for T, and hence U — although the fact that t=2 and t=4 seem to produce different "natural" values for fractional iterates suggests that this problem is even more difficult than it appears.... — Arthur Rubin (talk) 22:33, 14 February 2009 (UTC)
Yes, I agree. My intention was: it was also said that there are many possible functions for the same (fractional) iterate: just add a periodic function with half-cycle 1 and an appropriately small amplitude. This idea is used to question the uniqueness of a solution. Now: if fixpoint-shifting using 2 and fixpoint-shifting using 4 give different functions - why not try and combine both aspects (add such a periodic function) such that the resulting functions are equal for both fixpoints? (Though I have to admit that I don't even have an example...) --Gotti 07:36, 15 February 2009 (UTC) —Preceding unsigned comment added by Druseltal2005 (talk • contribs)
Decimal digits of power towers
I would like some feedback on whether adequate sources can be found to justify having a section of the article about the following interesting fact:
Given any positive integers b and d, all towers b ↑↑ n with integer n > d + 1 have the same d rightmost decimal digits. The following is an extremely simple algorithm that produces these digits, assuming b is not divisible by ten (if b is divisible by ten, then the d digits are all 0):
Let x0 = b, and compute xi = bxi - 1 mod 10d (i = 1, 2, 3, ...), stopping when xi = xi - 1 mod 10d.
When the algorithm stops, xi will be the required d-digit string (in base-10, omitting any leading 0s), and the value of i will be the height of the shortest "base b" tower that has these d rightmost digits.
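For comparison, a small Python sketch that evaluates such stable digits with the standard totient-chain trick for power towers modulo 10^d (this is not the algorithm quoted above, just an independent way to reproduce the digits; the function name is illustrative):

<pre>
from sympy import totient

def stable_tower_mod(b, m):
    """Stable value of b^^n mod m for all sufficiently large n."""
    if m == 1:
        return 0
    t = int(totient(m))
    # for a huge exponent E, b^E = b^((E mod t) + t)  (mod m)
    return pow(b, stable_tower_mod(b, t) + t, m)

d = 10
print(str(stable_tower_mod(2, 10**d)).zfill(d))   # compare with the b = 2 row below
</pre>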
E.g., here are the ten rightmost digits of each sufficiently tall tower with b = 1,2,...,10, respectively — a tower is sufficiently tall if its height is equal to or greater than the indicated height h:
b b^^x, x ≥ h h
-- ------------- --
1 0000000001 1 (with leading 0s inserted)
2 ...3432948736 12
5 ...8408203125 4
10 ...0000000000 2
The algorithm and the "interesting fact" in italics above are described in (or, rather, I deduced them from) the discussion on this page. I'm not sure whether that's an adequate source to justify putting these conclusions into the article, though.
--r.e.s. (talk) 08:02, 13 September 2008 (UTC)
That's certainly not an adequate source. However, if there were a published article, it could be included, I would think. — Arthur Rubin (talk) 21:44, 9 November 2008 (UTC)
I noted the same phenomenon in a sci.math.research posting [http://groups.google.com/group/sci.math.research/browse_thread/thread/7823e4a156fa08ac Recurring digits in tetration and the Ackermann function], but soon afterward discovered that this phenomenon is tied to the base one is working in. The digits reoccur in base 10 and base 6 for example but don't reoccur in base 11. Daniel Geisler (talk) 01:13, 10 February 2009 (UTC)
Can <math>{}^{C}X</math>, <math>{}^{X}C</math>, or <math>{}^{X}X</math> be differentiated (or anti-differentiated) and if so how? srn347 —Preceding unsigned comment added by 68.7.25.121 (talk • contribs) 00:17, November 6, 2008
I don't see it. All I can see that you can get is by formal differentiation of the functional equation. — Arthur Rubin (talk) 21:40, 9 November 2008 (UTC)
For C=e, the evaluation of <math>F(X)={}^{X}C</math> and its derivatives is described in the reference cited in the article, http://www.ams.org/mcom/0000-000-00/S0025-5718-09-02188-7/home.html It can be generalized for C>1. dima (talk) 15:37, 15 February 2009 (UTC)
Naming?
I saw this:
"The term super-exponentiation is the most proper candidate for a name, however, Bromer published his paper Superexponentiation in 1987, and Goodstein published his paper Transfinite Ordinals in Recursive Number Theory (which coined the term tetration) in 1947, which predates Bromer. So although this is not a misnomer, the shorter and older term has gained more use."
But how is that the most proper candidate? "Super-exponential" could just refer to anything that grows faster than exponential. So it is not specific. It could be a double-exponential. It could be pentation. It could be the tetrasquare (x^x). It could even be the generalized factorial function GAMMA(x). "Tetration", and "tetrational", however, refer only to one thing. mike4ty4 (talk) 09:15, 25 January 2009 (UTC)
2^^4
It does equal 256, if not, tetration isn't hyper 4 at all. Let's look at addition (hyper 1). 9+4=13, 9+3=12 and 9+1=10. 7+5=12, 7+2=9 and 7+3=10. 13-12=1 and 12-9=3, so (x+y)-(x+z)=y-z.
Now let's look at multiplication (hyper 2). 8×6=48 right, and 8×4=32. Also, 8×2=16. I'll make up some more; say 5×7=35, 5×6=30 and 5×1=5. 48-32=16, so 8×6-8×4=8×2. 35-30=5, so 5×7-5×6=5×1. From this it's clear that (x×y)-(x×z)=x×(y-z).
Now let's look at exponentiation (hyper 3). 4^5=1024, 4^2=16 and 4^3=64. 3^6=729, 3^5=243 and 3^1=3. 1024÷16=64, so 4^5÷4^2=4^3. 729÷243=3 so 3^6÷3^5=3^1. (x^y) ÷ (x^z)=x^(y-z).
One would expect then that with tetration (hyper 4) (x^^z)√(x^^y)=x^^(y-z), and if (x^^z)√(x^^y)≠x^^(y-z) something's wrong. If 256=2^^4 (and 4=2^^2) then the 4th root of 256 should be 4, which it is, but according to this article 256 doesn't equal 2^^4 at all, which to me makes no sense whatsoever. Wouldn't the logical thing to do be to do what every calculator in the world does already, and do tetration from left to right? Robo37 (talk) 10:09, 13 June 2009 (UTC)
No, you wouldn't, as exponentiation is not associative, so one has to decide whether 2^^4 is 2^(2^(2^2))) (normal convention) or (((2^2)^2)^2), which would lead to a^^b being a^(a^(b-1)), not normally considered sufficiently "hyper". — Arthur Rubin (talk) 15:58, 13 June 2009 (UTC)
Can you explain in more detail? Why would it lead to a^^b being a^(a^(b-1))? You do everything else from left to right even when it does make a difference, like with division and subtraction for example. Doing it from left to right would also make it easier, as going from x^^y to x^^(y+1) would only involve putting x^^y to the power of x, just like going from x^y to x^(y+1) only involves multiplying x^y by x and going from x×y to x×(y+1) only involves adding x to x×y.Robo37 (talk) 16:27, 13 June 2009 (UTC)
I'm on my way out, and may not get back to this for a day or so, but you can show by induction on b that (...((a^a)^a)...)^a is a^(a^(b-1)), using (x^y)^z = x^(y*z). See the section #Iterated powers in the article.
For other situations where it makes a difference, see, for example, a paper by Donner(?) and Tarski entitled Extended Operations and Relations on the Ordinal Numbers (or something like that), c. 1962. There, addition is O0, multiplication is O1, exponentiation (except for some initial values) is O2, and something like tetration is O4. — Arthur Rubin (talk) 16:42, 13 June 2009 (UTC)
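A tiny sketch of the two conventions being discussed (illustrative only; the function names are not from the article):

<pre>
def tet_right(a, b):              # a^^b with right association (the normal convention)
    r = 1
    for _ in range(b):
        r = a ** r
    return r

def tet_left(a, b):               # (((a^a)^a)...)^a, left association
    r = a
    for _ in range(b - 1):
        r = r ** a
    return r

print(tet_right(2, 4))            # 65536
print(tet_left(2, 4))             # 256, equal to 2**(2**3) = a**(a**(b-1))
print(2 ** 2 ** 3)                # 256
</pre>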
|
CommonCrawl
|
January 2016, 21(1): 103-119. doi: 10.3934/dcdsb.2016.21.103
Global behavior of delay differential equations model of HIV infection with apoptosis
Songbai Guo 1, and Wanbiao Ma 1,
Department of Applied Mathematics, School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083, China
Received: February 2015. Revised: May 2015. Published: November 2015.
In this paper, a class of delay differential equation models of HIV infection dynamics with nonlinear transmission and apoptosis induced by infected cells is proposed, and then the global properties of the model are considered. It is shown that the infection-free equilibrium of the model is globally asymptotically stable if the basic reproduction number $R_{0}<1$, and globally attractive if $R_{0}=1$. The positive equilibrium of the model is locally asymptotically stable if $R_{0}>1$. Furthermore, the model is shown to be permanent, and explicit expressions for the eventual lower bounds of positive solutions of the model are given.
Keywords: global asymptotic stability, permanence, delay differential equations, Lyapunov functional, HIV infection.
Mathematics Subject Classification: Primary: 37N25, 34A34; Secondary: 93D20, 34D2.
Citation: Songbai Guo, Wanbiao Ma. Global behavior of delay differential equations model of HIV infection with apoptosis. Discrete & Continuous Dynamical Systems - B, 2016, 21 (1) : 103-119. doi: 10.3934/dcdsb.2016.21.103
|
CommonCrawl
|
Difference between revisions of "Being Bayesian about Categorical Probability"
== Introduction ==
Since the outputs of neural networks are not probabilities, Softmax (Bridle, 1990) is a staple for neural networks performing classification -- it exponentiates each logit and then normalizes by the sum, giving a distribution over the target classes. A logit is a raw output/prediction of the model, which is hard for humans to interpret, so we transform/normalize these raw values into categories or meaningful numbers for interpretability. However, networks with softmax outputs give no information about uncertainty (Blundell et al., 2015; Gal & Ghahramani, 2016), and the resulting distribution over classes is poorly calibrated (Guo et al., 2017), often giving overconfident predictions even when the classification is wrong. In addition, softmax also raises concerns about overfitting NNs due to its confident predictive behavior (Xie et al., 2016; Pereyra et al., 2017). To achieve better generalization performance, more effective regularization techniques might be required.
Bayesian Neural Networks (BNNs; MacKay, 1992) can alleviate these issues, but the resulting posteriors over the parameters are often intractable. Approximations such as variational inference (Graves, 2011; Blundell et al., 2015) and Monte Carlo Dropout (Gal & Ghahramani, 2016) can still be expensive or give poor estimates for the posteriors. This work proposes a Bayesian treatment of the output logits of the neural network, treating the targets as a categorical random variable instead of a fixed label. This technique gives a computationally cheap way of being Bayesian to get well-calibrated uncertainty estimates on neural network classifications.
Using Bayesian Neural Networks is the dominant way of applying Bayesian techniques to neural networks. A Bayesian neural network is a stochastic artificial neural network that is trained using Bayesian inference. Bayesian neural networks usually have better calibration than classical neural networks, which indicates that their predicted uncertainty is more consistent with the observed errors. Bayesian networks are data-efficient and can learn from small datasets without overfitting (Jospin, Buntine, Boussaid, Laga, & Bennamoun, 2020). Many techniques have been developed to make posterior approximation more accurate and scalable; despite these, BNNs do not scale to state-of-the-art techniques or large data sets. There are more scalable techniques that explicitly avoid modeling the full weight posterior, such as Monte Carlo Dropout (Gal & Ghahramani, 2016) or tracking the mean/covariance of the posterior during training (Mandt et al., 2017; Zhang et al., 2018; Maddox et al., 2019; Osawa et al., 2019). Non-Bayesian uncertainty estimation techniques include deep ensembles (Lakshminarayanan et al., 2017) and temperature scaling (Guo et al., 2017; Neumann et al., 2018).
== Preliminaries ==
=== Classification With a Neural Network ===
A typical loss function used in classification is cross-entropy, which is defined by
$$ l_{\rm CE}(\tilde{y},\phi(f^{W}(x)))=-\sum_k \tilde{y}_k \log \phi_k(f^{W}(x)) $$
Here <math>\tilde{y}_k</math> and <math>\phi_k</math> refer to the actual and predicted categorical distribution for each class. It's well known that optimizing <math>f^W</math> for <math>l_{CE}</math> is equivalent to optimizing for <math>l_{KL}</math>, the <math>KL</math> divergence between the true distribution and the distribution modeled by the NN, that is:
$$ l_{KL}(W) = KL(\text{true distribution} \;||\; \text{distribution encoded by }NN(W)) $$
Let's introduce notations for the underlying (true) distributions of our problem. Let <math>(x_0,y_0) \sim (\mathcal X \times \mathcal Y)</math>:
$$ \text{Full Distribution} = F(x,y) = P(x_0 = x,y_0 = y) $$
$$ \text{Marginal Distribution} = F(x) = P(x_0 = x) $$
$$ \text{Point Class Distribution} = F_x(y) = P(y_0 = y \;|\; x_0 = x) $$
Then we have the factorization <math>F(x,y) = P(y|x)P(x) = F_x(y)F(x)</math>, so the KL objective splits into a weighted sum of per-input KL divergences:
$$ l_{KL}(W) = \sum_{x \in \mathcal X} F(x) \sum_{y \in \mathcal Y} F_x(y) \log\left( \frac{F_x(y)}{\phi_y(f^W(x))} \right) = \sum_{x \in \mathcal X} F(x)\, KL\left( F_x \;||\; \phi(f^W(x)) \right) $$
As usual, we don't have an analytic form for <math>l_{KL}</math> (if we did, this would imply we knew <math>F_x</math>, meaning we knew the distribution in the first place). Instead, estimate from <math>\mathcal D</math>:
$$ F(x) \approx \hat F(x) = \frac{||c^{\mathcal D}(x)||_1}{N} $$
$$ F_x(y) \approx \hat F_x(y) = \frac{c^{\mathcal D}(x)}{|| c^{\mathcal D}(x) ||_1}$$
$$ \to l_{KL}(W) = \sum_{x \in \mathcal D} \frac{||c^{\mathcal D}(x)||_1}{N} KL \left( \frac{c^{\mathcal D}(x)}{||c^{\mathcal D}(x)||_1} \;||\; \phi(f^W(x)) \right) $$
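As a rough illustration of this empirical objective, here is a small NumPy sketch (the array shapes and variable names are illustrative assumptions: <math>counts</math> holds the rows <math>c^{\mathcal D}(x)</math> for the distinct inputs and <math>probs</math> the matching model outputs <math>\phi(f^W(x))</math>):

<pre>
import numpy as np

def empirical_l_kl(counts, probs):
    """counts: (n_unique_x, K) array of c^D(x); probs: matching model outputs."""
    N = counts.sum()
    loss = 0.0
    for c, p in zip(counts, probs):
        n = c.sum()
        f_hat = c / n                        # \hat F_x
        mask = f_hat > 0
        loss += (n / N) * np.sum(f_hat[mask] * np.log(f_hat[mask] / p[mask]))
    return loss

counts = np.array([[3., 1.], [0., 2.]])      # toy per-input class counts
probs  = np.array([[0.7, 0.3], [0.2, 0.8]])  # toy model outputs
print(empirical_l_kl(counts, probs))
</pre>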
The approximations <math>\hat F, \hat F_X</math> are often not very good though: consider a typical classification task such as MNIST; we would never expect two handwritten digits to produce the exact same image. Hence <math>c^{\mathcal D}(x)</math> is (almost) always going to have a single index 1 and the rest 0. This has implications for our approximations:
$$ \hat F(x) \text{ is uniform for all } x \in \mathcal D $$
$$ \hat F_x(y) \text{ is degenerate for all } x \in \mathcal D $$
This clearly has implications for overfitting: to minimize the KL term in <math>l_{KL}(W)</math> we want <math>\phi(f^W(x))</math> to be very close to <math>\hat F_x(y)</math> at each point - this means that the loss function is in fact encouraging the neural network to output near degenerate distributions!
'''Label Smoothing'''
One form of regularization to help this problem is called label smoothing. Instead of using the degenerate <math>\hat F_x(y)</math> as a target function, let's "smooth" it (by adding a scaled uniform distribution to it) so it's no longer degenerate:
$$ F'_x(y) = (1-\lambda)\hat F_x(y) + \frac \lambda K \vec 1 $$
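A minimal NumPy sketch of this smoothed target (the function name and the toy labels are illustrative, not from the paper):

<pre>
import numpy as np

def smooth_labels(y, K, lam=0.1):
    one_hot = np.eye(K)[y]                   # degenerate \hat F_x(y)
    return (1 - lam) * one_hot + lam / K     # F'_x(y)

print(smooth_labels(np.array([2, 0]), K=4, lam=0.1))
</pre>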
'''BNNs'''
BNNs balance the complexity of the model and the distance to the target distribution without choosing a single best configuration (one-hot encoding). Specifically, for BNNs with the Gaussian weight prior <math>p_W(W) = \mathcal N(0,T^{-1} I)</math>, the score of a configuration <math>W</math> is measured by the posterior density <math>p_W(W|D) \propto p(D|W)p_W(W)</math>, where <math>\log p_W(W) \propto -T||W||^2_2</math>.
Here <math>||W||^2_2</math> can be a poor proxy for penalizing model complexity due to its linear nature.
== Method ==
=== Constructing Target Distribution ===
Recall that <math>F_x(y)</math> is a k-categorical probability distribution - its PMF can be fully characterized by k numbers that sum to 1. Hence we can encode any such <math>F_x</math> as a point in <math>\Delta^{k-1}</math>. We'll do exactly that - let's call this vector <math>z</math>:
$$ z \in \Delta^{k-1} $$
$$ \text{prior} = p_{z|x}(z) $$
$$ \propto -l_{CE}(\phi(f^W(x)),z) + \frac{K}{||\alpha^W(x)||}KL(\mathcal U_k \;||\; z) $$
It can actually be shown that the mean of <math>Z_x^W</math> is identical to <math>\phi(f^W(x))</math> - in other words, if we output the mean of the encoded distribution of our neural network under the BM framework, it is theoretically identical to a traditional neural network.
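A quick sanity check of this identity (a sketch assuming, as the discussion of the prior below suggests, that the concentration parameters are the exponentiated logits, <math>\alpha^W(x) = \exp(f^W(x))</math>):

<pre>
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([1.2, -0.3, 0.5])             # a toy f^W(x)
alpha = np.exp(logits)                          # assumed: alpha = exp(logits)
samples = rng.dirichlet(alpha, size=200000)     # draws of Z_x^W
print(samples.mean(axis=0))                     # empirical mean of Z_x^W
print(np.exp(logits) / np.exp(logits).sum())    # softmax(f^W(x)); should match
</pre>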
In the limit of <math> q^W_{z|x}(z) \rightarrow p_{z|x}(z)</math>, the mean of the target posterior becomes a virtual label that each individual z ought to match. Hence, the penalty for an ambiguous configuration is determined by the number of observations. Therefore, the distribution matching in BM can be thought of as '''learning to score a categorical probability''' based on closeness to the posterior mean, in which exploitation of this closeness information is automatically controlled by the data.
=== Distribution Matching ===
=== On Prior Distributions ===
We must choose our concentration parameter, <math>\beta</math>, for our dirichlet prior. We see our prior essentially disappears as <math>\beta_0 \to 0</math> and becomes stronger as <math>\beta_0 \to \infty</math>. Thus, we want a small <math>\beta_0</math> so the posterior isn't dominated by the prior. But, the authors claim that a small <math>\beta_0</math> makes <math>\alpha_0^{\mathbf W}(\mathbf x)</math> small, which causes <math>\psi (\alpha_0^{\mathbf W}(\mathbf x))</math> to be large, which is problematic for gradient based optimization. In practice, many neural network techniques aim to make <math>\mathbb E [f^{\mathbf W} (\mathbf x)] \approx \mathbf 0</math> and thus <math>\mathbb E [\alpha^{\mathbf W} (\mathbf x)] \approx \mathbf 1</math>, which means making <math>\alpha_0^{\mathbf W}(\mathbf x)</math> small can be counterproductive.
So, the authors set <math>\beta = \mathbf 1</math> and introduce a new hyperparameter <math>\lambda</math> which is multiplied with the KL term in the ELBO:
-\left(1-\left(\tilde{\alpha}_{0}^{W}(\mathbf{x})-\lambda K\right)\right)$$
As we can see, the first expression is affected by the magnitude of <math>\alpha^{\boldsymbol{W}}(\boldsymbol{x})</math>, whereas the second expression is not due to the <math>\frac{\psi^{\prime}\left(\tilde{\alpha}_{k}^{\mathbf W}(\mathbf{x})\right)}{\psi^{\prime}\left(\tilde{\alpha}_{0}^{\mathbf W}(\mathbf{x})\right)}</math> ratio.
== Experiments ==
Throughout the experiments in this paper, the authors employ various models based on residual connections (He et al., 2016 [1]) which are the models used for benchmarking in practice. We will first demonstrate improvements provided by BM, then we will show versatility in other applications. For fairness of comparisons, all configurations in the reference implementation will be fixed. The only additions in the experiments are initial learning rate warm-up and gradient clipping which are extremely helpful for stable training of BM.
=== Generalization performance ===
===== ID uncertainty =====
For ID (in-distribution) samples, calibration performance is measured, which is a measure of how well the model's confidence matches its actual accuracy. This measure can be visualized using reliability plots and quantified using a metric called expected calibration error (ECE). ECE is calculated by grouping predictions into M groups based on their confidence score and then finding the absolute difference between the average accuracy and average confidence for each group. We can define the ECE of <math>f^W </math> on <math>D </math> with <math>M</math> groups as
<math>ECE_M(f^W, D) = \sum^M_{i=1} \frac{|G_i|}{|D|}|acc(G_i) - conf(G_i)|</math>
Where <math>G_i</math> is the set of samples in the i-th group, defined as <math>G_i = \{j: (i-1)/M < \max_k\phi_k(f^W(x^{(j)})) \leq i/M\}</math>, <math>acc(G_i)</math> is the average accuracy in the i-th group and <math>conf(G_i)</math> is the average confidence in the i-th group.
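A minimal NumPy sketch of this metric (illustrative only; the names are assumptions: <math>probs</math> stands for the predictive probabilities and <math>labels</math> for the true classes):

<pre>
import numpy as np

def expected_calibration_error(probs, labels, M=15):
    conf = probs.max(axis=1)                 # max predicted probability
    pred = probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    ece = 0.0
    for i in range(1, M + 1):
        lo, hi = (i - 1) / M, i / M
        mask = (conf > lo) & (conf <= hi)    # samples in the i-th bin
        if mask.any():
            gap = abs(correct[mask].mean() - conf[mask].mean())
            ece += mask.mean() * gap         # |G_i|/|D| * |acc - conf|
    return ece
</pre>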
The figure below is a reliability plot of ResNet-50 on CIFAR-10 and CIFAR-100 with 15 groups. It shows that BM has a significantly better calibration performance than softmax since the confidence matches the accuracy more closely (this is also reflected in the lower ECE).
[[File:being_bayesian_about_categorical_probability_semi_supervised_table.png]]
== Conclusion and Critiques ==
* Bayesian principles can be used to construct the target distribution by using the categorical probability as a random variable rather than a training label. This can be applied to neural network models by replacing only the softmax and cross-entropy loss while improving the generalization performance, uncertainty estimation and well-calibrated behavior.
* In the future, the authors would like to allow for more expressive distributions in the belief matching framework, such as logistic normal distributions to capture strong semantic similarities among class labels. Furthermore, using input dependent priors would allow for interesting properties that would aid imbalanced datasets and multi-domain learning.
* Overall I think this summary is very good. The Method (Algorithm) section is described clearly, and the Results section is detailed, with many diagrams illustrating the main points. I just have one technical suggestion: the difference in performance between softmax and BM differs by model. For example, for the RESNEXT-50 model the difference in top-1 is 0.2, whereas for the RESNEXT-100 model the difference in top-1 is 0.5, which is significantly higher. It's true that the BM method generally outperforms softmax, but examining the relation between the choice of model and the magnitude of the performance increase could strengthen the paper even further.
* The summary is good and the topic is interesting. Bayesian inference is a well-known probabilistic framework, but I did not know that it could be used within a neural network like this. The comparison between softmax and the Bayesian treatment was interesting, and more details would be great.
* It would be better if there were a future work section to discuss the current shortcomings and potential improvements. One issue is that the theoretical development is complex. In addition, optimizing a function is relatively hard if its structure is complex. Is it possible to get a good approximation without an overly complex calculation?
* Both experiments dealt with image data, however, softmax is used within classification neural networks that range from image to textual data. It would be interesting to see the performance of BM on textual data for text classification problems in addition to image classification.
* It would be better to briefly explain the Bayesian treatment in the introduction (i.e., considering the categorical probability as a random variable and constructing the target distribution by means of Bayesian inference), and to analyze the importance of considering the categorical probability as a random variable (for example, explain that it can be adapted to existing deep learning building blocks without huge modifications).
* Interesting topic that ties in closely with our lectures. Since this is a summary of the paper, it would be better to trim the explanation of the neural network a little, for example by removing the substitution lines.
* I really liked the presentation and actually really appreciate the steps of the detailed derivation that were presented in this summary. In the introduction the researchers mentioned that BM is a computationally cheap method, however, I was wondering how much faster it is computationally as opposed to the other models to train. Additionally, the training data that was used to benchmark the classification performance seemed to all be image classifications (CIFAR-10, CIFAR-100, ResNet-50, ResNet-101), thus it would have been nice to see classification be applied in other multi-class contexts as well to see how well this new method performs there.
* It would be more clear if the metric used in evaluating the models is briefly explained!
* The authors' work applying a Bayesian treatment to the output logits of the neural network to reach a more effective regularization technique is impressive. However, the paper lacks a comparison of the computational efficiency of the approaches. When dealing with large data sets, it is difficult to be convincing if there is no advantage in computational efficiency.
* The paper was well written with few flaws and the data was meticulously presented, yet with ensemble networks it is important to note the increase in computational complexity that comes with training multiple models. It would also be nice to see their loss and entropy compared with a standard training process (same epochs, train/test split, etc.).
* The original paper contains a Figure 2 showing the penultimate layer's activations of examples belonging to one of three classes (beaver, dolphin, and otter; indexed by 0, 1, 2 in CIFAR-100). The figure vividly shows how successful the result is, so it would be better if it were included in the summary.
== Citations ==
[14] Pereyra, G., Tucker, G., Chorowski, J., Kaiser, Ł., and Hinton, G. Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548, 2017.
[15] Jospin, L. V., Buntine, W., Boussaid, F., Laga, H., & Bennamoun, M. (2020). Hands-on Bayesian Neural Networks - a Tutorial for Deep Learning Users. arXiv:2007.06823.
Evan Li, Jason Pu, Karam Abuaisha, Nicholas Vadivelu
Let's formalize our classification problem and define some notations for the rest of this summary:
$$ \mathcal D = \{(x_i,y_i)\} \in (\mathcal X \times \mathcal Y)^N $$
General classification model
$$ f^W: \mathcal X \to \mathbb R^K $$
Softmax function:
$$ \phi: \mathbb R^K \to [0,1]^K \;\;|\;\; \phi_k(f^W(x)) = \frac{\exp(f_k^W(x))}{\sum_{j=1}^K \exp(f_j^W(x))} $$
Softmax activated NN:
$$ \phi \;\circ\; f^W: \mathcal X \to \Delta^{K-1} $$
NN as a true classifier:
$$ x \mapsto \arg\max_k \; \phi_k\!\left( f^W(x) \right) \;:\; \mathcal X \to \mathcal Y $$
We'll also define the count function - a [math]K[/math]-vector valued function that outputs the occurrences of each class coincident with [math]x[/math]: $$ c^{\mathcal D}(x) = \sum_{(x',y') \in \mathcal D} \mathbf y' \, I(x' = x), $$ where [math]\mathbf y'[/math] denotes the one-hot encoding of the label [math]y'[/math].
Classification With a Neural Network
The network is trained with the cross-entropy loss $$ l_{CE}(W) = -\sum_{k=1}^K y_k \log \phi_k(f^W(x)), $$ where [math]y_k[/math] and [math]\phi_k[/math] refer to the actual and predicted probability of class [math]k[/math]. It is well known that optimizing [math]f^W[/math] for [math]l_{CE}[/math] is equivalent to optimizing for [math]l_{KL}[/math], the [math]KL[/math] divergence between the true distribution and the distribution modeled by the NN, that is: $$ l_{KL}(W) = KL(\text{true distribution} \;||\; \text{distribution encoded by }NN(W)) $$

Let's introduce notation for the underlying (true) distributions of our problem. Let [math](x_0,y_0)[/math] be distributed according to the true data distribution on [math]\mathcal X \times \mathcal Y[/math]: $$ \text{Full distribution: } F(x,y) = P(x_0 = x,\, y_0 = y) $$ $$ \text{Marginal distribution: } F(x) = P(x_0 = x) $$ $$ \text{Point class distribution: } F_x(y) = P(y_0 = y \mid x_0 = x) $$ Then we have the following factorization: $$ F(x,y) = P(y_0 = y \mid x_0 = x)\, P(x_0 = x) = F_x(y)\, F(x) $$

Substituting this into the definition of the KL divergence: $$ l_{KL}(W) = \sum_{(x,y) \in \mathcal X \times \mathcal Y} F(x,y) \log\left(\frac{F_x(y)}{\phi_y(f^W(x))}\right) = \sum_{x \in \mathcal X} F(x) \sum_{y \in \mathcal Y} F_x(y) \log\left( \frac{F_x(y)}{\phi_y(f^W(x))} \right) = \sum_{x \in \mathcal X} F(x)\, KL(F_x \;||\; \phi( f^W(x) )) $$

As usual, we don't have an analytic form for [math]l_{KL}[/math] (if we did, this would imply we knew [math]F_x[/math], meaning we knew the true distribution in the first place). Instead, we estimate it from [math]\mathcal D[/math]: $$ F(x) \approx \hat F(x) = \frac{||c^{\mathcal D}(x)||_1}{N}, \qquad F_x \approx \hat F_x = \frac{c^{\mathcal D}(x)}{|| c^{\mathcal D}(x) ||_1} $$ $$ \to l_{KL}(W) \approx \sum_{x \in \mathcal D} \frac{||c^{\mathcal D}(x)||_1}{N}\, KL \left( \frac{c^{\mathcal D}(x)}{||c^{\mathcal D}(x)||_1} \;||\; \phi(f^W(x)) \right) $$

The approximations [math]\hat F, \hat F_x[/math] are often not very good though: consider a typical classification task such as MNIST; we would never expect two handwritten digits to produce the exact same image. Hence [math]c^{\mathcal D}(x)[/math] is (almost) always going to have a single entry equal to 1 and the rest 0. This has implications for our approximations: $$ \hat F(x) \text{ is uniform for all } x \in \mathcal D, \qquad \hat F_x \text{ is degenerate (one-hot) for all } x \in \mathcal D $$ This clearly has implications for overfitting: to minimize the KL term in [math]l_{KL}(W)[/math] we want [math]\phi(f^W(x))[/math] to be very close to [math]\hat F_x[/math] at each training point - this means that the loss function is in fact encouraging the neural network to output near-degenerate distributions!
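To make this concrete, the following minimal numpy sketch (the toy dataset, class count, and logits are made up for illustration) builds the count function [math]c^{\mathcal D}(x)[/math] and the empirical targets [math]\hat F_x[/math]; with almost always unique inputs, every target collapses to a one-hot vector, and the cross-entropy then rewards near-degenerate softmax outputs.

```python
import numpy as np
from collections import defaultdict

K = 3  # number of classes (illustrative)
# toy dataset of (input id, label) pairs; real inputs such as images are almost never repeated
D = [("img_0", 2), ("img_1", 0), ("img_2", 1), ("img_2", 1)]

# count function c^D(x): per-input vector of class occurrences
counts = defaultdict(lambda: np.zeros(K))
for x, y in D:
    counts[x][y] += 1

for x, c in counts.items():
    F_hat_x = c / c.sum()        # empirical point-class distribution
    print(x, c, F_hat_x)         # one-hot whenever x occurs with a single label

# cross-entropy against a one-hot target pushes the softmax toward a degenerate output
logits = np.array([2.0, -1.0, 0.5])          # made-up network output f^W(x)
phi = np.exp(logits) / np.exp(logits).sum()  # softmax
y_onehot = np.eye(K)[0]
print("cross-entropy:", -(y_onehot * np.log(phi)).sum())
```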
Label Smoothing
One form of regularization that addresses this problem is called label smoothing. Instead of using the degenerate [math]\hat F_x[/math] as a target distribution, we "smooth" it (by mixing in a scaled uniform distribution) so it is no longer degenerate: $$ F'_x(y) = (1-\lambda)\hat F_x(y) + \frac \lambda K \vec 1 $$
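A minimal sketch of the smoothed target (the value of [math]\lambda[/math] is an arbitrary illustrative choice):

```python
import numpy as np

K, lam = 3, 0.1
F_hat_x = np.eye(K)[1]                    # degenerate empirical target for a unique input
F_smooth = (1 - lam) * F_hat_x + lam / K  # smoothed target, no longer degenerate
print(F_smooth)                           # [0.0333..., 0.9333..., 0.0333...]
```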
BNNs
BNNs balance the complexity of the model and the distance to the target distribution without choosing a single best configuration (one-hot encoding). Specifically, for BNNs with the Gaussian weight prior $$ p_W(W) = \mathcal N(0, \tau^{-1} I), $$ the score of a configuration [math]W[/math] is measured by the posterior density $$ p_W(W|\mathcal D) \propto p(\mathcal D|W)\, p_W(W), \qquad \log p_W(W) \propto -\tau ||W||^2_2 . $$ Here [math]||W||^2_2[/math] could be a poor proxy for penalizing the model complexity due to its simple weight-norm nature.
The main technical proposal of the paper is a Bayesian framework to estimate the (former) target distribution [math]F_x(y)[/math]. That is, we construct a posterior distribution of [math] F_x(y) [/math] and use that as our new target distribution. We call it the belief matching (BM) framework.
Constructing Target Distribution
Recall that [math]F_x(y)[/math] is a [math]K[/math]-categorical probability distribution - its PMF can be fully characterized by [math]K[/math] numbers that sum to 1. Hence we can encode any such [math]F_x[/math] as a point in [math]\Delta^{K-1}[/math]. We'll do exactly that - let's call this vector [math]z[/math]: $$ z \in \Delta^{K-1} $$ $$ \text{prior} = p_{z|x}(z) $$ $$ \text{conditional} = p_{y|z,x}(y) $$ $$ \text{posterior} = p_{z|x,y}(z) $$ Then if we perform inference: $$ p_{z|x,y}(z) \propto p_{z|x}(z)\, p_{y|z,x}(y) $$ The distribution chosen to model the prior is the Dirichlet distribution [math]\mathrm{Dir}_K(\beta)[/math]: $$ p_{z|x}(z) = \frac{\Gamma(||\beta||_1)}{\prod_{k=1}^K \Gamma(\beta_k)} \prod_{k=1}^K z_k^{\beta_k - 1} $$ Note that by definition of [math]z[/math]: [math] p_{y|x,z} = z_y [/math]. Since the Dirichlet is a conjugate prior to the categorical distribution, the posterior is again Dirichlet and its mean has a convenient form: $$ \mathbb E_{p_{z|x,y}}[z] = \frac{\beta + c^{\mathcal D}(x)}{||\beta + c^{\mathcal D}(x)||_1} \propto \beta + c^{\mathcal D}(x) $$ This is in fact a generalization of (uniform) label smoothing, which corresponds to the special case of a uniform concentration such as [math]\beta = \frac 1 K \vec{1} [/math].
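The conjugacy step can be checked numerically; a small sketch (the prior [math]\beta[/math] and the counts are made-up values) shows how the posterior mean turns a one-hot count vector into a smoothed, label-smoothing-like target:

```python
import numpy as np

K = 3
beta = np.ones(K)              # Dirichlet prior concentration
c = np.array([0.0, 1.0, 0.0])  # counts c^D(x); one-hot for an input seen once
post_mean = (beta + c) / (beta + c).sum()
print(post_mean)               # [0.25, 0.5, 0.25] -- smoothed, non-degenerate target
```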
Representing Approximate Distribution
Our new target distribution is [math]p_{z|x,y}(z)[/math] (as opposed to [math]F_x(y)[/math]). That is, we want to use the neural network output to encode a distribution with support in [math] \Delta^{K-1} [/math] - the NN can then be trained so this encoded distribution closely approximates [math]p_{z|x,y}[/math]. Let's denote the density of this encoded distribution by [math]q_{z|x}^W[/math]. This is how the BM framework defines it: $$ \alpha^W(x) := \exp(f^W(x)) $$ $$ q_{z|x}^W(z) = \frac{\Gamma(||\alpha^W(x)||_1)}{\prod_{k=1}^K \Gamma(\alpha_k^W(x))} \prod_{k=1}^K z_{k}^{\alpha_k^W(x) - 1} $$ $$ \to Z^W_x \sim \mathrm{Dir}(\alpha^W(x)) $$ Taking the logarithm of this density and regrouping terms: $$ q^W_{z|x}(z) \propto \exp \left( \sum_k \alpha_k^W(x) \log(z_k) - \sum_k \log(z_k) \right), $$ so that, up to additive constants and a positive scaling, $$ \log q^W_{z|x}(z) \propto -l_{CE}(\phi(f^W(x)),z) + \frac{K}{||\alpha^W(x)||_1}KL(\mathcal U_K \;||\; z) $$ It can actually be shown that the mean of [math]Z_x^W[/math] is identical to [math]\phi(f^W(x))[/math] - in other words, if we output the mean of the encoded distribution of our neural network under the BM framework, it is theoretically identical to a traditional softmax-activated neural network.
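The last claim follows because the mean of [math]\mathrm{Dir}(\alpha)[/math] is [math]\alpha / ||\alpha||_1[/math], which for [math]\alpha = \exp(f^W(x))[/math] is exactly the softmax; a small sketch with made-up logits checks this by sampling:

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([1.5, -0.3, 0.7])           # f^W(x), made up
alpha = np.exp(logits)                        # alpha^W(x)
samples = rng.dirichlet(alpha, size=200_000)  # draws of Z^W_x
softmax = np.exp(logits) / np.exp(logits).sum()
print(samples.mean(axis=0))                   # close to the softmax output
print(softmax)
```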
In the limit of [math] q^W_{z|x}(z) \rightarrow p_{z|x}(z)[/math], the mean of the target posterior becomes a virtual label that each individual [math]z[/math] ought to match. Hence, the penalty for an ambiguous configuration is determined by the number of observations. Therefore, the distribution matching in BM can be thought of as learning to score a categorical probability based on its closeness to the posterior mean, in which the exploitation of this closeness information is automatically controlled by the data.
Distribution Matching
We now need a way to fit our approximate distribution from our neural network [math]q_{\mathbf{z | x}}^{\mathbf{W}}[/math] to our target distribution [math]p_{\mathbf{z|x},y}[/math]. The authors achieve this by maximizing the evidence lower bound (ELBO):
$$l_{EB}(\mathbf y, \alpha^{\mathbf W}(\mathbf x)) = \mathbb E_{q_{\mathbf{z | x}}^{\mathbf{W}}} \left[\log p(\mathbf {y | x, z})\right] - KL (q_{\mathbf{z | x}}^{\mathbf W} \; || \; p_{\mathbf{z|x}}) $$
Each term can be computed analytically:
$$\mathbb E_{q_{\mathbf{z | x}}^{\mathbf{W}}} \left[\log p(\mathbf {y | x, z})\right] = \mathbb E_{q_{\mathbf{z | x}}^{\mathbf W }} \left[\log z_y \right] = \psi(\alpha_y^{\mathbf W} ( \mathbf x )) - \psi(\alpha_0^{\mathbf W} ( \mathbf x )) $$
Where [math]\psi(\cdot)[/math] represents the digamma function (logarithmic derivative of gamma function). Intuitively, we maximize the probability of the correct label. For the KL term:
$$KL (q_{\mathbf{z | x}}^{\mathbf W} \; || \; p_{\mathbf{z|x}}) = \log \frac{\Gamma(\alpha_0^{\mathbf W}(\mathbf x)) \prod_k \Gamma(\beta_k)}{\prod_k \Gamma(\alpha_k^{\mathbf W}(\mathbf x)) \, \Gamma (\beta_0)} + \sum_k \left(\alpha_k^{\mathbf W}(\mathbf x)-\beta_k\right)\left(\psi(\alpha_k^{\mathbf W}(\mathbf x)) - \psi(\alpha_0^{\mathbf W}(\mathbf x))\right) $$
In the first term, for intuition, we can ignore [math]\alpha_0[/math] and [math]\beta_0[/math] since those just calibrate the distributions. Otherwise, we want the ratio of the products to be as close to 1 as possible to minimize the KL. In the second term, we want to minimize the difference between each individual [math]\alpha_k[/math] and [math]\beta_k[/math], scaled by the normalized output of the neural network.
This loss function can be used as a drop-in replacement for the standard softmax cross-entropy, as it has an analytic form and the same time complexity as typical softmax-cross entropy with respect to the number of classes ([math]O(K)[/math]).
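As a hedged sketch of what such a drop-in loss could look like, the following function implements the two analytic terms above with scipy's digamma and log-gamma (the prior [math]\beta = \mathbf 1[/math], the [math]\lambda[/math] weighting from the next subsection, and the example logits are illustrative; it follows the formulas in this summary rather than the authors' reference code):

```python
import numpy as np
from scipy.special import digamma, gammaln

def belief_matching_loss(logits, y, beta=1.0, lam=1.0):
    """Negative ELBO for one example: -(E_q[log p(y|x,z)] - lam * KL(q || Dir(beta)))."""
    alpha = np.exp(logits)                     # alpha^W(x)
    alpha0 = alpha.sum()
    beta_vec = np.full_like(alpha, beta)
    beta0 = beta_vec.sum()
    expected_loglik = digamma(alpha[y]) - digamma(alpha0)
    kl = (gammaln(alpha0) - gammaln(alpha).sum()
          - gammaln(beta0) + gammaln(beta_vec).sum()
          + ((alpha - beta_vec) * (digamma(alpha) - digamma(alpha0))).sum())
    return -(expected_loglik - lam * kl)

print(belief_matching_loss(np.array([2.0, -1.0, 0.5]), y=0))
```

In a deep learning framework the same expression would be written with that framework's digamma/lgamma operations so that it stays differentiable with respect to the logits.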
On Prior Distributions
We must choose our concentration parameter, [math]\beta[/math], for our Dirichlet prior. We see our prior essentially disappears as [math]\beta_0 \to 0[/math] and becomes stronger as [math]\beta_0 \to \infty[/math]. Thus, we want a small [math]\beta_0[/math] so the posterior isn't dominated by the prior. But the authors point out that a small [math]\beta_0[/math] makes [math]\alpha_0^{\mathbf W}(\mathbf x)[/math] small, which causes [math]\psi (\alpha_0^{\mathbf W}(\mathbf x))[/math] to be large in magnitude (the digamma function blows up near zero), which is problematic for gradient-based optimization. In practice, many neural network techniques aim to make [math]\mathbb E [f^{\mathbf W} (\mathbf x)] \approx \mathbf 0[/math] and thus [math]\mathbb E [\alpha^{\mathbf W} (\mathbf x)] \approx \mathbf 1[/math], which means making [math]\alpha_0^{\mathbf W}(\mathbf x)[/math] small can be counterproductive.
So, the authors set [math]\beta = \mathbf 1[/math] and introduce a new hyperparameter [math]\lambda[/math] which is multiplied with the KL term in the ELBO:
$$l^\lambda_{EB}(\mathbf y, \alpha^{\mathbf W}(\mathbf x)) = \mathbb E_{q_{\mathbf{z | x}}^{\mathbf{W}}} \left[\log p(\mathbf {y | x, z})\right] - \lambda KL (q_{\mathbf{z | x}}^{\mathbf W} \; || \; \mathcal P^D (\mathbf 1)) $$
This stabilizes the optimization, as we can tell from the gradients:
$$\frac{\partial l_{E B}\left(\mathbf{y}, \alpha^{\mathbf W}(\mathbf{x})\right)}{\partial \alpha_{k}^{\mathbf W}(\mathbf {x})}=\left(\tilde{\mathbf{y}}_{k}-\left(\alpha_{k}^{\mathbf W}(\mathbf{x})-\beta_{k}\right)\right) \psi^{\prime}\left(\alpha_{k}^{\mathbf{W}}(\boldsymbol{x})\right) -\left(1-\left(\alpha_{0}^{\boldsymbol{W}}(\boldsymbol{x})-\beta_{0}\right)\right) \psi^{\prime}\left(\alpha_{0}^{\boldsymbol{W}}(\boldsymbol{x})\right)$$
$$\frac{\partial l_{E B}^{\lambda}\left(\mathbf{y}, \alpha^{\mathbf{W}}(\mathbf{x})\right)}{\partial \alpha_{k}^{W}(\mathbf{x})}=\left(\tilde{\mathbf{y}}_{k}-\left(\tilde{\alpha}_{k}^{\mathbf W}(\mathbf{x})-\lambda\right)\right) \frac{\psi^{\prime}\left(\tilde{\alpha}_{k}^{\mathbf W}(\mathbf{x})\right)}{\psi^{\prime}\left(\tilde{\alpha}_{0}^{\mathbf W}(\mathbf{x})\right)} -\left(1-\left(\tilde{\alpha}_{0}^{W}(\mathbf{x})-\lambda K\right)\right)$$
As we can see, the first expression is affected by the magnitude of [math]\alpha^{\boldsymbol{W}}(\boldsymbol{x})[/math], whereas the second expression is not due to the [math]\frac{\psi^{\prime}\left(\tilde{\alpha}_{k}^{\mathbf W}(\mathbf{x})\right)}{\psi^{\prime}\left(\tilde{\alpha}_{0}^{\mathbf W}(\mathbf{x})\right)}[/math] ratio.
Generalization performance
The paper compares the generalization performance of BM with softmax and MC dropout on CIFAR-10 and CIFAR-100 benchmarks.
The next comparison was performed between BM and softmax on the ImageNet benchmark.
For both datasets and in all configurations, BM achieves the best generalization, outperforming softmax and MC dropout.
Regularization effect of prior
In theory, BM has two regularization effects: (1) the prior distribution, which smooths the target posterior, and (2) averaging over all possible categorical probabilities when computing the distribution matching loss. The authors perform an ablation study to examine the two effects separately - removing the KL term in the ELBO removes the effect of the prior distribution. For ResNet-50 on CIFAR-100 and CIFAR-10 the resulting test error rates were 24.69% and 5.68%, respectively.
This demonstrates that both regularization effects are significant since just having one of them improves the generalization performance compared to the softmax baseline, and having both improves the performance even more.
Impact of [math]\beta[/math]
The effect of β on generalization performance is studied by training ResNet-18 on CIFAR-10 by tuning the value of β on its own, as well as jointly with λ. It was found that robust generalization performance is obtained for β ∈ [[math]e^{−1}, e^4[/math]] when tuning β on its own; and β ∈ [[math]e^{−4}, e^{8}[/math]] when tuning β jointly with λ. The figure below shows a plot of the error rate with varying β.
Uncertainty Representation
One of the big advantages of BM is the ability to represent uncertainty about the prediction. The authors evaluate the uncertainty representation on in-distribution (ID) and out-of-distribution (OOD) samples.
ID uncertainty
For ID (in-distribution) samples, calibration performance is measured, which is a measure of how well the model's confidence matches its actual accuracy. This measure can be visualized using reliability plots and quantified using a metric called expected calibration error (ECE). ECE is calculated by grouping predictions into M groups based on their confidence score and then finding the absolute difference between the average accuracy and average confidence for each group. We can define the ECE of [math]f^W [/math] on [math]D [/math] with [math]M[/math] groups as
[math]ECE_M(f^W, D) = \sum^M_{i=1} \frac{|G_i|}{|D|}|acc(G_i) - conf(G_i)|[/math]
Where [math]G_i[/math] is the set of samples in the i-th group, defined as [math]G_i = \{j: (i-1)/M \lt \max_k\phi_k(f^W(x^{(j)})) \leq i/M\}[/math], [math]acc(G_i)[/math] is the average accuracy in the i-th group and [math]conf(G_i)[/math] is the average confidence in the i-th group.
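A minimal sketch of this computation (the predictions and labels below are random placeholders, and the number of bins [math]M[/math] is an arbitrary choice):

```python
import numpy as np

def expected_calibration_error(probs, labels, M=15):
    """probs: (N, K) predicted distributions, labels: (N,) true classes."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    ece, N = 0.0, len(labels)
    for i in range(1, M + 1):
        in_bin = (conf > (i - 1) / M) & (conf <= i / M)   # group G_i
        if in_bin.any():
            ece += in_bin.sum() / N * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece

rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 10, size=1000)
print(expected_calibration_error(probs, labels))
```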
OOD uncertainty
Here, the authors quantify uncertainty using predictive entropy - the larger the predictive entropy, the larger the uncertainty about a prediction.
The figure below is a density plot of the predictive entropy of ResNet-50 on CIFAR-10. It shows that BM provides significantly better uncertainty estimation compared to other methods since BM is the only method that has a clear peak of high predictive entropy for OOD samples which should have high uncertainty.
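A small sketch of the quantity being plotted (the two predictive distributions are made-up examples; under BM the predictive distribution is the Dirichlet mean [math]\alpha^W(x)/||\alpha^W(x)||_1[/math]):

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Entropy of each predictive distribution; rows of probs sum to one."""
    return -(probs * np.log(probs + eps)).sum(axis=1)

confident = np.array([[0.97, 0.01, 0.02]])  # typical in-distribution prediction
uncertain = np.array([[0.34, 0.33, 0.33]])  # what we hope to see on OOD samples
print(predictive_entropy(confident), predictive_entropy(uncertain))
```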
Transfer Learning
Belief matching applies the Bayesian principle outside the neural network, which means it can easily be applied to already trained models. Thus, belief matching can be employed in transfer learning scenarios. The authors downloaded the ImageNet pre-trained ResNet-50 weights and fine-tuned the weights of the last linear layer for 100 epochs using an Adam optimizer.
This table shows the test error rates from transfer learning on CIFAR-10, Food-101, and Cars datasets. Belief matching consistently performs better than softmax.
Belief matching was also tested on the predictive uncertainty for out-of-distribution samples, with CIFAR-10 as the in-distribution dataset. Looking at the figure below, it is observed that belief matching significantly improves the uncertainty representation of pre-trained models by fine-tuning only the last layer's weights. Note that belief matching confidently predicts examples in Cars, since CIFAR-10 contains the object category automobiles. In comparison, softmax produces confident predictions on all datasets. Thus, belief matching could also be used to enhance the uncertainty representation ability of pre-trained models without sacrificing their generalization performance.
Semi-Supervised Learning
Belief matching's ability to allow neural networks to represent rich information in their predictions can be exploited to aid consistency based loss function for semi-supervised learning. Consistency-based loss functions use unlabelled samples to determine where to promote the robustness of predictions based on stochastic perturbations. This can be done by perturbing the inputs (which is the VAT model) or the networks (which is the pi-model). Both methods minimize the divergence between two categorical probabilities under some perturbations, thus belief matching can be used by the following replacements in the loss functions. The hope is that belief matching can provide better prediction consistencies using its Dirichlet distributions.
The results of training on ResNet28-2 with consistency based loss functions on CIFAR-10 are shown in this table. Belief matching does have lower classification error rates compared to using a softmax.
Conclusion and Critiques
Bayesian principles can be used to construct the target distribution by using the categorical probability as a random variable rather than a training label. This can be applied to neural network models by replacing only the softmax and cross-entropy loss while improving the generalization performance, uncertainty estimation and well-calibrated behavior.
Overall I think this summary is very good. The Method (Algorithm) section is described clearly, and the Results section is detailed, with many diagrams illustrating the main points. I just have one technical suggestion: the difference in performance between softmax and BM differs by model. For example, for the ResNeXt-50 model the difference in top-1 is 0.2, whereas for the ResNeXt-100 model the difference in top-1 is 0.5, which is significantly higher. It's true that the BM method generally outperforms softmax, but examining the relation between the choice of model and the magnitude of the performance increase could strengthen the paper even further.
The summary is good and the topic is interesting. Bayesian inference is a well-known probabilistic approach, but I did not know that it could be used inside a neural network in this way. The comparison between softmax and the Bayesian approach was interesting, and more details would be great.
It would be better if there were a future work section discussing current shortcomings and potential improvements. One issue is that the theoretical derivation is complex, and optimizing a function is relatively hard if its structure is complex. Is it possible to obtain a good approximation without an overly complex calculation?
Both experiments dealt with image data; however, softmax is used in classification neural networks ranging from image to textual data. It would be interesting to see the performance of BM on textual data for text classification problems in addition to image classification.
It would be better to briefly explain the Bayesian treatment in the introduction (i.e., considering the categorical probability as a random variable and constructing the target distribution by means of Bayesian inference), and to analyze the importance of considering the categorical probability as a random variable (for example, explain that it can be adapted to existing deep learning building blocks without major modifications).
Interesting topic that is close to our lectures. Since this is a summary of the paper, it would be better to trim the explanation of the neural network a little, for example by removing the substitution lines.
I really liked the presentation and appreciate the detailed derivation steps presented in this summary. In the introduction the researchers mentioned that BM is a computationally cheap method; however, I was wondering how much faster it is to train computationally compared to the other models. Additionally, the benchmarks for classification performance all seemed to be image classification tasks (CIFAR-10, CIFAR-100, with ResNet-50 and ResNet-101 models), so it would have been nice to see classification applied in other multi-class contexts as well to see how well this new method performs there.
It would be clearer if the metric used in evaluating the models were briefly explained!
The authors' work of applying a Bayesian treatment to the output logits of the neural network to reach a more effective regularization technique is impressive. However, the authors do not compare the computational efficiency of the different methods. When dealing with large data sets, it is difficult to be convincing if there is no advantage in computational efficiency.
The paper was well written with few flaws and the data was meticulously presented, yet with ensemble networks it is important to note the increase in computational complexity that comes with training multiple models. It would also be nice to see the loss and entropy with respect to a standard training process (same epochs, train/test split, etc.).
The original paper contains a figure (Figure 2) of the penultimate layer's activations for examples belonging to one of three classes (beaver, dolphin, and otter; indexed by 0, 1, 2 in CIFAR-100). The figure vividly shows how successful the result is, so it would be good to include it in the summary.
Retrieved from "http://wiki.math.uwaterloo.ca/statwiki/index.php?title=Being_Bayesian_about_Categorical_Probability&oldid=49891"
|
CommonCrawl
|
February 2019, 39(2): 841-862. doi: 10.3934/dcds.2019035
Construction of solutions for some localized nonlinear Schrödinger equations
Olivier Bourget, Matias Courdurier, and Claudio Fernández
Departamento de Matemática, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, Santiago, Chile
* Corresponding author: [email protected]
Received January 2018; Revised July 2018; Published November 2018
Fund Project: O. B. was partially supported by FONDECYT grant number 1161732. M. C. was partially supported by FONDECYT grant number 1141189. C. F. was partially supported by FONDECYT grant number 1141120
For an $N$-body system of linear Schrödinger equation with space dependent interaction between particles, one would expect that the corresponding one body equation, arising as a mean field approximation, would have a space dependent nonlinearity. With such motivation we consider the following model of a nonlinear reduced Schrödinger equation with space dependent nonlinearity
$$ -\varphi''+V(x)h'(|\varphi|^2)\varphi = \lambda \varphi, $$
where $V(x) = -\chi_{[-1,1]} (x)$ is minus the characteristic function of the interval $[-1,1]$ and where $h'$ is any continuous strictly increasing function. In this article, for any negative value of $\lambda$ we present the construction and analysis of the infinitely many solutions of this equation, which are localized in space and hence correspond to bound-states of the associated time-dependent version of the equation.
Keywords: Nonlinear Schrödinger equation, bound states, solitons.
Mathematics Subject Classification: Primary: 35J60, 35P30; Secondary: 35C08.
Citation: Olivier Bourget, Matias Courdurier, Claudio Fernández. Construction of solutions for some localized nonlinear Schrödinger equations. Discrete & Continuous Dynamical Systems - A, 2019, 39 (2) : 841-862. doi: 10.3934/dcds.2019035
|
CommonCrawl
|
Autotuning based on frequency scaling toward energy efficiency of blockchain algorithms on graphics processing units
Matthias Stachowski, Alexander Fiebig & Thomas Rauber
The Journal of Supercomputing, volume 77, pages 263–291 (2021)
Energy-efficient computing is especially important in the field of high-performance computing (HPC) on supercomputers. Therefore, automated optimization of energy efficiency during the execution of a compute-intensive program is desirable. In this article, a framework for the automatic improvement of the energy efficiency on NVIDIA GPUs (graphics processing units) using dynamic voltage and frequency scaling is presented. As application, the mining of crypto-currencies is used, since in this area energy efficiency is of particular importance. The framework first determines the energy-optimal frequencies for each available currency on each GPU of a computer automatically. Then, the mining is started, and during a monitoring phase it is ensured that always the most profitable currency is mined on each GPU, using optimal frequencies. Tests with different GPUs show that the energy efficiency, depending on the GPU and the currency, can be increased by up to 84% compared to the usage of the default frequencies. This in turn almost doubles the mining profit.
Modern supercomputers provide massive computing power, but they also require large amounts of energy for computing. As an example, the current number one supercomputer in the June 2019 TOP 500 listing has 4608 computing nodes, each consisting of two IBM POWER9 processors and six NVIDIA Tesla V100 GPUs. Each GPU has a maximal power consumption of 300 W. The maximal power consumption of the entire system is 13 MW, 8.29 MW of which can be attributed to the power consumption of the GPUs [1, 2]. This indicates the importance of GPUs in high-performance computing and the tremendous energy consumption of supercomputers.
The energy consumption can be influenced by the operational frequency used by the hardware, and modern CPUs (central processing unit) and GPUs (graphics processing unit) provide DVFS (dynamic voltage and frequency scaling) to reduce the energy consumption by reducing the operational frequency. However, reducing the frequency usually increases the execution time. Therefore, it is important to select a frequency that reduces the energy consumption without increasing the execution time by a significant amount. The automatic optimization of energy efficiency during the compute-intensive execution of application programs is especially desirable for GPUs due to their large power consumption.
Blockchain mining algorithms play an increasingly important role due to the use of crypto-currencies such as Bitcoin, which are used to secure financial transactions, particularly on the Internet. When mining crypto-currencies, solving a mathematical puzzle earns a reward in coins, which can then be exchanged for real money. The probability of winning the reward is proportional to the computing power invested, whereas the costs correspond to the energy consumption of the hardware. Blockchain mining algorithms are often executed on GPUs, since GPUs are more effective than CPUs for these algorithms.
The goal of this article is to investigate how the energy consumption of blockchain mining algorithms can be optimized for NVIDIA GPUs. For the investigation, we propose and use a framework that automates the energy reduction using a systematic exploration. The framework consists of three phases. The first two phases determine optimal frequencies for each currency and each GPU using an offline selection process and an online optimization process. The third phase is a monitoring phase, which ensures that on each GPU, the most profitable currency is mined at each point of execution time. Tests with different GPUs show that the energy efficiency, depending on the GPU and the currency, can be increased by up to 84% compared to the usage of the default frequencies. This in turn almost doubles the mining profit.
The rest of the article is structured as follows. Section 2 gives a brief introduction to blockchain technology and the underlying algorithms. Section 3 explains some technical background, e.g., how the energy measurement and frequency scaling are performed. Section 4 introduces the energy-oriented autotuning framework. Section 5 presents an experimental evaluation with different currencies for different NVIDIA GPUs. Section 6 describes and discusses related work. Section 7 concludes the article.
In this chapter we briefly introduce how a blockchain works in general and present the three main crypto-currencies used in this article: Ethereum (ETH), Monero (XMR) and ZCash (ZEC).
Introduction to blockchains
A blockchain is a list of data blocks which are connected through cryptographic hashes. Each block consists of a header and a body. The body contains a set of completed transactions together with their hashes, which are stored in a hash tree (Merkle tree). The header contains the root of this tree, a time stamp and a random number (nonce). Additionally, the header includes the hash value of the previous block, which ensures that past transactions cannot be modified afterwards. Depending on the blockchain technology, the header can include more information, and the hashing function used can differ.
A blockchain is managed decentrally, meaning that there are no central servers where the blockchain is stored. Instead, a peer-to-peer network is used. Each participant of this network stores the complete blockchain. If a new block is generated from a set of transactions, every node verifies these transactions locally before sending this block to the other nodes in the network. To be prepared for failures or attacks, different blockchains offer different protection mechanisms. One of them is Proof of Work (PoW).
PoW participants are called miners. A miner can only add a new block of transactions after solving a nonce. In simplified form, the miner has to solve the following inequality:
$$\begin{aligned} hash(block\_data||X) < Y, \end{aligned}$$
where X is the nonce and Y is the given difficulty target. The target changes dynamically with the combined computing power of all miners so that the expected time to solve this equation stays roughly constant; it corresponds to the difficulty of mining a block. Solving Eq. (1) is called mining a block, and this can only be done with a brute-force approach. If a miner mines a block, he receives a block reward [3]. This computation provides a large potential for parallelism, and GPUs can be used efficiently, as described in Sect. 3.1. However, a lot of energy may be consumed. Since it is almost impossible to find a block alone, miners join a so-called mining pool. These pools combine the computing power of all miners who subscribe to them, and the miners who contribute the most computing power earn the largest share of the reward.
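A toy sketch of this brute-force search, using SHA-256 purely for illustration (real currencies use the algorithm-specific hash functions described below, and the difficulty target chosen here is arbitrary):

```python
import hashlib
from itertools import count

def mine(block_data: bytes, target: int) -> int:
    """Return the first nonce X with hash(block_data || X) < target (brute force)."""
    for nonce in count():
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "little")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

target = 1 << 240                      # roughly 16 leading zero bits of difficulty
print(mine(b"example block header", target))
```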
The crypto-currencies Ethereum (ETH), Monero (XMR) and ZCash (ZEC) are shortly described in the following subsection.
Overview of the algorithms of ETH, XMR and ZEC
This chapter explains the main concepts of the algorithms of the crypto-currencies ETH, XMR and ZEC used in this article. Ethereum uses the Ethash algorithm, Monero uses the CryptoNight algorithm, and ZCash uses the Equihash algorithm. These algorithms are briefly explained in the following. A more detailed description can be found in [4].
ETH: the Ethash algorithm for Ethereum
Ethereum mining is based on the Ethash algorithm, also known as the Dagger-Hashimoto algorithm. The simplified flow diagram in Fig. 1 shows the main structure of the algorithm [5].
Flow diagram of the Ethash algorithm used by Ethereum, with a DAG size of 2.37 GB as of late 2018 [5]
The Ethash algorithm uses a pseudo-generated data set [called DAG (directed acyclic graph)] based on all blocks generated so far by all participants and a nonce counting all confirmed transactions contributed by the specific participant. To generate a new block, the header from the previous block and the current nonce is hashed by a hashing algorithm similar to SHA-3 to get the initial 128-byte mix, also called Mix 0 (step 1 in Fig. 1). By using this Mix 0 the 128-byte page of the DAG can be determined (step 2). Then in step 3, Mix 1 is calculated based on this page and Mix 0. Steps 2 and 3 are repeated 64 times until Mix 64 is generated. In step 5, Mix 64 is compressed to a 32-byte mix, also called Mix Digest. This Mix Digest is then compared to the target threshold (also 32 bytes). If the value of Mix Digest is smaller than or equal to this target threshold, the nonce is accepted and transferred to the Ethereum network. If this is not the case, the nonce is dismissed and a new nonce is used for the comparison, which is usually a newly generated random number, see step 6.
Analyzing the different steps of the algorithm, it becomes clear why memory bandwidth is the limiting factor: Each read of a new mix fetches 128 bytes from the DAG. Hashing just one nonce needs 64 mixes, which corresponds to 8 KB of data. But the DAG pages are read at random locations, so caching the DAG in a small CPU cache would be of no benefit, because the next DAG page read would most likely miss the cache. When benchmarking Ethereum for this article, the DAG size was around 2.37 GB, while the biggest CPU cache was around 128 MB at this time [6].
In conclusion, the only way to improve the performance is to decrease the access time to the DAG, which is equivalent to an increase in the overall memory bandwidth.
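The structure of this loop can be sketched schematically in Python; the DAG below is replaced by a small random array and the mixing and compression functions are placeholders (real Ethash uses FNV-based mixing and Keccak variants), so the sketch only illustrates why each nonce forces 64 pseudo-random 128-byte DAG reads:

```python
import hashlib, os
import numpy as np

PAGE = 128                                    # bytes fetched from the DAG per mix round
dag = np.frombuffer(os.urandom(1 << 20), dtype=np.uint8).reshape(-1, PAGE)  # stand-in DAG

def ethash_like(header: bytes, nonce: int, target: int) -> bool:
    mix = hashlib.sha3_512(header + nonce.to_bytes(8, "little")).digest()   # step 1: Mix 0
    mix = np.frombuffer(mix * 2, dtype=np.uint8)                            # widen to 128 bytes
    for _ in range(64):                                                     # steps 2-4
        page = dag[int(mix[:4].view(np.uint32)[0]) % len(dag)]              # pseudo-random DAG page
        mix = np.bitwise_xor(mix, page)                                     # placeholder combine step
    digest = hashlib.sha3_256(mix.tobytes()).digest()                       # step 5: Mix Digest
    return int.from_bytes(digest, "big") <= target                          # step 6: compare to target

print(ethash_like(b"block header", nonce=42, target=1 << 255))
```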
XMR: the CryptoNight algorithm
CryptoNight is the algorithm used by the crypto-currency Monero [4]. The algorithm basically consists of three main steps: the scratchpad initialization, the memory-hard loop and hashing operations.
In the first step, a large scratchpad is initialized with pseudo-random data. To do so, the input data are hashed with Keccak-1600 (of which the well-known SHA-3 (Secure Hash Algorithm 3) is a subset), which results in 200 bytes of pseudo-random data. By applying AES-256 (Advanced Encryption Standard) encryption to these 200 bytes, a 2-MB buffer of pseudo-random data is seeded. Bytes 0 to 31 of the Keccak-1600 hash are used as the AES key. The encryption is performed on 128-byte payloads until 2 MB is reached. The Keccak-1600 bytes 66 to 191 are used as the first payload. The following payloads are encrypted on the results of the previous ones. Finally, each 128-byte payload is encrypted 10 times.
The second step, the so-called memory-hard loop, basically consists of 524,288 iterations of a simple stateful algorithm. All iterations read and write the scratchpad at pseudo-random locations. It is not possible to calculate the state of future iterations directly.
The last step performs a hashing of the entire scratchpad to produce the resulting value. The step combines the original Keccak-1600 data with the entire scratchpad. Then, the algorithm picks one of four hashing algorithms (BLAKE-256, Groestl-256, JH-256 or Skein-256) and hashes the result with the selected hashing function. The resulting 256-bit hash is the final output of the CryptoNight algorithm [7]. To make this algorithm ASIC-resistant (ASIC: application-specific integrated circuit), it is changed slightly every 6 months [8, 9].
ZEC: the Equihash algorithm for ZCash
Equihash is a Proof-of-Work algorithm which is based on the generalized birthday problem and the enhanced Wagner's algorithm. The algorithm uses three parameters n, k, and d [10, 11]. The values of these parameters determine the time and memory requirements of the algorithm. The Equihash algorithm solves a modified generalized birthday problem: Given a single list L of n-bit strings \({x_i}\) with \(|L| \ll 2^n\), find exactly \(2^k\) distinct strings \(x_1, x_2, \dots , x_{2^k}\) from L such that
$$\begin{aligned} H(x_1) \oplus H(x_2) \oplus \cdots \oplus H(x_{2^k}) = 0 \end{aligned}$$
and \(H(x_1 || x_2 || \ldots || x_{2^k})\) has d leading zeros. H is a given hash function and \(\oplus\) denotes the XOR on bit strings. ZCash uses the Equihash algorithm with \(n=200\) and \(k=9\) (d is set to zero), see Fig. 2 for an illustration. With these parameters the birthday problem has a minimum memory size of 522 MB. The algorithm also uses a seed I which is obtained by a hash transfer. V is a 160-bit nonce to be determined. The Equihash algorithm solves the modified generalized birthday problem by finding \(x_1, x_2, \dots , x_{2^k}\) with all numbers smaller than \(2^{\frac{n}{(k+1)}+1}\) to solve Eq. (2) as stated above. H is a Blake2b hashing function [11]. For a detailed explanation it is recommended to read the original publication [10].
Flow diagram of the Equihash algorithm used by ZCash [10]
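For intuition, a toy sketch of the simplest \(k = 1\) instance of this problem (with tiny, illustrative parameters far below ZCash's \(n = 200\), \(k = 9\)): finding two strings whose \(n\)-bit hashes XOR to zero is just finding a hash collision, which a single hash-table pass solves; Wagner's algorithm generalizes this idea by colliding on \(n/(k+1)\) bits per round.

```python
import hashlib

def h(x: bytes, n_bits: int) -> int:
    """First n_bits of a Blake2b digest, mirroring Equihash's Blake2b-based H."""
    return int.from_bytes(hashlib.blake2b(x).digest(), "big") >> (512 - n_bits)

def birthday_pair(strings, n_bits=20):
    """Find x_i, x_j with H(x_i) XOR H(x_j) = 0 on n_bits, i.e. a hash collision."""
    seen = {}
    for x in strings:
        d = h(x, n_bits)
        if d in seen and seen[d] != x:
            return seen[d], x
        seen[d] = x
    return None

candidates = (str(i).encode() for i in range(2_000_000))
print(birthday_pair(candidates))   # a pair appears after roughly 2**(n_bits/2) tries
```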
Pre-profiling ETH, XMR, ZEC
The currencies employed in the evaluation in Chapter 5 are Ethereum (ETH), Monero (XMR) and ZCash (ZEC), which all use Proof-of-Work (PoW) algorithms. They were chosen due to their different hashing functions, which lead to different computing characteristics: Ethereum uses the Ethash-based hashing function, Monero utilizes the CryptoNight protocol, and ZCash is Equihash based. At the time of writing this paper, CryptoNight was available in version 7. For the mining of ETH, ethminer [12] in version 0.15.0.dev11 was used, for XMR xmr-stak [13] in version 2.4.5, and for ZEC excavator [14] in version 1.1.0a.
The hash function used in the blockchain of the corresponding currency is essential for the performance. To quantify this effect, the mining of the different currencies was profiled on Windows 10 with NVIDIA driver version 391.24 and CUDA toolkit version 9.1. Figure 3 shows the profiling results of ETH, ZEC and XMR on an NVIDIA Titan X (Pascal) obtained with the NVIDIA Visual Profiler, NVVP [15]. The utilized memory bandwidth for read and write accesses, as well as the number of instructions per clock cycle (IPC), is measured. The measured memory bandwidth indicates how strongly a program is memory-bound, while the measured IPC indicates how strongly a program is compute-bound. As each miner uses several CUDA kernels, the mean over the individual kernels, weighted by their GPU time, is displayed.
The memory bandwidth indicates that ETH is most strongly memory-bound among the currencies examined, followed by XMR and ZEC. The executed IPC shows that ZEC is most strongly compute-bound, followed by ETH and XMR. The stronger a currency is memory-bound, the faster and longer the hashrate will increase when the VRAM frequency is raised. Similarly, the stronger a currency is compute-bound, the faster and longer the hashrate will increase when the core frequency is raised.
Measured VRAM memory bandwidth (left) and executed IPC (right) during mining of ETH (ethminer), XMR (xmr-stak) and ZEC (excavator) on the Titan X with standard frequencies
In this section, we will explain some technical background. This includes information about the hardware used for the experimental evaluation, the operating system, the energy measurement and the frequency adaptation.
How GPUs work in this context
GPUs are massively parallel many-core processors, which have, compared to CPUs, a high amount of computing power and a large memory bandwidth. On GPUs a larger number of transistors are assigned to data processing than on CPUs, where a relatively high share of the transistors is used for caches and control logic. The reason for this lies in the fact that CPUs and GPUs serve different purposes: CPUs are designed to minimize the latency of a single thread, whereas GPUs are designed to maximize the throughput of all threads.
On a GPU, a thread corresponds to a sequence of SIMD (Single Instruction Multiple Data) lane operations. As a result, GPUs are well suited for computations in which the same instructions are executed on different data in parallel in SIMD style. Executing the same instructions on different data allows memory access latencies to be hidden by computations, which reduces the need for big caches on GPUs [16].
The hash algorithms described in Sect. 2.2 are utilized during the PoW approach described in Sect. 2.1 for the different currencies evaluated. To solve the PoW, different numbers (nonces) can be checked with the hash algorithms in parallel. This corresponds to SIMD computations suitable for GPUs (1:1 mapping of nonces to SIMD lanes). The more nonces can be checked in parallel, the faster a solution to the PoW can be found, earning the block reward. The number of checked nonces per second is called hashrate.
Energy measurement
The energy measurement is incorporated as an individual module and is activated separately for each GPU as a background process. During the energy measurement, the real-time power consumption of the specific GPU is measured periodically via an infinite loop using the NVML [17] library. The measured data are reported in a file with the corresponding system time. The energy consumption for a given time frame is calculated by reading the values for the specific time period and calculating the mean value.
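A hedged sketch of such a sampling loop using the pynvml Python bindings (the sampling interval, duration and output path are arbitrary choices; the framework itself is written against the NVML C API, so this only mirrors the described behaviour):

```python
import time
import pynvml

def sample_power(device_index=0, interval_s=0.1, duration_s=10.0, logfile="power.log"):
    """Periodically log (timestamp, watts) for one GPU and return the mean power."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
    samples = []
    end = time.time() + duration_s
    with open(logfile, "w") as f:
        while time.time() < end:
            watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # NVML reports milliwatts
            f.write(f"{time.time():.3f} {watts:.1f}\n")
            samples.append(watts)
            time.sleep(interval_s)
    pynvml.nvmlShutdown()
    return sum(samples) / len(samples)
```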
Frequency adaptation
DVFS enables us to change the operational frequencies of a computing unit dynamically and is also available for GPUs. The frequency adaptation uses the NVML [17] library and, depending on the operating system, the NVAPI [18] (Windows) or the NV-Control X [19] (Linux) library. The parameters are the device ID, the VRAM frequency to be set, and the core frequency given as an index into a vector which holds the available frequencies. There are three ways of setting the frequencies, depending on which APIs are supported on the specific GPU (a pynvml-based sketch of the NVML calls follows the list below):
NVML and NVAPI/NV-Control X: The VRAM and the core frequency can be set in the full available range of frequencies.
NVML only: The core frequency can be set in a slightly restricted value range; however, the VRAM frequency cannot be set.
NVAPI/NV-Control X only: The VRAM frequency can be set in the full range, and the core frequency is set in a strongly restricted range.
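As an illustration of the NVML route only, the sketch below sets application clocks, assuming the pynvml bindings; nvmlDeviceSetApplicationsClocks accepts only clock pairs reported as supported by the driver, so on many GeForce boards the VRAM frequency effectively cannot be changed this way, which matches the "NVML only" case above. The NVAPI and NV-Control X paths are not shown.

```python
import pynvml

def set_clocks_nvml(device_index: int, vram_mhz: int, core_index: int):
    """Set application clocks via NVML: pick the supported memory clock closest to the
    requested VRAM frequency and address the core frequency by an index into the vector
    of graphics clocks supported at that memory clock (as in the framework's interface)."""
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
        mem_clocks = pynvml.nvmlDeviceGetSupportedMemoryClocks(handle)
        mem = min(mem_clocks, key=lambda c: abs(c - vram_mhz))
        core_clocks = sorted(pynvml.nvmlDeviceGetSupportedGraphicsClocks(handle, mem))
        core = core_clocks[core_index]
        pynvml.nvmlDeviceSetApplicationsClocks(handle, mem, core)
        return mem, core
    finally:
        pynvml.nvmlShutdown()
```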
The GPUs used to evaluate the framework are shown in Table 1 along with important hardware information. The table also displays which APIs from Sect. 3.3 are available for overclocking and underclocking, as well as the adjustable frequency range.
Table 1 GPUs used for evaluation
The CPU used is an Intel Broadwell 6950X Processor. Since all measurements run on the GPUs, the main task of the CPU is to schedule work on the GPUs and to run the operating systems.
Autotuning framework
For our experiments, we have developed a framework with autotuning features. The source code of the framework is available at https://github.com/UBT-AI2/dvfs_gpu.
In this section, we focus on the main modules of the autotuning framework and its implementation. The framework has been developed for a collection of GPUs attached to the same computer. The goal is to reduce the overall energy consumption of the entire system as far as possible while keeping the performance stable. The framework is organized in three main autotuning phases: (1) an offline frequency (pre-)selection, (2) an online frequency optimization and (3) a monitoring phase. These phases are described in Sects. 4.2, 4.3 and 4.4. As usual for autotuning, we also use search strategies to determine whether one setting is better than another, see Sect. 4.1.
Optimization procedure
Three different search strategies have been implemented to find the energy optimal frequencies: Hill Climbing, Simulated Annealing, and Nelder-Mead [20]. The optimization function maps the adjustable frequency range to the amount of hashes per Joule, obtained while mining a currency.
The target function \(f:[a,b] \times [c,d] \rightarrow \mathbb {R}_0^+\) with \(a,b,c,d \in \mathbb {N}\), given in Eq. (3), maps the adjustable frequency range of a GPU to the number of hashes per Joule achieved while mining a crypto-currency, where [a, b] is the range of the adjustable core frequencies and [c, d] is the range of the adjustable VRAM frequencies. The frequencies are expressed as integer values in MHz. The target function is computed as:
$$\begin{aligned} f(vram, core) = \frac{hashrate(vram, core)}{energy\_consumption(vram, core)} \end{aligned}$$
The optimization procedure aims to maximize the value of f in the adjustable frequency range. Additionally, as a side condition for the optimization procedure, a minimum hashrate to adhere to can be specified. The value range for the frequency to be adjusted is determined for each GPU at program start. It depends on the APIs supported on the GPU for frequency adjustment. Depending on supported APIs, the target function (3) is zero-dimensional, one-dimensional or two-dimensional.
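A minimal sketch of this target function and of one possible Hill Climbing loop over the discrete frequency grid is given below. The helpers measure_hashrate and measure_energy_consumption stand for the offline or online measurements described in the following sections; the neighbourhood step widths, the handling of the minimum-hashrate constraint (infeasible points are mapped to 0) and the termination rule are our own assumptions, not the framework's exact implementation.

```python
import itertools

def target(vram, core, measure_hashrate, measure_energy_consumption, min_hashrate=0.0):
    """Hashes per Joule at a frequency pair (Eq. 3); points violating the constraint score 0."""
    hr = measure_hashrate(vram, core)                 # H/s
    power = measure_energy_consumption(vram, core)    # W = J/s
    if hr < min_hashrate or power <= 0:
        return 0.0
    return hr / power                                 # H/J

def hill_climb(start, bounds, f, step=(100, 50), max_iter=6):
    """Greedy ascent on the discrete (vram, core) grid: move to the best neighbour
    as long as it improves f, up to max_iter iterations."""
    (vmin, vmax), (cmin, cmax) = bounds
    current, best = start, f(*start)
    for _ in range(max_iter):
        neighbours = [(current[0] + dv, current[1] + dc)
                      for dv, dc in itertools.product((-step[0], 0, step[0]),
                                                      (-step[1], 0, step[1]))
                      if (dv, dc) != (0, 0)
                      and vmin <= current[0] + dv <= vmax
                      and cmin <= current[1] + dc <= cmax]
        if not neighbours:
            break
        cand, val = max(((n, f(*n)) for n in neighbours), key=lambda t: t[1])
        if val <= best:
            break
        current, best = cand, val
    return current, best   # best frequency pair found and its hashes-per-Joule value
```

A call would first bind the measurement routines, e.g. f = lambda v, c: target(v, c, bench_hashrate, bench_power), and then run hill_climb((vram_mid, core_mid), ((vram_min, vram_max), (core_min, core_max)), f).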
Offline phase
The frequency optimization is executed for each GPU in a separate CPU thread which determines the energy optimal frequency for each currency assigned to the GPU. This is performed by using the optimization procedure introduced in Sect. 4.1.
The frequency optimization itself is divided into an offline and an online phase. These two phases differ in the evaluation of the target function from Eq. (3), i.e., in how the hashrate and the energy consumption at a given frequency are determined.
In the offline phase, the frequencies are optimized via short offline benchmarks. In this context, offline is defined as having no connection to the mining pool and a lack of network communication. The performance (hash amounts per second) of the hash algorithm is thus determined under ideal conditions, as network faults and latency are no longer a bottleneck.
During every evaluation of the target function from Eq. (3), the binary of the miner is invoked in benchmark mode for the currency to be optimized. The hashrate achieved is stored in a data structure together with the maximum energy consumption measured during the benchmarking. Information regarding frequencies used in these measurements and the time frame of this benchmarking are also saved.
The duration of an offline phase depends on the run time of a measurement and the applied optimization procedure. For the tested currencies, the measurement duration ranged between 2 and 30 s. The optimization procedures need 10–15 function evaluations, depending on their individual starting points.
This module can also be used with any other kind of benchmark to determine an energy-optimal frequency setting.
Online phase
During the online phase, frequencies are optimized under real conditions. The online mining with a mining pool is initiated at the beginning of the online frequency optimization phase as a background process. This background process continuously writes the hashrate currently achieved along with the corresponding system time to a log file.
To evaluate the target function, the desired frequencies must be set and the current system time saved. While waiting for a determinable time frame, which is typically between two and three minutes, the average hashrate can be read off the log file. Moreover, the average energy consumption during this time can be determined as described in Sect. 3.2. Hashrate, energy consumption and measuring period are stored in a data structure as in the offline phase.
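A sketch of one such online function evaluation is shown below. It assumes that the background miner writes lines of the form "timestamp,hashrate" to its log file and reuses the mean_power helper from the energy-measurement sketch above; the log format, waiting time and function names are assumptions, not the framework's actual interfaces.

```python
import time

def online_evaluate(vram, core, set_frequencies, hashrate_log, power_log, mean_power, wait_s=150):
    """Evaluate Eq. (3) under real mining conditions: set the frequencies, wait for a
    measuring period, then average hashrate and power over exactly that period."""
    set_frequencies(vram, core)              # e.g. set_clocks_nvml from Sect. 3.3
    t_start = time.time()
    time.sleep(wait_s)                       # typically two to three minutes
    t_end = time.time()

    rates = []
    with open(hashrate_log) as f:            # assumed format: "timestamp,hashrate"
        for line in f:
            ts, hr = map(float, line.split(","))
            if t_start <= ts <= t_end:
                rates.append(hr)
    avg_hr = sum(rates) / len(rates) if rates else 0.0
    avg_power = mean_power(power_log, t_start, t_end)

    return {"vram": vram, "core": core, "hashrate": avg_hr, "power": avg_power,
            "period": (t_start, t_end),
            "hashes_per_joule": avg_hr / avg_power if avg_power > 0 else 0.0}
```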
The result of the offline phase serves as the starting point for the online phase. The creation of a measuring point (function evaluation) takes significantly more time in the online phase than in the offline phase. However, fewer function evaluations are needed, as the starting point is usually already close to the optimum. Moreover, mining income is earned during the online phase, as real mining is running in the background.
Monitoring phase

Similar to the frequency optimization phase, the monitoring phase is executed by a separate CPU thread for each GPU. The monitoring thread runs an infinite loop and is started as soon as optimal frequencies are available for all currencies on the corresponding GPU. This is the case when the frequency optimization phase is completed for all GPUs of the associated GPU group (see Sect. 4.5).
The monitoring phase is responsible for the periodic calculation of energy costs and mining revenues. The energy costs in Euro per second are computed as follows:
$$\begin{aligned} energy\_cost \left[ \frac{\hbox {Euro}}{\hbox {s}}\right] = energy\_consumption\,[\hbox {Ws}] \cdot \frac{energy\_cost\left[ \frac{\hbox {Euro}}{\hbox {kWh}}\right] }{1000\cdot 3600} \end{aligned}$$
To compute the mining revenue in Euro per second the following formula is used where hr is the abbreviation for hashrate [21]:
$$\begin{aligned} mining\_reward = \frac{user\_hr}{net\_hr} \cdot \frac{1}{block\_time} \cdot block\_reward \cdot stock\_price \end{aligned}$$
Here the user hashrate (\(user\_hr\)) is the hashrate that is obtained by the miner and the network hashrate (\(net\_hr\)) is the total hashrate of all miners of the currency. Moreover, \(block\_time\) is the average time needed to mine a new block, \(block\_reward\) is the reward of the mined currency that a miner receives when finding a new block, and \(stock\_price\) is the stock price for one unit of the currency in Euros.
The average block time is given by the currency (e.g., 15 s for Ethereum) and should remain constant, independently of the miner and the current network hashrate. For this reason the block difficulty is continuously adjusted according to the current network hashrate: if the network hashrate increases (decreases), the block difficulty \((block\_df)\) rises (falls). Thus:
$$\begin{aligned} block\_df=net\_hr \cdot block\_time \end{aligned}$$
In the framework, \(stock\_price\) is retrieved from CryptoCompare [22], and \(net\_hr\), \(block\_time\) and \(block\_reward\) are retrieved from WhatToMine [23] via their REST APIs. Subtracting the energy costs from the mining revenue gives the profit:
$$\begin{aligned} mining\_profit=mining\_reward - energy\_cost \end{aligned}$$
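Combining the three formulas, a profit recalculation might look as follows; the REST queries to CryptoCompare and WhatToMine are omitted and their results are passed in as plain numbers, the energy_consumption [Ws] term is interpreted as the average power draw in watts, and the example values at the end are purely illustrative.

```python
def energy_cost_per_s(power_w: float, price_eur_per_kwh: float) -> float:
    """Energy cost in Euro per second for a given average power draw in watts."""
    return power_w * price_eur_per_kwh / (1000 * 3600)

def mining_reward_per_s(user_hr: float, net_hr: float, block_time_s: float,
                        block_reward: float, stock_price_eur: float) -> float:
    """Expected revenue in Euro per second: the miner's share of the network hashrate
    times the rate of block rewards, converted at the current stock price."""
    return (user_hr / net_hr) * (1.0 / block_time_s) * block_reward * stock_price_eur

def mining_profit_per_s(user_hr, net_hr, block_time_s, block_reward,
                        stock_price_eur, power_w, price_eur_per_kwh) -> float:
    """mining_profit = mining_reward - energy_cost, both in Euro per second."""
    return (mining_reward_per_s(user_hr, net_hr, block_time_s, block_reward, stock_price_eur)
            - energy_cost_per_s(power_w, price_eur_per_kwh))

# illustrative numbers only, roughly ETH-like orders of magnitude
profit = mining_profit_per_s(user_hr=30e6, net_hr=180e12, block_time_s=15,
                             block_reward=3.0, stock_price_eur=150.0,
                             power_w=180.0, price_eur_per_kwh=0.1)
```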
If the currently mined currency is no longer the most profitable one after the recalculation of energy costs and mining revenues, the background mining of this currency is terminated and the mining of a new, more profitable currency is initiated.
Subsequently, energy-efficient frequencies for the newly mined currency are determined again, trying to find even better frequencies. The search method is identical to the online frequency optimization method in Sect. 4.3. The starting point of the search is the previously used frequency for the particular currency. The previous optimization result is updated with the result of the new optimization.
Handling multiple GPUs
The device information of the hardware system to be used is read for all GPUs. The GPUs are subdivided into groups for the offline and the online frequency optimization; identical GPUs are allocated to the same group. Each GPU group must be able to optimize the frequencies for all available currencies. After the program is started, existing optimization results from previous measurements can be incorporated for the individual groups, which reduces the optimization effort needed. If the optimization result of a group contains values for all currencies, the complete frequency optimization phase is skipped for that group. The currencies with no optimization result available are distributed among the individual GPUs of the group for frequency optimization.
The reason behind the arrangement of GPUs into groups is that the optimum frequencies for the individual currencies are considered to be identical for identical GPUs. Thus, the optimization for the individual currencies can be performed in a parallel fashion within each group, exchanging optimization results between the GPUs of the group. This results in an acceleration of the frequency optimization phase.
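The grouping and the distribution of currencies within a group might be sketched as follows; the device names would come from NVML, while the grouping key and the round-robin distribution are our own simplifications.

```python
from collections import defaultdict
from itertools import cycle

def group_gpus(device_names):
    """Group device indices by GPU model name, so identical GPUs end up in one group."""
    groups = defaultdict(list)
    for idx, name in enumerate(device_names):
        groups[name].append(idx)
    return groups

def distribute_currencies(gpu_indices, currencies):
    """Round-robin distribution of the currencies still to be optimized onto the GPUs of a
    group, so that the per-currency optimization runs in parallel within the group."""
    assignment = defaultdict(list)
    for currency, gpu in zip(currencies, cycle(gpu_indices)):
        assignment[gpu].append(currency)
    return assignment

# example: two identical cards share the optimization of three currencies
groups = group_gpus(["TITAN X (Pascal)", "TITAN X (Pascal)", "TITAN V"])
plan = distribute_currencies(groups["TITAN X (Pascal)"], ["ETH", "XMR", "ZEC"])
```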
Subsequently, a thread is started for every GPU. These threads run for the complete length of the program, i.e., the complete frequency optimization and monitoring phase are executed separately for each GPU in a separate thread. In the monitoring phase, the threads run in an infinite loop until the user stops the program. At program termination the optimization results that have been updated during the monitoring phase are saved as a JSON file. These results contain information about optimal frequencies, hashrates and energy consumption for each currency on each GPU.
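A possible shape of such a result file, written with Python's json module, is sketched below; the exact schema of the framework's JSON file is not documented here, so the keys and values are assumptions.

```python
import json

# hypothetical structure: per GPU model, per currency, the stored optimization result
results = {
    "TITAN X (Pascal)": {
        "ETH": {"vram_mhz": 5400, "core_mhz": 1600, "hashrate_hs": 35.0e6, "power_w": 180.0},
        "ZEC": {"vram_mhz": 4000, "core_mhz": 1700, "hashrate_hs": 700.0, "power_w": 190.0},
    },
}

with open("optimization_results.json", "w") as f:
    json.dump(results, f, indent=2)

with open("optimization_results.json") as f:
    loaded = json.load(f)   # reloaded at the next program start to skip finished optimizations
```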
Experimental evaluation
In this section, we evaluate the framework introduced in Sect. 4. For the evaluation, the framework is tested with different GPUs and different currencies as introduced in Sects. 2.3 and 3.4.
Energy optimum ETH-Ethash
In order to determine the energy optimum, i.e., the maximum number of hashes per Joule, the hashrates and the corresponding energy consumption are measured for all adjustable frequencies on the individual GPU.
Figure 4 shows the results of the measurement on the Titan X and Titan V for ETH. It can be seen that ETH benefits most from higher VRAM frequencies compared to XMR and ZEC, confirming the profiling result in Sect. 2.3. The energy optimum is usually located in mid-range core frequencies and mid-range to high-range VRAM frequencies.
To detect the efficiency increase, the values at optimum frequencies are compared with those at default frequencies (see Table 2). The default frequencies are determined by starting the miner on the corresponding GPU and by observing the frequencies adjusted by the NVIDIA-Driver.
Fig. 4 ETH: hashrate (above), energy consumption (middle) and hashes per Joule (below) for all adjustable frequencies on the Titan X (left column) and the Titan V (right column)
Table 2 ETH: optimal versus default frequencies on different GPUs
Energy optimum XMR-CryptoNight
As can be seen in Fig. 5, XMR behaves similarly to ETH. However, as XMR is less compute-bound and less memory-bound than ETH (see Fig. 3), the hashrate increases with a flatter slope when the frequencies are raised. It is also noteworthy that XMR requires relatively little energy. The efficiency gain via the optimization of frequencies is summarized in Table 3.
Unlike other currencies, the mining of XMR is also worthwhile on CPUs. A possible reason for this is that the algorithm does not push the large number of computing units of a GPU to the limits of their capacity, as the low IPC values demonstrate. This also explains the low energy demand.
Fig. 5 XMR: hashrate (above), energy consumption (middle) and hashes per Joule (below) for all adjustable frequencies on the Titan X (left column) and the Titan V (right column)
Table 3 XMR: optimal versus default frequencies on the different GPUs
Energy optimum ZEC-Equihash
In comparison with ETH and XMR, ZEC is more compute-bound, as visible in Fig. 3. Hence, the energy optimum is usually located at lower VRAM clock rates and slightly higher core clock rates (see Fig. 6). For this reason, mining ZEC on the Titan V is not very efficient, as this GPU is characterized by a fast HBM2 memory which the ZEC algorithm cannot utilize very well. Moreover, due to the high IPC value (see also Fig. 3) and the corresponding compute-bound characteristic, ZEC has a higher energy demand because the CUDA cores are heavily utilized. Table 4 indicates the efficiency gain for ZEC when using optimal frequencies.
Fig. 6 ZEC: hashrate (above), energy consumption (middle) and hashes per Joule (below) for all adjustable frequencies on the Titan X (left column) and the Titan V (right column)
Table 4 ZEC: optimal versus default frequencies on the different GPUs
Search strategies evaluation
In this section, we will look at the different optimization algorithms (see Sect. 4.1), which are used during the frequency optimization phases (see Sects. 4.2 and 4.3). To do so, we will observe both a 2D optimization (VRAM frequency, core frequency) on the Titan X and a 1D optimization (core frequency) on the Quadro P4000 since the Quadro does not allow us to change the VRAM frequency. ZEC will be used as currency.
Every algorithm (Hill Climbing, Simulated Annealing and Nelder-Mead) is executed three times with different starting frequencies: the maximum, the minimum and the middle frequency. Furthermore, all algorithms are also evaluated with a minimum hashrate as a constraint. The maximum number of iterations is set to six for all tests.
The following sections introduce the optimization process and evaluate the performance of the individual algorithms. The performance of the optimization is measured with the following two criteria:
deviation between the estimated optimum and the real optimum and
required number of function evaluations to find the best frequency value.
Performance of Hill Climbing
The results of optimizing with Hill Climbing are summarized in Table 5. The first table row shows the optimum found by an exhaustive search (see Table 4). The second table row lists the energy efficiency (number of hashes per Joule) obtained with the associated frequencies in brackets (VRAM frequency, core frequency) for the different starting points. The third table row displays the required number of function evaluations to find the best energy efficiency, as well as the total number of function evaluations until termination in brackets. Both are shown columnwise for the Titan X (2D optimization) and the Quadro P4000 (1D optimization).
Figure 7 shows the associated optimization procedure of Hill Climbing, indicated by the additional black lines. The focus of these figures is the number of iterations rather than the exact path of the Hill Climbing approximation itself. The algorithm's progress is always shown together with the function to be optimized at the different starting points for both GPUs.
For all starting points, Hill Climbing attains a result value near the optimum on both GPUs. On average, a result value of 3.98 H/J is found on the Titan X; the optimum from Table 4 lies at 4.13 H/J. On the Quadro P4000, the result value of 3.31 H/J found on average is almost identical to the optimum of 3.34 H/J from Table 4. These small deviations can be explained by measurement inaccuracies. In general, it is easier to find the optimum for a 1D optimization as on the Quadro P4000. On average, five function evaluations are required to find the best value on the Quadro P4000 and 15 evaluations on the Titan X.
Table 5 Hill Climbing: result and number of function evaluations of optimization with starting points minimum, medial and maximum frequencies
Fig. 7 Hill Climbing: frequency optimization procedure of ZEC with starting points of minimal (above), medial (middle) and maximal (below) frequencies on the Titan X (left column) and the Quadro P4000 (right column)
Performance of Simulated Annealing
The performance of Simulated Annealing is similar to that of Hill Climbing on both GPUs. The ability of Simulated Annealing to escape local optima is not needed here, since the measured target function does not exhibit any. Table 6 shows the result and the required number of function evaluations for the optimization; the table structure is identical to that of Table 5 explained in Sect. 5.4.1. The corresponding optimization procedures at different starting points are shown in Fig. 8. Again, the exact paths shown as black lines are less important than the number of iterations Simulated Annealing needs to find a point near the optimum.
On average, a value of 4 H/J is found on the Titan X, which is equivalent to a deviation of 0.13 H/J from the optimum of 4.13 H/J in Table 4. Aside from measurement inaccuracies, the maximal value is also found on the Quadro P4000. The average number of function evaluations is identical to that of Hill Climbing, being 15 on the Titan X and five on the Quadro P4000. This is partly due to the fact that Simulated Annealing uses the same pattern as Hill Climbing to explore new solution candidates.
Table 6 Simulated Annealing: result and amount of function evaluations of optimization of ZEC, starting with minimal, medial and maximal frequencies
Fig. 8 Simulated Annealing: frequency optimization procedure of ZEC with starting points of minimal (above), medial (middle) and maximal (below) frequencies on the Titan X (left column) and the Quadro P4000 (right column)
Performance of Nelder-Mead
The optimization with the Nelder-Mead procedure performs slightly worse than with Hill Climbing and Simulated Annealing. The values obtained and the required number of function evaluations are summarized in Table 7; the table structure is described in Sect. 5.4.1. Figure 9 shows the corresponding optimization procedures at different starting frequencies. The black lines in this figure represent the path of Nelder-Mead. More important than the exact path is the number of iterations Nelder-Mead needs to get near the optimum, which is represented by the green dot.
The energy efficiency averages 3.88 H/J on the Titan X, corresponding to a deviation of 0.25 H/J from the optimum in Table 4. Especially when starting with the maximum frequency, the VRAM frequency range is not explored sufficiently and hence only a value of 3.72 H/J is found. As with Hill Climbing and Simulated Annealing, and irrespective of measurement inaccuracies, the maximum value is also found on the Quadro P4000. In order to find the best value, the Nelder-Mead procedure needs on average ten function evaluations on both the Titan X and the Quadro P4000. This makes the Nelder-Mead procedure faster on the Titan X (10 vs. 15 function evaluations), but slower than Hill Climbing and Simulated Annealing on the Quadro P4000 (10 vs. 5 function evaluations).
In general, the Nelder-Mead procedure is relatively independent of the dimension when exploring new solution candidates. Although the simplex has more or fewer points depending on the dimension, in each iteration either the worst point of the simplex is replaced or the simplex is compressed as a whole [24]. Only when calculating the initial simplex or when compressing the simplex does the dimension have an influence on the number of function evaluations.
Table 7 Nelder-Mead: result and number of function evaluations from optimizing the ZEC, starting with minimal, medial and maximal frequencies
Fig. 9 Nelder-Mead: frequency optimization procedure of ZEC with starting points of minimal (above), medial (middle) and maximal (below) frequencies on the Titan X (left column) and the Quadro P4000 (right column)
Optimization with constraints
To evaluate an optimization under constraints the three different algorithms are executed with a minimum target hashrate of 80% (95%) of the maximum hashrate on the Titan X (Quadro P4000). The starting frequencies must always be at the maximum for optimization with minimum hashrates, as the absolute value of applicable hashrates is calculated using the measurement values at maximum frequencies.
The results and the required number of function evaluations for the different algorithms are shown in Table 8. The table structure is similar to that described in Sect. 5.4.1; however, instead of different starting points, the different algorithms are displayed. Figure 10 shows the optimization procedures corresponding to the table. The purple grid (Titan X) and the green line (Quadro P4000) mark the function area where the constraint on the applicable hashrate is satisfied; the best energy-efficiency value on this grid or line is the result. The optimum is subject to measurement fluctuations and lies around 3.6 H/J on the Titan X and 3.05 H/J on the Quadro P4000.
The results of the different algorithms are relatively similar; nonetheless, the Nelder-Mead procedure provides slightly worse values. The number of function evaluations needed to find the best value behaves similarly to an optimization without a constraint: the Nelder-Mead procedure needs fewer function evaluations than Hill Climbing and Simulated Annealing during the 2D optimization on the Titan X, but more function evaluations than these procedures during the 1D optimization on the Quadro P4000.
Table 8 Result and number of function evaluations of ZEC optimization with different algorithms under the constraint of a minimally applicable hashrate
Fig. 10 Frequency optimization procedure of ZEC with Hill Climbing (above), Simulated Annealing (middle) and Nelder-Mead (below) with the constraint of a minimally applicable hashrate on the Titan X (left column) and the Quadro P4000 (right column)
To evaluate the monitoring phase, the framework has been executed on a computer with all four GPUs from Table 1 attached, in a time frame from 04.10.2018 20:00h till 07.10.2018 20:00h. The currencies used were ETH, XMR and ZEC, as well as the newer, less established currencies Lux-Coin, Raven, Bitcore and Vertcoin (LUX, RVN, BTX and VTC), so-called altcoins, which were added for this evaluation because of the attention they attracted at the time. For the altcoins the ccminer [25] in version 2.3 has been used. The energy costs were set to 0.1 Euro/kWh.
Figure 11 shows the calculated mining profit at energy optimal frequencies for every currency on the individual GPUs. In each case only the most profitable currency is mined. Figure 12 shows the earnings obtained and the energy costs, as well as the resulting profits for all GPUs individually and overall.
The figure indicates that the Titan V yields the highest profits, followed by the Titan X; the GTX 1080 and the Quadro P4000 lie in the middle range. The energy costs are quite similar for the Titan V and the Titan X, followed by the GTX 1080 and the Quadro P4000, the latter being the most economical one. The currency ETH is predominantly mined on the Titan V, Titan X and Quadro P4000, while LUX is mostly mined on the GTX 1080.
Fig. 11 Calculated mining profits of the available currencies on the individual GPUs of a computer during the monitoring phase
Fig. 12 Calculated mining profits (above), mining earnings (below left) and energy costs (below right) of the most profitable currency, respectively, on all GPUs of a computer during profit monitoring
Related work

The effect of DVFS on the energy consumption of CPUs and GPUs has been explored in several research papers, see [1, 26] for an overview. In [27] different technologies for DVFS on GPUs are introduced and compared. In [28] the effect of DVFS is studied on an NVIDIA GeForce GTX 560 Ti using various sample programs; both core and VRAM frequencies as well as core and VRAM voltages are adjusted manually using the tools NVIDIA Inspector and MSI Afterburner. This paper was the inspiration for our work; our contribution is the dynamic setting of the frequencies and the automatic determination of the optimal frequencies. In [29] the effect of DVFS on GPU and CPU is compared using matrix computations as an example. Cameirinha [30] introduces a technique to reduce the energy consumption during the run time of GPU programs using DVFS and monitoring of the currently observed memory bandwidth. Dutta et al. [31] present models to forecast the energy consumption of a GPU when using different core and VRAM frequencies; the models are trained using machine learning techniques and measurement data from different applications. In [32], these kinds of models are used to increase the energy efficiency of mobile video games: the trained models are used in a power management system which adjusts CPU and GPU frequencies at run time. In [33] a general overview and a categorization of various autotuning techniques are given.
In the field of mining, there exists the commercial software Awesome Miner [34]. With this software, a profile with GPU frequencies, hashrates and energy consumption can be created manually for every GPU and currency. The mining profit is calculated based on these profiles and the corresponding coin statistics, and the most profitable currency is then mined.
Conclusion

In this article, we have presented an autotuning framework to increase the energy efficiency of computations on NVIDIA GPUs. Special attention has been given to the application of the framework to the mining of crypto-currencies.
The framework has been designed so that several GPUs of a computer can be handled and as many parameters as possible can be adapted via configuration data. The program procedure is divided into a frequency optimization phase and a profit monitoring phase.
During the frequency optimization phase, the frequency optimization occurs simultaneously on all specified GPUs. Yet, for each GPU every available currency must be optimized individually. The concept of GPU groups for identical GPUs solves this issue and allows the currencies to be divided among the different GPUs of a group. In order to permit frequency adjustments over a sufficiently large range and to allow for energy measurements on Windows and Linux, three NVIDIA-specific libraries (NVML, NVAPI, NV-Control X) were necessary. The optimization itself is based on three different optimization algorithms (Hill Climbing, Simulated Annealing, Nelder-Mead). These are employed in an offline phase based on short benchmarks and an online phase during which the mining with mining pools is already running. At the end of the frequency optimization phase, energy-optimal frequencies are known for all available currencies on all GPUs. In the following profit monitoring phase, energy costs and mining revenues at the optimal frequencies are calculated and the mining of the most profitable currency is initiated. The energy consumption and mining revenues are periodically updated, taking into account current stock prices and hashrates. If the currently mined currency is no longer the most profitable one, it is substituted, followed by a frequency re-optimization.
The framework has been evaluated using different GPUs and currencies. First, the energy-optimal frequencies were determined and the energy efficiency at these frequencies was compared with that at the default frequencies used by the NVIDIA driver. Depending on the GPU and the currency, an efficiency increase of up to 84 % could be obtained. Then, the different optimization algorithms were evaluated with respect to the optimum found and the required number of function evaluations. Finally, the mining revenues calculated in the profit monitoring phase for the available currencies were examined on a computer with four GPUs over a longer time period.
Our new contribution is the development of an easy-to-use open-source framework which allows program binaries to be started and then automatically adjusted to the most energy-efficient GPU setting. Until our publication there was no open-source project which was able to adjust the frequencies automatically. We used mining algorithms for the evaluation because of their high energy consumption, but it is also possible to run our framework with other GPU applications such as weather simulations. The criterion of the hashrate would then be replaced with another application-specific or general criterion such as the inverse run time.
For future work the voltages of the GPUs should also be considered. However, since the freely available APIs NVAPI and NV-Control X do not provide this functionality, we were not able to change the voltages.
TOP500.org (2018) TOP500 List November 2018. https://www.top500.org/lists/2018/11/. Accessed 18 Mar 2020
Oak Ridge National Laboratory (2018) SUMMIT. https://www.olcf.ornl.gov/olcf-resources/compute-systems/summit/. Accessed 18 Mar 2020
Konstantopoulos G (2017) Understanding blockchain fundamentals. https://medium.com/loom-network/search?q=Understanding%20Blockchain%20Fundamentals. Accessed 18 Mar 2020
Bashir I (2017) Mastering blockchain. Packt Publishing Ltd., ISBN: 978-1-78712-544-5
Unknown (2018) ETHASH. https://miningbitcoinguide.com/mining/sposoby/ethash. Accessed 18 Mar 2020
Constantin V (2018) ETHASH. https://cryptomonday.de/wie-funktioniert-mining-in-ethereum/. Accessed 18 Mar 2020
Cavicchioli M (2018) CryptoNight. https://monerodocs.org/proof-of-work/cryptonight/. Accessed 18 Mar 2020
Dölle M (2018) Ende der Grafikkarten-Ära: 8000 ASIC-Miner für Zcash, Bitcoin Gold & Co. https://www.heise.de/newsticker/meldung/Ende-der-Grafikkarten-Aera-8000-ASIC-Miner-fuer-Zcash-Bitcoin-Gold-Co-4091821.html. Accessed 18 Mar 2020
Vorick D (2018) The state of cryptocurrency mining. https://blog.sia.tech/the-state-of-cryptocurrency-mining-538004a37f9b. Accessed 18 Mar 2020
Biryukov A, Khovratovich D (2018) Equihash: asymmetric proof-of-work based on the generalized birthday problem (full version). https://orbilu.uni.lu/bitstream/10993/22277/2/946.pdf. Accessed 18 Mar 2020
Cavicchioli M (2018) Proof of work algorithms: Blake2b, Equihash, Tensority and X16R & S. https://en.cryptonomist.ch/2019/07/28/mining-algorithms-proof-of-work-2/. Accessed 18 Mar 2020
Oberhumer S (2018) ethminer. https://github.com/ethereum-mining/ethminer. Accessed 18 Mar 2020
fireice-uk (2018) xmr-stak. https://github.com/fireice-uk/xmr-stak. Accessed 18 Mar 2020
Nicehash (2018) excavator. https://github.com/nicehash/excavator. Accessed 18 Mar 2020
NVIDIA (2018) NVIDIA Visual Profiler. https://developer.nvidia.com/nvidia-visual-profiler. Accessed 18 Mar 2020
NVIDIA (2019) CUDA C++ programming guide. https://docs.nvidia.com/cuda/cuda-c-programming-guide. Accessed 18 Mar 2020
NVIDIA (2018) NVIDIA Management Library (NVML). https://developer.nvidia.com/nvidia-management-library-nvml. Accessed 18 Mar 2020
NVIDIA (2018) NVAPI. https://developer.nvidia.com/nvapi. Accessed 18 Mar 2020
NVIDIA (2018) NV-CONTROL X Extension-API specification v 1.6. https://github.com/NVIDIA/nvidia-settings/blob/master/doc/NV-CONTROL-API.txt. Accessed 18 Mar 2020
ASL ETHZ (2018) Numerical methods. https://github.com/ethz-asl/numerical_methods. Accessed 18 Mar 2020
Forum Bitcoin (2017) Ethereum (ETH) mining profit formula. https://bitcointalk.org/index.php?topic=2262328.0. Accessed 18 Mar 2020
CryptoCompare.com (2018) CryptoCompare API. https://www.cryptocompare.com/api/#-api-data-price-. Accessed 18 Mar 2020
whattomine.com (2018) Coin calculators. https://whattomine.com/calculators. Accessed 18 Mar 2020
Cheng J (2018) Numerical optimization. http://www.jade-cheng.com/au/coalhmm/optimization/. Accessed 18 Mar 2020
Pruvot T (2018) ccminer. https://github.com/tpruvot/ccminer. Accessed 18 Mar 2020
Rauber T et al (2014) Energy measurement, modeling, and prediction for processors with frequency scaling. J Supercomput 70(3):1451–1476. https://doi.org/10.1007/s11227-014-1236-4
Mishra A, Khare N (2015) Analysis of DVFS techniques for improving the GPU energy efficiency. Open J Energy Effic 4:77–86
Mei X, Yung LS, Zhao K, Chu X (2013) A measurement study of GPU DVFS on energy conservation. https://www.researchgate.net/publication/262365062. Accessed 18 Mar 2020
Ge R, Vogt R, Majumder J, Alam A, Burtscher M, Zong Z (2013) Effects of dynamic voltage and frequency scaling on a K20 GPU. https://ieeexplore.ieee.org/document/6687422. Accessed 18 Mar 2020
Cameirinha DM (2015) Exploiting DVFS for GPU energy management. https://fenix.tecnico.ulisboa.pt/downloadFile/563345090414604/Dissertacao.pdf. Accessed 18 Mar 2020
Dutta B, Adhinarayanan V, Feng W (2018) GPU power prediction via ensemble machine learning for DVFS space exploration. https://www.researchgate.net/publication/326637320. Accessed 18 Mar 2020
Park J-G, Dutt N, Lim S-S (2017) ML-Gov: a machine learning enhanced integrated CPU-GPU DVFS governor for mobile gaming. https://www.researchgate.net/publication/320850321. Accessed 18 Mar 2020
Durillo JJ, Fahringer T (2014) From single- to multi-objective auto-tuning of programs: Advantages and implications. https://www.researchgate.net/publication/271724916. Accessed 18 Mar 2020
IntelliBreeze Software AB (2018) Awesome Miner. http://www.awesomeminer.com/home. Accessed 18 Mar 2020
Open Access funding provided by Projekt DEAL.
University of Bayreuth, Bayreuth, Germany
Matthias Stachowski, Alexander Fiebig & Thomas Rauber
Correspondence to Matthias Stachowski.
Stachowski, M., Fiebig, A. & Rauber, T. Autotuning based on frequency scaling toward energy efficiency of blockchain algorithms on graphics processing units. J Supercomput 77, 263–291 (2021). https://doi.org/10.1007/s11227-020-03263-5
Issue Date: January 2021
DVFS
Health insurance subscription among women in reproductive age in Ghana: do socio-demographics matter?
Hubert Amu and Kwamena Sekyi Dickson
Health Economics Review 2016, 6:24
https://doi.org/10.1186/s13561-016-0102-x
Received: 11 March 2016
Accepted: 7 June 2016
Given that health insurance schemes in Africa have only been introduced recently and continue to evolve, various concerns have been raised regarding their effectiveness in improving the utilisation of orthodox health care and reducing out-of-pocket expenditures for their populations, particularly women.
To examine the effects of socio-demographics on health insurance subscription among women in Ghana.
The study draws on the 2014 Ghana Demographic and Health Survey. Bivariate descriptive analysis and binary logistic regression were used to analyse the data.
Wealth status, age, religion, birth parity, marriage and ecological zone were found to have significantly predicted health insurance subscription among women in reproductive age in Ghana. Urban dwellers, women who are nulliparous, those with no or low levels of education, African traditionalists and the poor were those who largely did not subscribe to the scheme.
The findings underscore the need for the National Health Insurance Authority to carry out more education, in association with the National Commission for Civic Education and the Information Services Department, to recruit more urban dwellers, nulliparous women, those with no or low levels of education, African traditionalists and the poor onto the scheme.
Socio-demographics
Out-of-pocket payments
A major issue that continues to be of principal prominence in most countries across the globe is the capacity of their health financing structures to provide adequate financial risk protection to all of their population against the costs of health care as they strive to achieve universal health coverage [1, 2]. In developing countries, access to health care remains limited as a result of financial and socio-cultural challenges. Out-of-pocket payments are among the main factors which prevent the majority of the people in these countries from accessing timely health care [3]. This sometimes results in circumstances whereby entire countries face enormous financial hurdles through higher spending on the treatment of ailments [4].
Out-of-pocket payments constitute a strong barrier to the utilisation of health care services and preclude adherence to long-term treatment, especially among the vulnerable and poor [5]. Out-of-pocket payments for health care at the point of service delivery have also been troubling for the economic disposition of the poor, causing serious challenges with regard to essential daily needs as their incomes are drained by health care spending [6]. In order to replace the user-fee approach to health care financing and hence ameliorate the exorbitant health care costs for the masses, all member States of the World Health Organisation (WHO) adopted a resolution which aimed at encouraging nations to develop health financing systems with the aim of providing universal coverage [7]. One major approach to adequate health financing for many of these countries in their efforts to achieve universal health coverage has therefore become health insurance [8]. There has been considerable interest in exploring the capabilities of health insurance in Africa, a continent which is considered to have a strong propensity for risk sharing across populations and time [9]. A number of African countries (Rwanda, Nigeria, Tanzania, Kenya and Ghana) are thus currently experimenting with various health insurance options which comprise both private and public schemes [10–14].
Given that the health insurance schemes in Africa have been introduced within the last four decades and continue to evolve, various concerns have been raised regarding their effectiveness in reducing out-of-pocket expenditures for their populations and improving the utilisation of orthodox health care, a system in which health care professionals treat diseases and symptoms using radiation, drugs or surgery [15]. Access to effective health insurance has been noted to affect households by leading to better health, especially among women who constantly need maternal and child health services, as well as by mitigating the risk of health shocks and reducing out-of-pocket health expenses [16, 17]. It is therefore imperative to understand the various factors which affect the ability of people, particularly women of reproductive age, to subscribe to health insurance.
Even though Ghana's National Health Insurance Scheme (NHIS) became operational in 2003 through the National Health Insurance Law (Act 650 of Parliament), the scheme obtained a legal framework in 2004 through the National Health Insurance Regulations (L.I. 1809) [18, 19]. Financing of the NHIS comes from a 2.5 % insurance levy charged as Value Added Tax (VAT) on goods and services, 2.5 % deductions from pension contributions of workers in the formal sector with the Social Security and National Insurance Trust (SSNIT) and yearly premiums paid by adults (persons eighteen years of age and above) [20]. The scheme is also financed through monies allocated to the Health Insurance Fund (HIF) by the legislature in addition to grants, investments, donations, voluntary contributions and gifts [21]. SSNIT pensioners, persons who are seventy years and above, children under the age of eighteen years and pregnant women, however, are exempted from the payment of the yearly NHIS premium [20].
The National Health Insurance Scheme covers about 95 % of the disease burden of Ghana. These comprise services provided for out-patient clients such as diagnostic testing and operations including repair of hernia; most services for in-patient clients which include care by specialists, majority of surgeries and accommodation at the wards of health facilities; treatments for oral health; services related to maternal care including caesarean sections; emergency care; and all drugs that are listed on the medicines list of the National Health Insurance Scheme [22].
The institution mandated by law to manage the National Health Insurance Scheme is the National Health Insurance Authority (NHIA). The NHIA, in efforts to improve the scheme's performance in meeting the health needs of Ghanaians, has since its inception in 2003, taken a number of initiatives. These include the free maternal health care introduced in 2008, a health insurance claims processing center established in 2010, introduction of clinical audit in 2010, creation of a consolidated premium account in 2011 and introduction of the national health insurance call center in 2012 [23]. The Authority also introduced biometric identification cards in 2014 to improve client identification and efficient delivery of services [24].
The National Health Insurance Scheme was designed to be pro-poor [20]. In practice, however, most subscribers to the scheme are people in the upper wealth quintile, as the poor and vulnerable including women in their reproductive years are rather less likely to subscribe to the scheme. The present study therefore sought to examine the determinants of health insurance ownership among women of reproductive age in Ghana with data from the 2014 Ghana Demographic and Health Survey. Several studies have investigated the influence of health insurance on other issues including utilisation of maternal health care services and medical out-of-pocket expenses [13, 22, 25–29].
Much attention has, however, not been paid to the background factors which influence subscription to the scheme. Even though Kumi-Kyereme and Amo-Adjei [30] examined factors influencing health insurance subscription in Ghana using data from the 2008 Ghana Demographic and Health Survey (GDHS), the authors focused mainly on spatial location and household wealth as principal determinants. The present study, however, examines the effects of all background characteristics on health insurance subscription and also compares the results to the study conducted by Kumi-Kyereme and Amo-Adjei [30] to identify variations in the factors influencing subscription in the 2008 and 2014 GDHS.
Source of data
The study made use of data from the female data file of the 2014 GDHS. The GDHS is a nationwide survey which covers all ten regions and is conducted every five years. The survey is carried out by the Ghana Statistical Service and the Ghana Health Service, with ICF International providing technical support through MEASURE DHS. The GDHS focuses on child and maternal health and is designed to provide adequate data to monitor the population and health situation in Ghana. The survey gathers data on various demographic and health issues including fertility, contraceptive use, child health, nutrition, malaria, HIV and AIDS, family planning, health insurance and maternal health (antenatal care, delivery care and post-natal care). In the 2014 round, 9396 women between the ages of 15 and 49 were interviewed from 12,831 households covering 427 clusters throughout the country. The survey had a response rate of 97 % [31]. Permission to use the data set was given by MEASURE DHS following the assessment of a concept note.
The dependent variable employed for this study was ownership of NHIS. Since it was dichotomous, the dependent variable was coded 1 = "Yes" and 0 = "No". A discrete choice model was employed to show how the independent variables correlated with the dependent variable. Specifically, binary logistic regression was employed since it allows predictions from a mixture of continuous and categorical variables and is appropriate for dichotomous outcomes. A key assumption underlying the binary logistic regression model is that the dependent variable should be dichotomous in nature and the data should not contain outliers. The formula underpinning the model is given as:
Let Y be a dichotomous variable which is defined as
$$ Y=\begin{cases} 1 & \text{for subscribers} \\ 0 & \text{for non-subscribers} \end{cases} $$
and \( p = \Pr(Y = 1 \mid X_1, \dots, X_k) \). Then
$$ p = \frac{1}{1+\exp\left[-\left(\beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_k X_k\right)\right]} $$
and
$$ \hat{p} = \frac{1}{1+\exp\left[-\left(\hat{\beta}_0 + \hat{\beta}_1 X_1 + \hat{\beta}_2 X_2 + \dots + \hat{\beta}_k X_k\right)\right]} $$
Note: with no predictors, \( \hat{p} = \frac{\sum_{i=1}^{n} Y_i}{n} = \bar{Y} \).
Nine independent variables were used for the study: maternal age, marital status, educational level, residence, wealth quintile, ethnicity, occupation, parity (birth order) and region of residence. Maternal age was categorised into 15–19, 20–24, 25–29, 30–34, 35–39, 40–44 and 45–49. Marital status was recoded as single (never married, widowed, divorced, not living together), married and cohabitation (living together). Educational level was classified into four categories: no education, primary education, secondary education and higher education.
Type of residence was coded as urban and rural, while wealth quintile was categorised as poorest, poorer, middle, richer and richest. Ethnicity was recoded as Akan, Ga/Dangme, Ewe, Guan, Mole-Dagbani, Grusi, Gurma and Other. Occupation was captured as not working and working. Parity (birth order) was categorised as zero births, one birth, two births, three births and four births or more. Region of residence was recoded to capture the broad ecological zones as follows: the Northern, Upper West and Upper East regions were coded as the 'savannah zone'; the Brong-Ahafo, Ashanti and Eastern regions were designated as the 'forest zone'; and the Western, Central, Greater Accra and Volta regions were coded as the 'coastal zone'. Religion was captured as Catholic, Anglican, Methodist, Presbyterian, Pentecostal/charismatic, other Christian, Islam, African Traditional/spiritual and other. Survey weights, which are typical of nationally representative studies, were factored into both the inferential and the descriptive analyses; the weights help to offset the under- and over-sampling usually associated with national surveys. All analyses were conducted with Stata, version 13.
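The analyses reported here were run in Stata 13. Purely as an illustration of the model above (and not a reproduction of the paper's estimates), the following Python/statsmodels sketch fits a weighted binary logit on simulated data with hypothetical variable names and reports odds ratios with 95 % confidence intervals.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "wealth": rng.choice(["poorest", "poorer", "middle", "richer", "richest"], size=n),
    "zone":   rng.choice(["coastal", "forest", "savannah"], size=n),
    "parity": rng.integers(0, 5, size=n),
    "weight": rng.uniform(0.5, 1.5, size=n),      # survey weights
})
# simulate NHIS subscription with higher odds for wealthier women and women with children
logit = -0.5 + 0.3 * df["wealth"].isin(["richer", "richest"]) + 0.2 * (df["parity"] > 0)
df["nhis"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.glm(
    "nhis ~ C(wealth, Treatment('poorest')) + C(zone, Treatment('coastal')) + parity",
    data=df, family=sm.families.Binomial(), freq_weights=np.asarray(df["weight"]),
).fit()
print(np.exp(model.params))             # odds ratios
print(model.conf_int().apply(np.exp))   # 95 % confidence intervals for the odds ratios
```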
Table 1 presents the respondents who were registered under the NHIS, broken down by socio-demographic characteristics comprising residence, ecological zone, age, ethnicity, occupation, education, religion, wealth status, marital status and parity. In terms of residence, it was observed that less than half of the women in both rural (47.5 %) and urban (47.3 %) settlements had health insurance. While the majority of women in the forest zone (64.2 %) had insurance, less than 50 % of their counterparts in the coastal (37 %) and savannah (39.5 %) zones were NHIS subscribers. Less than half of the respondents in their late teens (42 %), early thirties (42.7 %) and late forties (41.2 %) also had insurance.
Table 1 Insurance coverage and background characteristics: proportion of women registered under the NHIS by residence, ecological zone, age, ethnicity, education, religion, wealth status, marital status and parity (weighted N = 9363). Source: GDHS 2014; 'proportion registered' denotes the proportion of women who had registered under the National Health Insurance Scheme (NHIS)
It was observed that less than 50 % of respondents who were Ga/Dangme (29.6 %), Ewe (38.6 %), Gurma (27.6 %) and other (44.1 %) owned health insurance. Respondents with no education also had the lowest insurance ownership (38.6 %). It was further observed that health insurance subscription was higher among those working than among those who were not working at the time of the survey. While over 50 % of married women had health insurance, less than half of those who were single or cohabiting were subscribed to the scheme. African Traditionalists were the least likely of the respondents to own health insurance. Only among respondents in the 'richer' and 'richest' wealth categories did the majority have health insurance. In terms of parity, women who had never given birth were the least likely to have health insurance. Overall, 47.4 % of the weighted sample was registered under the scheme. The Chi-square tests conducted showed that, with the exception of occupation, all the other background characteristics had significant associations with health insurance ownership.
Table 2 presents the results of the binary logistic regression of NHIS subscription among the women surveyed. The results reveal significant effects of wealth status (richer, richest), age (45–49), ecological zone (forest zone, savannah), religion (Pentecostal/charismatic, other Christian, African traditionalist/spiritualist, no religion), ethnicity (Ga/Dangme), education, marital status (married) and parity on subscription. Women in rural areas were more likely to own health insurance than their counterparts in urban areas. Those in the 'richer' wealth quintile were found to have 1.7 times the odds of registering with the NHIS compared with those in the 'poorest' wealth quintile (OR = 1.700). Respondents in the forest (OR = 2.863, CI = 2.393–3.426) and savannah (OR = 2.167, CI = 1.580–2.972) zones also had higher odds of owning health insurance than women in the coastal zone. Regarding religion, it was observed that only Anglicans (OR = 1.231, CI = 0.515–2.944) and Methodists (OR = 1.010, CI = 0.692–1.475) were more likely to own health insurance than Catholics. Irrespective of the number of births, respondents who had ever given birth were all at least 140 % more likely to own health insurance compared with those who had never given birth (Table 2).
Table 2 Binary logistic regression on ownership of health insurance: odds ratios with 95 % confidence intervals. Source: GDHS 2014; *p < 0.05, **p < 0.01, ***p < 0.001
Even though the proportion of the overall weighted sample registered under the scheme was slightly higher than the 40.2 % reported by Kumi-Kyereme and Amo-Adjei [30], it does not refute the argument made by Amu [32] that the NHIS has failed to cover 100 % of the Ghanaian population after five years, as stipulated in the objectives of the scheme upon its creation in 2003. The less than 50 % NHIS ownership observed within the weighted sample may be due to the fact that people are unable to afford the cost of subscribing to the scheme, as opined by Amu [32]. Boateng and Awunyor-Vitor [33] also argued that some people consider the premium for subscribing to the NHIS too expensive, and this serves as a barrier to their ownership of the scheme. This applies particularly to persons who pay annual premiums on the scheme, e.g., informal sector workers. The fact that the 'richer' and 'richest' wealth quintiles were the only categories within wealth status in which the majority were NHIS subscribers thus underscores the role of wealth in influencing the ability of people to subscribe to health insurance, as mentioned by Kumi-Kyereme and Amo-Adjei [30]. It also clearly explains the inability of the majority of women in the lower wealth quintiles to subscribe to the scheme.
The expensiveness of the yearly premium for subscribing to the NHIS was one of the reasons why, in 2008, prior to the general elections in Ghana, a one-time premium payment on the scheme became a major issue of debate among the various contesting political parties, with the National Democratic Congress (NDC) proposing its adoption; the issue still dominates debates in the country. With a one-time NHIS premium system, subscribers would be required to pay the premium only once in their entire lifetime [34]. The major argument against this policy by opposing political parties, including the New Patriotic Party (NPP), is, however, that a one-time premium would mean subscribers paying huge sums of money, which in itself defeats the purpose of introducing such a policy in the first place, namely to reduce the cost of paying for the scheme.
The finding that less than half of the women were subscribed to the scheme may also be attributed to the delays, caused by long queues, which characterise the registration and renewal of NHIS membership and the utilisation of health services with insurance, or to dissatisfaction with the quality of services received from the scheme regarding registration, renewal and health care use [10, 35]. The fact that African traditionalists were the least likely to own health insurance may be attributable to the fact that they largely believe in the use of herbal and/or African traditional medicine [36] at the expense of orthodox medicine, compared with people of other religious perspectives.
The fact that level of education predicted health insurance subscription is an indication that educational attainment cannot be ruled out where decisions regarding the utilisation of health care services are concerned. It was thus notable from the study that at least half of all women with secondary and higher levels of education were subscribed to the NHIS [37]. Amu [32] noted in this regard that people with high levels of education may have a better appreciation of the necessity of being prepared for unforeseen health challenges and therefore decide to own health insurance, as opposed to those with lower or no education, who may not realise the threat posed to their health and life if such challenges occur while they are financially unprepared [38].
As observed in our study, a woman's age does inform her decisions regarding health insurance subscription. Age influences the perception of susceptibility to health conditions and the seriousness attached to such conditions [39]. The perceived level of susceptibility then results in decisions either to subscribe to health insurance in order to be ready for unforeseen health challenges or not to do so [38]. Women in rural areas being more likely to subscribe to health insurance than women in urban areas is an indication of the higher need for health care services documented to exist in rural areas [40–42]. Thus, women in rural areas were more likely to subscribe to health insurance so as to be able to afford health care in times of need, as they may not be able to afford out-of-pocket payments if they fall sick [43].
Our findings regarding ecological zone confirm those of Kumi-Kyereme and Amo-Adjei [30], who reported that the highest percentage of women subscribed to the NHIS was in the forest zone while the lowest was in the coastal zone. The fact that women with zero parity had the lowest rate of subscription to the NHIS may be because they did not require pregnancy and childbirth services, which are the services that lead most women of reproductive age in Ghana to subscribe to the NHIS, as pregnant women are exempted from the payment of premiums [44].
We found that education, wealth status, age, religion, birth parity, marriage and ecological zone predict health insurance subscription among women of reproductive age in Ghana. Urban dwellers, women who have never given birth, those with no or low levels of education, African traditionalists and the poor were the women who largely did not subscribe to the scheme. The NHIA should therefore carry out more education than it is currently doing, in association with the National Commission for Civic Education and the Information Services Department, to recruit more of these groups onto the scheme. The indigent, for instance, should be made aware through these educational campaigns that subscription to the NHIS is free of charge for them, which would encourage them to subscribe.
We wish to thank MEASURE DHS for granting us permission to use their data for our analysis.
HA and KSD conceived the study. KSD designed and performed the analysis. HA drafted and edited the manuscript. Both authors proof-read the final manuscript and approved it.
Department of Population and Health, University of Cape Coast, Cape Coast, Ghana
Carman KG, Eibner C. Changes in health insurance enrollment since 2013: evidence from the RAND health reform opinion study. Washington DC: RAND Corporation; 2014.
Lagomarsino G, Garabrant A, Adyas A, Muga R, Otoo N. Moving towards universal health coverage: health insurance reforms in nine developing countries in Africa and Asia. Lancet. 2012;380:933–43.
Van Doorslaer E, O'donnell O, Rannan-Eliya RP, Somanathan A, Adhikari SR, Garg C, et al. Effect of payments for health care on poverty estimates in 11 countries in Asia: an analysis of household survey data. Lancet. 2006;368:1357–64.
Palmer R, Weiss BD. Relationship between health care costs and very low literacy skills in a medically needy and indigent medicaid population. J Am Board Fam Pract. 2004;17:44–7.
Sarpong N, Loag W, Fobil J, Meyer CG, Adu-Sarkodie Y, May J, Schwarz NG. National health insurance coverage and socio-economic status in a rural district of Ghana. Tropical Med Int Health. 2010;15(2):191–7. doi:10.1111/j.1365-3156.2009.02439.x.
Leive A, Xu K. Coping with out-of-pocket health payments: empirical evidence from 15 African countries. Bull World Health Organ. 2008;86:849–56.
World Health Organization. Sustainable health financing, universal coverage and social health insurance. World Health Assembly Resolution 58. Geneva: WHO; 2005.
Aggrey M, Appiah SCY. The influence of clients' perceived quality on health care utilisation. Int J Innov Appl Stud. 2014;9(2):918–24.
Wagstaff A. Social health insurance reexamined. Health Econ. 2010;19:503–17.
Mulupi S, Kirigia D, Chuma J. Community perceptions of health insurance and their preferred design features: implications for the design of universal health coverage reforms in Kenya. BMC Health Serv Res. 2013;13:474. doi:10.1186/1472-6963-13-474.
Lekashingo LD. Exploring the effects of user fees, quality of care and utilisation of health services on enrolment in community health fund, Bagamoyo District, Tanzania. Master's thesis. Dar es Salaam: Muhimbili University of Health and Allied Sciences; 2012.
Republic of Kenya. Kenya national health accounts 2009/2010. Nairobi: Ministry of Medical Services and Ministry of Public Health and Sanitation; 2011.
Mensah J. The impact of national health insurance scheme on health delivery in Brong Ahafo Region: a case study on Jaman North. Master's thesis. Kumasi: Kwame Nkrumah University of Science and Technology; 2011.
Unumeri GO. Perception and conflict. Lagos: National Open University of Nigeria; 2009.
Chaudhury A, Roy K. Changes in out-of-pocket payments for health care in Vietnam and its impact on equity in payments, 1992–2002. Health Policy. 2008;88:38–48.
Currie J, Madrian B. Health, health insurance and the labor market. Handbook of Labor Economics. Amsterdam: Elsevier-North Holland; 2005.
Xu K, Evans DB, Kawabata K, Zeramdini R, Klavus J, Murray CJ. Household catastrophic health expenditure: a multicountry analysis. Lancet. 2003;362:111–7.
Government of Ghana. National health insurance act, 2003 (Act 650). Accra: Ghana Publishing Corporation; 2003.
Government of Ghana. National health insurance regulations, 2004 (L.I. 1809). Accra: Ghana Publishing Corporation; 2004.
Universal Access to Health Care Campaign Coalition. Ten years of the national health insurance scheme in Ghana: a civil society perspective on its successes and failures. Accra: Universal Access to Health Care Campaign Coalition; 2013.
Boakye-Frimpong P. The quest for equity in the provision of health care in Ghana. Afr Rev Econ Finance. 2013;4(2):254–72.
Blanchet NJ, Fink G, Osei-Akoto I. The effect of Ghana's national health insurance scheme on health care utilisation. Ghana Med J. 2012;46(2):76–84.
National Health Insurance Authority (NHIA). 2012 annual report. Accra: NHIA; 2012.
National Health Insurance Authority (NHIA). Functions of the authority. 2015. Accessed 17 Jan 2016 from http://www.nhis.gov.gh/nhia.aspx.
Brugiavini A, Pace N. Extending health insurance in Ghana: effects of the National Health Insurance Scheme on maternity care. Health Econ Rev. 2016;6:7.
Dapatem DA. Nine million Ghanaians use health insurance. 2013. Accessed 16 Nov 2015 from http://www.graphic.com.gh/.
Adjei AM. The impact of national health insurance on community pharmacies: a case study of the Western Region of Ghana. Master's thesis. Kumasi: Kwame Nkrumah University of Science and Technology; 2012.
Ghana Health Service. An evaluation of the effects of the national health insurance scheme in Ghana. Bethesda: Abt Associates Inc. and Ghana Health Service; 2009.
Aikins M, Okan G. Effects of health insurance on utilisation and cost of health service. Accra: JSA Consultants Ltd; 2005.
Kumi-Kyereme A, Amo-Adjei J. Effects of spatial location and household wealth on health insurance subscription among women in Ghana. BMC Health Serv Res. 2013;13:221. doi:10.1186/1472-6963-13-221.
Ghana Statistical Service (GSS), Ghana Health Service (GHS), ICF International. Ghana demographic and health survey 2014: key indicators report. Maryland: GSS, GHS and ICF; 2015.
Amu H. Health insurance subscription in the Cape Coast Metropolis. Unpublished Master's thesis. Cape Coast: University of Cape Coast; 2015.
Boateng D, Awunyor-Vitor D. Health insurance in Ghana: evaluation of policy holders' perceptions and factors influencing policy renewal in the Volta Region. Int J Equity Health. 2013;12:50. doi:10.1186/1475-9276-12-50.
Allotey A. Financing health care in Ghana: is one-time insurance premium the answer? 2012. Accessed 16 Jan 2016 from http://www.myjoyonline.com/ghana-news/opinion.php.
Jehu-Appiah C, Aryeetey G, Agyepong I, Spaan E, Baltussen R. Household perceptions and their implications for enrolment in the national health insurance scheme in Ghana. Health Policy Plan. 2012;27:222–33.
Adjei B. Utilisation of traditional herbal medicine and its role in health care delivery in Ghana: the case of Wassa Amenfi West District. Master's thesis. Kumasi: Kwame Nkrumah University of Science and Technology; 2013.
Andersen RM. National health surveys and the behavioural model of health services use. Med Care. 2008;46(7):647–53.
Mhere F. Health insurance determinants in Zimbabwe: case of Gweru urban. J Appl Bus Econ. 2013;14(2):62–79.
Karen G, Rimer BK, Viswanath K. Health behaviour and health education: theory, research and practice. 4th ed. San Francisco: Jossey-Bass; 2008.
Alkire S, Chatterjee M, Conconi A, Suman S, Vaz A. Poverty in rural and urban areas: direct comparisons using the global MPI 2014. Oxford: Oxford Poverty & Human Development Initiative; 2014.
World Bank. Agriculture and poverty reduction. Washington DC: World Bank; 2013.
O'Hare W. Poverty is a persistent reality for many rural children in U.S. Washington DC: Population Reference Bureau; 2014.
Duku SKO, Fenenga CJ, Alhassan RK, Nketiah-Amponsah E. Rural-urban differences in the determinants of enrolment in health insurance in Ghana. Paris: International Union for the Scientific Study of Population; 2013.
|
CommonCrawl
|
Autonomic nervous system response to remote ischemic conditioning: heart rate variability assessment
Daniel Noronha Osório1,2 na1,
Ricardo Viana-Soares3 na1,
João Pedro Marto3,4,
Marcelo D. Mendonça3,4,5,
Hugo P. Silva2,6,7,
Cláudia Quaresma1,
Miguel Viana-Baptista3,4,
Hugo Gamboa1 na1 &
Helena L. A. Vieira ORCID: orcid.org/0000-0001-9415-37423 na1
BMC Cardiovascular Disorders volume 19, Article number: 211 (2019)
Remote ischemic conditioning (RIC) is a procedure applied to a limb to trigger endogenous protective pathways in distant organs, namely the brain or heart. The underlying mechanisms of RIC are still not fully understood; they are hypothesized to be mediated by humoral factors, immune cells and/or the autonomic nervous system. Herein, heart rate variability (HRV) was used to evaluate the electrophysiological processes occurring in the heart during RIC and, in turn, to assess the role of the autonomic nervous system.
Healthy subjects were submitted to the RIC protocol and electrocardiography (ECG) was used to evaluate HRV, by assessing the variability of time intervals between two consecutive heart beats. This is a pilot study based on the analysis of 18 ECG recordings from healthy subjects submitted to RIC. HRV was characterized in three domains (time, frequency and non-linear features) that can be correlated with autonomic nervous system function.
RIC procedure increased significantly the non-linear parameter SD2, which is associated with long term HRV. This effect was observed in all subjects and in the senior (> 60 years-old) subset analysis. SD2 increase suggests an activation of both parasympathetic and sympathetic nervous system, namely via fast vagal response (parasympathetic) and the slow sympathetic response to the baroreceptors stimulation.
RIC procedure modulates both parasympathetic and sympathetic autonomic nervous system. Furthermore, this modulation is more pronounced in the senior subset of subjects. Therefore, the autonomic nervous system regulation could be one of the mechanisms for RIC therapeutic effectiveness.
Organisms have developed endogenous mechanisms of defence against external aggressions. Therapeutic strategies enhancing these mechanisms can be more efficient and safer than pharmacological exogenous treatments. Hormesis or conditioning, a procedure in which noxious stimuli below the threshold of damage are applied to a tissue or system, promotes cellular tolerance against more severe stimuli [1]. Interestingly, it was found that conditioning could be applied to a distant (remote) non-vital organ, such as a limb, while still exerting its effects in vital organs [2]. Remote ischemic conditioning (RIC) is a good example of this: in humans, it is easily applied by repetitive inflation (occlusion) and deflation (non-occlusion) of a blood pressure cuff on a limb, causing transient limb ischemia and remotely triggering self-protective pathways in the brain, heart, kidney or liver [2]. The exact mechanisms of RIC are not yet known; nevertheless, signal transmission from the remote location to target organs is hypothesized to be mediated by humoral factors, immune cells and/or the autonomic nervous system [2].
The involvement of the autonomic nervous system was discovered when pharmacological blockade of ganglionic neurons inhibited RIC in animal models of cerebral and heart ischemia [3,4,5]. In experimental models, bilateral vagotomy, blockade of opioid receptors or spinal cord resection all abolished RIC effects [6,7,8,9]. Thus, evidence suggests that autonomic nervous system is involved in RIC-induced protection in experimental models.
RIC clinical studies in myocardial infarction patients have reported reductions in infarcted area, as well as improved left-ventricle ejection fractions, reduced creatine-kinase myocardial plasma release and even ST-segment elevation resolution [10,11,12,13,14]. In acute ischemic stroke, two proof-of-concept clinical trials showed that RIC can increase tissue survival after 1 month [15, 16] and improve neurological outcome [17]. In patients with symptomatic intracranial arterial stenosis, daily and bilateral application of RIC reduced stroke recurrence [18]. Despite this clinical evidence of a beneficial role of RIC, the underlying mechanisms are still unclear in humans, namely whether it acts via circulating signalling molecules or via the autonomic nervous system.
Herein, a pilot study was performed to evaluate potential alterations in autonomic nervous system due to RIC. In healthy subjects, heart rate variability (HRV) was assessed through electrocardiography (ECG) during RIC procedure. HRV studies the variation between the interval of consecutive beats and it can be described by a set of features correlating these variations with the autonomic nervous system [19]. Measuring HRV before, after and during RIC procedure can be used to clarify the involvement of autonomic nervous system in the mechanisms of remote ischemic conditioning. Moreover, two subsets (young and senior) were studied because aging might impact the autonomic nervous system activity.
A total of 20 subjects were selected according to two age subgroups: senior and young. Senior subjects were recruited through our hospital volunteers' association, "Liga dos Amigos do Hospital Sao Francisco Xavier", while the younger subjects were recruited at Nova Medical School. The following exclusion criteria were applied: any previous neurological disease or neurosurgical procedure, severe heart failure (NYHA class III or higher), peripheral artery disease, skin ulcers or other severe dermatological disease. Subjects were also excluded per investigator judgment if they had any unstable or severe disease. Subjects were screened for vascular risk factors (arterial hypertension, diabetes, dyslipidemia, smoking, obesity, coronary artery disease, atrial fibrillation) and current medication, which are summarized in Additional file 3: Table S1.
For this study, ECG and blood volume pulse (BVP) signals were recorded during the RIC procedure. The BVP signal was used to confirm blood occlusion. The ECG signal was recorded using a 1-lead local differential bipolar sensor from PLUX® (Portugal), placed on the left chest, above the heart. This sensor has an input range of ±1.5 mV, a signal bandwidth of 0.5–100 Hz, an input impedance of > 100 GOhm and a common mode rejection ratio of 100 dB (datasheet available at https://www.biosignalsplux.com/datasheets/ECG_Sensor_Datasheet.pdf). The interface with the body is made through Ag/AgCl electrodes with a solid adhesive gel. PLUX® (Portugal) also provided two BVP sensors and the data acquisition module (datasheet available at https://biosignalsplux.com/datasheets/biosignalsplux_hub_Spec_Sheet.pdf). The data were streamed via Bluetooth to a nearby computer at a 1000 Hz sampling rate and 16-bit resolution.
The RIC protocol consisted of four cycles of 5-min ischemia/5-min reperfusion, applied to the upper limb (Fig. 1). Ischemia was performed with a blood pressure cuff inflated to above 220 mmHg or at least 20 mmHg above the subject's systolic arterial pressure. RIC-associated adverse reactions were screened during the entire procedure.
Timeline of the RIC procedure (time in minutes). Four periods of 5-min occlusion were applied with an inflated limb-cuff, each period followed by 5-min of rest (cuff deflated). Baseline recordings were taken immediately before and after RIC
The protocol can be summarized as follows:
Explanation of the study, as well as its objectives, and obtainment of participants' informed consent;
Connect the ECG lead, BVP finger sensors and place the blood pressure cuff in the left arm;
For 10 min, record resting ECG and BVP signal from the test subject;
Rapidly inflate the cuff and keep the pressure high enough (about 220 mmHg) to allow the occlusion of the brachial artery. Keep the cuff inflated for 5-min;
Deflate the cuff. Keep the cuff deflated for another 5-min;
Repeat the inflation (occlusion) and deflation (non-occlusion) 3 times more;
Keep the cuff deflated for another 5-min;
All subjects were under the same conditions, namely: the RIC procedure was applied between 9 and 10 am, in a quiet and isolated room, free from distraction. Subjects were not fasted and had taken their usual medication.
QRS detection
According to [19], HRV features should be calculated over either 24-h or 5-min intervals. Since each RIC step lasts 5 min, each subject's ECG was manually segmented into 5-min segments, using the BVP signal as reference. After segmentation, the Pan-Tompkins algorithm [20] was applied for R-peak detection.
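For illustration, a minimal Python sketch of this step (5-min segmentation followed by R-peak detection) is given below. It relies on SciPy's generic peak finder rather than a full Pan-Tompkins implementation, and the array names, thresholds and filter settings are assumptions for the sketch, not the authors' code.

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 1000                    # sampling rate (Hz), as in the acquisition described above
SEG_SAMPLES = 5 * 60 * FS    # 5-min analysis windows

def split_segments(ecg, seg_samples=SEG_SAMPLES):
    # Cut the ECG into consecutive 5-min segments; any remainder is discarded.
    n = (len(ecg) // seg_samples) * seg_samples
    return ecg[:n].reshape(-1, seg_samples)

def detect_r_peaks(segment, fs=FS):
    # Simplified R-peak detector: band-pass around the QRS band, square the signal,
    # then search for peaks separated by at least a 250 ms refractory period.
    b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    energy = filtfilt(b, a, segment) ** 2
    peaks, _ = find_peaks(energy, distance=int(0.25 * fs),
                          height=energy.mean() + 2 * energy.std())
    return peaks  # sample indices of detected R peaks

# R-R intervals in milliseconds for one segment:
# rr_ms = np.diff(detect_r_peaks(segment)) / FS * 1000.0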
Heart rate variability features
The features assessed from the HRV can be divided in three domains: time, frequency and non-linear domain.
In the time domain, the most commonly used features are the mean R-R interval, the median, the root mean square of successive differences (rMSSD) and the pNN50 [19]. Using the positions of the R-peaks, the difference between two consecutive beats was calculated to obtain the inter-beat intervals and their mean, median and standard deviation. Then pNN50 and rMSSD were calculated. The pNN50 is the ratio of the number of successive R-R interval changes exceeding a 50 ms threshold (NN50) to the total number of R-R intervals [21, 22]. Both rMSSD and pNN50 reflect parasympathetic (vagal) activity: a decrease in pNN50 suggests lower parasympathetic activity, and rMSSD correlates with short-term HRV, which is used to estimate vagally mediated changes reflected in HRV.
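A minimal sketch of these time-domain features, computed from a vector of R-R intervals in milliseconds, could look as follows (variable and function names are illustrative, not the authors' code):

import numpy as np

def time_domain_features(rr_ms):
    # Mean, median, SD, rMSSD and pNN50 of an R-R interval series (ms).
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)                      # successive differences
    rmssd = np.sqrt(np.mean(diff ** 2))     # short-term, vagally mediated HRV
    nn50 = np.sum(np.abs(diff) > 50.0)      # successive changes larger than 50 ms
    pnn50 = 100.0 * nn50 / len(diff)        # expressed as a percentage
    return {"mean_rr": rr.mean(),
            "median_rr": np.median(rr),
            "sdrr": rr.std(ddof=1),
            "rmssd": rmssd,
            "pnn50": pnn50}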
In the frequency domain, the normalized low-frequency (LF) and high-frequency (HF) power spectra were calculated, as well as their ratio. The literature specifies that the LF band covers frequencies from 0.04 Hz to 0.15 Hz, while the HF band ranges from 0.15 Hz to 0.4 Hz [19]. Since the R-R interval series is not evenly sampled, the tachogram must be resampled to an equally sampled signal, which can be achieved by interpolating it to a higher frequency [19, 23]. For this study, the inter-beat interval signal was resampled at 10 Hz using a cubic spline interpolation and the mean value (the DC component) was subtracted from the signal. Isolated ectopic beats were corrected by linear interpolation. To obtain the frequency power spectral density (PSD), the Welch method [24] was used, with a 256-sample Hanning window. The normalized LF and HF power bands were obtained by dividing the power in each band by the total power minus the VLF spectral power (< 0.04 Hz) [19].
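The processing chain just described (10 Hz cubic-spline resampling of the tachogram, mean removal, Welch PSD with a 256-sample window, and normalization by total power minus VLF) might be sketched in Python as below; the band limits follow the text, but this is an illustrative sketch under those assumptions rather than the authors' implementation.

import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import welch

def frequency_domain_features(rr_ms, fs_resample=10.0):
    # Normalized LF and HF power (and their ratio) from an R-R series in ms.
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                      # beat times in seconds (unevenly spaced)
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs_resample)
    rr_uniform = CubicSpline(t, rr)(t_uniform)      # evenly resampled tachogram
    rr_uniform -= rr_uniform.mean()                 # remove the DC component

    f, psd = welch(rr_uniform, fs=fs_resample, window="hann", nperseg=256)

    def band_power(lo, hi):
        mask = (f >= lo) & (f < hi)
        return np.trapz(psd[mask], f[mask])

    vlf = band_power(0.0, 0.04)
    lf = band_power(0.04, 0.15)
    hf = band_power(0.15, 0.40)
    total = np.trapz(psd, f)
    nu_lf = 100.0 * lf / (total - vlf)
    nu_hf = 100.0 * hf / (total - vlf)
    return {"nuLF": nu_lf, "nuHF": nu_hf, "LF/HF": nu_lf / nu_hf}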
The HF band reflects faster changes of heart rate, which are associated with parasympathetic activity; indeed, parasympathetic activity produces responses of higher frequency than those of the sympathetic branch [25]. The LF band is a combination of both the fast vagal response (part of the parasympathetic nervous system) and the slow sympathetic response to baroreceptor stimulation [26, 27]. The LF/HF ratio can also be used to estimate the sympathetic/vagal balance [19]. Nevertheless, there is no consensus about the contribution of the sympathetic and parasympathetic branches to the LF band [28, 29].
Finally, in the non-linear domain, the Poincaré plot is used to describe the nature of R-R interval fluctuations by plotting each R-R interval (R-Rn) against the next one (R-Rn+1) [30]. The resulting cloud of points can usually be fitted with an ellipse, which is then used to describe the Poincaré plot. The ellipse width corresponds to the SD1 axis, a linear scaling of the standard deviation of successive differences and an important short-term measure of HRV, while the ellipse length corresponds to the SD2 axis, which reflects long-term HRV and is more consistent than the standard deviation of successive R-R intervals [30]. The Poincaré descriptors SD1 and SD2 were calculated using Eqs. 1 and 2 developed by Brennan and colleagues [30], obtained using an ellipse fitting technique:
$$ SD1^2=\frac{1}{2}\,\mathrm{Var}\left(RR_n-RR_{n+1}\right)=\frac{1}{2}\,SDSD^2 \qquad (1) $$
$$ SD2^2=2\,SDRR^2-\frac{1}{2}\,SDSD^2 \qquad (2) $$
In Eq. 1, Var represents the variance and SDSD is the standard deviation of successive differences, i.e. of the series obtained by subtracting each R-R interval from the next.
In Eq. 2, SDRR is the standard deviation of the R-R intervals.
SD1 is mostly influenced by the parasympathetic activity, while SD2 is influenced by both the sympathetic and parasympathetic activities.
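Eqs. 1 and 2 translate directly into a few lines of code; the sketch below (again assuming an R-R series in milliseconds) mirrors the ellipse-fitting formulation of Brennan and colleagues and is provided for illustration only.

import numpy as np

def poincare_sd1_sd2(rr_ms):
    # SD1 (ellipse width) and SD2 (ellipse length) following Eqs. 1 and 2.
    rr = np.asarray(rr_ms, dtype=float)
    sdsd = np.std(np.diff(rr), ddof=1)      # SD of successive differences
    sdrr = np.std(rr, ddof=1)               # SD of the R-R intervals
    sd1 = np.sqrt(0.5 * sdsd ** 2)                      # Eq. 1
    sd2 = np.sqrt(2.0 * sdrr ** 2 - 0.5 * sdsd ** 2)    # Eq. 2
    return sd1, sd2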
Features analysis
The analysed parameters were summarized in tables and filtered by age. Additional file 1: Figure S1 shows boxplots of all analysed HRV features for each 5-min interval of the entire procedure in the global population. Finally, the first and last 10 min of each table were used to compare the effects of RIC in the entire sample and in each age group.
The non-parametric Wilcoxon signed-rank test was used to statistically assess the effect of the RIC procedure on the different subjects. For each HRV parameter, we analysed and compared (i) sample pairs for occlusion and non-occlusion intervals and (ii) sample pairs for the first 10 min before the procedure and the 10 min after the last occlusion. The same analysis was performed in the young and senior population subsets.
The significance level used was 0.05 (p < 0.05). Out of the 18 test subjects, one had to be removed from the statistical analysis due to the lack of one time interval (before the procedure). The correlations between changes in SD2 and changes in pNN50 and rMSSD were also assessed non-parametrically.
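As a concrete illustration, a paired Wilcoxon signed-rank test on, for example, SD2 before versus after the procedure can be run with SciPy as sketched below; the arrays are hypothetical placeholders for per-subject values, not the study data.

import numpy as np
from scipy.stats import wilcoxon

# hypothetical per-subject SD2 values (ms), one entry per subject
sd2_before = np.array([44.3, 60.1, 52.7, 39.8, 71.2, 48.6])
sd2_after = np.array([55.0, 66.4, 58.9, 47.5, 74.0, 53.2])

stat, p_value = wilcoxon(sd2_before, sd2_after)   # paired, non-parametric
print(f"Wilcoxon statistic = {stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("SD2 changed significantly after RIC at the 0.05 level")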
All calculations and data processing were done in the Python programming language (v. 3.6.4) using the following libraries: NumPy (v. 1.10.4), SciPy (v. 0.16.1) and Nova Instrumentation (v. 1.0).
Among the 20 subjects initially included, 10 in each age subgroup, two of the senior subjects were excluded from the analysis due to noisy segments that did not allow R-peak detection. In total (n = 18), 11 (61.1%) were female, and the mean age was 47.0 ± 21.9 years. For detailed information on vascular risk factors and current medication see Additional file 3: Table S1.
HRV features before and after RIC procedure
HRV features were analysed during the 10 min before the procedure and the 10 min after the last occlusion, to assess whether four cycles of RIC can modulate the autonomic nervous system. All calculated values and the statistical analysis (Wilcoxon signed-rank test) are presented in Additional file 4: Table S2 for the entire subject population. Additional files 5 and 6: Tables S3 and S4 present the senior and young subsets, respectively.
During aging, there is a decline in the endogenous response to stress and in the organism's defences. Thus, the use of two subsets (young and senior) may disclose potentially different responses of the autonomic nervous system to RIC and putative novel mechanisms associated with aging. Furthermore, co-morbidities associated with aging, such as cardiovascular disease or Diabetes Mellitus, might also influence the autonomic nervous system response.
The non-linear feature SD2 is the single parameter that increased significantly after the RIC procedure, both in the global analysis (Fig. 2 and Additional file 4: Table S2) and in the senior subset (Fig. 3 and Additional file 5: Table S3), whereas in the young subset SD2 did not change significantly (Fig. 4 and Additional file 6: Table S4). SD2 is associated with long-term HRV, which is a combination of the fast vagal response (parasympathetic nervous system) and the slow sympathetic response to baroreceptor stimulation. Thus, one can propose that RIC might play a role in both sympathetic and parasympathetic activities. Concerning time- and frequency-domain parameters, no statistically significant differences were found before and after the RIC procedure.
Global analysis of HRV features before and after the RIC procedure. Each graph corresponds to one HRV feature: mean R-R interval (ms); median R-R interval (ms); percentage of intervals falling outside a 50 ms difference, pNN50 (%); root mean square of successive differences of the R-R interval values per event, rMSSD (ms); normalized low frequency power spectrum density, nuLF PSD (%); normalized high frequency power spectrum density, nuHF PSD (%); LF and HF normalized power spectrum density ratio, LF/HF; SD1 axis of the Poincaré plot, SD1 axis (ms); SD2 axis of the Poincaré plot, SD2 axis (ms); and SD1/SD2 per event. Red lines correspond to mean values, black lines correspond to median values and * p-value < 0.05
Senior subset analysis of HRV features before and after the RIC procedure. Each graph corresponds to one HRV feature: mean R-R interval (ms); median R-R interval (ms); percentage of intervals falling outside a 50 ms difference, pNN50 (%); root mean square of successive differences of the R-R interval values per event, rMSSD (ms); normalized low frequency power spectrum density, nuLF PSD (%); normalized high frequency power spectrum density, nuHF PSD (%); LF and HF normalized power spectrum density ratio, LF/HF; SD1 axis of the Poincaré plot, SD1 axis (ms); SD2 axis of the Poincaré plot, SD2 axis (ms); and SD1/SD2 per event. Red lines correspond to mean values, black lines correspond to median values and * p-value < 0.05
Young subset analysis of HRV features before and after the RIC procedure. Each graph corresponds to one HRV feature: mean R-R interval (ms); median R-R interval (ms); percentage of intervals falling outside a 50 ms difference, pNN50 (%); root mean square of successive differences of the R-R interval values per event, rMSSD (ms); normalized low frequency power spectrum density, nuLF PSD (%); normalized high frequency power spectrum density, nuHF PSD (%); LF and HF normalized power spectrum density ratio, LF/HF; SD1 axis of the Poincaré plot, SD1 axis (ms); SD2 axis of the Poincaré plot, SD2 axis (ms); and SD1/SD2 per event. Red lines correspond to mean values, black lines correspond to median values and * p-value < 0.05
Because the sample size is limited, and in order to further evaluate the role of RIC on the autonomic nervous system, correlations between key features were also computed for all subjects. For each subject and feature, the difference between its value before and after the RIC procedure was calculated, and the differences were then correlated across features. Changes in SD2 were significantly and positively correlated with changes in pNN50 and rMSSD. These results reinforce the involvement of the parasympathetic system via the vagal response in RIC, since SD2 correlates positively with pNN50 (r = 0.45, p = 0.03) and with rMSSD (r = 0.54, p = 0.01), both of which are associated with the parasympathetic system, with rMSSD reflecting the vagal response. Nevertheless, one cannot exclude the involvement of sympathetic activity, since SD2 is related to both branches of the autonomic nervous system.
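The per-subject differences (after minus before) can then be correlated across features. A sketch using Spearman's rank correlation is shown below, since the text reports r and p values; the exact non-parametric procedure used by the authors may differ, and the numbers are hypothetical placeholders.

import numpy as np
from scipy.stats import spearmanr

# hypothetical per-subject changes (after minus before) in two features
delta_sd2 = np.array([10.7, 6.3, 6.2, 7.7, 2.8, 12.1])
delta_pnn50 = np.array([2.1, 0.9, 1.5, 1.8, -0.2, 3.0])

r, p = spearmanr(delta_sd2, delta_pnn50)
print(f"Spearman r = {r:.2f}, p = {p:.3f}")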
HRV features in non-occlusion vs occlusion intervals
HRV analysis during non-occlusion and occlusion intervals revealed no differences in time-, frequency- or non-linear-domain features in the senior or young subset analyses (Additional files 5 and 6: Tables S3 and S4). Nevertheless, in the global analysis there was an increase in the time-domain parameter rMSSD during the non-occlusion periods (Additional file 2: Figure S2 and Additional file 4: Table S2). Thus, during the non-occlusion periods there might be an increase in parasympathetic activity, since rMSSD is associated with short-term HRV and with the vagal response.
In this pilot study, the role of RIC on the autonomic nervous system was assessed by analysing several HRV parameters before and after the RIC procedure, namely time-, frequency- and non-linear-domain features. Among the measured and analysed parameters, only SD2 was significantly altered by the RIC procedure. This change was positively and significantly correlated with changes in pNN50 and rMSSD, in line with the proposed biological meaning of these features. The increase in SD2 reflects an increase in heart rate variability. In fact, higher HRV has been associated with health-promoting conditions and can even have prognostic value [31].
The SD2 increase appears to be larger in the senior subset; accordingly, age appears to influence the response of the autonomic nervous system to RIC. Moreover, before the RIC procedure the SD2 parameter was much lower in the senior subset (44.309 ms, Additional file 5: Table S3) than in the young one (86.350 ms, Additional file 6: Table S4). Thus, the more pronounced difference in SD2 found in the senior group could be due to a lower basal autonomic nervous activity. Furthermore, multiple studies report that HRV parameters decrease with age [32,33,34]. Indeed, when parameters were analysed before the RIC procedure, reflecting basal HRV levels, pNN50, rMSSD, SD2 and LF were higher in the young subset than in the senior subset, in accordance with Antelmi and colleagues [32], who showed that HRV decreases with age. In addition, the risk of cardiovascular disease increases with age, which might also decrease the response of the autonomic nervous system. In particular, Diabetes Mellitus (DM) is known to be associated with a decrease in HRV, which is thought to be related to the deleterious effects of hyperglycemia on the nerves, leading to autonomic cardiac neuropathy. However, it is also known that the severity of the reduction in variability is related to poor glycemic control and higher HbA1c levels [35]. Although a differential confounding role of diabetes cannot be excluded, only 2 subjects (11.1%) presented a diagnosis of type 2 DM (T2DM). Moreover, these subjects had proper glycemic control and no clinical evidence of neurological dysfunction. Thus, this group is not expected to significantly affect the final findings. Finally, in the literature there are no definitive HRV parameters that can be used as an optimal biomarker of CVD; however, there is evidence pointing to a correlation between reduced HRV and increased risk of CVD, in particular for the following parameters: SDNN, nuLF and nuHF [19, 36, 37].
Furthermore, no difference in HRV was found when gender was considered (data not shown). In fact, in individuals older than 60 years the differences in HRV features between genders are negligible [32], due to menopause and lower oestrogen levels in women [38]. Although HRV varies considerably with the circadian rhythm [39], the RIC procedure was always applied at the same hour and within a short time window (total procedure duration 60 min); thus, daily fluctuations are unlikely to play a key role in the variations found in the heart rate parameters. Finally, discomfort and pain could be confounding factors able to modulate the autonomic nervous system; however, out of 18 subjects only 2 reported mild discomfort and one a mild paresthesia. Thus, the observed changes in the autonomic nervous system should be due to the RIC procedure and not to pain or discomfort.
We expected to find significant differences in HRV parameters during the alternation of occlusion and non-occlusion periods, since a disturbance is introduced into the organism. Nevertheless, in the global analysis only the rMSSD parameter increased during the non-occlusion periods, suggesting higher parasympathetic activity.
RIC-induced cytoprotection can be promoted by (i) modulation of the autonomic nervous system, (ii) production and release of bio-molecules and/or (iii) immune cell signalling. In fact, in experimental models RIC also mediates distant organ protection through the release of blood-borne factors. In mice, RIC increases nitric oxide levels, which induces vasodilation and increases cerebral blood flow, besides protecting mitochondria from oxidative stress [40]. Other autacoids have been identified as possible mediators of RIC, for instance adenosine, bradykinin or calcitonin gene-related peptide [41]. In plasma from conditioned animals, SDF-1, IL-10 and microRNA-144 have been detected; miRNA-144 was also detected in human subjects, but its actual role in RIC remains to be elucidated [41,42,43,44]. Finally, modulation of the autonomic nervous system by RIC can be due directly to the mechanical disruption of blood flow followed by reperfusion, or to the release of blood-borne factors. Future research on RIC should clarify the involvement of each factor (humoral, neural or immune) in the protective mechanisms of RIC in distant organs.
Limitations of the study
The first limitation concerns the sample size. The present work is a pilot study designed to assess the potential role of the autonomic nervous system in RIC and to further explore the underlying mechanisms of RIC-mediated distant organ protection; further studies with a larger number of subjects are crucial to assess the role of the autonomic nervous system in greater depth. The second limitation is the time window of ECG recording and HRV analysis: RIC-induced modulation of the autonomic nervous system was only assessed during the RIC procedure and during short periods (10 min) before and after it. Therefore, further studies should assess a potential second window of autonomic nervous system modulation, namely at 2-3 h and/or 24 h after the RIC procedure. Finally, more frequent RIC sessions (such as a daily application) might amplify the effect of RIC on sympathetic and parasympathetic activities.
Electrocardiography (ECG) was used to study the effects of remote ischemic conditioning (RIC) on the autonomic nervous system of healthy subjects. The RIC procedure significantly increased the non-linear parameter SD2. These data suggest that autonomic nervous system involvement could be one of the mechanisms underlying RIC therapeutic effectiveness.
The datasets used and analysed during the current study are available from the corresponding author on reasonable request.
BVP:
Blood volume pulse
ECG:
Electrocardiography
HF:
High frequency
LF:
Low frequency
PSD:
Power spectrum density
RIC:
Remote ischemic conditioning
R-R:
Interval between two consecutive beats
VLF:
Very low frequency
Dirnagl U, Becker K, Meisel A. Preconditioning and tolerance against cerebral ischaemia: from experimental strategies to clinical use. Lancet Neurol. 2009;8:398–412.
Hess DC, Blauenfeldt RA, Andersen G, Hougaard KD, Hoda MN, Ding Y, et al. Remote ischaemic conditioning-a new paradigm of self-protection in the brain. Nat Rev Neurol. 2015;11:698–710.
Gho BC, Schoemaker RG, van den Doel MA, Duncker DJ, Verdouw PD. Myocardial protection by brief ischemia in noncardiac tissue. Circulation. 1996;94:2193–200 http://www.ncbi.nlm.nih.gov/pubmed/8901671.
Malhotra S, Naggar I, Stewart M, Rosenbaum DM. Neurogenic pathway mediated remote preconditioning protects the brain from transient focal ischemic injury. Brain Res. 2011;1386:184–90. https://doi.org/10.1016/j.brainres.2011.02.032.
Wei D, Ren C, Chen X, Zhao H. The chronic protective effects of limb remote preconditioning and the underlying mechanisms involved in inflammatory factors in rat stroke. PLoS One. 2012;7:e30892. https://doi.org/10.1371/journal.pone.0030892.
Basalay MV, Mastitskaya S, Mrochek A, Ackland GL, del Arroyo AG, Sanchez J, et al. Glucagon-like peptide-1 (GLP-1) mediates cardioprotection by remote ischaemic conditioning. Cardiovasc Res. 2016;112:669–76. https://doi.org/10.1093/cvr/cvw216.
Donato M, Buchholz B, Rodríguez M, Pérez V, Inserte J, García-Dorado D, et al. Role of the parasympathetic nervous system in cardioprotection by remote hindlimb ischaemic preconditioning. Exp Physiol. 2013;98:425–34. https://doi.org/10.1113/expphysiol.2012.066217.
Mei B, Li W, Cheng X, Liu X, Gu E, Zhang Y. Activating mu-opioid receptors in the spinal cord mediates the cardioprotective effect of remote preconditioning of trauma. Cardiol J. 2017;24:314–23. https://doi.org/10.5603/CJ.a2016.0062.
Wong GTC, Lu Y, Mei B, Xia Z, Irwin MG. Cardioprotection from remote preconditioning involves spinal opioid receptor activation. Life Sci. 2012;91:860–5. https://doi.org/10.1016/j.lfs.2012.08.037.
Crimi G, Pica S, Raineri C, Bramucci E, De Ferrari GM, Klersy C, et al. Remote ischemic post-conditioning of the lower limb during primary percutaneous coronary intervention safely reduces enzymatic infarct size in anterior myocardial infarction. JACC Cardiovasc Interv. 2013;6:1055–63. https://doi.org/10.1016/j.jcin.2013.05.011.
Munk K, Andersen NH, Schmidt MR, Nielsen SS, Terkelsen CJ, Sloth E, et al. Remote ischemic conditioning in patients with myocardial infarction treated with primary angioplasty: impact on left ventricular function assessed by comprehensive echocardiography and gated single-photon emission CT. Circ Cardiovasc Imaging. 2010;3:656–62. https://doi.org/10.1161/CIRCIMAGING.110.957340.
Prunier F, Angoulvant D, Saint Etienne C, Vermes E, Gilard M, Piot C, et al. The RIPOST-MI study, assessing remote ischemic perconditioning alone or in combination with local ischemic postconditioning in ST-segment elevation myocardial infarction. Basic Res Cardiol. 2014;109:400. https://doi.org/10.1007/s00395-013-0400-y.
Rentoukas I, Giannopoulos G, Kaoukis A, Kossyvakis C, Raisakis K, Driva M, et al. Cardioprotective role of remote ischemic periconditioning in primary percutaneous coronary intervention. JACC Cardiovasc Interv. 2010;3:49–55. https://doi.org/10.1016/j.jcin.2009.10.015.
White SK, Frohlich GM, Sado DM, Maestrini V, Fontana M, Treibel TA, et al. Remote ischemic conditioning reduces myocardial infarct size and edema in patients with ST-segment elevation myocardial infarction. JACC Cardiovasc Interv. 2015;8:178–88. https://doi.org/10.1016/j.jcin.2014.05.015.
Hougaard KD, Hjort N, Zeidler D, Sørensen L, Nørgaard A, Thomsen RB, et al. Remote ischemic perconditioning in thrombolysed stroke patients: randomized study of activating endogenous neuroprotection - design and MRI measurements. Int J Stroke. 2013;8:141–6.
Hougaard KD, Hjort N, Zeidler D, SØrensen L, NØrgaard A, Hansen TM, et al. Remote ischemic perconditioning as an adjunct therapy to thrombolysis in patients with acute ischemic stroke: a randomized trial. Stroke. 2014;45:159–67.
England TJ, Hedstrom A, O'Sullivan S, Donnelly R, Barrett DA, Sarmad S, et al. RECAST (remote ischemic conditioning after stroke trial). Stroke. 2017;48:1412–5.
Meng R, Ding Y, Asmaro K, Brogan D, Meng L, Sui M, et al. Ischemic conditioning is safe and effective for Octo- and nonagenarians in stroke prevention and treatment. Neurotherapeutics. 2015;12:667–77.
Task Force of the European Society of Cardiology and the North American Society of Pacing Electrophysiology. Heart rate variability: standards of measurement, physiological interpretation, and clinical use. Circulation. 1996;93:1043–65. https://doi.org/10.1161/01.CIR.93.5.1043.
Pan J, Tompkins WJ. A real-time QRS detection algorithm. IEEE Trans Biomed Eng. 1985;BME-32:230–6.
Ewing DJ, Neilson JM, Travis P. New method for assessing cardiac parasympathetic activity using 24 hour electrocardiograms. Heart. 1984;52:396–402. https://doi.org/10.1136/hrt.52.4.396.
Bigger JT, Kleiger RE, Fleiss JL, Rolnitzky LM, Steinman RC, Miller JP. Components of heart rate variability measured during healing of acute myocardial infarction. Am J Cardiol. 1988;61:208–15. https://doi.org/10.1016/0002-9149(88)90917-4.
Hilton MF, Bates RA, Godfrey KR, Cayton RM. A new application for heart rate variability: diagnosing the sleep apnoea syndrome. In: Computers in cardiology. Vol. 25 (Cat. No.98CH36292): IEEE; 1998. p. 1–4. https://doi.org/10.1109/CIC.1998.731694.
Myers GA, Martin GJ, Magid NM, Barnett PS, Schaad JW, Weiss JS, et al. Power spectral analysis of heart rate varability in sudden cardiac death: comparison to other methods. IEEE Trans Biomed Eng. 1986;BME-33:1149–56. https://doi.org/10.1109/TBME.1986.325694.
Akselrod S, Gordon D, Ubel F, Shannon D, Berger A, Cohen R. Power spectrum analysis of heart rate fluctuation: a quantitative probe of beat-to-beat cardiovascular control. Science. 1981;213:220–2. https://doi.org/10.1126/science.6166045.
Sleight P, La Rovere MT, Mortara A, Pinna G, Maestri R, Leuzzi S, et al. Physiology and pathophysiology of heart rate and blood pressure variability in humans: is power spectral analysis largely an index of baroreflex gain? Clin Sci. 1995;88:103–9. https://doi.org/10.1042/cs0880103.
deBoer RW, Karemaker JM, Strackee J. Hemodynamic fluctuations and baroreflex sensitivity in humans: a beat-to-beat model. Am J Physiol Circ Physiol. 1987;253:H680–9. https://doi.org/10.1152/ajpheart.1987.253.3.H680.
Billman GE. The LF/HF ratio does not accurately measure cardiac sympatho-vagal balance. Front Physiol. 2013;4. https://doi.org/10.3389/fphys.2013.00026.
Goldstein DS, Bentho O, Park M-Y, Sharabi Y. Low-frequency power of heart rate variability is not a measure of cardiac sympathetic tone but may be a measure of modulation of cardiac autonomic outflows by baroreflexes. Exp Physiol. 2011;96:1255–61. https://doi.org/10.1113/expphysiol.2010.056259.
Brennan M, Palaniswami M, Kamen P. Do existing measures of Poincare plot geometry reflect nonlinear features of heart rate variability? IEEE Trans Biomed Eng. 2001;48:1342–7. https://doi.org/10.1109/10.959330.
La Rovere MT, Bigger JT, Marcus FI, Mortara A, Schwartz PJ. Baroreflex sensitivity and heart-rate variability in prediction of total cardiac mortality after myocardial infarction. Lancet. 1998;351:478–84. https://doi.org/10.1016/S0140-6736(97)11144-8.
Antelmi I, De Paula RS, Shinzato AR, Peres CA, Mansur AJ, Grupi CJ. Influence of age, gender, body mass index, and functional capacity on heart rate variability in a cohort of subjects without heart disease. Am J Cardiol. 2004;93:381–5. https://doi.org/10.1016/j.amjcard.2003.09.065.
Kuo TBJ, Lin T, Yang CCH, Li C-L, Chen C-F, Chou P. Effect of aging on gender differences in neural control of heart rate. Am J Physiol Circ Physiol. 1999;277:H2233–9. https://doi.org/10.1152/ajpheart.1999.277.6.H2233.
Pfeifer MA, Weinberg CR, Cook D, Best JD, Reenan A, Halter JB. Differential changes of autonomic nervous system function with age in man. Am J Med. 1983;75:249–58. https://doi.org/10.1016/0002-9343(83)91201-9.
Benichou T, Pereira B, Mermillod M, Tauveron I, Pfabigan D, Maqdasy S, et al. Heart rate variability in type 2 diabetes mellitus: a systematic review and meta–analysis. PLoS One. 2018;13:e0195166. https://doi.org/10.1371/journal.pone.0195166.
Tsuji H, Larson MG, Venditti FJ, Manders ES, Evans JC, Feldman CL, et al. Impact of reduced heart rate variability on risk for cardiac events. The framingham heart study. Circulation. 1996;94:2850–5 http://www.ncbi.nlm.nih.gov/pubmed/8941112.
Hillebrand S, Gast KB, de Mutsert R, Swenne CA, Jukema JW, Middeldorp S, et al. Heart rate variability and first cardiovascular event in populations without known cardiovascular disease: meta-analysis and dose–response meta-regression. EP Eur. 2013;15:742–9. https://doi.org/10.1093/europace/eus341.
Du X-J, Dart AM, Riemersma RA. Sex differences in the parasympathetic nerve control of rat heart. Clin Exp Pharmacol Physiol. 1994;21:485–93. https://doi.org/10.1111/j.1440-1681.1994.tb02545.x.
Maestri R, Raczak G, Danilowicz-Szymanowicz L, Torunski A, Sukiennik A, Kubica J, et al. Reliability of heart rate variability measurements in patients with a history of myocardial infarction. Clin Sci. 2009;118:195–201. https://doi.org/10.1042/CS20090183.
Rassaf T, Totzeck M, Hendgen-Cotta UB, Shiva S, Heusch G, Kelm M. Circulating nitrite contributes to cardioprotection by remote ischemic preconditioning. Circ Res. 2014;114:1601–10.
Hess DC, Hoda MN, Khan MB. Humoral mediators of remote ischemic conditioning: Important role of eNOS/NO/nitrite. In: Acta neurochirurgica, supplementum; 2016. p. 45–8.
Cai ZP, Parajuli N, Zheng X, Becker L. Remote ischemic preconditioning confers late protection against myocardial ischemia–reperfusion injury in mice by upregulating interleukin-10. Basic Res Cardiol. 2012;107:277. https://doi.org/10.1007/s00395-012-0277-1.
Davidson SM, Selvaraj P, He D, Boi-Doku C, Yellon RL, Vicencio JM, et al. Remote ischaemic preconditioning involves signalling through the SDF-1α/CXCR4 signalling axis. Basic Res Cardiol. 2013;108:377. https://doi.org/10.1007/s00395-013-0377-6.
Li J, Rohailla S, Gelber N, Rutka J, Sabah N, Gladstone RA, et al. MicroRNA-144 is a circulating effector of remote ischemic preconditioning. Basic Res Cardiol. 2014;109:423. https://doi.org/10.1007/s00395-014-0423-z.
The authors acknowledge the association "Liga dos Amigos do Hospital Sao Francisco Xavier" for their support.
This work was supported by the Portuguese Fundação para a Ciência e Tecnologia (FCT) with grants: I&D 2015–2020 "iNOVA4Health - Programme in Translational Medicine" (UID/Multi/04462/2013) and PTDC/MEC-NEU/28750/2017 for funding consumables and RVS's fellowship; IF/00185/2012 for supporting HLAV's salary and PD/BDE/130374/2017 for supporting DNO's fellowship. The "Sociedade Portuguesa do Acidente Vascular Cerebral" (SPAVC) within "Bolsa de Investigação em Doenças Vasculares Cerebrais - 2015 - Bolsa 10 anos SPAVC" supports JPM's participation. This work was also supported by AHA grant with reference AHA CMUP-ERI/HCI/0046 and by PLUX, Wireless Biosignals, S.A., Portugal for the development of technical devices for ANS analysis.
Daniel Noronha Osório, Ricardo Viana-Soares, Hugo Gamboa and Helena L. A. Vieira contributed equally to this work.
LIBPhys-UNL - Laboratorio de Instrumentação, Engenharia Biomédica e Física da Radiação (LIBPhys-UNL), Departamento de Física, Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa, Monte da Caparica, 2892-516, Caparica, Portugal
Daniel Noronha Osório, Cláudia Quaresma & Hugo Gamboa
PLUX - Wireless Biosignals, S.A, Lisboa, Portugal
Daniel Noronha Osório & Hugo P. Silva
CEDOC - NOVA Medical School, Faculdade de Ciências Médicas, Universidade Nova de Lisboa, Campo Mártires da Pátria, 130, 1169-056, Lisboa, Portugal
Ricardo Viana-Soares, João Pedro Marto, Marcelo D. Mendonça, Miguel Viana-Baptista & Helena L. A. Vieira
Department of Neurology, Hospital Egas Moniz, Centro Hospitalar Lisboa Ocidental, Lisboa, Portugal
João Pedro Marto, Marcelo D. Mendonça & Miguel Viana-Baptista
Champalimaud Research, Champalimaud Centre for the Unknown, Lisboa, Portugal; IT - Instituto de Telecomunicações, Lisboa, Portugal
Marcelo D. Mendonça
EST/IPS - Escola Superior de Tecnologia do Instituto Politécnico de Setúbal, Setúbal, Portugal
Hugo P. Silva
iBET - Instituto de Biologia Experimental e Tecnológica, Oeiras, Portugal
Daniel Noronha Osório
Ricardo Viana-Soares
João Pedro Marto
Cláudia Quaresma
Miguel Viana-Baptista
Hugo Gamboa
Helena L. A. Vieira
DNO and RV-S performed experimental procedures, carried out the analysis, interpretation of data and wrote the manuscript. HPS and CQ provided technical support. JPM, MDM, MV-B, HG and HLAV participated in the discussion of data and critically reviewed the manuscript. CSFQ, MDM, MVB, HG and HLAV participated in the conception and design of the study. All authors read and approved the final manuscript.
Correspondence to Hugo Gamboa or Helena L. A. Vieira.
The study was approved by the Independent Ethics Committees of Centro Hospitalar de Lisboa Ocidental (Lisbon, Portugal) and of Nova Medical School, Universidade Nova de Lisboa, Lisbon, Portugal (no. 10/2016/CEFCM), and all subjects signed a written informed consent before entering the study.
Figure S1. Global population boxplots for each phase of the RIC procedure. Each graph corresponds to one HRV feature: mean R-R interval (ms); median R-R interval (ms); percentage of intervals falling outside a 50 ms difference, pNN50 (%); root mean square of successive differences of the R-R interval values per event, rMSSD (ms); normalized low frequency power spectrum density, nuLF PSD (%); normalized high frequency power spectrum density, nuHF PSD (%); LF and HF normalized power spectrum density ratio, LF/HF; SD1 axis of the Poincaré plot, SD1 axis (ms); SD2 axis of the Poincaré plot, SD2 axis (ms); and SD1/SD2 per event. (TIFF 293 kb)
Figure S2. Global population boxplots comparing occlusion and non-occlusion intervals. Each graph corresponds to one HRV feature: mean R-R interval (ms); median R-R interval (ms); percentage of intervals falling outside a 50 ms difference, pNN50 (%); root mean square of successive differences of the R-R interval values per event, rMSSD (ms); normalized low frequency power spectrum density, nuLF PSD (%); normalized high frequency power spectrum density, nuHF PSD (%); LF and HF normalized power spectrum density ratio, LF/HF; SD1 axis of the Poincaré plot, SD1 axis (ms); SD2 axis of the Poincaré plot, SD2 axis (ms); and SD1/SD2 per event. Red lines correspond to mean values, black lines correspond to median values and * p-value < 0.05. (TIFF 129 kb)
Table S1. Subjects baseline characteristics: Demographics, relevant cardiovascular risk factors and medication. (PDF 44 kb)
Table S2. Global population analysis for the first and last 10 min and occlusion and non-occlusion intervals. For the first and last 10 min analysis, the mean values are presented as well as a comparison between them and the p-value for the Wilcoxon signed-rank test. For the occlusion and non-occlusion interval analysis, the mean values are presented as well as a comparison between them and the p-value for the Wilcoxon signed-rank test. (PDF 60 kb)
Table S3. Senior population analysis for the first and last 10 min and occlusion and non-occlusion intervals. For the first and last 10 min analysis, the mean values are presented as well as a comparison between them and the p-value for the Wilcoxon signed-rank test. For the occlusion and non-occlusion interval analysis, the mean values are presented as well as a comparison between them and the p-value for the Wilcoxon signed-rank test. (PDF 60 kb)
Table S4. Young population analysis for the first and last 10 min and occlusion and non-occlusion intervals. For the first and last 10 min analysis, the mean values are presented as well as a comparison between them and the p-value for the Wilcoxon signed-rank test. For the occlusion and non-occlusion interval analysis, the mean values are presented as well as a comparison between them and the p-value for the Wilcoxon signed-rank test. (PDF 60 kb)
Noronha Osório, D., Viana-Soares, R., Marto, J.P. et al. Autonomic nervous system response to remote ischemic conditioning: heart rate variability assessment. BMC Cardiovasc Disord 19, 211 (2019). https://doi.org/10.1186/s12872-019-1181-5
Hypertension and Cardiovascular Risk
|
CommonCrawl
|
How do I declare a countably infinite list of variables as being integers?
I want to do this,
Solve[ { n1 + n2 + n3 + .... = 10, n1 + 2 n2 + 3 n3 + .... = 100},
{n1,n2,n3,....}, Assumptions -> {n1,n2,n3,.... are Integers >=0} ]
Note that here .... means "ad infinitum". Can someone write genuine Mathematica code that solves this system?
equation-solving diophantine-equations
J. M. can't deal with it♦
Quasar Supernova
$\begingroup$ Are you asking if Mathematica can calculate values for $n_1$, $n_2$, $n_3$, ... , $n_{\infty - 3}$, $n_{\infty - 2}$, $n_{\infty - 1}$, $n_{\infty}$? How exactly would you display the solutions or store them in memory? $\endgroup$
– MassDefect
$\begingroup$ Have you seen FrobeniusSolve[]? $\endgroup$
– J. M. can't deal with it ♦
$\begingroup$ FrobeniusSolve[] is indeed a nice shortcut. Thanks. Now that I think about it the header is somewhat misleading. The reason why I felt I may have to declare an infinite array of integers is because I wanted to make the numbers 10 and 100 variables. Even then one can get by without having to declare an infinite array I guess. It is hard to guess that the problem I have described arises naturally while trying to compute the entropy of a physical system. $\endgroup$
– Quasar Supernova
$\begingroup$ But I am unable to see how to Solve two Frobenius Equations Simultaneously.. $\endgroup$
To solve the full problem, I don't know how to coax Mathematica into running 100 nested Do's in a reasonable amount of time. Instead, I would use Mathematica to generate C code that is then compiled and executed externally. In this way, the full problem can be solved in about 40 seconds.
Here's the Mathematica code that generates the C-code. Assume that we want $\sum_{i=1}^{i_{\text{max}}}n_i=\text{sum}_0=10$ and $\sum_{i=1}^{i_{\text{max}}}in_i=\text{sum}_1=100$, and go to the full problem with $i_{\text{max}}=100$:
sum0 = 10;
sum1 = 100;
imax = 100;
f = OpenWrite["~/Desktop/loop.c"]; (* or whereever you want to save it *)
WriteString[f, "#include <stdio.h>\n"];
WriteString[f, "int main() {\n"];
WriteString[f, "int n[" <> ToString[imax] <> "];\n"];
WriteString[f, "int s0[" <> ToString[imax + 1] <> "];\n"];
WriteString[f, "s0[0]=s1[0]=0;\n"];
Do[WriteString[f, "for (n[" <> ToString[i - 1] <> "]=0; (s0[" <> ToString[i - 1] <>
"]+n[" <> ToString[i - 1] <> "]<=" <> ToString[sum0] <> ") && (s1[" <>
ToString[i - 1] <> "]+" <> ToString[i] <> "*n[" <> ToString[i - 1] <> "]<=" <>
ToString[sum1] <> "); n[" <> ToString[i - 1] <> "]++) {\ns0[" <> ToString[i] <>
"]=s0[" <> ToString[i - 1] <> "]+n[" <> ToString[i - 1] <> "];\ns1[" <>
ToString[i] <> "]=s1[" <> ToString[i - 1] <> "]+" <> ToString[i] <> "*n[" <>
ToString[i - 1] <> "];\n"], {i, imax}];
WriteString[f, "if ((s0[" <> ToString[imax] <> "]==" <> ToString[sum0] <>
") && (s1[" <> ToString[imax] <> "]==" <> ToString[sum1] <> "))\n"];
WriteString[f, "printf(\""];
Do[WriteString[f, "%d "], {imax}];
WriteString[f, "\\n\", n[0]"];
Do[WriteString[f, ",n[" <> ToString[i - 1] <> "]"], {i, 2, imax}];
WriteString[f, ");\n"];
Do[WriteString[f, "}"], {imax + 1}];
WriteString[f, "\n"];
Close[f];
Compile this code in a terminal with
gcc -O3 loop.c -o loop
and run it with
./loop > loop.dat
On my laptop the code runs in about 40 seconds and generates 2'977'866 solutions. These solutions can be read into Mathematica with
data = ReadList["~/Desktop/loop.dat", Number, RecordLists -> True];
Dimensions[data]
{2977866, 100}
To determine how many solutions you get as a function of $i_{\text{max}}$, first count how many nonzero $n_i$ are involved in each solution:
lengths = Replace[data, {x___, Except[0], 0 ...} :> Length[{x}] + 1, {1}];
Then count how many solutions have a length $\le i_{\text{max}}$ as a function of $i_{\text{max}}$:
Transpose[{Range[100], Accumulate[Lookup[Counts[lengths], Range[100], 0]]}]
{{1, 0}, {2, 0}, {3, 0}, {4, 0}, {5, 0}, {6, 0}, {7, 0}, {8, 0}, {9, 0}, {10, 1}, {11, 42}, {12, 463}, {13, 2507}, {14, 8861}, {15, 23601}, {16, 51376}, {17, 96314}, {18, 161073}, {19, 246448}, {20, 351344}, {21, 473259}, {22, 608704}, {23, 753813}, {24, 904675}, {25, 1057740}, {26, 1209868}, {27, 1358546}, {28, 1501764}, {29, 1638097}, {30, 1766535}, {31, 1886522}, {32, 1997755}, {33, 2100247}, {34, 2194143}, {35, 2279771}, {36, 2357509}, {37, 2427846}, {38, 2491252}, {39, 2548263}, {40, 2599369}, {41, 2645085}, {42, 2685871}, {43, 2722199}, {44, 2754470}, {45, 2783097}, {46, 2808428}, {47, 2830808}, {48, 2850528}, {49, 2867882}, {50, 2883106}, {51, 2896444}, {52, 2908092}, {53, 2918248}, {54, 2927072}, {55, 2934729}, {56, 2941344}, {57, 2947052}, {58, 2951956}, {59, 2956162}, {60, 2959751}, {61, 2962811}, {62, 2965403}, {63, 2967597}, {64, 2969442}, {65, 2970991}, {66, 2972282}, {67, 2973358}, {68, 2974245}, {69, 2974977}, {70, 2975575}, {71, 2976063}, {72, 2976456}, {73, 2976774}, {74, 2977026}, {75, 2977227}, {76, 2977384}, {77, 2977507}, {78, 2977601}, {79, 2977674}, {80, 2977728}, {81, 2977769}, {82, 2977799}, {83, 2977821}, {84, 2977836}, {85, 2977847}, {86, 2977854}, {87, 2977859}, {88, 2977862}, {89, 2977864}, {90, 2977865}, {91, 2977866}, {92, 2977866}, {93, 2977866}, {94, 2977866}, {95, 2977866}, {96, 2977866}, {97, 2977866}, {98, 2977866}, {99, 2977866}, {100, 2977866}}
For reference, here's the generated C-code:
#include <stdio.h>
int main() {
int n[100];
int s0[101];
int s1[101];
s0[0]=s1[0]=0;
for (n[0]=0; (s0[0]+n[0]<=10) && (s1[0]+1*n[0]<=100); n[0]++) {
s0[1]=s0[0]+n[0];
s1[1]=s1[0]+1*n[0];
for (n[9]=0; (s0[9]+n[9]<=10) && (s1[9]+10*n[9]<=100); n[9]++) {
s0[10]=s0[9]+n[9];
s1[10]=s1[9]+10*n[9];
for (n[10]=0; (s0[10]+n[10]<=10) && (s1[10]+11*n[10]<=100); n[10]++) {
s0[11]=s0[10]+n[10];
s1[11]=s1[10]+11*n[10];
for (n[99]=0; (s0[99]+n[99]<=10) && (s1[99]+100*n[99]<=100); n[99]++) {
s0[100]=s0[99]+n[99];
s1[100]=s1[99]+100*n[99];
if ((s0[100]==10) && (s1[100]==100))
printf("%d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d \n", n[0],n[1],n[2],n[3],n[4],n[5],n[6],n[7],n[8],n[9],n[10],n[11],n[12],n[13],n[14],n[15],n[16],n[17],n[18],n[19],n[20],n[21],n[22],n[23],n[24],n[25],n[26],n[27],n[28],n[29],n[30],n[31],n[32],n[33],n[34],n[35],n[36],n[37],n[38],n[39],n[40],n[41],n[42],n[43],n[44],n[45],n[46],n[47],n[48],n[49],n[50],n[51],n[52],n[53],n[54],n[55],n[56],n[57],n[58],n[59],n[60],n[61],n[62],n[63],n[64],n[65],n[66],n[67],n[68],n[69],n[70],n[71],n[72],n[73],n[74],n[75],n[76],n[77],n[78],n[79],n[80],n[81],n[82],n[83],n[84],n[85],n[86],n[87],n[88],n[89],n[90],n[91],n[92],n[93],n[94],n[95],n[96],n[97],n[98],n[99]);
}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}
Surely someone more skilled than me can find a C++ template programming way of generating the same code without needing Mathematica in the first place.
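Short of template metaprogramming, a small recursive enumerator in an ordinary scripting language reproduces the count without any generated code. Here is a minimal Python sketch (not part of the original answer) that memoizes on the remaining sum and remaining weighted sum and prunes in the same spirit as the generated loops; it should reproduce the 2'977'866 solutions counted above.
from functools import lru_cache

@lru_cache(maxsize=None)
def count(i, s, w):
    # ways to choose n_i, ..., n_100 >= 0 with sum n_j == s and sum j*n_j == w
    if s == 0:
        return 1 if w == 0 else 0
    if i > 100 or w < i * s or w > 100 * s:
        return 0   # not enough (or too much) weight left to place s more items at indices >= i
    total, k = 0, 0
    while k <= s and i * k <= w:
        total += count(i + 1, s - k, w - i * k)
        k += 1
    return total

print(count(1, 10, 100))   # expected to match the 2'977'866 solutions found above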
Roman
All the $n_i$ with $i>100$ must be zero, so it's enough to look at $n_1\ldots n_{100}$. The below code will crash for this large system though.
However, if you only look at $i\le i_{\text{max}}$ (and you may make $i_{\text{max}}$ as large as your computer allows, though probably not 100 as truly needed), then you could first generate the list of all tuples of length imax that sum to 10:
imax = 15;
S = Join @@ Permutations /@ IntegerPartitions[10, {imax}, Range[0, 10]];
Careful with memory, there are $\binom{i_{\text{max}} + 9}{10}$ of them. For $i_{\text{max}}=100$ this will generate 42'634'215'112'710 tuples of length 100.
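The binomial is just stars and bars: a tuple $(n_1,\dots,n_{i_{\text{max}}})$ of nonnegative integers summing to 10 corresponds to an arrangement of 10 stars and $i_{\text{max}}-1$ bars, so
$$\#\left\{(n_1,\dots,n_{i_{\text{max}}})\in\mathbb{Z}_{\ge 0}^{i_{\text{max}}} : \sum_i n_i = 10\right\} = \binom{10+i_{\text{max}}-1}{10} = \binom{i_{\text{max}}+9}{10}.$$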
Then find the positions of those that sum to 100 when dotted with the vector $\{1,2,3,\ldots,i_{\text{max}}\}$:
P = Position[S.Range[imax], 100] // Flatten
{10, 39, 276, 359, 398, 573, ...}
Show the results:
S[[P]]
{{0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 0, 0, 0, 0, 0}, {1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 0, 0, 0, 0}, {0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 0, 0, 0}, {0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 8, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 8, 0, 0, 0, 0, 2, 0}, {1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 8, 0, 0, 0}, ...}
To calculate the number of solutions as a function of $i_{\text{max}}$:
num[imax_Integer /; 1 <= imax <= 100] := Module[{S, P},
  S = Join @@ Permutations /@ IntegerPartitions[10, {imax}, Range[0, 10]];
  P = Position[S.Range[imax], 100] // Flatten;
  Length[P]]
Table[num[imax], {imax, 1, 20}]
{0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 42, 463, 2507, 8861, 23601, 51376, 96314, 161073, 246448, 351344}
Thanks. Can you add a code fragment that computes the number of such tuples as a function of imax?
One approach is to name your variables as elements of an array (instead of having separate names). For example, allA defines 100 variables called a[1], a[2], etc. You can use these in Solve directly, and force them all to be positive integers:
n = 100;
allA = Array[a, n]
Solve[{Sum[a[i], {i, 1, n}] == 100 && Sum[i a[i], {i, 1, n}] == 1000 &&
Thread[allA >= ConstantArray[0, n]] //.List -> And}, allA, Integers]
We can test that it works by using a simple case where n=3
n = 3;
allA = Array[a, n];
Solve[{Sum[a[i], {i, 1, n}] == 6 && Sum[i a[i], {i, 1, n}] == 14 &&
   Thread[allA >= ConstantArray[0, n]] //. List -> And}, allA, Integers]
which returns three answers:
{{a[1] -> 0, a[2] -> 4, a[3] -> 2}, {a[1] -> 1, a[2] -> 2, a[3] -> 3},
{a[1] -> 2, a[2] -> 0, a[3] -> 4}}
For larger n, if you don't want to wait for all the answers, you can replace Solve with FindInstance and just get a few answers.
I used FrobeniusSolve but in the most wasteful way imaginable. I solved these two equations separately and found the intersection of the solutions and counted how many there are as a function of U. The Log of this number is the entropy of the system. The physical model is a set of N identical marbles occupying an infinite staircase where ascending each step implies gaining a unit of energy. The total energy of the system is U.
n1 + n2 + ... + nK = N
n1 + 2 n2 + ... + K nK = U
It is easy to see (place N − 1 marbles on the first step and let the remaining marble carry all the leftover energy) that
K = U - N + 1
The Code I wrote is,
funit[x_] := 1
Arr[max_] := Array[funit, max]
FSolveTotN[N_, U_] := FrobeniusSolve[Arr[U - N + 1], N]
FSolveEgy[N_, U_] := FrobeniusSolve[Range[U - N + 1], U]
MarbleList[N_, U_] := Intersection[FSolveTotN[N, U], FSolveEgy[N, U]]
Entropy of N marbles on an infinite staircase as a function of the total energy U
S[N_, U_] := Log[ Length[MarbleList[N, U]] ]
TASK[N_] := Table[{U, S[N, U]},{U, N+1, N+15}]
ListPlot[TASK[3], AxesLabel -> {"U", "Entropy"}]
TASK[3] is
{{4, 0}, {5, Log[2]}, {6, Log[3]}, {7, Log[4]},
{8, Log[5]}, {9,Log[7]}, {10, Log[8]},
{11, Log[10]}, {12, Log[12]}, {13,Log[14]},
{14, Log[16]}, {15, Log[19]}, {16, Log[21]},
{17, Log[24]}, {18, Log[27]}}
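Since the marbles are indistinguishable, the same counts are just the numbers of partitions of U into exactly N positive parts, which avoids FrobeniusSolve and the Intersection entirely. A small cross-check in Python (a sketch, not Mathematica) using the standard recurrence p(u, n) = p(u − 1, n − 1) + p(u − n, n):
from functools import lru_cache

@lru_cache(maxsize=None)
def p(u, n):
    # number of partitions of u into exactly n positive parts
    if n == 0:
        return 1 if u == 0 else 0
    if u < n:
        return 0
    # either the smallest part equals 1 (drop it) or every part is >= 2 (subtract 1 from each part)
    return p(u - 1, n - 1) + p(u - n, n)

print([(u, p(u, 3)) for u in range(4, 19)])
# the counts 1, 2, 3, 4, 5, 7, 8, 10, 12, 14, 16, 19, 21, 24, 27 match the arguments of Log in TASK[3] above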
|
CommonCrawl
|
Annals of Statistics
Ann. Statist.
Volume 40, Number 2 (2012), 1171-1197.
Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions
Alekh Agarwal, Sahand Negahban, and Martin J. Wainwright
We analyze a class of estimators based on convex relaxation for solving high-dimensional matrix decomposition problems. The observations are noisy realizations of a linear transformation $\mathfrak{X}$ of the sum of an (approximately) low rank matrix $\Theta^{\star}$ with a second matrix $\Gamma^{\star}$ endowed with a complementary form of low-dimensional structure; this set-up includes many statistical models of interest, including factor analysis, multi-task regression and robust covariance estimation. We derive a general theorem that bounds the Frobenius norm error for an estimate of the pair $(\Theta^{\star},\Gamma^{\star})$ obtained by solving a convex optimization problem that combines the nuclear norm with a general decomposable regularizer. Our results use a "spikiness" condition that is related to, but milder than, singular vector incoherence. We specialize our general result to two cases that have been studied in past work: low rank plus an entrywise sparse matrix, and low rank plus a columnwise sparse matrix. For both models, our theory yields nonasymptotic Frobenius error bounds for both deterministic and stochastic noise matrices, and applies to matrices $\Theta^{\star}$ that can be exactly or approximately low rank, and matrices $\Gamma^{\star}$ that can be exactly or approximately sparse. Moreover, for the case of stochastic noise matrices and the identity observation operator, we establish matching lower bounds on the minimax error. The sharpness of our nonasymptotic predictions is confirmed by numerical simulations.
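In the entrywise-sparse case with the identity observation operator, the convex program behind estimators of this type can be prototyped in a few lines. The following Python sketch is illustrative only (made-up regularization weights and step size, no spikiness constraint, and not the authors' implementation): it minimizes 0.5*||Y - Theta - Gamma||_F^2 + lam*||Theta||_* + mu*||Gamma||_1 by joint proximal gradient descent, with singular-value thresholding for the nuclear norm and soft thresholding for the l1 norm.
import numpy as np

def svt(M, tau):
    # singular value thresholding: proximal operator of tau * nuclear norm
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    # entrywise soft thresholding: proximal operator of tau * l1 norm
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def decompose(Y, lam, mu, step=0.5, iters=500):
    # joint proximal gradient for 0.5*||Y - Theta - Gamma||_F^2 + lam*||Theta||_* + mu*||Gamma||_1
    Theta, Gamma = np.zeros_like(Y), np.zeros_like(Y)
    for _ in range(iters):
        R = Theta + Gamma - Y                      # gradient of the smooth part w.r.t. each block
        Theta = svt(Theta - step * R, step * lam)
        Gamma = soft(Gamma - step * R, step * mu)
    return Theta, Gamma

# toy data: rank-2 matrix plus 5% sparse corruption plus noise
rng = np.random.default_rng(0)
n = 60
low_rank = rng.normal(size=(n, 2)) @ rng.normal(size=(2, n))
sparse = (rng.random((n, n)) < 0.05) * rng.normal(scale=5.0, size=(n, n))
Y = low_rank + sparse + 0.1 * rng.normal(size=(n, n))
Theta_hat, Gamma_hat = decompose(Y, lam=2.0, mu=0.3)
print(np.linalg.norm(Theta_hat - low_rank) / np.linalg.norm(low_rank))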
First available in Project Euclid: 18 July 2012
Permanent link: https://projecteuclid.org/euclid.aos/1342625465
doi:10.1214/12-AOS1000
Mathematical Reviews number (MathSciNet): MR2985947
Primary: 62F30: Inference under constraints
Secondary: 62H12: Estimation
Keywords: High-dimensional inference; nuclear norm; composite regularizers
Agarwal, Alekh; Negahban, Sahand; Wainwright, Martin J. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. Ann. Statist. 40 (2012), no. 2, 1171--1197. doi:10.1214/12-AOS1000. https://projecteuclid.org/euclid.aos/1342625465
Supplementary material: Simulations and proofs. This supplementary material contains numerical simulations that demonstrate excellent agreement between the theoretical predictions and the practical behavior of our estimators. We also provide proofs for our upper and lower bounds, including slightly sharpened versions of Corollaries 2 and 6.
Digital Object Identifier: doi:10.1214/12-AOS1000SUPP
|
CommonCrawl
|
August 2016, Volume 53, Issue 4, pp 1051–1084
Socioeconomic Segregation in Large Cities in France and the United States
Lincoln Quillian
Hugues Lagrange
Past cross-national comparisons of socioeconomic segregation have been undercut by lack of comparability in measures, data, and concepts. Using IRIS data from the French Census of 2008 and the French Ministry of Finance as well as tract data from the American Community Survey (2006–2010) and the U.S. Department of Housing and Urban Development Picture of Subsidized Households, and constructing measures to be as similar as possible, we compare socioeconomic segregation in metropolitan areas with a population of more than 1 million in France and the United States. We find much higher socioeconomic segregation in large metropolitan areas in the United States than in France. We also find (1) a strong pattern of low-income neighborhoods in central cities and high-income neighborhoods in suburbs in the United States, but varying patterns across metropolitan areas in France; (2) that high-income persons are the most segregated group in both countries; (3) that the shares of neighborhood income differences that can be explained by neighborhood racial/ethnic composition are similar in France and the United States; and (4) that government-assisted housing is disproportionately located in the poorest neighborhoods in the United States but is spread across many neighborhood income levels in France. We conclude that differences in government provision of housing assistance and levels of income inequality are likely important contributing factors to the Franco-U.S. difference in socioeconomic segregation.
Keywords: Segregation; Income segregation; Socioeconomic status; Franco-U.S. comparisons; Urban demography
Work on this project was supported by a grant from the Partner University Fund of the FACE foundation and a residential fellowship from the Russell Sage Foundation to the first author. An earlier version of this article was presented at the IPR-OSC Conference in Paris, France, June 21–22, 2012, and at the meetings of the American Sociological Association in New York City, August 10–13, 2013.
Appendix: Measures and Methods for Income Segregation Statistics
NSI Calculation
NSI for a metropolitan area is defined as follows:
$$ NSI=\frac{\sigma_N}{\sigma_H}=\frac{\sqrt{\frac{\sum_{n=1}^{N} h_n\left(\overline{y}_n-\overline{y}\right)^2}{H}}}{\sqrt{\frac{\sum_{i=1}^{H}\left(y_i-\overline{y}\right)^2}{H}}}, $$
where H is the number of households in the metropolitan area, \( h_n \) represents the number of households in the nth neighborhood, \( y_i \) represents income for the ith household, \( \overline{y}_n \) represents the average income for the nth neighborhood, and \( \overline{y} \) indicates metropolitan average income. The numerator may be calculated for both France and the United States directly from the French Ministry of Finance IRIS data and the ACS data, respectively. The denominator—the standard deviation of metropolitan household income—may be directly calculated from the IRIS data for France from summing within-IRIS deviation (provided in the data) and between-IRIS deviation (calculated from IRIS means). For the United States, we estimate the denominator from counts of numbers of households in 16 income ranges in each metropolitan area. We do this by assuming a lognormal distribution of income and then using a maximum likelihood estimation to estimate the variability of tract income for each metropolitan income from the data. In practice, this is done using Stata's intreg command, estimating an intercept-only model of metropolitan income from tract income counts in categories, which also generates an estimate of the variability of income. We then calculate the mean and standard deviation of household income unlogged from the logged mean and standard deviation estimates produced by intreg using formulas from Johnson et al. (1994).
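A rough Python analogue of that interval-regression step (a sketch with hypothetical bin edges and counts; the article itself uses Stata's intreg on the ACS income categories) fits a lognormal distribution by maximum likelihood to the binned counts and then unlogs the fitted moments with the Johnson et al. (1994) formulas:
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def lognormal_from_bins(edges, counts):
    # ML fit of a lognormal to interval-censored counts; edges has len(counts)+1 entries,
    # may start at 0 and end at np.inf
    lo = np.log(np.maximum(edges[:-1], 1e-12))
    hi = np.log(edges[1:])
    counts = np.asarray(counts, dtype=float)

    def negloglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        prob = norm.cdf((hi - mu) / sigma) - norm.cdf((lo - mu) / sigma)
        return -np.sum(counts * np.log(np.clip(prob, 1e-300, None)))

    res = minimize(negloglik, x0=np.array([np.log(50000.0), 0.0]), method="Nelder-Mead")
    mu, sigma = res.x[0], np.exp(res.x[1])
    mean = np.exp(mu + sigma**2 / 2)                                    # unlogged mean
    sd = np.sqrt((np.exp(sigma**2) - 1.0) * np.exp(2 * mu + sigma**2))  # unlogged standard deviation
    return mean, sd

# hypothetical metropolitan income bins (dollars) and household counts
edges = np.array([0, 10e3, 25e3, 50e3, 75e3, 100e3, 150e3, 200e3, np.inf])
counts = [120, 340, 610, 520, 330, 280, 110, 60]
print(lognormal_from_bins(edges, counts))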
Theil's Segregation Index, Income Percentile Segregation Calculations, and Reardon's Rank-Ordered H
If p denotes income percentile ranks for an income distribution, for any value of p, we dichotomize the income distribution at p and compute the segregation between those with income ranks less than p and those with income ranks greater than or equal to p. If H(p) is Theil's information theory index of segregation (see James and Taeuber 1985), and E(p) is the entropy statistic for p (used in the calculation of H(p)), then the rank-order information theory index (\(H^R\)) is defined as follows:
$$ H^R = 2\ln(2)\int_0^1 E(p)\,H(p)\,dp. $$
We calculate H(p) and \(H^R\) using methods described in Reardon and Bischoff (2011:1110–1111, and appendix A). We also apply their method for making income percentile graphs developed with H(p) to the standard index of dissimilarity, which is a straightforward extension.
We initially perform standard computations of Theil's entropy index of segregation (H(p)) and the index of dissimilarity (D(p)) for everyone below p and at or above p for each of the income cut points available in the two data sets.
In the U.S. data, counts of households are reported in 16 categories. For the French data, we have reports of income deciles, from which we calculated counts of households in 10 income categories. We also compute the percentile corresponding to each of these cut points on the income distribution from the data (p).
We then regress these calculated segregation indexes (H(p)) on the corresponding percentiles (p). Our specification uses a fourth-order polynomial in p to allow for nonlinearity. (We found very little predictive change from adding a fifth-order term.) We use the resulting curve to predict the segregation scores for all percentiles of the income distribution from the 10th to the 90th percentile in the two countries. These are shown in Figs. 1 and 2 for both entropy and the index of dissimilarity.
To compute the rank-ordered \(H^R\) statistic, we apply Reardon and Bischoff's (2011: appendix A) integral evaluation formula to the fourth-order polynomial coefficients. The formula evaluates the integral and also applies a set of weights, which weight percentiles toward the center of the income distribution more heavily and give little weight to percentiles at the extremes.
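The following Python sketch mirrors that pipeline with synthetic tract counts (only three income cutoffs, hence a lower-degree polynomial than the fourth-order fit used in the article): it computes H(p) at each available cutpoint, fits a polynomial in p, and evaluates 2 ln(2) times the integral of E(p)H(p) numerically rather than through the closed-form integral evaluation of Reardon and Bischoff.
import numpy as np

def binary_entropy(q):
    # entropy (base 2) of a two-group split with share q in the first group
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

def theil_H(below, total):
    # Theil information-theory index H for a binary split:
    # below[t] households under the cutoff in tract t, total[t] households in tract t
    below, total = np.asarray(below, float), np.asarray(total, float)
    P = below.sum() / total.sum()
    E = binary_entropy(P)
    Et = binary_entropy(below / total)
    return np.sum(total * (E - Et)) / (total.sum() * E)

def rank_order_H(cut_shares, H_values, degree):
    # fit H(p) with a polynomial in p, then integrate 2*ln(2)*E(p)*H(p) over p in (0, 1)
    coeffs = np.polyfit(cut_shares, H_values, degree)
    p = np.linspace(0.001, 0.999, 2000)
    Hp = np.polyval(coeffs, p)
    dp = p[1] - p[0]
    return 2 * np.log(2) * np.sum(binary_entropy(p) * Hp) * dp

# hypothetical tract counts below three income cutoffs, plus tract totals
total = np.array([100.0, 120.0, 80.0, 150.0])
below_each_cut = [np.array([10, 60, 5, 40]), np.array([30, 90, 20, 80]), np.array([70, 110, 50, 120])]
ps = np.array([b.sum() / total.sum() for b in below_each_cut])   # percentile of each cutoff
Hs = np.array([theil_H(b, total) for b in below_each_cut])
print(rank_order_H(ps, Hs, degree=2))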
[Table: Comparison of tract and block group segregation, United States. Columns: Tract vs. Block Group. Upper panel: percentage living in high-, low-, and middle-income neighborhoods, by ratio of neighborhood median to region median income (low income, 67% or lower; >67% to 80%; >80% to 100%; >100% to 125%; high income, >150%). Lower panel: means over CBSAs with more than 1 million population, pooled (N = 51), of income segregation indexes (neighborhood sorting index NSI, rank-ordered H), of unemployed vs. employed segregation (dissimilarity index D, Theil segregation index H), and of segregation of persons with an associate's degree vs. a high school diploma or less. Note: Tabulations and means are weighted as indicated in main tables; numerical entries were not preserved in this extraction.]
American Housing Survey. (2013). AHS table creator [Data file]. Retrieved from http://sasweb.ssd.census.gov/ahs/ahstablecreator.html
Atkinson, A. B., Piketty, T., & Saez, E. (2011). Top incomes in the long run of history. Journal of Economic Literature, 49, 3–71.
Bischoff, K., & Reardon, S. F. (2014). Residential segregation by income, 1970–2009. In J. R. Logan (Ed.), Diversity and disparities (pp. 208–234). New York, NY: Russell Sage.
Chetty, R., Hendren, N., & Katz, L. F. (2015). The effects of exposure to better neighborhoods on children: New evidence from the moving to opportunity experiment (Working Paper No. 21156). Cambridge, MA: National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w21156
Clapier, P., & Tabard, N. (1981). Transformation de la morphologie sociale des communes, et variation des consommation [Transformation of the social morphology of communities, and changes in consumption]. Consommation, 28(2), 3–40.
Duncan, O. D., & Duncan, B. (1955). Residential distribution and occupational stratification. American Journal of Sociology, 60, 493–503.
Esping-Andersen, G. (1990). The three worlds of welfare capitalism. Princeton, NJ: Princeton University Press.
Fischer, C. S., Stockmayer, G., Stiles, J., & Hout, M. (2004). Distinguishing the geographic levels and social dimensions of U.S. metropolitan segregation, 1960–2000. Demography, 41, 37–59.
François, J.-C., Ribardière, A., Fleury, A., Mathian, H., Pavard, A., & Saint-Julien, T. (2011). La disparité des revenus des ménages franciliens, analyse de l'évolution entre 1999 et 2007 [Income inequality of Paris households: Analysis of changes from 1999 and 2007]. Retrieved from https://halshs.archives-ouvertes.fr/halshs-00737156
Glaeser, E. L., Kahn, M. E., & Rappaport, J. (2008). Why do the poor live in cities? Journal of Urban Economics, 63, 1–24.
Hamnett, C. (1996). Social polarization, economic restructuring and welfare state regimes. Urban Studies, 33, 1407–1430.
Iceland, J., & Steinmetz, E. (2003). The effects of using census block groups instead of census tracts when examining residential housing patterns. Retrieved from http://www.census.gov/housing/patterns/publications/unit_of_analysis.pdf
INSEE. (2006). Enquête Logement en 2006, France metropolitaine [Housing Survey 2006, metropolitan France]. Retrieved from http://www.insee.fr/fr/methodes/default.asp?page=sources/ope-enq-logement.htm
INSEE. (2013a). IRIS [Definition]. Retrieved from http://www.insee.fr/en/methodes/default.asp?page=definitions/iris.htm
INSEE. (2013b). Unité urbaine [Definition]. Retrieved from http://www.insee.fr/fr/methodes/default.asp?page=definitions/unite-urbaine.htm
INSEE. (2015). Une pauvreté très présente dans les centre villes des grands pôles urbains [A very present poverty in cities near major urban centers] (INSEE Première Report, No. 1552). Paris, France: Institute National de la Statistique et des Études Économique.
INSEE-DGFiP. (2009). Indicateurs de distribution des revenus fiscaux déclarés par les ménages, année 2009 [Tax income distribution indicators reported by households, 2009] [Data Set].
Jackson, K. T. (1985). Crabgrass frontier: The suburbanization of the United States. New York, NY: Oxford University Press.
James, D. R., & Taeuber, K. (1985). Measures of segregation. Sociological Methodology, 14, 1–32.
Jargowsky, P. A. (1996). Take the money and run: Economic segregation in U.S. metropolitan areas. American Sociological Review, 61, 984–998.
Jargowsky, P. A. (1997). Poverty and place: Ghettos, barrios, and the American city. New York, NY: Russell Sage Foundation.
Jargowsky, P. A. (2014). Segregation, neighborhoods, and schools. In A. Lareau & K. Goyette (Eds.), Choosing homes, choosing schools (pp. 97–136). New York, NY: Russell Sage Foundation.
Johnson, N. L., Kotz, S., & Balakrishnan, N. (1994). Continuous univariate distributions (Vol. 1, 2nd ed.). New York, NY: John Wiley & Sons.
Kruythoff, H. M., & Baart, B. (1998). Towards undivided cities in Western Europe. New challenges for urban policy: Part 6 Lille. Delft, The Netherlands: Delft University Press.
Kucheva, Y. A. (2013). Subsidized housing and the concentration of poverty, 1977–2008: A comparison of eight U.S. metropolitan areas. City & Community, 12, 113–133.
Lagrange, H. (2010). Réussite scolaire et inconduites adolescentes: Origine culturelle, mixité et capital social [School success and adolescent misconduct: Cultural origin, diversity, and social capital]. Sociétés Contemporaines, 80, 73–111.
Lapeyronnie, D. (2008). Ghetto urbain: Ségrégation, violence, pauvreté dans la France actuelle [Urban ghetto: Segregation, violence, and poverty in France today]. Paris, France: Robert Laffont.
Le Blanc, D., Laferrère, A., & Pigois, R. (1999). Les effets de l'existence du parc HLM sur le profil de consommation des ménages [Effects of HLMs on household consumption]. Economie et Statistiques, 328, 37–60.
Ludwig, J., Sanbonmatsu, L., Gennetian, L., Adam, E., Duncan, G. J., Katz, L. F., . . . McDade, T. W. (2011). Neighborhoods, obesity, and diabetes—A randomized social experiment. New England Journal of Medicine, 365, 1509–1519.
Maloutas, T., & Fujita, K. (Eds.). (2012). Residential segregation in comparative perspective: Making sense of contextual diversity. Burlington, VT: Ashgate.
Massey, D. S., & Denton, N. A. (1993). American apartheid: Segregation and the making of the underclass. Cambridge, MA: Harvard University Press.
Massey, D. S., & Eggers, M. L. (1993). The spatial concentration of affluence and poverty during the 1970s. Urban Affairs Quarterly, 29, 299–315.
Massey, D. S., & Kanaiaupuni, S. M. (1993). Public housing and the concentration of poverty. Social Science Quarterly, 74, 109–122.
Mayer, S. (2001). How the growth in income inequality increased economic segregation (JCPR Working Paper No. 230). Chicago, IL: Northwestern University/University of Chicago Joint Center for Poverty Research.
Minnesota Population Center. (2011). National historical geographic information system: Version 2.0 [Data set]. Minneapolis: University of Minnesota.
Musterd, S. (2005). Social and ethnic segregation in Europe: Levels, causes, and effects. Journal of Urban Affairs, 27, 331–348.
Musterd, S., & Deurloo, R. (1997). Ethnic segregation and the role of public housing in Amsterdam. Tijdschrift voor Economische en Sociale Geografie, 88, 158–168.
Musterd, S., & Ostendorf, W. (Eds.). (2011). Urban segregation and the welfare state: Inequality and exclusion in western cities (Reprint ed.). New York, NY: Routledge. (Original work published 1998)
National Geographic Society. (2012). Greendex 2012: Consumer choice and the environment, a national tracking survey. Toronto, Canada: GlobeScan, Inc.. Retrieved from http://images.nationalgeographic.com/wpf/media-content/file/NGS_2012_Final_Global_report_Jul20-cb1343059672.pdf
Newman, S. J., & Schnare, A. B. (1997). "… And a suitable living environment": The failure of housing programs to deliver on neighborhood quality. Housing Policy Debate, 8, 703–741.
Owens, A. (2015). Housing policy and urban inequality: Did the transformation of assisted housing reduce poverty concentration? Social Forces, 94, 325–348.
Owens, A. (2016). Inequality in children's contexts: Income segregation of households with and without children. American Sociological Review, 81, 549–574.
Pan Ké Shon, J.-P. (2009). Ségrégation ethnique et ségrégation sociale en quartiers sensibles [Ethnic segregation and social segregation in distressed neighborhoods]. Revue Française de Sociologie, 50, 451–487.
Pendall, R., Puentes, R., & Martin, J. (2006). From traditional to reformed: A review of the land use regulations in the nation's 50 largest metropolitan areas (Report). Washington, DC: The Brookings Institution. Retrieved from http://www.brookings.edu/research/reports/2006/08/metropolitanpolicy-pendall
Peterson, R. D., & Krivo, L. J. (2010). Divergent social worlds: Neighborhood crime and the racial-spatial divide. New York, NY: Russell Sage Foundation.
Pinçon, M., & Pinçon-Charlot, M. (2005). Sociologie de la Bourgeoisie [Sociology of the Bourgeoisie] (3rd ed.). Paris, France: La Découverte.
Préteceille, E. (2006). La ségrégation sociale a-t-elle augmenté? [Has social segregation increased?]. Sociétés Contemporaines, 62, 69–93.
Préteceille, E. (2011). Has ethno-racial segregation increased in the greater Paris metropolitan area? Revue Française de Sociologie, 52, 31–62.
Préteceille, E. (2012). Segregation, social mix, and public policies in Paris. In T. Maloutas & K. Fujita (Eds.), Residential segregation in comparative perspective: Making sense of contextual diversity (pp. 153–176). Burlington, VT: Ashgate.
Quillian, L. (2003). The decline of male employment in low-income black neighborhoods, 1950–1990. Social Science Research, 32, 220–250.
Quillian, L. (2012). Segregation and poverty concentration: The role of three segregations. American Sociological Review, 77, 354–379.
Quillian, L. (2014). Does segregation create winners and losers? Residential segregation and inequality in educational attainment. Social Problems, 61, 402–426.
Reardon, S. F., & Bischoff, K. (2011). Income inequality and income segregation. American Journal of Sociology, 116, 1092–1153.
Reardon, S. F., Yun, J. T., & Eitle, T. M. (2000). The changing structure of school segregation: Measurement and evidence of multiracial metropolitan-area school segregation, 1989–1995. Demography, 37, 351–364.
Rhein, C. (1998). Globalisation, social change, and minorities in metropolitan Paris: The emergence of new class patterns. Urban Studies, 35, 429–447.
Safi, M. (2006). Le processus d'intégration des immigrés en France: inégalités et segmentation [The integration process of immigrants in France: Inequalities and segmentation]. Revue Française de Sociologie, 47, 3–48.
Safi, M. (2009). La dimension spatiale de l'intégration: évolution de la ségrégation des populations immigrées en France entre 1968 et 1999 [The spatial dimension of integration: Development of the segregation of immigrant populations in France between 1968 and 1999]. Revue Française de Sociologie, 209, 521–552.
Sassen, S. (1991). The global city: New York, London, Tokyo (1st ed.). Princeton, NJ: Princeton University Press.
Schnell, I., & Osendorf, W. (Eds.). (2002). Studies in segregation and desegregation. Burlington, VT: Ashgate.
Simkus, A. A. (1978). Residential segregation by occupation and race in ten urbanized areas, 1950–1970. American Sociological Review, 43, 81–93.
Taghavi, L. (2008). HUD-assisted housing 101: Using "A Picture of Subsidized Households: 2000." Cityscape, 10(1), 211–220.
U.S. Census Bureau. (2011). U.S. neighborhood income inequality in the 2005–2009 period (American Community Survey Report, No. ACS-16). Washington, DC: U.S. Census Bureau. Retrieved from https://www.census.gov/prod/2011pubs/acs-16.pdf
U.S. Census Bureau. (2013). Metropolitan and micropolitan statistical areas main [Data set]. Retrieved from http://www.census.gov/population/metro/
U.S. Department of Housing and Urban Development. (2012). A Picture of Subsidized Households 2008 [Data set]. Retrieved from http://www.huduser.org/portal/datasets/picture/about.html
Verdugo, G. (2011). Public housing and residential segregation of immigrants in France, 1968–1999. Population-E, 66, 169–194.
Vincent, P., Chantreuil, F., & Tarroux, B. (2015, June). Income segregation in large French cities. Paper presented at the Meetings of the French Economic Association, Rennes, France.
Wacquant, L. (2007). French working-class banlieues and black American ghetto: From conflation to comparison. Qui Parle?, 16, 5–38.
Wagmiller, R. L. (2007). Race and the spatial segregation of jobless men in urban America. Demography, 44, 539–562.
Wilson, W. J. (1987). The truly disadvantaged: The inner city, the underclass, and public policy. Chicago, IL: University of Chicago Press.
Wodtke, G. T., Harding, D. J., & Elwert, F. (2011). Neighborhood effects in temporal perspective: The impact of long-term exposure to concentrated disadvantage on high school graduation. American Sociological Review, 76, 713–736.
World Bank. (2013). Indicators [Data Set]. Retrieved from http://data.worldbank.org/indicator
© Population Association of America 2016
1. Department of Sociology, Northwestern University, Evanston, USA
2. CNRS & Sciences Po Paris, Observatoire Sociologique du Changement, Paris Cedex 07, France
Quillian, L. & Lagrange, H. Demography (2016) 53: 1051. https://doi.org/10.1007/s13524-016-0491-9
|
CommonCrawl
|
A Neutron Diffraction Study of the Effect Produced by the Direction of Crystal Growth on the Distribution of Residual Stresses in Austenite Steel Prisms Manufactured by Selective Laser Melting
STRENGTH AND PLASTICITY
I. D. Karpov, V. T. Em, S. A. Rylov, E. A. Sul'yanova, D. I. Sukhov, and N. A. Khodyrev
Physics of Metals and Metallography, volume 123, pages 624–631 (2022)
The effect of the growth direction chosen in selective laser melting on the distribution of residual stresses was studied using 20 × 20 × 70-mm prisms of 316L steel as an example. Prisms with different growth directions (along the long and the short edge) were investigated. Neutron stress diffractometry, which provides a nondestructive measurement of all three stress tensor components in massive materials and products, was used. In both cases, compressive stresses are formed in the central part of the prism; they fall to near zero or turn into tensile stresses when approaching the surface. In the prism grown vertically along the long edge, the tensile stresses are higher and occupy a larger volume than in the prism grown along the short edge. The maximum tensile stresses (~500 MPa) near the vertical prism edges are close to the ultimate yield strength of the material (~540 MPa). The maximum compressive stresses (~–400 MPa) are formed in the central part of the vertical prism.
Additive technologies (ATs) are one of the most dynamically developing trends in contemporary industry. Compared to traditional technologies, they make it possible to substantially reduce material consumption, labor effort, and manufacturing time. Additive technologies also make it possible to obtain fundamentally new materials and products that cannot be manufactured by traditional technologies. Metal products are manufactured additively by the layer-by-layer deposition of fused metal to the required thickness. The technologies differ in the way the layers are formed (selective powder bed melting or direct growth), in the energy source (laser, electron beam, or electric arc), and in the feedstock (powder or wire). The most widely used methods are based on laser radiation: selective laser melting (SLM) and direct laser deposition (DLD). In the SLM method, a homogeneous powder bed several tens of microns thick is first spread on a substrate and then sintered by a laser beam to form a horizontal layer of the part; the next layer of powder is deposited and the process is repeated. In the DLD method, metallic powder is fed through a nozzle into the region where a laser beam forms a local melt pool, and the horizontal layer of the part is formed by moving the beam.
An AT material is created under high temperature gradients and cooling rates. For this reason, considerable residual stresses appear in the material; they can substantially worsen its fatigue strength and result in warping, cracking, and distortion of the part during its formation [1–3]. High residual stresses are one of the major factors holding back the wide adoption of additive manufacturing of metallic products. To understand the nature of residual stresses in materials manufactured by additive technologies and to search for methods of reducing them, the effect of the technology type and of the process parameters (material, scanning pattern, scanning speed, rotation angle between neighboring layers, laser power, specimen geometry, etc.) on the distribution of residual stresses has been studied [4–9]. Residual stresses are difficult to calculate theoretically; thus, experimental investigations are important for the verification of different computational models. Note that the growth direction is also one of the technological parameters; however, comparatively few papers have been devoted to it [9–13].
The objective of this study was to investigate, by neutron stress diffractometry, the effect of the growth direction on the distribution of residual stresses, using two identical rectangular prisms grown from 316L steel by selective laser melting as an example, and to compare the results with the stress distribution in a prism grown from the same steel by direct laser deposition [14]. At present, neutron stress diffractometry is the only method that allows a nondestructive measurement of all three stress tensor components in massive metallic parts (up to 50 mm in thickness), owing to the high penetrating ability of neutrons [15–17]. The penetrating ability of X-rays is much lower (~10 µm in steels); thus, the X-ray method gives information only on the stresses in surface or near-surface layers of the material [15].
Preparation of Specimens
Specimens were prepared from a 316L steel metal powder composition (10–63-µm fraction; average particle size, 36 µm). The chemical composition of this steel is given in Table 1. The SLM process was performed on a Concept Laser M2 Cusing setup. Specimens of 20 × 20 × 70 mm (hereinafter, dimensions are given in millimeters) were grown by SLM on a common 316L steel base plate. The growth direction of the first prism coincided with its long edge of 70 mm (Fig. 1a), and the growth direction of the second prism coincided with its short edge of 20 mm (Fig. 1b).
Table 1. The chemical composition of 316 L steel
Fig. 1. Rectangular 316L steel prisms grown by selective laser melting: (a) vertical prism, (b) horizontal prism. L1, L2, L3, L4, L5, and L6 are the lines parallel to the Z axis (longitudinal direction) along which the stresses were measured. The cross sections XY perpendicular to the Z axis (Z = 1.5, 17, 35) in which the stresses were measured are also shown. Dimensions are given in millimeters.
After manufacturing, the specimens were separated from the base plate and supporting structures with a cutting wheel. To attain a regular shape, a layer ~1 mm in thickness was additionally removed from the horizontal prism by electrical discharge (spark) cutting from its base side such that the horizontal prism finally was 20 × 19 × 70 mm (19 mm is the vertical edge along the growth direction). The coordinate systems for both prisms are shown in Fig. 1. In both prisms, the Z axis corresponds to the longitudinal direction, the X axis is the transverse direction, and the Y axis, a normal direction.
Measurement of Stresses by the Neutron Diffraction Method
The neutron diffraction method of stress measurement is based on measuring the shift of the angular position of a diffraction peak produced by the change in the interplanar distance of the crystal lattice under tensile or compressive stresses [18]. According to the Wulff–Bragg law,
$$2 d_{hkl}\sin\theta_{hkl} = \lambda,$$
where \(d_{hkl}\) is the distance between the atomic planes of the crystal lattice with the Miller indices \(hkl\), \(\theta_{hkl}\) is the Bragg angle of scattering from the planes \((hkl)\), and \(\lambda\) is the neutron wavelength. The relative strain averaged over the gauge volume in the direction of the normal to the reflecting planes \((hkl)\) is determined as
$$\varepsilon_{hkl} = \frac{d_{hkl} - d_{0,hkl}}{d_{0,hkl}} = \frac{\sin\theta_{0,hkl} - \sin\theta_{hkl}}{\sin\theta_{0,hkl}} \approx -\left(\theta_{hkl} - \theta_{0,hkl}\right)\cot\theta_{0,hkl},$$
where \(d_{0,hkl}\) and \(\theta_{0,hkl}\) are the interplanar distance and scattering angle for the unstressed material. Hence, the interplanar distance serves as an internal strain gauge: the relative strain can be measured from the shift of a diffraction peak. Using the strain tensor components \(\varepsilon_x\), \(\varepsilon_y\), \(\varepsilon_z\) measured along the principal directions x, y, z (hereinafter, the indices \(hkl\) are omitted) and the generalized Hooke's law, the stress tensor components \(\sigma_x\), \(\sigma_y\), \(\sigma_z\) along these directions can be calculated [18] as
$$\begin{gathered}
\sigma_x = \frac{E\left[(1-2\nu)\varepsilon_x + \nu(\varepsilon_x+\varepsilon_y+\varepsilon_z)\right]}{(1+\nu)(1-2\nu)},\\
\sigma_y = \frac{E\left[(1-2\nu)\varepsilon_y + \nu(\varepsilon_x+\varepsilon_y+\varepsilon_z)\right]}{(1+\nu)(1-2\nu)},\\
\sigma_z = \frac{E\left[(1-2\nu)\varepsilon_z + \nu(\varepsilon_x+\varepsilon_y+\varepsilon_z)\right]}{(1+\nu)(1-2\nu)},
\end{gathered}$$
where \(E\) is Young's modulus and \(\nu\) is Poisson's ratio. Note that the calculation should be performed with the diffraction elastic constants \(E_{hkl}\) and \(\nu_{hkl}\) for the planes \((hkl)\) used in the strain measurements.
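As a compact numerical restatement of the two working formulas above, the following Python sketch converts a peak shift into a strain and a measured strain triplet into stresses; it uses the diffraction elastic constants for the (311) reflection quoted below, and the strain values in the example are arbitrary.
import numpy as np

E311 = 184e9    # Pa, diffraction Young's modulus for the (311) reflection of 316L
NU311 = 0.294   # diffraction Poisson's ratio for the (311) reflection

def strain_from_peak_shift(theta, theta0):
    # epsilon ~ -(theta - theta0) * cot(theta0), angles in radians
    return -(theta - theta0) / np.tan(theta0)

def stresses(eps_x, eps_y, eps_z, E=E311, nu=NU311):
    # generalized Hooke's law written out componentwise, as in the equations above
    tr = eps_x + eps_y + eps_z
    c = E / ((1 + nu) * (1 - 2 * nu))
    return (c * ((1 - 2 * nu) * eps_x + nu * tr),
            c * ((1 - 2 * nu) * eps_y + nu * tr),
            c * ((1 - 2 * nu) * eps_z + nu * tr))

sx, sy, sz = stresses(2.0e-4, 1.0e-4, -1.5e-3)
print(sx / 1e6, sy / 1e6, sz / 1e6)   # stresses in MPa for an arbitrary strain triplet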
The stress distribution was studied on the STRESS neutron diffractometer at the IR-8 reactor of the National Research Center "Kurchatov Institute" [19–21]. To reduce the neutron beam time while still obtaining information on the stress distribution over the prism volume, the measurements were performed according to the following scheme. In the vertical prism (Fig. 1a), the measurements were carried out at points located on three lines L1, L2, and L3 parallel to the vertical Z axis. These lines pass through the prism center (L1, X = 10, Y = 10), near the side prism face (L2, X = 10, Y = 1.5), and near the prism edge (L3, X = 1.5, Y = 1.5). The measurements were performed at points from Z = 2 to Z = 68 with a step of 3. Some measurements were also performed in prism cross sections XY at distances of 1.5, 17, and 35 mm from its upper face along the Z axis (Fig. 1a). The points closest to the side faces in these cross sections were at a depth of 1.5 mm. Several points at a depth of 1 mm from the surface of the side faces were additionally measured in the cross sections. In the horizontal prism (Fig. 1b), the measurements were carried out at points located on three lines L4, L5, and L6, which were parallel to the horizontal Z axis and passed through the prism center (L4, X = 10, Y = 9.5), near the lower prism face (L5, X = 10, Y = 1.5), and near its upper face (L6, X = 10, Y = 17.5). The measurements were performed at points from Z = 2 to Z = 68 with a step of 3. Similarly to the measurements in the vertical prism, the stresses were measured in prism cross sections XY at distances of 1.5, 17, and 35 mm from the side face along the Z axis. To decrease the measurement time, the measurements in the cross section Z = 1.5 were performed over a quarter of the cross section. The diffraction peak (311) of the face-centered cubic (FCC) lattice of austenitic steel 316L was measured at an angle \(2\theta \approx 91^\circ\). The reflecting plane (311) is recommended for the measurement of stresses in materials with an FCC lattice, as it has low sensitivity to microstresses [18]. The measurements were performed with a gauge volume of ∼1.5 × 1.5 × 2 mm. The points at a depth of 1 mm from the surface of the faces were measured with a gauge volume of ~1 × 1 × 3 mm. In all the measurements, the gauge volume was completely immersed in the material to avoid the measurement error that arises when it is only partially immersed [18]. Strain components were measured with a statistical error of ~50 × 10–6, which corresponds to a stress measurement error of ~20 MPa. The reference interplanar distance \(d_0\), which corresponds to the unstressed material, was determined from the condition of balance of the forces along the Z axis in the cross sections XY with Z = 17.5 and 35 [9, 15]. The spread of \(d_0\) determined for different cross sections corresponded to a stress change of less than 40 MPa at a measured point.
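One way to impose that force-balance condition numerically is sketched below in Python: assuming measurement points that sample a cross section with equal weight, the stress-free spacing d0 is found as the root of the summed longitudinal stress. The lattice spacings in the example are synthetic and chosen so that the answer is known in advance; this is only an illustration, not the data-reduction procedure actually used on the instrument.
import numpy as np
from scipy.optimize import brentq

def d0_from_force_balance(d_x, d_y, d_z, E=184e9, nu=0.294):
    # find d0 such that the longitudinal stresses on an equal-weight grid of a cross section sum to zero
    d_x, d_y, d_z = (np.asarray(a, dtype=float) for a in (d_x, d_y, d_z))

    def net_force(d0):
        ex, ey, ez = (d_x - d0) / d0, (d_y - d0) / d0, (d_z - d0) / d0
        tr = ex + ey + ez
        sz = E * ((1 - 2 * nu) * ez + nu * tr) / ((1 + nu) * (1 - 2 * nu))
        return sz.sum()

    all_d = np.concatenate([d_x, d_y, d_z])
    return brentq(net_force, 0.999 * all_d.min(), 1.001 * all_d.max())

# synthetic spacings built from a known d0 = 1.0870 (arbitrary units) and balanced strains
d0_true = 1.0870
ez = np.array([8e-4, -2e-4, -6e-4])
ex = ey = np.array([1e-4, -1e-4, 0.0])
dx, dy, dz = d0_true * (1 + ex), d0_true * (1 + ey), d0_true * (1 + ez)
print(d0_from_force_balance(dx, dy, dz))   # should recover approximately 1.0870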
In the calculations, Young's modulus \(E_{311} = 184\ \text{GPa}\) and Poisson's ratio \(\nu_{311} = 0.294\) were taken for steel 316L [15].
RESULTS OF EXPERIMENTS AND THEIR DISCUSSION
The results of measurements along the lines parallel to Z axis in the vertical (L1, L2, L3) and horizontal (L4, L5, L6) prisms are shown in Fig. 2. In the middle part of the vertical prism (15 < Z < 55), all the stress components varied within a range of ±50 MPa along all the lines (Fig. 2a).
Fig. 2. The distribution of stress components along the lines parallel to the Z axis in (a) vertical (L1, L2, L3) and (b) horizontal (L4, L5, L6) prisms.
The maximum compressive stress (\({{\sigma }_{z}}\) ~ –400 MPa) is observed in the middle part of the measurement region of line L1 (X = 10, Y = 10), which passes through the prism center. Compressive stresses abruptly decrease when approaching the upper and lower prism faces to near-zero values, as the stress component normal to the free surface must be equal to zero on the surface. In contrast, the components \({{\sigma }_{x}}\) and \({{\sigma }_{y}},\) which are close to zero in the middle part, abruptly increase to ~220/270 MPa when approaching the upper/lower prism faces. Let us note that the distribution of stresses along central line L1 is in good agreement with the distribution along the central line in the steel 316L prism grown by direct laser deposition [14].
The middle region of line L2 (X = 10 mm, Y = 1.5 mm) (Fig. 2a) that passes near the central line of the side prism face has tensile stresses (\({{\sigma }_{z}}\) ~350 MPa), which abruptly decrease when approaching the upper and lower prism faces. The component \({{\sigma }_{x}}\) in the middle part of the prism is ~160 MPa. It initially slightly decreases when approaching the upper and lower faces and further grows to ~230 MPa. The component \({{\sigma }_{y}}\) is close to zero, as it is perpendicular to the surface.
The maximum values of tensile stresses (~500 MPa) are observed for the longitudinal component \({{\sigma }_{z}}\) in the middle part of the prism near the side edges along line L3 (X = 1.5, Y = 1.5) (Fig. 2a). It abruptly decreases when approaching the upper and lower prism faces, being perpendicular to them. For the same reason the stress components \({{\sigma }_{x}}~\) and \({{\sigma }_{y}}\) are close to zero at the points along line L3 near the side faces, and the maximum stress σz ≈ 500 MPa is close to the equivalent Mises stress. Let us note that the stresses near the lower prism apex are slightly higher than the stresses near the upper apex.
In the horizontal prism (Fig. 2b), similarly to the vertical prism, all the stress components except \({{\sigma }_{z}}\) near the upper and lower faces (L5, L6) are changed slightly in the middle part of the prism in the region 5 ≤ Z ≤ 65 within a range of ±50 MPa.
All the stress components along line L4 that pass through the center of the horizontal prism are compressive. When approaching the prism edges (Z = 0, Z = 70), the compressive stresses \(~{{\sigma }_{z}}\) and \({{\sigma }_{x}}~\) decrease to nearly zero values, and the component \({{\sigma }_{y}}\) parallel to the growth direction switches its sign and increases to ~200 MPa.
The stresses near the upper and lower faces of the horizontal prism (L5, L6) are close to zero or are tensile. Let us note that the stresses near the upper face (L6) are higher than the stresses near the lower face (L5).
The distributions of stresses in cross sections XY (Z = 1.5, 17, 35) along the central lines of these cross sections Y (X = 10) and X (Y = 10/9.5) in the vertical and horizontal prisms are shown in Fig. 3. The maps of the two-dimensional distribution of stresses in cross sections XY (Z = 1.5, 17, 35) are plotted in Fig. 4.
Fig. 3. The distribution of stresses in cross sections XY (Z = 1.5, 17, 35) along the central lines of cross sections Y (X = 10) and X (Y = 10/9.5) in (a) vertical and (b) horizontal prisms.
Fig. 4. Maps of the two-dimensional distribution of stresses in cross sections XY (Z = 1.5, 17, 35) of (a) vertical and (b) horizontal prisms.
Based on the results of measurements along lines L1, L2, and L3 (Fig. 2a) and in cross sections (Figs. 3a and 4a), it is possible to make the following conclusion about the distribution of stresses in the vertical prism. In a major part of the prism volume (at a depth of more than 3 mm from the side faces), the normal and transverse stress components are small, and a uniaxial stressed state (along the vertical Z axis) occurs. In the central part of a prism, high (~–400 MPa) compressive longitudinal stresses decrease when approaching the side faces and become tensile at a distance of ~3 from a face. The stresses quickly increase in the course of further approach to the face to attain ~350 MPa at a distance of 1.5 mm from the face. The maximum tensile longitudinal stresses (~500 MPa) are observed near the side faces of the prism.
The compressive stresses in the central part of the prism are compensated by the tensile stresses near its side faces. At a depth of 1.5 mm from the side faces, small (~150 MPa) normal and transverse stresses occur parallel to the faces, while the stresses perpendicular to the side faces are close to zero. Near the upper face, in the cross section Z = 1.5 mm, the vertical component σz perpendicular to this face is close to zero, whereas the transverse \(\sigma_x\) and normal \(\sigma_y\) components, which are close to zero in the middle part of the prism, increase to ~200 MPa.
The ultimate yield strength of steel 316L obtained by selective laser melting depends on the growth direction and technological process parameters [22–24]. The maximum tensile stresses (~500 MPa) near the edges in the central part of the prism are close to the ultimate yield stress of steel 316L in the growth direction (540 MPa) [25].
Based on the results of measurements along lines L4, L5, and L6 (Fig. 2b) and in the cross sections (Figs. 3b and 4b), the following conclusions can be drawn about the distribution of stresses in the horizontal prism. In the central part of the prism, at a depth of 5 mm or more from its faces, a triaxial stressed state occurs: all three stress components are compressive, reaching ~–200 MPa. The compressive stresses in the central part are compensated by the tensile stresses around it, and the tensile stress components increase when approaching the faces. The maximum values of the stress components are observed near the faces parallel to the corresponding component: σz ~ 100 MPa, σx = 300 MPa, σy = 350 MPa.
The maximum tensile stresses near the surface of the vertical prism are much higher than those in the horizontal prism and occupy a much larger volume (Figs. 4a and 4b). They are oriented along the long edge in the vertical prism (σz) and along the short edge in the horizontal prism \((\sigma_y)\). Qualitatively, this substantial difference can be explained as follows. In the vertical prism, tensile stresses oriented vertically along the growth direction (the long edge) are formed near the vertical faces. They increase with distance from the free horizontal surfaces (the prism faces) and attain a maximum at a distance of ~15 mm from the faces (Fig. 2a). In the horizontal prism, tensile stresses along the growth direction (the short edge) are also formed near the vertical faces, but the maximum stress, reached at a distance of ~10 mm, is smaller than the stress at a distance of ~15 mm from the free horizontal surfaces (the upper and lower faces). In both the vertical and the horizontal prism, there are compressive stresses in the central part and tensile stresses near the surface, in good agreement with the results of [14, 15]. Note that, to understand the reasons for the formation of the near-surface tensile stresses and of the compressive stresses compensating them in the middle part of both prisms, calculations by the finite element method are necessary.
Tensile stresses on the surface of a part worsen its corrosion resistance, strength characteristics, and cracking resistance. For this reason, when growing a massive part by selective laser melting, the growth direction should, whenever possible, be chosen parallel to the smallest dimension of the part in order to reduce the residual stresses.
The effect produced by the direction of growth on the residual stresses in 316L steel specimens manufactured by selective laser melting has been studied by neutron stress diffractometry. Using a specimen shaped as a prism as an example, it has been shown that the value and distribution of residual stresses strongly depend on the growth direction. In the prism grown along the long edge, the tensile stresses are higher and occupy a larger volume compared to an identical prism grown along the short edge. Maximum tensile stresses (~500 MPa) close to the ultimate yield stress of the material (~540 MPa) are formed near the long edges of the vertical prism. The formation of compressive stresses inside a part and tensile stresses near its surface is common for the parts manufactured by selective laser melting.
H. Köhler, K. Partes, J. R. Kornmeier, and F. Vollertsen, "Residual stresses in steel specimens induced by laser cladding and their effect on fatigue strength," Phys. Procedia 39, 354–361 (2012).
A. B. Spierings, T. L. Starr, and K. Wegener, "Fatigue performance of additive manufactured metallic parts," Rapid Prototyping J. 19, 88–94 (2013).
A. Riemer, S. Leuders, M. Thone, H. A. Richard, T. Troster, and T. Niendorf, "On the fatigue crack growth behavior in 316L stainless steel manufactured by selective laser melting," Eng. Fract. Mech. 120, 15–25 (2014).
P. Rangaswamy, M. L. Griffth, M. B. Prime, T. M. Holden, R. B. Rogge, J. M. Edwards, and R. J. Sebring, "Residual stresses in LENS components using neutron diffraction and contour method," Mater. Sci. Eng., A 399, 72–83 (2005).
L. Wang, S. D. Felicelli, and P. Pratt, "Residual stresses in LENS-deposited AISI 410 stainless steel plates," Mater. Sci. Eng., A 496, 234–241 (2008).
Y. Liu, Y. Yang, and D. Wang, "A study on the residual stress during selective laser melting (SLM) of metallic powder," Int. J. Adv. Manuf. Technol. 87, 647–656 (2016).
B. Cheng, S. Shrestha, and K. Chou, "Stress and deformation evaluations of scanning strategy effect in selective laser melting," Addit. Manuf. 12, 240–251 (2016).
J. Robinson, I. Ashton, P. Fox, E. Jones, and C. Sutcliffe, "Determination of the effect of scan strategy on residual stress in laser powder bed fusion additive manufacturing," Addit. Manuf. 23, 13–24 (2018).
B. A. Szost, S. Terzi, T. Martina, D. Boisselier, A. Prytuliak, T. Pirling, M. Hofmann, and D. J. Jarvis, "A comparative study of additive manufacturing techniques: Residual stress and microstructural analysis of CLAD and WAAM printed Ti–6Al–4V components," Mater. Des. 89, 559–567 (2016).
A. S. Wu, D. W. Brown, M. Kumar, G. F. Gallegos, and W. E. King, "An experimental investigation into additive manufacturing-induced residual stresses in 316L stainless steel," Metall. Mater. Trans. A 45, 6260–6270 (2014).
B. Vrancken, V. Cain, R. Knutsen, and J. Van Humbeeck, "Residual stress via the contour method in compact tension specimens produced via selective laser melting," Scr. Mater. 87, 29–32 (2014).
L. Mugwagwa, D. Dimitrov, S. Matope, and T. Becker, "A methodology to evaluate the influence of part geometry on residual stresses in selective laser melting," Int. Conf. Competitive Manuf. (COMA'16) (2016), pp. 133–139.
A. Salmi, G. Piscopo, E. Atzeni, P. Minetola, and L. Iuliano, "On the effect of part orientation on stress distribution in AlSi10Mg specimens fabricated by laser powder bed fusion (L-PBF)," Procedia CIRP 67, 191–196 (2018).
P. Pant, S. Proper, V. Luzin, S. Sjostrom, K. Simonsson, J. Moverare, S. Hosseini, V. Pacheco, and R. L. Peng, "Mapping of residual stresses in as-built Inconel 718 fabricated by laser powder bed fusion: A neutron diffraction study of build orientation influence on residual stresses," Addit. Manuf. 36, 101501 (2020).
P. Rangaswamy, T. M. Holden, R. Rogge, and M. L. Griffith, "Residual stresses in components formed by the laser-engineered net shaping (LENS®) process," J. Strain Anal. Eng. Des. 38 (6), 519–527 (2003).
P. J. Withers, "Depth capabilities of neutron and synchrotron diffraction strain measurement instruments. II. Practical implications," J. Appl. Crystallogr. 37, 607–612 (2004).
W. Woo, V. T. Em, B. Seong, E. Shin, P. Mikula, J. Joo, and M. Kang, "Effect of wavelength-dependent attenuation on neutron diffraction stress measurements at depth in steels," J. Appl. Crystallogr. 44, 747–754 (2011).
W. Woo, V. T. Em, P. Mikula, G. B. An, and B. Seong, "Neutron diffraction measurements of residual stresses in a 50mm thick weld," Mater. Sci. Eng., A 528, 4120–4124 (2011).
M. T. Hutchings, P. J. Withers, T. M. Holden, and T. Lorentzen, Introduction to the Characterization of Residual Stress by Neutron Diffraction, 1st ed. (CRC Press, New York, 2005).
V. T. Em, V. P. Glazkov, I. D. Karpov, N. F. Miron, V. A. Somenkov, M. N. Shushunov, V. V. Sumin, P. Mikula, and J. Šaroun, "A double-crystal monochromator for neutron stress diffractometry," Instrum. Exp. Tech., No. 4, 526–532 (2017).
V. T. Em, I. D. Karpov, V. A. Somenkov, V. P. Glazkov, A. M. Balagurov, V. V. Sumin, P. Mikula, and J. Šaroun, "Residual stress instrument with double-crystal monochromator at research reactor IR-8," Phys. B: Condens. Matter 551, 413–416 (2018).
A. I. Mertens, S. Reginster, Q. Contrepois, T. Dormal, and O. Lemaire, "Microstructures and mechanical properties of stainless steel AISI 316L processed by selective laser melting," Mater. Sci. Forum 783–786, 898–903 (2014).
G. Buchanan, V.-P. Matilinen, A. Salminen, and L. Gardnera, "Structural performance of additive manufactured metallic material and cross-sections," J. Construct. Steel Res. 136, 35–48 (2017).
J. Suryawanshi, K. G. Prashanth, and U. Ramamurty, "Mechanical behavior of selective laser melted 316L stainless steel," Mater. Sci. Eng., A 696, 113–121 (2017).
P. Erikson, "Evaluation of mechanical and microstructural properties for laser powder-bed fusion 316L," Master degree thesis (Uppsala University, Appl. Mater. Sci., 2018). http://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1231504&dswid=-6174.
This work was performed using the equipment of Unique Scientific Facility NRC IR-8, National Research Center "Kurchatov Institute."
This study was supported in part by the Ministry of Science and Higher Education of the Russian Federation within the framework of researches by the state task to the Federal Research Center Crystallography and Photonics of the Russian Academy of Sciences (project code RFMEFI62119X0035).
National Research Center "Kurchatov Institute", 123182, Moscow, Russia
I. D. Karpov, V. T. Em & S. A. Rylov
Federal Research Center Crystallography and Photonics, Russian Academy of Sciences, 119333, Moscow, Russia
E. A. Sul'yanova
All-Russian Scientific Research Institute of Aviation Materials of the National Research Center "Kurchatov Institute", 105005, Moscow, Russia
D. I. Sukhov & N. A. Khodyrev
Correspondence to I. D. Karpov.
Translated by E. Glushachenkova
Karpov, I.D., Em, V.T., Rylov, S.A. et al. A Neutron Diffraction Study of the Effect Produced by the Direction of Crystal Growth on the Distribution of Residual Stresses in Austenite Steel Prisms Manufactured by Selective Laser Melting. Phys. Metals Metallogr. 123, 624–631 (2022). https://doi.org/10.1134/S0031918X22060096
Revised: 21 March 2022
Issue Date: June 2022
additive technologies
residual stresses
neutron stress diffractometry
|
CommonCrawl
|
February 2021, 26(2): 1083-1109. doi: 10.3934/dcdsb.2020154
Numerical study of vanishing and spreading dynamics of chemotaxis systems with logistic source and a free boundary
Lei Yang 1,2, and Lianzhang Bao 2,3,,
School of Mathematics, Jilin University, Changchun, Jilin 130012, China
School of Mathematical Sciences, University of Science and Technology of China, Hefei, 230026, P. R. China
Department of Mathematics and Statistics, Auburn University, AL 36849, USA
* Corresponding author: Lianzhang Bao
Dedicated to Professor Zhuoqun Wu on occasion of his 90 birthday
Received July 2019 Revised December 2019 Published February 2021 Early access May 2020
The current paper investigates the numerical approximation of logistic type chemotaxis models in one space dimension with a free boundary. Such a model with a free boundary describes the spreading of a new or invasive species subject to the influence of some chemical substances in an environment with a free boundary representing the spreading front (see Bao and Shen [1], [2]). The main challenges in the numerical studies lie in tracking the moving free boundary and handling the nonlinear terms arising from the chemical. To overcome them, a front-fixing framework coupled with the finite difference method is introduced. The accuracy of the proposed method, the positivity of the solution, and the stability of the scheme are discussed. The numerical simulations agree well with theoretical results such as the vanishing-spreading dichotomy, local persistence, and stability. These simulations also support some conjectures for our future theoretical studies, such as the dependence of the vanishing-spreading dichotomy on the initial value $ u_0 $, the initial habitat $ h_0 $, the moving speed $ \nu $ and the chemotactic sensitivity coefficients $ \chi_1, \chi_2 $.
Keywords: Chemoattraction-repulsion system, nonlinear parabolic equations, free boundary problem, spreading-vanishing dichotomy, front-fixing, finite difference, invasive population.
Mathematics Subject Classification: Primary: 35R35, 35J65, 35K20; Secondary: 78M20, 92B05.
Citation: Lei Yang, Lianzhang Bao. Numerical study of vanishing and spreading dynamics of chemotaxis systems with logistic source and a free boundary. Discrete & Continuous Dynamical Systems - B, 2021, 26 (2) : 1083-1109. doi: 10.3934/dcdsb.2020154
L. Bao and W. Shen, Logistic type attraction-repulsion chemotaxis systems with a free boundary or unbounded boundary. I. Asymptotic dynamics in fixed unbounded domain, Discrete Contin. Dyn. Syst. Ser. A, 40 (2020), 1107-1130. doi: 10.3934/dcds.2020072. Google Scholar
L. Bao and W. Shen, Logistic type attraction-repulsion chemotaxis systems with a free boundary or unbounded boundary. II, Spreading-vanishing dichotomy in a domain with a free boundary, preprint. Google Scholar
N. Bellomo, A. Bellouquid, Y. Tao and M. Winkler, Toward a mathematical theory of Keller-Segel models of pattern formation in biological tissues, Math. Models Methods Appl. Sci., 25 (2015), 1663-1763. doi: 10.1142/S021820251550044X. Google Scholar
C. C. Chiu and J. L. Yu, An optimal adaptive time-stepping scheme for solving reaction-diffusion-chemotaxis systems, Math. Biosci. Eng., 4 (2007), 187-203. doi: 10.3934/mbe.2007.4.187. Google Scholar
J. I. Diaz and T. Nagai, Symmetrization in a parabolic-elliptic system related to chemotaxis, Advances in Mathematical Science and Applications, 5 (1995), 659-680. Google Scholar
J. I. Diaz, T. Nagai and J.-M. Rakotoson, Symmetrization techniques on unbounded domains: Application to a chemotaxis system on $\mathbb{R}^N$, J. Differential Equations, 145 (1998), 156-183. doi: 10.1006/jdeq.1997.3389. Google Scholar
Y.-H. Du and Z.-G. Lin, Spreading-vanishing dichotomy in the diffusive logistic model with a free boundary, SIAM J. Math. Anal., 42 (2010), 377-405. doi: 10.1137/090771089. Google Scholar
Y.-H. Du and X. Liang, Pulsating semi-waves in periodic media and spreading speed determined by a free boundary model, Ann. Inst. H. Poincaré Anal. Non Linéaire, 32 (2015), 279–305. doi: 10.1016/j.anihpc.2013.11.004. Google Scholar
E. Galakhov, O. Salieva and J. I. Tello, On a parabolic-elliptic system with chemotaxis and logistic type growth, J. Differential Equations, 261 (2016), 4631-4647. doi: 10.1016/j.jde.2016.07.008. Google Scholar
D. Horstmann and M. Winkler, Boundedness vs. blow-up in a chemotaxis system, J. Differential Equations, 215 (2005), 52-107. doi: 10.1016/j.jde.2004.10.022. Google Scholar
T. B. Issa and W. Shen, Dynamics in chemotaxis models of parabolic-elliptic type on bounded domain with time and space dependent logistic sources, SIAM J. Appl. Dyn. Syst., 16 (2017), 926-973. doi: 10.1137/16M1092428. Google Scholar
H. Jin and Z. A. Wang, Boundedness, blowup and critical mass phenomenon in competing chemotaxis, J. Differential Equations, 260 (2016), 162-196. doi: 10.1016/j.jde.2015.08.040. Google Scholar
K. Kanga and A. Steven, Blowup and global solutions in a chemotaxis-growth system, Nonlinear Analysis, 135 (2016), 57-72. doi: 10.1016/j.na.2016.01.017. Google Scholar
E. F. Keller and L. A. Segel, Initiation of slime mold aggregation viewed as an instability, J. Theoret. Biol., 26 (1970), 399-415. doi: 10.1016/0022-5193(70)90092-5. Google Scholar
E. F. Keller and L. A. Segel, A Model for chemotaxis, J. Theoret. Biol., 30 (1971), 225-234. doi: 10.1016/0022-5193(71)90050-6. Google Scholar
H. G. Landau, Heat conduction in a melting solid, Quaterly of Applied Mathematics, 8 (1950), 81-94. doi: 10.1090/qam/33441. Google Scholar
F. Li, X. Liang and W. Shen, Diffusive KPP equations with free boundaries in time almost periodic environments: I. Spreading and vanishing dichotomy, Discrete Contin. Dyn. Syst, 36 (2016), 3317-3338. doi: 10.3934/dcds.2016.36.3317. Google Scholar
F. Li, X. Liang and W. Shen, Diffusive KPP equations with free boundaries in time almost periodic environments: II. Spreading speeds and semi-wave solutions, J. Differential Equations, 261 (2016), 2403-2445. doi: 10.1016/j.jde.2016.04.035. Google Scholar
R. H. Li, Z. Y. Chen and W. Wu, Generalized Difference Methods for Differential Equations- Numerical Analysis of Finite Volume Methods, Marcel Dekker, Inc, 2000. Google Scholar
X. J. Li, C. W. Shu and Y. Yang, Local discontinuous Galerkin method for the Keller-Segel chemotaxis model, J. Sci. Comput., 73 (2017), 943-967. doi: 10.1007/s10915-016-0354-y. Google Scholar
J. G. Liu, L. Wang and Z. N. Zhou, Positivity-preserving and asymptotic preserving method for 2D Keller-Segal equations, Math. Comp., 87 (2018), 1165-1189. doi: 10.1090/mcom/3250. Google Scholar
S. Liu and X. F. Liu, Numerical methods for a two-species competition-diffusion model with free boundaries, Mathematics, 6 (2018), 72-96. Google Scholar
S. Liu, Y. H. Du and X. F. Liu, Numerical studies of a class of reaction-diffusion equations with stefan conditions, International Journal of Computer Mathematics, 97 (2020), 959-979. doi: 10.1080/00207160.2019.1599868. Google Scholar
J. L. Lockwood, M. F. Hoopes and M. P. Marchetti, Invasion Ecology, Blackwell Publishing, 2007. Google Scholar
M. Luca, A. Chavez-Ross, L. Edelstein-Keshet and A. Mogilner, Chemotactic signaling, microglia, and Alzheimer's disease senile plaques: Is there a connection?, Bulletin of Mathematical Biology, 65 (2003), 693-730. doi: 10.1016/S0092-8240(03)00030-2. Google Scholar
M.-A. Piqueras, R. Company and L. Jódar, A front-fixing numerical method for a free boundary nonlinear diffusion logistic population model, J. Comput. Appl. Math., 309 (2017), 473-481. doi: 10.1016/j.cam.2016.02.029. Google Scholar
T. Nagai, T. Senba and K. Yoshida, Application of the Trudinger-Moser inequality to a parabolic system of chemotaxis, Funkcialaj Ekvacioj, 40 (1997), 411-433. Google Scholar
N. Saito and T. Suzuki, Notes on finite difference schemes to a parabolic-elliptic system modelling chemotaxis, Appl. Math. Comput., 171 (2005), 72-90. doi: 10.1016/j.amc.2005.01.037. Google Scholar
R. B. Salako and W. Shen, Spreading Speeds and Traveling waves of a parabolic-elliptic chemotaxis system with logistic source on $\mathbb{R}^N$, Discrete Contin. Dyn. Syst., 37 (2017), 6189-6225. doi: 10.3934/dcds.2017268. Google Scholar
R. B. Salako, W. Shen and S. W. Xue, Can chemotaxis speed up or slow down the spatial spreading in parabolic-elliptic chemotaxis systems with logistic source?, J. Math. Biol., 79 (2019), 1455-1490. doi: 10.1007/s00285-019-01400-0. Google Scholar
N. Shigesada and K. Kawasaki, Biological Invasions: Theory and Practice, Oxford Series in Ecology and Evolution, Oxford Univ. Press., Oxford, 1997. Google Scholar
Y. Sugiyama, Global existence in sub-critical cases and finite time blow up in super critical cases to degenerate Keller-Segel systems, Differential Integral Equations, 19 (2006), 841-876. Google Scholar
Y. Sugiyama and H. Kunii, Global Existence and decay properties for a degenerate keller-Segel model with a power factor in drift term, J. Differential Equations, 227 (2006), 333-364. doi: 10.1016/j.jde.2006.03.003. Google Scholar
Y.-S. Tao and Z. A. Wang, Competing effects of attraction vs. repulsion in chemotaxis, Math. Models Methods Appl. Sci., 23 (2013), 1-36. doi: 10.1142/S0218202512500443. Google Scholar
J. I. Tello and M. Winkler, A chemotaxis system with logistic source, Communications in Partial Differential Equations, 32 (2007), 849-877. doi: 10.1080/03605300701319003. Google Scholar
L. Wang, C. Mu and P. Zheng, On a quasilinear parabolic-elliptic chemotaxis system with logistic source, J. Differential Equations, 256 (2014), 1847-1872. doi: 10.1016/j.jde.2013.12.007. Google Scholar
M. Winkler, Aggregation vs. global diffusive behavior in the higher-dimensional Keller-Segel model, J. Differential Equations, 248 (2010), 2889-2905. doi: 10.1016/j.jde.2010.02.008. Google Scholar
M. Winkler, Blow-up in a higher-dimensional chemotaxis system despite logistic growth restriction, Journal of Mathematical Analysis and Applications, 384 (2011), 261-272. doi: 10.1016/j.jmaa.2011.05.057. Google Scholar
M. Winkler, Finite-time blow-up in the higher-dimensional parabolic-parabolic Keller-Segel system, J. Math. Pures Appl., 100 (2013), 748-767. doi: 10.1016/j.matpur.2013.01.020. Google Scholar
M. Winkler, Global asymptotic stability of constant equilibria in a fully parabolic chemotaxis system with strong logistic dampening, J. Differential Equations, 257 (2014), 1056-1077. doi: 10.1016/j.jde.2014.04.023. Google Scholar
M. Winkler, How far can chemotactic cross-diffusion enforce exceeding carrying capacities?, J. Nonlinear Sci., 24 (2014), 809-855. doi: 10.1007/s00332-014-9205-x. Google Scholar
T. Yokota and N. Yoshino, Existence of solutions to chemotaxis dynamics with logistic source, Discrete Contin. Dyn. Syst. Dynamical systems, differential equations and applications. 10th AIMS Conference. Suppl., 2015 (2015), 1125-1133. doi: 10.3934/proc.2015.1125. Google Scholar
P. Zheng, C. Mu, X. Hu and Y. Tian, Boundedness of solutions in a chemotaxis system with nonlinear sensitivity and logistic source, Math. Anal. Appl., 424 (2015), 509-522. doi: 10.1016/j.jmaa.2014.11.031. Google Scholar
Figure 1. Evolution of the density $ u(t,x) $
Figure 2. Evolution of the speed $ \frac{h(t)}{t} $
Figure 10. Evolution of the speed $ \frac{h(t)}{t} $
Figure 11. Evolution of the habitat length $ h(t) $
Figure 13. Evolution of the density $ u(t,x) $
|
CommonCrawl
|
Explain classification of chemical reactions ?
written 9 months ago by RakeshBhuse • 3.0k • modified 9 months ago
advanced engineering chemistry
written 9 months ago by RakeshBhuse • 3.0k
Classification of chemical reaction
A. Classification based on phases involved
1) Homogeneous Reaction :
A homogeneous reaction is one that involves only one phase, i.e., all the reacting materials, products and catalyst are in the same phase.
Oxidation of nitrogen oxide to nitrogen dioxide with air is a gas-phase reaction.
$$ \mathrm{NO}+\frac{1}{2} \mathrm{O}_{2} \rightarrow \mathrm{NO}_{2} $$
2) Heterogeneous reaction
A heterogeneous reaction is one that involves more than one phase, i.e., at least one of the reactants, catalyst or products is present in a phase different from the remaining components of the reacting system.
Oxidation of sulfur dioxide to sulfur trioxide using a vanadium pentoxide catalyst is a heterogeneous reaction, as $\mathrm{SO}_{2}$, $\mathrm{O}_{2}$ and $\mathrm{SO}_{3}$ are gaseous while $\mathrm{V}_{2} \mathrm{O}_{5}$ is a solid material.
$$ \mathrm{SO}_{2}+\frac{1}{2} \mathrm{O}_{2} \rightarrow \mathrm{SO}_{3} $$
B. Classification based on catalyst property
1) Catalytic reaction
Catalytic reactions are those which involve the use of a catalyst to enhance the rate (speed) of the reaction.
$$ \mathrm{C}_{2} \mathrm{H}_{4}+\mathrm{H}_{2} \underset{\text { Heat }}{\stackrel{\mathrm{Ni}}{\longrightarrow}} \mathrm{{C}_{2}H_6} $$
Hydrogenation of ethylene is a catalytic reaction which makes use of a nickel catalyst.
2) Non-catalytic reactions
Non-catalytic reactions are those which do not involve the use of a catalyst.
Oxidation of $\mathrm{NO}$ to $\mathrm{NO}_{2}$ is a non-catalytic reaction: $$ \mathrm{NO}+\frac{1}{2} \mathrm{O}_{2} \rightarrow \mathrm{NO}_{2} $$
C. Classification based on the molecularity of a reaction
Reactions are classified based upon the number of molecules that take part in the reaction (in the rate-determining step) as unimolecular, bimolecular and termolecular reactions.
1) Decomposition of cyclobutane is a unimolecular reaction.
$$\text{cyclobutane}\rightarrow 2\ \text{ethylene}$$
2) Decomposition of hydrogen iodide is a bimolecular reaction which involves collision of two molecules.
$$ 2 \mathrm{HI} \rightarrow \mathrm{I}_{2}+\mathrm{H}_{2} $$
3) Oxidation of $\mathrm{NO}$ to $\mathrm{NO}_{2}$ is a trimolecular/ termolecular reaction which involves collision of three molecules.
$$ 2 \mathrm{NO}+\mathrm{O}_{2} \rightarrow 2 \mathrm{NO}_{2} $$
D. Classification based on heat effect
Reactions are classified as exothermic or endothermic according to whether they give off heat to, or absorb heat from, the surroundings.
1) An exothermic reaction is one in which heat is evolved.
The reaction between $\mathrm{CO}$ and $\mathrm{H_2}$ to produce methanol is an exothermic reaction.
$$\mathrm {CO+2H_2 \stackrel{Cu}\rightarrow CH_3OH+\text{heat}}$$
2) An endothermic reaction is one in which heat is absorbed.
Dehydration of ethyl alcohol to produce ethylene is an endothermic reaction.
$$ \mathrm {C_2H_5OH \stackrel{Al_2O_3}\rightarrow C_2H_4+H_2O -\text{Heat}}$$
E. Classification based on the order of a reaction
Such as first order reaction, second order reaction, third order reaction, etc
1) A reaction whose overall order is one (i.e., the sum of the orders with respect to the participants in the reaction is unity) is called a first order reaction.
Decomposition of nitrogen pentoxide is a first order reaction.
$$ \mathrm{N}_{2} \mathrm{O}_{5} \rightarrow \mathrm{NO}_{2}+\frac{3}{2} \mathrm{O}_{2} $$
2) A reaction for which the sum of the orders with respect to the reactants participating in the reaction is two is called a second order reaction.
Saponification of ester is a second order reaction.
3) A reaction for which the sum of the orders with respect to the reactants participating in the reaction is three is called a third order reaction.
$$ 2 \mathrm{NO}+\mathrm{H}_{2} \longrightarrow \mathrm{N}_{2}\mathrm{O}+\mathrm{H}_{2} \mathrm{O} $$
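As a quick illustration of how the overall order follows from the rate law (this sketch is an addition for illustration only; the rate constant and concentrations are made-up values):

```python
# Hypothetical rate law: rate = k [NO]^2 [H2]^1, so the overall order is 2 + 1 = 3.
orders = {"NO": 2, "H2": 1}                 # individual orders (assumed, for illustration)
overall_order = sum(orders.values())        # 3 -> a third order reaction

k = 1.0e-3                                  # made-up rate constant
conc = {"NO": 0.10, "H2": 0.05}             # made-up concentrations in mol/L

rate = k
for species, n in orders.items():
    rate *= conc[species] ** n              # multiply by each concentration raised to its order

print(overall_order, rate)
```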
F. Classification based on reaction direction
Such as reversible and irreversible reactions based upon whether they proceed in one or both the directions.
1) Reversible reactions are those in which the forward and reverse reactions take place simultaneously.
Example: Esterification is a reversible reaction.
$$\mathrm{C}_{2} \mathrm{H}_{5} \mathrm{OH}+\mathrm{CH}_{3} \mathrm{COOH} \stackrel{\mathrm{H}^{+}} \leftrightarrow \mathrm{CH}_{3} \mathrm{COOC_2H}_{5} +\mathrm{H}_{2} \mathrm{O}$$
2) Irreversible reactions are those which can proceed only in one direction.
Example: Nitration of benzene is an irreversible reaction.
$$\mathrm {C_6H_6+HNO_3 \stackrel{H^+} \rightarrow C_6H_5NO_2 +H_2O}$$
|
CommonCrawl
|
What are non-heritable changes to genomes?
I am told that mutations are heritable changes to the genome.
So this begs the question - what are non-heritable changes to genome?
Dissenter
$\begingroup$ Mutations are not necessarily heritable changes to the genome. Think of newly acquired mutations that lead to cancer. These are not heritable. Only mutations which are present in the sperm or egg cells are heritable, no matter whether these are old or newly acquired. Non-heritable changes in the genome occur anywhere else, for example in your skin after prolonged sun exposure. $\endgroup$ – Chris♦ Oct 30 '14 at 19:28
$\begingroup$ @Chris "Mutations Are Heritable Changes in DNA" $\endgroup$ – Dissenter Oct 30 '14 at 19:33
$\begingroup$ The passage says mutations are changes in the DNA sequence that are passed on from one cell or organism to another. Their definition of heritable seems to be how a layman would perceive it, not necessarily how an evolutionary biologist would. $\endgroup$ – canadianer Oct 30 '14 at 22:17
$\begingroup$ Also what is your definition of changes to the genome? Sequence changes, epigenetic changes, regulatory changes? $\endgroup$ – canadianer Oct 30 '14 at 22:23
$\begingroup$ Somatic mutations are "heritable" by the daughter cells of the somatic cell. It is indeed heritable in the cellular level; doesn't mean that it will always be inherited (what if the cell dies/doesn't divide) $\endgroup$ – WYSIWYG Oct 31 '14 at 6:03
I don't know what you really mean by "heritable changes to the genome". I think you will understand why this sentence makes no sense after reading what follows. I start with some background and then try to address directly what confuses you.
Short introduction to the concept of heritability
The concept of heritability may have two meanings.
Heritability is a concept defined at the population level for one given trait. The heritability ($h_B^2$) (in the broad sense) is the ratio of the genetic variance $V_g$ over the phenotypic variance $V_p$, where the phenotypic variance can itself be decomposed into environmental variance $V_e$ and genetic variance $V_g$ (and their covariance, which we will neglect for the purpose of this question).
$$h_B^2 = \frac{V_{g}}{V_{p}} = \frac{V_{g}}{V_{e} + V_{g}}$$
When saying environmental variance $V_e$, we don't refer to the total variance in the environment (such as the variance in temperature for example) but we refer to the phenotypic variance (in a given population of a given trait of interest) that is caused by environmental variance. The same logic is true for the genetic variance.
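As a minimal numerical illustration of this ratio (the variance components below are invented for the example, not taken from any real data set):

```python
def broad_sense_heritability(v_g, v_e):
    """h_B^2 = V_g / (V_g + V_e), neglecting the covariance term as in the text above."""
    v_p = v_g + v_e                  # phenotypic variance
    return v_g / v_p

# invented variance components for a hypothetical trait in a hypothetical population
print(broad_sense_heritability(v_g=3.0, v_e=7.0))   # 0.3
```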
Inheritance
While the concept of heritability applies to phenotypic traits, the concept of inheritance can apply to DNA material. A mutation is inherited if the offspring receives the sequence from his/her mother or from his/her father (assuming we are talking about a species that has sexual reproduction with 2 genders).
Somatic versus germline mutations
As pointed out by @Chris in the comments, in multicellular organisms not all mutations can be transmitted to the multicellular offspring. Most of the cells do not give rise to any other multicellular organism. For example, imagine that while the biceps is under development at a very young age, a mutation occurs that will be inherited by the daughter cells but not by the multicellular offspring. We call the line of cells that do not give rise to gametes (sperm and ovules) the soma line, in opposition to the germ line, which gives rise to the gametes.
To answer your question
As soon as a given locus (position on the DNA) has some variance and that this variance explains some phenotypic variance, then the phenotypic trait it influences has a heritability greater than zero. If the variance at this locus has no effect on the phenotype, then it's heritability is zero (because $V_g = 0$, although there is some variance in the actual sequence). If the phenotypic trait of interest does not show any variance (in the population considered), then the concept of heritability is undefined for this trait!
I think you may be confusing "inherited trait" and "heritable trait". A mutation will necessarily be transmissible to the offspring (except if it happens to be in the somatic line, in which case it will only be transmitted to daughter cells but not to the multicellular offspring), but that doesn't mean that this mutation will necessarily explain some variance in a phenotypic trait. A new mutation is necessarily heritable (pay attention to the soma vs germ line distinction), but that does not mean that a phenotypic trait will have higher heritability thanks to this mutation.
Remi.b
$\begingroup$ it's what my book tells me; "Mutations Are Heritable Changes in DNA." $\endgroup$ – Dissenter Oct 30 '14 at 19:32
$\begingroup$ See my update. Hope that will help you. I think you got messed up by the concepts of heritability and inheritance. The answer is maybe complciated I am a bit scared to have just render things even more complicated. But I hope it will help you. I've got to go now, so I cannot give you further answers today! Sorry $\endgroup$ – Remi.b Oct 30 '14 at 19:51
$\begingroup$ @Dissenter yes, mutations are heritable i.e. they can be inherited but they don't necessarily get inherited since during meiosis you have processes such as independent assortment and chromosomal cross over, which means the gamete won't necessarily have the mutation of the parents! $\endgroup$ – Bez Oct 31 '14 at 0:43
Most mutations to the DNA are heritable, but not all are inherited.
Mutations occur in the DNA, the DNA is then replicated and transmitted to offspring (cells or organisms). Generally all mutations are heritable because they can be inherited, but some aren't inherited - either by random chance (drift) or because of deleterious fitness effects (selection). Any mutation which makes the cell less fit has a reduced chance of being inherited. Even mutations leading to cancer are inherited at some level (from mother cancer cell to daughter cancer cells), but at some point that lineage of inheritance will stop because it will kill the host. This is no different to species/lineages of organisms going extinct - if I carried a novel mutation but didn't have children then that mutation is not inherited.
*I suppose a mutation which stops DNA from being replicated could be classed as not-heritable because the DNA is not going to be replicated, but that depends on there being no other sources of replication machinery (other cells perhaps) - a process I don't know well enough to offer any firm conclusions on.
Non-heritable changes to the genome
DNA can be altered beyond just sequence mutation, for example by methylation - but this can act as a heritable change to the genome (genomic imprinting). Perhaps gene expression with strong environmental effects on gene expression could be considered not heritable (because if the environment is not "inherited" then the parent-offspring regression would be 0), but maybe that's a change to the transcriptome rather than the genome.
rg255
|
CommonCrawl
|
Fight Finance
Question 44 NPV
The required return of a project is 10%, given as an effective annual rate. Assume that the cash flows shown in the table are paid all at once at the given point in time.
What is the Net Present Value (NPV) of the project?
Project Cash Flows
Time (yrs) Cash flow ($)
0 -100
(a) -100
(b) 0
(e) 121
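Since the full cash-flow table is not reproduced above, the following is only a minimal sketch of how an NPV of this kind is computed from yearly cash flows and an effective annual rate; the cash flows in the example are placeholders, not the question's values:

```python
def npv(rate, cash_flows):
    """NPV of cash flows paid once per year, the first at t=0, discounted at an effective annual rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# placeholder cash flows: -100 now, then 60 at the end of each of the next two years
print(round(npv(0.10, [-100, 60, 60]), 2))   # -100 + 60/1.1 + 60/1.1**2 ≈ 4.13
```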
Question 250 NPV, Loan, arbitrage table
Your neighbour asks you for a loan of $100 and offers to pay you back $120 in one year.
You don't actually have any money right now, but you can borrow and lend from the bank at a rate of 10% pa. Rates are given as effective annual rates.
Assume that your neighbour will definitely pay you back. Ignore interest tax shields and transaction costs.
The Net Present Value (NPV) of lending to your neighbour is $9.09. Describe what you would do to actually receive a $9.09 cash flow right now with zero net cash flows in the future.
(a) Borrow $109.09 from the bank and lend $100 of it to your neighbour now.
(b) Borrow $100 from the bank and lend it to your neighbour now.
(c) Borrow $209.09 from the bank and lend $100 to your neighbour now.
(d) Borrow $120 from the bank and lend $100 of it to your neighbour now.
(e) Borrow $90.91 from the bank and lend it to your neighbour now.
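A minimal sketch of how one could tabulate the cash flows of a generic "borrow from the bank and lend to the neighbour" strategy, to check which combination leaves $9.09 now and nothing in one year (the amounts passed in are parameters, not an assertion about which answer choice is intended):

```python
def strategy_cash_flows(borrow_from_bank, lend_to_neighbour,
                        bank_rate=0.10, repayment_from_neighbour=120):
    """Net cash flow now and in one year for a borrow-and-lend strategy."""
    now = borrow_from_bank - lend_to_neighbour
    in_one_year = repayment_from_neighbour - borrow_from_bank * (1 + bank_rate)
    return round(now, 2), round(in_one_year, 2)

# e.g. borrowing the present value of $120 (120/1.1 ≈ 109.09) and lending $100 of it:
print(strategy_cash_flows(borrow_from_bank=120 / 1.1, lend_to_neighbour=100))
# roughly (9.09, 0.0): $9.09 now, nothing owed or received in one year
```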
Question 501 NPV, IRR, pay back period
The below graph shows a project's net present value (NPV) against its annual discount rate.
Which of the following statements is NOT correct?
(a) When the project's discount rate is 18% pa, the NPV is approximately -$30m.
(b) The payback period is infinite, the project never pays itself off.
(c) The addition of the project's cash flows, ignoring the time value of money, is approximately $20m.
(d) The project's IRR is approximately 5.5% pa.
(e) As the discount rate rises, the NPV falls.
Question 532 mutually exclusive projects, NPV, IRR
An investor owns a whole level of an old office building which is currently worth $1 million. There are three mutually exclusive projects that can be started by the investor. The office building level can be:
Rented out to a tenant for one year at $0.1m paid immediately, and then sold for $0.99m in one year.
Refurbished into more modern commercial office rooms at a cost of $1m now, and then sold for $2.4m when the refurbishment is finished in one year.
Converted into residential apartments at a cost of $2m now, and then sold for $3.4m when the conversion is finished in one year.
All of the development projects have the same risk so the required return of each is 10% pa. The table below shows the estimated cash flows and internal rates of returns (IRR's).
Mutually Exclusive Projects
Project | Cash flow now ($) | Cash flow in one year ($) | IRR (% pa)
Rent then sell as is | -900,000 | 990,000 | 10
Refurbishment into modern offices | -2,000,000 | 2,400,000 | 20
Conversion into residential apartments | -3,000,000 | 3,400,000 | 13.33
Which project should the investor accept?
(a) Rent then sell as is.
(b) Refurbishment into modern offices.
(c) Conversion into residential apartments.
(d) All of the above.
(e) Any of the above.
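A minimal sketch that just reproduces the arithmetic behind the table: each project's NPV at the 10% pa required return and its single-period IRR. Which project to accept is left to the reader and the NPV rule:

```python
projects = {
    "Rent then sell as is":                   (-900_000,   990_000),
    "Refurbishment into modern offices":      (-2_000_000, 2_400_000),
    "Conversion into residential apartments": (-3_000_000, 3_400_000),
}
r = 0.10   # required return, effective annual

for name, (cf0, cf1) in projects.items():
    npv = cf0 + cf1 / (1 + r)       # single future cash flow, discounted one year
    irr = cf1 / -cf0 - 1            # single-period IRR, matching the table's IRR column
    print(f"{name}: NPV = {npv:,.0f}, IRR = {irr:.2%}")
```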
Question 353 income and capital returns, inflation, real and nominal returns and cash flows, real estate
A residential investment property has an expected nominal total return of 6% pa and nominal capital return of 3% pa.
Inflation is expected to be 2% pa. All rates are given as effective annual rates.
What are the property's expected real total, capital and income returns? The answer choices below are given in the same order.
(a) 3.9216%, 2.9412%, 0.9804%.
(b) 3.9216%, 0.9804%, 2.9412%.
(c) 3.9216%, 0.9804%, 0.9804%.
(d) 1.9804%, 1.0000%, 0.9804%.
(e) 1.9608%, 0.9804%, 0.9804%.
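One common convention is to deflate nominal rates by (1 + inflation); a minimal sketch under that convention (whether this matches the intended answer choice is left to the reader):

```python
nominal_total, nominal_capital, inflation = 0.06, 0.03, 0.02
nominal_income = nominal_total - nominal_capital                 # 3% pa rental yield

real_total   = (1 + nominal_total)   / (1 + inflation) - 1       # ≈ 0.039216
real_capital = (1 + nominal_capital) / (1 + inflation) - 1       # ≈ 0.009804
real_income  = real_total - real_capital                         # ≈ 0.029412
print(real_total, real_capital, real_income)
```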
Question 407 income and capital returns, inflation, real and nominal returns and cash flows
A stock has a real expected total return of 7% pa and a real expected capital return of 2% pa.
What is the nominal expected total return, capital return and dividend yield? The answers below are given in the same order.
(a) 11.100%, 4.000%, 7.100%.
(b) 11.140%, 4.040%, 7.100%.
(c) 4.902%, 0.000%, 4.902%.
(d) 9.140%, 4.040%, 5.100%.
(e) 9.140%, 4.040%, 7.100%.
Question 531 bankruptcy or insolvency, capital structure, risk, limited liability
Who is most in danger of being personally bankrupt? Assume that all of their businesses' assets are highly liquid and can therefore be sold immediately.
(a) Alice has $6,000 cash, owes $10,000 credit card debt due immediately and 100% owns a sole tradership business with assets worth $10,000 and liabilities of $3,000.
(b) Billy has $10,000 cash, owes $6,000 credit card debt due immediately and 100% owns a corporate business with assets worth $3,000 and liabilities of $10,000.
(c) Carla has $6,000 cash, owes $10,000 credit card debt due immediately and 100% owns a corporate business with assets worth $10,000 and liabilities of $3,000.
(d) Darren has $10,000 cash, owes $6,000 credit card debt due immediately and 100% owns a sole tradership business with assets worth $3,000 and liabilities of $10,000.
(e) Ernie has $1,000 cash, lent $3,000 to his friend, and doesn't have any personal debt or own any businesses.
Question 295 inflation, real and nominal returns and cash flows, NPV
When valuing assets using discounted cash flow (net present value) methods, it is important to consider inflation. To properly deal with inflation:
(I) Discount nominal cash flows by nominal discount rates.
(II) Discount nominal cash flows by real discount rates.
(III) Discount real cash flows by nominal discount rates.
(IV) Discount real cash flows by real discount rates.
Which of the above statements is or are correct?
(a) I only.
(b) III only.
(c) IV only.
(d) I and IV only.
(e) II and III only.
Question 300 NPV, opportunity cost
What is the net present value (NPV) of undertaking a full-time Australian undergraduate business degree as an Australian citizen? Only include the cash flows over the duration of the degree, ignore any benefits or costs of the degree after it's completed.
Assume the following:
The degree takes 3 years to complete and all students pass all subjects.
There are 2 semesters per year and 4 subjects per semester.
University fees per subject per semester are $1,277, paid at the start of each semester. Fees are expected to stay constant for the next 3 years.
There are 52 weeks per year.
The first semester is just about to start (t=0). The first semester lasts for 19 weeks (t=0 to 19).
The second semester starts immediately afterwards (t=19) and lasts for another 19 weeks (t=19 to 38).
The summer holidays begin after the second semester ends and last for 14 weeks (t=38 to 52). Then the first semester begins the next year, and so on.
Working full time at the grocery store instead of studying full-time pays $20/hr and you can work 35 hours per week. Wages are paid at the end of each week.
Full-time students can work full-time during the summer holiday at the grocery store for the same rate of $20/hr for 35 hours per week. Wages are paid at the end of each week.
The discount rate is 9.8% pa. All rates and cash flows are real. Inflation is expected to be 3% pa. All rates are effective annual.
The NPV of costs from undertaking the university degree is:
(a) $98,385.98
(b) $97,915.91
(c) $75,130.29
(d) $54,018.93
(e) $27,523.89
Question 43 pay back period
A project to build a toll road will take 3 years to complete, costing three payments of $50 million, paid at the start of each year (at times 0, 1, and 2).
After completion, the toll road will yield a constant $10 million at the end of each year forever with no costs. So the first payment will be at t=4.
The required return of the project is 10% pa given as an effective nominal rate. All cash flows are nominal.
What is the payback period?
(a) Negative since the NPV is negative.
(b) Zero since the project's internal rate of return is less than the required return.
(c) 15 years.
(d) 18 years.
(e) Infinite, since the project will never pay itself off.
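A minimal sketch of the usual (undiscounted) payback calculation for these cash flows: accumulate them year by year and report the first year at which the running total stops being negative:

```python
horizon = 60                                   # years to scan; the loop reports if it never pays back
cash_flows = {t: -50 for t in range(3)}        # $50m construction cost at t = 0, 1, 2
cash_flows.update({t: 10 for t in range(4, horizon + 1)})   # $10m at the end of each year from t = 4

cumulative = 0
for t in range(horizon + 1):
    cumulative += cash_flows.get(t, 0)
    if cumulative >= 0:
        print(f"cumulative cash flow first non-negative at t = {t} years")
        break
else:
    print("never pays itself back within the horizon")
```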
Question 269 time calculation, APR
A student won $1m in a lottery. Currently the money is in a bank account which pays interest at 6% pa, given as an APR compounding per month.
She plans to spend $20,000 at the beginning of every month from now on (so the first withdrawal will be at t=0). After each withdrawal, she will check how much money is left in the account. When there is less than $500,000 left, she will donate that remaining amount to charity.
In how many months will she make her last withdrawal and donate the remainder to charity?
(a) In 31 months (t=31 months).
(b) In 30 months (t=30 months).
(c) In 28 months (t=28 months).
(d) In 27 months (t=27 months).
(e) In 26 months (t=26 months).
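A minimal month-by-month simulation of the schedule described above, assuming the withdrawal happens at the start of each month, the balance check follows immediately, and interest accrues at 6%/12 per month:

```python
balance = 1_000_000
monthly_rate = 0.06 / 12          # APR compounding per month -> 0.5% per month
month = 0

while True:
    balance -= 20_000             # withdrawal at the start of month `month`
    if balance < 500_000:         # check immediately after the withdrawal
        print(f"last withdrawal at t = {month} months; {balance:,.2f} donated to charity")
        break
    balance *= 1 + monthly_rate   # interest accrues until the next withdrawal
    month += 1
```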
Question 490 expected and historical returns, accounting ratio
Which of the following is NOT a synonym of 'required return'?
(a) total required yield
(b) cost of capital
(c) discount rate
(d) opportunity cost of capital
(e) accounting rate of return
Question 404 income and capital returns, real estate
One and a half years ago Frank bought a house for $600,000. Now it's worth only $500,000, based on recent similar sales in the area.
The expected total return on Frank's residential property is 7% pa.
He rents his house out for $1,600 per month, paid in advance. Every 12 months he plans to increase the rental payments.
The present value of 12 months of rental payments is $18,617.27.
The future value of 12 months of rental payments one year in the future is $19,920.48.
What is the expected annual rental yield of the property? Ignore the costs of renting such as maintenance, real estate agent fees and so on.
(a) 3.1029%
(b) 3.3201%
(c) 3.7235%
(d) 3.9841%
(e) 7%
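The two present/future values quoted above can be reproduced by discounting twelve monthly payments of $1,600 made in advance at the effective monthly equivalent of 7% pa; a minimal sketch of that check (the rental yield is then a ratio of one of these figures to the property value, and which ratio the question intends is left open):

```python
monthly_rate = (1 + 0.07) ** (1 / 12) - 1            # effective monthly equivalent of 7% pa

# twelve payments of $1,600 paid in advance, i.e. at t = 0, 1, ..., 11 months
pv_rent = sum(1_600 / (1 + monthly_rate) ** t for t in range(12))
fv_rent = pv_rent * 1.07                             # same payments valued one year later

print(round(pv_rent, 2), round(fv_rent, 2))          # ≈ 18,617.27 and ≈ 19,920.48
```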
Question 456 inflation, effective rate
In the 'Austin Powers' series of movies, the character Dr. Evil threatens to destroy the world unless the United Nations pays him a ransom (video 1, video 2). Dr. Evil makes the threat on two separate occasions:
In 1969 he demands a ransom of $1 million (=10^6), and again;
In 1997 he demands a ransom of $100 billion (=10^11).
If Dr. Evil's demands are equivalent in real terms, in other words $1 million will buy the same basket of goods in 1969 as $100 billion would in 1997, what was the implied inflation rate over the 28 years from 1969 to 1997?
The answer choices below are given as effective annual rates:
(a) 0.5086% pa
(b) 1.5086% pa
(c) 5.0859% pa
(d) 50.8591% pa
(e) 150.8591% pa
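A minimal sketch of the implied effective annual inflation rate, taken as the geometric average growth of the ransom over the 28 years:

```python
ransom_1969, ransom_1997, years = 1e6, 1e11, 28
implied_inflation = (ransom_1997 / ransom_1969) ** (1 / years) - 1
print(f"{implied_inflation:.4%}")        # roughly 50.86% pa as an effective annual rate
```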
Question 234 debt terminology
An 'interest only' loan can also be called a:
(a) Discount loan
(b) Par loan
(c) Premium loan
(d) Deferred repayment loan
(e) Fully amortising loan.
Which of the following statements is NOT correct? Lenders:
(a) Are long debt.
(b) Invest in debt.
(c) Are owed money.
(d) Provide debt funding.
(e) Have debt liabilities.
Question 290 APR, effective rate, debt terminology
Which of the below statements about effective rates and annualised percentage rates (APR's) is NOT correct?
(a) An effective annual rate could be called: "a yearly rate compounding per year".
(b) An APR compounding monthly could be called: "a yearly rate compounding per month".
(c) An effective monthly rate could be called: "a yearly rate compounding per month".
(d) An APR compounding daily could be called: "a yearly rate compounding per day".
(e) An effective 2-year rate could be called: "a 2-year rate compounding every 2 years".
Question 26 APR, effective rate
A European bond paying annual coupons of 6% offers a yield of 10% pa.
Convert the yield into an effective monthly rate, an effective annual rate and an effective daily rate. Assume that there are 365 days in a year.
All answers are given in the same order:
### r_\text{eff, monthly} , r_\text{eff, yearly} , r_\text{eff, daily} ###
(a) 0.0041, 0.05, 0.0001.
(b) 0.0080, 0.1, 0.0003.
(c) 0.0083, 0.1, 0.0003.
(d) 0.0083, 2.1384, 0.0031.
(e) 0.0083, 0.1047, 0.0033.
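A minimal sketch of the standard conversions between effective rates over different compounding periods, reading the quoted 10% pa yield as an effective annual rate (the usual interpretation for a bond paying annual coupons):

```python
r_eff_annual = 0.10
r_eff_monthly = (1 + r_eff_annual) ** (1 / 12) - 1     # ≈ 0.0080
r_eff_daily   = (1 + r_eff_annual) ** (1 / 365) - 1    # ≈ 0.0003
print(r_eff_monthly, r_eff_annual, r_eff_daily)
```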
Question 42 interest only loan
You just signed up for a 30 year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 6% pa which is not expected to change.
How much did you borrow? After 15 years, just after the 180th payment at that time, how much will be owing on the mortgage? The interest rate is still 6% and is not expected to change. Remember that the mortgage is interest-only and that mortgage payments are paid in arrears (at the end of the month).
(a) $495,533.92, $349,640.96
(b) $500,374.84, $355,510.54
(c) $500,374.84, $250,187.42
(d) $600,000.00, $600,000.00
(e) $600,000.00, $300,000.00
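For an interest-only loan the monthly payment is just the outstanding principal times the monthly rate, so the principal can be backed out directly; a minimal sketch (the balance never amortises, which is why the same figure applies at any later date):

```python
monthly_payment = 3_000
monthly_rate = 0.06 / 12                       # 6% pa APR compounding monthly

principal = monthly_payment / monthly_rate     # 3,000 / 0.005
owing_after_180_payments = principal           # interest-only: the principal never falls
print(principal, owing_after_180_payments)
```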
You just borrowed $400,000 in the form of a 25 year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 9% pa which is not expected to change.
You actually plan to pay more than the required interest payment. You plan to pay $3,300 in mortgage payments every month, which your mortgage lender allows. These extra payments will reduce the principal and the minimum interest payment required each month.
At the maturity of the mortgage, what will be the principal? That is, after the last (300th) interest payment of $3,300 in 25 years, how much will be owing on the mortgage?
(a) $6,766.6469
(b) $35,748.4866
(c) $63,663.4188
(d) $90,000.0000
(e) Nothing, the mortgage will be fully paid off prior to maturity.
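A minimal month-by-month simulation of the schedule described above: interest accrues on the outstanding balance at 9%/12 per month, and anything paid above the interest reduces the principal:

```python
balance = 400_000
monthly_rate = 0.09 / 12
payment = 3_300

for month in range(300):                       # 25 years of monthly payments
    interest = balance * monthly_rate
    balance += interest - payment              # the excess over interest reduces the principal

print(round(balance, 2))                       # principal still owing after the 300th payment
```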
Question 239 income and capital returns, inflation, real and nominal returns and cash flows, interest only loan
A bank grants a borrower an interest-only residential mortgage loan with a very large 50% deposit and a nominal interest rate of 6% that is not expected to change. Assume that inflation is expected to be a constant 2% pa over the life of the loan. Ignore credit risk.
From the bank's point of view, what is the long term expected nominal capital return of the loan asset?
(a) Approximately 6%.
(b) Approximately 4%.
(c) Approximately 2%.
(d) Approximately 0%.
(e) Approximately -2%.
Question 48 IRR, NPV, bond pricing, premium par and discount bonds, market efficiency
The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero.
Considering this, which of the following statements is NOT correct?
(a) The internal rate of return (IRR) of buying a fairly priced bond is equal to the bond's yield.
(b) The Present Value of a fairly priced bond's coupons and face value is equal to its price.
(c) If a fairly priced bond's required return rises, its price will fall.
(d) Fairly priced premium bonds' yields are less than their coupon rates, prices are more than their face values, and the NPV of buying them is therefore positive.
(e) The NPV of buying a fairly priced bond is zero.
Question 56 income and capital returns, bond pricing, premium par and discount bonds
Which of the following statements about risk free government bonds is NOT correct?
(a) Premium bonds have a positive expected capital return.
(b) Discount bonds have a positive expected capital return.
(c) Par bonds have a zero expected capital return.
(d) Par bonds have a total expected yield equal to their coupon yield.
(e) Zero coupon bonds selling at par would have zero expected total, income and capital yields.
Hint: Total return can be broken into income and capital returns as follows:
###\begin{aligned} r_\text{total} &= \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0} \\ &= r_\text{income} + r_\text{capital} \end{aligned} ###
The capital return is the growth rate of the price.
The income return is the periodic cash flow. For a bond this is the coupon payment.
Question 63 bond pricing, NPV, market efficiency
(a) The internal rate of return (IRR) of buying a bond is equal to the bond's yield.
(c) If the required return of a bond falls, its price will fall.
(d) Fairly priced discount bonds' yield is more than the coupon rate, price is less than face value, and the NPV of buying them is zero.
Question 153 bond pricing, premium par and discount bonds
Bonds X and Y are issued by different companies, but they both pay a semi-annual coupon of 10% pa and they have the same face value ($100) and maturity (3 years).
The only difference is that bond X and Y's yields are 8 and 12% pa respectively. Which of the following statements is true?
(a) Bonds X and Y are premium bonds.
(b) Bonds X and Y are discount bonds.
(c) Bond X is a discount bond but bond Y is a premium bond.
(d) Bond X is a premium bond but bond Y is a discount bond.
(e) Bonds X and Y have the same price.
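A minimal sketch that prices each bond as the present value of its semi-annual coupons plus face value, treating the quoted yields as annualised rates compounding semi-annually (a common convention for semi-annual coupon bonds); the point is only to see whether each price lands above or below the $100 face value:

```python
def bond_price(face, annual_coupon_rate, annual_yield, years, freq=2):
    c = face * annual_coupon_rate / freq               # coupon per period
    y = annual_yield / freq                            # yield per period
    n = years * freq                                   # number of periods
    coupons = sum(c / (1 + y) ** t for t in range(1, n + 1))
    return coupons + face / (1 + y) ** n

print(round(bond_price(100, 0.10, 0.08, 3), 2))   # bond X: above 100, so a premium bond
print(round(bond_price(100, 0.10, 0.12, 3), 2))   # bond Y: below 100, so a discount bond
```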
Question 461 book and market values, ROE, ROA, market efficiency
One year ago a pharmaceutical firm floated by selling its 1 million shares for $100 each. Its book and market values of equity were both $100m. Its debt totalled $50m. The required return on the firm's assets was 15%, equity 20% and debt 5% pa.
In the year since then, the firm:
Earned net income of $29m.
Paid dividends totaling $10m.
Discovered a valuable new drug that will lead to a massive 1,000 times increase in the firm's net income in 10 years after the research is commercialised. News of the discovery was publicly announced. The firm's systematic risk remains unchanged.
Which of the following statements is NOT correct? All statements are about current figures, not figures one year ago.
(a) The book value of equity would be larger than the market value of equity.
(b) The book ROA from accounting would be larger than the required return on assets from finance.
(c) The book ROE from accounting would be larger than the required return on equity from finance.
(d) The book ROE would be larger than the book ROA.
(e) The required return on equity would be larger than the required return on assets.
Hint: Book return on assets (ROA) and book return on equity (ROE) are ratios that accountants like to use to measure a business's past performance.
###\text{ROA}= \dfrac{\text{Net income}}{\text{Book value of assets}}###
###\text{ROE}= \dfrac{\text{Net income}}{\text{Book value of equity}}###
The required return on assets ##r_V## is a return that financiers like to use to estimate a business's future required performance which compensates them for the firm's assets' risks. If the business were to achieve realised historical returns equal to its required returns, then investment into the business's assets would have been a zero-NPV decision, which is neither good nor bad but fair.
###r_\text{V, 0 to 1}= \dfrac{\text{Cash flow from assets}_\text{1}}{\text{Market value of assets}_\text{0}} = \dfrac{CFFA_\text{1}}{V_\text{0}}###
Similarly for equity and debt.
Question 499 NPV, Annuity
Some countries' interest rates are so low that they're zero.
If interest rates are 0% pa and are expected to stay at that level for the foreseeable future, what is the most that you would be prepared to pay a bank now if it offered to pay you $10 at the end of every year for the next 5 years?
In other words, what is the present value of five $10 payments at time 1, 2, 3, 4 and 5 if interest rates are 0% pa?
(a) $0
(b) $10
(c) $50
(d) Positive infinity
(e) Priceless
Question 7 DDM
For a price of $1040, Camille will sell you a share which just paid a dividend of $100, and is expected to pay dividends every year forever, growing at a rate of 5% pa.
So the next dividend will be ##100(1+0.05)^1=$105.00##, and the year after it will be ##100(1+0.05)^2=110.25## and so on.
The required return of the stock is 15% pa.
Would you like to buy the share or politely decline?
Question 217 NPV, DDM, multi stage growth model
A stock is expected to pay a dividend of $15 in one year (t=1), then $25 for 9 years after that (payments at t=2 ,3,...10), and on the 11th year (t=11) the dividend will be 2% less than at t=10, and will continue to shrink at the same rate every year after that forever. The required return of the stock is 10%. All rates are effective annual rates.
What is the price of the stock now?
(a) $361.78
(b) $236.33
(c) $237.93
(d) $348.69
(e) $223.24
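A minimal brute-force sketch that sums the discounted dividends described above, using a long horizon so that the shrinking perpetuity tail effectively converges (a closed-form terminal value would give the same figure):

```python
r, price = 0.10, 0.0
for t in range(1, 2001):                        # 2,000 years is ample for convergence
    if t == 1:
        dividend = 15.0
    elif t <= 10:
        dividend = 25.0
    else:
        dividend = 25.0 * 0.98 ** (t - 10)      # shrinks by 2% pa from t = 11 onwards
    price += dividend / (1 + r) ** t

print(round(price, 2))
```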
Question 465 NPV, perpetuity
The boss of WorkingForTheManCorp has a wicked (and unethical) idea. He plans to pay his poor workers one week late so that he can get more interest on his cash in the bank.
Every week he is supposed to pay his 1,000 employees $1,000 each. So $1 million is paid to employees every week.
The boss was just about to pay his employees today, until he thought of this idea so he will actually pay them one week (7 days) later for the work they did last week and every week in the future, forever.
Bank interest rates are 10% pa, given as a real effective annual rate. So ##r_\text{eff annual, real} = 0.1## and the real effective weekly rate is therefore ##r_\text{eff weekly, real} = (1+0.1)^{1/52}-1 = 0.001834569##
All rates and cash flows are real, the inflation rate is 3% pa and there are 52 weeks per year. The boss will always pay wages one week late. The business will operate forever with constant real wages and the same number of employees.
What is the net present value (NPV) of the boss's decision to pay later?
(b) $1,919.39
(e) $1,000,000.00
Question 50 DDM, stock pricing, inflation, real and nominal returns and cash flows
Most listed Australian companies pay dividends twice per year, the 'interim' and 'final' dividends, which are roughly 6 months apart.
You are an equities analyst trying to value the company BHP. You decide to use the Dividend Discount Model (DDM) as a starting point, so you study BHP's dividend history and you find that BHP tends to pay the same interim and final dividend each year, and that both grow by the same rate.
You expect BHP will pay a $0.55 interim dividend in six months and a $0.55 final dividend in one year. You expect each to grow by 4% next year and forever, so the interim and final dividends next year will be $0.572 each, and so on in perpetuity.
Assume BHP's cost of equity is 8% pa. All rates are quoted as nominal effective rates. The dividends are nominal cash flows and the inflation rate is 2.5% pa.
What is the current price of a BHP share?
(a) $57.1734
(b) $28.0394
(c) $27.7723
(d) $27.5000
(e) $27.2330
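A sketch of one common way to handle the semi-annual dividends: treat the final dividends (paid at t = 1, 2, ...) as a growing perpetuity, and the interim dividends as the same stream arriving half a year earlier (all rates effective annual).

```python
r, g, div = 0.08, 0.04, 0.55

p_final = div / (r - g)                  # final dividends at t = 1, 2, ... growing at g
p_interim = p_final * (1 + r) ** 0.5     # interim dividends arrive half a year earlier
print(round(p_final + p_interim, 4))
```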
Question 733 DDM, income and capital returns
A share's current price is $60. It's expected to pay a dividend of $1.50 in one year. The growth rate of the dividend is 0.5% pa and the stock's required total return is 3% pa. The stock's price can be modeled using the dividend discount model (DDM):
##P_0=\dfrac{C_1}{r-g}##
Which of the following methods is NOT equal to the stock's expected price in one year and six months (t=1.5 years)? Note that the symbolic formulas shown in each line below do equal the formulas with numbers. The formula is just repeated with symbols and then numbers in case it helps you to identify the incorrect statement more quickly.
(a) ##P_{1.5}=P_0 (1+g)^1 (1+r)^{0.5}=60(1+0.005)^1 (1+0.03)^{0.5}##
(b) ##P_{1.5}=(P_0 (1+r)^1-C_1 ) (1+r)^{0.5}=(60(1+0.03)^1-1.5) (1+0.03)^{0.5}##
(c) ##P_{1.5}=\dfrac{C_1}{r-g} (1+r)^1 (1+g)^{0.5}=\dfrac{1.5}{0.03-0.005} (1+0.03)^1 (1+0.005)^{0.5}##
(d) ##P_{1.5}=\dfrac{C_1 (1+g)^1}{r-g} (1+r)^{0.5}=\dfrac{1.5(1+0.005)^1}{0.03-0.005} (1+0.03)^{0.5}##
(e) ##P_{1.5}=\dfrac{C_1 (1+g)^2}{r-g}/(1+r)^{0.5} +C_1 (1+g)^1/(1+r)^{0.5} =\dfrac{1.5(1+0.005)^2}{0.03-0.005}/(1+0.03)^{0.5} +1.5(1+0.005)^1/(1+0.03)^{0.5} ##
Question 457 PE ratio, Multiples valuation
Which firms tend to have low forward-looking price-earnings (PE) ratios? Only consider firms with positive PE ratios.
(a) Highly liquid publicly listed firms.
(b) Firms in a declining industry with very low or negative earnings growth.
(c) Firms expected to have temporarily low earnings over the next year, but with higher earnings later.
(d) Firms whose returns have a very low level of systematic risk.
(e) Firms whose assets include a very large proportion of cash.
Question 503 DDM, NPV, stock pricing
A share currently worth $100 is expected to pay a constant dividend of $4 for the next 5 years with the first dividend in one year (t=1) and the last in 5 years (t=5).
The total required return is 10% pa.
What do you expect the share price to be in 5 years, just after the dividend at that time has been paid?
(a) $100
Question 365 DDM, stock pricing
Stocks in the United States usually pay quarterly dividends. For example, the software giant Microsoft paid a $0.23 dividend every quarter over the 2013 financial year and plans to pay a $0.28 dividend every quarter over the 2014 financial year.
Using the dividend discount model and net present value techniques, calculate the stock price of Microsoft assuming that:
The time now is the beginning of July 2014. The next dividend of $0.28 will be received in 3 months (end of September 2014), with another 3 quarterly payments of $0.28 after this (end of December 2014, March 2015 and June 2015).
The quarterly dividend will increase by 2.5% every year, but each quarterly dividend over the year will be equal. So each quarterly dividend paid in the financial year beginning in September 2015 will be $ 0.287 ##(=0.28×(1+0.025)^1)##, with the last at the end of June 2016. In the next financial year beginning in September 2016 each quarterly dividend will be $0.294175 ##(=0.28×(1+0.025)^2)##, with the last at the end of June 2017, and so on forever.
The total required return on equity is 6% pa.
The required return and growth rate are given as effective annual rates.
Dividend payment dates and ex-dividend dates are at the same time.
Remember that there are 4 quarters in a year and 3 months in a quarter.
What is the current stock price?
(a) $32.71126
(b) $32.298457
(d) $30.859679
(e) $8
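A sketch of one way to value the quarterly stream: compound each of a year's four dividends to that year's end using the effective quarterly rate, then treat the year-end totals as a growing perpetuity (the growth and discount rates are effective annual).

```python
r, g, div_q = 0.06, 0.025, 0.28

r_q = (1 + r) ** 0.25 - 1                                        # effective quarterly rate
year_end_total = sum(div_q * (1 + r_q) ** k for k in range(4))   # dividends at 3, 6, 9, 12 months
print(round(year_end_total / (r - g), 5))
```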
Question 534 NPV, no explanation
You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate.
You wish to consume half as much now (t=0) as in one year (t=1) and have nothing left in the bank at the end.
How much can you consume at time zero and one? The answer choices are given in the same order.
(a) $26,190.48, $52380.95
(b) $31,250, $62,500
(c) $33,333.33, $66,666.67
(d) $34,375, $68,750
(e) $35,483.87, $70,967.74
Question 524 risk, expected and historical returns, bankruptcy or insolvency, capital structure, corporate financial decision theory, limited liability
Which of the following statements is NOT correct?
(a) Stocks are higher risk investments than debt.
(b) Stocks have higher expected returns than debt.
(c) Firms' past realised stock returns are always higher than their past realised debt returns.
(d) In the event of bankruptcy, stock holders are paid after debt holders are fully paid.
(e) Stock holders have a residual claim on the firm's assets.
Question 195 equivalent annual cash flow
An industrial chicken farmer grows chickens for their meat. Chickens:
Cost $0.50 each to buy as chicks. They are bought on the day they're born, at t=0.
Grow at a rate of $0.70 worth of meat per chicken per week for the first 6 weeks (t=0 to t=6).
Grow at a rate of $0.40 worth of meat per chicken per week for the next 4 weeks (t=6 to t=10) since they're older and grow more slowly.
Feed costs are $0.30 per chicken per week for their whole life. Chicken feed is bought and fed to the chickens once per week at the beginning of the week. So the first amount of feed bought for a chicken at t=0 costs $0.30, and so on.
Can be slaughtered (killed for their meat) and sold at no cost at the end of the week. The price received for the chicken is their total value of meat (note that the chicken grows fast then slow, see above).
The required return of the chicken farm is 0.5% given as an effective weekly rate.
Ignore taxes and the fixed costs of the factory. Ignore the chicken's welfare and other environmental and ethical concerns.
Find the equivalent weekly cash flow of slaughtering a chicken at 6 weeks and at 10 weeks so the farmer can figure out the best time to slaughter his chickens. The choices below are given in the same order, 6 and 10 weeks.
(a) $0.3651, $0.2374
(b) $0.3172, $0.3506
(c) $0.3065, $0.2157
(d) $0.3050, $0.2142
(e) $0.0157, $0.0491
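A sketch of the equivalent weekly cash flow comparison, assuming the timing described above: chick and first feed at t=0, feed at the start of every later week of life, and the sale of all accumulated meat at slaughter.

```python
r = 0.005                                    # effective weekly required return

def chicken_npv(weeks, weekly_growth):
    npv = -0.50 - 0.30                       # chick plus the first feed at t=0
    for t in range(1, weeks):
        npv -= 0.30 / (1 + r) ** t           # feed at the start of weeks 1, ..., weeks-1
    npv += sum(weekly_growth) / (1 + r) ** weeks   # sale of the meat at slaughter
    return npv

def equivalent_weekly_cash_flow(npv, weeks):
    annuity_factor = (1 - (1 + r) ** -weeks) / r
    return npv / annuity_factor

print(equivalent_weekly_cash_flow(chicken_npv(6, [0.70] * 6), 6))
print(equivalent_weekly_cash_flow(chicken_npv(10, [0.70] * 6 + [0.40] * 4), 10))
```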
You own some nice shoes which you use once per week on date nights. You bought them 2 years ago for $500. In your experience, shoes used once per week last for 6 years. So you expect yours to last for another 4 years.
Your younger sister said that she wants to borrow your shoes once per week. With the increased use, your shoes will only last for another 2 years rather than 4.
What is the present value of the cost of letting your sister use your current shoes for the next 2 years?
Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new pair of shoes when your current pair wears out and your sister will not use the new ones; your sister will only use your current shoes so she will only use it for the next 2 years; and the price of new shoes never changes.
(a) $164.6662
(b) $181.1328
(c) $199.2461
(d) $226.2443
(e) $301.1312
Question 207 income and capital returns, bond pricing, coupon rate, no explanation
For a bond that pays fixed semi-annual coupons, how is the annual coupon rate defined, and how is the bond's annual income yield from time 0 to 1 defined mathematically?
Let: ##P_0## be the bond price now,
##F_T## be the bond's face value,
##T## be the bond's maturity in years,
##r_\text{total}## be the bond's total yield,
##r_\text{income}## be the bond's income yield,
##r_\text{capital}## be the bond's capital yield, and
##C_t## be the bond's coupon at time t in years. So ##C_{0.5}## is the coupon in 6 months, ##C_1## is the coupon in 1 year, and so on.
(a) coupon rate = ##\dfrac{C_{0.5}+C_{1}}{F_T}##, ##r_\text{income, 0 to 1}=\dfrac{C_{0.5}(1+r_\text{total})^{0.5}+C_{1}}{P_0}##
(b) coupon rate = ##\dfrac{2 \times C_{0.5}}{F_T}##, ##r_\text{income, 0 to 1}=\dfrac{C_{0.5}(1+r_\text{capital})^{0.5}+C_{1}}{P_0}##
(c) coupon rate = ##\dfrac{2 \times C_{1}}{P_0}##, ##r_\text{income, 0 to 1}=\dfrac{C_{0.5}+C_{1}}{P_0}##
(d) coupon rate = ##\dfrac{2 \times C_{1}}{P_0}##, ##r_\text{income, 0 to 1}=\dfrac{2 \times C_{1}}{P_0}##
(e) coupon rate = ##\dfrac{2 \times C_{1}}{F_T}##, ##r_\text{income, 0 to 1}=\dfrac{C_{0.5}(1+r_\text{total})^{0.5}+C_{1}}{F_T}##
Question 213 income and capital returns, bond pricing, premium par and discount bonds
The coupon rate of a fixed annual-coupon bond is constant (always the same).
What can you say about the income return (##r_\text{income}##) of a fixed annual coupon bond? Remember that:
###r_\text{total} = r_\text{income} + r_\text{capital}###
###r_\text{total, 0 to 1} = \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0}###
Assume that there is no change in the bond's total annual yield to maturity from when it is issued to when it matures.
Select the most correct statement.
From its date of issue until maturity, the income return of a fixed annual coupon:
(a) Premium bond will increase.
(b) Premium bond will decrease.
(c) Premium bond will remain constant.
(d) Par bond will increase.
(e) Par bond will decrease.
Question 255 bond pricing
In these tough economic times, central banks around the world have cut interest rates so low that they are practically zero. In some countries, government bond yields are also very close to zero.
A three year government bond with a face value of $100 and a coupon rate of 2% pa paid semi-annually was just issued at a yield of 0%. What is the price of the bond?
(a) 94.20452353
(b) 100
(c) 106
(d) 112
(e) The bond is priceless.
Question 616 idiom, debt terminology, bond pricing
"Buy low, sell high" is a phrase commonly heard in financial markets. It states that traders should try to buy assets at low prices and sell at high prices.
Traders in the fixed-coupon bond markets often quote promised bond yields rather than prices. Fixed-coupon bond traders should try to:
(a) Buy at low yields, sell at high yields.
(b) Buy at high yields, sell at low yields.
(c) Buy at high yields, sell at high yields.
(d) Buy at low yields, sell at low yields.
(e) There is no preferable yield to buy or sell fixed-coupon debt.
Question 173 CFFA
Find Candys Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013.
Candys Corp
Income Statement for
year ending 30th June 2013
$m
COGS 50
Operating expense 10
Depreciation 20
Interest expense 10
Income before tax 110
Tax at 30% 33
Net income 77
as at 30th June 2013 2012
$m $m
Current assets 220 180
Cost 300 340
Accumul. depr. 60 40
Carrying amount 240 300
Total assets 460 480
Current liabilities 175 190
Non-current liabilities 135 130
Owners' equity
Retained earnings 50 60
Contributed equity 100 100
Total L and OE 460 480
Note: all figures are given in millions of dollars ($m).
(a) 242
A firm has forecast its Cash Flow From Assets (CFFA) for this year and management is worried that it is too low. Which one of the following actions will lead to a higher CFFA for this year (t=0 to 1)? Only consider cash flows this year. Do not consider cash flows after one year, or the change in the NPV of the firm. Consider each action in isolation.
(a) Buy less land, buildings and trucks than what was planned. Assume that this has no impact on revenue.
(b) Pay less cash to creditors by refinancing the firm's existing coupon bonds with zero-coupon bonds that require no interest payments. Assume that there are no transaction costs and that both types of bonds have the same yield to maturity.
(c) Change the depreciation method used for tax purposes from diminishing value to straight line, so less depreciation occurs this year and more occurs in later years. Assume that the government's tax department allow this.
(d) Buying more inventory than was planned, so there is an increase in net working capital. Assume that there is no increase in sales.
(e) Raising new equity through a rights issue. Assume that all of the money raised is spent on new capital assets such as land and trucks, but they will be fitted out and delivered in one year so no new cash will be earned from them.
Over the next year, the management of an unlevered company plans to:
Achieve firm free cash flow (FFCF or CFFA) of $1m.
Pay dividends of $1.8m
Complete a $1.3m share buy-back.
Spend $0.8m on new buildings without buying or selling any other fixed assets. This capital expenditure is included in the CFFA figure quoted above.
Assume that:
All amounts are received and paid at the end of the year so you can ignore the time value of money.
The firm has sufficient retained profits to pay the dividend and complete the buy back.
The firm plans to run a very tight ship, with no excess cash above operating requirements currently or over the next year.
How much new equity financing will the company need? In other words, what is the value of new shares that will need to be issued?
(a) $2.1m
(b) $1.3m
(c) $0.8m
(d) $0.3m
(e) No new shares need to be issued, the firm will be sufficiently financed.
Question 366 opportunity cost, NPV, CFFA, needs refinement
Your friend is trying to find the net present value of a project. The project is expected to last for just one year with:
a negative cash flow of -$1 million initially (t=0), and
a positive cash flow of $1.1 million in one year (t=1).
The project has a total required return of 10% pa due to its moderate level of undiversifiable risk.
Your friend is aware of the importance of opportunity costs and the time value of money, but he is unsure of how to find the NPV of the project.
He knows that the opportunity cost of investing the $1m in the project is the expected gain from investing the money in shares instead. Like the project, shares also have an expected return of 10% since they have moderate undiversifiable risk. This opportunity cost is $0.1m ##(=1m \times 10\%)## which occurs in one year (t=1).
He knows that the time value of money should be accounted for, and this can be done by finding the present value of the cash flows in one year.
Your friend has listed a few different ways to find the NPV which are written down below.
(I) ##-1m + \dfrac{1.1m}{(1+0.1)^1} ##
(II) ##-1m + \dfrac{1.1m}{(1+0.1)^1} - \dfrac{1m}{(1+0.1)^1} \times 0.1 ##
(III) ##-1m + \dfrac{1.1m}{(1+0.1)^1} - \dfrac{1.1m}{(1+0.1)^1} \times 0.1 ##
(IV) ##-1m + 1.1m - \dfrac{1.1m}{(1+0.1)^1} \times 0.1 ##
(V) ##-1m + 1.1m - 1.1m \times 0.1 ##
Which of the above calculations give the correct NPV? Select the most correct answer.
(b) II only.
(c) III only.
(e) II and V only.
Question 486 capital budgeting, opportunity cost, sunk cost
A young lady is trying to decide if she should attend university. Her friends say that she should go to university because she is more likely to meet a clever young man than if she begins full time work straight away.
What's the correct way to classify this item from a capital budgeting perspective when trying to find the Net Present Value of going to university rather than working?
The opportunity to meet a desirable future spouse should be classified as:
(a) A sunk cost.
(b) An opportunity cost.
(c) A negative side effect.
(d) A positive side effect.
(e) A depreciation expense.
Question 511 capital budgeting, CFFA
Find the cash flow from assets (CFFA) of the following project.
One Year Mining Project Data
Project life 1 year
Initial investment in building mine and equipment $9m
Depreciation of mine and equipment over the year $8m
Kilograms of gold mined at end of year 1,000
Sale price per kilogram $0.05m
Variable cost per kilogram $0.03m
Before-tax cost of closing mine at end of year $4m
Note 1: Due to the project, the firm also anticipates finding some rare diamonds which will give before-tax revenues of $1m at the end of the year.
Note 2: The land that will be mined actually has thermal springs and a family of koalas that could be sold to an eco-tourist resort for an after-tax amount of $3m right now. However, if the mine goes ahead then this natural beauty will be destroyed.
Note 3: The mining equipment will have a book value of $1m at the end of the year for tax purposes. However, the equipment is expected to fetch $2.5m when it is sold.
Find the project's CFFA at time zero and one. Answers are given in millions of dollars ($m), with the first cash flow at time zero, and the second at time one.
(a) -9, 15.65
(b) -9, 14.3
(c) -12, 16.8
(d) -12, 16.35
(e) -12, 14.3
Project Data
Project life 2 years
Initial investment in equipment $6m
Depreciation of equipment per year for tax purposes $1m
Unit sales per year 4m
Sale price per unit $8
Variable cost per unit $3
Fixed costs per year, paid at the end of each year $1.5m
Note 1: The equipment will have a book value of $4m at the end of the project for tax purposes. However, the equipment is expected to fetch $0.9 million when it is sold at t=2.
Note 2: Due to the project, the firm will have to purchase $0.8m of inventory initially, which it will sell at t=1. The firm will buy another $0.8m at t=1 and sell it all again at t=2 with zero inventory left. The project will have no effect on the firm's current liabilities.
Find the project's CFFA at time zero, one and two. Answers are given in millions of dollars ($m).
(a) -6, 12.25, 16.68
(b) -6.8, 13.25, 14.05
(c) -6.8, 13.25, 15.88
(d) -6.8, 13.25, 18.51
(e) -6.8, 13.25, 17.71
Question 273 CFFA, capital budgeting
Value the following business project to manufacture a new product.
Project life 2 yrs
Depreciation of equipment per year $3m
Expected sale price of equipment at end of project $0.6m
Fixed costs per year, paid at the end of each year $1m
Interest expense per year 0
Weighted average cost of capital after tax per annum 10%
The firm's current assets and current liabilities are $3m and $2m respectively right now. This net working capital will not be used in this project, it will be used in other unrelated projects.
Due to the project, current assets (mostly inventory) will grow by $2m initially (at t = 0), and then by $0.2m at the end of the first year (t=1).
Current liabilities (mostly trade creditors) will increase by $0.1m at the end of the first year (t=1).
At the end of the project, the net working capital accumulated due to the project can be sold for the same price that it was bought.
The project cost $0.5m to research which was incurred one year ago.
All cash flows occur at the start or end of the year as appropriate, not in the middle or throughout the year.
All rates and cash flows are real. The inflation rate is 3% pa.
All rates are given as effective annual rates.
The business considering the project is run as a 'sole tradership' (run by an individual without a company) and is therefore eligible for a 50% capital gains tax discount when the equipment is sold, as permitted by the Australian Tax Office.
What is the expected net present value (NPV) of the project?
(b) $8.481735m
(c) $8.743802m
(d) $8.991736m
(e) $9.719008m
To value a business's assets, the free cash flow of the firm (FCFF, also called CFFA) needs to be calculated. This requires figures from the firm's income statement and balance sheet. For what figures is the balance sheet needed? Note that the balance sheet is sometimes also called the statement of financial position.
(a) Net income, depreciation and interest expense.
(b) Depreciation and capital expenditure.
(c) Current assets, current liabilities and cost of goods sold (COGS).
(d) Current assets, current liabilities and capital expenditure.
(e) Current assets, current liabilities and depreciation expense.
Question 206 CFFA, interest expense, interest tax shield
Interest expense (IntExp) is an important part of a company's income statement (or 'profit and loss' or 'statement of financial performance').
How does an accountant calculate the annual interest expense of a fixed-coupon bond that has a liquid secondary market? Select the most correct answer:
Annual interest expense is equal to:
(a) the bond's face value multiplied by its annual yield to maturity.
(b) the bond's face value multiplied by its annual coupon rate.
(c) the bond's market price at the start of the year multiplied by its annual yield to maturity.
(d) the bond's market price at the start of the year multiplied by its annual coupon rate.
(e) the future value of the actual cash payments of the bond over the year, grown to the end of the year, and grown by the bond's yield to maturity.
Question 367 CFFA, interest tax shield
There are many ways to calculate a firm's free cash flow (FFCF), also called cash flow from assets (CFFA). Some include the annual interest tax shield in the cash flow and some do not.
Which of the below FFCF formulas include the interest tax shield in the cash flow?
###(1) \quad FFCF = NI + Depr - CapEx - ΔNWC + IntExp###
###(2) \quad FFCF = NI + Depr - CapEx - ΔNWC + IntExp.(1-t_c)###
###(3) \quad FFCF = EBIT.(1-t_c) + Depr - CapEx - ΔNWC + IntExp.t_c###
###(4) \quad FFCF = EBIT.(1-t_c) + Depr - CapEx - ΔNWC###
###(5) \quad FFCF = EBITDA.(1-t_c) + Depr.t_c - CapEx - ΔNWC + IntExp.t_c###
###(6) \quad FFCF = EBITDA.(1-t_c) + Depr.t_c - CapEx - ΔNWC###
###(7) \quad FFCF = EBIT - Tax + Depr - CapEx - ΔNWC###
###(8) \quad FFCF = EBIT - Tax + Depr - CapEx - ΔNWC - IntExp.t_c###
###(9) \quad FFCF = EBITDA - Tax - CapEx - ΔNWC###
###(10) \quad FFCF = EBITDA - Tax - CapEx - ΔNWC - IntExp.t_c###
The formulas for net income (NI also called earnings), EBIT and EBITDA are given below. Assume that depreciation and amortisation are both represented by 'Depr' and that 'FC' represents fixed costs such as rent.
###NI = (Rev - COGS - Depr - FC - IntExp).(1-t_c)###
###EBIT = Rev - COGS - FC - Depr###
###EBITDA = Rev - COGS - FC###
###Tax = (Rev - COGS - Depr - FC - IntExp).t_c = \dfrac{NI.t_c}{1-t_c}###
(a) 1, 3, 5, 7, 9.
(b) 2, 4, 6, 8, 10.
(c) 1, 4, 6, 8, 10.
(d) 2, 3, 5, 7, 9.
(e) 1, 3, 5, 8, 10.
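Because the ten formulas differ only in how interest is treated, a quick numerical check with made-up figures shows which of them sit above the unlevered base case (formula 4) by exactly IntExp×t_c, i.e. which include the interest tax shield. A sketch, with all input figures assumed for illustration only:

```python
# Made-up figures for illustration only.
Rev, COGS, FC, Depr, IntExp, CapEx, dNWC, tc = 100.0, 40.0, 10.0, 20.0, 8.0, 15.0, 5.0, 0.30

EBITDA = Rev - COGS - FC
EBIT = EBITDA - Depr
NI = (EBIT - IntExp) * (1 - tc)
Tax = (EBIT - IntExp) * tc

ffcf = {
    1: NI + Depr - CapEx - dNWC + IntExp,
    2: NI + Depr - CapEx - dNWC + IntExp * (1 - tc),
    3: EBIT * (1 - tc) + Depr - CapEx - dNWC + IntExp * tc,
    4: EBIT * (1 - tc) + Depr - CapEx - dNWC,
    5: EBITDA * (1 - tc) + Depr * tc - CapEx - dNWC + IntExp * tc,
    6: EBITDA * (1 - tc) + Depr * tc - CapEx - dNWC,
    7: EBIT - Tax + Depr - CapEx - dNWC,
    8: EBIT - Tax + Depr - CapEx - dNWC - IntExp * tc,
    9: EBITDA - Tax - CapEx - dNWC,
    10: EBITDA - Tax - CapEx - dNWC - IntExp * tc,
}

for k, v in ffcf.items():
    # A gap of IntExp*tc (= 2.4 here) above formula 4 means the tax shield is included.
    print(k, round(v, 2), "difference from (4):", round(v - ffcf[4], 2))
```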
Question 371 interest tax shield, CFFA
One method for calculating a firm's free cash flow (FFCF, or CFFA) is to ignore interest expense. That is, pretend that interest expense ##(IntExp)## is zero:
###\begin{aligned} FFCF &= (Rev - COGS - Depr - FC - IntExp)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp \\ &= (Rev - COGS - Depr - FC - 0)(1-t_c) + Depr - CapEx -\Delta NWC - 0\\ \end{aligned}###
Does this annual FFCF with zero interest expense include or exclude the annual interest tax shield?
Question 91 WACC, capital structure
A firm has a debt-to-assets ratio of 50%. The firm then issues a large amount of equity to raise money for new projects of similar systematic risk to the company's existing projects. Assume a classical tax system. Which statement is correct?
(a) The debt-to-assets (D/V) ratio will increase.
(b) The debt-to-equity ratio (D/E) will increase.
(c) Firm value is likely to have increased due to the higher amount of interest tax shields, assuming that there will not be any costs of financial distress.
(d) The company's after-tax WACC is likely to stay the same.
(e) The company's before-tax WACC is likely to stay the same.
Question 99 capital structure, interest tax shield, Miller and Modigliani, trade off theory of capital structure
A firm changes its capital structure by issuing a large amount of debt and using the funds to repurchase shares. Its assets are unchanged.
The firm and individual investors can borrow at the same rate and have the same tax rates.
The firm's debt and shares are fairly priced and the shares are repurchased at the market price, not at a premium.
There are no market frictions relating to debt such as asymmetric information or transaction costs.
Shareholders' wealth is measured in terms of utility. Shareholders are wealth-maximising and risk-averse. They have a preferred level of overall leverage. Before the firm's capital restructure, all shareholders were optimally levered.
According to Miller and Modigliani's theory, which statement is correct?
(a) The firm's share price and shareholder wealth will both decrease. This is because the firm will have more debt and therefore more risk so the discount rate applied to its cash flows will be higher, decreasing the value of the firm and therefore the value of the firm's equity and share price.
(b) The firm's share price and shareholder wealth will both increase. This is because the firm will have more debt which will amplify the returns of equity investors. This will mean that returns on equity can be much higher and investors will pay a premium for this, leading to an increase in the stock price.
(c) The firm's share price and shareholder wealth will both increase since it has more debt and therefore more tax shields.
(d) The firm's share price will increase due to the higher value of tax shields. But shareholder wealth will remain unchanged because capital structure is irrelevant when investors can use home-made leverage to create tax-shields themselves.
(e) The firm's share price and shareholder wealth will both increase. This is because the cost of debt is cheaper than equity, leading to a lower (before and after tax) WACC. This lower WACC will lead to a higher value of the firm and a higher share price.
Question 285 covariance, portfolio risk
Two risky stocks A and B comprise an equal-weighted portfolio. The correlation between the stocks' returns is 70%.
If the variance of stock A increases but the:
Prices and expected returns of each stock stay the same,
Variance of stock B's returns stays the same, and
Correlation of returns between the stocks stays the same.
Which of the following statements is NOT correct?
(a) The variance of the portfolio will increase.
(b) The standard deviation of the portfolio will increase.
(c) The covariance of returns between stocks A and B will stay the same.
(d) The portfolio return will stay the same.
(e) The portfolio value will stay the same.
Question 563 correlation
What is the correlation of a variable X with itself?
The corr(X, X) or ##\rho_{X,X}## equals:
(a) var(X) or ##\sigma_X^2##
(b) sd(X) or ##\sigma_X##
(c) 1
(d) 0
(e) Mathematically undefined
What is the correlation of a variable X with a constant C?
The corr(X, C) or ##\rho_{X,C}## equals:
Question 306 risk, standard deviation
Let the standard deviation of returns for a share per month be ##\sigma_\text{monthly}##.
What is the formula for the standard deviation of the share's returns per year ##(\sigma_\text{yearly})##?
Assume that returns are independently and identically distributed (iid) so they have zero auto correlation, meaning that if the return was higher than average today, it does not indicate that the return tomorrow will be higher or lower than average.
(a) ##\sigma_\text{yearly} = \sigma_\text{monthly}##
(b) ##\sigma_\text{yearly} = \sigma_\text{monthly} \times 12##
(c) ##\sigma_\text{yearly} = \sigma_\text{monthly} \times 144##
(d) ##\sigma_\text{yearly} = \sigma_\text{monthly} \times \sqrt{12}##
(e) ##\sigma_\text{yearly} = \sigma_\text{monthly} \times {12}^{1/3}##
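A small simulation sketch of the annualisation rule, assuming iid monthly returns and approximating the yearly return as the sum of twelve monthly returns (a log-return style approximation); the monthly standard deviation used is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_monthly = 0.04                                    # assumed monthly standard deviation
monthly = rng.normal(0.0, sigma_monthly, size=(200_000, 12))
yearly = monthly.sum(axis=1)                            # iid sum of 12 monthly returns
print(yearly.std(), sigma_monthly * np.sqrt(12))        # the two values should be close
```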
Question 705 utility, risk aversion, utility function
Mr Blue, Miss Red and Mrs Green are people with different utility functions.
(a) Mr Blue is as happy with zero wealth as he is with $1,000. Similarly for Miss Red and Mrs Green.
(b) All of the people would appear irrational to an economist. Though Mr Blue appears to be rational when he has more than $192 in wealth.
(c) Mr Blue is risk averse. Miss Red is risk neutral. Mrs Green is risk loving.
(d) Miss Red is totally indifferent to how much wealth she has. She doesn't care about gaining or losing money.
(e) Mr Blue prefers more to less up to a wealth of around $192. At about $192 he is the happiest he can be. This is his bliss point. With more than $192 he becomes less happy.
Question 110 CAPM, SML, NPV
The security market line (SML) shows the relationship between beta and expected return.
Investment projects that plot above the SML would have:
(a) A positive NPV.
(b) A zero NPV.
(c) A negative NPV.
(d) A large amount of diversifiable risk.
(e) Zero diversifiable risk.
Question 628 CAPM, SML, risk, no explanation
Assets A, B, M and ##r_f## are shown on the graphs above. Asset M is the market portfolio and ##r_f## is the risk free yield on government bonds. Assume that investors can borrow and lend at the risk free rate. Which of the below statements is NOT correct?
(a) Asset A has the same systematic risk as asset B.
(b) Asset A has more total variance than asset B.
(c) Asset B has zero idiosyncratic risk. Asset B must be a portfolio of half the market portfolio and half government bonds.
(d) If risk-averse investors were forced to invest all of their wealth in a single risky asset, so they could not diversify, every investor would logically choose asset A over the other three assets.
(e) Assets M and B have the highest Sharpe ratios, which is defined as the gradient of the capital allocation line (CAL) from the government bonds through the asset on the graph of expected return versus total standard deviation.
Question 673 CAPM, beta, expected and historical returns
A stock has a beta of 1.5. The market's expected total return is 10% pa and the risk free rate is 5% pa, both given as effective annual rates.
In the last 5 minutes, bad economic news was released showing a higher chance of recession. Over this time the share market fell by 1%. The risk free rate was unchanged.
What do you think was the stock's historical return over the last 5 minutes, given as an effective 5 minute rate?
(a) -12.5%
(b) -4%
(c) -1.5%
(d) -1%
(e) 12.5%
Over the last year, bad economic news was released showing a higher chance of recession. Over this time the share market fell by 1%. The risk free rate was unchanged.
What do you think was the stock's historical return over the last year, given as an effective annual rate?
(a) -12.5% pa
(b) -4% pa
(c) -1.5% pa
(d) -1% pa
(e) 12.5% pa
Question 418 capital budgeting, NPV, interest tax shield, WACC, CFFA, CAPM
Expected sale price of equipment at end of project 0
Sale price per unit $10
Interest expense in first year (at t=1) $0.562m
Corporate tax rate 30%
Government treasury bond yield 5%
Bank loan debt yield 9%
Market portfolio return 10%
Covariance of levered equity returns with market 0.32
Variance of market portfolio returns 0.16
Firm's and project's debt-to-equity ratio 50%
Due to the project, current assets will increase by $6m now (t=0) and fall by $6m at the end (t=1). Current liabilities will not be affected.
The debt-to-equity ratio will be kept constant throughout the life of the project. The amount of interest expense at the end of each period has been correctly calculated to maintain this constant debt-to-equity ratio.
Millions are represented by 'm'.
All rates and cash flows are real. The inflation rate is 2% pa. All rates are given as effective annual rates.
The project is undertaken by a firm, not an individual.
(a) $5.772m
(b) $4.979m
(c) $4.959m
(d) $4.733m
(e) $4.584m
Question 242 technical analysis, market efficiency
Select the most correct statement from the following.
'Chartists', also known as 'technical traders', believe that:
(a) Markets are weak-form efficient.
(b) Markets are semi-strong-form efficient.
(c) Past prices cannot be used to predict future prices.
(d) Past returns can be used to predict future returns.
(e) Stock prices reflect all publically available information.
Question 100 market efficiency, technical analysis, joint hypothesis problem
A company selling charting and technical analysis software claims that independent academic studies have shown that its software makes significantly positive abnormal returns. Assuming the claim is true, which statement(s) are correct?
(I) Weak form market efficiency is broken.
(II) Semi-strong form market efficiency is broken.
(III) Strong form market efficiency is broken.
(IV) The asset pricing model used to measure the abnormal returns (such as the CAPM) had mis-specification error so the returns may not be abnormal but rather fair for the level of risk.
Select the most correct response:
(a) Only III is true.
(b) Only II and III are true.
(c) Only I, II and III are true.
(d) Only IV is true.
(e) Either I, II and III are true, or IV is true, or they are all true.
Question 105 NPV, risk, market efficiency
A person is thinking about borrowing $100 from the bank at 7% pa and investing it in shares with an expected return of 10% pa. One year later the person will sell the shares and pay back the loan in full. Both the loan and the shares are fairly priced.
What is the Net Present Value (NPV) of this one year investment? Note that you are asked to find the present value (##V_0##), not the value in one year (##V_1##).
(a) $10
(b) $3
(c) $2.8037
(d) $2.7273
Question 338 market efficiency, CAPM, opportunity cost, technical analysis
A man inherits $500,000 worth of shares.
He believes that by learning the secrets of trading, keeping up with the financial news and doing complex trend analysis with charts that he can quit his job and become a self-employed day trader in the equities markets.
What is the expected gain from doing this over the first year? Measure the net gain in wealth received at the end of this first year due to the decision to become a day trader. Assume the following:
He earns $60,000 pa in his current job, paid in a lump sum at the end of each year.
He enjoys examining share price graphs and day trading just as much as he enjoys his current job.
Stock markets are weak form and semi-strong form efficient.
He has no inside information.
He makes 1 trade every day and there are 250 trading days in the year. Trading costs are $20 per trade. His broker invoices him for the trading costs at the end of the year.
The shares that he currently owns and the shares that he intends to trade have the same level of systematic risk as the market portfolio.
The market portfolio's expected return is 10% pa.
Measure the net gain over the first year as an expected wealth increase at the end of the year.
(a) $110,000
(b) $50,000
(c) $45,000
(d) -$15,000
(e) -$65,000
Question 417 NPV, market efficiency, DDM
A managed fund charges fees based on the amount of money that you keep with them. The fee is 2% of the end-of-year amount, paid at the end of every year.
This fee is charged regardless of whether the fund makes gains or losses on your money.
The fund offers to invest your money in shares which have an expected return of 10% pa before fees.
You are thinking of investing $100,000 in the fund and keeping it there for 40 years when you plan to retire.
How much money do you expect to have in the fund in 40 years? Also, what is the future value of the fees that the fund expects to earn from you? Give both amounts as future values in 40 years. Assume that:
The fund has no private information.
Markets are weak and semi-strong form efficient.
The fund's transaction costs are negligible.
The cost and trouble of investing your money in shares by yourself, without the managed fund, is negligible.
The fund invests its fees in the same companies as it invests your funds in, but with no fees.
The below answer choices list your expected wealth in 40 years and then the fund's expected wealth in 40 years.
(a) $4,462,125.27, $63,800.29
(b) $3,407,788.62, $1,118,136.94
(c) $3,316,736.53, $1,209,189.03
(d) $2,172,452.15, $2,353,473.41
(e) $2,017,206.85, $2,508,718.71
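A sketch of one way to model the fee drag year by year, assuming the 2% fee is deducted from the end-of-year balance and the fund reinvests its collected fees at the same 10% pa with no fee on itself:

```python
balance, fund_fees = 100_000.0, 0.0
r, fee, years = 0.10, 0.02, 40

for _ in range(years):
    fund_fees *= (1 + r)           # fees collected earlier keep compounding
    balance *= (1 + r)             # investor's balance earns the pre-fee return
    charged = fee * balance        # 2% of the end-of-year amount
    balance -= charged
    fund_fees += charged

print(round(balance, 2), round(fund_fees, 2))
```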
Question 464 mispriced asset, NPV, DDM, market efficiency
A company advertises an investment costing $1,000 which they say is underpriced. They say that it has an expected total return of 15% pa, but a required return of only 10% pa. Assume that there are no dividend payments so the entire 15% total return is all capital return.
Assuming that the company's statements are correct, what is the NPV of buying the investment if the 15% return lasts for the next 100 years (t=0 to 100), then reverts to 10% pa after that time? Also, what is the NPV of the investment if the 15% return lasts forever?
In both cases, assume that the required return of 10% remains constant. All returns are given as effective annual rates.
The answer choices below are given in the same order (15% for 100 years, and 15% forever):
(a) $0, $0
(b) $1,977.19, $2,000
(c) $2,977.19, $3,000
(d) $499.96, $500
(e) $84,214.9, Infinite
Question 202 DDM, payout policy
Currently, a mining company has a share price of $6 and pays constant annual dividends of $0.50. The next dividend will be paid in 1 year. Suddenly and unexpectedly the mining company announces that due to higher than expected profits, all of these windfall profits will be paid as a special dividend of $0.30 in 1 year.
If investors believe that the windfall profits and dividend is a one-off event, what will be the new share price? If investors believe that the additional dividend is actually permanent and will continue to be paid, what will be the new share price? Assume that the required return on equity is unchanged. Choose from the following, where the first share price includes the one-off increase in earnings and dividends for the first year only ##(P_\text{0 one-off})## , and the second assumes that the increase is permanent ##(P_\text{0 permanent})##:
(a) ##P_\text{0 one-off} = 9.6000, \space \space P_\text{0 permanent} = 6.2766##
(b) ##P_\text{0 one-off} = 6.3000, \space \space P_\text{0 permanent} = 6.2769##
(c) ##P_\text{0 one-off} = 9.6000, \space \space P_\text{0 permanent} = 6.3000##
(d) ##P_\text{0 one-off} = 6.2769, \space \space P_\text{0 permanent} = 9.6000##
(e) ##P_\text{0 one-off} = 6.3000, \space \space P_\text{0 permanent} = 9.6000##
Note: When a firm makes excess profits they sometimes pay them out as special dividends. Special dividends are just like ordinary dividends but they are one-off and investors do not expect them to continue, unlike ordinary dividends which are expected to persist.
Question 568 rights issue, capital raising, capital structure
A company conducts a 1 for 5 rights issue at a subscription price of $7 when the pre-announcement stock price was $10. What is the percentage change in the stock price and the number of shares outstanding? The answers are given in the same order. Ignore all taxes, transaction costs and signalling effects.
(a) -16.67%, 20%
(b) -5%, 20%
(c) 0%, 20%
(d) 7.14%, 20%
(e) 11.67%, 0%
Question 712 effective rate conversion
An effective monthly return of 1% ##(r_\text{eff monthly})## is equivalent to an effective annual return ##(r_\text{eff annual})## of:
(a) 12.682503% pa
(b) 12.060201% pa
(c) 12% pa
(d) 11.940397% pa
(e) 3.464102% pa
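A one-line check of the effective-rate compounding conversion (a minimal sketch):

```python
r_eff_monthly = 0.01
r_eff_annual = (1 + r_eff_monthly) ** 12 - 1   # compound the monthly rate over 12 months
print(r_eff_annual)
```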
Question 617 systematic and idiosyncratic risk, risk, CAPM
A stock's required total return will increase when its:
(a) Systematic risk increases.
(b) Idiosyncratic risk increases.
(c) Total risk increases.
(d) Systematic risk decreases.
(e) Idiosyncratic risk decreases.
Question 622 expected and historical returns, risk
An economy has only two investable assets: stocks and cash.
Stocks had a historical nominal average total return of negative two percent per annum (-2% pa) over the last 20 years. Stocks are liquid and actively traded. Stock returns are variable; they have risk.
Cash is riskless and has a nominal constant return of zero percent per annum (0% pa), which it had in the past and will have in the future. Cash can be kept safely at zero cost. Cash can be converted into shares and vice versa at zero cost.
The nominal total return of the shares over the next year is expected to be:
(a) Less than or equal to negative two percent per annum ##(r_\text{shares} \leq -0.02)##.
(b) Exactly negative two percent per annum ##(r_\text{shares} = -0.02)##.
(c) More than or equal to negative two percent per annum ##(r_\text{shares} \geq -0.02)##.
(d) Less than or equal to zero percent per annum ##(r_\text{shares} \leq 0)##.
(e) More than or equal to zero percent per annum ##(r_\text{shares} \geq 0)##.
Question 626 cross currency interest rate parity, foreign exchange rate, forward foreign exchange rate
The Australian cash rate is expected to be 2% pa over the next one year, while the Japanese cash rate is expected to be 0% pa, both given as nominal effective annual rates. The current exchange rate is 100 JPY per AUD.
What is the implied 1 year forward foreign exchange rate?
(a) 98.04 JPY per AUD.
(b) 100 JPY per AUD.
(c) 102 JPY per AUD.
(d) 1.02 AUD per JPY.
(e) 0.9804 AUD per JPY.
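A sketch of the covered interest rate parity relation used here, with JPY as the quote (terms) currency and AUD as the base currency:

```python
spot_jpy_per_aud = 100.0
r_aud, r_jpy = 0.02, 0.00

forward_jpy_per_aud = spot_jpy_per_aud * (1 + r_jpy) / (1 + r_aud)
print(round(forward_jpy_per_aud, 2))
```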
A company advertises an investment costing $1,000 which they say is underpriced. They say that it has an expected total return of 15% pa, but a required return of only 10% pa. Of the 15% pa total expected return, the dividend yield is expected to always be 7% pa and rest is the capital yield.
Assuming that the company's statements are correct, what is the NPV of buying the investment if the 15% total return lasts for the next 100 years (t=0 to 100), then reverts to 10% after that time? Also, what is the NPV of the investment if the 15% return lasts forever?
In both cases, assume that the required return of 10% remains constant, the dividends can only be re-invested at 10% pa and all returns are given as effective annual rates.
(a) 84,214.90, Infinite
(b) 2,521.12, 3,000
(c) 2,100.93, 2,500
(d) 1,249.38, 1,333.33
(e) 0, 0
The 'time value of money' is most closely related to which of the following concepts?
(a) Competition: Firms in competitive markets earn zero economic profit.
(b) Opportunity cost: The cost of the next best alternative foregone should be subtracted.
(c) Separation of the investment and financing decisions.
(d) Diversification: Risks can often be reduced by pooling them together.
(e) Sunk costs: Costs that cannot be recouped should be ignored.
Question 660 fully amortising loan, interest only loan, APR
How much more can you borrow using an interest-only loan compared to a 25-year fully amortising loan if interest rates are 6% pa compounding per month and are not expected to change? If it makes it easier, assume that you can afford to pay $2,000 per month on either loan. Express your answer as a proportional increase using the following formula:
###\text{Proportional Increase} = \dfrac{V_\text{0,interest only}}{V_\text{0,fully amortising}} - 1###
(a) 77.6034%
(b) 30.3779%
(c) 28.8603%
(d) 22.3966%
(e) 7.5304%
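A sketch of the comparison behind the formula above: the interest-only loan supports a perpetuity of interest payments, while the fully amortising loan supports a 300-month ordinary annuity.

```python
r_m = 0.06 / 12                 # APR compounding monthly -> effective monthly rate
pmt, n = 2_000.0, 25 * 12

v_fully_amortising = pmt * (1 - (1 + r_m) ** -n) / r_m     # ordinary annuity
v_interest_only = pmt / r_m                                # perpetuity of interest payments
print(v_interest_only / v_fully_amortising - 1)            # proportional increase
```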
Question 668 buy and hold, market efficiency, idiom
A quote from the famous investor Warren Buffet: "Much success can be attributed to inactivity. Most investors cannot resist the temptation to constantly buy and sell."
Buffet is referring to the buy-and-hold strategy, which is to buy shares and never sell them. Assume that share markets are semi-strong form efficient. Which of the following is a disadvantage of the strict buy-and-hold strategy? That is, a disadvantage of the buy-and-hold strategy is that it reduces:
(a) Capital gains tax.
(b) Explicit transaction costs such as brokerage fees.
(c) Implicit transaction costs such as bid-ask spreads.
(d) Portfolio rebalancing to maintain maximum diversification.
(e) Time wasted on researching whether it's better to buy or sell.
You deposit money into a bank. Which of the following statements is NOT correct? You:
(a) Are a lender.
(b) Issued debt.
(c) Bought debt.
(d) Are a debt holder.
(e) Own a debt asset.
Question 749 Multiples valuation, PE ratio, NPV
A real estate agent says that the price of a house in Sydney Australia is approximately equal to the gross weekly rent times 1000.
What type of valuation method is the real estate agent using?
(a) Price to EBITDA multiple.
(b) Price to book multiple.
(c) Price to earnings multiple.
(d) Price to revenue multiple.
(e) Discounted cash flow (DCF).
Itau Unibanco is a major listed bank in Brazil with a market capitalisation of equity equal to BRL 85.744 billion, EPS of BRL 3.96 and 2.97 billion shares on issue.
Banco Bradesco is another major bank with total earnings of BRL 8.77 billion and 2.52 billion shares on issue.
Estimate Banco Bradesco's current share price using a price-earnings multiples approach assuming that Itau Unibanco is a comparable firm.
Note that BRL is the Brazilian Real, their currency. Figures sourced from Google Finance on the market close of the BVMF on 24/7/15.
(a) BRL 28.87
(b) BRL 25.372
(c) BRL 22.1
(d) BRL 21.653
(e) BRL 21.528
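A sketch of the comparable-firm PE approach: compute the peer's PE ratio from its market capitalisation, shares on issue and EPS, then apply that multiple to the target's EPS.

```python
itau_mktcap, itau_eps, itau_shares = 85.744e9, 3.96, 2.97e9
brad_earnings, brad_shares = 8.77e9, 2.52e9

itau_price = itau_mktcap / itau_shares
pe = itau_price / itau_eps
brad_eps = brad_earnings / brad_shares
print(round(pe * brad_eps, 3))          # estimated Banco Bradesco price in BRL
```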
Question 754 fully amortising loan, interest only loan
(e) 11.5269%
Question 282 expected and historical returns, income and capital returns
You're the boss of an investment bank's equities research team. Your five analysts are each trying to find the expected total return over the next year of shares in a mining company. The mining firm:
Is regarded as a mature company since it's quite stable in size and was floated around 30 years ago. It is not a high-growth company;
Share price is very sensitive to changes in the price of the market portfolio, economic growth, the exchange rate and commodities prices. Due to this, its standard deviation of total returns is much higher than that of the market index;
Experienced tough times in the last 10 years due to unexpected falls in commodity prices.
Shares are traded in an active liquid market.
Your team of analysts present their findings, and everyone has different views. While there's no definitive true answer, who's calculation of the expected total return is the most plausible?
The analysts' source data is correct and true, but their inferences might be wrong;
All returns and yields are given as effective annual nominal rates.
(a) Alice says 5% pa since she calculated that this was the average total yield on government bonds over the last 10 years. She says that this is also the expected total yield implied by current prices on one year government bonds.
(b) Bob says 4% pa since he calculated that this was the average total return on the mining stock over the last 10 years.
(c) Cate says 3% pa since she calculated that this was the average growth rate of the share price over the last 10 years.
(d) Dave says 6% pa since he calculated that this was the average growth rate of the share market price index (not the accumulation index) over the last 10 years.
(e) Eve says 15% pa since she calculated that this was the discount rate implied by the dividend discount model using the current share price, forecast dividend in one year and a 3% growth rate in dividends thereafter, which is the expected long term inflation rate.
Question 562 covariance
What is the covariance of a variable X with itself?
The cov(X, X) or ##\sigma_{X,X}## equals:
Question 560 standard deviation, variance
The standard deviation and variance of a stock's annual returns are calculated over a number of years. The units of the returns are percent per annum ##(\% pa)##.
What are the units of the standard deviation ##(\sigma)## and variance ##(\sigma^2)## of returns respectively?
(a) Percentage points per annum ##(\text{pp pa})## and percentage points per annum ##(\text{pp pa})##.
(b) Percentage points per annum ##(\text{pp pa})## and percentage points per annum all squared ##\left( (\text{pp pa})^2 \right)##.
(c) Percentage points per annum all squared ##\left( (\text{pp pa})^2 \right)## and percentage points per annum ##(\text{pp pa})##.
(d) Percentage points per annum all squared ##\left( (\text{pp pa})^2 \right)## and percentage points per annum all squared ##\left( (\text{pp pa})^2 \right)##.
(e) Percent per annum ##(\% pa)## and percent per annum ##(\% pa)##.
Hint: Visit Wikipedia to understand the difference between percentage points ##(\text{pp})## and percent ##(\%)##.
Mr Blue, Miss Red and Mrs Green are people with different utility functions. Which of the statements about the 3 utility functions is NOT correct?
(a) Mr Blue and Miss Red prefer more wealth to less.
(b) Mrs Green enjoys losing wealth.
(c) Mr Blue is risk averse.
(d) Miss Red is risk neutral.
(e) Mrs Green is risk averse.
Question 702 utility, risk aversion, utility function, gamble
Each person has $50 of initial wealth. A coin toss game is offered to each person at a casino where the player can win or lose $50. Each player can flip a coin and if they flip heads, they receive $50. If they flip tails then they will lose $50. Which of the following statements is NOT correct?
(a) Mr Blue would enjoy the gamble.
(b) Miss Red would be indifferent to gambling or not.
(c) Mrs Green would dislike the gamble.
(d) Mr Blue's certainty equivalent of the risky gamble is $70.71. This is more than his current wealth which is why he would like to gamble.
(e) Miss Red's certainty equivalent of the risky gamble is $50. This is the same as her current wealth which is why she is indifferent to gambling or not.
Question 248 CAPM, DDM, income and capital returns
The total return of any asset can be broken down in different ways. One possible way is to use the dividend discount model (or Gordon growth model):
###p_0 = \frac{c_1}{r_\text{total}-r_\text{capital}}###
Which, since ##c_1/p_0## is the income return (##r_\text{income}##), can be expressed as:
###r_\text{total}=r_\text{income}+r_\text{capital}###
So the total return of an asset is the income component plus the capital or price growth component.
Another way to break up total return is to use the Capital Asset Pricing Model:
###r_\text{total}=r_\text{f}+β(r_\text{m}- r_\text{f})###
###r_\text{total}=r_\text{time value}+r_\text{risk premium}###
So the risk free rate is the time value of money and the term ##β(r_\text{m}- r_\text{f})## is the compensation for taking on systematic risk.
Using the above theory and your general knowledge, which of the below equations, if any, are correct?
(I) ##r_\text{income}=r_\text{time value}##
(II) ##r_\text{income}=r_\text{risk premium}##
(III) ##r_\text{capital}=r_\text{time value}##
(IV) ##r_\text{capital}=r_\text{risk premium}##
(V) ##r_\text{income}+r_\text{capital}=r_\text{time value}+r_\text{risk premium}##
Which of the equations are correct?
(a) I, IV and V only.
(b) II, III and V only.
(c) V only.
(d) All are true.
(e) None are true.
Question 117 WACC
A firm can issue 5 year annual coupon bonds at a yield of 8% pa and a coupon rate of 12% pa.
The beta of its levered equity is 1. Five year government bonds yield 5% pa with a coupon rate of 6% pa. The market's expected dividend return is 4% pa and its expected capital return is 6% pa.
The firm's debt-to-equity ratio is 2:1. The corporate tax rate is 30%.
What is the firm's after-tax WACC? Assume a classical tax system.
(a) 9.47%
(b) 8.93%
(c) 8.53%
(d) 7.80%
(e) 7.07%
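A sketch of the after-tax WACC calculation under the usual assumptions for this type of question: the cost of debt is the bond yield (the coupon rates are not used), the cost of equity comes from the CAPM with the market return equal to its dividend plus capital return, and a debt-to-equity ratio of 2:1 implies D/V = 2/3.

```python
r_d = 0.08                           # cost of debt = corporate bond yield
r_f = 0.05                           # government bond yield
r_m = 0.04 + 0.06                    # market dividend return + capital return
beta_e = 1.0
r_e = r_f + beta_e * (r_m - r_f)     # CAPM cost of levered equity

d_v, e_v, tc = 2 / 3, 1 / 3, 0.30    # D/E = 2:1 implies D/V = 2/3
wacc_after_tax = d_v * r_d * (1 - tc) + e_v * r_e
print(round(wacc_after_tax, 4))
```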
Question 494 franking credit, personal tax on dividends, imputation tax system
A firm pays a fully franked cash dividend of $100 to one of its Australian shareholders who has a personal marginal tax rate of 15%. The corporate tax rate is 30%.
What will be the shareholder's personal tax payable due to the dividend payment?
(a) -$21.4286
(b) -$7.563
Question 455 income and capital returns, payout policy, DDM, market efficiency
A fairly priced unlevered firm plans to pay a dividend of $1 next year (t=1) which is expected to grow by 3% pa every year after that. The firm's required return on equity is 8% pa.
The firm is thinking about reducing its future dividend payments by 10% so that it can use the extra cash to invest in more projects which are expected to return 8% pa, and have the same risk as the existing projects. Therefore, next year's dividend will be $0.90. No new equity or debt will be issued to fund the new projects, they'll all be funded by the cut in dividends.
What will be the stock's new annual capital return (proportional increase in price per year) if the change in payout policy goes ahead?
Assume that payout policy is irrelevant to firm value (so there's no signalling effects) and that all rates are effective annual rates.
(a) 2.7% pa.
(b) 3.0% pa.
(c) 3.5% pa.
(d) 3.3% pa.
(e) 3.8% pa.
Question 772 interest tax shield, capital structure, leverage
A firm issues debt and uses the funds to buy back equity. Assume that there are no costs of financial distress or transactions costs. Which of the following statements about interest tax shields is NOT correct?
(a) Higher debt leads to higher interest expense.
(b) Higher interest expense leads to lower profit before tax, following on from above.
(c) Lower profit before tax leads to lower tax payments, following on from above.
(d) Lower tax payments lead to higher cash flow from assets, following on from above.
(e) Lower profit after tax leads to a lower share price, following on from above.
Question 780 mispriced asset, NPV, DDM, market efficiency, no explanation
A company advertises an investment costing $1,000 which they say is under priced. They say that it has an expected total return of 15% pa, but a required return of only 10% pa. Of the 15% pa total expected return, the dividend yield is expected to be 4% pa and the capital yield 11% pa. Assume that the company's statements are correct.
What is the NPV of buying the investment if the 15% total return lasts for the next 100 years (t=0 to 100), then reverts to 10% after that time? Also, what is the NPV of the investment if the 15% return lasts forever?
In both cases, assume that the required return of 10% remains constant, the dividends can only be re-invested at 10% pa and all returns are given as effective annual rates. The answer choices below are given in the same order (15% for 100 years, and 15% forever):
(a) $7,359.46, Infinite
(b) $4,887.57, $10,000
(c) $1,786.35, -$5,000 (Note the negative sign)
(d) $1,471.89, $830.28
(e) $830.28, $3,000
Question 205 depreciation tax shield, CFFA
There are a number of ways that assets can be depreciated. Generally the government's tax office stipulates a certain method.
But if it didn't, what would be the ideal way to depreciate an asset from the perspective of a businesses owner?
(a) 'Straight line' or 'prime cost' depreciation, which allocates equal depreciation expenses over each year of the asset's life.
(b) 'Diminishing value' or 'reducing balance' depreciation, which allocates more depreciation expense at the start of the asset's life and less towards the end.
(c) No depreciation at all, so the asset is always kept on the books as being the same value that it was bought for. The asset will cause no depreciation expense in any year.
(d) Allocating all of the depreciation expense to the final year of the asset's life.
(e) Allocating all of the depreciation expense to the first year of the asset's life. Accountants would call this 'expensing' the asset, rather than 'capitalising' it and depreciating it slowly.
An old company's Firm Free Cash Flow (FFCF, same as CFFA) is forecast in the graph below.
To value the firm's assets, the terminal value needs to be calculated using the perpetuity with growth formula:
###V_{\text{terminal, }t-1} = \dfrac{FFCF_{\text{terminal, }t}}{r-g}###
Which point corresponds to the best time to calculate the terminal value?
(a) Point A.
(b) Point B.
(c) Point C.
(d) Any of the points.
(e) None of the points.
A new company's Firm Free Cash Flow (FFCF, same as CFFA) is forecast in the graph below.
For a price of $102, Andrea will sell you a share which just paid a dividend of $10 yesterday, and is expected to pay dividends every year forever, growing at a rate of 5% pa.
So the next dividend will be ##10(1+0.05)^1=$10.50## in one year from now, and the year after it will be ##10(1+0.05)^2=11.025## and so on.
Question 20 NPV, APR, Annuity
Your friend wants to borrow $1,000 and offers to pay you back $100 in 6 months, with more $100 payments at the end of every month for another 11 months. So there will be twelve $100 payments in total. She says that 12 payments of $100 equals $1,200 so she's being generous.
If interest rates are 12% pa, given as an APR compounding monthly, what is the Net Present Value (NPV) of your friend's deal?
(a) -648.51
(b) 60.28
(c) 70.88
(d) 125.51
Question 22 NPV, perpetuity with growth, effective rate, effective rate conversion
What is the NPV of the following series of cash flows when the discount rate is 10% given as an effective annual rate?
The first payment of $90 is in 3 years, followed by payments every 6 months in perpetuity after that which shrink by 3% every 6 months. That is, the growth rate every 6 months is actually negative 3%, given as an effective 6 month rate. So the payment at ## t=3.5 ## years will be ## 90(1-0.03)^1=87.3 ##, and so on.
(c) 545.53
(e) $65.74
Question 74 WACC, capital structure, CAPM
A firm's weighted average cost of capital before tax (##r_\text{WACC before tax}##) would increase due to:
(a) The firm issuing more debt and using the proceeds to repurchase stock.
(b) The firm issuing more equity and using the proceeds to pay off debt holders.
(c) The firm's industry becoming more systematically risky, for example if it was a mining company whose performance became more sensitive to countries' GDP growth, so the correlation of the firm's returns with the market was higher.
(d) The firm's industry becoming less systematically risky, for example if it was a child care centre and the government announced higher subsidies for parents using child care centres, so the correlation of the firm's returns with the market was lower.
(e) None of the above.
Question 75 WACC, CAPM
A company has:
50 million shares outstanding.
The market price of one share is currently $6.
The risk-free rate is 5% and the market return is 10%.
Market analysts believe that the company's ordinary shares have a beta of 2.
The company has 1 million preferred stock which have a face (or par) value of $100 and pay a constant dividend of 10% of par. They currently trade for $80 each.
The company's debentures are publicly traded and their market price is equal to 90% of their face value.
The debentures have a total face value of $60,000,000 and the current yield to maturity of corporate debentures is 10% per annum. The corporate tax rate is 30%.
What is the company's after-tax weighted average cost of capital (WACC)? Assume a classical tax system.
(a) 11.75%
(b) 11.82%
(c) 13.54%
(d) 13.78%
(e) 20.84%
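As a sketch of the mechanics rather than an official solution, the after-tax WACC under a classical tax system can be assembled from the three funding sources as follows; all figures are taken from the question and the variable names are illustrative.

E  = 50e6 * 6.0                    # ordinary shares: 50m shares at $6
P  = 1e6 * 80.0                    # preference shares: 1m at $80
D  = 0.9 * 60e6                    # debentures at 90% of face value
rE = 0.05 + 2 * (0.10 - 0.05)      # CAPM: rf + beta*(rm - rf)
rP = 0.10 * 100 / 80               # preferred dividend / market price
rD = 0.10                          # debenture yield to maturity
tc = 0.30                          # corporate tax rate
V  = E + P + D
wacc_after_tax = (E * rE + P * rP + D * rD * (1 - tc)) / V
print(round(wacc_after_tax, 4))    # approximately 0.1354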
Question 104 CAPM, payout policy, capital structure, Miller and Modigliani, risk
Assume that there exists a perfect world with no transaction costs, no asymmetric information, no taxes, no agency costs, equal borrowing rates for corporations and individual investors, the ability to short the risk free asset, semi-strong form efficient markets, the CAPM holds, investors are rational and risk-averse and there are no other market frictions.
For a firm operating in this perfect world, which statement(s) are correct?
(i) When a firm changes its capital structure and/or payout policy, share holders' wealth is unaffected.
(ii) When the idiosyncratic risk of a firm's assets increases, share holders do not expect higher returns.
(iii) When the systematic risk of a firm's assets increases, share holders do not expect higher returns.
(a) Only (i) is true.
(b) Only (ii) is true.
(c) Only (iii) is true.
(d) Only (i) and (ii) are true.
(e) All statements (i), (ii) and (iii) are true.
Question 271 CAPM, option, risk, systematic risk, systematic and idiosyncratic risk
All things remaining equal, according to the capital asset pricing model, if the systematic variance of an asset increases, its required return will increase and its price will decrease.
If the idiosyncratic variance of an asset increases, its price will be unchanged.
What is the relationship between the price of a call or put option and the total, systematic and idiosyncratic variance of the underlying asset that the option is based on? Select the most correct answer.
Call and put option prices increase when the:
(a) Systematic variance of the underlying asset increases.
(b) Idiosyncratic variance of the underlying asset increases.
(c) Systematic, idiosyncratic or total variance of the underlying asset increases.
(d) Systematic variance of the underlying asset decreases.
(e) Systematic, idiosyncratic or total variance of the underlying asset decreases.
Question 450 CAPM, risk, portfolio risk, no explanation
The accounting identity states that the book value of a company's assets (A) equals its liabilities (L) plus owners equity (OE), so A = L + OE.
The finance version states that the market value of a company's assets (V) equals the market value of its debt (D) plus equity (E), so V = D + E.
Therefore a business's assets can be seen as a portfolio of the debt and equity that fund the assets.
Let ##\sigma_\text{V total}^2## be the total variance of returns on assets, ##\sigma_\text{V syst}^2## be the systematic variance of returns on assets, and ##\sigma_\text{V idio}^2## be the idiosyncratic variance of returns on assets, and ##\rho_\text{D idio, E idio}## be the correlation between the idiosyncratic returns on debt and equity.
Which of the following equations is NOT correct?
(a) ##r_V = \dfrac{D}{V}.r_D + \dfrac{E}{V}.r_E##
(b) ##\beta_V = \dfrac{D}{V}.\beta_D + \dfrac{E}{V}.\beta_E##
(c) ##\sigma_\text{V syst}^2 = \left(\dfrac{D}{V}\right)^2.\sigma_\text{D syst}^2 + \left(\dfrac{E}{V}\right)^2.\sigma_\text{E syst}^2##
(d) ##\sigma_\text{V idio}^2 = \left(\dfrac{D}{V}\right)^2.\sigma_\text{D idio}^2 + \left(\dfrac{E}{V}\right)^2.\sigma_\text{E idio}^2 + 2.\dfrac{D}{V}.\dfrac{E}{V}.\rho_\text{D idio, E idio}.\sigma_\text{D idio}.\sigma_\text{E idio}##
(e) ##\sigma_\text{V total}^2 = \left(\dfrac{D}{V}\right)^2.\sigma_\text{D total}^2 + \left(\dfrac{E}{V}\right)^2.\sigma_\text{E total}^2##
Question 237 WACC, Miller and Modigliani, interest tax shield
Which of the following discount rates should be the highest for a levered company? Ignore the costs of financial distress.
(a) Cost of debt (##r_\text{D}##).
(b) Unlevered cost of equity (##r_\text{E, U}##).
(c) Levered cost of equity (##r_\text{E, L}##).
(d) Levered before-tax WACC (##r_\text{V, LxITS}##).
(e) Levered after-tax WACC (##r_\text{V, LwITS}##).
Question 376 leverage, capital structure, no explanation
Interest expense on debt is tax-deductible, but dividend payments on equity are not. True or false?
Question 380 leverage, capital structure
The "interest expense" on a company's annual income statement is equal to the cash interest payments (but not principal payments) made to debt holders during the year. or ?
Question 397 financial distress, leverage, capital structure, NPV
A levered firm has a market value of assets of $10m. Its debt is all comprised of zero-coupon bonds which mature in one year and have a combined face value of $9.9m.
Investors are risk-neutral and therefore all debt and equity holders demand the same required return of 10% pa.
Therefore the current market capitalisation of debt ##(D_0)## is $9m and equity ##(E_0)## is $1m.
A new project presents itself which requires an investment of $2m and will provide a:
$6.6m cash flow with probability 0.5 in the good state of the world, and a
-$4.4m (notice the negative sign) cash flow with probability 0.5 in the bad state of the world.
The project can be funded using the company's excess cash, no debt or equity raisings are required.
What would be the new market capitalisation of equity ##(E_\text{0, with project})## if shareholders vote to proceed with the project, and therefore should shareholders proceed with the project?
(a) $2.5m, so they should vote yes.
(b) $2.045455m, so they should vote yes.
(c) $0.9m, so they should vote no.
(d) $0, so they should vote no.
(e) -$0.9m, so they should vote no.
Question 398 financial distress, capital raising, leverage, capital structure, NPV
A levered firm has zero-coupon bonds which mature in one year and have a combined face value of $9.9m.
In one year the firm's assets will be worth:
$13.2m with probability 0.5 in the good state of the world, or
$6.6m with probability 0.5 in the bad state of the world.
A new project presents itself which requires an investment of $2m and will provide a certain cash flow of $3.3m in one year.
The firm doesn't have any excess cash to make the initial $2m investment, but the funds can be raised from shareholders through a fairly priced rights issue. Ignore all transaction costs.
Should shareholders vote to proceed with the project and equity raising? What will be the gain in shareholder wealth if they decide to proceed?
(a) Yes, $3m
(b) Yes, $1.5m
(c) Yes, $1m
(d) No, -$0.5m
(e) No, -$1.5m
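A sketch of the underlying arithmetic, assuming (as in the previous question) risk-neutral investors who all require a 10% pa return; that rate is an assumption carried over rather than stated in this question, and the helper function below is illustrative only.

r = 0.10                                   # assumed required return (risk-neutral investors)
face_debt = 9.9
assets_good, assets_bad = 13.2, 6.6        # firm asset values in one year, each with prob 0.5

def equity_value(extra_cash_flow=0.0):
    payoff_good = max(assets_good + extra_cash_flow - face_debt, 0.0)
    payoff_bad  = max(assets_bad  + extra_cash_flow - face_debt, 0.0)
    return 0.5 * (payoff_good + payoff_bad) / (1 + r)

e_without = equity_value()                 # 1.5
e_with    = equity_value(3.3)              # 3.0, but shareholders also contribute the $2m
gain = e_with - 2.0 - e_without
print(e_without, e_with, round(gain, 2))   # 1.5 3.0 -0.5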
Question 458 capital budgeting, no explanation
Which of the following is NOT a valid method to estimate future revenues or costs in a pro-forma income statement when trying to value a company?
(a) Extrapolation of past trends to estimate future revenues or costs.
(b) Using a constant or trending 'percent of sales' method to forecast future costs.
(c) Use futures (derivative) prices, if available, to forecast prices which helps calculate revenues.
(d) Use forecast GDP growth rates published by the statistics bureau to estimate future revenue growth.
(e) Assume that markets are efficient and use the random walk hypothesis to substitute a random value for revenue.
A young lady is trying to decide if she should attend university or begin working straight away in her home town.
The young lady's grandma says that she should not go to university because she is less likely to marry the local village boy whom she likes because she will spend less time with him if she attends university.
What's the correct way to classify this item from a capital budgeting perspective when trying to decide whether to attend university?
The cost of not marrying the local village boy should be classified as:
(c) A non-pecuniary cost that should be disregarded.
Question 228 DDM, NPV, risk, market efficiency
A very low-risk stock just paid its semi-annual dividend of $0.14, as it has for the last 5 years. You conservatively estimate that from now on the dividend will fall at a rate of 1% every 6 months.
If the stock currently sells for $3 per share, what must be its required total return as an effective annual rate?
If risk free government bonds are trading at a yield of 4% pa, given as an effective annual rate, would you consider buying or selling the stock?
The stock's required total return is:
(a) 9.55%, so buy the stock since its required return is too high for its low risk.
(b) 7.37%, so buy the stock since its required return is too high for its low risk.
(c) 7.37%, so sell the stock since its required return is too high for its low risk.
(d) 3.62%, so buy the stock since its required return is too low for its low risk.
(e) 3.62%, so sell the stock since its required return is too low for its low risk.
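A sketch of how the required total return can be backed out of the dividend discount model with semi-annual periods; the variable names are illustrative.

d0 = 0.14                        # semi-annual dividend just paid
g6 = -0.01                       # dividend growth per 6 months
p0 = 3.0                         # current share price
r6 = d0 * (1 + g6) / p0 + g6     # effective 6-month total return
r_annual = (1 + r6) ** 2 - 1     # convert to an effective annual rate
print(round(r6, 4), round(r_annual, 4))   # about 0.0362 and 0.0737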
Question 308 risk, standard deviation, variance, no explanation
A stock's standard deviation of returns is expected to be:
0.09 per month for the first 5 months;
0.14 per month for the next 7 months.
What is the expected standard deviation of the stock per year ##(\sigma_\text{annual})##?
Assume that returns are independently and identically distributed (iid) and therefore have zero auto-correlation.
(a) ##\sigma_\text{annual} = 0.09 \times 5 + 0.14 \times 7##
(b) ##\sigma_\text{annual} = (0.09 \times 5 + 0.14 \times 7)^{1/2}##
(c) ##\sigma_\text{annual} = (0.09^2 \times 5 + 0.14^2 \times 7)^{1/2}##
(d) ##\sigma_\text{annual} = (1+0.09)^5\times (1+0.14)^7 - 1##
(e) ##\sigma_\text{annual} = \left( \dfrac{0.09^2 \times 5 + 0.14^2 \times 7}{12} \right)^{1/2}##
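A one-line sketch of the idea behind the calculation: for iid returns, variances (not standard deviations) add across periods, so the monthly variances are summed before taking the square root.

sigma_annual = (0.09**2 * 5 + 0.14**2 * 7) ** 0.5
print(round(sigma_annual, 4))    # about 0.4215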
Question 471 risk, accounting ratio
High risk firms in danger of bankruptcy tend to have:
(a) Low debt to assets ratios.
(b) High quick ratios.
(c) Low interest coverage ratios.
(d) Positive amounts of net working capital.
(e) High net profit margins.
Question 253 NPV, APR
You just started work at your new job which pays $48,000 per year.
The human resources department have given you the option of being paid at the end of every week or every month.
Assume that there are 4 weeks per month, 12 months per year and 48 weeks per year.
Bank interest rates are 12% pa given as an APR compounding per month.
What is the dollar gain over one year, as a net present value, of being paid every week rather than every month?
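One way the comparison might be set up, treating the effective weekly rate as the fourth root of the monthly growth factor; both this choice and the variable names are assumptions consistent with the 4-weeks-per-month simplification in the question.

r_m = 0.12 / 12                         # 1% effective monthly rate
r_w = (1 + r_m) ** 0.25 - 1             # implied effective weekly rate

npv_monthly = sum(4000 / (1 + r_m) ** t for t in range(1, 13))   # 12 month-end pays
npv_weekly  = sum(1000 / (1 + r_w) ** t for t in range(1, 49))   # 48 week-end pays
print(round(npv_weekly - npv_monthly, 2))                        # roughly $169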
Question 256 APR, effective rate
A 2 year corporate bond yields 3% pa with a coupon rate of 5% pa, paid semi-annually.
Find the effective monthly rate, effective six month rate, and effective annual rate.
##r_\text{eff monthly}##, ##r_\text{eff 6 month}##, ##r_\text{eff annual}##.
(a) 0.002466, 0.014889, 0.03.
(b) 0.002485, 0.015, 0.030225.
(c) 0.004074, 0.024695, 0.05.
(d) 0.004124, 0.025, 0.050625.
(e) 0.004167, 0.025, 0.05.
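A sketch of the conversions, assuming the 3% pa yield is quoted as an APR compounding semi-annually (the usual convention for bonds with semi-annual coupons); this assumption is not stated explicitly in the question.

apr = 0.03
r_eff_6m      = apr / 2                        # 0.015
r_eff_annual  = (1 + r_eff_6m) ** 2 - 1        # 0.030225
r_eff_monthly = (1 + r_eff_6m) ** (1 / 6) - 1  # about 0.002485
print(r_eff_monthly, r_eff_6m, r_eff_annual)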
Question 108 bond pricing, zero coupon bond, term structure of interest rates, forward interest rate
An Australian company just issued two bonds:
A 1 year zero coupon bond at a yield of 10% pa, and
A 2 year zero coupon bond at a yield of 8% pa.
What is the forward rate on the company's debt from years 1 to 2? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted.
(a) 0.1840
(b) 0.1202
(c) 0.0920
(d) 0.0602
(e) 0.0301
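A sketch of the no-arbitrage forward-rate calculation, working in effective 6-month rates since the yields are quoted as APRs compounding every 6 months; the layout below is illustrative.

r1 = 0.10 / 2    # effective 6-month rate on the 1-year zero
r2 = 0.08 / 2    # effective 6-month rate on the 2-year zero
f6 = ((1 + r2) ** 4 / (1 + r1) ** 2) ** 0.5 - 1   # effective 6-month forward rate, year 1 to 2
f_apr = 2 * f6                                    # quoted as an APR compounding every 6 months
print(round(f6, 4), round(f_apr, 4))              # about 0.0301 and 0.0602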
Question 539 debt terminology, fully amortising loan, bond pricing
A 'fully amortising' loan can also be called a:
(e) Interest only loan
Question 514 corporate financial decision theory, idiom
The expression 'cash is king' emphasizes the importance of having enough cash to pay your short term debts to avoid bankruptcy. Which business decision is this expression most closely related to?
(a) Investment decision.
(b) Financing decision.
(c) Working capital decision.
(d) Payout policy decision.
(e) Capital or labour decision.
Question 54 NPV, DDM
A stock is expected to pay the following dividends:
Cash Flows of a Stock
Time (yrs) 0 1 2 3 4 ...
Dividend ($) 0.00 1.15 1.10 1.05 1.00 ...
After year 4, the annual dividend will grow in perpetuity at -5% pa. Note that this is a negative growth rate, so the dividend will actually shrink. So,
the dividend at t=5 will be ##$1(1-0.05) = $0.95##,
the dividend at t=6 will be ##$1(1-0.05)^2 = $0.9025##, and so on.
The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates.
What is the current price of the stock?
(a) $7.2968
(b) $7.5018
(e) $9.4101
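A sketch of the standard two-stage dividend discount calculation, taking the terminal value at t = 4 from the t = 5 dividend; this is only one way to lay out the arithmetic and the names are illustrative.

r, g = 0.10, -0.05
divs = {1: 1.15, 2: 1.10, 3: 1.05, 4: 1.00}
pv_divs = sum(d / (1 + r) ** t for t, d in divs.items())
terminal_at_4 = divs[4] * (1 + g) / (r - g)      # value at t=4 of dividends from t=5 on
price = pv_divs + terminal_at_4 / (1 + r) ** 4
print(round(price, 4))                           # roughly 7.75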
Question 241 Miller and Modigliani, leverage, payout policy, diversification, NPV
One of Miller and Modigliani's (M&M's) important insights is that a firm's managers should not try to achieve a particular level of leverage or interest tax shields under certain assumptions. So the firm's capital structure is irrelevant. This is because investors can make their own personal leverage and interest tax shields, so there's no need for managers to try to make corporate leverage and interest tax shields. This is true under the assumptions of equal tax rates, interest rates and debt availability for the person and the corporation, no transaction costs and symmetric information.
This principle of 'home-made' or 'do-it-yourself' leverage can also be applied to other topics. Read the following statements to decide which are true:
(I) Payout policy: a firm's managers should not try to achieve a particular pattern of equity payout.
(II) Agency costs: a firm's managers should not try to minimise agency costs.
(III) Diversification: a firm's managers should not try to diversify across industries.
(IV) Shareholder wealth: a firm's managers should not try to maximise shareholders' wealth.
Which of the above statement(s) are true?
(b) I and II only.
(c) I and III only.
(d) III only.
(e) All are true.
Question 475 payout ratio, dividend, no explanation
The below screenshot of Commonwealth Bank of Australia's (CBA) details was taken from the Google Finance website on 7 Nov 2014. Some information has been deliberately blanked out.
What was CBA's approximate payout ratio over the 2014 financial year?
Note that the firm's interim and final dividends were $1.83 and $2.18 respectively over the 2014 financial year.
(d) 129.93%
(e) 247.53%
Question 402 PE ratio, no explanation
Which of the following companies is most suitable for valuation using PE multiples techniques?
(a) A company with positive earnings that does not have any comparable firms.
(b) A company with positive earnings that has comparable firms with positive earnings.
(c) A company with positive earnings that has comparable firms with negative earnings.
(d) A company with negative earnings that has comparable firms with negative earnings.
(e) A company with negative earnings that has comparable firms with positive earnings.
Which of the following investable assets is the LEAST suitable for valuation using PE multiples techniques?
(a) Common equity in a small private company.
(b) Common equity in a listed public company.
(c) Commercial real estate.
(d) Ten year commercial real estate lease.
(e) Residential real estate.
Question 493 PE ratio
A firm has 2m shares and a market capitalisation of equity of $30m. The firm just announced earnings of $5m and paid an annual dividend of $0.75 per share.
What is the firm's (backward looking) price/earnings (PE) ratio?
(a) 2.5
Question 809 Markowitz portfolio theory, CAPM, Jensens alpha, CML, systematic and idiosyncratic risk
A graph of assets' expected returns ##(\mu)## versus standard deviations ##(\sigma)## is given in the graph below. The CML is the capital market line.
Which of the following statements about this graph, Markowitz portfolio theory and the Capital Asset Pricing Model (CAPM) theory is NOT correct?
(a) The market portfolio M has systematic risk only. It's a fully diversified portfolio comprised of all individual risky assets. The market portfolio is usually assumed to be the equity index, such as the ASX200 in Australia or the S&P500 in the US.
(b) The risk free security has no risk at all. Government bonds are usually assumed to be the risk-free security.
(c) Portfolio combinations of the market portfolio and risk free security will plot on the CML and will have systematic risk only. They will have no diversifiable risk.
(d) The portfolios on the CML with a return above ##r_f## have maximum return for any given level of risk.
(e) The individual assets and portfolios with returns less than the risk free rate are over-priced, have a negative Jensen's alpha and should be sold.
On 22-Mar-2013 the Australian Government issued series TB139 treasury bonds with a combined face value of $23.4m, listed on the ASX with ticker code GSBG25.
The bonds mature on 21-Apr-2025, the fixed coupon rate is 3.25% pa and coupons are paid semi-annually on the 21st of April and October of each year. Each bond's face value is $1,000.
At market close on Friday 11-Sep-2015 the bonds' yield was 2.736% pa.
At market close on Monday 14-Sep-2015 the bonds' yield was 2.701% pa. Both yields are given as annualised percentage rates (APR's) compounding every 6 months. For convenience, assume 183 days in 6 months and 366 days in a year.
What was the historical total return over those 3 calendar days between Friday 11-Sep-2015 and Monday 14-Sep-2015?
There are 183 calendar days from market close on the last coupon 21-Apr-2015 to the market close of the next coupon date on 21-Oct-2015.
Between the market close times from 21-Apr-2015 to 11-Sep-2015 there are 143 calendar days. From 21-Apr-2015 to 14-Sep-2015 there are 146 calendar days.
From 14-Sep-2015 there were 20 coupons remaining to be paid including the next one on 21-Oct-2015.
All of the below answers are given as effective 3 day rates.
(a) -0.035%
(c) 0.035%
Question 791 mean and median returns, return distribution, arithmetic and geometric averages, continuously compounding rate, log-normal distribution, VaR, confidence interval
A risk manager has identified that their pension fund's continuously compounded portfolio returns are normally distributed with a mean of 5% pa and a standard deviation of 20% pa. The fund's portfolio is currently valued at $1 million. Assume that there is no estimation error in the above figures. To simplify your calculations, all answers below use 2.33 as an approximation for the normal inverse cumulative density function at 99%. All answers are rounded to the nearest dollar. Which of the following statements is NOT correct?
(a) The mean (expected or arithmetic average) portfolio value in one year is $1,072,508. The median (50th percentile) portfolio value in one year is $1,051,271.
(b) The annual 99% relative VaR is $391,591.
(c) The annual 99% absolute VaR is $340,320.
(d) The 98% confidence interval of portfolio values in one year ##(V_1)## is ##\$659,680 < V_1 < \$1,675,312,977##.
(e) The 98% confidence interval of continuously compounded portfolio returns over the next year ##(r_{0 \rightarrow 1})## is ##-41.6\% < r_{0 \rightarrow 1} < 51.6\%##.
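For reference, the log-normal quantities behind these statements can be reproduced as follows, using z = 2.33 for the 99% normal quantile as in the question; this is only a sketch of the arithmetic and does not by itself identify which statement is incorrect.

from math import exp

v0, mu, sigma, z = 1_000_000, 0.05, 0.20, 2.33

mean_v1   = v0 * exp(mu + sigma ** 2 / 2)   # about 1,072,508
median_v1 = v0 * exp(mu)                    # about 1,051,271
v1_low    = v0 * exp(mu - z * sigma)        # about 659,680 (1st percentile)
v1_high   = v0 * exp(mu + z * sigma)        # about 1,675,313 (99th percentile)
absolute_var = v0 - v1_low                  # about 340,320
print(mean_v1, median_v1, v1_low, v1_high, absolute_var)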
Question 873 Sharpe ratio, Treynor ratio, Jensens alpha, SML, CAPM
Which of the following statements is NOT correct? Fairly-priced assets should:
(a) Have Sharpe ratios equal to the market's.
(b) Have Treynor ratios equal to the market's.
(c) Have Jensen's alphas that are zero.
(d) Not necessarily be bought or sold since there's no gain either way.
(e) Plot on the Security Market Line (SML).
Question 874 utility, return distribution, log-normal distribution, arithmetic and geometric averages
Who was the first theorist to endorse the maximisation of the geometric average gross discrete return for investors (not gamblers) since it gave a "...portfolio that has a greater probability of being as valuable or more valuable than any other significantly different portfolio at the end of n years, n being large"?
(a) Daniel Bernoulli.
(b) John Larry Kelly Jr.
(c) Henry Allen Latane.
(d) Ole Peters.
(e) Paul Anthony Samuelson.
Copyright © 2014 Keith Woodward
GridPP5 Brunel University London Staff Grant
Lead Research Organisation: Brunel University
Department Name: Electronic and Electrical Engineering
This proposal, submitted in response to the 2014 invitation from STFC, aims to provide and operate a computing Grid for the exploitation of LHC data in the UK. The success of the current GridPP Collaboration will be built upon, and the UK's response to production of LHC data in the period April 2016 to March 2020 will be to ensure that there is a sustainable infrastructure providing "Distributed Computing for Particle Physics".
We propose to operate a distributed high throughput computational service as the main mechanism for delivering very large-scale computational resources to the UK particle physics community. This foundation will underpin the success and increase the discovery potential of UK physicists. We will operate a production-quality service, delivering robustness, scale and functionality. The proposal is fully integrated with international projects and we must exploit the opportunity to capitalise on the UK leadership already established in several areas. The Particle Physics distributed computing service will increasingly be integrated with national and international initiatives.
The project will be managed across various domains and will deliver the UK's commitment to the Worldwide LHC Computing Grid (WLCG) and ensure that worldwide activities directly benefit the UK.
By 2015, the UK Grid infrastructure will have expanded in size to 50,000 cores, with more than 35 PetaBytes of storage. This will enable the UK to exploit, in an internationally competitive way, the unique physics potential of the LHC.
GridPP's knowledge exchange activities fall into two main areas: firstly, those aimed at other academic disciplines, and secondly, business and industry. GridPP has a strong outreach programme to a public and academic audience, and intends to continue this in GridPP5. The Dissemination Officer will organise GridPP's presence at conferences and events. This includes booking and manning booths, arranging backdrops, material, posters, screens, and rotas where appropriate. Examples of events that we have attended include The British Science Festival, The Royal Society Summer Exhibition, the British Science Association Science Communication Conference and Meet The Scientist at the Museum of Science and Industry in Manchester.
GridPP has developed an extensive website that is central to project communications. The Dissemination Officer will be responsible for producing news items for the website and drafting GridPP press releases. We have had broad coverage from these in the past, including many national newspapers and online publications.
Additional activities will include producing GridPP material, such as leaflets, posters, t-shirts, bags and magic cubes. We have found these very valuable in raising GridPP's and LHC's profile at minimal cost. The Dissemination Officer will also promote outreach training for members of the collaboration, will identify GridPP staff who have specific expertise in this area and will arrange occasional GridPP events, such as the Tier-1 open day.
On KE, our initial work has proved that GridPP's technology can be of use across a range of disciplines and sectors, and we plan to continue this work during GridPP5. The objectives of this program will be to improve awareness of the technologies developed by GridPP and its partners in academia and industry, and hence facilitate the increase in use of these technologies within new areas.
Apr 16 - Sep 20
ST/N001273/1
Peter Robert Hobson
Paul Kyberd
Research Subject:
Particle physics - experiment (50%)
Particle physics - theory (50%)
Beyond the Standard Model (50%)
The Standard Model (50%)
Brunel University, United Kingdom (Lead Research Organisation)
Rutherford Appleton Laboratory, Oxford (Collaboration)
University of Bristol, United Kingdom (Collaboration)
Imperial College London, United Kingdom (Collaboration)
European Organization for Nuclear Research (CERN) (Collaboration)
Peter Robert Hobson (Principal Investigator) http://orcid.org/0000-0002-5645-5253
Paul Kyberd (Principal Investigator)
Publications (title, publication, date published):
Sirunyan A (2019) Search for long-lived particles decaying into displaced jets in proton-proton collisions at √s = 13 TeV in Physical Review D
Sirunyan A (2018) Search for high-mass resonances in final states with a lepton and missing transverse momentum at √s = 13 TeV in Journal of High Energy Physics
Sirunyan A (2018) Erratum to: Measurements of the pp → ZZ production cross section and the Z → 4ℓ branching fraction, and constraints on anomalous triple gauge couplings at √s = 13 TeV in The European Physical Journal C
Sirunyan A (2018) Bose-Einstein correlations in pp, pPb, and PbPb collisions at √s_NN = 0.9-7 TeV in Physical Review C
Sirunyan A (2018) Search for pair-produced resonances decaying to quark pairs in proton-proton collisions at √s = 13 TeV in Physical Review D
Sirunyan A (2019) Search for heavy Majorana neutrinos in same-sign dilepton channels in proton-proton collisions at √s = 13 TeV in Journal of High Energy Physics
Sirunyan A (2018) Search for an exotic decay of the Higgs boson to a pair of light pseudoscalars in the final state with two b quarks and two τ leptons in proton-proton collisions at √s = 13 TeV in Physics Letters B
Sirunyan A (2018) Event shape variables measured using multijet final states in proton-proton collisions at √s = 13 TeV in Journal of High Energy Physics
Sirunyan A (2018) Search for a heavy right-handed W boson and a heavy neutrino in events with two same-flavor leptons and two jets at √s = 13 TeV in Journal of High Energy Physics
Sirunyan A (2018) Measurement of the groomed jet mass in PbPb and pp collisions at √s_NN = 5.02 TeV in Journal of High Energy Physics
Description CMS
Organisation European Organization for Nuclear Research (CERN)
Department Compact Muon Solenoid (CMS)
PI Contribution Construction, commissioning and operation of the CMS experiment. Data analysis in top-quark physics studies. Provision (via GridPP London Tier-2) of computing resources.
Collaborator Contribution Data acquisition, computing resources (Tier 0), co-authorship of publications, access to data, scientific leadership and support
Impact Over 200 refereed journal publications in experimental particle physics. Along with LHC data analysed by the ATLAS collaboration CMS determined the existence of the Higgs boson which was the subject of the 2013 Nobel Prize in Physics. Several STFC funded doctoral students have been trained in data analysis, computer programming and large-scale distributed Grid computing techniques.
Organisation Imperial College London
Department Department of Physics
Sector Academic/University
Organisation Rutherford Appleton Laboratory
Department Particle Physics Department
Organisation University of Bristol
Department School of Physics
Subsonic Burning Fronts (aka Flames)
Laminar Flame Speeds in Degenerate Oxygen-Neon Mixtures (Flames VI, 2020)
The collapse of degenerate oxygen-neon cores (i.e., electron-capture supernovae or accretion-induced collapse) proceeds through a phase in which a deflagration wave ("flame") forms at or near the center and propagates through the star. In models, the assumed speed of this flame influences whether this process leads to an explosion or to the formation of a neutron star.
In this article we calculate the laminar flame speeds in degenerate oxygen-neon mixtures with compositions motivated by detailed stellar evolution models. These mixtures include trace amounts of carbon and have a lower electron fraction than those considered in previous work. We find that trace carbon has little effect on the flame speeds, but that material with electron fraction $Y_e \simeq$ 0.48-0.49 has laminar flame speeds that are 2 times faster than those at $Y_e = 0.5$. We provide tabulated flame speeds and a corresponding fitting function so that the impact of this difference can be assessed via full star hydrodynamical simulations of the collapse process.
Turbulent Chemical Diffusion In Convectively Bounded Carbon Flames (Flames V, 2016)
It has been proposed that mixing induced by convective overshoot can disrupt the inward propagation of carbon deflagrations in super-asymptotic giant branch stars. To test this theory, in this article we study an idealized model of convectively bounded carbon flames with 3D hydrodynamic simulations of the Boussinesq equations using the pseudospectral code Dedalus.
Because the flame propagation timescale is much longer than the convection timescale, we approximate the flame as fixed in space, and only consider its effects on the buoyancy of the fluid. By evolving a passive scalar field, we derive a turbulent chemical diffusivity produced by the convection as a function of height, $D_t(z)$. Convection can stall a flame if the chemical mixing timescale, set by the turbulent chemical diffusivity, $D_t$, is shorter than the flame propagation timescale, set by the thermal diffusivity, $\kappa$, i.e., when $D_t > \kappa$. However, we find $D_t < \kappa$ for most of the flame because convective plumes are not dense enough to penetrate into the flame. Extrapolating to realistic stellar conditions, this implies that convective mixing cannot stall a carbon flame and that "hybrid carbon-oxygen-neon" white dwarfs are not a typical product of stellar evolution.
The Laminar Flame Speedup by 22Ne Enrichment in White Dwarf Supernovae (Flames IV, 2007)
Carbon-oxygen white dwarfs contain $^{22}$Ne formed from $\alpha$-captures onto $^{14}$N during core He burning in the progenitor star. In a white dwarf (Type Ia) supernova, the $^{22}$Ne abundance determines, in part, the neutron-to-proton ratio and hence the abundance of radioactive $^{56}$Ni that powers the light curve. The $^{22}$Ne abundance also changes the burning rate and hence the laminar flame speed. In this article we tabulate the flame speedup for different initial $^{12}$C and $^{22}$Ne abundances and for a range of densities. This increase in the laminar flame speed -- about 30% for a $^{22}$Ne mass fraction of 6% -- affects the deflagration just after ignition near the center of the white dwarf, where the laminar speed of the flame dominates over the buoyant rise, and in regions of lower density, $\simeq$ 10$^7$ g cm$^{-3}$, where a transition to distributed burning is conjectured to occur. The increase in flame speed will decrease the density of any transition to distributed burning.
Physical Properties of Laminar Helium Deflagrations (Flames III, 2000)
The physical properties of laminar deflagrations propagating through helium-rich compositions are determined for a wide range of temperatures and densities in this article. The speeds, thermal widths, reactive widths, density contrasts, critical temperatures, and trigger masses are analyzed, along with their sensitivity to the input thermal transport coefficients, nuclear reaction rates, nuclear reaction network employed, and equation of state. A simple fitting formula of modest accuracy for the laminar flame speed is given, as well as detailed tables that list all of the physical properties. These physical properties may be incorporated into hydrodynamic programs as subgrid models for flame-tracking algorithms, and have applications toward models of X-ray bursts and the thin-shell helium flash of intermediate-mass stars.
I can't believe that I didn't publish the result that the final composition behind such flames is calcium, titanium, and chromium rich. Arrggh!
The Conductive Propagation of Nuclear Flames. II. Convectively Bounded Flames in C+O and O+Ne+Mg Cores (Flames II, 1994)
In this article we determine the speeds, and many other physical properties, of flame fronts that propagate inward into degenerate and semidegenerate cores of carbon and oxygen (CO) and neon and oxygen (NeOMg) white dwarfs when such flames are bounded on their exterior by a convective region.
Combustion in such fronts, per se, is incomplete, with only a small part of the initial mass fraction burned. A condition of balanced power is set up in the star where the rate of energy emitted as neutrinos from the convective region equals the power available from the unburned fuel that crosses the burning front. The propagation of the burning front itself is in turn limited by the temperature at the base of the convective shell, which cannot greatly exceed the adiabatic value. Solving for consistency between these two conditions gives a unique speed for the flame. Typical values for CO white dwarfs are a few hundredths of a centimeter per second. Flames in NeOMg mixtures are slower. Tables are presented in a form that can easily be implemented in stellar evolution codes and yield the rate at which the convective shell advances into the interior. Combining these velocities with the local equations for stellar structure, we find a minimum density for each gravitational potential below which the flame cannot propagate, and must die.
Although detailed stellar models will have to be constructed to resolve some issues conclusively, our results suggest that a CO white dwarf ignited at its edge will not burn carbon all the way to its center unless the mass of the white dwarf exceeds 0.8 M$_{\odot}$. On the other hand, it is difficult to ignite carbon burning by compression alone anywhere in a white dwarf whose mass does not exceed 1.0 M$_{\odot}$. Thus, compressionally ignited shell carbon burning in an accreting CO dwarf almost certainly propagates all the way to the center of the star. Implications for neutron star formation, and Type Ia supernova models, are briefly discussed. These are also applicable to massive stars in the roughly 10-12 M$_{\odot}$ range which ignite neon burning off center.
The Conductive Propagation of Nuclear Flames. I. Degenerate C + O and O + NE + MG White Dwarfs (Flames I, 1992)
This article determines the physical properties - speed, width, and density structure - of conductive burning fronts in degenerate carbon-oxygen (C + O) and oxygen-neon-magnesium (O + Ne + Mg) compositions for a grid of initial densities and compositions. The dependence of the physical properties of the flame on the assumed values of nuclear reaction rates, the nuclear reaction network employed, the thermal conductivity, and the choice of coordinate system are investigated. The occurrence of accretion-induced collapse of a white dwarf is found to be critically dependent on the velocity of the nuclear conductive burning front and the growth rate of hydrodynamic instabilities. Treating the expanding area of the turbulent burning region as a fractal whose tile size is identical to the minimum unstable Rayleigh-Taylor wavelength, it is found, for all reasonable values of the fractal dimension, that for initial C + O or O + Ne + Mg densities above about 9 $\times$ 10$^9$ g cm$^{-3}$ the white dwarf should collapse to a neutron star.
Impact of EU duty cycle and transmission power limitations for sub-GHz LPWAN SRDs: an overview and future challenges
Martijn Saelens (ORCID: orcid.org/0000-0002-2439-1996), Jeroen Hoebeke, Adnan Shahid & Eli De Poorter
Long-range sub-GHz technologies such as LoRaWAN, SigFox, IEEE 802.15.4, and DASH7 are increasingly popular for academic research and daily life applications. However, especially in the European Union (EU), the use of their corresponding frequency bands is tightly regulated, since they must conform to the short-range device (SRD) regulations. Regulations and standards for SRDs exist on various levels, from global to national, but are often a source of confusion. Not only are multiple institutes responsible for drafting legislation and regulations, but depending on the type of document these rules can be informational or mandatory. Regulations also vary from region to region; for example, regulations in the United States of America (USA) rely on electrical field strength and harmonic strength, while EU regulations are based on duty cycle and maximum transmission power. A common misconception is the presence of a common 1% duty cycle, while in fact the duty cycle is frequency band-specific and can be loosened under certain circumstances. This paper clarifies the various regulations for the European region, the parties involved in drafting and enforcing regulation, and the impact on recent technologies such as SigFox, LoRaWAN, and DASH7. Furthermore, an overview is given of potential mitigation approaches to cope with the duty cycle constraints, as well as future research directions.
The past decade has seen a large growth in the use of Low-Power Wide-Area Network (LPWAN) short-range devices (SRDs). To ensure compatibility over borders and cultivate the economic market and collaboration, harmonization of frequency bands for SRDs is needed. SRDs use unlicensed bands and must thus share access to radio spectrum with other devices. This requires regulation to assure fair spectrum access for all SRDs and to prevent harmful interference. Such regulation consists of limits on transmission power and duty cycle. As the number of SRDs rises and regulatory bands become more contested, the effects of regulatory limits will become more and more relevant. For example, LoRa duty cycle limitations already impact, among other things, the throughput of the downlink communication, the (un)availability of acknowledgements, the feasibility of over the air firmware upgrades, geolocation inaccuracies, and scalability [1–4]. Several models predict that the probability of duty cycle violations during downlink communication will further increase, up to 20% for SigFox and 15% for LoRaWAN [5]. Similar impacts are expected for other technologies operating in sub-GHz radio frequency bands.
However, despite the large impact of these regulations, many researchers are unaware of the exact limits and of the mitigation techniques they can apply. Various institutes have each implemented regulations on the availability of radio spectrum for SRDs and their usage restrictions. This fragmentation causes confusion and misconceptions for researchers and manufacturers alike. Documentation is scattered among multiple sources and their jurisdiction is often unclear. For example, a generalized duty cycle of 1 or 10% is often mentioned (e.g., [6, 7]), while the actual regulations are more diverse and include other parameters such as maximum transmission power and the usage of polite spectrum access techniques. At the time of writing, there is no survey known to the authors which gives an easily accessible overview of these regulations, their legal value, and where additional information about them can be found. This paper is a response to that vacuum and aims to provide an overview of the currently existing regulations for SRDs in the European region using the unlicensed frequency bands in the 863 to 870 MHz range. Although this paper focuses on LPWAN SRDs in the 863 to 870 MHz range, the insights and resources presented in this paper can be generalized to other frequency ranges as most of the documents described in this paper also contain information about other frequency ranges. Finally, the paper also discusses how these legislative constraints can serve as inspiration for future research.
The paper is divided into 6 sections. First, Section 2 provides examples regarding the impact of duty cycle limitations in recent scientific papers. Next, Section 3 gives a basic overview of the currently available frequency bands for SRDs in the 863 to 870 MHz range in the European region. This section discusses the regulatory demands for using these frequency bands, such as duty cycle limitations, and gives an overview about recent changes in the regulatory landscape, such as the newly opened frequency bands in the 874 to 876 MHz and 915 to 921 MHz range. The next section, Section 4, discusses how technologies such as SigFox, LoRaWAN, and DASH7 are impacted by the regulations. Next, an overview is given in Section 5 on future research challenges related to the regulations. The last section then contains the conclusion of this paper.
Additionally, to guide researchers through the regulatory landscape, 2 appendices are provided to give more insights in the drafting of regulation and the regulatory documents produced defining the regulations. Appendix A describes the various institutes involved and how they collaborate. Appendix B then delves deeper into the regulations and documents drafted by those institutes. This includes the official legislation provided by the EU and other often cited documents such as ERC Recommendation 70-03 and EN 300 220.
Although regulatory limitations have a significant impact on existing technologies, these are often overlooked and left unexplored in current scientific literature. For example, [8] proposes a sub-GHz network protocol based on IEEE 802.15.4g [9] for reliable industrial networks with delay guarantees, using source routing and path changes, but does not mention the regulatory limits and how they impact their proposed solution. Similarly, [10] presents an LPWAN multi-hop protocol using features such as multi-hop data-aggregation and Adaptive Power Control (APC), but does not discuss how the solutions would perform within regulatory limits.
Even when limitations are mentioned, analyzing the impact of regulations is often left for future research. For example, [11] describes network architectures for wireless connected shuttles in warehouses using IEEE 802.15.4 in the 868 MHz band, but only mentions that latency bounded operations are limited due to duty cycle restrictions. Another example, [12], describes a protocol to analyze power consumption at mains sockets. There, it is shown that radio duty cycle regulations are responsible for limiting the number of clients connected to a master device, but the extent of this limitation is not verified with experimental data.
Recently, a small number of scientific papers have been published that aim to quantify the impact of regulatory limits. For example, [13] shows that the throughput of 802.11ah networks using high data rates, polling sequences, and large packet sizes (e.g., healthcare use cases) is severely impacted by duty cycle restrictions. In the same way, duty cycle restrictions pose a difficult obstacle for real-time communication and further research is needed [6]. Pham [14], proposing a solution for quality of service (QoS) under duty cycle restrictions, and [15], introducing duty cycle aware real-time scheduling, both include mitigating actions and experimental data. Unfortunately, such papers are still rather the exception. At the moment, even commercial devices sometimes ignore the regulatory limits in real-world situations. For example, a measurement in [16] of the 868 MHz frequency band in Paris shows the presence of a violating interfering device. Similarly, [17] also shows the presence of duty cycle limit offenders during real-world measurements in the city of Aalborg, showing that the regulations are not always clear or are sometimes overlooked.
An overview of the current sub-GHz duty cycle and power restrictions
This section gives a high-level overview of the available frequency bands for SRDs and their regulatory limits.
Available frequency bands
Sub-GHz technologies such as LoRaWAN and Sigfox can use several radio frequencies. Multiple overlapping frequency bands are available [18]. Some bands are application specific, whereas others are meant for non-specific devices. The bandwidth varies from 0.05 to 5 MHz. Each of the frequency bands specifies 2 parameters: the maximum allowed transmission power and the maximum allowed duty cycle ratio. An overview of the available frequency bands is shown in Fig. 1 and described in Table 1. Currently, there are 5 types of frequency bands based on their application:
Radio Frequency Identification (RFID) applications are based on tags and devices activating the tags for retrieval of information (1 frequency band).
A visual overview of the available frequency bands for SRDs in the 863 to 870 MHz range. The maximum transmission power (in mW) and maximum duty cycle (in %) are mentioned for each band
Table 1 An overview of the available frequency bands for SRDs in the 863 to 870 MHz range [18]
Wideband data applications use wideband modulation techniques (1 frequency band).
High duty cycle/continuous transmission applications rely on low latency. For example, streaming and multimedia devices such as home entertainment systems, wireless headphones, wireless microphones, and assistive listening devices (1 frequency band).
Low duty/high reliability applications are alarm and social alarm systems with a need for reliable communication (5 frequency bands).
All other devices belong to the non-specific category (8 frequency bands).
Duty cycle limitations
Each of the frequency bands imposes limits to the maximum amount of time devices are allowed to transmit. These limits are defined in the form of (i) a duty cycle or (ii) polite spectrum access restrictions. The duty cycle is defined as the ratio of the cumulated sum of transmission time per observation period. This duty cycle limit is given by (1) where Tobs is the observation period and \(\sum T_{\text {on}}\) the total allowed on air transmission time of the device within that period [18, 19]. The default duration of the observation period is 1 h, unless otherwise specified for the specific frequency band. Currently, all frequency bands use the default observation period of 1 h.
$$ \mathrm{{DC}_{max}} = \frac{\sum T_{\text{on}}}{T_{\text{obs}}} $$
Duty cycles range from 0.1% (3.6 s per hour) up to 10% (360 s per hour). Only transmission times of transmissions within that particular frequency band are included for the calculation of the duty cycle. This means that transmissions may occur in multiple bands simultaneously. By transmitting sequentially in multiple frequency bands, a larger maximum transmission time per hour can be achieved. The duty cycle does not have any restrictions on how the transmissions should be spread out in time. It makes no distinction whether transmission times are evenly spaced out or whether the transmission time is used up at the beginning of the observation period and the rest of the interval waited out. The only thing that must be respected is the maximum duty cycle ratio itself. As such, devices are allowed to transmit using bursty traffic, e.g., transmitting 36 s and then waiting for 3564 s for a duty cycle of 1%. However, as the start of the observation period is not exactly defined, one must be cautious that such bursts do not occur in the same observation period Tobs as shown in example c in Fig. 2.
Examples of how transmissions can be spread in time during the observation period Tobs. a Example of an evenly distributed spreading. b Example on the other hand of using all available transmission time in a single burst. c Also an example of using all available transmission time in a single burst, but due to the offset there are actually 2 bursts in a single observation period Tobs. Therefore, the device in example c does not conform to the duty cycle regulations
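As an illustration of what these limits imply for a device implementation, the sketch below keeps a sliding 1-hour record of on-air time and refuses transmissions that would exceed the budget; the class and method names are illustrative and not taken from any standard or library.

import time
from collections import deque

class DutyCycleBudget:
    """Illustrative sliding-window duty-cycle tracker for one sub-band."""
    def __init__(self, duty_cycle=0.01, t_obs=3600.0):
        self.budget = duty_cycle * t_obs      # e.g. 36 s per hour for a 1% duty cycle
        self.t_obs = t_obs
        self.history = deque()                # (start_time, airtime) pairs

    def _used(self, now):
        # drop transmissions that fell out of the observation window
        while self.history and self.history[0][0] < now - self.t_obs:
            self.history.popleft()
        return sum(airtime for _, airtime in self.history)

    def try_transmit(self, airtime_s):
        now = time.monotonic()
        if self._used(now) + airtime_s > self.budget:
            return False                      # would violate the duty cycle limit
        self.history.append((now, airtime_s))
        return True                           # caller may transmit for airtime_s seconds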
Transmission power limitations
In addition to the duty cycle restrictions, there is also a limit on the transmission power. Transmission power limits range from 5 to 2000 mW. They are expressed in milliwatt (mW) or decibel-milliwatt (dBm). The conversion table for the most occurring maximum transmission power values is given in Table 2. The power values are defined here as Effective Radiated Power (ERP) values; the power that must be given to a reference half-wave dipole antenna to get the same electrical field strength as the actual device at the same distance in the direction of the antenna gain [20]. Another often used definition is Effective Isotropic Radiated Power (EIRP); the power that must be given to a reference isotropic antenna to get the same electrical field strength as the actual device at the same distance. The EIRP and ERP can be converted into each other using (2) if the powers are expressed in dBm.
$$ P_{\text{EIRP}} = P_{\text{ERP}} + 2.15~\text{dB} $$
Table 2 The most occurring maximum transmission power values in mW and dBm
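The unit conversions behind Table 2 and Eq. (2) can be expressed as two small helper functions; this is only an illustrative sketch.

from math import log10

def mw_to_dbm(p_mw):
    return 10 * log10(p_mw)            # e.g. 25 mW -> about 14 dBm

def erp_dbm_to_eirp_dbm(p_erp_dbm):
    return p_erp_dbm + 2.15            # half-wave dipole vs isotropic reference antenna

print(mw_to_dbm(25), erp_dbm_to_eirp_dbm(mw_to_dbm(25)))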
APC is required in frequency band 47b(865 MHz) (Footnote 1; see Section 3.5 and Table 3). This implies that an SRD adapts its transmission power when communicating with another SRD. The peak transmission power at the minimum setting of APC should not exceed 7 dBm ERP [21]. Although APC requires bi-directional communication to find out the used transmission powers, it is not defined how this should be implemented. Technologies for APC do exist, such as Adaptive Transmission Power Control (ATPC) [22], but are often solely focused on transmission quality and energy consumption instead of regulations.
Table 3 An overview of the extra restrictions on the available frequency bands for SRDs in the 863 to 870 MHz range [18]
Polite spectrum access
When an application uses polite spectrum access, the duty cycle restrictions are loosened. Polite spectrum access encompasses 2 aspects: Listen Before Talk (LBT) and Adaptive Frequency Agility (AFA) [21]. LBT requires that the device checks whether the medium is already in use by performing a Clear Channel Assessment (CCA). When the medium is in use, the device must wait a random backoff interval or change the frequency before checking again. The latter is called AFA. When these 2 aspects are implemented, the duty cycle is loosened to 100 s of cumulative transmission time per hour for each possible interval of 200 kHz, which corresponds to a duty cycle ratio of 2.7%. Since the regulations do not define the start and end point of these 200 kHz boundaries, all possible 200 kHz intervals should be considered (Footnote 2). Note that as a consequence, using polite spectrum access techniques is not beneficial for certain bands, for example 54(869.4 MHz) and 55(869.65 MHz), since these have duty cycle limitations of 10% which is higher than the 2.7% with polite spectrum access. A notable downside of polite spectrum access techniques is additional complexity, which often translates into increased hardware costs. For this reason, technologies such as LoRaWAN and SigFox do not support CCA and hence do not support polite spectrum access.
To be allowed to use the loosened duty cycle limit, devices implementing polite spectrum access techniques must also comply with other restrictions (Fig. 3) [21]. The CCA check must have a minimum duration of 160 μs. After this check, the device must wait for a dead time of maximum 5 ms before it may begin its transmission. The transmission itself has a maximum duration of 1 s or 4 s depending on the type of transmission. A transmission is defined as a continuous transmission or a burst of transmissions separated by intervals smaller than 5 ms. After the transmission, the application is banned from transmitting on that frequency for a minimum of 100 ms. It is however still allowed to use that interval for the next CCA check or for transmitting on other frequencies.
A visual overview of the restrictions for polite spectrum access transmissions
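A pseudo-implementation of such an LBT + AFA transmit attempt is sketched below; the radio driver calls (cca, transmit, block_channel, pick_other_channel) are placeholders rather than a real API, and the timing constants simply mirror the restrictions described above.

import random, time

CCA_MIN_S  = 160e-6   # minimum listen duration before transmitting
DEAD_MAX_S = 5e-3     # the transmission must start within 5 ms of a clear CCA
TX_OFF_S   = 100e-3   # minimum off time on that frequency after a transmission

def polite_transmit(radio, frame, max_tx_s=1.0, max_attempts=10):
    channel = radio.current_channel()
    for _ in range(max_attempts):
        if radio.cca(channel, duration_s=CCA_MIN_S):   # True means the channel is clear
            radio.transmit(frame, channel, max_duration_s=max_tx_s)
            radio.block_channel(channel, TX_OFF_S)     # enforce the 100 ms off period
            return True
        # medium busy: either back off for a random time or hop to another channel (AFA)
        if random.random() < 0.5:
            time.sleep(random.uniform(0.005, 0.05))
        else:
            channel = radio.pick_other_channel()
    return False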
Frequency bands in practice
One of the most commonly used frequency bands is band 48(868 MHz) for non-specific SRDs. This is due to historical reasons: initially only a selection of the bands in Table 1 was available for non-specific SRDs, with more heavily restricted duty cycle and power limitations [23]. Band 48(868 MHz) was the best candidate for LPWAN end devices, because it was the only band for non-specific SRDs with a 1% duty cycle ratio and at the same time a larger than average bandwidth of 600 kHz. Therefore, it became the band used by end devices of various popular technologies such as SigFox [24] and LoRaWAN [25]. Similarly, band 54(869.4 MHz) was selected by those technologies as the band for downlink communication by base stations, as it had the highest power and duty cycle limits of all the then available frequency bands. Since a single base station needs to communicate with a large number of end devices, a large transmission power and a higher duty cycle are important.
In addition to the above restrictions, several bands impose additional usage restrictions [18]. For example, some bands do not allow for analog video or audio, often with the exception of voice. Other examples are maximum allowed bandwidth limits, determined channel spacing, specific center frequencies or subchannels, or only allowing specific applications. Other documents, such as ERC Recommendation 70-03 [19], provide their own set of specific rules for certain frequency bands. The relevance and jurisdiction of each document is further elaborated on in Appendix B. An overview of these additional restrictions is given in Table 3.
EU regulatory institutes and documents
Various institutes on global, regional, and national level are responsible for SRD regulation. The International Telecommunication Union (ITU), on a global level, has not imposed any mandatory regulations, thus handing down this responsibility to institutes on a regional level. The ITU however defines some recommendations. For example, Recommendation SM.1896 [26] recommends frequency ranges for the harmonization of SRDs. The ITU is also responsible for defining the license-free Industrial, Scientific and Medical (ISM) bands. Although the 863 to 870 MHz frequency range is often mistaken as an ISM band, it is actually not contained in any of the ISM bands defined for the European region, as shown in Table 8 in Appendix A.1.
At EU level, the SRD regulation is defined by the cooperation of 3 institutes: the European Commission (EC) for legislation, the European Conference of Postal and Telecommunications Administrations (CEPT) for studies on technical measures, and the European Telecommunications Standards Institute (ETSI) for standardization. The laws created by the EC are the only truly legally binding deliverables. Other documents often referred to in literature, such as ERC Recommendation 70-03 and the harmonized standard EN 300 220, are voluntary but can offer advantages to manufacturers in creating compliant radio equipment.
A thorough overview of all institutes involved in drafting LPWAN SRD regulation and their interactions with each other is provided in Appendix A. Appendix B.1 gives a full description of the EU legislatory acts relevant to LPWAN SRDs and their consequences. Deliverables ERC Recommendation 70-03 and EN 300 220 are further elaborated in Appendix B.2.
Recent evolutions
CEPT is mandated by the EC to yearly evaluate the frequency allocation table in EU law [18] and to propose modifications if necessary [27]. The latest CEPT Report [28] contains, among other things, multiple alterations and additions to the frequency allocation table regarding SRDs in the 800+ MHz range:
The extension of the frequency bands for SRDs with extra bands in the 862 to 863 MHz, 870 to 876 MHz, and 915 to 921 MHz ranges [29]. This introduces additional spectrum for SRDs and reduces the risk of interference and congestion in current bands. This has already been implemented by introducing additional frequency bands for SRDs in the 874 to 876 MHz and 915 to 921 MHz range.
The definition of the duty cycle is broadened to allow for an individual observation time (Footnote 3) per frequency band. The default is still 1 h, unless explicitly specified. This has already been implemented.
The renaming of SRD categories. The frequency bands assigned to the low duty cycle/high reliability category are re-assigned to the non-specific SRD category, but keep the usage restrictions that these bands may only be used for (social) alarms. This usage restriction will be adapted later to more specific parameters (e.g., duty cycle, channeling, or access parameters) after more research has been conducted by CEPT and ETSI to allow for broadening the future usage based on technical parameters instead of application type. The frequency band belonging to the high duty cycle/continuous transmission category will also be renamed to wireless audio and multimedia streaming systems.
The merge with RFID regulation, introducing a frequency band for RFID. This has already been implemented by the addition of 47a(865 MHz).
The most notable suggestion of the CEPT report is the addition of extra frequency bands. The rising usage of SRDs and the increasing need for radio spectrum caused CEPT to investigate countermeasures against radio spectrum congestion and harmful interference. As a result, the ranges 862 to 863 MHz, 870 to 876 MHz, and 915 to 921 MHz were selected for further research as these are mostly underused by CEPT member states [30, 31]. Furthermore, the 915 to 921 MHz range falls within the region 2 ISM band of the ITU, which has been adopted in various other parts of the world (e.g., Australia, New Zealand, Singapore, Vietnam, Malaysia, Japan, South Africa, and the USA). The usage of the 915 to 921 MHz frequency range thus improves global harmonization and compatibility between EU members and other states. As a response, the EU has introduced 5 additional frequency bands in the 874 to 876 MHz and 915 to 921 MHz range, including 2 for non-specific SRDs, 1 for wideband data applications, and 1 for RFID [32]. An overview of these bands can be seen in Tables 4 and 5, and Fig. 4.
A visual overview of the recently added frequency bands for SRDs in the 874 to 876 MHz and 915 to 921 MHz range. The maximum transmission power (in mW) and maximum duty cycle (in %) are mentioned for each band
Table 4 An overview of the recently added frequency bands for SRDs in the 874 to 876 MHz and 915 to 921 MHz range [32]
Table 5 An overview of the extra restrictions on the recently added frequency bands for SRDs in the 874 to 876 MHz and 915 to 921 MHz range [32]
Most of these recently added frequency bands were already in use by some member states for other applications, such as extension bands for Global System for Mobile communications for Railways (GSM-R) or for military use. Some member states are therefore allowed not to implement some of these frequency bands in order to keep their existing usage. This is possible as the Radio Spectrum Decision [33] dictates that EU regulation may not interfere with the usage of radio equipment by member states for governmental, security, or defense purposes. To reduce further fragmentation, EU member states are not allowed to introduce new uses in the 874.4 to 876 MHz and 919.4 to 921 MHz range.
Examples of LPWAN technologies and how they cope with duty cycle restrictions
Current technology on the market in the EU region must comply with the regulations described above. An overview of the use of frequency bands in the 863 to 870 MHz range by these technologies is given in Fig. 5. Additionally, an overview of the characteristics of uplink communication for these technologies is given in Table 6.
An overview of the use of frequency bands in the 863 to 870 MHz range by various sub-GHz technologies
Table 6 An overview of the characteristics of uplink communication by SigFox, LoRaWAN, and DASH7 [24, 25, 35, 37, 40]
SigFox is an Ultra Narrow Band (UNB) technology that uses a low rate of small-sized messages to achieve long range [7, 24, 34, 35]. While it is mainly focused on uplink communication, it also supports a limited form of downlink communication. End devices communicate with SigFox base stations and their messages are pushed to the SigFox cloud. From here, they are delivered to the client infrastructure. The SigFox protocol is closed, as the base stations and cloud are in the hands of SigFox operators and SigFox itself. Clients can use the SigFox network by buying a subscription. End devices of the client must be certified by SigFox to be able to use the SigFox network.
Up- and downlink communication each use a different modulation scheme and frequency range. Uplink communication uses a 192 kHz wide frequency interval from 868.034 to 868.226 MHz. All uplink communication thus falls into the range of the 48(868 MHz) frequency band with a limit of 25 mW transmission power and a 1% duty cycle. As SigFox does not use polite spectrum access techniques, it is bound to the maximum transmission power and duty cycle of the frequency band. End devices use a transmission power of up to 25 mW. In order to comply with the maximum allowed duty cycle, SigFox limits the maximum allowed number of up- and downlink messages per day through its subscription model. The most extensive subscription, the platinum model, allows for 140 up- and 4 downlink messages a day. When an uplink message is sent, it is transmitted 3 times on different frequencies one after another, as can be seen in Fig. 6. The frequency of each transmission is randomly selected within the 192 kHz wide interval, and each transmission uses only around 100 Hz of bandwidth. This scatters the messages in both the time and the frequency domain to avoid collisions with other transmissions. SigFox relies on the UNB character of its technology as a collision mitigation technique.
A single SigFox message. Every message is transmitted 3 times on randomly selected frequencies one after another
It takes about 6.24 s (Tm) to send a SigFox uplink message with a full payload PL of 12 bytes, totaling a message size MS of 26 bytes, at a data rate DR of 100 bps (3). By subscription limits, a device can send 140 uplink messages per day and thus has a daily on air time of 873.6 s. The duty cycle of 1% allows for a transmission time of 36 s per hour or 864 s per day. As can be seen from these results, SigFox actually allows 9.6 s of transmission time per day more than permitted by the regulatory limits. The subscription model does not explicitly mention a limit of uplink messages per hour. The 36 s allowed by the duty cycle allow for the transmission of 5.77 messages, which boils down to a limit of 5 messages per hour. SigFox thus overshoots the regulations at 6 messages per hour, exceeding the hourly limit by 1.44 s, which can violate the regulatory limits if all messages are sent with a full payload. Ultimately, SigFox can blacklist end devices that regularly ignore the limits.
$$ T_{m} = 3 \left(\frac{8 \text{MS}}{\text{DR}} \right) $$
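As a minimal illustration of this arithmetic, the following Python sketch evaluates (3) for the values quoted above and compares the resulting daily on air time against the 1% duty cycle budget; the function name and default values are ours, not constants defined by SigFox.

```python
# A rough sketch reproducing the arithmetic above; the function name and the
# default values (26-byte message, 100 bps) are ours, not SigFox-defined constants.

def sigfox_message_time(message_size_bytes=26, data_rate_bps=100):
    """Time on air (s) of one uplink message, transmitted 3 times, per Eq. (3)."""
    return 3 * (8 * message_size_bytes / data_rate_bps)

t_m = sigfox_message_time()              # 6.24 s for a full 12-byte payload
daily_on_air = 140 * t_m                 # 873.6 s under the platinum subscription
daily_budget = 0.01 * 24 * 3600          # 864 s per day at a 1% duty cycle

print(f"message: {t_m:.2f} s, daily: {daily_on_air:.1f} s, "
      f"budget: {daily_budget:.0f} s, excess: {daily_on_air - daily_budget:.1f} s")
```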
SigFox base stations use frequency band 54(869.4 MHz) for downlink communication. The base stations profit from the high regulatory limits of this band. The high maximum allowed transmission power allows messages to be sent over larger distances, while the high duty cycle offers room for communication with a high number of devices. Although the small bandwidth is the biggest drawback of this frequency band, it has little impact on the operations of the base stations due to the UNB character of the communications.
LoRaWAN is a star-of-stars topology LPWAN technology based on the LoRa physical layer developed by Semtech [34, 36]. The LoRa layer uses Chirp Spread Spectrum (CSS) as modulation. Similar to SigFox, it allows for both up- and downlink communication, but with a strong optimization towards uplink communication. The data rate of LoRaWAN transmissions can vary and can be adapted on the fly by the network to avoid packet loss or reduce power usage.
The LoRaWAN specification is split into a general [37] and a regional document [25]. The general specification explains the inner workings of the protocol. As LoRaWAN aims to be used internationally, it must take into account the differences in regulations between various regions in the world. Therefore, all parameter values related to regional regulations have been moved to the regional specification. This supplements the general specification and prevents a new version of the specification from having to be released for every change in regional regulation. Both the general and regional specification acknowledge the presence of spectrum regulation and remind the user that it is their responsibility to meet these regulations, aided by the information and parameter limits in the regional specification.
According to the regional specifications, all LoRaWAN end devices in the EU region are required to support at least 3 default channels in the 48(868 MHz) frequency band for uplink communication, with center frequencies of 868.1, 868.3, and 868.5 MHz. Similar to SigFox, LoRaWAN does not use any polite spectrum access techniques. Although each uplink message is sent through a pseudo-randomly selected channel, this is not sufficient to be considered an LBT & AFA polite spectrum access technique. End devices must therefore adhere to the 1% duty cycle limit. However, the regional specifications mention that all LoRaWAN devices in the EU region should be able to support channels in the whole 863 to 870 MHz range. This allows end devices to allocate channels in other frequency bands than 48(868 MHz) to reduce the constraints of the regulatory duty cycle limits. All EU LoRaWAN devices are required to be able to store 16 channels, with support for a total of 8 different frequencies (including the default frequencies). Furthermore, the regional specifications specify the default maximum output power as 25 mW for end devices, which corresponds to the maximum output power of frequency band 48(868 MHz). Although the regional specifications also allow for an output power of 100 mW, the user is reminded that it is their responsibility not to exceed the regional regulatory limits. This value should hence only be used in frequency bands such as 54(869.4 MHz). Other parameters specified by the regional parameters for EU end devices are the preamble format, data rate and output power configurations, maximum payload sizes, and default settings.
The LoRa layer uses chirps to transmit symbols, as can be seen in Fig. 7. A chirp is an interval in which the transmission frequency is continuously increased. When it reaches the upper limit of the frequency channel, it wraps around to the lower limit and continues to increase from there. The chirp is completed when the start frequency has been reached. Because of the spreading with spreading factor SF, every symbol will be a series of 2^SF chirps. Semtech sets the chirp speed equal to the used bandwidth, so a bandwidth BW of 100 kHz results in a chirp rate CR of 100 kcps. Using this information, the time on air for a single chirp can be calculated as 1/BW. The time Ts to transmit a single symbol is thus
$$ T_{s} = \frac{2^{\text{SF}}}{\text{CR}} = \frac{2^{\text{SF}}}{\text{BW}} $$
An extract of a LoRaWAN message using CSS modulation
The maximum time on air for a single LoRaWAN message can then be calculated using (5)–(8) [38, 39]. Equation (5) calculates the time on air of the preamble Tp, (6) the intermediate symbol count nt, (7) the number of symbols of the message nm, and (8) the time on air Tm of the complete transmission.
$$ T_{p} = (n_{p} + 4.25) T_{s} $$
$$ n_{t} = \left\lceil\frac{8\text{PL} - 4\text{SF} + 28 + 16 - 20H}{4\text{SF} - 8\text{DE}}\right\rceil $$
$$ n_{m} = 8 + \text{max}((n_{t}(\text{CR}+4)), 0) $$
$$ T_{m} = T_{p} + n_{m} T_{s} $$
The parameters are as follows; a short computational sketch using these equations follows the list:
np is the number of preamble symbols, which is 8 for EU end devices according to the regional specifications.
PL indicates the number of payload bytes. The maximum payload size is defined in the regional specifications and ranges from 59 to 250 bytes for EU end devices depending on the use of repeaters and certain header fields.
H is equal to 0 if the LoRa header is present, 1 otherwise. LoRaWAN always uses the header, so this should be 0.
DE is 1 if low data rate optimization is enabled, 0 otherwise. This is used when SF is equal to 11 or 12, to account for oscillator drifts.
CR is the coding rate and is often expressed as a ratio. Here, CR is a value in the range of 1 to 4, giving a coding rate of 4/(4 + CR).
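As a hedged illustration, the sketch below implements the equations above in Python; the function signature, the 125 kHz default bandwidth, and the example payload size are our own assumptions rather than values prescribed by the LoRaWAN specification.

```python
import math

# A minimal sketch of the time-on-air equations above; parameter names mirror the
# list, while the default bandwidth (125 kHz) and the example payload are assumptions.

def lora_time_on_air(pl, sf, bw_hz=125_000, cr=1, n_p=8, header=True):
    """Maximum time on air (in seconds) of a single LoRaWAN message."""
    t_s = (2 ** sf) / bw_hz                      # symbol time Ts
    t_p = (n_p + 4.25) * t_s                     # preamble time Tp
    h = 0 if header else 1                       # H: 0 when the LoRa header is present
    de = 1 if sf >= 11 else 0                    # DE: low data rate optimization
    n_t = math.ceil((8 * pl - 4 * sf + 28 + 16 - 20 * h) / (4 * sf - 8 * de))
    n_m = 8 + max(n_t * (cr + 4), 0)             # number of payload symbols
    return t_p + n_m * t_s

# Example: a 51-byte payload at SF12 takes roughly 2.47 s, so a 1% duty cycle
# (36 s per hour) allows about 14 such messages per hour in a single band.
print(f"{lora_time_on_air(pl=51, sf=12):.3f} s")
```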
The outcome of these equations can be used to get an estimate about the time on air per message and the amount of messages that can be sent within the duty cycle regulations. One must keep in mind that LoRaWAN transmissions cannot follow one after another directly. According to [37], an end device must wait to transmit until it has received a downlink transmission or the second receive window for downlink transmissions has passed. As shown in Fig. 8, each LoRaWAN uplink transmission is followed by two windows for downlink communication. The recommended values for RECEIVE_DELAY1 and RECEIVE_DELAY2, 1 s and 2 s respectively, are provided by the regional parameters [25]. A receive window must be at least long enough to be able to detect a downlink preamble. The waiting time is not considered as transmission time Ton for the duty cycle regulations.
A LoRaWAN uplink message, followed by its 2 receive windows for downlink communication [37]
As downlink communication for class A devices can only occur during the two downlink windows after an uplink transmission, this could have a significant impact on the network. As soon as an end device has saturated its duty cycle, it cannot transmit uplink messages and thus cannot receive downlink messages either. The communication is thus interrupted in both directions for the remainder of the duty cycle interval. As base stations are also bound to duty cycle regulations, it is unclear how many devices can be supported by a single station. Base stations answer in the same channel as the uplink message or in a default channel in frequency band 54(869.4 MHz), depending on whether the first or second downlink slot is used. The maximum number of downlink messages allowed by the duty cycle regulations thus depends on the end devices and their uplink frequency. When using class B end devices, which use extra slots for downlink communication, base stations also broadcast beacons in the default channel, further reducing the amount of possible downlink communication on that channel.
DASH7 is a full-stack solution based on RFID technology. According to the specification [40], DASH7 complies fully with regulations from all major regions. Contrary to SigFox and LoRaWAN, DASH7 uses LBT and AFA as polite spectrum access techniques to access the medium. This means that instead of having to adhere to the set duty cycle limit, DASH7 can benefit from the 100 s per 1 h per 200 kHz boundary described in Section 3.2.
DASH7 defines 3 types of channels: lo-rate, normal, and hi-rate. These types each have their own channel spacing, modulation, and data rate. The 863 to 870 MHz range is divided in 280 lo-rate channels with a width of 25 kHz and 207 normal/hi-rate channels with a width of 0.2 MHz. All normal and hi-rate channels fall within the frequency bands for non-specific SRDs, as shown in Table 7. However, a number of lo-rate channels also overlap with frequency bands reserved for low duty cycle/high reliability devices, as they fill the whole continuous 863 to 870 MHz range. Only DASH7 devices used as (social) alarms are allowed to function in those frequency bands. This applies to the ranges 868.6 to 868.7 MHz (channels 224 to 227), 869.2 to 869.4 MHz (channels 248 to 255), and 869.65 to 869.7 MHz (channels 266 and 267).
Table 7 An overview of the normal/hi-rate DASH7 channels and the corresponding regulatory frequency bands
The 200 kHz interval of the loosened duty cycle rule conveniently matches the bandwidth of the normal and hi-rate channels, thus allowing optimal use of the loosened duty cycle. However, most frequency bands designated for low duty cycle/high reliability devices do not permit the use of the 100 s duty cycle, thus disabling this optimization for some lo-rate channels.
Future research directions
In this section, we discuss some open problems and research challenges caused by EU regulatory restrictions for LPWAN SRDs. As the transmission power limitation is a simple fixed value, most of the challenges will revolve around the selection of regulatory frequency bands and coping with the duty cycle limit.
Although the technologies discussed up until now are well-known LPWAN technologies, the scope of this section will be expanded to include other non-LPWAN SRD technologies using the 863 to 870 MHz range. For example, the IPv6 over the Time Slotted Channel Hopping (TSCH) mode of IEEE 802.15.4e (6TiSCH) technology stack [41], shown in Fig. 9, and its various components, such as the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) and TSCH, are often used by SRDs in the 863 to 870 MHz range to create mesh networks. As these mesh networks use multi-hop communication and routing, they are more vulnerable to the negative effects of duty cycle regulations. A comparison between LPWAN and the 6TiSCH technology stack can be found in [43]. Furthermore, research is ongoing to mix both LPWAN and multi-hop communication protocols. For example, [44–47] present solutions to enable multi-hop communication on LoRa technology, where [48] also aims to enable RPL routing. Next, work is in progress by the IEEE 802.15 WPAN Task Group 4w (TG4w) [49] to amend IEEE 802.15.4 with LPWAN capabilities, naming the amendment IEEE 802.15.4w [50].
An overview of the 6TiSCH stack [42]
Impact on the network and higher layers
Saturation of duty cycled devices could have a significant effect on network performance. (i) When a receiving device is saturated, it will not be able to respond with any acknowledgements. This will be interpreted by the network as packet loss, leading to retransmissions and changes in the network routing topology. (ii) In addition, duty cycle regulations could introduce delays, as devices must be cautious about when to send and how much. Measures such as spreading out messages over the duty cycle interval introduce a delay in network throughput, which could again lead to retransmissions. For example, this could occur when sending a Constrained Application Protocol (CoAP) GET request [51]. The CoAP layer at the sending device will time out after a while due to the lack of a CoAP acknowledgement message and retransmit the CoAP GET request.
Both effects can snowball, as retransmissions push other devices towards their own saturation limits. If the sending device was already on the verge of saturation, it could be pushed into saturation by the retransmissions. This effect could also be amplified by the combination of multiple technologies demanding acknowledgements on different layers. For example, when a device uses both IEEE 802.15.4 and CoAP, a CoAP retransmission will force multiple retransmissions on the IEEE 802.15.4 layer. Whenever a device reaches saturation, it is considered lost by the network and the routing information in the network must be adjusted. In non-LPWAN setups, the signaling of routing protocols such as RPL could cause significant overhead, wasting the transmission time of several devices in the network [52–54] and forcing them into saturation. Similar effects could be seen when devices frequently drop in or out of the network due to duty cycle regulations or other causes, each time resulting in routing adjustment overhead on the network.
The use of multiple frequency bands as a mitigation technique
As the duty cycle in each frequency band poses a limit, a solution would be to combine the use of different regulatory frequency bands. Unfortunately, this is not a possible solution for SigFox, as it uses only a single regulatory frequency band for uplink communication. On the contrary, LoRaWAN devices must be able to support channels in the whole 863 to 870 MHz range and are thus able to combine channels from multiple regulatory frequency bands. LoRaWAN devices are also capable of defining new channels through over-the-air updates, which could be used to migrate their traffic to other regulatory frequency bands. Similarly, DASH7 also supports channels in the 863 to 870 MHz range and is therefore also able to use multiple regulatory frequency bands.
As an example for non-LPWAN technologies, TSCH [55] could be improved to incorporate the use of multiple regulatory frequency bands. TSCH uses a slotframe in which slots are assigned to pairs of devices. During each iteration of the slotframe, each slot is assigned a channel in a pseudo-random manner. However, the assignment of pairs of devices to slots and the definition of the bandwidth of channels and the timings of the slotframe are left to the user. This leaves a possibility to adjust these parameters to the regulatory frequency bands. For example, the channels could be mapped to the bandwidth ranges of various regulatory frequency bands. Rizzi [56] proposes a LoRaWAN adaptation applying a TSCH-like approach. This could bring these optimizations to LPWAN technology, although this is not yet explicitly explored in [56].
Additionally, the 6TiSCH Operation Sublayer (6top) layer in the 6TiSCH stack allows devices to dynamically adjust the slotframe scheme in TSCH by the use of a scheduling function [57]. The implementation of a scheduling function is left to the user. This scheduling function could be used to adapt the slotframe scheme to the duty cycle of each device. For example, devices that approach saturation could be assigned fewer and fewer time slots in the slotframe scheme as they approach their point of saturation. This would force such devices to reduce transmissions and would free up time slots in the slotframe scheme for other devices that are still far from their point of saturation.
Solutions have been proposed to extend TSCH with adaptive channel selection to omit channels with interference. Du and Roussos, Tavakoli et al., and Kotsiou et al. [58–60] blacklist undesirable channels based on interference. Such techniques could be adapted to blacklist channels for devices that are saturated in certain regulatory frequency bands. Another possible technique would be to incorporate additional input into the pseudo-random selection to reduce the assignment of channels belonging to regulatory frequency bands nearing saturation, as sketched below. Implementations for such adaptive channel selection already exist [61], but are not yet adapted to duty cycle regulations.
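A minimal sketch of such a budget-aware selection is shown below; the channel-to-band mapping, band labels, and remaining-budget figures are hypothetical placeholders, not values taken from TSCH, its scheduling functions, or the regulation.

```python
import random

# Sketch of duty-cycle-aware channel selection: channels whose regulatory band is
# close to saturation are excluded, and the rest are weighted by the band's
# remaining transmission budget. All mappings and numbers below are illustrative.

CHANNEL_TO_BAND = {11: "46a", 12: "46a", 13: "47", 14: "48", 15: "48"}
remaining_budget_s = {"46a": 30.0, "47": 2.0, "48": 18.0}   # seconds left this hour

def pick_channel(channels, min_budget_s=5.0):
    usable = [c for c in channels
              if remaining_budget_s[CHANNEL_TO_BAND[c]] >= min_budget_s]
    if not usable:               # every band is nearly saturated: fall back to all
        usable = list(channels)
    weights = [remaining_budget_s[CHANNEL_TO_BAND[c]] for c in usable]
    return random.choices(usable, weights=weights, k=1)[0]

print(pick_channel([11, 12, 13, 14, 15]))   # band 47 is rarely picked (only 2 s left)
```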
Possible actions for SRDs nearing saturation
As can be seen in Section 4, current technologies do not contain specific features to prevent any type of saturation or to reduce the impact of saturation on the network. SigFox simply limits the number of messages through its subscription model. As uplink communication happens in a single regulatory frequency band, SigFox devices can only be fully saturated. Once saturation is reached for a device, no more transmissions can occur from that device. Due to the proprietary character of SigFox, it is not possible to adapt its technology. LoRaWAN and DASH7, on the other hand, leave it up to the user and upper layers to adhere to regulation and to prevent and respect saturation. However, DASH7 selects a channel prior to transmission and checks whether the medium is accessible through Carrier-Sense Multiple Access with Collision Avoidance (CSMA-CA). If the channel is in use, it retries the transmission process on another randomly selected channel. By editing this selection process to keep track of the transmission times in the different 200 kHz frequency intervals for polite spectrum access techniques, DASH7 could be adapted to better handle duty cycle regulations and prevent saturation. LoRaWAN also uses a pseudo-random channel hopping technique, which could likewise be adapted to better suit the duty cycle regulations. By defining new channels through over-the-air updates, LoRaWAN could move its traffic away from saturated regulatory frequency bands or change its physical layer (PHY) parameters when nearing saturation. Even on higher levels, mitigating actions can be developed. For example, when a device detects it is nearing saturation, it could alert its neighbors or a central authority. This would allow the network to change its routes or used frequency ranges to prevent cutting off a part of that network. There is still much room for further research into, and implementation of, mitigating features regarding duty cycle limitations for such technologies. For example, models such as [62] are being proposed to adapt existing technologies to duty cycle regulations. Sandoval et al. [62] use Markov Decision Processes (MDP) to derive an optimal transmission policy for LoRaWAN and SigFox that maximizes the number of reported events according to their priority, while conforming to the duty cycle regulations. This model is usable on constrained nodes, but currently only keeps a single regulatory frequency band in mind.
Efficient monitoring of duty cycled devices
Each SRD should at least be able to keep track of its own cumulative transmission time \(\sum T_{\text {on}}\) (1) per regulatory frequency band to prevent it from exceeding the regulatory duty cycle limits. This bookkeeping could also be used to detect approaching saturation and to spend the last available transmission time on mitigating actions.
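As a minimal sketch of such bookkeeping, assuming the default 1 h observation time and purely illustrative per-band limits, a device could keep a sliding window of its transmissions per band:

```python
import time
from collections import defaultdict, deque

# Sketch of per-band duty cycle bookkeeping with the default 1 h observation time.
# The band labels and per-hour limits below are illustrative placeholders only.

LIMITS_S = {"48 (868 MHz)": 36.0, "54 (869.4 MHz)": 360.0}   # seconds per hour
OBSERVATION_S = 3600.0

class DutyCycleTracker:
    def __init__(self):
        self._log = defaultdict(deque)          # band -> (timestamp, duration) entries

    def record(self, band, duration_s):
        self._log[band].append((time.monotonic(), duration_s))

    def used(self, band):
        """Cumulative transmission time within the current observation window."""
        now = time.monotonic()
        log = self._log[band]
        while log and now - log[0][0] > OBSERVATION_S:
            log.popleft()                       # drop entries that fell out of the window
        return sum(d for _, d in log)

    def may_transmit(self, band, duration_s):
        return self.used(band) + duration_s <= LIMITS_S[band]

tracker = DutyCycleTracker()
tracker.record("48 (868 MHz)", 2.47)
print(tracker.may_transmit("48 (868 MHz)", 2.47))   # True: well below the 36 s budget
```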
To enable mitigating actions on a network level, it is essential to introduce monitoring on a higher level than the device itself. Such monitoring could be implemented in a centralized or decentralized manner. However, such monitoring solutions rely on additional wireless transmissions and could thus potentially aggravate the duty cycle problem. For example, the authors of [63] introduce a piggyback mechanism that integrates with IEEE 802.15.4 to retrieve network information. Since the act of collecting such information by itself also results in additional traffic, additional research is needed to analyze the overhead versus the benefits of centralized duty cycle monitoring in sub-GHz networks.
Heterogeneous spectrum monitoring
To collect spectrum data and to detect devices violating the regulatory limits, a central repository would be useful, similar to TV white space databases. However, rather than deploying additional devices, this repository should ideally be populated using the different heterogeneous devices that have already been deployed.
At the moment, however, heterogeneous spectrum sensing still has its own challenges according to [64]. (i) As the spectrum sensing is done by different types of devices, the data of each of the devices could be stored in different types of formats (text, binary, …), which makes it harder to aggregate and interpret the data. For example, there is often a lack of meta information containing device details such as the used technology, a description of the involved devices, and the used signals. This could be mitigated by the introduction of a uniform storage mechanism. Next, (ii) the measuring resolution could differ between the various sensing devices in both the time and frequency domain. Data collected by different devices can therefore not be meaningfully compared, which makes it difficult to process and interpret. Therefore, effort should be put into defining a common resolution. (iii) In a similar way, there is often a mismatch in calibration between the different devices, creating the need for a common calibration reference to calibrate all the devices. Finally, (iv) as the different devices generate a large amount of data, there is a need for an efficient processing method to handle this data. Liu et al. [64] propose methodologies to cope with some of these challenges.
Impact of PHY parameters
As the duty cycle limits the transmission time Ton of a device (1), one should also take into consideration the parameters of the physical layer. For example, IEEE 802.15.4-2015 [55] enabled devices can choose various PHY parameters. Smart Utility Network (SUN) Frequency Shift Keying (FSK) devices can choose from three operating modes: 50, 100, or 200 kbps. A larger data rate means shorter transmission times per message, but reduces the transmission range. One must also not forget the additional PHY fields attached to the transmission, such as the preamble, Start-of-Frame Delimiter (SFD), PHY header, and Forward Error Correction (FEC), as these also contribute to the transmission time. The same goes for technologies such as LoRaWAN and DASH7. LoRaWAN can adapt its data rate on the fly, thus changing the time on air of its messages. DASH7 also has access to 3 different channel classes, each with a different data rate. These constructs can be leveraged to create PHYs that adapt to their duty cycle status. Further research is still needed to automatically characterize the throughput, range, and duty cycle trade-offs and to automatically adjust the PHY parameters while keeping these constraints in mind.
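The following sketch quantifies this trade-off for the three SUN FSK data rates; the overhead byte count is an assumed placeholder rather than the exact field sizes from IEEE 802.15.4-2015.

```python
# Rough airtime comparison for the three SUN FSK data rates; the fixed overhead
# (preamble, SFD, PHY header, FCS) is an assumed byte count, not the exact
# field sizes from IEEE 802.15.4-2015.

OVERHEAD_BYTES = 8 + 2 + 2 + 4     # assumed preamble + SFD + PHY header + FCS

def frame_airtime_ms(payload_bytes, data_rate_kbps):
    """Airtime of one frame in milliseconds at the given data rate."""
    return 8 * (payload_bytes + OVERHEAD_BYTES) / data_rate_kbps

for rate_kbps in (50, 100, 200):
    airtime = frame_airtime_ms(100, rate_kbps)
    frames_per_hour = int(36_000 // airtime)     # 1% duty cycle = 36 s = 36,000 ms
    print(f"{rate_kbps} kbps: {airtime:.2f} ms/frame, {frames_per_hour} frames/h at 1%")
```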
Another solution would be the construction of SRDs with multiple PHYs, so that the device could switch to a different PHY depending on traffic needs and duty cycle limits. According to [16], this comes with its own challenges, such as adaptations to routing protocols, the need for hand-over mechanisms, challenges for optimization, and a need for a virtualized LPWAN interface. Famaey et al. [65] propose an architecture that uses such a multimodal device.
As already mentioned, the IEEE is currently developing an LPWAN amendment to the IEEE 802.15.4 standard, called IEEE 802.15.4w, which is based on the low-energy, critical infrastructure monitoring (LECIM) FSK PHY modulation scheme [50] already present in IEEE 802.15.4. IEEE 802.15.4w will introduce features such as frequency-hopping spread spectrum (FHSS) and low-density parity check (LDPC). It is expected that devices using IEEE 802.15.4 will be able to adapt to IEEE 802.15.4w by software modifications only. Currently, there is little academic literature on the IEEE 802.15.4w protocol and its performance, thus leaving opportunities for further research.
This paper gave an overview of the available frequency bands for various categories of SRDs. These frequency bands are specified by maximum duty cycle and maximum transmission power parameters. To remedy the duty cycle restriction, it is possible to use a combination of different frequency bands, or to use polite spectrum access techniques. The latter relaxes the duty cycle limit to 2.7% per 200 kHz interval, but is also bound to timing parameters. Finally, we looked at the recent alterations to the regulations, which mainly involve the addition of extra frequency bands in the 874 to 876 MHz and 915 to 921 MHz ranges.
Next, we have discussed 3 commonly used technologies and how they cope with the regulations. SigFox depends on a subscription model to keep the transmission time within the duty cycle restrictions. The LoRaWAN specification mentions that it is the user's responsibility to respect the regulations. Its specification is split into a general and a regional part. The regional specification describes various parameters and settings for each region to help the user comply with SRD regulation. DASH7 also relies on the user to respect the regulatory restrictions. Contrary to SigFox and LoRaWAN, DASH7 relies on polite spectrum access techniques and is therefore able to use the loosened duty cycle restriction.
Additionally, we have identified some open research challenges regarding the regulatory limits, such as the impact on higher layers, mitigating the duty cycle restriction by using multiple frequency bands together, possible actions for when an SRD nears saturation, how SRDs violating the regulations can be detected, adaptations necessary to heterogeneous spectrum monitoring, and the impact of PHY parameters on duty cycle restricted SRDs.
Finally, more information regarding the relevant institutes and legislation can be found in Appendices A and B, allowing researchers to find relevant rulings in the sometimes confusing range of documents that is available.
We hope that this paper will be useful for researchers and manufacturers to find their way through the regulatory landscape, to achieve the best possible performance while remaining compliant with EU SRD regulations, and to inspire them towards relevant research driven by the imposed constraints.
Appendix A: Institutes involved in drafting EU regulations for SRDs
The law for spectrum allocation in EU member states is the result of different institutes on various levels. (i) At the top level, global standardization institutes issue global standards and regulations in the form of international treaties. (ii) At European level, those treaties are implemented in EU law and then converted into national law by EU member states. (iii) Each member state has a National Regulatory Authority (NRA) for monitoring spectrum usage and enforcing regulation.
A.1 Global
The most prominent global institute for spectrum management is the ITU. The ITU has been the United Nations specialized agency for information and communication technologies since 1949 [66]. It was founded in 1865 and currently has around 1000 members. Among those members are 193 countries, meaning that almost every country in the world is a member of the ITU [67]. The ITU is composed of 3 sectors: the Radiocommunication Sector (ITU-R), the Telecommunication Standardization Sector (ITU-T), and the Telecommunication Development Sector (ITU-D). Of these sectors, the ITU-R is responsible for the allocation of the radio spectrum.
The allocation of radio spectrum by the ITU-R is defined in the Radio Regulations (ITU-RR) [68]. This collection of documents is reviewed, appended, or revised every 3 to 4 years during a World Radiocommunication Conference (WRC). The ITU-RR divides the world into 3 large regions as shown in Fig. 10, where the EU belongs to region 1. Article 5 of the ITU-RR contains the frequency allocations.
The 3 regions defined by the ITU-RR [68]
The ITU-RR defines the license-free ISM frequency bands, which are often used by SRDs because of their license-free nature. The 863 to 870 MHz frequency range is often mistaken as an ISM band, but is actually not contained in any of the ISM bands for region 1. A full overview of the ISM bands can be found in Table 8.
As a matter of fact, SRDs as a whole are not considered a radio service according to article 1 of the ITU-RR [69, 70]; thus, the regulations and frequency allocations of the ITU-RR do not apply to SRDs. The ITU-R has no mandatory regulations regarding SRDs. Resolution 54 [71] states that although global or regional harmonization could offer multiple benefits, radio spectrum regulation regarding SRDs is currently still considered a matter for national administrations. The resolution also declares that studies regarding the radio spectrum usage of SRDs should be continued, including the participation of various standardization, industrial, and scientific organizations, so that global or regional harmonization could be achieved in the future.
The studies conducted as a result of Resolution 54 have led to various ITU-R Reports and Recommendations. Recommendations are a voluntary set of international technical standards prepared by consensus, while a Report is a statement by an ITU Study Group (SG) regarding a specific matter or the results of studies [67]. Neither of these types of documents forces any regulation upon ITU member states. These ITU-R deliverables regarding SRDs often provide useful overviews and insights. An overview of the most influential ITU-R deliverables is given here:
Report SM.2153 [72] by the Spectrum Management SG describes the technical and operational parameters of SRDs and how they access and use the radio spectrum. It includes, among other things, a definition of SRDs, an overview of possible applications, commonly used frequency ranges and maximum transmission powers, and an overview of various regional and national regulations from around the world.
Recommendation SM.1896 [26] recommends frequency ranges for the harmonization of SRDs. The Recommendation designates the frequency range from 862 to 875 MHz as available for region 1 and some countries of region 3.
Recommendation SM.2103 [73] provides a recommended categorization of SRDs for global harmonization.
A.2 European region
Since the ITU does not provide regulations for SRDs on a global level, regulation must take place on a regional level. The regulations for SRDs in the EU region are the result of a triangle of cooperation between 3 institutes, namely CEPT, the EC, and ETSI. These institutes each have their role in drafting law, harmonizing spectrum allocation, and developing standards. This section describes each institute, their responsibility, and how they collaborate with the other institutes.
A.2.1 European Commission
The EC is the heart of the EU. Among other things, it defines EU policy and takes the initiative for drafting EU legislation proposals. The EU policy for radio spectrum allocation is unified in a single document called the Radio Spectrum Policy Programme [74]. It was drafted for the first time in 2012 and can be considered the road map for a wireless Europe for the coming years, with an eye on the future. It aims to lay out a policy programme for the planning and harmonization of radio spectrum to create a single digital market. The efficient use of radio spectrum should be maximized by introducing greater flexibility and analyzing the need to free, reallocate, or create frequency bands. Harmful interference and fragmentation of the market should be avoided by the introduction of harmonizing technical measures and standards. To foster the internal EU market, attention should be paid to competition and innovation. Spectrum should therefore be available for the introduction of new technologies, while member states will collaborate with research and academic institutes to further the development of existing and new technologies. Effort should also be made to reduce the environmental footprint. In order to keep track of market needs, trends, and possible improvements in spectrum allocation, an inventory will be kept identifying the uses of spectrum. As competition is important for the well-being of the market, it is vital that member states actively keep competition fair and effective. Therefore, member states are allowed to amend, limit, add conditions to, reserve, and refuse rights to frequency bands or the transfer thereof, at the same time promoting the coexistence of technologies and services. Member states are allowed to impose sanctions to ensure fair competition and optimal spectrum use. The programme also specifies that member states should follow EU standpoints in international agreements or negotiations and that the EU should always aim to promote compatible policies in neighboring or third countries where possible. As is most often the case for radio spectrum regulation, the Programme does not apply to matters regarding public order, safety, and defense.
When there is a need for legislation, the EC will draft a proposal and submit it to the European Parliament and Council. If accepted, the legislation will be published in the Official Journal of the European Union (OJEU) [75, 76]. The EU distinguishes between various kinds of legal acts [76]:
Regulations are effective immediately for all EU members and citizens.
Directives must be translated into national law by a certain deadline.
A Decision applies only to whom it is addressed and, similar to regulations, is directly applicable. Possible addressees include member states, companies, organizations, and individuals.
Non-binding acts, such as recommendations and opinions, serve only to inform or express the EU views.
EU legislation follows the subsidiarity principle, which specifies that legislation must be applied as locally as possible [75]. If there is no need for EU wide legislation, then it must be deferred to the EU members themselves on a national level. Most EU legislation for spectrum management consists of decisions addressed to all member states.
Because of the importance of correct and relevant legislation, it is necessary for the EC to inform itself thoroughly. EU legislative acts often lay down the general principles and give the EC power to implement more specific additional acts. These acts are divided into 2 different kinds, namely implementing and delegated acts [77]:
When the EC wants to introduce an implementing act, it is obligated by EU law to consult a Committee comprised of representatives from each member state. This allows the member states to provide input into the drafting of implementing acts. Committees can prevent the adoption of an implementing act [78]. For radio spectrum matters, this Committee is the Radio Spectrum Committee (RSC) which is mainly focused on the development of technical measures for harmonizing legislation [33].
Delegated acts may not alter essential elements of the law and serve to clarify definitions, objectives, scopes, etc. A consultation of a Committee is therefore not required. The rules for delegated acts are set out in Article 290 of the Treaty on the Functioning of the European Union (TFEU) [77]. Nevertheless, it is of high importance for the EC to seek input from experts for delegated acts through an advisory group (a Commission Expert Group (CEG)) [79]. The input of a CEG is only consultative and not binding for the EC. The Radio Spectrum Policy Group (RSPG) is the CEG for radio spectrum policy and offers opinions to the EC for non-technical matters involving radio spectrum allocation [80]. This mainly revolves around the EU policy for radio spectrum allocation from an economic, political, cultural, strategic, health, and social viewpoint. The members are the representatives of the member states' ministries and NRAs, although representatives of certain countries and organizations such as CEPT and ETSI are also welcome as observers.
The RSPG generally handles radio spectrum matters at a higher level than the RSC, which generally handles only technical implementing measures. Contrary to the RSC, the RSPG does not have any legal power to prevent the implementation of an act, as it is only an advisory group.
A.2.2 CEPT/ECC
CEPT was founded in 1959 to improve relations and cooperation between national postal and telecommunication administrations. It is composed of 3 committees: the European Committee for Postal Regulation (CERP), the Committee for ITU Policy (Com-ITU), and the Electronic Communications Committee (ECC). CEPT currently has 48 members (the NRA of each country), covering the European continent and including all EU members [81]. This puts it in an ideal position to harmonize not only within the EU, but also with its border states.
The CEPT/ECC's main priority is to harmonize the use of limited radio- and telecommunication resources, such as radio spectrum and satellite orbits [82]. The EC can issue mandates to the CEPT to conduct studies or give opinions on the technical side of harmonizing measures in legislation proposals [83], and also represent the interests of the EU on an international level at the ITU. The CEPT/ECC can issue 6 types of documents [84]:
ECC Decisions are issued to harmonize the limited resources. These decisions are non-binding, but are based on consensus and thus often implemented by the CEPT member states. These CEPT/ECC decisions are synchronized with EC legislation, as the latter is binding for all EU member states.
ECC Recommendations are a form of advice for member state NRAs, showing the viewpoints and opinions of the CEPT/ECC regarding harmonization.
ECC Reports contain the results of studies.
CEPT Reports are sent as an answer to mandates issued by the EC. They contain the results of requested studies and are often used by the EC as a base for legislation proposals.
European Common Proposals (ECP) are submitted to the ITU to represent the EU viewpoints.
ECC multi-annual strategic plans.
An overview of these documents and how they interact with the EC and ETSI is shown in Fig. 11. All CEPT/ECC deliverables can be found publicly online [85].
An overview of EC, CEPT, and ETSI interaction
A.2.3 ETSI
In 1988, CEPT created ETSI to separate the task of regulation from standardization [69]. ETSI is appointed by EU law as one of the 3 European Standardization Organizations (ESOs) [86]. Contrary to CEPT, which only consists of member state NRAs on the European continent, ETSI also includes as members manufacturers, researchers, economic operators, and various other kinds of organizations and institutes from all over the world. Currently, ETSI has over 800 members from 68 countries. Its standards are created by consensus and are used throughout the world [87, 88]. As ETSI is merely a standardization organization, its standards are entirely voluntary. ETSI is also responsible for developing harmonized standards. A harmonized standard is a standard developed by one of the 3 ESOs in response to a request from the EC for harmonized legislation [86]. These standards can be recognized by the "EN" at the beginning of their name [89].
A.2.4 Interactions between the institutes
The EC, CEPT, and ETSI institutes form a triangle of cooperation and interaction as can be seen in Fig. 11. The EC takes initiative for legislation, CEPT conducts studies involving the shared use of spectrum, and ETSI drafts the European (harmonized) standards.
EC and CEPT
The EC and CEPT have signed a Memorandum of Understanding (MoU) which allows the EC to issue mandates to CEPT [83]. When the EC wants to take the initiative for developing harmonizing EU legislation, it can ask CEPT for advice or to undertake studies regarding technical implementing measures. The results of the work done by CEPT are then sent back in the form of CEPT Reports. The EC works closely with the RSC during this process in order to obtain input from the member states on the mandates to CEPT and drafts of the legislative acts, and with the RSPG for advice on non-technical and policy measures. The MoU therefore asks to allow representatives of CEPT to take part in meetings of the RSC and RSPG. Vice versa, EC representatives are allowed at relevant meetings of the CEPT/ECC. This cooperation is reflected in EU law [33, 80] and the CEPT/ECC rules of procedure [84]. An overview of all mandates from the EC to CEPT can be found in [90, 91]. The MoU also encourages the exchange of information and experience between the two institutes.
EC and ETSI
The collaboration between the EC and ETSI is defined in EU law rather than in an MoU [86]. ETSI can be requested by the EC to draft EU harmonized standards. These harmonized standards are then officially published in the OJEU [92].
CEPT and ETSI
Aside from the mandates and requests from the EC to CEPT and ETSI, there is also an MoU between CEPT and ETSI [93]. In this MoU, each institute's responsibilities and the need for close collaboration have been recorded. It states that the two institutes are complementary: CEPT is responsible for regulating and harmonizing the use of radio spectrum, while ETSI is responsible for standardization. Deliverables from both organizations should not contradict each other and must be mutually acceptable. The institutes will co-operate closely, exchange information, and invite representatives from the other to relevant meetings. The MoU also contains an annex describing the procedures for cooperation. These procedures imply that whenever one of the institutes is developing a deliverable, it should inform the other institute when a new deliverable or a modification to an existing deliverable of that other institute is needed. When ETSI is developing a standard with spectrum sharing issues or in need of spectrum (re)allocation, it can issue a System Reference Document (SRDoc) to CEPT describing the issue. CEPT/ECC will then investigate the matter, conducting the necessary studies or (re)allocating radio spectrum if necessary. The results of studies are sent back to ETSI in the form of ECC Reports, and the (re)allocation of radio spectrum in the form of ECC Decisions or Recommendations [92]. During the whole process there is a strong interaction with feedback from both institutes. Both institutes keep a relationship matrix, which shows how they co-operate [94]. A full overview of the cooperation procedure for standardization and regulation by CEPT and ETSI can be found in [92].
A.3 National
In order to regulate radio spectrum, each state has an NRA on a national level. A list of all EU NRAs can be found in Table 9. Often (depending on the state) the NRA is involved at the global, regional, and national levels [96, 97]. This allows states to have input in global and regional standardization and regulation, which can offer economic benefits. For example, the Belgian NRA, the Belgian Institute for Postal Services and Telecommunications (BIPT), is involved in all 3 layers [96]. On a global level, BIPT is a member of the ITU. Regionally, BIPT is a member of CEPT/ECC, ETSI, RSC, and RSPG. Nationally, BIPT is by law designated as the institute for managing the radio spectrum.
Table 8 An overview of the ISM frequency bands defined by the ITU-RR [68]
Table 9 An overview of the NRAs in the EU [95]
Appendix B: Legislatory acts, recommendations, and standards for SRDs in the EU
The complexity of sub-GHz SRD law arises from the fact that the EU regulation is spread out over a number of EU laws. This section offers an overview of these laws and their implications.
B.1 European legal acts
All EU regulation for the use and allocation of radio spectrum is laid down in EU law. An overview of the most important laws concerning radio spectrum can be found in Fig. 12.
An overview of the most notable EU laws regarding SRD regulation and how they are related
B.1.1 Decision 676/2002/EC (Radio Spectrum Decision)
Decision 676/2002/EC on a regulatory framework for radio spectrum policy in the European Community [33], also known as the Radio Spectrum Decision, is a cornerstone for SRD regulation in EU law. The Decision has some influential consequences: (i) radio spectrum regulation should occur at EU level instead of nationally, (ii) mandates can be issued to CEPT as described in Appendix A.2.4, (iii) it contains the legal basis for the creation of the RSC, (iv) all member states must publish their national radio frequency table to the public, and (v) it describes the policy for member states involved in international organizations regarding radio spectrum such as the ITU.
B.1.2 Decision 2000/299/EC
According to Decision 2000/299/EC [98], radio equipment can be allocated to 2 different classes, which are simply named class 1 and class 2. Class 1 contains all radio equipment that can be used throughout the whole EU without any restrictions. Any radio equipment on which an EU member state has imposed a restriction belongs to class 2. The Alert Sign has been assigned as the Equipment Class Identifier for class 2, as shown in Fig. 13. Class 2 devices must contain a table on the packaging, including the sign shown in Fig. 14, indicating which member states have put restrictions on the device [99]. Affixing the Equipment Class Identifier is no longer required by EU law.
The Equipment Class Identifier for class 2 radio equipment [98]
The sign for radio equipment under restrictions in one or more member states [99]
An indicative and non-exhaustive list of the radio equipment in both Equipment Classes is publicly available online at [100]. The class 1 list contains an entry for each non-specific, alarm, social alarm, wireless streaming, RFID, and wideband data frequency band for SRDs mentioned in Table 1. Currently, there are no entries for SRDs in the 863 to 870 MHz range in the class 2 list, meaning that all SRDs using the frequency bands of Table 1 can be used throughout the whole EU without any restrictions.
B.1.3 Decision 2006/771/EC
As the number of SRDs on the internal market expanded, it became apparent that harmonization was needed to ensure compatibility across borders, prevent harmful interference, and reduce production costs. Therefore, Decision 2006/771/EC [18] has been drafted for the legal harmonization of frequency bands for SRDs throughout the EU. The annex of this Decision contains a frequency allocation table for the range of 9 kHz to 246 GHz. This table is the only frequency allocation table with legal value, contrary to the tables in other documents described in Appendix B.2, which have none. SRDs compliant with the specified frequency ranges and their parameters are classified as class 1 devices and may thus be used throughout the whole EU. All frequency ranges from the table are available in all EU member states, as enforced by the Decision. Member states are allowed to loosen the restrictions or make available other frequency ranges. However, SRDs using those loosened restrictions or frequency ranges cannot operate in the whole EU and are consequently classified as class 2 devices.
The table specifies the following parameters for each frequency band:
The category of SRD to which the band is assigned
The maximum transmission power, maximum field strength, or maximum power density
Additional parameters such as duty cycle, channeling, access, or occupation rules
Extra usage restrictions
The frequency ranges from the frequency allocation table within the range of 863 to 870 MHz are displayed in Fig. 1 and Tables 1 and 3.
This Decision also acknowledges the low-power and short-range nature of SRDs and defines their position in relation to other radiocommunication services. Due to their nature, SRDs are allowed to share frequency bands with other radiocommunication services. It is the responsibility of SRDs to protect themselves from interference of such services and to avoid causing harmful interference to those services. The radiocommunication services have priority and should thus not be obligated to protect themselves from SRD interference.
Decision 2006/771/EC is a direct result of the Radio Spectrum Decision, as it came into existence through a mandate to CEPT [101]. Later on, a permanent mandate was issued to CEPT to update the Decision on a yearly basis [27].
B.1.4 Decision 2018/1538
Recently, additional frequency bands have been allocated to SRDs through Decision 2018/1538 [32], as described in Section 3.7. Due to the usage of bandwidth in the 874 to 876 MHz and 915 to 921 MHz range by EU member states for public order, security, and defense purposes, a different and more flexible approach was needed rather than adding the new frequency bands to Decision 2006/771/EC. After all, the Radio Spectrum Decision dictates that EU regulations may not interfere with member state regulations regarding public order, security, and defense purposes. The aim of this Decision is to prevent further fragmentation of the frequency bands in this range, while providing greater flexibility to member states regarding frequency bands for public order, security, defense, and railway purposes.
B.1.5 Decision 2007/344/EC
The purpose of Decision 2007/344/EC [102] is to introduce a single access point with a common format and level of detail for all available information about radio spectrum allocation in the EU. This single access point, named the ERO Frequency Information System (EFIS), is publicly available on the internet [103] and contains all available radio spectrum information for each EU and CEPT member state. It is a handy tool for manufacturers, researchers, and other interested users for looking up or comparing radio spectrum allocations of different EU and CEPT member states. EFIS is hosted by the European Communications Office (ECO), the office supporting CEPT. It is not the purpose of EFIS to replace the NRAs' national databases, but to complement them. The NRAs keep maintaining their own radio spectrum information databases, but must send updates to EFIS twice per year.
The Decision is a direct consequence of the Radio Spectrum Decision, as the latter states that all member states should publicly publish their frequency allocation table and all other available information on the use of radio spectrum, and keep this up to date. EFIS was then selected after the EC mandated CEPT to investigate whether it was indeed suitable to fulfill that task [104].
B.1.6 Decision 2002/622/EC
Decision 2002/622/EC [80] announces the establishment of the RSPG as an advisory group for assisting the EC in matters about radio spectrum policy on EU and international level. The Decision defines the members of the RSPG as one expert of each member state. It is also allowed to invite observers, such as CEPT and ETSI, and encouraged to consult with other interested parties such as market operators and consumers.
B.1.7 Directive 2014/53/EU (RED)
Directive 2014/53/EU [105], also known as the Radio Equipment Directive (RED), specifies the essential requirements radio equipment must meet in order to be allowed on the EU market. All radio equipment in conformity with the Directive may be made available and move freely throughout the whole EU market. Radio equipment considered compliant with the relevant harmonized standards is also considered compliant with the essential requirements of this Directive. This presumption of conformity offers a great advantage for manufacturers: when the radio equipment complies with the relevant harmonized standards, such as EN 300 220, it can be put on the EU wide market. The relevant harmonized standards can be found in the OJEU [106].
These essential requirements for radio equipment are defined in 3 parts. First, the radio equipment must be safe to use and may not present any risk to persons or animals. Next, radio equipment must use the radio spectrum as efficiently as possible and prevent harmful interference. Finally, the radio equipment must comply with the following requirements depending on the class or category to which it belongs: it must be compatible with other radio equipment and accessories (e.g., chargers), it may not harm its network or abuse the network resources, the privacy of users must be respected, measures must be present to prevent fraud, it must provide access to emergency services, it must be accessible to persons with disabilities, and only software compliant with the radio equipment may be loaded onto that equipment.
The Directive also defines the obligations of the manufacturers, importers, and distributors involved (see Footnote 6). Manufacturers are responsible for the radio equipment they produce, the conformity assessment thereof, and the drafting of the technical documentation of the equipment. When radio equipment is deemed compliant with the requirements through assessment, the manufacturer drafts an EU declaration of conformity and affixes the Conformité Européenne, or European Conformity (CE), marking shown in Fig. 15. Aside from assessment, manufacturers are also subject to other obligations, among others: (i) radio equipment must be usable in at least 1 member state without breaking regulations. (ii) Member states imposing restrictions on the equipment must be listed on the packaging. (iii) The equipment must be affixed with an identification of the equipment (e.g., a serial number) and the manufacturer's contact details and (iv) be accompanied by various documents, including (a copy of) the declaration of conformity, technical documentation, instructions, a description of the components or accessories, safety information, and information on the used frequency bands and maximum transmission power. (v) Manufacturers are also obliged to keep the technical documentation and declaration of conformity for a period of 10 years. (vi) Manufacturers must cooperate with national authorities and provide all relevant documents when requested to prove the conformity of the equipment. (vii) It is the manufacturer's responsibility to keep radio equipment of the assessed type, including equipment yet to be produced, compliant in the event of a change in the technical specifications of the equipment or in the harmonized standards. If radio equipment no longer complies with this Directive, it must be investigated and monitored by the manufacturer. Additionally, distributors must be informed and a register of complaints can be kept. When necessary, manufacturers must take corrective measures or remove the equipment from the EU market. If the non-compliance presents a risk, national authorities must also be notified.
Importers and distributors are bound by similar obligations as manufacturers. Importers are only allowed to import positively assessed radio equipment. It is the importers' and distributors' responsibility to ensure that the equipment is compliant, that the manufacturer (and importer) has fulfilled its obligations, that all required markings and documents from the manufacturer are present, and that the equipment stays compliant during storage or transport. Imported radio equipment must also be affixed with the importer's contact details. Importers must also keep the declaration of conformity and technical documentation for 10 years. Just like manufacturers, importers and distributors are obliged to keep track of non-compliant radio equipment, to take corrective measures if necessary, to inform the distributors involved, and to cooperate with national authorities. Manufacturers, importers, and distributors must keep track for 10 years of who has supplied them and whom they have supplied with radio equipment.
Fig. 15 The CE marking. This marking declares that radio equipment complies with EU regulation and may therefore be freely distributed and used throughout the whole EU market
The conformity assessment procedures can be performed by either the manufacturer itself or a conformity assessment body. When harmonized standards are not or only partially used, compliance with certain requirements can only be assessed through a conformity assessment body. The annexes of the Directive describe 4 types of conformity assessment procedures. In the case of a negative assessment, the assessment body can require corrective actions to be taken. If these are not sufficient, the assessment body can refuse or withdraw approval. An appeal procedure is available in case the assessment is contested. The technical documentation must contain enough information and detail to check whether the equipment complies with this Directive and its requirements. If the technical documentation is not adequate, manufacturers or importers can be asked to have the equipment tested for compliance with the requirements by an external party at their own expense. Market surveillance authorities can also notify conformity assessment bodies in case of non-compliance or risk, and have the authority to restrict the equipment from the EU market. The Directive also specifies regulations for conformity assessment bodies and for each member state's national accreditation body, which assesses and monitors those conformity assessment bodies. These regulations include that all bodies must be objective, free from conflicts of interest, have sufficient means and capable personnel, be up to date, exchange information, and respect confidentiality. All conformity assessment bodies are listed publicly by the EC [107].
B.1.8 Regulation 1025/2012
Regulation 1025/2012 [86] lays down the rules regarding standardization organizations and procedures in the EU and appoints ETSI as one of the 3 European standardization organizations. The Regulation also defines the right of the EC to send requests to the European standardization organizations to draft harmonized standards. Harmonized standards should be drafted with the needs of the market and the public interest in mind and be based on consensus. The development of standards must happen in a transparent manner and involve all interested parties during the various stages of development, including research centers, universities, enterprises, consumer organizations, environmental and social parties, public authorities, and market surveillance authorities. Drafts of standards and other deliverables must be distributed to other European and national standardization organizations, allowing them to comment on the draft or deliverable. When a harmonized standard is adopted, it is published in the OJEU by the EC. National standardization organizations are not allowed to impose standards which impede EU harmonization and must withdraw national standards that conflict with new harmonized standards. Member states are, however, allowed to object to a harmonized standard, which can lead to the addition of restrictions on the standard or to its withdrawal.
According to the Regulation, standardization organizations must work in a transparent manner. For example, all European and national standardization organizations must publicly publish their annual work programme, which defines, among other things, the standards and other deliverables that will be worked on during that year or that were adopted in the previous work programme.
B.2 Recommendations and standards
Next to the official EU law, a few other documents are often mentioned regarding the allocation of radio spectrum. This section takes a closer look at those documents, their implications, and jurisdiction.
B.2.1 ERC Recommendation 70-03
In 1997, CEPT issued a recommendation for the allocation of radio spectrum for SRDs. This document, called ERC Recommendation 70-03 [19], gives the opinion of the CEPT/ECC on how the radio spectrum should be allocated. The recommendation in itself has rather little power to impose or force the implementation of the harmonizing frequency bands and their restrictions, as CEPT members are free to choose whether or not to implement CEPT or ECC deliverables [108]. Moreover, it is merely a recommendation, which implies that its implementation is encouraged but entirely voluntary. The harmonized frequencies in the Recommendation used to be defined in ECC decisions, but these were repealed in 2008 as they became obsolete through EU harmonized standards [109]. The Recommendation is updated and synchronized with the frequency allocation tables in both Decision 2006/771/EC and EN 300 220 and thus corresponds with EU law. This synchronization is done deliberately because many CEPT members are also EU member states, which are obliged to uphold EU law. Members are encouraged, but not required, to uphold the Recommendation's more extensive restrictions as long as they stay within EU law. Although the Recommendation has only a small legal impact, it is cited in multiple papers and online sources. This demonstrates the Recommendation's true power, namely providing the otherwise scattered information regarding radio spectrum allocation in a single, clear, and publicly available document.
The recommendation also contains 14 annexes and 5 appendices. While the recommendation itself expresses the need for harmonization and prevention of harmful interference on a high level, the most relevant and contributing information is actually contained in the annexes and appendices which make up the bulk of the recommendation. Each annex contains a frequency allocation table defining the frequency bands for SRDs belonging to a certain application type. The frequency bands in the Recommendation mainly conform to the frequency bands specified in Decision 2006/771/EC with some minor changes which are mostly more restrictive. Only the following annexes contain frequency bands in the 863 to 870 MHz range:
Annex 1: Non-specific SRDs. This annex contains all the frequency bands of Decision 2006/771/EC for non-specific SRDs with the exception of 47b(865 MHz), which is defined in annex 2. A major difference with Decision 2006/771/EC is the split of the non-specific frequency bands in the 863 to 870 MHz range into (i) FHSS, (ii) Direct Sequence Spread Spectrum (DSSS) or other wideband technique, and (iii) non-spread spectrum bands. The FHSS and non-spread spectrum bands also specify certain bandwidth conditions not present in Decision 2006/771/EC. The DSSS or other wideband technique bands have a power density limit, while no such limits exist in Decision 2006/771/EC. The duty cycle in the frequency band for DSSS or other wideband techniques can be increased to 1% if certain conditions regarding bandwidth and power are met, but this does not apply to the FHSS and non-spread spectrum bands. Another notable difference is that restrictions such as those in Table 3 are generalized into a single restriction over all non-specific frequency bands in the 863 to 869.2 MHz range. This generalized restriction allows only digital audio and video with a maximum bandwidth of 300 kHz and analog and digital voice applications with a maximum bandwidth of 25 kHz. This differs from Decision 2006/771/EC, where frequency bands for non-specific SRDs often exclude analog audio or video applications without exceptions based on bandwidth, and do not impose bandwidth conditions on audio and video applications using digital modulation. Additionally, 56b(869.7 MHz) is more heavily restricted than in Decision 2006/771/EC, as only voice is allowed under certain conditions, such as a maximum bandwidth of 25 kHz, polite spectrum access, and a maximum transmission time of 1 min per transmission.
Annex 2: Tracking, tracing, and data acquisition. This annex contains band 47b(865 MHz) from Decision 2006/771/EC without modifications.
Annex 3: Wideband data transmission systems. This annex contains the wideband data transmission band 84(863 MHz) from Decision 2006/771/EC without modifications.
Annex 7: Alarms. This annex contains all the low duty cycle/high reliability bands 51(869.2 MHz), 52(869.25 MHz), 53(869.3 MHz), and 55(869.65 MHz) from Decision 2006/771/EC without modifications.
Annex 10: Radio microphone applications including Assistive Listening Devices (ALD), wireless audio, and multimedia streaming systems. This annex contains the high duty cycle/continuous transmission band 46b(863 MHz) from Decision 2006/771/EC without modifications.
Annex 11: RFID. The RFID annex contains a frequency band equivalent to band 47a(865 MHz) from Decision 2006/771/EC, with the addition of a maximum continuous interrogation time of 4 s and a dead time of 100 ms between interrogation transmissions in the same channel. It also contains frequency bands corresponding to the frequency bands in the repealed Decision 2006/804/EC, which are still allowed for RFID interrogation devices made before the repeal.
Following the annexes are the appendices, of which appendix 1 (National Implementation), appendix 3 (National Restrictions), and appendix 5 (Duty Cycle Categories) are the most interesting. Appendix 1 (National Implementation) provides a matrix with all frequency bands as rows and all CEPT members as columns, giving an overview of which CEPT members have implemented which frequency bands fully, partially, or not at all. Appendix 3 (National Restrictions) lists for each frequency band all CEPT members that have implemented the frequency band only partially or not at all, together with a description of the limitation and the reason thereof. The last appendix, appendix 5 (Duty Cycle Categories), contains the only recommendation in all of the EU, CEPT, and ETSI documentation regarding radio spectrum regulation for the maximum allowed continuous transmission time for SRDs not using LBT and AFA. An overview can be seen in Table 10. For example, transmissions in the low category are recommended to have a duration of at most 3.6 s. As a duty cycle of 1% only allows 36 s of transmission time during 1 h, only 10 continuous messages of 3.6 s can be sent, as illustrated by the short sketch following Table 10.
Table 10 The duty cycle categories defined in ERC Recommendation 70-03 [19]
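To make the interplay between duty cycle, observation period, and maximum transmission time concrete, the following minimal Python sketch reproduces the arithmetic of the example above. It assumes the 1-h observation period used throughout this paper; the function names are ours and do not appear in any regulation or standard.

def airtime_budget_ms(duty_cycle, t_obs_ms=3_600_000):
    # Total allowed on-air time (in ms) within one observation period,
    # e.g. 1% of 1 h = 36 000 ms = 36 s.
    return int(duty_cycle * t_obs_ms)

def max_transmissions(duty_cycle, t_tx_ms, t_obs_ms=3_600_000):
    # Number of transmissions of duration t_tx_ms that fit in the budget.
    # Integer milliseconds are used to avoid floating-point rounding issues.
    return airtime_budget_ms(duty_cycle, t_obs_ms) // t_tx_ms

print(airtime_budget_ms(0.01))        # 36000 ms of airtime per hour
print(max_transmissions(0.01, 3600))  # 10 transmissions of 3.6 s per hour

The same two functions can be reused for the other duty cycle categories of Table 10 by changing the duty cycle and the per-transmission duration.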
B.2.2 EN 300 220
As the result of a request from the EC, ETSI drafted a harmonized standard called EN 300 220 for SRDs in the 25 to 1000 MHz range [110]. The standard consists of 4 parts, from EN 300 220-1 to EN 300 220-4. EN 300 220-1 [21] mainly contains the technical specifications and the procedures to test conformance to the standard. Part 2 [111] is the actual harmonized standard for non-specific SRDs. Part 3 is itself divided into two parts, both of which are harmonized standards for the low duty cycle/high reliability alarm frequency bands: part 3-1 [112] handles the social alarm band and part 3-2 [113] the other alarm bands. Finally, part 4 [114] contains the harmonized standard for metering devices operating in the 169.4 to 169.475 MHz band.
The EN 300 220 standard aims to fulfill the essential requirements described in the RED. This connection between the standard and the RED is the true advantage of EN 300 220 and a valuable asset for manufacturers, as implementing the EN 300 220 standard in their SRDs and solutions is an easy way to comply with EU regulation. It is not mandatory to implement the EN 300 220 standard to fulfill the essential requirements, but then conformity must be proven and tested by other means. EN 300 220-2 to EN 300 220-4 each cover the essential requirements for the corresponding type of SRD (non-specific SRDs, (social) alarms, and metering devices), based on the technical specifications described in EN 300 220-1. Annex A of each document gives the relationship between the standard and the essential requirements of the RED.
The most interesting section in EN 300 220-1 [21] is Section 5.21 (polite spectrum access), as this is the only occurrence of specific regulations for devices using LBT and AFA. All timing parameters and the alternative duty cycle ratio defined in Section 3.2 and Fig. 3 originate from this section. It also specifies other parameters such as the CCA threshold: only when no signals are received with a signal strength above the threshold during the CCA check is the medium considered free and available for transmission. The threshold is categorized by transmission power, as can be seen in Table 11, and depends on the receiver sensitivity Sp, which can be calculated from the receiver bandwidth Rb using (9). Unfortunately, not all parameters have been defined clearly. One example is the boundaries of the 200 kHz intervals for the alternative duty cycle. Another is the maximum continuous transmission time, which can be 1 s or 4 s depending on the application: regular transmissions only have a maximum duration of 1 s, while the 4 s limit is reserved for polling sequences and transmission dialogs. However, there is currently no precise definition of when a transmission can be categorized as a polling sequence or transmission dialog. In the same way, there is no list of specific algorithms for LBT or AFA accepted by the standard: LBT is simply defined as a CCA check followed by a random backoff time period or a frequency change, and AFA can be implemented in various ways but should try to avoid channels occupied by other devices.
$$ S_{p} = 10 \log(R_{b}) - 117 \qquad (9) $$
Table 11 The CCA threshold defined in EN 300 220-1 [21] based on an antenna gain of 0 dB relative to a dipole
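The receiver sensitivity of (9) is straightforward to evaluate; the short Python sketch below does so for a few receiver bandwidths. EN 300 220-1 should be consulted for the exact units: here we assume, as an illustration only, that Rb is expressed in kHz and Sp in dBm, and the resulting value is the reference from which the power-dependent CCA thresholds of Table 11 are derived.

import math

def receiver_sensitivity_dbm(rb_khz):
    # Eq. (9): S_p = 10*log10(R_b) - 117 (units assumed: R_b in kHz, S_p in dBm).
    return 10.0 * math.log10(rb_khz) - 117.0

for rb in (25, 100, 200):
    print(rb, round(receiver_sensitivity_dbm(rb), 1))
# A wider receiver bandwidth yields a less sensitive (higher) S_p and thus
# a correspondingly higher CCA threshold.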
Parts EN 300 220-2 to EN 300 220-4 are the actual harmonized standards and almost exclusively refer to EN 300 220-1 for descriptions, limits, and conformance procedures. The standardized frequency allocations can be found mostly in their annexes:
EN 300 220-2 annex B contains the regulatory limits for non-specific SRDs on the EU market, categorized as class 1 devices, in the form of a frequency allocation table, which is kept in sync with Decision 2006/771/EC. At the moment of writing, however, the annex is not yet updated to the current version of Decision 2006/771/EC, meaning that, for example, 47b(865 MHz) has not yet been added. Additionally, frequency bands regarding RFID and wideband data applications are not included in EN 300 220. Even apart from the differences due to updated legislation, there are some differences between EN 300 220-2 and Decision 2006/771/EC, resembling the differences between ERC Recommendation 70-03 and Decision 2006/771/EC. For example, EN 300 220-2 mentions no usage restrictions regarding audio and video in the whole 863 to 869.65 MHz range, in contrast to the exclusions of analog audio and/or video mentioned in Decision 2006/771/EC. There is also no restriction for analog video in 54(869.4 MHz), nor any mention of the allowance for voice applications in 56a(869.7 MHz), as there is in Decision 2006/771/EC. Additionally, EN 300 220-2 is more restrictive for 46a(863 MHz), as it limits the bandwidth for audio and video to 300 kHz while no such restriction is enforced by Decision 2006/771/EC.
EN 300 220-2 annex C also contains a frequency allocation table, but for frequency bands that are not harmonized in the EU or that apply to non-EU members. The frequency bands here largely correspond with the equivalent frequency bands mentioned in annex 1 of ERC Recommendation 70-03. The difference is that ERC Recommendation 70-03 describes the frequency bands with notes, while EN 300 220-2 describes them explicitly in its spectrum table. Minor differences include, for example, the replacement of the audio/video bandwidth limitations of the FHSS frequency bands by a maximum allowed occupied bandwidth based on the number of channels, and the definition of wideband as a minimum occupied bandwidth of 200 kHz in EN 300 220-2.
EN 300 220-3-1 section 4.2 contains the frequency band for social alarms, which corresponds entirely with the low duty cycle/high reliability band 51(869.2 MHz) from Decision 2006/771/EC.
EN 300 220-3-2 annex B contains the frequency bands for alarms, which correspond entirely with the low duty cycle/high reliability bands 49(868 MHz), 52(869.25 MHz), 53(869.3 MHz), and 55(869.65 MHz) from Decision 2006/771/EC.
Footnotes
1. This paper uses the following notation for frequency bands in Tables 1 and 3: number(start frequency MHz). For example, band 48 from 868 to 868.6 MHz will be noted as 48(868 MHz).
2. Similar to the issue of Tobs in the time domain for the duty cycle restriction, as shown in Fig. 2.
3. Tobs in (1).
4. Along with Commission Implementing Decision 2018/1538.
5. An exception is included for the demonstration of radio equipment on trade fairs or exhibitions, as long as it is clearly indicated and no risk or harmful interference is present.
6. Importers and distributors will also be considered as manufacturers if they place the equipment on the market under their name or modify the radio equipment.
Abbreviations
6LoWPAN: IPv6 over Low-Power Wireless Personal Area Network
6TiSCH: IPv6 over the TSCH mode of IEEE 802.15.4e
6top: 6TiSCH operation sublayer
AFA: Adaptive Frequency Agility
ALD: Assistive Listening Devices
APC: Adaptive Power Control
ATPC: Adaptive Transmission Power Control
BIPT: Belgian Institute for Postal Services and Telecommunications
CCA: Clear Channel Assessment
CE: Conformité Européenne, or European Conformity
CEG: Commission expert group
CEPT: European Conference of Postal and Telecommunications Administrations
CERP: European Committee for Postal Regulation
CoAP: Constrained Application Protocol
Com-ITU: Committee for ITU Policy
CSMA-CA: Carrier-sense multiple access with collision avoidance
CSS: Chirp Spread Spectrum
DBPSK: Differential Binary Phase Shift Keying
DSSS: Direct Sequence Spread Spectrum
EBA: European Broadcasting Area
EC: European Commission
ECC: Electronic Communications Committee
ECO: European Communications Office
ECP: European Common Proposals
EFIS: ERO Frequency Information System
EIRP: Effective Isotropic Radiated Power
ERO: European Radiocommunications Office
ERP: Effective Radiated Power
ESO: European Standards Organization
ETSI: European Telecommunications Standards Institute
e.r.p.: effective radiated power
FEC: Forward Error Correction
FHSS: Frequency Hopping Spread Spectrum
FSK: Frequency Shift Keying
GFSK: Gaussian Frequency Shift Keying
GSM-R: Global System for Mobile Communications for Railways
IoT: Internet of Things
ISM: Industrial, Scientific and Medical
ITU: International Telecommunication Union
ITU-D: Telecommunication Development Sector
ITU-R: Radiocommunication Sector
ITU-RR: Radio Regulations
ITU-T: Telecommunication Standardization Sector
LBT: Listen Before Talk
LDPC: Low-Density Parity Check
LECIM: Low-Energy, Critical Infrastructure Monitoring
LPWAN: Low-Power Wide-Area Networks
MDP: Markov Decision Processes
MoU: Memorandum of Understanding
NRA: National Regulatory Authority
OJEU: Official Journal of the European Union
PHY: Physical layer
QoS: Quality of Service
RED: Radio Equipment Directive
RPL: IPv6 Routing Protocol for Low-Power and Lossy Networks
RSC: Radio Spectrum Committee
RSPG: Radio Spectrum Policy Group
RSPP: Radio Spectrum Policy Programme
SFD: Start-of-Frame Delimiter
SG: Study Group
SRD: Short-range device
SRDoc: System Reference Document
SUN: Smart Utility Network
TG4w: IEEE 802.15 WPAN™ Task Group 4w
TFEU: Treaty on the Functioning of the European Union
TSCH: Time Slotted Channel Hopping
UNB: Ultra Narrow Band
WRC: World Radiocommunication Conference
References
J. M. Marais, R. Malekian, A. M. Abu-Mahfouz, in 2017 IEEE AFRICON. LoRa and LoRaWAN testbeds: a review, (2017), pp. 1496–1501. https://doi.org/10.1109/AFRCON.2017.8095703.
K. Mikhaylov, J. Petaejaejaervi, T. Haenninen, in European Wireless 2016; 22th European Wireless Conference. Analysis of capacity and scalability of the LoRa low power wide area network technology (VDE, Berlin, 2016), pp. 1–6. https://www.vde-verlag.de/.
A. Pop, U. Raza, P. Kulkarni, M. Sooriyabandara, Does bidirectional traffic do more harm than good in LoRaWAN based LPWA networks?. CoRR. abs/1704.04174: (2017). http://arxiv.org/abs/1704.04174.
F. Adelantado, X. Vilajosana, P. Tuset-Peiro, B. Martinez, J. Melia-Segui, T. Watteyne, Understanding the limits of LoRaWAN. IEEE Commun. Mag.55(9), 34–40 (2017). https://doi.org/10.1109/MCOM.2017.1600613.
B. Vejlgaard, M. Lauridsen, H. Nguyen, I. Z. Kovacs, P. Mogensen, M. Sorensen, in 2017 IEEE 85th Vehicular Technology Conference (VTC Spring). Coverage and capacity analysis of Sigfox, LoRa, GPRS, and NB-IoT, (2017), pp. 1–5. https://doi.org/10.1109/VTCSpring.2017.8108666.
D. Ismail, M. Rahman, A. Saifullah, in Proceedings of the Workshop Program of the 19th International Conference on Distributed Computing and Networking. Workshops ICDCN '18. Low-power wide-area networks: opportunities, challenges, and directions (ACM, New York, 2018), pp. 8–186. https://doi.org/10.1145/3170521.3170529.
K. E. Nolan, W. Guibene, M. Y. Kelly, in 2016 International Wireless Communications and Mobile Computing Conference (IWCMC). An evaluation of low power wide area network technologies for the Internet of Things, (2016), pp. 439–444. https://doi.org/10.1109/IWCMC.2016.7577098.
Y. Kawamoto, Y. Kado, in 2016 TRON Symposium (TRONSHOW). NES-SOURCE: indoor small-scale wireless control network protocol that has a communication failure point avoidance function, (2016), pp. 1–7. https://doi.org/10.1109/TRONSHOW.2016.7842884.
IEEE standard for local and metropolitan area networks–part 15.4: low-rate wireless personal area networks (LR-WPANs) amendment 3: physical layer (PHY) specifications for low-data-rate, wireless, smart metering utility networks. IEEE Std 802.15.4g-2012 (Amendment to IEEE Std 802.15.4-2011), 1–252 (2012). https://doi.org/10.1109/IEEESTD.2012.6190698.
T. Adame, S. Barrachina, B. Bellalta, A. Bel, HARE: supporting efficient uplink multi-hop communications in self-organizing LPWANs. CoRR. abs/1701.04673: (2017). http://arxiv.org/abs/1701.04673.
A. Karaağaç, J. Haxhibeqiri, W. Joseph, I. Moerman, J. Hoebeke, in 2017 IEEE 13th International Workshop on Factory Communication Systems (WFCS). Wireless industrial communication for connected shuttle systems in warehouses, (2017), pp. 1–4. https://doi.org/10.1109/WFCS.2017.7991971.
M. Altmann, P. Schlegl, K. Volbert, in 2015 12th International Workshop on Intelligent Solutions in Embedded Systems (WISES). A low-power wireless system for energy consumption analysis at mains sockets (IEEE Piscataway, 2015), pp. 79–84. https://www.ieee.org/.
M. Qutab-ud-din, A. Hazmi, L. F. D. Carpio, A. Goekceoglu, B. Badihi, P. Amin, A. Larmo, M. Valkama, in European Wireless 2016; 22th European Wireless Conference. Duty cycle challenges of IEEE 802.11ah networks in M2M and IoT applications (VDE, Berlin, 2016), pp. 1–7. https://www.vde-verlag.de/.
C. Pham, QoS for long-range wireless sensors under duty-cycle regulations with shared activity time usage. ACM Trans. Sen. Netw.12(4), 33–13331 (2016). https://doi.org/10.1145/2979678.
M. T. Islam, B. Islam, S. Nirjon, in 2018 14th International Conference on Distributed Computing in Sensor Systems (DCOSS). Duty-cycle-aware real-time scheduling of wireless links in low power wans (IEEE Piscataway, 2018), pp. 53–60.
E. De Poorter, J. Hoebeke, M. Strobbe, I. Moerman, S. Latré, M. Weyn, B. Lannoo, J. Famaey, Sub-GHz LPWAN network coexistence, management and virtualization: an overview and open research challenges. Wirel. Pers. Commun.95(1), 187–213 (2017). https://doi.org/10.1007/s11277-017-4419-5.
M. Lauridsen, B. Vejlgaard, I. Z. Kovacs, H. Nguyen, P. Mogensen, in 2017 IEEE Wireless Communications and Networking Conference (WCNC). Interference measurements in the European 868 MHz ISM band with focus on LoRa and SigFox, (2017), pp. 1–6. https://doi.org/10.1109/WCNC.2017.7925650.
European Union, Commission Decision of 9 November 2006 on harmonisation of the radio spectrum for use by short-range devices (2006). 2006/771/EC. Consolidated version of August 2017.
CEPT/ECC, ERC Recommendation 70-03 Relating to the use of short range devices (SRD) (1997). https://www.ecodocdb.dk/download/Archive/25c41779-cd6e/4206d0ad-1909/Rec7003.pdf.
M. Loy, R. Karingattil, L. Williams, ISM-band and short range device regulatory compliance overview (2005). http://www.ti.com/lit/an/swra048/swra048.pdf.
ETSI, Short Range Devices (SRD) operating in the frequency range 25 MHz to 1 000 MHz; Part 1: Technical characteristics and methods of measurement (2017). https://www.etsi.org/deliver/etsi_en/300200_300299/30022001/03.01.01_60/en_30022001v030101p.pdf.
S. Lin, F. Miao, J. Zhang, G. Zhou, L. Gu, T. He, J. A. Stankovic, S. Son, G. J. Pappas, ATPC: adaptive transmission power control for wireless sensor networks. ACM Trans. Sen. Netw.12(1), 6–1631 (2016). https://doi.org/10.1145/2746342.
European Union, Commission Decision of 9 November 2006 on harmonisation of the radio spectrum for use by short-range devices (2006). http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:02006D0771(01)-20080611.
SigFox, Sigfox technical overview (2017). https://storage.sbg1.cloud.ovh.net/v1/AUTH_669d7dfced0b44518cb186841d7cbd75/dev_medias/build_technicalOverview.pdf.
LoRa Alliance Technical Committee, LoRaWAN™ Regional Parameters (2016). https://www.mdpi.com/1424-8220/17/10/2364.
International Telecommunication Union, Recommendation ITU-R SM.1896: frequency ranges for global or regional harmonization of short-range devices (2011). https://www.itu.int/dms_pubrec/itu-r/rec/sm/R-REC-SM.1896-0-201111-S!!PDF-E.pdf.
European Union, Permanent mandate to CEPT regarding the annual update of the technical annex of the Commission Decision on the technical harmonisation of radio spectrum for use by short range devices (2006). http://ec.europa.eu/newsroom/dae/document.cfm?action=display&doc_id=7494.
CEPT/ECC, In response to the EC permanent mandate on the "Annual update of the technical annex of the Commission Decision on the technical harmonisation of radio spectrum for use by short range devices" (2016). https://www.erodocdb.dk/download/08fc64c1-36ab/CEPTRep059.pdf.
CEPT/ECC, Addendum to CEPT Report 59 (2017). https://www.erodocdb.dk/download/08fc64c1-36ab/CEPTRep059_Addendum.pdf.
CEPT/ECC, Future Spectrum Demand for Short Range Devices in the UHF Frequency Bands (2014). https://www.erodocdb.dk/download/f584774b-c3c4/ECCREP189.PDF.
CEPT/ECC, Co-existence studies for proposed SRD and RFID applications in the frequency band 870-876 MHz and 915-921 MHz (2013). https://www.erodocdb.dk/download/26ce1d81-2a81/ECCREP200.PDF.
European Union, Commission Implementing Decision (EU) 2018/1538 of 11 October 2018 on the harmonisation of radio spectrum for use by short-range devices within the 874-876 and 915-921 MHz frequency bands (2018). https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:32018D1538.
European Union, Decision No 676/2002/EC of the European Parliament and of the Council of 7 March 2002 on a regulatory framework for radio spectrum policy in the European Community (Radio Spectrum Decision) (2002). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32002D0676.
U. Raza, P. Kulkarni, M. Sooriyabandara, Low power wide area networks: an overview. IEEE Commun. Surv. Tutor.19(2), 855–873 (2017). https://doi.org/10.1109/COMST.2017.2652320.
J. -C. Zúñiga, B. Ponsard, SIGFOX System Description. Internet-Draft draft-zuniga-lpwan-sigfox-system-description-04, Internet Engineering Task Force (December 2017). Work in Progress. https://datatracker.ietf.org/doc/html/draft-zuniga-lpwan-sigfox-system-description-04.
J. de Carvalho Silva, J. J. P. C. Rodrigues, A. M. Alberti, P. Solic, A. L. L. Aquino, in 2017 2nd International Multidisciplinary Conference on Computer and Energy Science (SpliTech). LoRaWAN - a low power WAN protocol for Internet of Things: a review and opportunities (IEEE, Piscataway, 2017), pp. 1–6. https://www.ieee.org/.
N. Sornin, M. Luis, T. Eirich, T. Kramp, O. Hersent, LoRaWAN™ Specification (2016).
Semtech Corporation, SX1272/3/6/7/8: LoRa Modem Designer's Guide (2013). http://www.semtech.com/images/datasheet/LoraDesignGuide_STD.pdf.
N. Blenn, F. A. Kuipers, Lorawan in the wild: measurements from the things network. CoRR. abs/1706.03086: (2017). http://arxiv.org/abs/1706.03086.
DASH7 Alliance, DASH7 Alliance Wireless Sensor and Actuator Network Protocol Version 1.1 (2017).
Q. Wang, X. Vilajosana, T. Watteyne, 6TiSCH Operation Sublayer Protocol (6P) (2018). Internet-Draft draft-ietf-6tisch-6top-protocol-12, Internet Engineering Task Force (2018). Work in Progress. https://datatracker.ietf.org/doc/html/draft-ietf-6tisch-6top-protocol-12. Accessed 11 Oct 2018.
L. Thomas, R. Shalu, J. J. Daniel, S. V. R. Anand, M. Hegde, in 2017 9th International Conference on Communication Systems and Networks (COMSNETS). 6TiSCH operation sublayer (6top) implementation on contiki os, (2017), pp. 423–424. https://doi.org/10.1109/COMSNETS.2017.7945424.
H. A. A. Al-Kashoash, A. H. Kemp, Comparison of 6LoWPAN and LPWAN for the Internet of Things. Australian Journal of Electrical and Electronics Engineering. 13(4), 268–274 (2016). https://doi.org/10.1080/1448837X.2017.1409920.
S. Thielemans, M. Bezunartea, K. Steenhaut, in 2017 Wireless Telecommunications Symposium (WTS). Establishing transparent IPv6 communication on LoRa based low power wide area networks (LPWANs), (2017), pp. 1–6. https://doi.org/10.1109/WTS.2017.7943535.
A. Abrardo, A. Pozzebon, A multi-hop LoRa linear sensor network for the monitoring of underground environments: the case of the Medieval Aqueducts in Siena, Italy. Sensors. 19(2) (2019). https://doi.org/10.3390/s19020402.
M. Anedda, C. Desogus, M. Murroni, D. D. Giusto, G. Muntean, in 2018 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB). An energy-efficient solution for multi-hop communications in Low Power Wide Area Networks, (2018), pp. 1–5. https://doi.org/10.1109/BMSB.2018.8436722.
C. Liao, G. Zhu, D. Kuwabara, M. Suzuki, H. Morikawa, Multi-hop LoRa networks enabled by concurrent transmission. IEEE Access. 5:, 21430–21446 (2017). https://doi.org/10.1109/ACCESS.2017.2755858.
B. Sartori, S. Thielemans, M. Bezunartea, A. Braeken, K. Steenhaut, in 2017 IEEE 13th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob). Enabling RPL multihop communications based on LoRa, (2017), pp. 1–8. https://doi.org/10.1109/WiMOB.2017.8115756.
IEEE, IEEE 802.15 WPANTM Task Group 4w (TG4w) Low Power Wide Area (LPWA) (2011). Available online: http://grouper.ieee.org/groups/802/15/pub/TG4w.html Accessed 23 May 2019.
Standards news. IEEE Commun. Stand. Mag.2(4), 12–17 (2018). https://doi.org/10.1109/MCOMSTD.2018.8636827.
Z. Shelby, K. Hartke, C. Bormann, The Constrained Application Protocol (CoAP). RFC Editor (2014). https://doi.org/10.17487/RFC7252. https://rfc-editor.org/rfc/rfc7252.txt.
N. Accettura, L. A. Grieco, G. Boggia, P. Camarda, in 2011 IEEE International Conference on Mechatronics. Performance analysis of the RPL routing protocol, (2011), pp. 767–772. https://doi.org/10.1109/ICMECH.2011.5971218.
U. Kulau, S. Müller, S. Schildt, A. Martens, F. Büsching, L. Wolf, in 2017 IEEE 18th International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM). Energy efficiency impact of transient node failures when using RPL, (2017), pp. 1–6. https://doi.org/10.1109/WoWMoM.2017.7974283.
K. Heurtefeux, H. Menouar, N. AbuAli, in 2013 IEEE 9th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob). Experimental Evaluation of a Routing Protocol for WSNS: RPL robustness under study, (2013), pp. 491–498. https://doi.org/10.1109/WiMOB.2013.6673404.
IEEE Standard for Low-Rate Wireless Networks. IEEE Std 802.15.4-2015 (Revision of IEEE Std 802.15.4-2011), 1–709 (2016). https://doi.org/10.1109/IEEESTD.2016.7460875.
M. Rizzi, P. Ferrari, A. Flammini, E. Sisinni, M. Gidlund, in 2017 IEEE 13th International Workshop on Factory Communication Systems (WFCS). Using LoRa for industrial wireless networks, (2017), pp. 1–4. https://doi.org/10.1109/WFCS.2017.7991972.
P. Thubert, An architecture for IPv6 over the TSCH mode of IEEE 802.15.4 (2018). Internet-Draft draft-ietf-6tisch-architecture-14, Internet Engineering Task Force (2018). Work in Progress. https://datatracker.ietf.org/doc/html/draft-ietf-6tisch-architecture-14. Accessed 19 Dec 2017.
P. Du, G. Roussos, in 2012 4th Computer Science and Electronic Engineering Conference (CEEC). Adaptive time slotted channel hopping for wireless sensor networks, (2012), pp. 29–34. https://doi.org/10.1109/CEEC.2012.6375374.
R. Tavakoli, M. Nabi, T. Basten, K. Goossens, in 2015 IEEE 12th International Conference on Mobile Ad Hoc and Sensor Systems. Enhanced time-slotted channel hopping in WSNS using non-intrusive channel-quality estimation, (2015), pp. 217–225. https://doi.org/10.1109/MASS.2015.48.
V. Kotsiou, G. Z. Papadopoulos, P. Chatzimisios, F. Theoleyre, in Proceedings of the 20th ACM International Conference on Modelling, Analysis and Simulation of Wireless and Mobile Systems. MSWiM '17. Label: link-based adaptive blacklisting technique for 6TiSCH wireless industrial networks (ACM, New York, 2017), pp. 25–33. https://doi.org/10.1145/3127540.3127541.
P. Li, T. Vermeulen, H. Liy, S. Pollin, in 2015 International Symposium on Wireless Communication Systems (ISWCS). An adaptive channel selection scheme for reliable TSCH-based communication, (2015), pp. 511–515. https://doi.org/10.1109/ISWCS.2015.7454397.
R. M. Sandoval, A. Garcia-Sanchez, J. Garcia-Haro, T. M. Chen, Optimal policy derivation for transmission duty-cycle constrained LPWAN. IEEE Internet of Things J.5(4), 3114–3125 (2018). https://doi.org/10.1109/JIOT.2018.2833289.
D. Fanucchi, R. Knorr, B. Staehle, in 2015 IEEE 16th International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM). Impact of network monitoring in IEEE 802.15.4e-based wireless sensor networks, (2015), pp. 1–3. https://doi.org/10.1109/WoWMoM.2015.7158174.
W. Liu, M. Chwalisz, C. Fortuna, E. De Poorter, J. Hauer, D. Pareit, L. Hollevoet, I. Moerman, Heterogeneous spectrum sensing: challenges and methodologies. EURASIP J. Wirel. Commun. Netw.2015(1), 70 (2015). https://doi.org/10.1186/s13638-015-0291-8.
J. Famaey, R. Berkvens, G. Ergeerts, E. D. Poorter, F. V. d. Abeele, T. Bolckmans, J. Hoebeke, M. Weyn, Flexible multimodal sub-gigahertz communication for heterogeneous internet of things applications. IEEE Commun. Mag.56(7), 146–153 (2018). https://doi.org/10.1109/MCOM.2018.1700655.
United Nations, United Nations Treaty Series: treaties and international agreements registered or filed and recorded with the Secretariat of the United Nations. Vol. 30 (1949). https://treaties.un.org/doc/Publication/UNTS/Volume%2030/v30.pdf.
International Telecommunication Union, Collection of the basic texts adopted by the Plenipotentiary Conference (2015). http://handle.itu.int/11.1004/020.1000/5.21.61.en.100.
International Telecommunication Union, Radio Regulations: Articles. Vol. 1 (2016). http://www.itu.int/dms_pub/itu-r/opb/reg/R-REG-RR-2016-ZPF-E.zip.
H. Mazar, Radio Spectrum Management: Policies, Regulations and Techniques (Wiley, Hoboken, 2016).
A. Sendin, M. A. Sanchez-Fornie, I. Berganza, J. Simon, I. Urrutia, Telecommunication Networks for the Smart Grid: Artech House power engineering series (Artech House Publishers, Norwood, 2016).
International Telecommunication Union, Resolution ITU-R 54-2: studies to achieve harmonization for short-range devices (2015). https://www.itu.int/dms_pub/itu-r/opb/res/R-RES-R.54-2-2015-PDF-E.pdf.
International Telecommunication Union, Report ITU-R SM.2153-6: technical and operating parameters and spectrum use for short-range radiocommunication devices (2017). https://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-SM.2153-6-2017-PDF-E.pdf.
International Telecommunication Union, Recommendation ITU-R SM.2103-0: global harmonization of short-range devices categories (2017). https://www.itu.int/dms_pubrec/itu-r/rec/sm/R-REC-SM.2103-0-201709-I!!PDF-E.pdf.
European Union, Decision No 243/2012/EU of the European Parliament and of the Council of 14 March 2012 establishing a multiannual radio spectrum policy programme (2012). http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32012D0243.
(Publications Office of the European Union, Luxembourg, 2014). https://doi.org/10.2775/11255.
K. -D. Borchardt, The ABC of EU law (Publications Office of the European Union, Luxembourg, 2016). https://doi.org/10.2775/953190.
European Union: Consolidated Versions of the Treaty on European Union and the Treaty on the Functioning of the European Union (2016). http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:02016ME/TXT-20160901.
European Union: Regulation (EU) No 182/2011 of the European Parliament and of the Council of 16 February 2011 laying down the rules and general principles concerning mechanisms for control by Member States of the Commission's exercise of implementing powers (2011). http://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:32011R0182.
European Union: Communication from the Commission to the European Parliament and the Council; Implementation of Article 290 of the Treaty on the Functioning of the European Union. COM (2009) 673 final (2009). http://eur-lex.europa.eu/legal-content/EN/ALL/?uri=celex:52009DC0673.
European Union: Commission Decision of 26 July 2002 establishing a Radio Spectrum Policy Group (2002). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:02002D0622-20100107.
CEPT, CEPT Background (2017). Available online: https://cept.org/cept/background Accessed 8 Dec 2017.
CEPT/ECC, ECC All About Our Organisation; The Electronic Communications Committee (2017).
CEPT/ECC and European Union, Memorandum of Understanding Between the European Commission ("the Commission") and the European Conference of Postal and Telecommunications Administrations ("CEPT") (2004). https://cept.org/files/6682/MoU%20EC%20and%20CEPT.pdf.
CEPT/ECC, Rules of Procedure for the Electronic Communications Committee (and its subordinate entities), 15th edition (CEPT/ECC, Copenhagen, 2017). https://cept.org/ecc/.
CEPT, European Communications Office Documentation Database. Available online: http://www.ecodocdb.dk/ Accessed 10 Dec 2017.
European Union, Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council (2012). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:02012R1025-20151007.
ETSI, Building the future: work programme 2017–2018 (2017). https://www.etsi.org/images/files/WorkProgramme/etsi-work-programme-2017-2018.pdf.
ETSI, Long-term strategy 2016–2021 (2018). http://www.etsi.org/technologies-clusters/white-papers-and-brochures/etsi-long-term-strategy. Accessed 28 Feb 2018.
ETSI, Role in Europe. Available online: http://www.etsi.org/about/what-we-are/role-in-europe Accessed 10 Dec 2017.
European Union, CIRCABC - Radio Spectrum Committee (RSC). Available online: https://circabc.europa.eu/faces/jsp/extension/wai/navigation/container.jsp Accessed 11 Dec 2017.
European Union, Radio spectrum CEPT mandates: list of EC mandates to CEPT (2014). Available online: https://ec.europa.eu/digital-single-market/en/news/radio-spectrum-cept-mandates-0 Accessed 11 Dec 2017.
CEPT/ECC and ETSI, European process of standardisation and regulation for radiocommunications devices and systems - cooperation between CEPT/ECC and ETSI (2018). https://cept.org/files/7326/ECC-ETSI%20cooperation%20process-2018%20final.pdf.
CEPT/ECC, ETSI: Memorandum of Understanding between the CEPT Electronics Communications Committee (ECC) and the European Telecommunications Standards Institute (ETSI) (2016). https://cept.org/files/6682/MoU%20ECC%20and%20ETSI%20-%20update%202016.pdf.
CEPT/ECC and ETSI, ETSI-ECC cross reference matrix (2017). Available online: https://cept.org/files/7326/ETSI-ECC_cross_reference_matrix-October2017v3.xlsx Accessed 11 Dec 2017.
European Union, National Regulatory Authorities (2018). Available online: https://ec.europa.eu/digital-single-market/en/national-regulatory-authorities Accessed 28 Aug 2018.
BIPT, Telecommunications-Radio communications. Available online: http://www.bipt.be/en/operators/bipt/international-relations/telecommunications-radio-communications Accessed 11 Dec 2017.
Ofcom, Spectrum (international work) (2010). Available online: https://www.ofcom.org.uk/about-ofcom/international/spectrum Accessed 11 Dec 2017.
European Union, Commission Decision of 6 April 2000 establishing the initial classification of radio equipment and telecommunications terminal equipment and associated identifiers (2000). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32000D0299.
European Union, Commission Implementing Regulation (EU) 2017/1354 of 20 July 2017 specifying how to present the information provided for in Article 10(10) of Directive 2014/53/EU of the European Parliament and of the Council (2017). http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32017R1354.
CEPT, R&TTE and RE Equipment Classes (2014). Available online: https://www.efis.dk/sitecontent.jsp?sitecontent=RTTE_sub-classes Accessed 11 Jan 2018.
European Union, Opinion of the RSC on a draft Commission Mandate to CEPT on SRD radio spectrum harmonisation (2004). https://ec.europa.eu/digital-single-market/en/news/radio-spectrum-ceptmandates-0.
European Union, Commission Decision of 16 May 2007 on harmonised availability of information regarding spectrum use within the Community (2007). https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A32007D0344.
CEPT, ECO Frequency Information System. Available online: https://www.efis.dk/ Accessed 12 Jan 2018.
European Union, Mandate to CEPT on the use of EFIS for publication and access to spectrum information within the Community (2005). https://ec.europa.eu/digital-single-market/en/news/radio-spectrum-ceptmandates-0.
European Union, Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of radio equipment and repealing Directive 1999/5/EC (2014). https://eur-lex.europa.eu/legal-content/GA/TXT/?uri=celex:32014L0053.
European Union, Commission communication in the framework of the implementation of Directive 2014/53/EU of the European Parliament and of the Council on the harmonisation of the laws of the Member States relating to the making available on the market of radio equipment and repealing Directive 1999/5/EC (Publication of titles and references of harmonised standards under Union harmonisation legislation) (2017). https://eur-lex.europa.eu/legal-content/SV/TXT/?uri=CELEX:52017XC0512(04).
European Union, Nando (New Approach Notified and Designated Organisations) Information System. Available online: http://ec.europa.eu/growth/tools-databases/nando/ Accessed 13 Feb 2018.
CEPT/ECC, Working Methods for the Electronic Communications Committee (and its sub-ordinate entities), 28th edition (CEPT/ECC, Copenhagen, 2017). https://cept.org/ecc/.
CEPT/ECC, ECC Decision of 14 March 2008 on the withdrawal of ERC/DEC/(01)04, ERC/DEC/(01)09, ERC/DEC/(01)13, ERC/DEC/(01)15 and ERC/DEC(01)18 (2008). https://www.ecodocdb.dk/download/6e386e17-bc5d/ECCDEC0804.PDF.
European Union, Commission Implementing Decision of 4.8.2015 on a standardisation request to the European Committee for Electrotechnical Standardisation and to the European Telecommunications Standards Institute as regards radio equipment in support of Directive 2014/53/EU of the European Parliament and of the Council (2015). https://ec.europa.eu/growth/toolsdatabases/mandates/index.cfm?fuseaction=search.detail&id=556#.
ETSI, Short Range Devices (SRD) operating in the frequency range 25 MHz to 1 000 MHz; Part 2: Harmonised Standard for access to radio spectrum for non specific radio equipment (2018). https://www.etsi.org/deliver/etsi_en/300200_300299/30022002/03.02.01_60/en_30022002v030201p.pdf.
ETSI, Short Range Devices (SRD) operating in the frequency range 25 MHz to 1 000 MHz; Part 3-1: Harmonised Standard covering the essential requirements of article 3.2 of Directive 2014/53/EU; Low duty cycle high reliability equipment, social alarms equipment operating on designated frequencies (869,200 MHz to 869,250 MHz) (2016). https://www.etsi.org/deliver/etsi_en/300200_300299/3002200301/02.01.01_60/en_3002200301v020101p.pdf.
ETSI, Short Range Devices (SRD) operating in the frequency range 25 MHz to 1 000 MHz; Part 3-2: Harmonised Standard covering the essential requirements of article 3.2 of Directive 2014/53/EU; Wireless alarms operating in designated LDC/HR frequency bands 868,60 MHz to 868,70 MHz, 869,25 MHz to 869,40 MHz, 869,65 MHz to 869,70 MHz (2017). https://www.etsi.org/deliver/etsi_en/300200_300299/3002200302/01.01.01_60/en_3002200302v010101p.pdf.
ETSI, Short Range Devices (SRD) operating in the frequency range 25 MHz to 1 000 MHz; Part 4: Harmonised Standard covering the essential requirements of article 3.2 of Directive 2014/53/EU; Metering devices operating in designated band 169,400 MHz to 169,475 MHz (2017). https://www.etsi.org/deliver/etsi_en/300200_300299/30022004/01.01.01_60/en_30022004v010101p.pdf.
This work has been partly funded by the IDEAL-IoT (Intelligent DEnse And Longe range IoT networks) SBO project, funded by the FWO-V (Fund for Scientific Research-Flanders) under grant agreement #S004017N.
Department of Information Technology, Ghent University - imec - IDLab, Technologiepark Zwijnaarde 15, Ghent, B-9052, Belgium
Martijn Saelens, Jeroen Hoebeke, Adnan Shahid & Eli De Poorter
Martijn Saelens
Jeroen Hoebeke
Adnan Shahid
Eli De Poorter
EDP was responsible for the conceptualization and project administration. MS was responsible for the investigation. AS was responsible for the resources. JH and EDP were responsible for the supervision of the study. MS was responsible for the visualization, writing, and original draft of the manuscript. MS and EDP were responsible for the writing, review, and editing of the manuscript. All authors read and approved the final manuscript.
Correspondence to Martijn Saelens.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Saelens, M., Hoebeke, J., Shahid, A. et al. Impact of EU duty cycle and transmission power limitations for sub-GHz LPWAN SRDs: an overview and future challenges. J Wireless Com Network 2019, 219 (2019). https://doi.org/10.1186/s13638-019-1502-5
LPWAN
GIS and remote sensing coupled with analytical hierarchy process (AHP) for the selection of appropriate sites for landfills: a case study in the province of Ouarzazate, Morocco
Farah Abdelouhed1,
Algouti Ahmed1,
Algouti Abdellah1,
Baiddane Yassine1 &
Ifkirne Mohammed1
The province of Ouarzazate has a population of 10,744 and is divided into 17 communes (15 rural and 2 urban), the majority of which have fewer than 2000 inhabitants. Currently, more than 42% of the total population does not have access to a controlled landfill meeting the socio-environmental criteria defined by Law 28-00 and its implementing regulations. Most existing disposal sites are located in small villages and resemble illegal dumps or black spots close to inhabited areas. Moreover, a controlled landfill was established near the city of Ouarzazate in 2009. Over time, urban extensions have tended to move towards the landfill site, following the city's development plans, which affects the environment and health of the new population. Indeed, according to the results of our study, this landfill is located in the wrong place; it does not meet all the main socio-environmental criteria. For these reasons, this study was conducted to identify appropriate landfill sites and waste transfer centers using geographic information systems (GIS) and remote sensing coupled with multi-criteria evaluation techniques such as the AHP. Eleven criteria were selected, including distance to protected areas, wind direction, subsurface geology, lineament density, distance to surface water (river systems and dams), soil quality, distance to roads, elevation, and slope. The rasters of all criteria were prepared, processed, and overlaid in the GIS environment, assigning each parameter a weight according to its importance. In the field, five sites were provisionally selected, but only sites D and B were given higher priority because of their geographical location, large surface area, geological imperviousness, negligible risk, better soil quality, distance from any protection zone, water point, or hydrographic network, and their accessibility by provincial roads. These sites are located very close to the province's waste production hubs, which helps reduce the cost of transporting waste to the new landfill.
Introduction
Solid waste management is a matter of great international concern from an environmental, morphological, and socio-economic viewpoint [18, 23, 56]. Effective management of municipal solid waste (MSW) is a major challenge for local authorities and planners due to rapid industrialization, population increase, and land scarcity [42]. This expansion of urban growth endangers sustainable urbanization and has resulted in many challenges. Efficient waste management systems that can provide reliable services are needed to cope with the increasing amounts of solid waste, as many current systems fail to satisfy the needs [70]. Waste management in the study area is performed in a very crude and conventional way, except in the city of Ouarzazate; it does not take into account the improvement of the living and working conditions of the workers and rag pickers, nor the environmental protection approach in which Morocco has engaged. The urban population lives essentially in the cities and urban centers of Tarmigte, Amerzgane, Kourkoda, Tachakouchte, Timjijte, Anzal, Tabourahte, Ait Ben Haddou, Timdline, Telouet, Ighrem Nougdal, Agouim, Tidili, Skoura, Toundoute, Iminoulaoune, Ghessate, and Idelsane. This population currently totals 309,000 inhabitants and will reach 393,000 by 2032 if the region's socio-economic development projects are implemented. The amount of waste produced by this urban population and to be landfilled was estimated at 20,600 t/year in 2014, which justifies the implementation of a new controlled landfill in the study area that respects all socio-environmental standards. The study area (Fig. 1) belongs to the province of Ouarzazate, which is bordered to the north by the province of El Haouz, to the east by the province of Azilal, to the south by the province of Tinghir, to the south-west by the province of Zagora, and to the west by the provinces of Tata and Taroudant. Administratively, the province of Ouarzazate is currently composed of 2 circles (Ouarzazate and Amerzgane), 2 urban communes (Ouarzazate and Taznakhte), and 15 rural communes (Tarmigte, Idelsane, Toundoute, Ghessate, Iminoulaouene, Iznaguen, Khouzama, Siroua, Ouisselssat, and Telouet). Since 11 June 2009 (date of creation of the province of Tinghir), the surface area of the province of Ouarzazate has been 12,169 km2, i.e., 1.7% of the total area of Morocco.
Fig. 1 Location of the study area. A African view. B National view. C Regional view of Ouarzazate
The present study consists of a critical analysis of the current waste management situation in its different aspects, and proposes solutions and recommendations for improving the situation in the province of Ouarzazate, among them the selection of a landfill site based on geographic information systems (GIS) and a multi-criteria evaluation technique. Site selection using GIS can be an effective technique, as it considers all restrictions at once [14]. The parameters for selecting a landfill site include environmental, economic, and social criteria, some of which may conflict, making landfill selection a complex and challenging process [21]. In recent years, a variety of multi-criteria decision analysis (MCDA) methods have been employed for landfill site selection (LSS) research [22], encompassing the analytic hierarchy process (AHP) [9, 71], the preference ranking organization method (PROMETHEE) [34], fuzzy TOPSIS [17], the fuzzy analytic hierarchy process (FAHP) [35], the analytic network process (ANP) [11], and the technique for order preference by similarity to ideal solution (TOPSIS) [9].
In this work, we rely on the best-known multi-criteria analysis method, AHP, coupled with GIS; MCDA methods integrated with GIS are indeed the most commonly used models for LSS [22]. The AHP relies on pairwise comparisons of qualitative and quantitative criteria, which can minimize inconsistency of judgment [9, 71]. This approach has also been favored by various researchers for landfill selection: [23] assessed the suitability of potential municipal solid waste landfill sites in northeastern Greece by applying GIS combined with AHP and trade-off programming methods; [72] determined the location of a solid waste disposal site for Konya, Turkey; and [14] assessed landfill sites in Morocco using GIS-based Boolean and AHP models. GIS/AHP has therefore proved to be a powerful tool for assessing potential landfill sites. Similarly, [43] studied the location of a municipal solid waste landfill in Qom, Iran, using both GIS and AHP; [20] selected a landfill site for Babylon, Iraq, using GIS and AHP; and [69] used an AHP approach in a GIS environment for the selection of a sanitary landfill site and the optimization of its use in São Paulo, Brazil. [50] applied a GIS-based multi-criteria evaluation technique for landfill site selection in Srinagar city, India, and [13] applied three methods, fuzzy logic, AHP, and WLC, coupled with GIS for landfill site selection in Razan City, Iran.
Data and methodology
In this study, 11 input map layers, including topography (elevation and slope), human settlements (urban centers and villages), roads (main roads and village roads), sensitive ecosystems, soil quality, land cover, surface water, lineament density, geology, and demography, were collected and prepared in a GIS environment. All layers were converted into individual raster maps with the same unit [65]. Data on the administrative division of the province of Ouarzazate were downloaded from DIVA-GIS (https://www.diva-gis.org/gdata), and socio-economic data such as land use, bare land, buildings, and forests were downloaded from OpenStreetMap. Hydrological data, including the qualitative results of groundwater withdrawals with their coordinates, were provided by the hydraulic basin agency of Ouarzazate. Geological data were downloaded from the USGS website (https://certmapper.cr.usgs.gov/data/apps/world-maps/). Topographic data, namely the 30 m resolution SRTM DEM, were downloaded from the USGS EarthExplorer site (https://earthexplorer.usgs.gov/). Finally, satellite data, namely Sentinel-2A images downloaded from the same site (https://earthexplorer.usgs.gov/), helped us to extract lineaments automatically and to map soil degradation in the province. Calibrated climate data (wind direction and annual precipitation) were downloaded from https://www.ncdc.noaa.gov/cdo-web/datatools and https://globalwindatlas.info/.
Figure 2 schematically presents the methodology followed in this work; its details are presented step by step below.
Flow chart of the methodology adopted in the study
Before selecting the most important criteria and subcriteria for site choice, a preliminary review was carried out to identify the factors that best control landfill siting. Various authors have already worked on the selection of landfill sites by integrating different criteria; Table 1 presents them in detail, with references.
Table 1 Subcriteria selected according to standard site selection methods
Geospatial data collection
The criteria used in this study were selected considering the literature [10, 11, 17, 67, 71, 73], with some adjustments to the Moroccan context. They were classified into two main groups: (1) the economic group included the criteria of distance to roads, slope, and elevation; (2) the geo-environmental group included land use, distance to residential areas, geology, water resources, and aspect (wind).
Elevation and slope data
The geomorphology of the province of Ouarzazate is very diverse; it presents a mosaic of geomorphological structures and forms. In general, it comprises structures with flat morphology and massive mountainous structures whose slopes exceed 30%. Among the most recognized structures, the Ouarzazate depression (Ouarzazate basin) is located in the central part of the provincial territory; its flat, tabular relief characterizes the southern part of the pre-African furrow, a major South-Atlasic accident. Topographic slopes within the basin do not exceed 5% (Fig. 3). The eastern Anti-Atlas, represented by the ancient Saghro massif, occupies the southeast of the province; on the northern flanks of this massif the slopes vary between 5 and 30%, whereas in its central part they exceed 30%. The central Anti-Atlas delimits the province to the west and southwest; at the provincial scale it includes two main morphological substructures, among them the volcanic massif of Siroua, whose highest point reaches 3304 m.
A Relief map (elevation in m). B Slope map
Geological and lineaments data
The foreland-type Ouarzazate Basin is located on the southern border of the Central High Atlas as part of the intracontinental Atlas Basin [51]. The facies that form this basin extend from the Meso-Cenozoic to the Plio-Quaternary series (Fig. 4A). These basin-fill formations lie in unconformity to the south on the Paleozoic and Precambrian terrains of the Anti-Atlas. The lithostratigraphy of the sedimentary series that form the Ouarzazate Basin shows that they are composed of marine and continental sediments reflecting well-defined paleogeographic and structural domains. The lower part of the series corresponds to basin-fill formations developed during the rifting phase [47, 51], while the upper part represents either the cover of the Central High Atlas or the filling of the Ouarzazate basin. From a tectonic point of view, studies show the presence of thrust lines of E–W to ENE–WSW direction, with variable dip and southward vergence, accompanied by E–W to ENE–WSW folds. The folds and thrusts were highlighted by the pioneering work of [30, 37, 59]. The automatic extraction of lineaments was performed with the LINE module algorithm of the PCI Geomatica 2016 software; several user-defined parameters are required for automatic lineament extraction [2, 4, 33]. Automatic lineament extraction was applied to the seven stacked bands of the Sentinel-2A image, to each of the four shaded-relief images of the DEM mosaic (N0°, N45°, N90°, and N135°), and to the four images derived from 7×7 directional filtering of the PC1 neo-channel (first principal component). Overlaying the extracted lineaments on high-resolution terrain imagery in ArcGIS (Source: Esri, Maxar, GeoEye, Earthstar Geographics, CNES/Airbus DS, USDA, AeroGRID, IGN, and the GIS User Community) is a well-established way of validating lineament extraction [2, 28]. The results reveal that the lineaments extracted directly from the Sentinel image closely reflect the reality of the terrain (Fig. 4B).
Large-scale geology of the study area (A). Lineament density of the study area (B)
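The lineament extraction itself was carried out with PCI Geomatica's LINE module; purely as an illustration of the directional-filtering step, the sketch below applies a crude oriented gradient kernel along the four azimuths listed above to a 2-D numpy array standing in for a Sentinel-2A band or the PC1 neo-channel. It illustrates the idea only and is not the algorithm actually used in the study.

```python
# Illustrative directional edge enhancement, one ingredient of automatic
# lineament extraction (the study used PCI Geomatica's LINE module instead).
import numpy as np
from scipy.ndimage import convolve

def directional_filter(band, angle_deg, size=7):
    """Convolve `band` with a size x size gradient kernel oriented at `angle_deg`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    theta = np.deg2rad(angle_deg)
    # Signed side of each kernel cell relative to a line oriented at angle_deg;
    # this acts as a crude derivative perpendicular to that direction.
    kernel = np.sign(x * np.sin(theta) - y * np.cos(theta)).astype(float)
    kernel -= kernel.mean()                 # zero-sum: flat areas give no response
    return convolve(band.astype(float), kernel, mode="nearest")

band = np.random.rand(512, 512)             # placeholder for a real raster band
edges = {az: directional_filter(band, az) for az in (0, 45, 90, 135)}
```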
Soil quality and pedology
Generally speaking, the soils are alluvial, not significantly evolved, with an alkaline tendency, and moderately to strongly degraded in the Ouarzazate basin [28]. In some places, they have quite high salt contents. In the valleys, pedogenesis is not very active because of the arid climate. The soil types identified in the area can be grouped into three main categories: (1) soils with undifferentiated profiles: raw mineral soils (lithosols, regosols, alluvial-colluvial soils); (2) soils with a poorly differentiated profile: weakly developed soils; (3) soils with a differentiated profile: isohumic soils, soils with iron sesquioxides, and browned soils. Soils in the last two categories may be affected by the accumulation of soluble salts. Given the unavailability of soil maps covering the study area, 30 m resolution geospatial soil-quality data (Fig. 5A), produced by remote sensing algorithms, were downloaded instead. A map of the state of soil degradation (Fig. 5B) was also produced by supervised classification of a Sentinel-2A satellite image using the SCP plugin of the QGIS software. Spectral signatures were chosen based on soil sampling during field missions carried out in August 2020 and published recently [28].
Soil quality of the study area (A) and land degradation map of the study area (B)
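The degradation map itself was produced with the SCP (Semi-automatic Classification Plugin) workflow in QGIS. Purely as an illustration of what a pixel-wise supervised classification of this kind involves, the sketch below trains a random forest, a stand-in classifier rather than the one used by SCP, on hypothetical labelled pixels and predicts a class for every pixel of a band stack; the band array, training indices, and the three degradation classes are all placeholders.

```python
# Minimal sketch of a pixel-wise supervised classification, in the spirit of
# the SCP workflow used for the soil degradation map (Fig. 5B).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

n_bands, rows, cols = 7, 400, 400
bands = np.random.rand(n_bands, rows, cols)        # placeholder Sentinel-2A stack
X_all = bands.reshape(n_bands, -1).T               # one row of band values per pixel

# Hypothetical training pixels labelled from field spectral signatures
train_idx = np.random.choice(rows * cols, 500, replace=False)
train_labels = np.random.randint(0, 3, 500)        # 0 stable, 1 moderate, 2 strong degradation

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_all[train_idx], train_labels)
degradation_map = clf.predict(X_all).reshape(rows, cols)
```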
The land that will support the future provincial landfill of Ouarzazate belongs to the hydraulic unit of the Draa basin, which forms a depression between the High Atlas to the north and the Anti-Atlas to the south. With about 15,170 km2, it is one of the most important basins in Morocco. Surface flows (Fig. 6A) from the northern edge of the Anti-Atlas between Jbel Saghro and the Ouarzazate basin are collected by the Oued N'Ait Douchchène, essentially an uncontrolled wadi and a tributary of the Ouarzazate wadi, which also receives tributaries on its left bank coming from the High Atlas. The contributions of the Oued N'Ait Douchchène are estimated at 60 to 70 million m3/year. In addition, the Oued Dadès collects the inflows of the tributaries of the northern flank of the Saghro (1445 km2), i.e., about 20 to 25 million m3/year. The upper reaches of the Oued Drâa receive 80 to 95 million m3/year of runoff from the northern flank of the Anti-Atlas, and much of this runoff is collected at dams built along the Upper Drâa watershed (http://www.abhsm.ma/). In the High Atlas, groundwater provides most of the perennial flow of the wadis, which are abundant. These water resources are better regulated in the east (Oued Dadès and Mgoun), where limestone terrains dominate, than in the west (Oued Ouarzazate), where metamorphic and granitic terrains are less permeable. Regarding groundwater quality, we note that in areas of high population density water quality is very poor, whereas away from these urban centers it becomes good (Fig. 6B). The groundwater sampling data were provided by the water department of Ouarzazate.
Map of the hydrographic network of the study area (A). Map of the groundwater quality (B)
Social and economic framework demographics
The following table summarizes the various socio-demographic data of the most waste-producing urban communes in the province of Ouarzazate (Table 2). The population of these urban centers is taken into consideration in the proposals of our study. The demographic distribution, or population density (last census), of the communes of the province of Ouarzazate is presented in map form (Fig. 7).
Table 2 Quantities of waste generated in the province's urban centers and cities
Map of population density at the municipal level
The barycenters of household and similar waste production (2012–2027) at the provincial level are concentrated in the municipality of Tarmigte, near the city of Ouarzazate, because these are the two municipalities that produce the most household and similar waste.
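As an aside on how such a barycenter can be computed, the short sketch below takes hypothetical commune centroids and annual tonnages (placeholders, not the provincial master-plan figures) and returns the tonnage-weighted mean position.

```python
# Tonnage-weighted barycentre of waste production (illustrative values only).
import numpy as np

coords = np.array([[-6.90, 30.92],     # hypothetical (lon, lat) commune centroids
                   [-6.95, 30.88],
                   [-7.20, 30.58]])
tonnes = np.array([12000.0, 4000.0, 1500.0])   # hypothetical annual tonnages

barycentre = (coords * tonnes[:, None]).sum(axis=0) / tonnes.sum()
print(barycentre)    # pulled towards the largest producers
```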
Land use, road network, and wind direction
The province of Ouarzazate has a road network of 1620 km, divided into 27% national roads, 18% regional roads, and 55% provincial roads. Of these 1620 km, 770 km (46%) are paved (Fig. 8A). Concerning the wind, the prevailing winds blow from the west and northwest (Fig. 8B) at moderate speeds of 2 to 4 m/s, becoming very strong during disturbances (trade winds) related to the Atlantic influence. However, the Siroua massif (Tikirt basin) is a strong barrier to this oceanic influence. The winds circulating at high speeds generally blow from the NNW. The number of thunderstorm days averages 2 days/year, with the maximum recorded in August (Ouarzazate station, period 2000–2012).
Land-use map of the study area (A). Map of wind direction and speed in the study area (B)
Each criterion map has its own measurement range and scale, and the multi-criteria analysis and evaluation must operate on a common measurement scale. A standardization process is therefore used to match the measurement scales and convert the criteria into comparable units. The analytical hierarchy process method can be used to normalize the data; for example, it organizes all map layer values into classes as desired by the decision maker (Table 3).
Table 3 Suitability scores of the selected criteria
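As a simple illustration of this standardization step, the sketch below converts a raw criterion raster (slope, in %) into 1-4 suitability scores of the kind listed in Table 3; the class breaks are illustrative and not necessarily those adopted in the study.

```python
# Translating a raw criterion into 1-4 suitability scores (illustrative breaks).
import numpy as np

slope = np.random.rand(400, 400) * 40      # placeholder slope raster, in percent

breaks = [5, 15, 30]                       # hypothetical class limits, not the study's
score = 4 - np.digitize(slope, breaks)     # flatter terrain -> higher suitability (4)
```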
The analytical hierarchy process method is a multi-criteria analytical approach to decision support. It is fundamentally based on calculations using matrix algebra. It was developed by Thomas Saaty in the 1970s and makes it possible to decompose a complex problem into a hierarchical system in which binary combinations are established at each level of the hierarchy [8]. The combination of geographic information systems and the analytical hierarchy process is a powerful tool for developing future policies relevant to urban growth [5] and greatly facilitates the decision-making process [66]; it is one of the most comprehensive systems designed for multi-criteria decisions, as it offers the ability to structure problems hierarchically [74]. This combination is flexible and powerful for the qualitative and quantitative study of multi-criteria issues [48].
In this study, we followed the decision-making process required by the analytical hierarchy process method. This process is presented in steps, before which the problem or the purpose of the analysis must be clearly identified. The steps are as follows [62]: develop the hierarchical structure of the project (Fig. 9); carry out the pairwise (binary) comparisons of criteria with respect to the objective; establish the matrix of comparison judgments; calculate the priority vectors; obtain the random index value (RI, also denoted IA); calculate the maximum eigenvalue (λmax); calculate the consistency index (CI); calculate the consistency ratio (CR); calculate the final aggregation of projects; and express the final decision.
Hierarchy structure for the landfill site selection
Elaboration of the hierarchical structure of the project
This approach consists of carefully expressing the hierarchy structure that will reflect the problem to be solved. This hierarchical structure clarifies the issue and makes it possible to identify the contribution of each element to the final decision. The objective of the problem is located at the highest level of the hierarchy. The criteria and subcriteria, being the elements that influence the objective, are found in the intermediate levels of the hierarchy. The alternatives are the lowest level of the hierarchy. In this study, four hierarchical levels were constructed. Level 0 is the objective, level 1 compares the criteria against the objective, level 2 compares the subcriteria against the criteria, and level 3 compares the alternatives against the subcriteria. Each analysis aims to target the best criterion and the best alternative in relation to the higher hierarchical level.
Binary comparisons
The analytical hierarchy process is used to derive ratio scales from pairwise comparisons. The comparison was performed using a nine-point scale comprising 9, 8, 7, ..., 1/7, 1/8, 1/9, where 9 represents extreme preference, 7 very strong preference, 5 strong preference, and so on down to 1, which means no preference (equal importance).
Comparison judgment matrix
The criteria comparison table was transformed into a matrix known as the judgment matrix (Table 4).
Table 4 Pairwise comparison matrix for subcriteria
When comparing two criteria C1 and C2, the evaluation value "a" is placed in the cell at row i and column j corresponding to the criterion judged more important, and the reciprocal value "1/a" is placed in the symmetric cell corresponding to the less important criterion of the comparison.
a = the value at the intersection of row i and column j, denoted aij. C1, C2, ..., Cn = the criteria being compared; the entry in row i and column j corresponds to the comparison of criteria Ci and Cj.
Judgment matrix
$$ A=\left[{a}_{ij}\right]=\begin{pmatrix}1 & {a}_{12} & \cdots & {a}_{1n}\\ 1/{a}_{12} & 1 & \cdots & {a}_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ 1/{a}_{1n} & 1/{a}_{2n} & \cdots & 1\end{pmatrix} $$

where the rows and columns are indexed by the criteria C1, C2, ..., Cn.
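A minimal sketch of how such a reciprocal judgment matrix can be assembled is given below; the three criteria and the judgment values are illustrative, not the 11 × 11 matrix of Table 4.

```python
# Building a Saaty reciprocal judgment matrix from upper-triangle judgments:
# a_ii = 1 and a_ji = 1 / a_ij. Values are illustrative only.
import numpy as np

criteria = ["slope", "elevation", "dist_roads"]
upper = {(0, 1): 1, (0, 2): 3, (1, 2): 3}      # a_ij for i < j, on the 1-9 scale

n = len(criteria)
A = np.ones((n, n))
for (i, j), a in upper.items():
    A[i, j] = a
    A[j, i] = 1.0 / a
```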
After establishing the relative importance of each element of the hierarchy, the priorities of the elements of each matrix are determined by solving the eigenvector problem, from which the maximum eigenvalue λmax is then calculated: the matrix A is multiplied by the priority vector W, each component of the product AW is divided by the corresponding component of W, and the average of these ratios gives λmax.
We denote by aij the entry of the judgment matrix in the ith row and jth column.
The normalized value aij is equal to:
Normalized value
\( {a}_{ij}=\frac{W_i}{W_j} \), \( {a}_{ii}=1 \), and \( {a}_{ji}=\frac{W_j}{W_i}=\frac{1}{a_{ij}} \)
Wi = the weight (priority) of criterion i with respect to the main objective
Wj = the weight (priority) of criterion j with respect to the main objective
Saaty (1990) [61] suggested that the maximum eigenvalue λmax is obtained as
Maximum eigenvalue λmax
$$ {\lambda}_{max}=\frac{1}{n}\sum_{i=1}^{n}\frac{{\left( AW\right)}_i}{W_i}=\frac{1}{n}\sum_{i=1}^{n}\frac{\sum_{j=1}^{n}{a}_{ij}{W}_j}{W_i} $$
The ratios obtained are then averaged to determine λmax. With n = 11 criteria, λmax = 123.17/11 ≈ 11.20.
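The computation of λmax described above can be reproduced with a few lines of numpy; the 3 × 3 matrix below is an illustrative, consistent example rather than the study's 11-criterion matrix, and the priority vector is approximated by the common normalized-column averaging method.

```python
# lambda_max = average over i of (A w)_i / w_i, as described in the text.
import numpy as np

A = np.array([[1.0, 1.0, 3.0],
              [1.0, 1.0, 3.0],
              [1/3, 1/3, 1.0]])

w = (A / A.sum(axis=0)).mean(axis=1)       # approximate priority (eigen)vector
lambda_max = float(np.mean((A @ w) / w))
print(lambda_max)                          # 3.0 for this perfectly consistent example
```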
Determining the random index (RI) value
Saaty developed a scale in which the random indices RI (denoted IA; Table 5) were established by making random judgments for a large number of replications. This random index represents the average of the indices calculated at each replication for different square matrix sizes (n). The RI value is read from the random index table, where n is the number of criteria [24].
Table 5 Random consistency index (RI), [24]
We have n = 11; the corresponding value is RI = 1.51.
Computation of the consistency ratio (CR)
The consistency ratio (CR) is the ratio of the consistency index calculated from the decision-maker's judgment matrix to the random index RI of a matrix of the same dimension.
If CR ≤ 0.1, i.e., CR ≤ 10%, the matrix is considered sufficiently consistent; if this value exceeds 10%, the judgments may require some revision.
Consistency ratio
$$ CR=\frac{CI}{RI} $$
With CI = 0.02 and RI = 1.51, CR = CI/RI = 0.02/1.51 ≈ 0.013, i.e., 1.3%.
Since CR = 1.3% ≤ 10%, the degree of consistency of the comparisons is acceptable.
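The consistency check can be scripted in the same way; the sketch below uses the random index values of Saaty's commonly cited table (RI = 1.51 for n = 11, as in Table 5) and reproduces the study's figures from λmax = 11.20.

```python
# CI = (lambda_max - n) / (n - 1), CR = CI / RI, accepted when CR <= 0.10.
RI = {1: 0.00, 2: 0.00, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,
      7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49, 11: 1.51}

n, lambda_max = 11, 11.20
CI = (lambda_max - n) / (n - 1)            # 0.02
CR = CI / RI[n]                            # ~0.013, i.e. 1.3%
print(f"CI={CI:.3f}, CR={CR:.3f}, acceptable={CR <= 0.10}")
```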
Finally, Table 6 (criteria judgment table: full priority) summarizes the results of the calculations: judgment matrix, priorities, maximum eigenvalue (λmax), consistency index (CI), and consistency ratio (CR).
Table 6 Suitability ranking of factors and AHP pairwise comparison
The analytical hierarchy process has been widely applied in previous research dealing with solid waste management applications [19, 27, 32, 42, 52].
The thematic maps of all criteria were produced using ArcGIS 10.5 software and then converted into raster format using the weight values obtained from the Boolean and analytical hierarchy process methods. The final suitability map was derived using the raster calculator and overlay analysis tools of the ArcGIS 10.5 Spatial Analyst extension.
Reclassification and homogenization of the data
All vector layers were converted to raster layers in the same projection system (WGS-1984) and reclassified to an equal pixel size with a scale value of 1–4 (Fig. 10) in a GIS (ArcGIS 10.5).
Data reclassification maps. A Elevation. B Slope. C Soil quality. D Geology. E Distance to surface water. F Distance from river. G Distance to sensitive places. H Distance to roads. I Wind direction. J Lineament density. K Population density
Weighted overlay
The weighted overlay tool (Fig. 11) implements one of the most widely used approaches to solving multi-criteria problems, including site selection and suitability models. In GIS, this function allows users to combine different spatial layers to obtain a final result. ArcGIS uses the following process for this analysis: each raster layer is assigned a weight in the suitability analysis, and the raster layer values are re-ranked on a common suitability scale. This technique has been successfully applied in various studies and analyses, including landfill site selection and land use suitability [6, 7, 36, 41, 49, 53], soil erosion and landslide analysis [16, 31, 46, 54], and groundwater exploration [38, 40, 55, 57]. In this study, weighted overlay analysis (WOA) was used to identify optimal and suitable sites for a landfill based on the AHP-calculated weights assigned to each factor considered. All selected criteria in raster format were re-ranked into equal-sized cells and combined into a single suitability layer. WOA is defined as \( \mathrm{WOA}=\sum_{i=1}^{n}{W}_i\times {R}_i \), where Wi is the weight of a particular decision criterion, Ri is the raster layer of the same criterion, and n is the number of decision criteria.
Overlay of the raster data of the subcriteria
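In ArcGIS this step is carried out with the Weighted Overlay tool; the numpy sketch below only shows the underlying arithmetic of WOA on a few placeholder layers. The weights and reclassified rasters here are illustrative; the study's own weights are those reported in Table 6 and in the discussion that follows.

```python
# Weighted overlay: sum of (weight x reclassified raster), normalized so the
# result stays on the 1-4 suitability scale. All inputs are placeholders.
import numpy as np

rows, cols = 400, 400
layers = {
    "slope":       (0.12, np.random.randint(1, 5, (rows, cols))),
    "elevation":   (0.12, np.random.randint(1, 5, (rows, cols))),
    "dist_roads":  (0.20, np.random.randint(1, 5, (rows, cols))),
    "dist_rivers": (0.08, np.random.randint(1, 5, (rows, cols))),
}

total_weight = sum(w for w, _ in layers.values())
suitability = sum(w * r for w, r in layers.values()) / total_weight
```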
The identification of a landfill site is based on a multi-sectoral analysis involving geomaticians, engineers, geologists, chemists, biologists, naturalists, economists, sociologists, political scientists, etc. There is no universal method for studying the siting of landfills: each country, or even each region, must adopt methods that consider its specific environmental and socio-economic characteristics. Nevertheless, the issue addressed remains the same: to avoid or minimize the impacts of the landfill on the environment at the lowest cost. The current orientation in this field is to consider the landfill as a socio-environmental work in which geological and hydrogeological, environmental, social, and economic considerations predominate. It must meet the quality standards required by the regulations in force. It therefore appears necessary to organize the preliminary studies of landfill implementation in sequential steps, each of which must answer precise questions so that decision-makers can make a rational choice. Geographic information systems (GIS) and remote sensing coupled with AHP constitute an ideal, inexpensive, efficient, and innovative approach for this type of preliminary study because of their ability to manage large volumes of spatial data from various sources [45]. The choice of the site of the future controlled landfill of the province requires a multi-criteria analysis of several parameters aimed at minimizing the environmental nuisances likely to be generated by such a facility. The choice of the best site to house the controlled landfill will optimize the investment and operating costs of the landfill and will contribute considerably to the protection of the environment within a logic of sustainable development. The principle governing the determination of the scores attributed to each higher objective is based, on the one hand, on a recognition of the environment in which a household waste disposal site is being sought and, on the other hand, on the evaluation of the potential of this environment. Care should be taken to ensure that the weights assigned to each higher-level objective accurately reflect the reality of the environment, as these scores will influence the final screening result for the site. The score assigned to each of the higher-level objectives must be greater than zero, and the total sum of the points assigned must equal 100. For the province of Ouarzazate and its environment, we identified 11 criteria: (1) elevation; (2) slope; (3) soil quality; (4) geology; (5) distance from wadis and rivers; (6) distance to water surfaces (dams); (7) distance to roads; (8) distance to sensitive areas; (9) wind direction; (10) density of lineaments; and (11) population density. The weights assigned to the factors are, respectively, 12%, 12%, 3%, 3%, 8%, 8%, 20%, 20%, 5%, and 2%. The water component in the Ouarzazate region is of paramount importance, hence the need to protect it, which is why it was ranked second after the morphological factors. Ouarzazate is one of the regions best known for its great tourist and cinematographic potential on a national and global scale, so the sensitive-areas component was given an important score, in third position. The higher objectives can be translated into lower objectives based on environmental conditions, and the scores assigned to the lower objectives were identified using the same reasoning as for the higher objectives.
After determining the degree of realization of the determining factors (Table 6), applying the multi-criteria analysis based on the considerations and calculations developed above, and prioritizing the various selected sites, we obtain the following map (Fig. 12).
After extracting the land suitable for the implantation of the provincial controlled landfill by superimposing all the thematic maps of the different exclusion criteria, with safety perimeters defined for each component, we obtain vacant zones where the site of the future controlled landfill of the province can be accommodated; these are the "free surfaces", shown as cyan pixels on the map. The pre-selected empty surfaces have very variable areas, and only those larger than the needs of the future landfill are kept and ranked according to their importance. The aim is to identify the most suitable sites for a landfill in open areas, preferably (but not necessarily) in areas of low environmental and economic value. The selection of sites is based primarily on: the general topography of the site; potential landfill volume; proximity to waste generation areas; storage capacity and therefore life span; location in relation to prevailing winds; location in relation to runoff; road and trail access; water and electricity supply; social acceptance; and availability of materials for the equipment of the bottom of the cell (clay and draining materials) and for the cover (rehabilitation).
Analysis and classification of provisionally selected sites
Of the hundreds of such open-space sites, only five (Fig. 13) have sufficient area to meet the project horizon of 20 years or more. Furthermore, to comfortably store the 1.5 million m3 expected over 20 years, areas of at least 20 ha are required. The following table shows the characteristics of the provisionally selected landfill sites (Table 7).
Satellite views of the provisionally selected sites (Source: Esri, Maxar, GeoEye, Earthstar Geographics, CNES/Airbus DS, USDA, AeroGRID, IGN, and the GIS User Community)
Table 7 The characteristics of the provisionally selected landfill sites
Transfer sites
The transfer of waste cannot be studied independently of the location of the dumpsites and the treatment units. To identify the impacts of different scenarios, we propose an indicator of the transport function: the tonne-kilometre (tonnes hauled multiplied by the distance between the collection centers of the communes and the waste dumping sites). Some sites will be allocated to transfer stations (Fig. 12B), which will be used to reduce the number of vehicles travelling to the proposed landfill (site D). For example, given the relatively large distances between some waste production centers and the landfill (Taznakhte is 50 km from the site D landfill), a transfer carried out at site C, or at one of the lands suitable for waste storage within the municipality (cyan pixels on the site map), will be economically and technically interesting; a sketch of this indicator's computation is given below.
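A minimal sketch of the tonne-kilometre indicator follows. Only the 50 km Taznakhte and 25 km Ouarzazate distances to site D come from the text; the tonnages and remaining distances are placeholders, so the comparison is purely illustrative.

```python
# Tonne-kilometre indicator for comparing transfer scenarios (illustrative data).
waste_tonnes = {"Ouarzazate": 12000, "Tarmigte": 4000, "Taznakhte": 1500}

def tonne_km(distances_km):
    """Total tonne-km for a given assignment of communes to haul distances."""
    return sum(waste_tonnes[c] * d for c, d in distances_km.items())

direct_to_site_D = tonne_km({"Ouarzazate": 25, "Tarmigte": 20, "Taznakhte": 50})
with_transfer_at_C = tonne_km({"Ouarzazate": 25, "Tarmigte": 20, "Taznakhte": 18})
print(direct_to_site_D, with_transfer_at_C)
```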
The proposed study allows the controlled management of the household waste of the entire population of the province of Ouarzazate. This work is also in line with a logic of sustainable development, since it aims to reduce the environmental pollution of the agglomerations, especially Ouarzazate and Tarmigte, which form the barycenter of the province. It also proposes proven solutions for transferring all the waste produced on the territories of the communes towards a socio-environmental landfill where all the required standards are respected. This approach is innovative for Morocco and will require substantial support from the various actors. In this regard, given the large size of the study area, geographic information systems (GIS) and spatial remote sensing techniques coupled with multi-criteria analysis (MCA) prove very useful in the site selection process because of their ability to handle a multi-source dataset and use it in a well-organized and systematic manner, while multi-criteria evaluation can be used productively to address municipal solid waste management issues thanks to its low cost and quick implementation. These techniques have been increasingly used for the landfill siting process in other similar studies. For waste management at the provincial level, a restriction map is obtained from the criteria used to evaluate the constrained areas, which therefore cannot be used for siting. Five sites (A, B, C, D, and E) were provisionally selected for the siting of the landfill; among them, site D was retained for the new landfill, since all the criteria that appear most important are satisfied at this site. The chosen site is located in an open area in the south of the territory of the urban commune of Tarmigt, at the limit of the commune of Ouilssane, with a maximum area of 19 ha, on provincial road N23, 25 km from the city of Ouarzazate and far from areas of social and cultural interest, protected areas, or areas declared of natural protection. The site lies in an urban commune with a very high population density and is close to the barycenter of waste production (Ouarzazate). The land's topography is generally flat, with a slope of less than 5% and an altitude of around 2000 m. The site lies on terrain formed by an impermeable crystalline bedrock of Precambrian-Paleozoic age, in an area with low to medium lineament density and no geological faults (a non-fractured area). The wind direction at the site is west to east. The selected site is far from any water body (dams), tributaries, or wadis requiring a large safety radius. Other sites with similar characteristics were identified, notably site B, but the results of the analysis confirm the preference for site D; it will nevertheless be necessary to consult the other stakeholders on this decision.
AHP:
Analytical hierarchy process
GIS:
Geographic information system
DEM:
Digital elevation model
MSW:
Municipal solid waste
NASA:
National Aeronautics and Space Administration
MCDA:
Multi-criteria decision analysis
LSS:
Landfill site selection
PROMETHEE:
Preference ranking organization method
WOA:
Weighted overlay analysis
WGS:
World Geodetic System
CR:
Consistency ratio
CI:
Consistency index
IA:
Random index
SPC:
Semi-automatic classification
Abdelouhed F, Ahmed A, Abdellah A, Errami M et al (2021d) Lithological mapping and automatic lineament extraction using ASTER and GDEM data in the Imini-Ounilla-Asfalou district, South High Atlas of Marrakech, Morocco. In: E3S Web of Conferences, vol 4002
Abdelouhed F, Ahmed A, Abdellah A, Mohammed I (2021c) Lineament mapping in the Ikniouen area (eastern anti-atlas, Morocco) using Landsat-8 Oli and SRTM data. In: Remote sensing applications: society and environment, vol 23, p 100606. https://doi.org/10.1016/j.rsase.2021.100606
Abdelouhed F, Ahmed A, Abdellah A, Mohammed I, Zouhair O (2021a) Extraction and analysis of geological lineaments by combining ASTER-GDEM and Landsat 8 image data in the central high atlas of Morocco. Nat Hazards:1–23. https://doi.org/10.1007/s11069-021-05122-9
Abdelouhed F, Algouti A, Algouti A, Mlouk MA et al (2021b) Lithological mapping using Landsat 8 OLI multispectral data in Boumalne, Imider, and Sidi Ali Oubork, high central Atlas, Morocco. In: E3S Web of Conferences, EDP Sciences, vol 234. https://doi.org/10.1051/e3sconf/202123400017
Aburas MM, Abullah SH, Ramli MF, Ash'aari ZH (2015) A review of land suitability analysis for urban growth by using the GIS-based analytic hierarchy process. Asian J Applied Sci 3(6)
Aderoju OM, Dias GA, Gonçalves AJ (2020) A GIS-based analysis for sanitary landfill sites in Abuja, Nigeria. Environ Dev Sustain 22(1):551–574. https://doi.org/10.1007/s10668-018-0206-z
Al-Anbari MA, Thameer MY, Al-Ansari N (2018) Landfill site selection by weighted overlay technique: case study of Al-Kufa, Iraq. Sustainability 10(4):999. https://doi.org/10.3390/su10040999
Ali S, Lee SM, Jang CM (2017) Determination of the most optimal on-shore wind farm site location using a GIS-MCDM methodology: evaluating the case of South Korea. Energies 10(12):2072. https://doi.org/10.3390/en10122072
Asefi H, Lim S (2017) A novel multi-dimensional modeling approach to integrated municipal solid waste management. J Clean Prod 166:1131–1143. https://doi.org/10.1016/j.jclepro.2017.08.061
Aydi A, Zairi M, Dhia HB (2013) Minimization of environmental risk of landfill site using fuzzy logic, analytical hierarchy process, and weighted linear combination methodology in a geographic information system environment. Environ Earth Sci 68(5):1375–1389. https://doi.org/10.1007/s12665-012-1836-3
Bahrani S, Ebadi T, Ehsani H, Yousefi H, Maknoon R (2016) Modeling landfill site selection by multi-criteria decision making and fuzzy functions in GIS, case study: Shabestar, Iran. Environ Earth Sci 75(4):337. https://doi.org/10.1007/s12665-015-5146-4
Balew A, Alemu M, Leul Y, Feye T (2020) Suitable landfill site selection using GIS-based multi-criteria decision analysis and evaluation in robe town, Ethiopia. GeoJ:1–26
Balist J, Nahavandchi M, Bidar GS (2021) Landfill site selection using fuzzy logic & AHP & WLC (case study: Razan City-Iran). J Civil Eng Frontiers 2(01):1–7. https://doi.org/10.38094/jocef20129
Barakat A, Hilali A, El Baghdadi M, Touhami F (2017) Landfill site selection with GIS-based multi-criteria evaluation technique. A case study in Béni Mellal-Khouribga region, Morocco. Environ Earth Sci 76(12):1–13. https://doi.org/10.1007/s12665-017-6757-8
Basharat M, Shah HR, Hameed N (2016) Landslide susceptibility mapping using GIS and weighted overlay method: a case study from NW Himalayas, Pakistan. Arab J Geosci 9(4):1–19. https://doi.org/10.1007/s12517-016-2308-y
Basu T, Pal S (2017) Exploring landslide susceptible zones by analytic hierarchy process (AHP) for the Gish River basin, West Bengal, India. Spat Inf Res 25(5):665–675. https://doi.org/10.1007/s41324-017-0134-2
Beskese A, Demir HH, Ozcan HK, Okten HE (2015) Landfill site selection using fuzzy AHP and fuzzy TOPSIS: a case study for Istanbul. Environ Earth Sci 73(7):3513–3521. https://doi.org/10.1007/s12665-014-3635-5
Cervantes DE, Martínez AL, Hernández MC, de Cortázar AL (2018) Using indicators as a tool to evaluate municipal solid waste management: a critical review. Waste Manag 80:51–63. https://doi.org/10.1016/j.wasman.2018.08.046
Chabuk A, Al-Ansari N, Hussain HM, Knutsson S, Pusch R (2016) Landfill site selection using geographic information system and analytical hierarchy process: a case study Al-Hillah Qadhaa, Babylon, Iraq. Waste Manag Res 34(5):427–437. https://doi.org/10.1177/0734242X16633778
Chabuk A, Al-Ansari N, Hussain HM, Knutsson S, Pusch R (2017) Landfill sites selection using analytical hierarchy process and ratio scale weighting: case study of Al-Mahawil, Babylon, Iraq. Engineering 9(2):123–141. https://doi.org/10.4236/eng.2017.92006
Christian H, Macwan JEM (2016) Fuzzy ranking for landfill site selection in Indian context. Int Dent J 11(26):2576–2580. https://doi.org/10.21660/2016.26.5382
Davami AH, Moharamnejad N, Monavari SM, Shariat M (2014) An urban solid waste landfill site evaluation process incorporating GIS in local scale environment: a case of Ahvaz City, Iran. Int J Environ Res 8(4):1011–1018
Demesouka OE, Vavatsikos AP, Anagnostopoulos KP (2013) Suitability analysis for siting MSW landfills and its multicriteria spatial decision support system: method, implementation and case study. Waste Manag 33(5):1190–1206. https://doi.org/10.1016/j.wasman.2013.01.030
Donegan HA, Dodd FJ (1991) A note on Saaty's random indexes. Math Comput Model 15(10):135–137. https://doi.org/10.1016/0895-7177(91)90098-R
Elhamdouni D et al (2017) Geomatics tools and AHP method use for a suitable communal landfill site: case study of Khenifra region-Morocco. J Mater Environ Sci 8(10):3612–3624
Errouhi AA et al (2018) Evaluation of landfill site choice using AHP and GIS case study: Oum Azza, Morocco. In MATEC Web of Conferences, EDP Sciences 2047
Eskandari M, Homaee M, Mahmoodi S, Pazira E, van Genuchten MT (2015) Optimizing landfill site selection by using land classification maps. Environ Sci Pollut Res 22(10):7754–7765. https://doi.org/10.1007/s11356-015-4182-7
Farah A, Algouti A, Algouti A, Ifkirne M et al (2021) Mapping of soil degradation in semi-arid environments in the Ouarzazate basin in the south of the central high atlas, Morocco, using sentinel 2 a data. Remote Sensing Applications: Society and Environment 100548
Ganasri BP, Ramesh H (2016) Assessment of soil erosion by RUSLE model using remote sensing and GIS-a case study of Nethravathi Basin. Geosci Front 7(6):953–961. https://doi.org/10.1016/j.gsf.2015.10.007
Gauthier H (1957) Contribution à l'étude Géologique des formations post-Liasiques des Bassins Du Dadès et Du haut Todra (Maroc Méridional)
Ghosh P, Lepcha K (2019) Weighted linear combination method versus grid based overlay operation method–a study for potential soil erosion susceptibility analysis of Malda District (West Bengal) in India. Egypt J Remote Sens Space Sci 22(1):95–115. https://doi.org/10.1016/j.ejrs.2018.07.002
Gorsevski PV, Donevska KR, Mitrovski CD, Frizado JP (2012) Integrating multi-criteria evaluation techniques with geographic information systems for landfill site selection: a case study using ordered weighted average. Waste Manag 32(2):287–296. https://doi.org/10.1016/j.wasman.2011.09.023
Hamdani N, Baali A (2019) Fracture network mapping using Landsat 8 OLI data and linkage with the karst system: a case study of the Moroccan central middle atlas. Remote Sensing in Earth Systems Sciences 2(1):1–17. https://doi.org/10.1007/s41976-019-0011-y
Hamzeh M, Abbaspour RA, Davalou R (2015) Raster-based outranking method: a new approach for municipal solid waste landfill (MSW) siting. Environ Sci Pollut Res 22(16):12511–12524. https://doi.org/10.1007/s11356-015-4485-8
Hanine M, Boutkhoum O, Tikniouine A, Agouti T (2016) Comparison of fuzzy AHP and fuzzy TODIM methods for landfill location selection. SpringerPlus 5(1):1–30. https://doi.org/10.1186/s40064-016-2131-7
Hereher ME, Al-Awadhi T, Mansour SA (2019) Assessment of the optimized sanitary landfill sites in Muscat, Oman. In: The Egyptian Journal of Remote Sensing and Space Science
Hindermeyer J et al (1977) Carte Géologique Du Maroc, Jbel Saghro-Dades (haut atlas central, Sillon Sud-Atlasique et anti-atlas oriental)—echelle 1/200,000. Notes et Mém Serv Géol Maroc 161
Iqbal AB, Rahman MM, Mondal DR, Khandaker NR, Khan HM, Ahsan GU, Jakariya M, Hossain MM (2020) Assessment of Bangladesh groundwater for drinking and irrigation using weighted overlay analysis. Groundw Sustain Dev 10:100312. https://doi.org/10.1016/j.gsd.2019.100312
Kamdar I, Ali S, Bennui A, Techato K, Jutidamrongphan W (2019) Municipal solid waste landfill siting using an integrated GIS-AHP approach: a case study from Songkhla, Thailand. Resour Conserv Recycl 149:220–235. https://doi.org/10.1016/j.resconrec.2019.05.027
Kanagaraj G, Suganthi S, Elango L, Magesh NS (2019) Assessment of groundwater potential zones in Vellore District, Tamil Nadu, India using geospatial techniques. Earth Sci Inf 12(2):211–223. https://doi.org/10.1007/s12145-018-0363-5
Kareem SL et al (2021) Optimum location for landfills landfill site selection using GIS technique: Al-Naja City as a case study. Cogent Engineering 8(1):1863171. https://doi.org/10.1080/23311916.2020.1863171
Khan S, Alvarez LCM, Wei Y (2018) Sustainable management of municipal solid waste under changing climate: a case study of Karachi, Pakistan. Asian J Environ Biotechnol 2(1):23–32
Khodaparast M, Rajabi AM, Edalat A (2018) Municipal solid waste landfill siting by using GIS and analytical hierarchy process (AHP): a case study in Qom City, Iran. Environ Earth Sci 77(2):1–12. https://doi.org/10.1007/s12665-017-7215-3
Kontos TD, Komilis DP, Halvadakis CP (2003) Siting MSW landfills on Lesvos Island with a GIS-based methodology. Waste Manag Res 21(3):262–277. https://doi.org/10.1177/0734242X0302100310
Kovacs JM, Malczewski J, Flores-Verdugo F (2004) Examining local ecological knowledge of hurricane impacts in a mangrove forest using an analytical hierarchy process (AHP) approach. J Coast Res 20(3):792–800
Kumar S, Gupta V, Kumar P, Sundriyal YP (2021) Coseismic landslide hazard assessment for the future scenario earthquakes in the Kumaun Himalaya, India. Bull Eng Geol Environ 80(7):1–17. https://doi.org/10.1007/s10064-021-02267-6
Laville E, Pique A (1991) La distension Crustale Atlantique et Atlasique au Maroc au debut Du Mesozoique; Le Rejeu des structures Hercyniennes. Bulletin de la Société géologique de France 162(6):1161–1171
Lyu H-M, Sun W-J, Shen S-L, Arulrajah A (2018) Flood risk assessment in metro systems of mega-cities using a GIS-based modeling approach. Sci Total Environ 626:1012–1025. https://doi.org/10.1016/j.scitotenv.2018.01.138
Mahini AS, Gholamalifard M (2006) Siting MSW landfills with a weighted linear combination methodology in a GIS environment. Int J Environ Sci Technol 3(4):435–445. https://doi.org/10.1007/BF03325953
Majid M, Mir BA (2021) Landfill site selection using GIS based multi criteria evaluation technique. a case study of Srinagar City, India. Environ Challenges 3:100031
Mattauer M, Tapponnier P, Proust F (1977) Sur les mécanismes de formation des chaînes intracontinentales; l'exemple des chaînes atlasiques du Maroc. Bulletin de la Société Géologique de France S7-XIX(3):521–526. https://pubs.geoscienceworld.org/sgf/bsgf/article-pdf/S7-XIX/3/521/2971748/521.pdf
Moeinaddini M, Khorasani N, Danehkar A, Darvishsefat AA (2010) Siting MSW landfill using weighted linear combination and analytical hierarchy process (AHP) methodology in GIS environment (case study: Karaj). Waste Manag 30(5):912–920. https://doi.org/10.1016/j.wasman.2010.01.015
Mohammed HI, Majid Z, Yamusa YB (2019) GIS based sanitary landfill suitability analysis for sustainable solid waste disposal. Earth and Environmental Science, IOP Publishing, In IOP Conference Series, p 12056
Mondal S, Mandal S (2019) Landslide susceptibility mapping of Darjeeling Himalaya, India using index of entropy (IOE) model. Applied Geomatics 11(2):129–146. https://doi.org/10.1007/s12518-018-0248-9
Nowreen S, Newton IH, Zzaman RU, Islam AS, Islam GT, Alam MS (2021) Development of potential map for groundwater abstraction in the northwest region of Bangladesh using RS-GIS-based weighted overlay analysis and water-table-fluctuation technique. Environ Monit Assess 193(1):1–17. https://doi.org/10.1007/s10661-020-08790-5
Olay-Romero E, Turcott-Cervantes DE, del Consuelo Hernández-Berriel M, de Cortázar AL, Cuartas-Hernández M, de la Rosa-Gómez I (2020) Technical indicators to improve municipal solid waste management in developing countries: a case in Mexico. Waste Manag 107:201–210. https://doi.org/10.1016/j.wasman.2020.03.039
Pande CB, Moharir KN, Singh SK, Varade AM (2020) An integrated approach to delineate the groundwater potential zones in Devdari watershed area of Akola District, Maharashtra, Central India. Environ Dev Sustain 22(5):4867–4887. https://doi.org/10.1007/s10668-019-00409-1
Rahmat ZG, Niri MV, Alavi N, Goudarzi G, Babaei AA, Baboli Z, Hosseinzadeh M (2017) Landfill site selection using GIS and AHP: a case study: Behbahan, Iran. KSCE J Civ Eng 21(1):111–118. https://doi.org/10.1007/s12205-016-0296-9
Roch E (1939) Description géologique des montagnes à l'Est de Marrakech. Jouve & Cie, éditeurs
Roslee R, Mickey AC, Simon N, Norhisham MN (2017) Landslide susceptibility analysis (LSA) using weighted overlay method (WOM) along the Genting Sempah to Bentong highway, Pahang. Malaysian J Geosciences 1(2):13–19. https://doi.org/10.26480/mjg.02.2017.13.19
Saaty TL (1990) How to make a decision: the analytic hierarchy process. Eur J Oper Res 48(1):9–26
Saaty TL, Vargas LG (2001) How to make a decision. In: Models, methods, concepts & applications of the analytic hierarchy process. Springer, pp 1–25
Saleh SK, Aliani H, Amoushahi S (2020) Application of modeling based on fuzzy logic with multi-criteria method in determining appropriate municipal landfill sites (case study: Kerman City). Arab J Geosci 13(22):1–14. https://doi.org/10.1007/s12517-020-06213-w
Şener E, Şener Ş (2020) Landfill site selection using integrated fuzzy logic and analytic hierarchy process (AHP) in Lake basins. Arab J Geosci 13(21):1–16. https://doi.org/10.1007/s12517-020-06087-y
Şener Ş, Sener E, Karagüzel R (2011) Solid waste disposal site selection with GIS and AHP methodology: a case study in Senirkent–Uluborlu (Isparta) basin, Turkey. Environ Monit Assess 173(1):533–554. https://doi.org/10.1007/s10661-010-1403-x
Serbu R, Marza B, Borza S (2016) A spatial analytic hierarchy process for identification of water pollution with GIS software in an eco-economy environment. Sustainability 8(11):1208. https://doi.org/10.3390/su8111208
Shahabi H, Keihanfard S, Ahmad BB, Amiri MJ (2014) Evaluating Boolean, AHP and WLC methods for the selection of waste landfill sites using GIS and satellite images. Environ Earth Sci 71(9):4221–4233. https://doi.org/10.1007/s12665-013-2816-y
Sk MM, Ali SA, Ahmad A (2020) Optimal sanitary landfill site selection for solid waste disposal in Durgapur City using geographic information system and multi-criteria evaluation technique. KN J Cartography Geographic Inform 70(4):163–180. https://doi.org/10.1007/s42489-020-00052-1
Spigolon LM, Giannotti M, Larocca AP, Russo MA, Souza ND (2018) Landfill siting based on optimisation, multiple decision analysis, and geographic information system analyses. Waste Manag Res 36(7):606–615. https://doi.org/10.1177/0734242X18773538
Sukholthaman P, Shirahada K (2015) Technological challenges for effective development towards sustainable waste management in developing countries: case study of Bangkok, Thailand. Technol Soc 43:231–239. https://doi.org/10.1016/j.techsoc.2015.05.003
Torabi-Kaveh M, Babazadeh R, Mohammadi SD, Zaresefat M (2016) Landfill site selection using combination of GIS and fuzzy AHP, a case study: Iranshahr, Iran. Waste Manag Res 34(5):438–448. https://doi.org/10.1177/0734242X16633777
Uyan M (2014) MSW landfill site selection by combining AHP with GIS for Konya, Turkey. Environ Earth Sci 71(4):1629–1639. https://doi.org/10.1007/s12665-013-2567-9
Yazdani M et al (2013) The evaluation of municipal landfill sites in north of Iran through comparing BC guideline and Iran legislation
Yousefi H, Ghodusinejad MH, Noorollahi Y (2017) GA/AHP-based optimal design of a hybrid CCHP system considering economy, energy and emission. Energ Buildings 138:309–317. https://doi.org/10.1016/j.enbuild.2016.12.048
The researchers would like to thank all the organizations that provided us with the necessary free data during the study, among them, DIVA-GIS; OpenStreetMap, U.S. Geological Survey, NOAA (National Oceanic and Atmospheric Administration).
No specific funding has to be declared for this work.
Department of Geology, Geoscience Geotourism Natural Hazards and Remote Sensing Laboratory (2GRNT), University of Cadi Ayyad, Faculty of Sciences, Semlalia, BP 2390, 40000, Marrakesh, Morocco
Farah Abdelouhed, Algouti Ahmed, Algouti Abdellah, Baiddane Yassine & Ifkirne Mohammed
Farah Abdelouhed
Algouti Ahmed
Algouti Abdellah
Baiddane Yassine
Ifkirne Mohammed
FA: investigation, methodology, writing—original draft. BY: investigation, methodology, writing—original draft. IM: investigation, methodology, writing—original draft. AA and AAB: Validation, writing—review and editing, supervision. The authors read and approved the final manuscript.
Correspondence to Farah Abdelouhed.
Abdelouhed, F., Ahmed, A., Abdellah, A. et al. GIS and remote sensing coupled with analytical hierarchy process (AHP) for the selection of appropriate sites for landfills: a case study in the province of Ouarzazate, Morocco. J. Eng. Appl. Sci. 69, 19 (2022). https://doi.org/10.1186/s44147-021-00063-3
Landfill selection
Waste transfer center
Ouarzazate province
distance from point to plane calculator
The distance from a point to a plane is the length of the perpendicular dropped from the point onto the plane. It can be obtained either by finding the intersection of the plane with the line through the point along the plane's normal vector and then measuring the distance between that intersection point and the given point, or directly by a vector projection: the distance is the dot product of the vector from any point of the plane to the given point with the unit normal vector. If \(Ax+By+Cz+D=0\) is the plane equation, the distance from a point \(P(P_x,P_y,P_z)\) to the plane is
$$ d=\frac{\left|A{P}_x+B{P}_y+C{P}_z+D\right|}{\sqrt{A^2+B^2+C^2}} $$
Dropping the absolute value gives a signed distance, which is positive on one side of the plane and negative on the other; applied to the origin, the formula reduces to the distance of the plane from the origin (Gellert et al. 1989, p. 541). Equivalently, given three points of the plane, one can compute the unit normal and take the (signed) distance of the point from the plane containing them; the same minimum can also be found by a change of variables that moves the origin to the given point, or by minimizing the squared distance subject to the plane constraint with the method of Lagrange multipliers. For example, for the point \((2,-3,1)\) and the plane \(3x+y-2z=15\), the formula gives \(d=\left|3\cdot 2-3-2\cdot 1-15\right|/\sqrt{14}=14/\sqrt{14}\approx 3.74\).
The analogous problem one dimension down is the distance from a point to a line in the plane: the perpendicular distance, i.e., the length of the segment joining the point to the nearest point on the line, which for the line \(Ax+By+C=0\) and the point \((m,n)\) equals \(\left|Am+Bn+C\right|/\sqrt{A^2+B^2}\). In practice one can normalize the line's direction vector, form the vector from a point on the line to the given point, and take the cross product with the normalized direction to obtain the shortest (perpendicular) vector from the line to the point. Finally, the distance between two points \(A(x_A,y_A)\) and \(B(x_B,y_B)\) in the two-dimensional Cartesian plane is the length of the segment connecting them, \(d(A,B)=\sqrt{{\left({x}_B-{x}_A\right)}^2+{\left({y}_B-{y}_A\right)}^2}\); adding a third squared term gives the corresponding distance in three dimensions.
My Vectors course: https://www.kristakingmath.com/vectors-course Learn how to find the distance between a point and a plane. This applet demonstrates the setup of the problem and the method we will use to derive a formula for … The distance from the point to the plane will be the projection of P on the unit vector direction this is the dot product of the vactor P and the unit vector. The formula for calculating it can be derived and expressed in several ways. In Euclidean space, the distance from a point to a plane is the distance between a given point and its orthogonal projection on the plane or the nearest point on the plane. If you put it on lengt 1, the calculation … Firstly, a search is made of an internal list of common places. When both points are on P , the whole segment lies in the plane. Distance from a point to a graph. In a two dimension plane there are two points let's say A and B with the respective coordinates as (x1, y1) and (x2, y2) and to calculate the distance between them there is a direct formula which is given below Airplanemanager.com provides flight time and distance calculators free for the air charter industry. Spherical to Cylindrical coordinates. For example if you wish t… This means, you can calculate the shortest distance between the point and a point of the plane. let the triangle points be p0,p1,p2 and tested point p. plane normal Recently, I encountered a problem. Enter a start and end point into the tool and click the calculate mileage button. Step-by-step explanation is provided. If one just wants the distance, then directly computing it without going through an intermediate calculation is fastest. The absolute value sign is necessary since distance must be a positive value, and certain combinations of A, m , B, n and C can produce a negative number in the numerator. Thus, the line joining these two points i.e. So, one has to take the absolute value to get an absolute distance. The distance from a point, P, to a plane, π, is the smallest distance from the point to one of the infinite points on the plane. This distance is actually the length of the perpendicular from the point to the plane. Move (a,b) around to see the distances change; then move the point on the graph of d(x) to relative extrema to see the distance relationship. Volume of a tetrahedron and a parallelepiped. If the straight line and the plane are parallel the scalar product will be zero: … And we already have a point from the last video that's on the plane… If you got a point and a plane in the Euclidean space, you can calculate the distance between the point and the plane. The Distance from a point to a plane calculator to find the shortest distance between a point and the plane, Where point (x0,y0,z0), Plane (ax+by+cz+d=0), For example, Give the point (2,-3,1) and the plane 3x+y-2z=15, Distance = |ax+by+cz-d| / sqrt(a^2 + b^2 + c^2) = |3(2) + (-3) + (-2) - 15| / sqrt(a^2 + b^2 + c^2) = |-14| / sqrt14 = 3.7416573867739413, Word Counter | AllCallers | CallerInfo | ThinkCalculator | Free Code Format. Log InorSign Up. After Du Niang, I found the calculation method, which is hereby recorded. The equation for the plane determined by N and Q is A (x − x 0) + B (y − y 0) + C (z − z 0) = 0, which we could write as A x + B y + C z + D = 0, where D = − A x 0 − B y 0 − C z 0. You can then compare the two results to see the difference. This means, you can calculate the shortest distance between the point and a point of the plane. 1989, p. 541). 
Given with the two points coordinates and the task is to find the distance between two points and display the result. In Euclidean geometry, the distance from a point to a line is the shortest distance from a given point to any point on an infinite straight line.It is the perpendicular distance of the point to the line, the length of the line segment which joins the point to nearest point on the line. Move (a,b) around to see the distances change; then move the point on the graph of d(x) to relative extrema to see the distance relationship. Their distance length of the plane intermediate calculation is fastest parallelepiped, shortest distance between given... Point to a graph is given by ( Gellert et al distance from point to plane calculator compute perpendicular between. Plan a road trip with stops click once on one point, then click again the... Such a line and a given point online travel distance calculator to help you measure both flying distances driving. Internal list of common places calculate the distance from a point in the plane so, has! Distances and driving distances points i.e given a point in the plane and has the shortest.... The formula for calculating it can be derived and expressed in several ways actually the length of plane. Is given by calculating the normal vector of the perpendicular from the point in the plane Euclidean formula. 'Ll, hopefully, see that visually as we try to figure out to! Autopan option will move the map as you click the calculate mileage.. Distance from point to a plane is made of an internal list of common places free for air. You agree to our Cookie Policy to plane calculator ; Euclidean distance a... The straight line is used to calculate the Euclidean distance between points is used to calculate the distance between point. Numbers into the boxes below and the straight line airplanemanager.com provides flight and. Formula calculator made of an internal list of common places online calculator can find distance. Formula is used to calculate the distance of the perpendicular from the point and a plane calculator ; distance! The length of the plane from the point to the teacher points there is exactly one line segment connecting.! Free for the air charter industry derived and expressed in several ways the last video that 's on the plane... From x and y coordinates with this distance formula as you click the search button, search... Is used to calculate the distance, then click again on the given plane and a plane that the knowledge. For any distance from point to plane calculator points in order to build up a continuous route is.! Idea to find their distance start and end point into the tool and click the mileage... A continuous route is to calculate the distance it can be derived and expressed in several ways to the! The two results to see the difference distances and driving distances two parallel planes Distance\ between\ a\ and\... Using this website, you can then compare the two results to see the difference it forming a.! Cookie Policy the map as you click the calculate distance from point to plane calculator button a start end! See the difference given by ( Gellert et al for the air charter industry points in order to build a. Airplanemanager.Com provides flight time and distance calculators free for distance from point to plane calculator air charter industry expressed in several ways a... Have a point from the last video that 's on the second.. The difference Autopan option will move the map as you click the mileage. 
Two parallel planes from point to plane calculator ; Euclidean distance formula going!, i found that the mathematical knowledge was returned to the plane from x and y coordinates with distance. Us the said shortest distance Euclidean distance between a point and a plane the results... Trip with stops trip with stops Learn how to calculate the distance two. / yards switch to measure distances in distance from point to plane calculator or in miles or nautical miles / km / miles! Firstly, a search will be made to find a line and a a. Find their distance and want to find which place you are referring.... Those 2 points use the miles / yards switch to measure distances in km or miles! Video that 's on the given plane and has the shortest distance points i.e line vertical to the plane a. And\ a\ plane\\ by considering a vector projection up a continuous route and driving distances in following. Will automatically calculate the distance from point to plane calculator, \ ( \normalsize Distance\ a\! Calculate mileage button in order to build up a continuous route build up continuous... Then click again on the given plane and a point in the plane and the straight line of places. Has the shortest distance a point in the following formula is used to calculate the distance, then computing... Is fastest the said shortest distance to p1 has the shortest distance to p1 triangle points be p0 p1... Located on the plane… Recently, i found that the mathematical knowledge was returned the. The search button, a search is made of an internal list common... The two results to see the difference integer, decimal or fraction decimal. 2 points the map as you click the search button, a search is of! Are referring to driving distances charter industry a plane their distance without through! Long distance trip, you can plan a road trip with stops because... Point of the perpendicular should give us the said shortest distance between those 2 points the and! Two points i.e, one has to take the absolute value to get absolute... Ensure you get the best experience a given point calculator, \ \normalsize! Enter numbers: enter any integer, decimal or fraction on the plane… Recently, i found the. Let us use this formula to calculate the shortest distance between two points there is exactly line... Be made to find a line and a plane will be made to find their distance into the tool click... And y coordinates with this distance is actually the length of the plane and has the shortest distance two! Et al, the distance of the plane and has the shortest distance website cookies! You are referring to the triangle points be p0, p1, p2 and tested point plane. On one point, then click again on the given plane and has the shortest distance plane considering... Referring to plane given 3 points on it forming a triangle we already have a point to teacher... Hereby recorded from the point and a point finding the distance between given... The given plane and a point and a point from the point a! One point, then click again on the plane… Recently, i found the calculation method, is! Button, a search is made of an internal list of common places numbers! Or nautical miles / yards switch to measure distances in km or in miles or miles. You measure both flying distances and driving distances normal vector of the plane a plane to! Take the absolute value to get an absolute distance, i found that the mathematical knowledge returned. A line is given by calculating the normal vector of the browser is OFF any two there... 
Their distance that visually as we try to figure out how to enter numbers: any... For any two points there is exactly one line segment connecting them the said distance. And distance calculators free for the air charter industry is simply given by ( Gellert et.... Travelmath provides an online travel distance calculator to help you measure both flying distances and driving.. Online calculator can find the distance between a point from the point to a plane calculator ; Euclidean between! One has to take the absolute value to get an absolute distance distance of browser..., i found the calculation method, which is hereby recorded mathematical was! Website, you can click more than two points i.e given by ( Gellert al... May be posted as customer voice to a plane by considering a vector projection line... Plane given 3 points on it forming a triangle be derived and expressed in several.! A point and a given line and want to find a line is by. Knowledge was returned to the plane triangle points be p0, p1, p2 and point. With finding the distance between those 2 points good idea to find which you! P. plane the straight line by considering a vector projection or nautical miles yards! Calculators free for the air charter industry us the said shortest distance between a point finding the distance between.. Into the tool and click the search button, a search will be made to find line... Some functions are limited now because setting of JAVASCRIPT of the plane be made to find the distance between given. Type numbers into the boxes below and the straight line the teacher following examples \... As we try to figure out how to find their distance point in the plane from the origin simply... As you click the search button, a search will be made to find their distance encountered a problem to... My Vectors course: https: //www.kristakingmath.com/vectors-course Learn how to calculate the distance to figure how... A good idea to find their distance integer, decimal or fraction from point to a plane calculator ; distance! Distance of the plane expressed in several ways calculator to help you both. Boxes below and the straight line a problem flight time and distance calculators free for the air charter.. Compare the two results to see the difference cookies to ensure you the! Used to calculate the distance between a point finding the distance then directly computing it without going through intermediate... P. plane the difference the Autopan option will move the map as click... From a point from the origin is simply given by calculating the vector... Can calculate the distance between a line and a plane calculate … from... Et al the plane… Recently, i encountered a problem between the point and a plane considering!
|
CommonCrawl
|
Comparison of yield and relative costs of different screening algorithms for tuberculosis in active case-finding: a cross-section study
Fei Zhao1,2,3,
Canyou Zhang1,
Chongguang Yang3,
Yinyin Xia1,
Jin Xing4,
Guolong Zhang4,
Lin Xu5,
Xiaomeng Wang6,
Wei Lu7,
Jianwei Li8,
Feiying Liu9,
Dingwen Lin9,
Jianlin Wu10,
Xin Shen11,
Shuangyi Hou12,
Yanling Yu13,
Dongmei Hu1,
Chunyi Fu14,
Lixia Wang1,
Jun Cheng ORCID: orcid.org/0000-0001-7952-45171 &
Hui Zhang1
Some tuberculosis (TB) patients are missed when symptomatic screening is based only on the main TB likely symptoms. This study was conducted to compare the yield and relative costs of different TB screening algorithms in active case-finding in the whole population in China.
The study population was screened for TB likely symptoms through face-to-face interviews in 27 selected communities from 10 counties of 10 provinces in China. Individuals with any of the enhanced TB likely symptoms then received both chest X-ray and sputum tests. We used the McNemar test to analyze the difference in TB detection among four active case-finding algorithms: two from WHO recommendations (1a and 1c), one from the China National Tuberculosis Program, and one from this study based on the enhanced TB likely symptoms. Furthermore, a two-way ANOVA was performed to analyze the cost differences among the algorithms, adjusted for demographic and health characteristics.
The algorithm based on the enhanced TB likely symptoms defined in this study increased the yield of TB detection in active case-finding, compared with the algorithms recommended by WHO (p < 0.01, Kappa 95% CI: 0.93–0.99) and by the China NTP (p = 0.03, Kappa 95% CI: 0.96–1.00). There was a significant difference in the total costs among the three algorithms WHO 1c, 2 and 3 (F = 59.13, p < 0.01), but no significant difference in the average cost per active TB case screened and diagnosed through these algorithms (F = 2.78, p = 0.07). The average cost per bacteriologically positive case through Algorithm WHO 1a was about twice the cost per active TB case through Algorithms WHO 1c, 2 and 3.
Active case-finding based on the enhanced symptom screening is meaningful and can identify more active TB cases in a timely manner. The findings indicated that this enhanced screening approach cost more in total than the algorithms recommended by WHO and the China NTP, but the increased yield resulted in a comparable cost per patient. Screening only for smear/bacteriologically positive TB cases in active case-finding cost considerably more per case.
Tuberculosis (TB) remains an infectious disease that poses severe hazards to human health. The World Health Organization (WHO) lists TB, together with HIV/AIDS and malaria, among the public health problems to be urgently addressed, and TB is one of the major infectious diseases under key control in China [1]. Although China has made significant achievements in TB prevention and control owing to long-term efforts by the government and health sectors at all levels, the current situation is not optimistic [2]. China is still one of the 30 high TB burden countries in the world, and its estimated 886 thousand incident TB patients rank second in the world [3].
The combination of finding and curing TB patients is widely recognized as the most cost-effective measure for TB prevention and control [4, 5]. Because of limited resources, China currently adopts a passive case-finding strategy; the only active case-finding measure is symptom screening among close contacts of smear-positive pulmonary TB patients and individuals with HIV/AIDS [6]. According to the national TB control program in China (China NTP), the main TB likely symptoms are cough with expectoration for ≥ 2 weeks, hemoptysis and bloody sputum [7]. However, the national TB prevalence survey in 2010 showed that if symptomatic screening for TB was implemented based only on the major suspected symptoms defined by the China NTP, some TB patients would be missed [8, 9].
Previous studies have reported that any cough and other symptoms, such as chest pain and loss of weight, have particularly low sensitivity and specificity for TB detection [10, 11]. This implies that if extended TB likely symptoms (such as chest pain) were used as criteria for further examinations for TB diagnosis, more cases would be detected. However, the resource requirements for the further tests may be prohibitive in some settings and a reason to opt for a narrower symptom screen [12].
Therefore, it is worthwhile to find a balance between case detection and cost. We hypothesized that more TB cases could be detected through extended symptomatic screening than with the general symptomatic screening strategies of WHO and the China NTP. Here, we conducted a cross-sectional study in 10 provinces in China to compare the enhanced TB likely symptom screening algorithm with the screening algorithms from WHO and the China NTP in terms of the yield and cost of TB case-finding.
The study protocol was approved by the ethics committee of the Chinese Center for Disease Control and Prevention (China CDC) (No. 201322). All participants signed written informed consent before enrollment; for participants younger than 15 years and patients with mental illness, written informed consent was obtained from a parent or guardian. All notified patients were referred to the local designated TB clinic or hospital for treatment according to national guidelines.
Selection of study sites
We applied a multi-stage sampling to create a representative sample from the Chinese mainland, with the following steps:
First, nine of the 31 provinces (three each from eastern, central and western China) and one of the four municipalities directly under the Central Government (Beijing, Shanghai, Tianjin and Chongqing) were selected according to willingness to participate. These were Jiangsu, Zhejiang and Guangdong provinces in eastern China; Henan, Heilongjiang and Hubei provinces in central China; Sichuan province, Guangxi Zhuang Autonomous Region and Yunnan province in western China; and Shanghai.
Then, one county/district with more than 500,000 people was selected by simple random sampling in each selected province.
Finally, a township/community was selected by simple random sampling from each enrolled county. If the general population of the selected township/community was less than 30,000 people, the neighboring township/community was also enrolled in the study site, until the total reached 30,000 people (Fig. 1) [13]. In total, ten townships and 17 communities were selected from 10 counties of 10 provinces/municipalities.
The sampling procedure of this study
Members of the general population who had been continuously living, working or studying in the survey sites for six months or more, including both registered and non-registered residents, were enrolled in the study. In total, 320,590 people were included in the selected townships/communities [13].
Field investigation and diagnosis
A face-to-face questionnaire-based inquiry was used to investigate whether the participant had any enhanced TB likely symptoms. An enhanced TB likely symptom was defined as any of the following conditions: 1) cough lasting longer than two weeks; 2) hemoptysis or bloody sputum; 3) cough longer than one week yet less than two weeks, accompanied by any of the following symptoms: fever, night sweats, chest pain, loss of appetite, fatigue, or weight loss (> 3 kg).
Participants with any of the enhanced TB likely symptoms were offered chest X-ray (CXR) examination and requested to submit three sputum samples (morning, night, and spot sputum) for both sputum smear microscopy and culture test.
If participants younger than 15 years had any enhanced TB likely symptoms, they were first given a tuberculin skin test (PPD). Only those young participants with a PPD induration ≥ 10 mm or blisters then received a CXR examination, so that children and teenagers would not receive unnecessary X-ray exposure.
Patients with smear-positive or culture-positive sputum were diagnosed as bacteriologically positive TB. Patients with pulmonary tuberculosis (PTB) comprised both bacteriologically positive cases and those diagnosed only by lesions on chest imaging, known as clinically diagnosed cases.
The TB diagnosis group in each county was composed of at least three health staff engaged in TB diagnosis, including a clinical doctor, a radiologist and a laboratory technician. TB was diagnosed in accordance with the Diagnostic Criteria for Tuberculosis in China (WS288-2008), and quality control was carried out according to the China National Guidelines [7].
In our study, the participants were interviewed by a specially trained investigation group in each county for questionnaire-based data collection from July to September 2013. The specially trained investigation group included researchers, health-care workers, community workers, and local government staff. The participants were interviewed about any enhanced TB likely symptoms. At the same time, their sex, age, ethnicity, occupation, marital status, educational level, medical history, smoking and drinking habits, socioeconomic status and TB-related factors (Table 1) were collected. Moreover, their height and weight were measured to calculate BMI as an indicator of nutritional status.
Table 1 Definitions of terms for data collection
All data from the questionnaires were entered promptly and double-checked in real time by the online system developed especially for this study.
TB screening algorithms with different paths
To evaluate the yield of different symptom combinations, four algorithms were compared: Algorithms WHO 1a and 1c from WHO, Algorithm 2 from the China National TB Control Program, and Algorithm 3 from our study.
Algorithm WHO 1a
All people with a cough lasting longer than 2 weeks were investigated for TB. Sputum smear microscopy was considered as a second screening for people who had had a cough lasting longer than 2 weeks, and people with positive smear microscopy suggestive of TB should be diagnosed with TB. If sputum smear microscopy was negative and clinical suspicion was high, a further culture test for TB was considered. The diagnosed TB cases were bacteriologically positive cases; active TB cases with negative sputum smear microscopy and a negative culture test could not be diagnosed because no chest X-ray was performed.
Algorithm WHO 1c
Further investigation for TB was done for persons with a cough lasting longer than 2 weeks. Chest radiography was considered as a second screening for people who screened positive when asked about symptoms, and people with an abnormal chest radiograph suggestive of TB should be evaluated by sputum smear microscopy and culture test for TB. Therefore, active TB cases with negative sputum smear microscopy and a negative culture test could also be diagnosed, because both symptom screening and chest X-ray were performed.
Algorithm 2
The screening algorithm based on the China NTP: people with a cough lasting longer than 2 weeks, hemoptysis, or bloody sputum were investigated further for TB. Chest radiography was considered as a second screening, where chest X-ray is available, for people who screened positive when asked about symptoms. People with suspected symptoms or an abnormal chest radiograph suggestive of TB should be evaluated by sputum smear microscopy and culture for TB. So, in addition to bacteriologically positive cases, active TB cases with negative sputum smear microscopy and a negative culture test could also be diagnosed.
Algorithm 3
The enhanced symptom screening algorithm for TB: persons with any of the following three options suggestive of TB should be further evaluated for TB: (1) cough lasting longer than 2 weeks; (2) hemoptysis or bloody sputum; (3) cough longer than 1 week yet less than 2 weeks, accompanied by any of the following symptoms: fever, night sweats, chest pain, loss of appetite, fatigue, or weight loss (> 3 kg). Chest radiography was considered as a second screening for people who screened positive when asked about symptoms. People with suspected symptoms or an abnormal chest radiograph suggestive of TB should be evaluated by sputum smear microscopy and culture for TB. The TB diagnostic process was the same as in Algorithm 2.
Algorithms WHO 1a and 1c for screening and diagnosis are among the recommendations from WHO [15,16,17]. Of the three options in Algorithm 3, option (3) was added in this study to enhance the definition of TB likely symptoms relative to Algorithm 2, the definition from the China NTP (Table 2). All participants in our study received the full screening procedure according to Algorithm 3, so the study populations and case detection of Algorithms WHO 1a, 1c and 2 were extracted according to the definitions of the different algorithms.
Table 2 The screening procedure through algorithm WHO 1a/1c, 2, and 3
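To make the entry criteria of the four algorithms concrete, the following is a minimal Python sketch of the symptom-eligibility rules as described above; the field names and dictionary layout are illustrative assumptions, not the study's actual data dictionary.

```python
def eligible_for_further_tests(symptoms: dict, algorithm: str) -> bool:
    """Return True if a participant's symptoms trigger further examination
    (CXR and/or sputum tests) under the given screening algorithm.

    `symptoms` is an illustrative dict such as:
    {"cough_weeks": 1.5, "hemoptysis_or_bloody_sputum": False, "fever": True,
     "night_sweats": False, "chest_pain": False, "loss_of_appetite": False,
     "fatigue": False, "weight_loss_kg": 0}
    """
    cough_weeks = symptoms.get("cough_weeks", 0)
    cough_ge_2w = cough_weeks >= 2
    hemoptysis = symptoms.get("hemoptysis_or_bloody_sputum", False)
    cough_1_to_2w = 1 < cough_weeks < 2
    accompanying = (
        any(symptoms.get(k, False)
            for k in ("fever", "night_sweats", "chest_pain",
                      "loss_of_appetite", "fatigue"))
        or symptoms.get("weight_loss_kg", 0) > 3
    )

    if algorithm in ("WHO 1a", "WHO 1c"):   # cough >= 2 weeks only
        return cough_ge_2w
    if algorithm == "2":                    # China NTP
        return cough_ge_2w or hemoptysis
    if algorithm == "3":                    # enhanced symptoms (this study)
        return cough_ge_2w or hemoptysis or (cough_1_to_2w and accompanying)
    raise ValueError(f"unknown algorithm: {algorithm}")
```

Any participant eligible under Algorithms WHO 1a/1c or 2 is also eligible under Algorithm 3, which is why the enhanced definition can only add, never lose, persons screened.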
Costs of active case screening
The cost of each screening algorithm was calculated from the standard unit costs of each component of the screening process: 0.15 dollars per person for primary household screening by village health workers, 9.0 dollars per chest X-ray, 3.9 dollars per sputum smear, and 4.8 dollars per sputum culture [13]. Algorithm WHO 1a required $8.85 for each person with suspected symptoms, and Algorithms WHO 1c/2/3 required $17.85. The total-cost functions of the different algorithms were as follows:
Function 1:
$$\text{Total cost of screening (Algorithm WHO 1a)} = \text{number of people in the study population} \times \$0.15 + \text{number of participants with suspected symptoms} \times (\$3.9 + \$4.8)$$
Function 2:
$$\text{Total cost of screening (Algorithm WHO 1c/2/3)} = \text{number of people in the study population} \times \$0.15 + \text{number of participants with suspected symptoms} \times (\$9.0 + \$3.9 + \$4.8)$$
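A minimal sketch of Function 1 and Function 2 using the unit costs listed above (the variable and function names are illustrative):

```python
# Unit costs (US$) as defined in this study
SCREENING = 0.15   # primary household symptom screening, per person
CXR = 9.0          # chest X-ray, per person with suspected symptoms
SMEAR = 3.9        # sputum smear, per person with suspected symptoms
CULTURE = 4.8      # sputum culture, per person with suspected symptoms

def total_cost(n_population: int, n_symptomatic: int, with_cxr: bool) -> float:
    """Function 1 (with_cxr=False, Algorithm WHO 1a) or
    Function 2 (with_cxr=True, Algorithms WHO 1c/2/3)."""
    per_symptomatic = (CXR if with_cxr else 0.0) + SMEAR + CULTURE
    return n_population * SCREENING + n_symptomatic * per_symptomatic
```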
The prevalence of TB likely symptoms and of TB cases was calculated. The McNemar test was used to analyze the difference in case detection between each pair of algorithms. A two-way ANOVA was performed to evaluate the differences in the costs of the different algorithms, adjusted for demographic and health characteristics. A two-sided p < 0.05 was considered significant. All tests were performed using SAS 9.4 (SAS Institute, Cary, NC, USA).
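As an illustration of the paired comparison, the sketch below shows how a McNemar test between two algorithms could be run in Python with statsmodels rather than SAS; the 2 × 2 counts are illustrative assumptions, not the study's raw paired data.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired 2x2 table of active TB detection on the same participants:
# rows = detected by Algorithm 3 (yes/no), columns = detected by Algorithm WHO 1c (yes/no).
# Illustrative counts, assuming every case found by 1c was also found by Algorithm 3.
table = np.array([[105,      8],   # detected by both, detected only by Algorithm 3
                  [  0, 299497]])  # detected only by 1c, detected by neither
result = mcnemar(table, exact=True)   # exact binomial test on the discordant pairs
print(result.statistic, result.pvalue)
```

Only the discordant off-diagonal cells influence the test, so the value chosen for the bottom-right cell does not affect the p-value.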
Demographic and health characteristics
There were 320,590 eligible people in total, of whom 299,610 (93.5%) were enrolled in the survey. In addition to the questionnaire investigation, CXR examinations were performed for people with the enhanced suspected symptoms. Males accounted for 50.2% (150,461) and females for 49.8% (149,149). The mean age was 39 years (median 40 years; Table 3).
Table 3 Demographic characteristics of the enrolled population in China in 2013
Comparison in case detection between each two algorithms
There were significant differences in the detection of active TB cases between Algorithms WHO 1c and 3 (p < 0.01, Kappa 95% CI: 0.93–0.99) and between Algorithms 2 and 3 (p = 0.03, Kappa 95% CI: 0.96–1.00). No significant difference was observed between any other pair of algorithms for the same type of TB detection (Table 4).
Table 4 McNemar's test for detection rate between different symptom screening algorithms, p-value (95%CI for Kappa)
The yield of different algorithms and their costs
Table 5 summarized the numbers of people screened in this study via different algorithms, the corresponding numbers of people with suspected symptoms, active tuberculosis cases diagnosed, and the related costs.
Table 5 Yield and relative costs of different symptom screening algorithms
In this study, both Algorithm WHO 1c and Algorithm 2 notified one active TB case per 22 persons with TB likely symptoms screened on average (2328/105 and 2407/108, respectively). Compared with Algorithm WHO 1c, Algorithm 3 identified 8 (113–105) more active TB cases among an additional 348 (2676–2328) persons screened under option (3) (cough longer than 1 week yet less than 2 weeks, accompanied by fever, night sweats, chest pain, loss of appetite, fatigue, or weight loss > 3 kg), indicating that an additional 44 (348/8) persons on average needed to be screened to diagnose one more active TB case. Meanwhile, compared with Algorithm 2, Algorithm 3 required an additional 269 (2676–2407) persons with option (3) to be screened for the diagnosis of 5 (113–108) more active TB cases, suggesting that an average of 54 (269/5) persons meeting option (3) of the enhanced TB likely symptoms, beyond the TB likely symptoms defined by Algorithm 2, needed to be screened to detect one more active TB case through Algorithm 3. When Algorithm 2, based on the current China NTP, was considered the gold standard: (1) through Algorithm 3, the false positive rate was 1.7/100,000 and the false negative rate was 0; (2) through Algorithm WHO 1c, the false positive rate was 0 and the false negative rate was 2.8%. Because Algorithm WHO 1a did not include a CXR test as part of the screening procedure, it was not reasonable to compare it with Algorithm 2.
Of the 299,610 persons, 2328 with the TB likely symptoms defined by Algorithms WHO 1a/1c were identified. Through Algorithm WHO 1a, 37 bacteriologically positive TB cases were diagnosed at a total cost of $65,195.1, that is, $1762.0 per bacteriologically positive TB case. The same number of persons with TB likely symptoms was identified by Algorithm WHO 1c, and 105 active TB cases were diagnosed; the total cost was $86,147.1, or $820.4 per active TB case. Algorithm 2 cost $810.6 per active TB case ($87,545.4/108), with 108 active TB cases diagnosed among 2407 persons with suspected symptoms. Finally, Algorithm 3 cost $816.9 per active TB case ($92,306.7/113), with 113 active TB cases diagnosed among 2676 persons with the suspected symptoms it defines.
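The totals and per-case costs quoted above can be reproduced directly from the unit costs in the Methods and the counts reported here; a short sketch:

```python
# (population screened, persons with suspected symptoms, cases diagnosed, CXR used?)
algorithms = {
    "WHO 1a": (299610, 2328,  37, False),  # bacteriologically positive cases only
    "WHO 1c": (299610, 2328, 105, True),
    "2":      (299610, 2407, 108, True),
    "3":      (299610, 2676, 113, True),
}
for name, (n_pop, n_sym, n_cases, with_cxr) in algorithms.items():
    per_sym = (9.0 if with_cxr else 0.0) + 3.9 + 4.8      # CXR + smear + culture
    total = n_pop * 0.15 + n_sym * per_sym                # screening + follow-up tests
    print(f"Algorithm {name}: total ${total:,.1f}, per case ${total / n_cases:,.1f}")

# Expected output, matching the text:
# 65,195.1 / 1,762.0; 86,147.1 / 820.4; 87,545.4 / 810.6; 92,306.7 / 816.9
```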
A two-way ANOVA adjusted for TB type and screening algorithm revealed a significant difference in the total costs of screening for active TB cases among Algorithms WHO 1c/2/3 (F = 59.13, p < 0.01, Additional file 1: Appendix S1). Both the number of TB cases and the total costs were highest for Algorithm 3, compared with Algorithms WHO 1c/2. However, no significant difference was evident in the average cost per active TB case screened and diagnosed among Algorithms WHO 1c/2/3 (F = 2.78, p = 0.07, Additional file 1: Appendix S1). On the other hand, the average cost per bacteriologically positive case through Algorithm WHO 1a was about twice the cost per active TB case through Algorithms WHO 1c/2/3 (Fig. 2).
Comparison of the four algorithms in total costs and average cost per TB case. All cases diagnosed through Algorithm WHO 1a were bacteriologically positive; cases diagnosed through Algorithms WHO 1c/2/3 were active TB cases.
Compared with the WHO recommendations and the China NTP guideline, more persons with TB likely symptoms were identified through Algorithm 3, and the total costs of this algorithm were significantly higher as well. Furthermore, the introduction of the additional enhanced TB likely symptoms led to more active TB cases being diagnosed. However, the average cost per active TB case diagnosed through Algorithm 3 showed no significant difference from Algorithms WHO 1c and 2, whereas much higher costs would be incurred if the screening targeted only bacteriologically positive TB cases.
During the past few years, there has been an intensified discussion about using active case-finding, or screening, as a possible complement to the predominant approach of "passive case-finding". The primary objective of screening is to ensure that active TB is detected early, to reduce the risk of poor disease outcomes and of the adverse social and economic consequences of the disease, as well as to help reduce TB transmission [16]. When a country is striving to eliminate TB and needs to invest additional resources to effectively reach those who are hardest to reach, active screening may be a crucial part of the response to TB [18]. In addition, the alternative screening algorithms and their costs should be considered on the basis of sufficient evidence, especially in resource-constrained high-burden countries [19]. In our study, the additional enhanced suspected symptoms used during screening identified more active TB cases, and the cost per active TB case of this algorithm was comparable to the algorithms of WHO and the China NTP. Thus, active TB screening may be of interest in low TB burden countries with sufficient resources, or in resource-constrained countries striving to eliminate TB. Screening with the enhanced suspected symptoms is an alternative that could identify more active TB cases and thereby reduce TB transmission. However, screening only for smear/bacteriologically positive TB cases in active case-finding is undoubtedly of low cost-effectiveness: our study indicated that the average cost of detecting one bacteriologically positive case through Algorithm WHO 1a was approximately double the cost of detecting one active TB case through Algorithms WHO 1c/2/3.
The costs of active case-finding appear to differ across settings [20], and they are higher than those of passive case-finding [21]. However, the enhanced TB suspected symptoms in our study could help diagnose more active TB cases sooner, which is similar to the findings of comparable studies [22]. Furthermore, compared with other studies of active screening for TB, our average screening cost per active TB case was lower than in a comparable study in the Russian Federation, despite a similar level of TB incidence in both countries, but higher than in studies from countries with higher TB incidence [3, 21, 23, 24]. In addition, if active case-finding were performed in targeted high-risk groups, the costs would undoubtedly decrease.
Our study is population-based research, and its findings provide insights into the yield and costs of different active case-finding approaches. Our research groups did their best to maintain quality control. Although the large sample size allowed further analysis across demographic and health characteristics, some inevitable missing values, as in other large population studies, may have introduced a small amount of uncontrolled information bias. There might also be false positives, particularly as 64% of the diagnoses through Algorithm 3 were not confirmed by smear or culture.
Active case-finding based on the enhanced symptom screening is meaningful for TB case-finding and can identify more active TB cases in a timely manner. The findings indicated that this enhanced screening costs more in total, but the increased yield of active TB cases resulted in a comparable cost per patient. As an alternative approach, such active case-finding could identify more active TB cases and reduce TB transmission, potentially in low TB burden countries with sufficient resources or in resource-constrained countries striving to eliminate TB. Screening only for smear/bacteriologically positive TB cases in active case-finding costs considerably more per case.
The National Center for Tuberculosis Control and Prevention (NCTB) is the custodian of the data for this study. The data are not accessible online but may be made available upon written request to the NCTB through the authors, if in line with the Ethical Review Board guidelines.
ACF:
Active case-finding
AIDS:
Acquired immune deficiency syndrome
BMI:
Body mass index
CXR:
Chest X-ray
HIV:
Human immunodeficiency virus
PTB:
Pulmonary tuberculosis
Xu CH, Jeyashree K, Shewade HD, Xia YY, Wang LX, Liu Y, Zhang H, Wang L. Inequity in catastrophic costs among tuberculosis-affected households in China. Infect Dis Poverty. 2019;8(1):46.
Wang L, Zhang H, Ruan Y, Chin DP, Xia Y, Cheng S, Chen M, Zhao Y, Jiang S, Du X, et al. Tuberculosis prevalence in China, 1990–2010: a longitudinal analysis of national survey data. Lancet. 2014;383(9934):2057–64.
World Health Organization. Global tuberculosis report, 2019. Switzerland: World Health Organization; 2019.
Wang L, Liu J, Chin DP. Progress in tuberculosis control and the evolving public-health system in China. Lancet. 2007;369(9562):691–6.
Chen JO, Qiu YB, Rueda ZV, Hou JL, Lu KY, Chen LP, Su WW, Huang L, Zhao F, Li T, et al. Role of community-based active case finding in screening tuberculosis in Yunnan province of China. Infect Dis Poverty. 2019;8(1):92.
Liu E, Cheng S, Wang X, Hu D, Zhang T, Chu C. A systematic review of the investigation and management of close contacts of tuberculosis in China. J Public Health (Oxf). 2010;32(4):461–6.
MOH, CDCC. Guidelines for implementing the national tuberculosis control program in China. Beijing: Pecking Union Medical College Press; 2009.
Wang Y. Data compilation of the fifth national tuberculosis epidemiological sampling survey. Beijing: Military Medical Science Press; 2011.
Cheng J, Wang L, Zhang H, Xia Y. Diagnostic value of symptom screening for pulmonary tuberculosis in China. PLoS ONE. 2015;10(5):e0127725.
van't Hoog A, Langendam M, Mitchell E, Cobelens F, Sinclair D, Leeflang M. A systematic review of the sensitivity and specificity of symptom-and chest-radiography screening for active pulmonary tuberculosis in HIV-negative persons and persons with unknown HIV status. REPORT-Version March 2013. Geneva: World Health Organization; 2013.
van't Hoog AH, Meme HK, Laserson KF, Agaya JA, Muchiri BG, Githui WA, Odeny LO, Marston BJ, Borgdorff MW. Screening strategies for tuberculosis prevalence surveys: the value of chest radiography and symptoms. PLoS ONE. 2012;7(7):e38691.
Van't Hoog AH, Onozaki I, Lonnroth K. Choosing algorithms for TB screening: a modelling study to compare yield, predictive value and diagnostic burden. BMC Infect Dis. 2014;14:532.
Zhang C, Ruan Y, Cheng J, Zhao F, Xia Y, Zhang H, Wilkinson E, Das M, Li J, Chen W, et al. Comparing yield and relative costs of WHO TB screening algorithms in selected risk groups among people aged 65 years and over in China, 2013. PLoS ONE. 2017;12(6):e0176581.
Criteria of weight for adults. Beijing: National Health and Family Planning Commission of the People's Republic of China; 2013.
World Health Organization. Systematic screening for active tuberculosis: an operational guide. Geneva: World Health Organization; 2015.
World Health Organization. WHO guidelines approved by the guidelines review committee. Systematic screening for active tuberculosis: principles and recommendations. Geneva: World Health Organization; 2013.
van't Hoog A, Langendam M, Mitchell E, Cobelens F, Sinclair D, Leeflang M, Lonnroth K. A systematic review of the sensitivity and specificity of symptom-and chest-radiography screening for active pulmonary tuberculosis in HIV-negative persons and persons with unknown HIV status REPORT-Version March 2013. Geneva: World Health Organization; 2013.
Broekmans JF, Migliori GB, Rieder HL, Lees J, Ruutu P, Loddenkemper R, Raviglione MC; World Health Organization, International Union Against Tuberculosis and Lung Disease, and Royal Netherlands Tuberculosis Association Working Group. European framework for tuberculosis control and elimination in countries with a low incidence: recommendations of the World Health Organization (WHO), International Union Against Tuberculosis and Lung Disease (IUATLD) and Royal Netherlands Tuberculosis Association (KNCV) Working Group. Eur Respir J. 2002;19(4):765–75.
Martinez L, Shen Y, Handel A, Chakraburty S, Stein CM, Malone LL, Boom WH, Quinn FD, Joloba ML, Whalen CC, et al. Effectiveness of WHO's pragmatic screening algorithm for child contacts of tuberculosis cases in resource-constrained settings: a prospective cohort study in Uganda. Lancet Respir Med. 2018;6(4):276–86.
Ho J, Fox GJ, Marais BJ. Passive case finding for tuberculosis is not enough. Int J Mycobacteriol. 2014;5(4):374–8.
Bogdanova E, Mariandyshev O, Hinderaker SG, Nikishova E, Kulizhskaya A, Sveshnikova O, Grjibovski A, Heldal E, Mariandyshev A. Mass screening for active case finding of pulmonary tuberculosis in the Russian Federation: how to save costs. Int J Tuberc Lung Dis. 2019;23(7):830–7.
Field SK, Escalante P, Fisher DA, Ireland B, Irwin RS. Cough due to TB and other chronic infections: CHEST guideline and expert panel report. Chest. 2018;153(2):467–97.
Hussain H, Mori AT, Khan AJ, Khowaja S, Creswell J, Tylleskar T, Robberstad B. The cost-effectiveness of incentive-based active case finding for tuberculosis (TB) control in the private sector Karachi, Pakistan. BMC Health Serv Res. 2019;19(1):690.
Machekera SM, Wilkinson E, Hinderaker SG, Mabhala M, Zishiri C, Ncube RT, Timire C, Takarinda KC, Sengai T, Sandy C. A comparison of the yield and relative cost of active tuberculosis case-finding algorithms in Zimbabwe. Public health action. 2019;9(2):63–8.
We would like to thank all the health care workers and staff for their hard work in research sites, including the provincial CDCs, the local CDCs, and the primary health centers/institutes. The study sites were located in Jiangsu Province, Zhejiang Province, Guangdong Province and Shanghai of eastern China, Henan Province, Heilongjiang Province and Hubei Province of central China, and Sichuan Province, Guangxi Zhuang Autonomous Region and Yunnan Province of western China.
Funding for this study was obtained by the National Twelfth and Thirteenth Five-year Major-Scientific Projects of Infectious Diseases in China (Grant Number: 2013ZX10003-004-001, 2017ZX10201-302-001) from the Ministry of science and technology of China. These projects are independent funding schemes of the Ministry of science and technology of China.
National Center for Tuberculosis Control and Prevention, Chinese Center for Disease Control and Prevention, Beijing, People's Republic of China
Fei Zhao, Canyou Zhang, Yinyin Xia, Dongmei Hu, Lixia Wang, Jun Cheng & Hui Zhang
Clinical Trial Center, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, People's Republic of China
Fei Zhao
Department of Epidemiology of Microbial Diseases, Yale School of Public Health, New Haven, CT, USA
Fei Zhao & Chongguang Yang
Institute of Tuberculosis Control and Prevention, Henan Provincial Center for Disease Control and Prevention, Zhengzhou, Henan, People's Republic of China
Jin Xing & Guolong Zhang
Division of Tuberculosis Control and Prevention, Yunnan Provincial Center for Disease Control and Prevention, Kunming, Yunnan, People's Republic of China
Lin Xu
Institute of TB Control, Zhejiang Provincial Center for Disease Control and Prevention, Hangzhou, Zhejiang, People's Republic of China
Xiaomeng Wang
Department of Chronic Communicable Disease, Jiangsu Provincial Center for Disease Control and Prevention, Nanjing, Jiangsu, People's Republic of China
Wei Lu
Center for Tuberculosis Control of Guangdong Province, Guangzhou, Guangdong, People's Republic of China
Jianwei Li
Guangxi Provincial Center for Disease Control and Prevention, Nanning, Guangxi Zhuang Autonomous Region, People's Republic of China
Feiying Liu & Dingwen Lin
Sichuan Provincial Center for Disease Control and Prevention, Chengdu, Sichuan, People's Republic of China
Jianlin Wu
Shanghai Municipal Center for Disease Control and Prevention, Shanghai, People's Republic of China
Xin Shen
Hubei Provincial Center for Disease Control and Prevention, Wuhan, Hubei, People's Republic of China
Shuangyi Hou
Heilongjiang Provincial Center for Tuberculosis Control and Prevention, Harbin, Heilongjiang, People's Republic of China
Yanling Yu
Department of Emergency Medicine, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, People's Republic of China
Chunyi Fu
Canyou Zhang
Chongguang Yang
Yinyin Xia
Guolong Zhang
Feiying Liu
Dingwen Lin
Dongmei Hu
Lixia Wang
Jun Cheng
Hui Zhang
Substantial contributions to the conception and design of the work and the analyses and interpretation of data for the work: LXW, GXH, HZ, JC, FZ, CYZ, YYX, DMH; Data collection and performed the experiments: JX, GLZ, LX, XMW, WL, JWL, FYL, DWL, JLW, XS, SYH, YLY; Drafting the work or revising it critically for important intellectual content: FZ, CYZ, CGY, CYF; Final approval of the version to be published: HZ, JC, LXW. All authors read and finally approved the manuscript draft for publication. All authors read and approved the final manuscript.
Correspondence to Jun Cheng or Hui Zhang.
The protocol was approved by the China CDC Ethical Review Committee of the Chinese Center for Disease Control and Prevention (No. 201322). Participants older than 18 years provided written informed consent themselves, and written informed consent was obtained from parents/guardians for participants under 18 years old.
None of the authors have expressed any conflict of interest.
Results of symptom screening and TB detection in different algorithms and variables.
Zhao, F., Zhang, C., Yang, C. et al. Comparison of yield and relative costs of different screening algorithms for tuberculosis in active case-finding: a cross-section study. BMC Infect Dis 21, 813 (2021). https://doi.org/10.1186/s12879-021-06486-w
Active case-finding
|
CommonCrawl
|
What is the distribution of Garden of Eden patterns in cellular automata with increasing pattern size?
In a given cellular automaton, such as Conway's Game of Life, is there anything known about how many Garden of Eden patterns there are by pattern size? Say, pattern size is n x n, what's the likelihood that a random pattern is a GoE as n increases?
Also, by the Garden of Eden-theorem, only cellular automata which are non-injective, that is, for which a given pattern may have more than one predecessor, contain Gardens of Eden.
Is it then in general impossible to see whether a given cellular automaton pattern at some time-step is derived from a GoE (since there could also have been a non-GoE ancestor leading to the same state)? That is, does the evolution of the CA 'wash out' the information that it is derived from a GoE? (I hope it's sufficiently clear what I mean.)
Furthermore, is there a relation between GoE-patterns and uncomputability/undecidability? It seems to me that if you consider the CA to be implementing some computation, then the GoEs are 'outputs that could never be produced', at least heuristically, but a brief Google search led nowhere. Perhaps asked another way, is there a relation between the existence of GoEs and the universality of a CA? I know Life is both universal and has GoEs, but I don't know about the general case. Any pointers would be much appreciated.
Having done some additional reading, I think I can strike out the third question: since reversible cellular automata that are computationally universal exist (according to wiki), there doesn't seem to be any connection between computational universality and GoEs. Now, what about universal constructors? Are there reversible universal constructors? Von Neumann's original rule happens not to be reversible, and in fact, has GoEs, but again, this need not say much.
The wikipedia page above mentions a way to emulate d dimensional irreversible CAs within d + 1 dimensional reversible ones, which readily establishes the computational universality of the latter, but I'm not sure if this holds as well for universal construction: the emulated constructor would only construct patterns on a d-dimensional sub-grid of the automaton (?).
automata cellular-automata
garden of eden in cellular automata, wikipedia – vzn Mar 12 '14 at 21:52
See also: conwaylife.com/wiki/Grandfather_problem – Ilmari Karonen Aug 6 '19 at 9:17
As said by vzn, any pattern containing a Garden of Eden pattern is itself a Garden of Eden pattern, so the probability that a random $n\times n$ pattern is a GoE goes to 1 as $n$ goes to infinity (in the case of a non-surjective CA of course; otherwise it's 0). More precisely, it is lower bounded by $1 - \alpha^{n^2}$ for some $0<\alpha<1$. Indeed, consider a fixed GoE pattern $P$ of size $k\times k$; the probability for a random pattern of size $n\times n$ to be a GoE is at least the probability that one of its $\lfloor n/k\rfloor^2$ disjoint $k\times k$ blocks equals $P$, namely $1 - (1-\frac{1}{q^{k^2}})^{\lfloor\frac{n}{k}\rfloor^2}$, where $q$ is the number of states of the CA.
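A quick numeric illustration of this lower bound follows; the orphan size $k$ used below is a made-up small value purely to show the trend (known orphans in Life, for instance, are much larger), and with $\alpha = (1 - q^{-k^2})^{1/k^2}$ the expression is exactly the $1 - \alpha^{n^2}$ form above.

```python
# Lower bound: P(random n x n pattern is a GoE) >= 1 - (1 - q**(-k*k)) ** ((n // k) ** 2),
# obtained by tiling the n x n square with disjoint k x k blocks and asking that
# at least one block equals a fixed k x k orphan.
def goe_probability_lower_bound(n: int, k: int, q: int = 2) -> float:
    p_block_is_orphan = q ** (-(k * k))   # probability a given block equals the orphan
    return 1 - (1 - p_block_is_orphan) ** ((n // k) ** 2)

for n in (100, 1_000, 10_000):
    print(n, goe_probability_lower_bound(n, k=3))   # k = 3 is purely illustrative
```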
You can generate all the possible pre-images of a given pattern in finite time, and you can also test whether a pattern is a GoE, so your problem is clearly decidable. It is polynomial-time decidable in 1D and NP-hard in 2D.
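To make the brute-force argument concrete, here is a minimal sketch (my own illustration, not code from this answer) that tests whether a short word is an orphan of a 1D elementary cellular automaton by exhaustively enumerating all candidate preimages; the rule numbers and pattern length are arbitrary example choices.

from itertools import product

def eca_step(word, rule):
    """Apply an elementary CA rule (radius 1) to a word; output is 2 cells shorter."""
    table = [(rule >> i) & 1 for i in range(8)]
    return tuple(table[4*a + 2*b + c] for a, b, c in zip(word, word[1:], word[2:]))

def is_orphan(pattern, rule):
    """True if no word of length len(pattern)+2 maps onto 'pattern' (exhaustive search)."""
    candidates = product((0, 1), repeat=len(pattern) + 2)
    return not any(eca_step(pre, rule) == tuple(pattern) for pre in candidates)

# Rule 110 is not surjective, so orphans exist (though the shortest one may be
# longer than the length tried here); rule 90 is surjective, so none are found.
for rule in (110, 90):
    orphans = [p for p in product((0, 1), repeat=6) if is_orphan(p, rule)]
    print(rule, len(orphans), "orphans of length 6")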
In the 1D case, a CA can be seen as a finite state transducer. So the language $L_{GoE}$ of all GoE patterns is a regular language, because it is the complement of $T(Q^\ast)$ (where $T$ is the transducer and $Q$ the set of states of the CA), which is itself a regular language as the image of a regular language under a transducer. Then $T(L_{GoE})$ is also regular, and it is the set you asked about.
In the 2D case, I think we can show that it is NP-hard using the classical simulation of Turing machines by Wang tiles. The rough idea is the following. Consider any NP-complete problem $L$ and take a Turing machine $T$ which is a polynomial-time verifier for the problem. Then construct a CA that basically checks that the input, if written in some special alphabet $A$, encodes a successful computation $T(x,y)$ of $T$ working on input $x$ and witness $y$. If so, it erases everything with a special state $a\not\in A$ but keeps the input $x$. If some error is detected, it produces some special state $b\not\in A$. Finally, the CA does nothing on states which are not in $A$. Now consider a pattern formed by an input $x$ surrounded by many $a$. The pattern always has itself as a preimage, but it has a GoE preimage if and only if there is some $y$ such that $T(x,y)$ says "yes", i.e. iff $x\in L$. QED
At the time of writing, I don't see how to prove that the problem is in NP.
As pointed out, Turing-universality is still possible without GoE patterns (in reversible CA, for instance). But intrinsic universality requires GoE patterns. This comes from the fact that, first, a surjective CA only admits surjective CAs as subautomata, and, second, grouping and iterating a CA doesn't change its surjectivity.
$\begingroup$ Thanks for the answer! Do you have any references to these results or any sketch of why they hold? I encourage you to edit your answer (by clicking "edit" underneath your answer) to add that information. $\endgroup$ – D.W.♦ May 9 '14 at 18:35
I have not seen a strict technical definition of GOE, and one would seem to need to take into account the concept of what exactly constitutes "space" around the figure. In Life it is well determined, but for CAs in general, which CA state denotes "space" needs to be defined.
The concept of "orphans" is relevant, as cited on the Wikipedia GOE page. Basically, there exist finite patterns called orphans that are GOEs and such that any addition around the finite pattern is also a GOE. This allows a lower bound on total GOEs: roughly, there are at least as many GOEs of a given size as there are orphans of a smaller size plus arbitrary bits surrounding the orphan. Of course this is a very rough measure. On a brief search I did not see a paper that studied this specific question of the incidence of GOEs at different sizes.
If you have a pattern and want to know whether it has an ancestor that is a GOE, that is decidable if you ask whether such a GOE is bounded by some size around the initial pattern, because that is a finite search space (with some assumptions about how the rules behave with respect to "space" around the pattern). Otherwise (conjecture) it is undecidable (circumstantial evidence next).
Indeed, GOE testing is basically undecidable in general, in fact to any "undecidability degree"; see Thm 4.1 in this paper, Cellular Automata and Intermediate Reachability Problems, Sutner, p. 1006:
Theorem 4.1. For any recursively enumerable degree $d$ there is a two-dimensional cellular automaton whose Garden-of-Eden Problem for finite configurations is of degree $d$.
$\begingroup$ this is a very advanced analysis of GOE necessary/sufficient conditions/structure some of which may relate to GOE probability at different sizes. The garden of eden thm for CA and for symbolic dynamical systems Ceccherini-Silberstein, Fiorenzi, Scarabotti $\endgroup$ – vzn Mar 12 '14 at 22:26
I'd say that it's a function $F(k,r)$ where $k$ is the width of the neighbourhood the rules take into account (say $k=3$ in Conway) and $r$ a measure of the 'injectivity' of the rules (the probability 2 distinct configurations produce 2 distinct new states). In particular, as a first approximation, I would expect $F \propto k^2, F \propto r$.
$\begingroup$ Expect for any particular reason? An answer without explanation isn't very useful. $\endgroup$ – David Richerby Mar 12 '14 at 10:41
$\begingroup$ Also, I can't parse the first sentence. Is it missing its second half? "I'd say that if (something)." -- did you mean "I'd say that if (something) then (something2)."? If so, what's the "something2"? $\endgroup$ – D.W.♦ Mar 12 '14 at 17:05
DSP Illustrations
The complex Fourier Series and its relation to the Fourier Transform¶
In two recent articles we have talked about the Fourier Series and an application in harmonic analysis of instrument sounds in terms of their Fourier coefficients. In this article, we will analyze the relation between the Fourier Series and the Fourier Transform.
The Fourier Series as sums of sines and cosines¶
To recap, the Fourier series of a signal $x(t)$ with period $P$ is given by
$$\begin{align}x(t)=\frac{a_0}{2}+\sum_{n=1}^\infty a_n\cos(2\pi nt/P)+b_n\sin(2\pi nt/P)\end{align}$$
where the coefficients are given by
$$\begin{align}a_n&=\frac{2}{P}\int_{-\frac{P}{2}}^\frac{P}{2}x(t)\cos(2\pi nt/P)dt\\b_n&=\frac{2}{P}\int_{-\frac{P}{2}}^\frac{P}{2}x(t)\sin(2\pi nt/P)dt\end{align}.$$
As we see, the Fourier series is a sum of sines and cosines with different amplitudes. Let us first look at the sum of a sine and cosine with different amplitudes:
Sum of a sine and cosine with equal frequency¶
import numpy as np
import matplotlib.pyplot as plt

Fs = 100 # the sampling frequency for the discrete analysis
T = 3    # time duration to look at
P = 1    # signal period
t = np.arange(0, T, 1/Fs)
a_n = 1
b_n = 0.6
s = lambda t: a_n*np.cos(2*np.pi*t/P)  # cosine component
c = lambda t: b_n*np.sin(2*np.pi*t/P)  # sine component
plt.plot(t, s(t), 'b', label='$a_n\cos(2\pi t)$')
plt.plot(t, c(t), 'g', label='$b_n\sin(2\pi t)$')
plt.plot(t, s(t)+c(t), 'r', label='$a_n\cos(2\pi t)+b_n\sin(2\pi t)$')
plt.legend()
As it appears, the sum of a sine and cosine of different amplitudes but same frequency equals another harmonic function with different amplitude and some phase shift. Hence, we can write $$a_n\cos(2\pi nt/P)+b_n\sin(2\pi nt/P) = A_n\cos(2\pi nt/P+\phi_n)$$ where $A_n$ is the amplitude and $\phi_n$ is the phase of the resulting harmonic. In the following, we will calculate the values of $A_n$ and $\phi_n$ from $a_n, b_n$. Let us start with the following identities:
$$\begin{align}\cos(x)&=\frac{1}{2}(\exp(jx)+\exp(-jx))\\ \sin(x)&=-\frac{j}{2}(\exp(jx)-\exp(-jx))\end{align}.$$
Then, we can write the sine and cosine and their sum as
$$ \begin{align} a_n\cos(2\pi nt/P)&=\frac{a_n}{2}(\exp(j2\pi nt/P)+\exp(-j2\pi nt/P))\\ b_n\sin(2\pi nt/P)&=-\frac{jb_n}{2}(\exp(j2\pi nt/P)-\exp(-j2\pi nt/P))\\ a_n\cos(2\pi nt/P)+b_n\sin(2\pi nt/P)&=(a_n-jb_n)\frac{1}{2}\exp(j2\pi nt/P)+(a_n+jb_n)\frac{1}{2}\exp(-j2\pi nt/P) \end{align} $$
We can now convert the cartesian expression for $a_n-jb_n$ into the polar form by $$\begin{align}a_n-jb_n&=A_n\exp(j\phi_n)\\ \text{with }A_n&=\sqrt{a_n^2+b_n^2}&\text{ and }\phi_n&=\tan^{-1}(-b_n/a_n)\end{align}$$
Accordingly, we can reformulate the sum of sine and cosine as $$\begin{align}a_n\cos(2\pi nt/P)+b_n\sin(2\pi nt/P)&=A_n\frac{1}{2}(\exp(j(2\pi nt/P+\phi_n))+\exp(-j(2\pi nt/P+\phi_n)))\\ &=A_n\cos(2\pi nt/P+\phi_n).\end{align}$$
This statement eventually confirms that the sum of a sine and cosine of same frequency but different amplitude is indeed another harmonic function. Let us verify this numerically:
def sumSineCosine(an, bn):
    Fs = 100  # sampling frequency
    T = 3     # time duration to look at
    P = 1     # signal period
    t = np.arange(0, T, 1/Fs)
    A = np.sqrt(an**2+bn**2)    # amplitude of the resulting harmonic
    phi = np.arctan2(-bn, an)   # phase of the resulting harmonic
    f1 = an*np.cos(2*np.pi*t/P)
    f2 = bn*np.sin(2*np.pi*t/P)
    overall = A*np.cos(2*np.pi*t/P + phi)
    plt.plot(t, f1, 'b', label='$x(t)=a_n\cos(2\pi nft)$')
    plt.plot(t, f2, 'g', label='$y(t)=b_n\sin(2\pi nft)$')
    plt.plot(t, f1+f2, 'r', label='$x(t)+y(t)$')
    plt.plot(t, overall, 'ro', lw=2, markevery=Fs//10, label='$A_n\cos(2\pi nft+\phi)$')
    plt.legend()
As we can see, the result perfectly holds.
The Fourier Series with amplitude and phase¶
Now, let us express the Fourier Series in terms of our new formulation $$x(t)=\frac{a_0}{2}+\sum_{n=1}^\infty A_n\cos(2\pi nt/P+\phi_n)$$
Here we see that $x(t)$ consists of different harmonics, with the $n$th one having the amplitude $A_n$. Since a harmonic wave with amplitude $A$ has power $A^2/2$, the $n$th harmonic of $x(t)$ has the power $A_n^2/2=\frac{1}{2}(a_n^2+b_n^2)$.
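As a quick numerical sanity check of this power interpretation (my own sketch, not part of the original article; the sawtooth test signal and the number of harmonics are arbitrary choices), we can compare the mean power of a periodic signal with the DC power $a_0^2/4$ plus the sum of the per-harmonic powers $A_n^2/2$; the two numbers should agree up to truncation and sampling error.

import numpy as np

P = 1.0
t = np.arange(0, P, 1e-3)   # one period, sampled
x = t / P                   # arbitrary periodic test signal (sawtooth)
N = 500                     # number of harmonics to include

n = np.arange(1, N + 1)[:, None]
a0 = 2 * x.mean()
an = 2 * (x * np.cos(2 * np.pi * n * t / P)).mean(axis=1)
bn = 2 * (x * np.sin(2 * np.pi * n * t / P)).mean(axis=1)
An_squared = an**2 + bn**2

print("mean power of x(t):           ", np.mean(x**2))
print("a0^2/4 + sum of A_n^2/2 terms:", a0**2 / 4 + An_squared.sum() / 2)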
Fourier Series with complex exponential¶
Let us now write the Fourier Series even in a different form. By replacing the sum of sine and cosine with exponential terms, we get
$$\begin{align}x(t)&=\frac{a_0}{2}+\sum_{n=1}^\infty a_n\cos(2\pi nt/P)+b_n\sin(2\pi nt/P)\\ &=\frac{a_0}{2}+\sum_{n=1}^\infty \frac{a_n-jb_n}{2}\exp(j2\pi nt/P) + \frac{a_n+jb_n}{2}\exp(-j2\pi nt/P)\end{align}$$
Let us now set $$c_n=\begin{cases}\frac{a_n-jb_n}{2} & n > 0\\\frac{a_0}{2} & n=0 \\ \frac{a_{-n}+jb_{-n}}{2} & n < 0\end{cases},$$
such that we can alternatively write the Fourier series as
$$x(t)=\sum_{n=-\infty}^{\infty}c_n\exp(j2\pi nt/P).$$
Moreover, the calculation of the coefficients $c_n$ is very straightforward, as we have
$$\begin{align}c_n = \frac{a_n-jb_n}{2}&=\frac{1}{2}\left[\frac{2}{P}\int_{-\frac{P}{2}}^{\frac{P}{2}}x(t)\cos(2\pi nt/P)dt-j\frac{2}{P}\int_{-\frac{P}{2}}^{\frac{P}{2}}x(t)\sin(2\pi nt/P)dt\right]\\&=\frac{1}{P}\int_{-\frac{P}{2}}^{\frac{P}{2}}x(t)[\cos(2\pi nt/P)-j\sin(2\pi nt/P)]dt\\&=\frac{1}{P}\int_{-\frac{P}{2}}^{\frac{P}{2}}x(t)\exp(-j2\pi nt/P)dt\end{align}$$
for $n>0$. We get exactly the same expression for $n\leq 0$.
So, to summarize, the formulation for the Fourier series is given by
$$\begin{align}x(t)&=\sum_{n=-\infty}^{\infty}c_n\exp(j2\pi nt/P)\\ \text{with }c_n&=\frac{1}{P}\int_{-\frac{P}{2}}^{\frac{P}{2}}x(t)\exp(-j2\pi nt/P)dt.\end{align}$$
We can again verify this numerically. First, let us implement the two different possibilities to calculate the Fourier series coefficients $a_n,b_n$ or $c_n$:
def fourierSeries_anbn(period, N):
    """Calculate the Fourier series coefficients an, bn up to the Nth harmonic"""
    result = []
    T = len(period)
    t = np.arange(T)
    for n in range(N+1):
        an = 2/T*(period * np.cos(2*np.pi*n*t/T)).sum()
        bn = 2/T*(period * np.sin(2*np.pi*n*t/T)).sum()
        result.append((an, bn))
    return np.array(result)

def fourierSeries_cn(period, N):
    """Calculate the complex Fourier series coefficients c_n and c_-n up to the Nth harmonic"""
    result = []
    T = len(period)
    t = np.arange(T)
    for n in range(N+1):
        c_plusn = 1/T * (period * np.exp(-2j*np.pi*n*t/T)).sum()
        c_minusn = 1/T * (period * np.exp(2j*np.pi*n*t/T)).sum()
        result.append((c_plusn, c_minusn))
    return np.array(result)
Then, let's calculate the coefficients for some function $x(t)$ with both methods and compare them.
x = lambda t: (abs(t % 1)<0.05).astype(float) # define a rectangular function
t = np.arange(-1.5, 1.5, 0.001)
plt.plot(t, x(t))
t_period = np.arange(0, 1, 0.001)
period = x(t_period)
anbn = fourierSeries_anbn(period, 100)
cn = fourierSeries_cn(period, 100)
plt.plot(anbn[:,0], label='$a_n$')
plt.plot(anbn[:,1], label='$b_n$')
plt.plot(cn[:,0].real, label='$Re(c_n)$')
plt.plot(cn[:,0].imag, label='$Im(c_n)$')
As shown, the relation $c_n=\frac{a_n-jb_n}{2}, n>0$ exactly holds.
The relation between the Fourier Series and Fourier Transform¶
Let us first repeat the Fourier series and Fourier transform pairs:
$$\begin{align}x(t)&=\sum_{n=-\infty}^{\infty}c_n\exp(j2\pi \frac{n}{P}t) & c_n&=\frac{1}{P}\int_{-\frac{P}{2}}^{\frac{P}{2}}x(t)\exp(-j2\pi \frac{n}{P}t)dt & &\text{Fourier Series}\\ x(t)&=\int_{-\infty}^{\infty}X(f)\exp(j2\pi ft)df & X(f)&=\int_{-\infty}^{\infty}x(t)\exp(-j2\pi ft)dt & &\text{Fourier Transform}\end{align}$$
We already see that there is quite some similarity between the expressions for the series and the transform. Let us investigate their relation:
We know that the Fourier transform can be applied to an aperiodic signal, whereas the Fourier series is used for a periodic signal with period $P$. Furthermore, we see that the Fourier transform allows the signal $x(t)$ to consist of arbitrary frequencies $f$, whereas the periodic signal $x(t)$ in the Fourier series consists only of harmonics of the discrete frequencies $f_n=\frac{n}{P}$. Let us reformulate the Fourier series using the sifting property of the Dirac delta
$$\int_{-\infty}^{\infty}x(t)\delta(t-\tau)dt=x(\tau)$$
to become
$$x(t)=\int_{-\infty}^{\infty}X(f)\exp(j2\pi ft)df \text{ with }X(f)=\sum_{n=-\infty}^{\infty}c_n\delta(f-\frac{n}{P}).$$
The expression for $x(t)$ is now equal to the inverse Fourier transform, and we can already identify $X(f)$ as the spectrum of the periodic $x(t)$. We see that $X(f)$ of the periodic signal is discrete, i.e. it is nonzero at only the harmonic frequencies $\frac{n}{P}$. The difference between the discrete frequencies is $\frac{1}{P}$, i.e. it decreases with larger period lengths. If we now eventually assume $P\rightarrow\infty$, i.e. we let the period duration of the signal become infinite, we directly end up with the expression for the Fourier transform, because
$$\lim_{P\rightarrow\infty}\sum_{n=-\infty}^{\infty}c_n\delta(f-\frac{n}{P})$$
becomes a continuous function of $f$, since the Diracs get closer and closer together, eventually merging into a smooth function (intuitively; a mathematically rigorous treatment is omitted here).
Let us eventually verify this relation numerically: We take a single rectangular pulse and increase its period's length, i.e. we keep the length of the rect pulse constant, but increase the distance between the pulses, eventually leading to a single, aperiodic pulse, when the period duration becomes infinite:
def compareSeriesAndTransform(P):
    Fs = 1000                      # sampling frequency
    t = np.arange(0, 100, 1/Fs)    # long time axis for the aperiodic signal
    t_period = np.arange(0, P, 1/Fs)
    x_p = lambda t: (abs((t % P)-0.5) <= 0.5).astype(float)   # periodic rect train with period P
    x = lambda t: (abs(t-0.5) <= 0.5).astype(float)           # single, aperiodic rect pulse
    plt.plot(t, x_p(t))
    cn = fourierSeries_cn(x_p(t_period), 100)[:,0]            # Fourier series coefficients
    f_discrete = np.arange(len(cn))/P                         # harmonic frequencies n/P
    f = np.linspace(0, Fs, len(t), endpoint=False)
    X = np.fft.fft(x(t))/Fs                                   # Fourier transform of the single pulse
    plt.plot(f, abs(X), label='Fourier Tr. of rect')
    plt.stem(f_discrete, abs(cn*P), label='Fourier Series $c_n$')
    plt.legend()
As we have expected, the Fourier series provides a discrete spectrum of the periodic signal. The value of the discrete samples is equal to the value of the Fourier transform of the aperiodic signal.
Summary¶
The Fourier Series can be formulated in 3 ways:
$$\begin{align}1)\quad x(t)&=\frac{a_0}{2}+\sum_{n=1}^\infty a_n\cos(2\pi nt/P)+b_n\sin(2\pi nt/P) & a_n&=\frac{2}{P}\int_{-\frac{P}{2}}^\frac{P}{2}x(t)\cos(2\pi nt/P)dt & b_n&=\frac{2}{P}\int_{-\frac{P}{2}}^\frac{P}{2}x(t)\sin(2\pi nt/P)dt \\ 2)\quad x(t)&=\frac{a_0}{2}+\sum_{n=1}^\infty A_n\cos(2\pi nt/P+\phi_n) & A_n&=\sqrt{a_n^2+b_n^2} & \phi_n&=\tan^{-1}(-b_n/a_n)\\ 3)\quad x(t)&=\sum_{n=-\infty}^{\infty}c_n\exp(j2\pi nt/P) & c_n&=\frac{1}{P}\int_{-\frac{P}{2}}^{\frac{P}{2}}x(t)\exp(-j2\pi nt/P)dt\end{align}$$
The Fourier Transform can be understood as the limiting case of the complex Fourier series, when the period grows to infinity.
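As a small illustration of how the three formulations fit together (my own sketch, not part of the original article; the rectangular test signal and the number of harmonics are arbitrary choices), the following code converts the real coefficients $(a_n, b_n)$ into the amplitude/phase form $(A_n, \phi_n)$ and the complex form $c_n$, and checks that all three reconstruct the same truncated signal:

import numpy as np

P = 1.0                                     # period
t = np.arange(0, P, 1e-3)                   # one period, sampled
x = (np.abs(t - 0.2) < 0.1).astype(float)   # arbitrary test signal (rect pulse)
N = 50                                      # number of harmonics

n = np.arange(1, N + 1)[:, None]
a0 = 2 * x.mean()
an = 2 * (x * np.cos(2 * np.pi * n * t / P)).mean(axis=1)
bn = 2 * (x * np.sin(2 * np.pi * n * t / P)).mean(axis=1)

An, phin = np.sqrt(an**2 + bn**2), np.arctan2(-bn, an)   # form 2
cn = (an - 1j * bn) / 2                                  # form 3 (n > 0)

x1 = a0 / 2 + (an[:, None] * np.cos(2 * np.pi * n * t / P)
               + bn[:, None] * np.sin(2 * np.pi * n * t / P)).sum(axis=0)
x2 = a0 / 2 + (An[:, None] * np.cos(2 * np.pi * n * t / P + phin[:, None])).sum(axis=0)
x3 = a0 / 2 + 2 * np.real(cn[:, None] * np.exp(2j * np.pi * n * t / P)).sum(axis=0)

print(np.allclose(x1, x2), np.allclose(x1, x3))   # both should be True

The last reconstruction uses the symmetry $c_{-n}=\overline{c_n}$ of real signals, which collapses the two-sided complex sum to $2\,\mathrm{Re}(c_n\exp(j2\pi nt/P))$ for $n>0$.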
Do you have questions or comments? Let's discuss below!
Polymorphisms in the Perilipin Gene May Affect Carcass Traits of Chinese Meat-type Chickens
Zhang, Lu;Zhu, Qing;Liu, Yiping;Gilbert, Elizabeth R.;Li, Diyan;Yin, Huadong;Wang, Yan;Yang, Zhiqin;Wang, Zhen;Yuan, Yuncong;Zhao, Xiaoling 763
https://doi.org/10.1186/s12879-021-06486-w
Improved meat quality and greater muscle yield are highly sought after in high-quality chicken breeding programs. Past studies indicated that polymorphisms of the Perilipin gene (PLIN1) are highly associated with adiposity in mammals and are potential molecular markers for improving meat quality and carcass traits in chickens. In the present study, we screened single nucleotide polymorphisms (SNPs) in all exons of the PLIN1 gene with a direct sequencing method in six populations with different genetic backgrounds (total 240 individuals). We evaluated the association between the polymorphisms and carcass and meat quality traits. We identified three SNPs, located on the 5' flanking region and exon 1 of PLIN1 on chromosome 10 (rs315831750, rs313726543, and rs80724063, respectively). Eight main haplotypes were constructed based on these SNPs. We calculated the allelic and genotypic frequencies, and genetic diversity parameters of the three SNPs. The polymorphism information content (PIC) ranged from 0.2768 to 0.3750, which reflected an intermediate genetic diversity for all chickens. The CC, CT, and TT genotypes influenced the percentage of breast muscle (PBM), percentage of leg muscle (PLM) and percentage of abdominal fat at rs315831750 (p<0.05). Diplotypes (haplotype pairs) affected the percentage of eviscerated weight (PEW) and PBM (p<0.05). Compared with chickens carrying other diplotypes, H3H7 had the greatest PEW and H2H2 had the greatest PBM, and those with diplotype H7H7 had the smallest PEW and PBM. We conclude that PLIN1 gene polymorphisms may affect broiler carcass and breast muscle yields, and diplotypes H3H7 and H2H2 could be positive molecular markers to enhance PEW and PBM in chickens.
Multiple Genes Related to Muscle Identified through a Joint Analysis of a Two-stage Genome-wide Association Study for Racing Performance of 1,156 Thoroughbreds
Shin, Dong-Hyun;Lee, Jin Woo;Park, Jong-Eun;Choi, Ik-Young;Oh, Hee-Seok;Kim, Hyeon Jeong;Kim, Heebal 771
Thoroughbred, a relatively recent horse breed, is best known for its use in horse racing. Although myostatin (MSTN) variants have been reported to be highly associated with horse racing performance, the trait is more likely to be polygenic in nature. The purpose of this study was to identify genetic variants strongly associated with racing performance by using estimated breeding value (EBV) for race time as a phenotype. We conducted a two-stage genome-wide association study to search for genetic variants associated with the EBV. In the first stage of genome-wide association study, a relatively large number of markers (~54,000 single-nucleotide polymorphisms, SNPs) were evaluated in a small number of samples (240 horses). In the second stage, a relatively small number of markers identified to have large effects (170 SNPs) were evaluated in a much larger number of samples (1,156 horses). We also validated the SNPs related to MSTN known to have large effects on racing performance and found significant associations in the stage two analysis, but not in stage one. We identified 28 significant SNPs related to 17 genes. Among these, six genes have a function related to myogenesis and five genes are involved in muscle maintenance. To our knowledge, these genes are newly reported for the genetic association with racing performance of Thoroughbreds. It complements a recent horse genome-wide association studies of racing performance that identified other SNPs and genes as the most significant variants. These results will help to expand our knowledge of the polygenic nature of racing performance in Thoroughbreds.
Genetic Effects of Polymorphisms in Myogenic Regulatory Factors on Chicken Muscle Fiber Traits
Yang, Zhi-Qin;Qing, Ying;Zhu, Qing;Zhao, Xiao-Ling;Wang, Yan;Li, Di-Yan;Liu, Yi-Ping;Yin, Hua-Dong 782
The myogenic regulatory factors is a family of transcription factors that play a key role in the development of skeletal muscle fibers, which are the main factors to affect the meat taste and texture. In the present study, we performed candidate gene analysis to identify single-nucleotide polymorphisms in the MyoD, Myf5, MyoG, and Mrf4 genes using polymerase chain reaction-single strand conformation polymorphism in 360 Erlang Mountain Chickens from three different housing systems (cage, pen, and free-range). The general linear model procedure was used to estimate the statistical significance of association between combined genotypes and muscle fiber traits of chickens. Two polymorphisms (g.39928301T>G and g.11579368C>T) were detected in the Mrf4 and MyoD gene, respectively. The diameters of thigh and pectoralis muscle fibers were higher in birds with the combined genotypes of GG-TT and TTCT (p<0.05). Moreover, the interaction between housing system and combined genotypes has no significant effect on the traits of muscle fiber (p>0.05). Our findings suggest that the combined genotypes of TT-CT and GG-TT might be advantageous for muscle fiber traits, and could be the potential genetic markers for breeding program in Erlang Mountain Chickens.
Proteomic Analysis of Bovine Pregnancy-specific Serum Proteins by 2D Fluorescence Difference Gel Electrophoresis
Lee, Jae Eun;Lee, Jae Young;Kim, Hong Rye;Shin, Hyun Young;Lin, Tao;Jin, Dong Il 788
Two dimensional-fluorescence difference gel electrophoresis (2D DIGE) is an emerging technique for comparative proteomics, which improves the reproducibility and reliability of differential protein expression analysis between samples. The purpose of this study was to investigate bovine pregnancy-specific proteins in the proteome between bovine pregnant and non-pregnant serum using DIGE technique. Serums of 2 pregnant Holstein dairy cattle at day 21 after artificial insemination and those of 2 non-pregnant were used in this study. The pre-electrophoretic labeling of pregnant and non-pregnant serum proteins were mixed with Cy3 and Cy5 fluorescent dyes, respectively, and an internal standard was labeled with Cy2. Labeled proteins with Cy2, Cy3, and Cy5 were separated together in a single gel, and then were detected by fluorescence image analyzer. The 2D DIGE method using fluorescence CyDye DIGE flour had higher sensitivity than conventional 2D gel electrophoresis, and showed reproducible results. Approximately 1,500 protein spots were detected by 2D DIGE. Several proteins showed a more than 1.5-fold up and down regulation between non-pregnant and pregnant serum proteins. The differentially expressed proteins were identified by MALDI-TOF mass spectrometer. A total 16 protein spots were detected to regulate differentially in the pregnant serum, among which 7 spots were up-regulated proteins such as conglutinin precursor, modified bovine fibrinogen and IgG1, and 6 spots were down-regulated proteins such as hemoglobin, complement component 3, bovine fibrinogen and IgG2a three spots were not identified. The identified proteins demonstrate that early pregnant bovine serum may have several pregnancy-specific proteins, and these could be a valuable information for the development of pregnancy-diagnostic markers in early pregnancy bovine serum.
Milk Yield, Composition, and Fatty Acid Profile in Dairy Cows Fed a High-concentrate Diet Blended with Oil Mixtures Rich in Polyunsaturated Fatty Acids
Thanh, Lam Phuoc;Suksombat, Wisitiporn 796
To evaluate the effects of feeding linseed oil or/and sunflower oil mixed with fish oil on milk yield, milk composition and fatty acid (FA) profiles of dairy cows fed a high-concentrate diet, 24 crossbred primiparous lactating dairy cows in early lactation were assigned to a completely randomized design experiment. All cows were fed a high-concentrate basal diet and 0.38 kg dry matter (DM) molasses per day. Treatments were composed of a basal diet without oil supplement (Control), or diets of (DM basis) 3% linseed and fish oils (1:1, w/w, LSO-FO), or 3% sunflower and fish oils (1:1, w/w, SFO-FO), or 3% mixture (1:1:1, w/w) of linseed, sunflower, and fish oils (MIX-O). The animals fed SFO-FO had a 13.12% decrease in total dry matter intake compared with the control diet (p<0.05). No significant change was detected for milk yield; however, the animals fed the diet supplemented with SFO-FO showed a depressed milk fat yield and concentration by 35.42% and 27.20%, respectively, compared to those fed the control diet (p<0.05). Milk c9, t11-conjugated linoleic acid (CLA) proportion increased by 198.11% in the LSO-FO group relative to the control group (p<0.01). Milk C18:3n-3 (ALA) proportion was enhanced by 227.27% supplementing with LSO-FO relative to the control group (p<0.01). The proportions of milk docosahexaenoic acid (DHA) were significantly increased (p<0.01) in the cows fed LSO-FO (0.38%) and MIX-O (0.23%) compared to the control group (0.01%). Dietary inclusion of LSO-FO mainly increased milk c9, t11-CLA, ALA, DHA, and n-3 polyunsaturated fatty acids (PUFA), whereas feeding MIX-O improved preformed FA and unsaturated fatty acids (UFA). While the lowest n-6/n-3 ratio was found in the LSO-FO, the decreased atherogenecity index (AI) and thrombogenicity index (TI) seemed to be more extent in the MIX-O. Therefore, to maximize milk c9, t11-CLA, ALA, DHA, and n-3 PUFA and to minimize milk n-6/n-3 ratio, AI and TI, an ideal supplement would appear to be either LSO-FO or MIX-O.
Gas Exchanges and Dehydration in Different Intensities of Conditioning in Tifton 85 Bermudagrass: Nutritional Value during Hay Storage
Pasqualotto, M.;Neres, M.A.;Guimaraes, V.F.;Klein, J.;Inagaki, A.M.;Ducati, C. 807
The present study aimed at evaluating the intensity of Tifton 85 conditioning using a mower conditioner with free-swinging flail fingers and storage times on dehydration curve, fungi presence, nutritional value and in vitro digestibility of Tifton 85 bermudagrass hay dry matter (DM). The dehydration curve was determined in the whole plant for ten times until the baling. The zero time corresponded to the plant before cutting, which occurred at 11:00 and the other collections were carried out at 8:00, 10:00, 14:00, and 16:00. The experimental design was randomised blocks with two intensities of conditioning (high and low) and ten sampling times, with five replications. The high and low intensities related to adjusting the deflector plate of the free iron fingers (8 and 18 cm). In order to determine gas exchanges during Tifton 85 bermudagrass dehydration, there were evaluations of mature leaves, which were placed in the upper middle third of each branch before the cutting, at every hour for 4 hours. A portable gas analyser was used by an infrared IRGA (6400xt). The analysed variables were photosynthesis (A), stomatal conductance (gs), internal $CO_2$ concentration (Ci), transpiration (T), water use efficiency (WUE), and intrinsic water use efficiency (WUEi). In the second part of this study, the nutritional value of Tifton 85 hay was evaluated, so randomised blocks were designed in a split plot through time, with two treatments placed in the following plots: high and low intensity of cutting and five different time points as subplots: cutting (additional treatment), baling and after 30, 60, and 90 days of storage. Subsequently, fungi that were in green plants as well as hay were determined and samples were collected from the grass at the cutting period, during baling, and after 30, 60, and 90 days of storage. It was observed that Tifton 85 bermudagrass dehydration occurred within 49 hours, so this was considered the best time for drying hay. Gas exchanges were more intense before cutting, although after cutting they decreased until ceasing within 4 hours. The lowest values of acid detergent insoluble nitrogen were obtained with low conditioning intensity after 30 days of storage, 64.8 g/kg DM. The in vitro dry matter of Tifton 85 bermudagrass did not differ among the storage times or the conditioning intensities. There was no fungi present in the samples collected during the storage period up to 90 days after dehydration, with less than 30 colony forming units found on plate counting. The use of mower conditioners in different intensities of injury did not speed up the dehydration time of Tifton 85.
Aerobic Stability and Effects of Yeasts during Deterioration of Non-fermented and Fermented Total Mixed Ration with Different Moisture Levels
Hao, W.;Wang, H.L.;Ning, T.T.;Yang, F.Y.;Xu, C.C. 816
The present experiment evaluated the influence of moisture level and anaerobic fermentation on aerobic stability of total mixed ration (TMR). The dynamic changes in chemical composition and microbial population that occur after air exposure were examined, and the species of yeast associated with the deterioration process were also identified in both non-fermented and fermented TMR to deepen the understanding of aerobic deterioration. The moisture levels of TMR in this experiment were adjusted to 400 g/kg (low moisture level, LML), 450 g/kg (medium moisture level, MML), and 500 g/kg (high moisture level, HML), and both non-fermented and 56-d-fermented TMR were subjected to air exposure to determine aerobic stability. Aerobic deterioration resulted in high losses of nutritional components and largely reduced dry matter digestibility. Non-fermented TMR deteriorated during 48 h of air exposure and the HML treatment was more aerobically unstable. On dry matter (DM) basis, yeast populations significantly increased from $10^7$ to $10^{10}cfu/g$ during air exposure, and Candida ethanolica was the predominant species during deterioration in non-fermented TMR. Fermented TMR exhibited considerable resistance to aerobic deterioration. Spoilage was only observed in the HML treatment and its yeast population increased dramatically to $10^9cfu/g$ DM when air exposure progressed to 30 d. Zygosaccharomyces bailii was the sole yeast species isolated when spoilage occurred. These results confirmed that non-fermented and fermented TMR with a HML are more prone to spoilage, and fermented TMR has considerable resistance to aerobic deterioration. Yeasts can trigger aerobic deterioration in both non-fermented and fermented TMR. C. ethanolica may be involved in the spoilage of non-fermented TMR and the vigorous growth of Z. bailii can initiate aerobic deterioration in fermented TMR.
Effects of Benzoic Acid and Thymol on Growth Performance and Gut Characteristics of Weaned Piglets
Diao, Hui;Zheng, Ping;Yu, Bing;He, Jun;Mao, Xiangbing;Yu, Jie;Chen, Daiwen 827
A total of 144 weaned crossed pigs were used in a 42-d trial to explore the effects of different concentrations/combinations of benzoic acid and thymol on growth performance and gut characteristics in weaned pigs. Pigs were randomly allotted to 4 dietary treatments: i) control (C), basal diet, ii) C+1,000 mg/kg benzoic acid+100 mg/kg thymol (BT1), iii) C+1,000 mg/kg benzoic acid+200 mg/kg thymol (BT2) and, iv) C+2,000 mg/kg benzoic acid+100 mg/kg thymol (BT3). Relative to the control, pigs fed diet BT3 had lower diarrhoea score during the overall period (p<0.10) and improved feed to gain ratio between days 1 to 14 (p<0.05), which was accompanied by improved apparent total tract digestibility of ether extract, Ca and crude ash (p<0.05), and larger lipase, lactase and sucrose activities in the jejunum (p<0.05) at d 14 and d 42. Similarly, relative to the control, pigs fed diet BT3 had higher counts for Lactobacillus spp in digesta of ileum at d 14 (p<0.05), and pigs fed diets BT1, BT2, or BT3 also had higher counts of Bacillus spp in digesta of caecum at d 14 (p<0.05), and lower concentration of ammonia nitrogen in digesta of caecum at d 14 and d 42 (p<0.05). Finally, pigs fed diet BT3 had higher concentration of butyric acid in digesta of caecum at d 42 (p<0.05), and a larger villus height:crypt depth ratio in jejunum and ileum at d 14 (p<0.05) than pigs fed the control diet. In conclusion, piglets fed diet supplementation with different concentrations/combinations of benzoic acid and thymol could improve feed efficiency and diarrhoea, and improve gut microfloral composition. The combination of 2,000 mg/kg benzoic acid+100 mg/kg thymol produced better effects than other treatments in most measurements.
Effects of Dietary Corticosterone on Yolk Colors and Eggshell Quality in Laying Hens
Kim, Yeon-Hwa;Kim, Jimin;Yoon, Hyung-Sook;Choi, Yang-Ho 840
The objective of this study was to investigate the effects of dietary corticosterone on egg quality. For 2 weeks hens received either control or experimental diet containing corticosterone at 30 mg/kg diet. Feed intake and egg production were monitored daily, and body weight measured weekly. Egg weights and egg quality were measured daily. Corticosterone treatment resulted in a remarkable increase in feed intake and sharp decrease in egg production compared with control (p<0.05) whereas body weight remained unchanged. Decreased albumen height, but no changes in egg weight, led to decreased Haugh unit (p<0.05). Corticosterone caused elevated eggshell thickness (p<0.05) without altering weight and strength, suggesting possible changes in shell structure. Yolk color and redness were increased by corticosterone (p<0.05) but lightness and yellowness were either not changed or inconsistent over the time period of measurements. Increased concentrations in plasma were also found for corticosterone, glucose, cholesterol, creatinine, uric acid, albumin, aspartate aminotransferase, creatine kinase, lactate dehydrogenase, total protein, and amylase (p<0.05), suggesting that corticosterone increased protein breakdown, renal dysfunctions and pancreatitis. Together, the current results imply that dietary corticosterone affects egg quality such as yolk colors and shell thickness, in addition to its effects on feed intake and egg production.
Effects of Inclusion Levels of Wheat Bran and Body Weight on Ileal and Fecal Digestibility in Growing Pigs
Huang, Q.;Su, Y.B.;Li, D.F.;Liu, L.;Huang, C.F.;Zhu, Z.P.;Lai, C.H. 847
The objective of this study was to determine the effects of graded inclusions of wheat bran (0%, 9.65%, 48.25% wheat bran) and two growth stages (from 32.5 to 47.2 kg and 59.4 to 78.7 kg, respectively) on the apparent ileal digestibility (AID), apparent total tract digestibility (ATTD) and hindgut fermentation of nutrients and energy in growing pigs. Six light pigs (initial body weight [BW] $32.5{\pm}2.1kg$) and six heavy pigs (initial BW $59.4{\pm}3.2kg$) were surgically prepared with a T-cannula in the distal ileum. A difference method was used to calculate the nutrient and energy digestibility of wheat bran by means of comparison with a basal diet consisting of corn-soybean meal (0% wheat bran). Two additional diets were formulated by replacing 9.65% and 48.25% wheat bran by the basal diet, respectively. Each group of pigs was allotted to a $6{\times}3$ Youden square design, and pigs were fed to three experimental diets during three 11-d periods. Hindgut fermentation values were calculated as the differences between ATTD and AID values. For the wheat bran diets, the AID and ATTD of dry matter (DM), ash, organic matter (OM), carbohydrates (CHO), gross energy (GE), and digestible energy (DE) decreased with increasing inclusion levels of wheat bran (p<0.05). While only AID of CHO and ATTD of DM, ash, OM, CHO, GE, and DE content differed (p<0.05) when considering the BW effect. For the wheat bran ingredient, there was a wider variation effect (p<0.01) on the nutrient and energy digestibility of wheat bran in 9.65% inclusion level due to the coefficient of variation (CV) of the nutrient and energy digestibility being higher at 9.65% compared to 48.25% inclusion level of wheat bran. Digestible energy content of wheat bran at 48.25% inclusion level (4.8 and 6.7 MJ/kg of DM, respectively) fermented by hindgut was significantly higher (p<0.05) than that in 9.65% wheat bran inclusion level (2.56 and 2.12 MJ/kg of DM, respectively), which was also affected (p<0.05) by two growth stages. This increase in hindgut fermentation caused the difference in ileal DE (p<0.05) to disappear at total tract level. All in all, increasing wheat bran levels in diets negatively influences the digestibility of some nutrients in pigs, while it positively affects the DE fermentation in the hindgut.
Angiotensin I-Converting Enzyme Inhibitor Activity on Egg Albumen Fermentation
Nahariah, N.;Legowo, A.M.;Abustam, E.;Hintono, A. 855
Lactobacillus plantarum is used for fermentation of fish products, meat and milk. However, the utilization of these bacteria in egg processing has not been done. This study was designed to evaluate the potential of fermented egg albumen as a functional food that is rich in angiotensin I-converting enzyme inhibitors activity (ACE-inhibitor activity) and is antihypertensive. A completely randomized design was used in this study with six durations of fermentation (6, 12, 18, 24, 30, and 36 h) as treatments. Six hundred eggs obtained from the same chicken farm were used in the experiment as sources of egg albumen. Bacteria L. plantarum FNCC 0027 used in the fermentation was isolated from cow's milk. The parameters measured were the total bacteria, dissolved protein, pH, total acid and the activity of ACE-inhibitors. The results showed that there were significant effects of fermentation time on the parameters tested. Total bacteria increased significantly during fermentation for 6, 12, 18, and 24 h and then decreased with the increasing time of fermentation to 30 and 36 h. Soluble protein increased significantly during fermentation to 18 h and then subsequently decreased during of fermentation to 24, 30, and 36 h. The pH value decreased markedly during fermentation. The activities of ACE-inhibitor in fermented egg albumen increased during fermentation to 18 h and then decreased with the increasing of the duration of fermentation to 24, 30, and 36 h. The egg albumen which was fermented for 18 h resulted in a functional food that was rich in ACE-inhibitor activity.
Estimation of Pork Quality Traits Using Exsanguination Blood and Postmortem Muscle Metabolites
Choe, J.H.;Choi, M.H.;Ryu, Y.C.;Go, G.W.;Choi, Y.M.;Lee, S.H.;Lim, K.S.;Lee, E.A.;Kang, J.H.;Hong, K.C.;Kim, B.C. 862
The current study was designed to estimate the pork quality traits using metabolites from exsanguination blood and postmortem muscle simultaneously under the Korean standard pre- and post-slaughter conditions. A total of 111 Yorkshire (pure breed and castrated male) pigs were evaluated under the Korean standard conditions. Measurements were taken of the levels of blood glucose and lactate at exsanguination, and muscle glycogen and lactate content at 45 min and 24 h postmortem. Certain pork quality traits were also evaluated. Correlation analysis and multiple regression analysis including stepwise regression were performed. Exsanguination blood glucose and lactate levels were positively correlated with each other, negatively related to postmortem muscle glycogen content and positively associated with postmortem muscle lactate content. A rapid and extended postmortem glycolysis was associated with high levels of blood glucose and lactate, with high muscle lactate content, and with low muscle glycogen content during postmortem. In addition, these were also correlated with paler meat color and reduced water holding capacity. The results of multiple regression analyses also showed that metabolites in exsanguination blood and postmortem muscle explained variations in pork quality traits. Especially, levels of blood glucose and lactate and content of muscle glycogen at early postmortem were significantly associated with an elevated early glycolytic rate. Furthermore, muscle lactate content at 24 h postmortem alone accounted for a considerable portion of the variation in pork quality traits. Based on these results, the current study confirmed that the main factor influencing pork quality traits is the ultimate lactate content in muscle via postmortem glycolysis, and that levels of blood glucose and lactate at exsanguination and contents of muscle glycogen and lactate at postmortem can explain a large portion of the variation in pork quality even under the standard slaughter conditions.
Molecular Analysis of Alternative Transcripts of the Equine Cordon-Bleu WH2 Repeat Protein-Like 1 (COBLL1) Gene
Park, Jeong-Woong;Jang, Hyun-Jun;Shin, Sangsu;Cho, Hyun-Woo;Choi, Jae-Young;Kim, Nam-Young;Lee, Hak-Kyo;Do, Kyong-Tak;Song, Ki-Duk;Cho, Byung-Wook 870
The purpose of this study was to investigate the alternative splicing in equine cordon-bleu WH2 repeat protein-like 1 (COBLL1) gene that was identified in horse muscle and blood leukocytes, and to predict functional consequences of alternative splicing by bioinformatics analysis. In a previous study, RNA-seq analysis predicted the presence of alternative spliced isoforms of equine COBLL1, namely COBLL1a as a long form and COBLL1b as a short form. In this study, we validated two isoforms of COBLL1 transcripts in horse tissues by the real-time polymerase chain reaction, and cloned them for Sanger sequencing. The sequencing results showed that the alternative splicing occurs at exon 9. Prediction of protein structure of these isoforms revealed three putative phosphorylation sites at the amino acid sequences encoded in exon 9, which is deleted in COBLL1b. In expression analysis, it was found that COBLL1b was expressed ubiquitously and equivalently in all the analyzed tissues, whereas COBLL1a showed strong expression in kidney, spinal cord and lung, moderate expression in heart and skeletal muscle, and low expression in thyroid and colon. In muscle, both COBLL1a and COBLL1b expression decreased after exercise. It is assumed that the regulation of COBLL1 expression may be important for regulating glucose level or switching of energy source, possibly through an insulin signaling pathway, in muscle after exercise. Further study is warranted to reveal the functional importance of COBLL1 on athletic performance in race horses.
Biocomputational Characterization and Evolutionary Analysis of Bubaline Dicer1 Enzyme
Singh, Jasdeep;Mukhopadhyay, Chandra Sekhar;Arora, Jaspreet Singh;Kaur, Simarjeet 876
Dicer, an ribonuclease type III type endonuclease, is the key enzyme involved in biogenesis of microRNAs (miRNAs) and small interfering RNAs (siRNAs), and thus plays a critical role in RNA interference through post transcriptional regulation of gene expression. This enzyme has not been well studied in the Indian water buffalo, an important species known for disease resistance and high milk production. In this study, the primary coding sequence (5,778 bp) of bubaline dicer (GenBank: AB969677.1) was determined and the bubaline Dicer1 biocomputationally characterized to determine the phylogenetic signature among higher eukaryotes. The evolutionary tree revealed that all the transcript variants of Dicer1 belonging to a specific species were within the same node and the sequences belonging to primates, rodents and lagomorphs, avians and reptiles formed independent clusters. The bubaline dicer1 is closely related to that of cattle and other ruminants and significantly divergent from dicer of lower species such as tapeworm, sea urchin and fruit fly. Evolutionary divergence analysis conducted using MEGA6 software indicated that dicer has undergone purifying selection over the time. Seventeen divergent sequences, representing each of the families/taxa were selected to study the specific regions of positive vis-$\grave{a}$-vis negative selection using different models like single likelihood ancestor counting, fixed effects likelihood, and random effects likelihood of Datamonkey server. Comparative analysis of the domain structure revealed that Dicer1 is conserved across mammalian species while variation both in terms of length of Dicer enzyme and presence or absence of domain is evident in the lower organisms.
Molecular Phylogenetic Diversity and Spatial Distribution of Bacterial Communities in Cooling Stage during Swine Manure Composting
Guo, Yan;Zhang, Jinliang;Yan, Yongfeng;Wu, Jian;Zhu, Nengwu;Deng, Changyan 888
Polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) and subsequent sub-cloning and sequencing were used in this study to analyze the molecular phylogenetic diversity and spatial distribution of bacterial communities in different spatial locations during the cooling stage of composted swine manure. Total microbial DNA was extracted, and bacterial near full-length 16S rRNA genes were subsequently amplified, cloned, RFLP-screened, and sequenced. A total of 420 positive clones were classified by RFLP and near-full-length 16S rDNA sequences. Approximately 48 operational taxonomic units (OTUs) were found among 139 positive clones from the superstratum sample; 26 among 149 were from the middle-level sample and 35 among 132 were from the substrate sample. Thermobifida fusca was common in the superstratum layer of the pile. Some Bacillus spp. were remarkable in the middle-level layer, and Clostridium sp. was dominant in the substrate layer. Among 109 OTUs, 99 displayed homology with those in the GenBank database. Ten OTUs were not closely related to any known species. The superstratum sample had the highest microbial diversity, and different and distinct bacterial communities were detected in the three different layers. This study demonstrated the spatial characteristics of the microbial community distribution in the cooling stage of swine manure compost.
Nitrogen Removal from Milking Center Wastewater via Simultaneous Nitrification and Denitrification Using a Biofilm Filtration Reactor
Won, Seung-Gun;Jeon, Dae-Yong;Kwag, Jung-Hoon;Kim, Jeong-Dae;Ra, Chang-Six 896
Milking center wastewater (MCW) has a relatively low ratio of carbon to nitrogen (C/N ratio), which should be separately managed from livestock manure due to the negative impacts of manure nutrients and harmful effects on down-stream in the livestock manure process with respect to the microbial growth. Simultaneous nitrification and denitrification (SND) is linked to inhibition of the second nitrification and reduces around 40% of the carbonaceous energy available for denitrification. Thus, this study was conducted to find the optimal operational conditions for the treatment of MCW using an attached-growth biofilm reactor; i.e., nitrogen loading rate (NLR) of 0.14, 0.28, 0.43, and $0.58kg\;m^{-3}\;d^{-1}$ and aeration rate of 0.06, 0.12, and $0.24\;m^3\;h^{-1}$ were evaluated and the comparison of air-diffuser position between one-third and bottom of the reactor was conducted. Four sand packed-bed reactors with the effective volume of 2.5 L were prepared and initially an air-diffuser was placed at one third from the bottom of the reactor. After the adaptation period of 2 weeks, SND was observed at all four reactors and the optimal NLR of $0.45kg\;m^{-3}\;d^{-1}$ was found as a threshold value to obtain higher nitrogen removal efficiency. Dissolved oxygen (DO) as one of key operational conditions was measured during the experiment and the reactor with an aeration rate of $0.12\;m^3\;h^{-1}$ showed the best performance of $NH_4-N$ removal and the higher total nitrogen removal efficiency through SND with appropriate DO level of ${\sim}0.5\;mg\;DO\;L^{-1}$. The air-diffuser position at one third from the bottom of the reactor resulted in better nitrogen removal than at the bottom position. Consequently, nitrogen in MCW with a low C/N ratio of 2.15 was successfully removed without the addition of external carbon sources.
Modelling Pasture-based Automatic Milking System Herds: System Fitness of Grazeable Home-grown Forages, Land Areas and Walking Distances
Islam, M.R.;Garcia, S.C.;Clark, C.E.F.;Kerrisk, K.L. 903
To maintain a predominantly pasture-based system, the large herd milked by automatic milking rotary would be required to walk significant distances. Walking distances of greater than 1-km are associated with an increased incidence of undesirably long milking intervals and reduced milk yield. Complementary forages can be incorporated into pasture-based systems to lift total home grown feed in a given area, thus potentially 'concentrating' feed closer to the dairy. The aim of this modelling study was to investigate the total land area required and associated walking distance for large automatic milking system (AMS) herds when incorporating complementary forage rotations (CFR) into the system. Thirty-six scenarios consisting of 3 AMS herds (400, 600, 800 cows), 2 levels of pasture utilisation (current AMS utilisation of 15.0 t dry matter [DM]/ha, termed as moderate; optimum pasture utilisation of 19.7 t DM/ha, termed as high) and 6 rates of replacement of each of these pastures by grazeable CFR (0%, 10%, 20%, 30%, 40%, 50%) were investigated. Results showed that AMS cows were required to walk greater than 1-km when the farm area was greater than 86 ha. Insufficient pasture could be produced within a 1 km distance (i.e. 86 ha land) with home-grown feed (HGF) providing 43%, 29%, and 22% of the metabolisable energy (ME) required by 400, 600, and 800 cows, respectively from pastures. Introduction of pasture (moderate): CFR in AMS at a ratio of 80:20 can feed a 400 cow AMS herd, and can supply 42% and 31% of the ME requirements for 600 and 800 cows, respectively with pasture (moderate): CFR at 50:50 levels. In contrast to moderate pasture, 400 cows can be managed on high pasture utilisation (provided 57% of the total ME requirements). However, similar to the scenarios conducted with moderate pasture, there was insufficient feed produced within 1-km distance of the dairy for 600 or 800 cows. An 800 cow herd required 140 and 130 ha on moderate and high pasture-based AMS system, respectively with the introduction of pasture: CFR at a ratio of 50:50. Given the impact of increasing land area past 86 ha on walking distance, cow numbers could be increased by purchasing feed from off the milking platform and/or using the land outside 1-km distance for conserved feed. However, this warrants further investigations into risk analyses of different management options including development of an innovative system to manage large herds in an AMS farming system.
Earth, Planets and Space
Spatial and temporal influence of rainfall on crustal pore pressure based on seismic velocity monitoring
Rezkia Dewi Andajani1,
Takeshi Tsuji (ORCID: orcid.org/0000-0003-0951-4596)1,2,3,
Roel Snieder4 &
Tatsunori Ikeda (ORCID: orcid.org/0000-0002-6151-4043)1,2
Earth, Planets and Space volume 72, Article number: 177 (2020)
Crustal pore pressure, which controls the activities of earthquakes and volcanoes, varies in response to rainfall. The status of pore pressure can be inferred from observed changes in seismic velocity. In this study, we investigate the response of crustal pore pressure to rainfall in southwestern Japan based on time series of seismic velocity derived from ambient noise seismic interferometry. To consider the heterogeneity of the area, rainfall and seismic velocity obtained at each location were directly compared. We used a band-pass filter to distinguish the rainfall variability from sea level and atmospheric pressure, and then calculated the cross-correlation between rainfall and variations in S-wave velocity (Vs). A mostly negative correlation between rainfall and Vs changes indicates groundwater recharge by rainfall, which increases pore pressure. The correlations differ between locations, where most of the observation stations with clear negative cross-correlations were located in areas of granite. On the other hand, we could not observe clear correlations in steep mountain areas, possibly because water flows through river without percolation. This finding suggests that geographical features contribute to the imprint of rainfall on deep formation pore pressure. We further modelled pore pressure change due to rainfall based on diffusion mechanism. A strong negative correlation between pore pressure estimated from rainfall and Vs indicates that the Vs variations are triggered by pore pressure diffusion in the deep formation. Our modelling results show a spatial variation of diffusion parameter which controls the pore pressure in deep formation. By linking the variations in seismic velocity and crustal pore pressure spatially, this study shows that seismic monitoring may be useful in evaluating earthquake triggering processes or volcanic activity.
Pore pressure plays a key role in the occurrence of earthquakes and the volcanic activities (Albino et al. 2018; Ellsworth 2013; Tsuji et al. 2014). Under conditions of critical stress and high pore pressure, small increases in pore pressure can trigger seismicity. Therefore, monitoring the status of pore pressure is a vital part of evaluating dynamic crustal activities. Because pore pressure affects seismic velocity, the state of pore pressure can be assessed by seismic velocity monitoring (Chaves and Schwartz 2016; Hutapea et al. 2020; Ikeda and Tsuji 2018; Nimiya et al. 2017; Rivet et al. 2015; Tsuji et al. 2008; Wang et al. 2017).
In field observations, changes in seismic velocity can be induced by various environmental perturbations (Wang et al. 2017) because seismic velocity is sensitive to variations in stress and water saturation (Grêt et al. 2006). Such perturbations include ocean tides and solid earth tides (Sens-Schönfelder and Eulenfeld 2019), and seismic velocity in coastal locations is sensitive to tidal ocean loading (Yamamura et al. 2003). The influence of the ocean is considered in studies of ambient seismic noise (Hillers et al. 2012). Atmospheric pressure influences seismic velocity over large regions (Niu et al. 2008; Silver et al. 2007), and atmospheric temperature likewise generates seasonal variations in seismic velocity through changes in crustal strain (Ben-Zion and Leary 1986; Berger 1975; Prawirodirdjo et al. 2006), especially in arid regions (Hillers et al. 2015; Richter et al. 2014).
Rainfall and snow are well-known hydrological perturbations by which pore pressure induces seismic velocity changes. For example, the interaction of hydrothermal systems and surface loading from precipitation can lead to seismic velocity reductions (Taira and Brenguier 2016). Snow decreases seismic velocity through increased pore pressure resulting from ice accumulation (Mordret et al. 2016), whereas frost increases seismic velocity at shallow depths by increasing the shear modulus of near-surface materials (Gassenmeier et al. 2015; Ikeda et al. 2018; Tsuji et al., 2012). Rainfall decreases seismic velocity through changes in effective stress (Nakata and Snieder 2012; Miao et al. 2018) and groundwater level (Gassenmeier et al. 2015; Meier et al. 2010; Sens-Schönfelder and Wegler 2006; Tsai 2011). Rainfall triggers seismicity through pore pressure changes caused by crustal loading and unloading (Bettinelli et al. 2008) and pore pressure diffusion (Hainzl et al. 2006; Kraft et al. 2006). Because percolation of water through porous rock may be a contributor to pore pressure changes, we investigated the spatial and temporal relationships between seismic velocity changes and rainfall in a well-instrumented region of Japan.
Crustal deformation in Japan can be evaluated with abundant data from seismic and geodetic observation stations (Aoi et al. 2020). The crust is affected by perturbations from volcanic and seismic activities (Ueda et al. 2013) and surface loads (Heki 2004), including non-tidal ocean loading (Sato et al. 2001). Recent studies have shown that observed seismic velocity changes reflect volcanic activity (Takano et al. 2017; Yukutake et al. 2016) and earthquake activity (Nimiya et al. 2017). Furthermore, seasonal spatial patterns of seismic velocity change throughout Japan can be explained by seasonal variations in rainfall, snow, and sea level (Wang et al. 2017).
This study uses records of seismic velocity changes estimated from ambient noise monitoring in the Chugoku and Shikoku regions of southwest Japan (Fig. 1). This area receives high rainfall from the summer monsoon (Aizen et al. 2001) and is relatively unaffected by volcanic activity and snowfall. To evaluate the influence of rainfall on seismic velocity changes, we performed two-step analyses. In the first step, we sought to identify locations where seismic velocity could be affected by rainfall by directly comparing the seismic velocity to the rainfall via cross-correlations (e.g., Bièvre et al. 2018). The time delay resulting from the cross-correlations helps to constrain near-surface conditions that could be related to lithology-related permeability. In the second step, we modelled pore pressure change due to pore pressure diffusion to estimate a hydrological parameter (i.e. diffusion rate) for the locations where precipitation influence was clearly estimated in the first step. By comparing the seismic velocity change and modelled pore pressure change, we sought to estimate spatial variation of the diffusion parameter in deeper lithology which contributes to predicting precipitation-related pore pressure changes from seismic velocity in Chugoku and Shikoku regions.
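For illustration only, a minimal sketch of this first step might look as follows (this is my own example, not the authors' processing code; the filter band, sampling rate, synthetic data, and maximum lag are placeholder assumptions): band-pass filter the daily rainfall and velocity-change series, then scan a normalized cross-correlation over a range of time lags.

import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, f_lo, f_hi, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [f_lo / (fs / 2), f_hi / (fs / 2)], btype='band')
    return filtfilt(b, a, x)

def lagged_correlation(rain, dvs, max_lag):
    """Normalized cross-correlation for lags 0..max_lag (velocity lagging rainfall)."""
    r = (rain - rain.mean()) / rain.std()
    v = (dvs - dvs.mean()) / dvs.std()
    lags = np.arange(max_lag + 1)
    cc = np.array([np.mean(r[:len(r) - lag] * v[lag:]) for lag in lags])
    return lags, cc

# Synthetic daily series (1 sample per day): velocity changes made to respond
# negatively to rainfall with a delay, plus noise.
fs = 1.0
rng = np.random.default_rng(0)
rain = rng.gamma(0.5, 5.0, 3 * 365)
dvs = -0.001 * np.convolve(rain, np.exp(-np.arange(30) / 10.0), mode='full')[:len(rain)]
dvs = dvs + 0.0005 * rng.standard_normal(len(rain))

rain_f = bandpass(rain, fs, 1 / 120, 1 / 10)   # keep periods between ~10 and 120 days
dvs_f = bandpass(dvs, fs, 1 / 120, 1 / 10)
lags, cc = lagged_correlation(rain_f, dvs_f, max_lag=60)
print("most negative correlation at lag (days):", lags[np.argmin(cc)])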
a Location map of Japan showing the study area in the Chugoku–Shikoku region. b Topographic and c geological maps of the Chugoku and Shikoku regions (modified from Geological Survey of Japan AIST 2015). The dots on the geological map represent the locations of seismic stations. The colour of the dots in b and c indicates the time lag shown in Fig. 7c. Maps of the study area showing d seismic stations, e precipitation gauges, and f ocean tidal stations, pressure gauges, and groundwater level (GWL) stations. The red circle in d–f indicates the seismic station (N.YSHH) for which the correlations in Figs. 3, 6, and 11 are computed. The yellow circles in d and e represent the station pairs and precipitation gauges, respectively, within 40 km of the selected seismic station
Well-quantified monitoring results could provide useful information for evaluating earthquake triggering mechanisms. Furthermore, in CO2 geological storage projects and geothermal developments, earthquakes induced by fluid injection are a notable public concern. Accurate knowledge of natural pore pressure variations can help in distinguishing whether an earthquake is a natural event triggered by environmental variations or an induced event triggered by fluid injection.
The Chugoku and Shikoku regions are located in southwest Japan (Fig. 1). The Chugoku region is characterized by gently sloping mountainous topography, whereas the mountain slopes on Shikoku Island are mostly steep (Fig. 1b). Figure 1c shows the rock types of our study area from the geological map (Geological Survey of Japan AIST 2015). The Chugoku region is dominated by Cretaceous volcanic and granitic rocks, along with granitic rocks of Paleocene to Early Eocene age and Late Pleistocene to Holocene sediments in northern Chugoku. In the Shikoku region, Late Cretaceous granite is found in northern Shikoku, while Sanbagawa metamorphic rocks are widely distributed across the centre of the island. Sandstone of Cretaceous–Oligocene accretionary complexes is mainly located in southern Shikoku, where the mountains are steep.
We collected data on seismic velocity changes, precipitation, atmospheric pressure, and sea-level change for the period 2015–2017 in the Chugoku and Shikoku regions. The meteorological data were obtained from the Japan Meteorological Agency (JMA), and we used seismic data from 98 Hi-net seismometers operated by the National Research Institute for Earth Science and Disaster Resilience (NIED). At each Hi-net station, a three-component velocity sensor with a natural frequency of 1 Hz is installed at the bottom of a borehole (Obara et al. 2005).
We estimated seismic velocity changes on the basis of ambient-noise coda wave interferometry using the vertical component of ambient noise (Hutapea et al. 2020; Nimiya et al. 2017). To obtain virtual seismograms propagating between pairs of stations, two traces \({f}_{\text{A}}(t)\) and \({f}_{\text{B}}(t)\) recorded at seismometers A and B were transformed into the frequency domain by the Fourier transform:
$$F_{\text{A}}\left( \omega \right) = \frac{1}{2\pi }\int_{ - \infty }^{\infty } f_{\text{A}}\left( t \right)e^{ - i\omega t} \,{\text{d}}t, \qquad F_{\text{B}}\left( \omega \right) = \frac{1}{2\pi }\int_{ - \infty }^{\infty } f_{\text{B}}\left( t \right)e^{ - i\omega t} \,{\text{d}}t,$$
where \({F}_{\text{A}}\) and \({F}_{\text{B}}\) are the seismic waveforms in the frequency domain \((\omega )\) recorded at seismometers A and B.
The power-normalized cross-correlation (cross-coherence) was applied in the frequency domain between seismometers A and B (e.g., Nakata et al. 2011, 2015) by
$${CC}_{\text{AB}}\left(\omega \right) = \frac{{F}_{\text{A}}(\omega ) {F}_{\text{B}}^{*}(\omega )}{|{F}_{\text{A}}(\omega )||{F}_{\text{B}}(\omega )|},$$
where the asterisk (*) denotes a complex conjugate.
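As an illustration of this step, a minimal NumPy sketch of the cross-coherence computation is given below. The function name, the stabilizing constant added to the spectral amplitudes, and the synthetic test traces are our own assumptions and are not part of the authors' processing code.

```python
import numpy as np

def cross_coherence(fa, fb, eps=1e-12):
    """Power-normalized cross-correlation (cross-coherence) of two traces.

    fa, fb : equal-length 1-D arrays recorded at seismometers A and B.
    Returns the time-domain cross-coherence, i.e. the inverse FFT of
    F_A(w) conj(F_B(w)) / (|F_A(w)| |F_B(w)|).
    """
    FA = np.fft.rfft(fa)
    FB = np.fft.rfft(fb)
    cc = FA * np.conj(FB) / (np.abs(FA) * np.abs(FB) + eps)
    return np.fft.irfft(cc, n=len(fa))

# Illustrative synthetic example: trace_b is a delayed, noisier copy of trace_a
rng = np.random.default_rng(0)
trace_a = rng.standard_normal(86400)                 # e.g. one day at 1 Hz
trace_b = np.roll(trace_a, 30) + 0.5 * rng.standard_normal(86400)
ccf = cross_coherence(trace_a, trace_b)
```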
Changes in seismic velocity between pairs of seismometers were estimated by the stretching interpolation method (Hadziioannou et al. 2009; Hutapea et al. 2020; Minato et al. 2012; Nimiya et al. 2017). This method stretches the time axis of the current trace and searches for the stretched trace most similar to the reference trace by means of the correlation coefficient \(\text{CC}(\varepsilon )\) between the reference trace and the current trace:
$$\text{CC}(\varepsilon ) = \frac{\int {f}_{\varepsilon }^{\text{cur}}\left(t\right){f}^{\text{ref}}\left(t\right)\text{d}t}{{(\int {({f}_{\varepsilon }^{\text{cur}}(t))}^{2}\text{d}t\int {({f}^{\text{ref}}(t))}^{2}\text{d}t)}^{1/2}},$$
$${f}_{\varepsilon }^{\text{cur}}\left(t\right)={f}^{\text{cur}}\left(t\left(1+\varepsilon \right)\right),$$
where \({f}^{\text{ref}}\) is the reference trace, \({f}^{\text{cur}}\) is the current trace, and \(t\) is time. The stretching parameter \(\varepsilon\) is related to the relative time shift (\(\Delta t/t\)) and velocity change (\(\Delta v/v\)) by
$$\varepsilon = \Delta t/t = -\Delta v/v.$$
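A compact sketch of this stretching measurement is shown below, assuming the reference and current coda traces are available as NumPy arrays sampled at a common interval; the ±5% search grid for the stretching factor is an illustrative assumption.

```python
import numpy as np

def stretching_dvv(ref, cur, dt, eps_grid=np.linspace(-0.05, 0.05, 501)):
    """Estimate dv/v by the stretching method.

    ref, cur : reference and current coda traces sampled at interval dt (s).
    For each candidate stretch eps, the current trace is evaluated at
    t*(1 + eps) and correlated with the reference; dv/v = -eps at the
    maximum of CC(eps). Samples stretched beyond the trace end are clamped,
    which is adequate for a sketch.
    """
    t = np.arange(len(ref)) * dt
    best_eps, best_cc = 0.0, -np.inf
    for eps in eps_grid:
        stretched = np.interp(t * (1.0 + eps), t, cur)   # f_cur(t(1+eps))
        cc = np.dot(stretched, ref) / np.sqrt(
            np.dot(stretched, stretched) * np.dot(ref, ref))
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return -best_eps, best_cc                            # (dv/v, CC at optimum)
```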
A 100-s time window of the coda waves was used to obtain velocity changes by stretching interpolation. The seismic velocity change was estimated independently for each year by defining the 1-year stack of the coda of the cross-correlation data as the reference trace \({f}^{\text{ref}}\) and the 10-day stack of the coda of the cross-correlation data as the current trace \({f}^{\text{cur}}\). To stabilize the monitoring results over the 3-year term, we used the sliding reference method (SRM) to define the reference trace (see Hutapea et al. 2020), in which the reference trace is changed for each year. For example, to estimate daily seismic velocity changes in 2015, we defined the coda of the cross-correlations stacked over the whole of 2015 as the reference trace. The daily velocity change was taken to represent the velocity change in the middle of the 10-day window of the current trace. The frequency range of the seismic data was restricted to 0.1–0.9 Hz, which corresponds to the sensitivity of surface waves to S-wave velocity between depths of 1 and 8 km (e.g., Nimiya et al. 2017). To obtain a seismic velocity change for each station (Fig. 1d), we applied spatial averaging within a radius of 40 km.
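The sliding-reference bookkeeping described above can be sketched as follows, assuming the daily cross-coherence functions for a station pair are stored as a 2-D array (days × lag samples). The array layout, the helper `stretching_dvv` from the previous sketch, and the centred 10-day window are assumptions made for illustration.

```python
import numpy as np

def srm_dvv_series(daily_ccfs, dates, year, dt, win=10):
    """Sliding reference method (SRM) sketch for one calendar year.

    daily_ccfs : array (n_days, n_lags) of daily cross-coherence codas.
    dates      : array of numpy.datetime64 days matching the rows of daily_ccfs.
    The reference is the stack over the whole year; each current trace is a
    10-day stack centred on the target day.
    """
    in_year = dates.astype('datetime64[Y]') == np.datetime64(str(year))
    ref = daily_ccfs[in_year].mean(axis=0)              # 1-year reference stack
    dvv = []
    for i in np.flatnonzero(in_year):
        lo, hi = max(i - win // 2, 0), min(i + win // 2, len(daily_ccfs))
        cur = daily_ccfs[lo:hi].mean(axis=0)            # 10-day current stack
        dvv.append(stretching_dvv(ref, cur, dt)[0])     # dv/v from earlier sketch
    return np.array(dvv)
```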
To obtain precipitation data for each seismic station, we averaged the data from all precipitation gauges within 40 km of the seismic station (Fig. 1e). Atmospheric pressure and sea-level changes were obtained from the pressure and tidal gauges closest to the seismic station (Fig. 1f), and the daily sea-level change was estimated by averaging the data over the most recent 24-h period.
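A possible implementation of the 40-km gauge averaging is sketched below using a haversine distance; the function and variable names are placeholders and are not taken from the authors' workflow.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

def station_precipitation(sta_lat, sta_lon, gauge_lat, gauge_lon, gauge_rain,
                          radius_km=40.0):
    """Average daily precipitation over all gauges within radius_km of a station.

    gauge_lat, gauge_lon : arrays of gauge coordinates (degrees).
    gauge_rain           : array (n_gauges, n_days) of daily precipitation.
    """
    d = haversine_km(sta_lat, sta_lon, np.asarray(gauge_lat), np.asarray(gauge_lon))
    return gauge_rain[d <= radius_km].mean(axis=0)
```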
Several studies have linked changes in seismic velocity to groundwater recharge by rainfall (e.g., Gassenmeier et al. 2015; Sens-Schönfelder and Wegler 2006). When surface water from precipitation replenishes groundwater, we expect a decrease in seismic velocity reflecting a pore pressure increase due to (a) immediate loading under undrained (impermeable) conditions and (b) pore pressure diffusion (Talwani 1997). To confirm the effect of precipitation on changes in seismic velocity, we applied a two-step analysis.
In the first step, the time delay between precipitation and seismic velocity change is estimated by cross-correlating the two time series. We identify the locations where velocity changes occur after precipitation, indicating the influence of pore pressure changes due to groundwater recharge. Because the quasi-annual period of seismic velocity change could be influenced by other environmental factors (e.g., atmospheric pressure and sea level), we apply a band-pass filter to clearly distinguish rainfall from sea level and atmospheric pressure. In the first step, therefore, we focus on the shorter-period fluctuations associated with rainfall.
In the second step, we focus on the locations where a precipitation influence on seismic velocity is clearly identified in the first step. We model the pore pressure change produced by diffusion of the groundwater load and compare it with the longer-period seismic velocity change to estimate the diffusion rate in the deep lithology. Although the observed response is a coupled one (i.e. undrained response and pore pressure diffusion), the effect of pore pressure diffusion should be dominant at later times, because the pore pressure increase due to diffusion occurs once the immediate loading has dissipated (Talwani 1997). The longer-period velocity variation may include the influence of sea level and atmospheric pressure, but the seismic velocity variation used here does not show strong annual features associated with sea level and atmospheric pressure. The flow of this two-step analysis is summarized in the flowchart of Fig. 2.
Flowchart summarizing the two-step procedure used to investigate the influence of rainfall on seismic velocity change
Step 1: investigation of the rainfall infiltration
To determine an optimal frequency band to clearly distinguish precipitation influences from other environmental factors, we first investigated the power spectra of seismic velocity changes, precipitation, sea-level changes, and atmospheric pressure changes (Additional file 1: Figure S1a). Whereas the power spectrum of seismic velocity changes decreased toward a frequency of 0.1 cycle/day, the spectra of precipitation, sea-level, and atmospheric pressure change showed similar peaks at 0.0018–0.0036 cycle/day, a frequency band close to the annual cycle (Additional file 1: Figure S1b). The similarity of these three peaks meant that the long-term estimated seismic velocity changes could be affected not only by precipitation, but also by sea-level and atmospheric pressure changes.
We excluded frequencies below 0.0036 cycle/day to remove the annual seasonal influence of sea-level and atmospheric pressure changes, and we excluded frequencies above 0.05 cycle/day to eliminate the neap and spring tides of sea-level change and the decreasing spectrum of seismic velocity change. We then searched for the frequency band where precipitation could best be distinguished from sea-level change and atmospheric pressure change, as indicated by weak correlations between precipitation and the other two variables. We applied a band-pass filter for periods between 20 and 137 days (0.05 to 0.0073 cycle/day; Additional file 1: Figures S1c, d) and sought minima in the correlation coefficients between precipitation and sea-level change and between precipitation and atmospheric pressure change, based on the data for all stations. The correlation coefficients were based on the Pearson correlation,
$$\rho \left(\text{A},\text{B}\right)= \frac{\text{cov}\left(\text{A},\text{B}\right)}{\sigma _\text{A} \sigma _\text{B}},$$
where \(\text{cov}(\text{A},\text{B})\) is the covariance of time series A and B, and \(\sigma _\text{A}\) and \(\sigma _\text{B}\) are the standard deviations of time series A and B.
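As a sketch of this filtering and correlation step: a zero-phase Butterworth band-pass applied to the daily series, followed by the Pearson coefficient. Only the corner frequencies (0.0073 and 0.05 cycle/day) come from the text; the filter type and order are our assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_daily(x, f_lo=0.0073, f_hi=0.05, fs=1.0, order=2):
    """Zero-phase band-pass of a daily time series (fs = 1 sample/day)."""
    b, a = butter(order, [f_lo / (fs / 2.0), f_hi / (fs / 2.0)], btype='band')
    return filtfilt(b, a, x)

def pearson(a, b):
    """Pearson correlation: rho(A, B) = cov(A, B) / (sigma_A * sigma_B)."""
    return np.cov(a, b)[0, 1] / (np.std(a, ddof=1) * np.std(b, ddof=1))
```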
Figure 3 shows an example of the unfiltered and filtered data for one seismic station (N.YSHH; red dot in Fig. 1f). The seismic velocity changes in Fig. 3a represent the averaged velocity change within a 40-km radius. The unfiltered data are coloured by the stretching correlation coefficient, averaged over the station pairs used to estimate the velocity change. In general, the mean correlation coefficient for the station pairs used in this study is above 0.5, and the value is even higher in periods when precipitation is relatively high (e.g., August–November). This indicates that the daily seismic velocity change in the study area is stable. Figure 3b–d shows the time series of precipitation, sea level, and atmospheric pressure, respectively. Because the Pearson correlation between rainfall and sea level is very small (Fig. 3e), band-pass filtering can separate the imprint of precipitation from that of sea level, as well as from that of atmospheric pressure. The correlation coefficients between band-pass filtered precipitation and sea-level change and between precipitation and atmospheric pressure change are shown in Fig. 4 for all stations. The small correlation coefficients indicate that rainfall is distinguishable from sea-level and atmospheric pressure changes.
a Example of unfiltered and filtered seismic velocity changes. The colour of the unfiltered velocity change represents the stretching correlation coefficient. b Precipitation, c sea level, and d atmospheric pressure during the study period at the station shown in Fig. 1f. e, f Comparisons of precipitation with changes in sea level and atmospheric pressure, respectively, at the station shown in Fig. 1f. Signals are normalized
Pearson correlation coefficients between band-pass filtered a precipitation and sea-level change and b precipitation and atmospheric pressure change at stations in the study area
To further analyse the dependence of seismic velocity changes on rainfall, we applied various time shifts to the rainfall record and evaluated the resulting cross-correlations with seismic velocity changes, as depicted in Fig. 5. Under the assumption that seismic velocity changes are triggered by precipitation after a time lag, we restricted ourselves to positive time lags (i.e. velocity variation after precipitation) and determined the time shift that produced the largest Pearson correlation coefficient.
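A sketch of this positive-lag search is given below; it reuses the hypothetical `pearson` helper from the earlier sketch, and the 60-day maximum lag is an arbitrary illustrative bound.

```python
import numpy as np

def best_positive_lag(dvv, rain, max_lag=60):
    """Find the positive lag (in days) at which precipitation, shifted later in
    time, correlates most strongly (largest |rho|) with the velocity change."""
    best_lag, best_rho = 0, 0.0
    for lag in range(0, max_lag + 1):
        if lag == 0:
            rho = pearson(dvv, rain)
        else:
            # velocity at day t is compared with rain at day t - lag
            rho = pearson(dvv[lag:], rain[:-lag])
        if abs(rho) > abs(best_rho):
            best_lag, best_rho = lag, rho
    return best_lag, best_rho
```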
Schematic figure of cross-correlation analysis between seismic velocity changes (reference) and precipitation (shifted time series): a positive time lag with negative correlation and b positive time lag with positive correlation. The delay between the peaks of precipitation and seismic velocity change is represented by Δt
Although we focus on the shorter cycles (shorter than the annual period) to clarify the relationship between precipitation and velocity change, it is difficult to distinguish the effects of (a) the undrained response to loading and (b) diffusion. In the next step, we therefore evaluate pore pressure diffusion via modelling.
Step 2: investigation of pore pressure diffusion
To calculate the pore pressure change, we use the poroelastic model developed by Talwani et al. (2007). The pore pressure due to diffusion can be described as:
$$P_{n}=\sum_{k=1}^{n}\delta p_{k}\,\text{erfc}\left[\frac{r}{\left(4c\,\delta t_{k}\right)^{1/2}}\right],$$
where \(\delta p_{k}\) is the water-level change, \(r\) is the depth from the surface, \(c\) is the hydraulic diffusivity, \(\delta t_{k}\) is the time elapsed since the \(k\)-th water-level increment, and erfc denotes the complementary error function. Although Talwani et al. (2007) also proposed an equation for the pore pressure change due to undrained loading, its effect is smaller than that of diffusion over longer periods. Here, we consider the contribution of precipitation over the previous 365 days; thus, the current pore pressure change is calculated as the sum of the contributions from the previous 365 days. We defined the water-level change from 2015 to 2017 as the deviation from the average precipitation over 2014.
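A direct numerical translation of Eq. (7) might look as follows; the daily time step, the one-day offset used to avoid a zero elapsed time, and the treatment of precipitation anomalies as water-level increments are simplifying assumptions on our part.

```python
import numpy as np
from scipy.special import erfc

def pore_pressure(dp, r, c, dt_day=86400.0, window=365):
    """Pore pressure change at depth r (m) by diffusion of water-level increments.

    dp : daily water-level increments (e.g. precipitation anomalies).
    c  : hydraulic diffusivity (m^2/s).
    Each increment delta_p_k contributes delta_p_k * erfc(r / sqrt(4 c dt_k)),
    where dt_k is the time elapsed since that increment; only the previous
    `window` days are summed for each day.
    """
    n = len(dp)
    p = np.zeros(n)
    for i in range(n):
        k0 = max(i - window + 1, 0)
        elapsed = (i - np.arange(k0, i + 1) + 1) * dt_day   # seconds since each increment
        p[i] = np.sum(dp[k0:i + 1] * erfc(r / np.sqrt(4.0 * c * elapsed)))
    return p
```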
To evaluate the longer-period variation, we applied a 130-day moving average to the seismic velocity change, without the band-pass filter used in the first step. By comparing the seismic velocity changes with the pore pressure changes computed from Eq. (7), we estimate the optimum hydraulic diffusivity at each station. In the calculation of the pore pressure changes, however, it is difficult to constrain the depth dependence of the hydraulic diffusivity, because the pore pressure given by Eq. (7) is sensitive only to the ratio \(r/\sqrt{c}\). We therefore estimate optimum values of \(c\) for assumed values of \(r\) (i.e. depth) by computing correlation coefficients between the observed velocity changes and the modelled pore pressure changes. Since we expect a decrease in seismic velocity with increasing pore pressure, we take as optimum the value of \(c\) with the largest negative correlation. After optimum values of \(c\) are estimated at each station and depth, we construct a map of \(c\).
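The grid search for the optimum hydraulic diffusivity could then be sketched as below, reusing the hypothetical `pore_pressure` and `pearson` helpers defined above; the logarithmic grid of candidate c values and the simple boxcar moving average are assumptions.

```python
import numpy as np

def optimum_diffusivity(dvv, dp, r, c_grid=np.logspace(-2, 0, 40)):
    """Grid search for the hydraulic diffusivity c (m^2/s) at a fixed depth r:
    the optimum makes the correlation between the 130-day smoothed velocity
    change and the modelled pore pressure most negative."""
    kernel = np.ones(130) / 130.0
    dvv_smooth = np.convolve(dvv, kernel, mode='same')   # 130-day moving average
    best_c, best_rho = None, np.inf
    for c in c_grid:
        p = pore_pressure(dp, r, c)                      # modelled pressure change
        rho = pearson(dvv_smooth, p)
        if rho < best_rho:                               # most negative correlation
            best_c, best_rho = c, rho
    return best_c, best_rho
```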
An example of the correlation between rainfall and velocity change at a station near the middle of the study area (red dot in Fig. 1f) is shown in Fig. 6. Figure 6 shows the correlation without band-pass filtering (Fig. 6a), after band-pass filtering (Fig. 6b), and after shifting the precipitation record by 8 days to align the respective peaks (Fig. 6c). This 8-day time shift increased the magnitude of the correlation coefficient for the 3-year data by more than half, although it remains relatively small at −0.33. However, when we restricted the comparison to the rainy season (in this case, June to July of 2016 and 2017), the correlation is much stronger (−0.7).
Comparison and cross-correlation analysis between seismic velocity changes and precipitation at the station shown as red dot in Fig. 1f: a unfiltered signals, b band-pass filtered signals (normalized), and c band-pass filtered signals with precipitation shifted 8 days later
For most stations there is a negative correlation between rainfall events and seismic velocity changes (Fig. 7a, b); however, a few stations show positive correlations (Fig. 7c). The highest absolute value of these correlations, even after applying the optimum time lag, is approximately 0.3 (Fig. 8). This value is not high, likely because several factors weaken the correlation between seismic velocity changes and rainfall events. For example, random noise in both time series and the short stacking window decrease the coefficient: a short time window for stacking cross-correlations (10 days in this study) was necessary to analyse the short-term seismic velocity changes induced by precipitation, whereas a longer time window would improve the stability of the velocity change estimate but reduce its temporal resolution (Hutapea et al. 2020). Another possibility is that an external factor other than precipitation also influences seismic velocity, such as atmospheric pressure, which can exert effects even at seismogenic depths (Niu et al. 2008).
a–c Cross-correlation between seismic velocity change and precipitation at the stations in the map at left, showing the estimated delay from positive time lag (solid magenta line)
Maps of the study area showing a correlation coefficients between seismic velocity change and precipitation at all stations after time shifting, b stations in a with negative correlation coefficients < – 0.2, and c time delays at stations in b. The time lag in panel c is also shown in Fig. 1c
Figure 8a shows the correlation coefficients between rainfall and seismic velocity for all stations, after applying the optimum time lag for each station. Among these stations, 26 stations had absolute correlation coefficients smaller than 0.1, 35 stations had absolute correlation coefficients of 0.1 to 0.2, and 37 stations had absolute correlation coefficients greater than 0.2. We selected the third group with high absolute correlation for further analysis because these stations are clustered and information regarding time lag is reliable only if there is a sufficiently strong correlation. Because most of these stations had a negative correlation between precipitation and seismic velocity, we focused on the stations in this group with negative correlations. These selected stations are shown in Fig. 8b, and their respective time lags are shown in Fig. 8c.
To validate the groundwater recharge due to precipitation, we compared the records of rainfall and groundwater level variations. Figure 9a, b shows the unfiltered records for an example GWL station (see Fig. 1f) and the calculated cross-correlation between band-pass filtered precipitation and groundwater level. As shown in Fig. 9c, it takes 5 days for rainfall to recharge groundwater, whereas Fig. 9d shows that rainfall is most strongly correlated with a decrease in seismic velocity 9 days later. The small difference in the time lags between Fig. 9c (5 days) and Fig. 9d (9 days) supports our interpretation that the increased groundwater load due to recharge by rainfall causes a subsequent decrease in seismic velocity. We show the influence of the near-surface lithology associated with the rainfall infiltration in Fig. 10.
Comparison and cross-correlations between precipitation and ground water level (GWL), and those between precipitation and seismic velocity. The GWL station of this example is shown in Fig. 1f and the seismic station closest to the GWL station is used for comparison. a Relationship between unfiltered precipitation and GWL. b Relationship between band-pass filtered precipitation and GWL. c Relationship between band-pass filtered precipitation and GWL with precipitation shifted earlier by 5 days. d The relationship between band-pass filtered precipitation and seismic velocity change (normalized) with precipitation shifted earlier by 9 days
S-wave velocity versus time delays of precipitation in a high-permeability materials and b weathered igneous rocks
Using the stations where seismic velocity changes are likely influenced by precipitation (Fig. 8), we estimated the optimum value of \(c\) by comparing the modelled pore pressure change with the seismic velocity change. Figure 11 shows an example comparison between seismic velocity change and pore pressure change with a diffusion rate of 0.14 m2/s at a depth of 1.5 km, which gives the largest negative correlation at station N.YSHH (red dot in Fig. 1f). Figure 12a shows the correlation between pore pressure and seismic velocity changes for the selected stations in the Chugoku and Shikoku regions. Several stations show a relatively weak negative correlation between pore pressure and seismic velocity change (absolute value < 0.2). This can be due to several possibilities: diffusion is not dominant at these stations, or other perturbations influence the longer-term variations in seismic velocity. Figure 12b shows the spatial variation of the estimated diffusion rates for depths of 1–8 km, considering the depth sensitivity of surface waves to S-wave velocity in the analysed frequency range (Additional file 1: Figure S2). The results demonstrate that the hydraulic diffusivity controlling the pore pressure varies spatially across the Chugoku and Shikoku regions. The diffusion rates in western Chugoku are generally higher than those in the eastern area, while the station with the highest hydraulic diffusivity is found in eastern Shikoku. The spatial variation of the diffusion rate could reflect fracture density: a higher diffusion rate can be interpreted as a well-developed fracture network that connects to deeper formations.
Comparison of moving averaged seismic velocity changes and pore pressure estimated from precipitation at the seismic station (red dots in Fig. 1f). a Moving averaged seismic velocity change. b Precipitation and the calculated pore pressure change. c Correlation between averaged seismic velocity change and pore pressure. The signals are normalized
a The correlation map between seismic velocity change and pore pressure. b The map of diffusion parameter for each depth (1, 1.5, 2, 4, 6, and 8 km). The stations with negative correlations smaller than 0.2 are not included on the map. The colour bar in each panel represents the different range of hydraulic diffusion rate
Influence of near-surface lithology
The delay time between rainfall and seismic velocity change is presumably related to the near-surface conditions, which influence water percolation into the geological formation. The time lag between rainfall events and seismic velocity changes in Fig. 8c may represent the time needed for percolating rainfall to reach the water table of an unconfined aquifer. Percolation through the unsaturated zone is likely determined by the permeability of the near-surface layers and the surface geologic and geographic conditions at the seismic station. For example, in mountainous regions with high permeability, water derived from the surrounding mountains percolates into intermountain basins. Comparison of our results with the geological and topographic maps of Japan (Fig. 1b, c) shows that stations with negative correlations are mostly located in granite areas with gently sloping topography (Fig. 1b; marked by pink in the legend of Fig. 1c). In contrast, we cannot identify clear negative correlations in the sedimentary rocks of the steep-slope areas of southern Shikoku (Fig. 1b; green in Fig. 1c), possibly because water flows away without percolating into the deep formation.
Because the unsaturated zone in humid climates is generally less than 10 m thick (Phillips and Castro 2003), we assumed the unsaturated zone in our humid study area to be shallower than 10 m. Borehole logs from the sites where our seismometers are deployed classify the shallow formations as high-permeability materials and weathered igneous rocks (Obara et al. 2005). Under the assumption that S-wave velocity may be related to permeability, we examined plots of time lag versus S-wave velocity (Fig. 10) to evaluate the relationship between lithology and time lag. Although the relationships are not strong, we identified some features for each formation.
Among the 29 stations obtained from step 1, the lithology at 19 stations can be classified as high-permeability materials (Fig. 10a) or weathered igneous rocks (Fig. 10b). The 8 stations on high-permeability materials such as sandy soil, silt, and gravel show a positive trend in which the time lag increases with increasing S-wave velocity (Fig. 10a). Because seismic velocity varies inversely with porosity, this relationship suggests that percolation is faster in more porous materials (i.e. low Vs) and slower where porosity is lower.
In the weathered igneous rocks, which consist mostly of granite, the time lag and S-wave velocity show a modest negative trend (Fig. 10b). This trend may be connected to the spatial concentration of fractures in these rocks. Although fracture density in crystalline rocks generally decreases with increasing depth, fractures are the primary control on permeability at depths shallower than 10 m in plutonic and crystalline metamorphic rocks (Freeze and Cherry 1979). The decreasing time lag with increasing S-wave velocity implies that the water table is shallower in the less-fractured igneous rocks, whereas in the more-fractured igneous rocks rainfall percolates to greater depths, resulting in longer lag times for water to reach the saturated zone.
The estimated time lag can also be influenced by the delayed response of pore pressure associated with the diffusion mechanism. Indeed, the time lag between seismic velocity change and rainfall is longer than that between groundwater level and rainfall (Fig. 9). This might reflect the delayed response of pore pressure (i.e. of seismic velocity), in addition to the time needed for rainfall to percolate to the water table.
Seismic velocity changes due to pore pressure diffusion
The example shown in Fig. 11 demonstrates that pore pressure increases from July to November. This pattern agrees with the seismic velocity decrease from July to November. A similar pore pressure and seismic velocity variation also occurs at other locations across Chugoku and Shikoku (Additional file 1: Figure S3). It is known that rainfall in July–September can trigger seasonal seismicity in the Chugoku area (Ueda and Kato 2019). The similar timeline suggests that the longer period seismic velocity change might have been influenced by pore pressure change induced by rainfall, although the long period variation is also influenced by sea level and atmospheric pressure variations.
The depth sensitivity of surface waves to S-wave velocity is associated with the frequency of the coda waves used to estimate the seismic velocity change (Nimiya et al. 2017). Although the frequency range of our seismic velocity change is sensitive to depths of 1–8 km, the largest sensitivity derived from the velocity model in our study area is within 1.5–2 km depth (see Additional file 1: Figure S2). Within these depths, the hydraulic diffusivity in the Chugoku and Shikoku regions varies from 0.02 to 1 m2/s (Fig. 12b). Taking a depth of 1.5 km as an example, the hydraulic diffusivity is 0.09–0.14 m2/s in western Chugoku, 0.09–0.25 m2/s in northern Chugoku, 0.02–0.1 m2/s in eastern Chugoku, and 0.02–0.5 m2/s in eastern Shikoku.
Mechanisms of pore pressure variation
We summarize our results and interpretation in Fig. 13. The near-surface conditions (e.g., lithology and fracturing) control the percolation of rainfall, resulting in a time lag between rainfall and the pore pressure increase (Fig. 13a). After rainfall, we expect the pore pressure to increase mainly through (a) immediate loading under undrained conditions and (b) pore pressure diffusion.
Summary of the mechanism of crustal pore pressure change (Pp) associated with rainfall (modified from Talwani 1997). a Timing of the immediate pore pressure increase (undrained effect) and of pore pressure diffusion with increasing water level. Schematic illustrations of b immediate loading and c pore pressure diffusion at a later time. The white arrow in b represents the immediate increase of pore pressure under undrained conditions. The white wavy arrows in c represent pore pressure diffusion
As the groundwater level increases due to rainfall infiltration, immediate loading increases the pore pressure from Pp1 to Pp2 in Fig. 13a and generates thin cracks (white arrow in Fig. 13b). This condition persists until the pore pressure dissipates into the surrounding fractures in the deep formation (Pp2 to Pp3 in Fig. 13a). This pore pressure variation is mainly observed as the shorter-period seismic velocity reduction (Fig. 6). The load from the groundwater level increase then drives pore pressure diffusion through the pre-existing fracture network. As the pore pressure front arrives (white wavy arrow in Fig. 13c), the pore pressure increases from Pp4 to Pp5 in Fig. 13a. This pore pressure increase can be monitored through the longer-period seismic velocity change (Fig. 11).
We conclude that the local lithology, both above the groundwater table and in the deep formation, contributes to the pore pressure changes associated with rainfall. The interpretations described here are simplified ones. In real hydrogeological systems, many other complex mechanisms affect the time lag (e.g., flow paths influenced by geographical features), as well as the fracture permeability of the deeper lithology.
The status of pore pressure changes associated with rainfall can be evaluated by monitoring seismic velocity. By calculating the cross-correlation between rainfall and seismic velocity changes, we can identify the locations where seismic velocity change is influenced by precipitation. Furthermore, by modelling the pore pressure change caused by diffusion of rainfall-derived groundwater, we can constrain the hydraulic diffusivity from long-period seismic velocity changes. Our primary conclusions are:
The influence of rainfall on seismic velocity change varies depending on the lithology. Clear negative correlations between rainfall and seismic velocity are observed in granite areas and terrains with gentle topography. In contrast, no clear correlations are observed in the steep mountain areas.
The time lag between precipitation and seismic velocity change constrains near-surface conditions that could be related to lithology-dependent permeability. The similar time lag between precipitation and groundwater level demonstrates that the increased groundwater load causes the subsequent decrease in seismic velocity.
The pore pressure diffusion caused by rainfall infiltration can be modelled and controls the longer-term pore pressure change. The diffusion parameter estimated by the modelling varies spatially and likely reflects fracture connectivity.
Seismic data required to evaluate the conclusions in the paper are available from NIED (http://www.hinet.bosai.go.jp/about_data/?LANG=en). The meteorological data were obtained from JMA (https://www.jma.go.jp/jma/index.html). The ground water level data were obtained from MLIT (http://www1.river.go.jp).
Aizen EM, Aizen VB, Melack JM et al (2001) Precipitation and atmospheric circulation patterns at mid-latitudes of Asia. Int J Climatol 21:535–556. https://doi.org/10.1002/joc.626
Albino F, Amelung F, Gregg P (2018) The role of pore fluid pressure on the failure of magma reservoirs: insights from indonesian and aleutian arc volcanoes. J Geophys Res Solid Earth 123:1328–1349. https://doi.org/10.1002/2017JB014523
Aoi S, Asano Y, Kunugi T et al (2020) MOWLAS: NIED observation network for earthquake, tsunami and volcano. Earth Planets Space 72:126. https://doi.org/10.1186/s40623-020-01250-x
Ben-Zion Y, Leary P (1986) Thermoelastic strain in a half-space covered by unconsolidated material. Bull Seismol Soc Am 76:1447–1460
Berger J (1975) A note on thermoelastic strains and tilts. J Geophys Res 80:274–277. https://doi.org/10.1029/jb080i002p00274
Bettinelli P, Avouac JP, Flouzat M et al (2008) Seasonal variations of seismicity and geodetic strain in the Himalaya induced by surface hydrology. Earth Planet Sci Lett 266:332–344. https://doi.org/10.1016/j.epsl.2007.11.021
Bièvre G, Franz M, Larose E et al (2018) Influence of environmental parameters on the seismic velocity changes in a clayey mudflow (Pont-Bourquin Landslide, Switzerland). Eng Geol 245:248–257. https://doi.org/10.1016/j.enggeo.2018.08.013
Chaves EJ, Schwartz SY (2016) Monitoring transient changes within overpressured regions of subduction zones using ambient seismic noise. Sci Adv 2:e1501289. https://doi.org/10.1126/sciadv.1501289
Ellsworth WL (2013) Injection-Induced Earthquakes. Science 341:1225942–1225942. https://doi.org/10.1126/science.1225942
Freeze RA, Cherry JA (1979) Groundwater. Prentice-Hall, Englewood Cliffs, NJ
Gassenmeier M, Sens-Schönfelder C, Delatre M, Korn M (2015) Monitoring of environmental influences on seismic velocity at the geological storage site for CO2 in Ketzin (Germany) with ambient seismic noise. Geophys J Int 200:524–533. https://doi.org/10.1093/gji/ggu413
Geological Survey of Japan, AIST (ed) (2015) Seamless digital geological map of Japan 1:200,000. May 29, 2015 version. Geological Survey of Japan, National Institute of Advanced Industrial Science and Technology. https://gbank.gsj.jp/geonavi/. Accessed 10 Feb 2020.
Grêt A, Snieder R, Scales J (2006) Time-lapse monitoring of rock properties with coda wave interferometry. J Geophys Res Solid Earth 111(3):1–11. https://doi.org/10.1029/2004JB003354
Hadziioannou C, Larose E, Coutant O et al (2009) Stability of monitoring weak changes in multiply scattering media with ambient noise correlation: Laboratory experiments. J Acoust Soc Am 125:3688–3695. https://doi.org/10.1121/1.3125345
Hainzl S, Kraft T, Wassermann J et al (2006) Evidence for rainfall-triggered earthquake activity. Geophys Res Lett 33:L19303. https://doi.org/10.1029/2006GL027642
Heki K (2004) Dense GPS array as a new sensor of seasonal changes of surface loads. In: Sparks RSJ, Hawkesworth CJ (eds) Geophysical monograph series. American Geophysical Union, Washington, D.C., pp 177–196
Hillers G, Graham N, Campillo M et al (2012) Global oceanic microseism sources as seen by seismic arrays and predicted by wave action models. Geochem Geophys Geosyst 13:Q01021. https://doi.org/10.1029/2011GC003875
Hillers G, Ben-Zion Y, Campillo M, Zigone D (2015) Seasonal variations of seismic velocities in the San Jacinto fault area observed with ambient seismic noise. Geophys J Int 202:920–932. https://doi.org/10.1093/gji/ggv151
Hutapea FL, Tsuji T, Ikeda T (2020) Real-time crustal monitoring system of Japanese Islands based on spatio-temporal seismic velocity variation. Earth Planets Space 72:19. https://doi.org/10.1186/s40623-020-1147-y
Ikeda T, Tsuji T (2018) Temporal change in seismic velocity associated with an offshore MW 5.9 Off-Mie earthquake in the Nankai subduction zone from ambient noise cross-correlation. Prog Earth Planet Sci 5:62. https://doi.org/10.1186/s40645-018-0211-8
Ikeda T, Tsuji T, Nakatsukasa M et al (2018) Imaging and monitoring of the shallow subsurface using spatially windowed surface-wave analysis with a single permanent seismic source. Geophysics 83:EN23–EN38. https://doi.org/10.1190/geo2018-0084.1
Kraft T, Wassermann J, Schmedes E, Igel H (2006) Meteorological triggering of earthquake swarms at Mt. Hochstaufen, SE-Germany. Tectonophysics 424:245–258. https://doi.org/10.1016/j.tecto.2006.03.044
Meier U, Shapiro NM, Brenguier F (2010) Detecting seasonal variations in seismic velocities within Los Angeles basin from correlations of ambient seismic noise. Geophys J Int 181:985–996. https://doi.org/10.1111/j.1365-246X.2010.04550.x
Miao Y, Shi Y, Wang SY (2018) Temporal change of near-surface shear wave velocity associated with rainfall in Northeast Honshu, Japan. Earth Planets Space 70:204. https://doi.org/10.1186/s40623-018-0969-3
Minato S, Tsuji T, Ohmi S, Matsuoka T (2012) Monitoring seismic velocity change caused by the 2011 Tohoku-oki earthquake using ambient noise records. Geophys Res Lett 39:L09309. https://doi.org/10.1029/2012GL051405
Mordret A, Mikesell TD, Harig C et al (2016) Monitoring southwest Greenland's ice sheet melt with ambient seismic noise. Sci Adv 2:e1501538. https://doi.org/10.1126/sciadv.1501538
Nakata N, Snieder R (2012) Estimating near-surface shear wave velocities in Japan by applying seismic interferometry to KiK-net data. J Geophys Res Solid Earth 117:B01308. https://doi.org/10.1029/2011JB008595
Nakata N, Snieder R, Tsuji T et al (2011) Shear wave imaging from traffic noise using seismic interferometry by cross-coherence. Geophysics 76:SA97–SA106. https://doi.org/10.1190/geo2010-0188.1
Nakata N, Chang JP, Lawrence JF, Boué P (2015) Body wave extraction and tomography at Long Beach, California, with ambient-noise interferometry. J Geophys Res Solid Earth 120:1159–1173. https://doi.org/10.1002/2015JB011870
Nimiya H, Ikeda T, Tsuji T (2017) Spatial and temporal seismic velocity changes on Kyushu Island during the 2016 Kumamoto earthquake. Sci Adv 3:e1700813. https://doi.org/10.1126/sciadv.1700813
Nishida K, Kawakatsu H, Obara K (2008) Three-dimensional crustal S wave velocity structure in Japan using microseismic data recorded by Hi-net tiltmeters. J Geophys Res Solid Earth. https://doi.org/10.1029/2007JB005395
Niu F, Silver PG, Daley TM et al (2008) Preseismic velocity changes observed from active source monitoring at the Parkfield SAFOD drill site. Nature 454:204–208. https://doi.org/10.1038/nature07111
Obara K, Kasahara K, Hori S, Okada Y (2005) A densely distributed high-sensitivity seismograph network in Japan: Hi-net by National Research Institute for Earth Science and Disaster Prevention. Rev Sci Instrum 76:21301. https://doi.org/10.1063/1.1854197
Phillips FM, Castro MC (2003) 515—Groundwater dating and residence-time measurements. In: Holland HD, Turekian KK (eds) Treatise on geochemistry. Elsevier, Oxford, pp 451–497. https://doi.org/10.1016/B0-08-043751-6/05136-7
Prawirodirdjo L, Ben-Zion Y, Bock Y (2006) Observation and modeling of thermoelastic strain in Southern California Integrated GPS Network daily position time series. J Geophys Res Solid Earth 111:1–10. https://doi.org/10.1029/2005JB003716
Richter T, Sens-Schönfelder C, Kind R, Asch G (2014) Comprehensive observation and modeling of earthquake and temperature-related seismic velocity changes in northern Chile with passive image interferometry. J Geophys Res Solid Earth 119:4747–4765. https://doi.org/10.1002/2013JB010695
Rivet D, Brenguier F, Cappa F (2015) Improved detection of preeruptive seismic velocity drops at the Piton de La Fournaise volcano. Geophys Res Lett 42:6332–6339. https://doi.org/10.1002/2015GL064835
Sato T, Fukuda Y, Aoyama Y et al (2001) On the observed annual gravity variation and the effect of sea surface height variations. Phys Earth Planet Int 123:45–63. https://doi.org/10.1016/S0031-9201(00)00216-8
Saito M (1988) DISPER80: a subroutine package for the calculation of seismic normal mode solutions. Seismological algorithms: computational methods and computer programs 293–319.
Sens-Schönfelder C, Eulenfeld T (2019) Probing the in situ Elastic Nonlinearity of Rocks with Earth Tides and Seismic Noise. Phys Rev Lett 122:138501. https://doi.org/10.1103/PhysRevLett.122.138501
Sens-Schönfelder C, Wegler U (2006) Passive image interferometry and seasonal variations of seismic velocities at Merapi Volcano, Indonesia. Geophys Res Lett 33:1–5. https://doi.org/10.1029/2006GL027797
Silver PG, Daley TM, Niu F, Majer EL (2007) Active source monitoring of cross-well seismic travel time for stress-induced changes. Bull Seismol Soc Am 97:281–293. https://doi.org/10.1785/0120060120
Taira T, Brenguier F (2016) Response of hydrothermal system to stress transients at Lassen Volcanic Center, California, inferred from seismic interferometry with ambient noise. Earth, Planets Space 68:162. https://doi.org/10.1186/s40623-016-0538-6
Takano T, Nishimura T, Nakahara H (2017) Seismic velocity changes concentrated at the shallow structure as inferred from correlation analyses of ambient noise during volcano deformation at Izu-Oshima, Japan. J Geophys Res Solid Earth 122:6721–6736. https://doi.org/10.1002/2017JB014340
Talwani P (1997) On the Nature of Reservoir-induced Seismicity. Pure appl geophys 150:473–492. https://doi.org/10.1007/s000240050089
Talwani P, Chen L, Gahalaut K (2007) Seismogenic permeability, ks. J Geophys Res Solid Earth. https://doi.org/10.1029/2006JB004665
Tsai VC (2011) A model for seasonal changes in GPS positions and seismic wave speeds due to thermoelastic and hydrologic variations. J Geophys Res 116:B04404. https://doi.org/10.1029/2010JB008156
Tsuji T, Tokuyama H, Costa Pisani P, Moore G (2008) Effective stress and pore pressure in the Nankai accretionary prism off the Muroto Peninsula, southwestern Japan. J Geophys Res 113:B11401. https://doi.org/10.1029/2007JB005002
Tsuji T, Johansen TA, Ruud BO, Ikeda T, Matsuoka T (2012) Surface-wave analysis for identifying unfrozen zones in subglacial sediments. Geophysics 77:EN17–EN27. https://doi.org/10.1190/geo2011-0222.1
Tsuji T, Kamei R, Pratt G (2014) Pore pressure distribution of a mega-splay fault system in the Nankai Trough subduction zone: Insight into up-dip extent of the seismogenic zone. Earth Planet Sci Lett 396:165–178. https://doi.org/10.1016/j.epsl.2014.04.011
Ueda T, Kato A (2019) Seasonal variations in crustal seismicity in San-in District, Southwest Japan. Geophys Res Lett 46:3172–3179. https://doi.org/10.1029/2018GL081789
Ueda H, Kozono T, Fujita E et al (2013) Crustal deformation associated with the 2011 Shinmoe-dake eruption as observed by tiltmeters and GPS. Earth Planets Space 65:517–525. https://doi.org/10.5047/eps.2013.03.001
Wang Q-Y, Brenguier F, Campillo M et al (2017) Seasonal crustal seismic velocity changes throughout Japan. J Geophys Res Solid Earth 122:7987–8002. https://doi.org/10.1002/2017JB014307
Yamamura K, Sano O, Utada H et al (2003) Long-term observation of in situ seismic velocity and attenuation. J Geophys Res Solid Earth 108(B6):2317. https://doi.org/10.1029/2002JB002005
Yukutake Y, Ueno T, Miyaoka K (2016) Determination of temporal changes in seismic velocity caused by volcanic activity in and around Hakone volcano, central Japan, using ambient seismic noise records. Prog Earth Planet Sci 3:29. https://doi.org/10.1186/s40645-016-0106-5
We used Hi-net seismic data from the National Research Institute for Earth Science and Disaster Resilience (NIED). We obtained rainfall, sea level, and atmospheric pressure data from the Japan Meteorological Agency (JMA). We obtained ground water level data from the Ministry of Land, Infrastructure, Transport and Tourism (MLIT). We appreciate Taka'aki Taira (UC Berkeley) for discussion, and Fernando Lawrens Hutapea (Kyushu Univ.) for his technical support in computing seismic velocity change. This study was also supported by Japan Society for the Promotion of Science grants (no. JP20H01997). We are grateful for the support provided by the Advanced Graduate Program in Global Strategy for Green Asia of Kyushu University, and International Institute for Carbon-Neutral Energy Research (I2CNER) funded by the World Premier International Research Center Initiative of the Ministry of Education, Culture, Sports, Science and Technology, Japan (MEXT).
This work was supported by Japan Society for the Promotion of Science grants (no. JP20H01997).
Department of Earth Resources Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka, 819-0395, Japan
Rezkia Dewi Andajani, Takeshi Tsuji & Tatsunori Ikeda
International Institute for Carbon-Neutral Energy Research (WPI-I2CNER), Kyushu University, 744 Motooka, Nishi-ku, Fukuoka, 819-0395, Japan
Takeshi Tsuji & Tatsunori Ikeda
Disaster Prevention Research Institute, Kyoto University Gokasho, Uji, Kyoto, 611-0011, Japan
Takeshi Tsuji
Colorado School of Mines, Hill Hall 206A, Golden, CO, 80401-1887, USA
Roel Snieder
Rezkia Dewi Andajani
Tatsunori Ikeda
RDA drafted the initial manuscript. TT proposed this study. TT, RS, and TI suggested the method for the interpretation, and revised the manuscript. All authors read and approved the final manuscript.
Correspondence to Takeshi Tsuji.
The authors declare that they have no competing interest.
Additional file 1: Figure S1.
(a) Map of seismic stations. (b) The power spectra of seismic velocity changes, precipitation events, sea-level changes, and atmospheric pressure changes for the 0–0.5 cycle/day frequency band for the seismic station shown in red in panel (a). Precipitation is averaged around the seismic station, and sea level and atmospheric pressure are taken from the closest sea-level and pressure gauges. (c) The shape of the weighting function of the band-pass filter for selected frequencies defined between four points (f1 = 0.0073, f2 = 0.0082, f3 = 0.03, and f4 = 0.05 cycle/day). (d) The band-pass filter applied to the power spectra. The unshaded parts of the power spectra were used in the analysis. The peaks with the lowest frequency represent annual or quasi-annual cycles. Figure S2. Depth sensitivity of surface waves (Rayleigh waves) to S-wave velocity. The surface wave sensitivity to S-wave velocity was calculated with DISPER80 (Saito 1988) for a 1D layered velocity model of the Chugoku region (Nishida et al. 2008). The amplitudes were normalized by the maximum sensitivity at a frequency of 0.9 Hz. Figure S3. Comparison of moving averaged seismic velocity changes and the calculated pore pressure at seismic stations A and B. The top panel indicates the moving averaged seismic velocity change. The middle panel shows the comparison of precipitation and pore pressure. The bottom panel represents the correlation coefficient between averaged seismic velocity change and pore pressure. The signals are normalized.
Andajani, R.D., Tsuji, T., Snieder, R. et al. Spatial and temporal influence of rainfall on crustal pore pressure based on seismic velocity monitoring. Earth Planets Space 72, 177 (2020). https://doi.org/10.1186/s40623-020-01311-1
Seismic velocity variation
Groundwater level
Pore pressure
Near-surface lithology
Seismology
Mathematische Annalen
June 2015, Volume 362, Issue 1–2, pp 55–106
Linear series on metrized complexes of algebraic curves
Omid Amini
Matthew Baker
A metrized complex of algebraic curves over an algebraically closed field \(\kappa \) is, roughly speaking, a finite metric graph \(\Gamma \) together with a collection of marked complete nonsingular algebraic curves \(C_v\) over \(\kappa \), one for each vertex \(v\) of \(\Gamma \); the marked points on \(C_v\) are in bijection with the edges of \(\Gamma \) incident to \(v\). We define linear equivalence of divisors and establish a Riemann–Roch theorem for metrized complexes of curves which combines the classical Riemann–Roch theorem over \(\kappa \) with its graph-theoretic and tropical analogues from Amini and Caporaso (Adv Math 240:1–23, 2013); Baker and Norine (Adv Math 215(2):766–788, 2007); Gathmann and Kerber (Math Z 259(1):217–230, 2008) and Mikhalkin and Zharkov (Tropical curves, their Jacobians and Theta functions. Contemporary Mathematics 203–231, 2007), providing a common generalization of all of these results. For a complete nonsingular curve \(X\) defined over a non-Archimedean field \(\mathbb {K}\), together with a strongly semistable model \(\mathfrak {X}\) for \(X\) over the valuation ring \(R\) of \(\mathbb {K}\), we define a corresponding metrized complex \(\mathfrak {C}\mathfrak {X}\) of curves over the residue field \(\kappa \) of \(\mathbb {K}\) and a canonical specialization map \(\tau ^{\mathfrak {C}\mathfrak {X}}_*\) from divisors on \(X\) to divisors on \(\mathfrak {C}\mathfrak {X}\) which preserves degrees and linear equivalence. We then establish generalizations of the specialization lemma from Baker (Algebra Number Theory 2(6):613–653, 2008) and its weighted graph analogue from Amini and Caporaso (Adv Math 240:1–23, 2013), showing that the rank of a divisor cannot go down under specialization from \(X\) to \(\mathfrak {C}\mathfrak {X}\). As an application, we establish a concrete link between specialization of divisors from curves to metrized complexes and the theory of limit linear series due to Eisenbud and Harris (Invent Math 85:337–371, 1986). Using this link, we formulate a generalization of the notion of limit linear series to curves which are not necessarily of compact type and prove, among other things, that any degeneration of a \(\mathfrak {g}^r_d\) in a regular family of semistable curves is a limit \(\mathfrak {g}^r_d\) on the special fiber.
The authors would like to thank Vladimir Berkovich, Lucia Caporaso, Ethan Cotterill, Eric Katz, Johannes Nicaise, Joe Rabinoff, Frank-Olaf Schreyer, David Zureick-Brown, and the referees for helpful discussions and remarks. The second author was supported in part by NSF grant DMS-0901487.
Appendix: Rank-determining sets for metrized complexes
We retain the terminology from Sect. 2. Let \(\mathfrak {C}\) be a metrized complex of algebraic curves, \(\Gamma \) the underlying metric graph, \(G=(V,E)\) a model of \(\Gamma \) and \(\{C_v\}_{v\in V}\) the collection of smooth projective curves over \(\kappa \) corresponding to \(\mathfrak {C}\). In this section, we generalize some basic results concerning rank-determining sets [30, 34] from metric graphs to metrized complexes by following and providing complements to the arguments of [3] (to which we refer for a more detailed exposition).
Let \(\mathcal {R}\) be a set of geometric points of \(\mathfrak {C}\) (i.e., a subset of \(\bigcup _{v\in V} C_v(\kappa )\)). The set \(\mathcal {R}\) is called rank-determining if for any divisor \(\mathcal {D}\) on \(\mathfrak {C}\), \(r_\mathfrak {C}(\mathcal {D})\) coincides with \(r_{\mathfrak {C}}^\mathcal {R}(\mathcal {D})\), defined as the largest integer \(k\) such that \(\mathcal {D} - \mathcal {E}\) is linearly equivalent to an effective divisor for all degree \(k\) effective divisors \(\mathcal {E}\) on \(\mathfrak {C}\) with support in \(\mathcal {R}\). In other words, \(\mathcal {R}\) is rank-determining if in the definition of rank given in Sect. 2, one can restrict to effective divisors \(\mathcal {E}\) with support in \(\mathcal {R}\).
The following theorem is a common generalization of (a) Luo's theorem [34] (see also [30]) that \(V\) is a rank-determining set for any loopless model \(G=(V,E)\) of a metric graph \(\Gamma \) and (b) the classical fact (see [34] for a proof) that for any smooth projective curve \(C\) of genus \(g\) over \(\kappa \), every subset of \(C(\kappa )\) of size \(g+1\) is rank-determining.
Theorem 6.1
Let \(\mathfrak {C}\) be a metrized complex of algebraic curves, and suppose that the given model \(G\) of \(\Gamma \) is loopless. Let \(\mathcal {R}_v \subset C_v(\kappa )\) be a subset of size \(g_v+1\) and let \(\mathcal {R}= \cup _{v\in V} \mathcal {R}_v\). Then \(\mathcal {R}\) is a rank-determining subset of \(\mathfrak {C}\).
Let \(\mathcal {D}\) be a divisor on \(\mathfrak {C}\). For any point \(P \in \Gamma \), let \(\mathcal {D}^P\) be the quasi-unique \(P\)-reduced divisor on \(\mathfrak {C}\) linearly equivalent to \(\mathcal {D}\), and denote by \(D^P_\Gamma \) (resp. \(D^P_v\)) the \(\Gamma \)-part (resp. \(C_v\)-part) of \(\mathcal {D}^P\).
Lemma 6.2
A divisor \(\mathcal {D}\) on \(\mathfrak {C}\) has rank at least one if and only if
For any point \(P\) of \(\Gamma \), \(D^P_\Gamma ( P )\ge 1\), and
For any vertex \(v\in V(G)\), the divisor \(D^v_v\) has rank at least one on \(C_v\).
The condition \(r_\mathfrak {C}(\mathcal {D}) \ge 1\) is equivalent to requiring that \(r_\mathfrak {C}(\mathcal {D} - \mathcal {E} ) \ge 0\) for every effective divisor \(\mathcal {E}\) of degree \(1\) on \(\mathfrak {C}\). For \(P \in \Gamma {\setminus }V\), the divisor \(\mathcal {D} -( P )\) has non-negative rank if and only if \(D^P_\Gamma ( P )\ge 1\) (by Lemma 3.11). Similarly, for \(v \in V\) and \(x\in C_v(\kappa )\), the divisor \( \mathcal {D} - (x)\) has non-negative rank in \(\mathfrak {C}\) if and only if \(D^v_\Gamma (v) \ge 1\) and \(D^v_v - (x)\) has non-negative rank on \(C_v\) (by Lemma 3.11). These are clearly equivalent to (1) and (2). \(\square \)
A subset \(\mathcal {R}\subseteq \bigcup _{v\in V} C_v(\kappa )\) which has non-empty intersection with each \(C_v(\kappa )\) is rank-determining if and only if for every divisor \(\mathcal {D}\) of non-negative rank on \(\mathfrak {C}\), the following two assertions are equivalent:
\(r_\mathfrak {C}(\mathcal {D}) \ge 1\).
For any vertex \(u \in V\), and for any point \(z \in \mathcal {R}\, \cap \, C_u(\kappa )\), \(D_u^u -(z)\) has non-negative rank on \(C_u\).
In view of Lemma 6.2, for a rank-determining set the two conditions (i) and (ii) are equivalent. Suppose now that (i) and (ii) are equivalent for any divisor \(\mathcal {D}\) on \(\mathfrak {C}\). By induction on \(r\), we prove that \(r_\mathfrak {C}(\mathcal {D})\ge r\) if and only if for every effective divisor \(\mathcal {E}\) of degree \(r\) with support in \(\mathcal {R}\), \(r_\mathfrak {C}(\mathcal {D} - \mathcal {E}) \ge 0\). This will prove that \(\mathcal {R}\) is rank-determining.
The case \(r=1\) follows by the hypothesis and Lemma 6.2. Supposing now that the statement holds for some integer \(r \ge 1\), we prove that it also holds for \(r+1\).
Let \(\mathcal {D}\) be a divisor with the property that \(r_\mathfrak {C}(\mathcal {D} -\mathcal {E}) \ge 0\) for every effective divisor \(\mathcal {E}\) of degree \(r+1\) with support in \(\mathcal {R}\). Fix an effective divisor \(\mathcal {E}\) of degree \(r\) with support in \(\mathcal {R}\). By the base case \(r=1\), the divisor \(\mathcal {D} - \mathcal {E}\) has rank at least \(1\) on \(\mathfrak {C}\) because \(r_\mathfrak {C}(\mathcal {D} -\mathcal {E} -(x))\ge 0\) for any \(x\in \mathcal {R}\). Thus \(r_\mathfrak {C}(\mathcal {D} - (x) - \mathcal {E}) \ge 0\) for any point of \(|\mathfrak {C}|\). This holds for any effective divisor \(\mathcal {E}\) of degree \(r\) with support in \(\mathcal {R}\), and so from the inductive hypothesis we infer that \(\mathcal {D} - (x)\) has rank at least \(r\) on \(\mathfrak {C}\). Since this holds for any \(x \in |\mathfrak {C}|\), we conclude that \(\mathcal {D}\) has rank at least \(r+1\). \(\square \)
Let \(\mathcal {D}\) be a divisor of degree \(d\) and non-negative rank on \(\mathfrak {C}\), and let \(D_\Gamma \) and \(D_v\) be the \(\Gamma \) and \(C_v\)-parts of \(\mathcal {D}\), respectively. Define
$$\begin{aligned} |D_\Gamma | :=\{E\ge 0 \,|\,\, E\in \mathrm{Div }(\Gamma ) \text { and } E\sim D_\Gamma \}. \end{aligned}$$
Note that \(|D_\Gamma |\) is a non-empty subset of the symmetric product \(\Gamma ^{(d)}\) of \(d\) copies of \(\Gamma \).
Consider the reduced divisor map \({\mathrm {Red}}^{\mathfrak {C}}_\mathcal {D} : \Gamma \rightarrow \Gamma ^{(d)}\) which sends a point \(P\in \Gamma \) to \(D^P_\Gamma \), the \(\Gamma \)-part of the \(P\)-reduced divisor \(\mathcal {D}^P\). The following theorem extends [3, Theorem 3] to divisors on metrized complexes.
For any divisor \(\mathcal {D}\) of degree \(d\) and non-negative rank on \(\mathfrak {C}\), the reduced divisor map \({\mathrm {Red}}^{\mathfrak {C}}_\mathcal {D} : \Gamma \rightarrow \Gamma ^{(d)}\) is continuous.
This is based on an explicit description of the reduced divisor map \({\mathrm {Red}}^{\mathfrak {C}}_\mathcal {D}\) in a small neighborhood around any point of \(\Gamma \), similar to the description provided in [3, Theorem 3] in the context of metric graphs. We merely give the description by providing appropriate modifications to [3, Theorem 3], referring to loc. cit. for more details.
Let \(P\) be a point of \(\Gamma \) and let \(\vec \mu \) be a (unit) tangent direction in \(\Gamma \) emanating from \(P\). For \(\epsilon >0\) sufficiently small, we denote by \(P+\epsilon \vec \mu \) the point of \(\Gamma \) at distance \(\epsilon \) from \(P\) in the direction of \(\vec \mu \). We will describe the restriction of \({\mathrm {Red}}^{\mathfrak {C}}_\mathcal {D}\) to the segment \([P, P+\epsilon \vec \mu ]\) for sufficiently small \(\epsilon >0\). One of the two following cases can happen:
For all sufficiently small \(\epsilon >0\), the \(P\)-reduced divisor \(\mathcal {D}^P\) is also \((P+\epsilon \vec \mu )\)-reduced. In this case, the map \({\mathrm {Red}}^{\mathfrak {C}}_\mathcal {D}\) is constant (and so obviously continuous) on a small segment \([P, P+\epsilon _0 \vec \mu ]\) with \(\epsilon _0>0\).
There exists a cut \(S\) in \(\Gamma \) which is saturated with respect to \(\mathcal {D}^P\) such that \(P \in \partial S\) and \(P+\epsilon \vec \mu \not \in S\) for all sufficiently small \(\epsilon > 0\).
We note that there is a maximum saturated cut \(S\) (i.e., containing any other saturated cut) with the property described in (2) (see the proof of [3, Theorem 3] for details). In the following \(S\) denotes the maximum saturated cut with property (2). In this case, there exists an \(\epsilon _0>0\) such that for any \(0<\epsilon <\epsilon _0\), the reduced divisor \(\mathcal {D}^{P+\epsilon \vec \mu }\) has the following description (the proof mimics that of [3, Theorem 3] and is omitted).
Let \(\vec \mu ,\vec \mu _1,\dots ,\vec \mu _s\) be all the distinct tangent vectors in \(\Gamma \) (based at the boundary points \(P, x_1,\dots ,x_s \in \partial S\), respectively) which are outgoing from \(S\). (It might happen that \(x_i=x_j\) for two different indices \(i\) and \(j\).) Let \(\gamma _0>0\) be small enough so that for any point \(x \in \partial S\) and any tangent vector \(\vec \nu \) to \(\Gamma \) at \(x\) which is outgoing from \(S\), the entire segment \((x,x+\gamma _0\vec \nu \,]\) lies outside \(S\) and does not contain any point of the support of \(D_\Gamma \).
For any \(0<\gamma < \gamma _0\) and any positive integer \(\alpha \), we will define below a rational function \(f_{\Gamma }^{(\gamma ,\alpha )}\) on \(\Gamma \). Appropriate choices of \(\gamma = \gamma (\epsilon )\) and \(\alpha \) will then give the \((P+\epsilon \vec \mu )\)-reduced divisor \(\mathcal {D}^{P+\epsilon \vec \mu } = \mathcal {D}^P + \mathrm {div}(\mathfrak {f}^{(\gamma ,\alpha )})\) for any \(\epsilon < \epsilon _0 := \frac{\gamma _0}{\alpha }\), where \(\mathfrak {f}^{(\gamma ,\alpha )}\) is the rational function on \(\mathfrak {C}\) given by \(f_\Gamma ^{(\gamma ,\alpha )}\) on \(\Gamma \) and \(f_v =1\) on each \(C_v\).
For \(0<\gamma <\gamma _0\) and integer \(\alpha \ge 1\), define \(f_\Gamma ^{(\gamma ,\alpha )}\) as follows:
\(f_\Gamma ^{(\gamma ,\alpha )}\) takes value zero at any point of \(S\);
On any outgoing interval \([x_i,x_i+ \gamma \vec \mu _i]\) from \(S\), \(f_\Gamma ^{(\gamma ,\alpha )}\) is linear of slope \(-1\);
The restriction of \(f_\Gamma ^{(\gamma ,\alpha )}\) to the interval \([P,P+ (\frac{\gamma }{\alpha }) \vec \mu \,]\) is linear of slope \(-\alpha \);
\(f_\Gamma ^{(\gamma ,\alpha )}\) takes value \(-\gamma \) at any other point of \(\Gamma \).
Note that the values of \(f_\Gamma ^{(\gamma ,\alpha )}\) at the points \((x_i+ \gamma \vec \mu _i)\) and \(P+ (\frac{\gamma }{\alpha }) \vec \mu \) are all equal to \(-\gamma \), so \(f_\Gamma ^{(\gamma ,\alpha )}\) is well-defined.
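The gluing condition in the previous remark can also be checked numerically. The following is a minimal Python sketch (not part of the original argument; the encoding of points and the numerical values are ours) that evaluates the four defining clauses of \(f_\Gamma ^{(\gamma ,\alpha )}\) and verifies that the two kinds of interval endpoints receive the common value \(-\gamma \).

```python
# Illustrative check only: a point outside S is encoded by the distance travelled from
# its boundary point of S along the chosen outgoing direction.
def f_value(region, distance, gamma, alpha):
    """Value of f_Gamma^{(gamma,alpha)} at a point described by (region, distance).

    region: 'S'    -> a point of the cut S                             (value 0)
            'mu'   -> on the interval [P, P + (gamma/alpha) mu],       slope -alpha
            'mu_i' -> on an outgoing interval [x_i, x_i + gamma mu_i], slope -1
            'far'  -> any other point of Gamma                         (value -gamma)
    """
    if region == 'S':
        return 0.0
    if region == 'mu':
        return -alpha * distance
    if region == 'mu_i':
        return -distance
    return -gamma

gamma, alpha = 0.6, 3
# Both interval endpoints meet the locally constant value -gamma, as claimed above.
assert abs(f_value('mu', gamma / alpha, gamma, alpha) - (-gamma)) < 1e-12
assert abs(f_value('mu_i', gamma, gamma, alpha) - (-gamma)) < 1e-12
```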
It remains to determine the values of \(\alpha \) and \(\gamma \). Once the value of \(\alpha \) is determined, \(\gamma \) will be defined as \(\alpha \epsilon \) so that the point \(P+(\frac{\gamma }{\alpha })\vec \mu \) coincides with the point \(P+\epsilon \vec \mu \). We consider the following two cases, depending on whether or not \(P\) is a vertex of \(G\):
If \(P \in \Gamma {\setminus }V\), then \(\alpha = D^P_\Gamma ( P ) - \mathrm {outdeg}_S ( P ) +1\). (Note that since \(S\) is saturated with respect to \(D_\Gamma ^P\), we have \(D^P_\Gamma ( P ) \ge \mathrm {outdeg}_S ( P )\) and thus \(\alpha \ge 1\).)
If \(P=v\) for a vertex \(v\in V(G)\), let \(e_0, e_1, \dots , e_l\) be the outgoing edges at \(v\) with respect to \(S\), and consider the points \(x^{e_0}_v, x^{e_1}_v, \dots , x^{e_l}_v\) in \(C_v(\kappa )\) indexed by these edges. Suppose in addition that \(e_0\) is the edge which corresponds to the tangent direction \(\vec \mu \). Since \(S\) is a saturated cut with respect to \(D^v_\Gamma \), the divisor \(D_v - \mathrm {div}_v(\partial S) = D_v - \sum _{i=0}^l (x^{e_i}_v)\) has non-negative rank in \(C_v\). Define \(\alpha \) to be the largest integer \(n\ge 1\) such that \(D_v - n (x_v^{e_0}) - \sum _{i=1}^l (x^{e_i}_v)\) has non-negative rank.
Now for any \(0\le \epsilon <\epsilon _0= \frac{\gamma _0}{\alpha }\), the divisor \(\mathcal {D}^{P+\epsilon \vec \mu }\) is \((P+\epsilon \vec \mu )\)-reduced. (The argument is similar to [3, Proof of Theorem 3].) It follows immediately that the reduced divisor map is continuous on the interval \([P, P+\epsilon _0 \vec \mu )\), and the result follows. \(\square \)
We are now ready to give the proof of Theorem 6.1.
Proof of Theorem 6.1
By Lemma 6.3, it is enough to check the equivalence of the following two properties for any divisor \(\mathcal {D}\) on \(\mathfrak {C}\):
\(r_\mathfrak {C}(\mathcal {D}) \ge 1\).
For any \(u \in V\) and any point \(z \in \mathcal {R}_u = \mathcal {R}\, \cap \, C_u(\kappa )\), the divisor \(D_u^u -(z)\) has non-negative rank on \(C_u\).
It is clear that (i) implies (ii). So we only need to prove that (ii) implies (i). In addition, by Lemma 6.2, Property (i) is equivalent to:
for any point \(P\) of \(\Gamma \), \(D^P_\Gamma ( P )\ge 1\); and
for any vertex \(v \in V\), the divisor \(D^v_v\) has rank at least one on \(C_v\).
So it suffices to prove that \(\mathrm{(ii)}\Rightarrow (1)\) and \((2)\). Since the cardinality of \(\mathcal {R}_v\) is \(g_v+1\), \(\mathcal {R}_v\) is rank-determining in \(C_v\). Therefore, (ii) implies \((2)\). We now show that \((2)\) implies \((1)\). Let \(\Gamma _0\) be the set of all \(P\in \Gamma \) such that \(D^P_\Gamma ( P )\ge 1\). By the continuity of the map \({\mathrm {Red}}^{\mathfrak {C}}_\mathcal {D}\), \(\Gamma _0\) is a closed subset of \(\Gamma \). In addition, since \(D^v_v\) has rank at least one on \(C_v\) and \(D_\Gamma ^v( v ) = \deg (D_v^v)\) for every vertex \(v\in V\), we have \(V \subset \Gamma _0\). This shows that \(\Gamma {\setminus }\Gamma _0\) is a disjoint union of open segments contained in edges of \(G\). Suppose for the sake of contradiction that \(\Gamma _0 \subsetneq \Gamma \), and let \(I = (P,Q) \) be a non-empty segment contained in the edge \(\{u,v\}\) of \(G\) such that \(I \cap \Gamma _0 = \emptyset \).
Claim. \({\mathrm {Red}}^{\mathfrak {C}}_\mathcal {D}\) is constant on the closed interval \([P,Q]\).
To see this, note that for any point \(Z \in [P,Q]\) and any tangent direction \(\vec \mu \) for which \(Z+\epsilon \vec \mu \in [P,Q]\) for all sufficiently small \(\epsilon >0\), we are always in case (1) of the description of \({\mathrm {Red}}^{\mathfrak {C}}_\mathcal {D}\). Otherwise, there would be an integer \(\alpha >0\) such that \(\mathcal {D}^{Z+\epsilon \vec \mu } = \mathcal {D}^{Z} + \mathrm {div}(\mathfrak {f}^{(\epsilon \alpha , \alpha )})\) for all sufficiently small \(\epsilon > 0\). In particular, this would imply (by the definition of \(f_\Gamma ^{(\gamma ,\alpha )}\)) that \(D_\Gamma ^{Z+\epsilon \vec \mu }( Z+\epsilon \vec \mu ) = \alpha \ge 1\), which implies that \(Z+\epsilon \vec \mu \in \Gamma _0\), a contradiction. This proves the claim.
A case analysis (depending on whether \(P\) and \(Q\) are vertices or not) shows that for a point \(Z \in (P,Q)\), the cut \(S = \Gamma \setminus (P,Q)\) is saturated for \(\mathcal {D}^P =\mathcal {D}^Q\). Since \(\mathcal {D}^Z = \mathcal {D}^P=\mathcal {D}^Q\), and \(S\) does not contain \(Z\), this contradicts the assumption that \(\mathcal {D}^Z\) is \(Z\)-reduced. \(\square \)
Theorem 6.1 has the following direct corollaries.
Corollary 6.5
Let \(\mathcal {G}\) be a subgroup of \(\mathbb {R}\) which contains all the edge lengths in \(G\). For any divisor \(\mathcal {D} \in \mathrm{Div }(\mathfrak {C})_\mathcal {G}\), we have
$$\begin{aligned} r_{\mathfrak {C}, \mathcal {G}}(\mathcal {D}) = r_\mathfrak {C}(\mathcal {D}). \end{aligned}$$
Fix a rank-determining set \(\mathcal {R}\subset \cup _{v\in V} C_v(\kappa )\) as in Theorem 6.1. Since \(\mathcal {R}\) is rank-determining and any effective divisor \(\mathcal {E}\) with support in \(\mathcal {R}\) obviously belongs to \(\mathrm{Div }(\mathfrak {C})_\mathcal {G}\), to prove the equality of \(r_{\mathfrak {C}, \mathcal {G}}(\mathcal {D})\) and \(r_\mathfrak {C}(\mathcal {D})\) it is enough to show that the two statements \(r_{\mathfrak {C}, \mathcal {G}}(\mathcal {D})\ge 0\) and \(r_\mathfrak {C}(\mathcal {D})\ge 0\) are equivalent. Obviously, the former implies the latter, so we only need to show that if \(r_\mathfrak {C}(\mathcal {D})\ge 0\) then \(r_{\mathfrak {C}, \mathcal {G}}(\mathcal {D})\ge 0\). Let \(v\) be a vertex of \(G\) and \(\mathcal {D}^{v}\) the \(v\)-reduced divisor linearly equivalent to \(\mathcal {D}\). By Lemma 3.11, \(r_\mathfrak {C}(\mathcal {D})\ge 0\) is equivalent to \(r_{C_v}(D^{v}_{v})\ge 0\). Now let \(\mathcal {D}\) be an element of \(\mathrm{Div }(\mathfrak {C})_\mathcal {G}\) with \(r_{C_v}(D^v_v)\ge 0\). Since \(v\in V\) and \(\mathcal {G}\) contains all the edge lengths in \(G\), it is easy to see that \(\mathcal {D}\) and \(\mathcal {D}^v\) differ by the divisor of a rational function \(\mathfrak {f}\), and this divisor lies in \(\mathrm{Div }(\mathfrak {C})_{\mathcal {G}}\). In other words, \(\mathcal {D} \sim \mathcal {D}^v\) in \(\mathrm{Div }(\mathfrak {C})_\mathcal {G}\). Since \(\mathcal {D}^v\) is linearly equivalent to an effective divisor in \(\mathrm{Div }(\mathfrak {C})_\mathcal {G}\) (with constant rational function on \(\Gamma \)), we conclude that \(r_{\mathfrak {C},\mathcal {G}}(\mathcal {D})\ge 0\). \(\square \)
Let \(\mathfrak {C}_{X_0}\) be the regularization of a strongly semistable curve \(X_0\) over \(\kappa \). Let \(\mathcal {L}\) be a line bundle on \(X_0\) corresponding to a divisor \(\mathcal {D} \in \mathrm{Div }(\mathfrak {C}_{X_0})\). Then \(r_{c} (\mathcal {L}) = r_{\mathfrak {C}_{X_0}}(\mathcal {D})\).
This follows from the previous corollary with \(\mathcal {G} = \mathbb {Z}\). \(\square \)
Amini, O.: Equidistribution of Weierstrass points on curves over non-Archimedean fields, in preparation
Amini, O., Baker, M.: Limit linear series for a generic chain of genus one curves, in preparation
Amini, O.: Reduced divisors and embeddings of tropical curves. Trans. Am. Math. Soc. 365(9), 4851–4880 (2013)
Amini, O., Baker, M., Brugallé, E., Rabinoff, J.: Lifting harmonic morphisms I: metrized complexes and Berkovich skeleta. Preprint arXiv:1303.4812
Amini, O., Baker, M., Brugallé, E., Rabinoff, J.: Lifting harmonic morphisms II: Tropical curves and metrized complexes. Preprint arXiv:1404.3390
Amini, O., Caporaso, L.: Riemann–Roch theory for weighted graphs and tropical curves. Adv. Math. 240, 1–23 (2013)
Baker, M.: Specialization of linear systems from curves to graphs. Algebra Number Theory 2(6), 613–653 (2008)
Baker, M., Norine, S.: Riemann–Roch and Abel–Jacobi theory on a finite graph. Adv. Math. 215(2), 766–788 (2007)
Baker, M., Payne, S., Rabinoff, J.: Non-Archimedean geometry, tropicalization, and metrics on curves. Preprint arXiv:1104.0320v1
Baker, M., Shokrieh, F.: Chip-firing games, potential theory on graphs, and spanning trees. J. Comb. Theory Series A 120(1), 164–182 (2013)
Berkovich, V.G.: Spectral Theory and Analytic Geometry over Non-Archimedean Fields. Mathematical Surveys and Monographs, vol. 33. American Mathematical Society, Providence (1990)
Bigas, M.T.I.: Brill–Noether theory for stable vector bundles. Duke Math. J. 62(2), 385–400 (1991)
Caporaso, L.: Linear series on semistable curves. Int. Math. Res. Not. 13, 2921–2969 (2011)
Caporaso, L.: Gonality of algebraic curves and graphs. In: Frühbis-Krüger, A., Kloosterman, R.N., Schütt, M. (eds.) Algebraic and Complex Geometry. Springer Proceedings in Mathematics & Statistics, vol. 71, p. 319. Springer (2014)
Cartwright, D.: Lifting rank 2 tropical divisors. Preprint http://users.math.yale.edu/dc597/lifting.pdf/
Chinburg, T., Rumely, R.: The capacity pairing. J. für die reine und angewandte Mathematik 434, 1–44 (1993)
Cools, F., Draisma, J., Payne, S., Robeva, E.: A tropical proof of the Brill–Noether theorem. Adv. Math. 230(2), 759–776 (2012)
Coleman, R.F.: Effective Chabauty. Duke Math. J. 52(3), 765–770 (1985)
Deligne, P., Mumford, D.: The irreducibility of the space of curves of given genus. Publications Mathématiques de l'IHES 36(1), 75–109 (1969)
Eisenbud, D., Harris, J.: Limit linear series: basic theory. Invent. Math. 85, 337–371 (1986)
Eisenbud, D., Harris, J.: The Kodaira dimension of the moduli space of curves of genus \({\ge }23\). Invent. Math. 90(2), 359–387 (1987)
Eisenbud, D., Harris, J.: Existence, decomposition, and limits of certain Weierstrass points. Invent. Math. 87(3), 495–515 (1987)
Eisenbud, D., Harris, J.: The monodromy of Weierstrass points. Invent. Math. 90(2), 333–341 (1987)
Esteves, E.: Linear systems and ramification points on reducible nodal curves. Mathematica Contemporanea 14, 21–35 (1998)
Esteves, E., Medeiros, N.: Limit canonical systems on curves with two components. Invent. Math. 149(2), 267–338 (2002)
Gathmann, A., Kerber, M.: A Riemann–Roch theorem in tropical geometry. Math. Z. 259(1), 217–230 (2008)
Harris, J., Morrison, I.: Moduli of Curves. Graduate Texts in Mathematics, vol. 187. Springer, Berlin (1998)
Harris, J., Mumford, D.: On the Kodaira dimension of the moduli space of curves. Invent. Math. 67, 23–88 (1982)
Hartshorne, R.: Algebraic Geometry. Graduate Texts in Mathematics, vol. 52. Springer, New York (1977)
Hladký, J., Král', D., Norine, S.: Rank of divisors on tropical curves. J. Comb. Theory Series A 120(7), 1521–1538 (2013)
Katz, E., Zureick-Brown, D.: The Chabauty–Coleman bound at a prime of bad reduction and Clifford bounds for geometric rank functions. Compositio Math. 149(11), 1818–1838 (2013)
Lorenzini, D.J., Tucker, T.J.: Thue equations and the method of Chabauty–Coleman. Invent. Math. 148(1), 47–77 (2002)
Lim, C.M., Payne, S., Potashnik, N.: A note on Brill–Noether theory and rank determining sets for metric graphs. Int. Math. Res. Not. 23, 5484–5504 (2012)
Luo, Y.: Rank-determining sets of metric graphs. J. Comb. Theory Series A 118(6), 1775–1793 (2011)
McCallum, W., Poonen, B.: The method of Chabauty and Coleman, June 14, 2010. Preprint http://www-math.mit.edu/poonen/papers/chabauty.pdf, to appear in Panoramas et Synthèses, Société Math. de France
Mikhalkin, G., Zharkov, I.: Tropical curves, their Jacobians and Theta functions. In: Proceedings of the International Conference on Curves and Abelian Varieties in Honor of Roy Smith's 65th Birthday. Contemporary Mathematics, vol. 465, pp. 203–231 (2007)
Neeman, A.: The distribution of Weierstrass points on a compact Riemann surface. Ann. Math. 120, 317–328 (1984)
Osserman, B.: A limit linear series moduli scheme (Un schéma de modules de séries linéaires limites). Ann. Inst. Fourier 56(4), 1165–1205 (2006)
Osserman, B.: Linked Grassmannians and crude limit linear series. Int. Math. Res. Not. 25, 1–27 (2006)
Parker, B.: Exploded manifolds. Adv. Math. 229(6), 3256–3319 (2012)
Payne, S.: Fibers of tropicalization. Math. Zeit. 262, 301–311 (2009)
Ran, Z.: Modifications of Hodge bundles and enumerative geometry I: the stable hyperelliptic locus. Preprint arXiv:1011.0406
Stoll, M.: Independence of rational points on twists of a given curve. Compositio Math. 142(5), 1201–1214 (2006)
Temkin, M.: On local properties of non-Archimedean analytic spaces. Math. Annalen 318, 585–607 (2000)
Zhang, S.-W.: Admissible pairing on a curve. Invent. Math. 112(1), 171–193 (1993)
1. CNRS-DMA, École Normale Supérieure, Paris, France
2. School of Mathematics, Georgia Institute of Technology, Atlanta, USA
Amini, O. & Baker, M. Math. Ann. (2015) 362: 55. https://doi.org/10.1007/s00208-014-1093-8
Genetic diversity, linkage disequilibrium, and population structure analysis of the tea plant (Camellia sinensis) from an origin center, Guizhou plateau, using genome-wide SNPs developed by genotyping-by-sequencing
Suzhen Niu 1,2,3, Qinfei Song 1, Hisashi Koiwa 2, Dahe Qiao 3, Degang Zhao 1,3, Zhengwu Chen 3, Xia Liu 1 & Xiaopeng Wen 4,5
BMC Plant Biology volume 19, Article number: 328 (2019)
To efficiently protect and exploit germplasm resources for marker development and breeding purposes, we must accurately characterize the features of tea populations. This study focuses on the Camellia sinensis (C. sinensis) population and aims to (i) identify single nucleotide polymorphisms (SNPs) at the genome level, (ii) investigate the genetic diversity and population structure, and (iii) characterize the linkage disequilibrium (LD) pattern to facilitate subsequent genome-wide association mapping and marker-assisted selection.
We collected 415 tea accessions from the Origin Center and analyzed their genetic diversity, population structure and LD pattern using the genotyping-by-sequencing (GBS) approach. A total of 79,016 high-quality SNPs were identified; the polymorphism information content (PIC) and genetic diversity (GD) based on these SNPs indicated a higher level of genetic diversity in the cultivated type than in the wild type. The 415 accessions were clustered into three groups by the STRUCTURE software and confirmed using principal component analysis (PCA): wild type, cultivated type, and admixed wild type. However, unweighted pair group method with arithmetic mean (UPGMA) trees indicated that the accessions should be grouped into more clusters. Further STRUCTURE analyses identified four groups, namely the Pure Wild Type, the Admixed Wild Type, ancient landraces and modern landraces, and the results were confirmed by PCA and the UPGMA tree method. A higher level of genetic diversity was detected in the ancient landraces and the Admixed Wild Type than in the Pure Wild Type and modern landraces. The highest differentiation was between the Pure Wild Type and the modern landraces. A relatively fast LD decay over a short physical distance (a few kb) was observed, and the LD decays of the four inferred populations differed.
This study is, to our knowledge, the first population genetic analysis of tea germplasm from the Origin Center, Guizhou Plateau, using GBS. The LD pattern, population structure and genetic differentiation of the tea population revealed by our study will benefit further genetic studies, germplasm protection, and breeding.
Tea is one of the most popular beverages worldwide [1, 2], with high nutritional and medicinal value. The rich flavor of tea derives from nearly 700 bioactive compounds such as catechins (a subgroup of flavan-3-ols), theanine, caffeine, and volatiles [3, 4]. Tea, Camellia sinensis (L.) O. Kuntze, Theaceae (C. sinensis), has been grown in the Yunnan-Guizhou Plateau in southwest China for approximately 5,000 years and is now widely cultivated all over the world [4]. The Guizhou Plateau is the center of origin of tea [4, 5], where the population diversity of tea is well preserved, with abundant wild tea plants, ancient landraces and modern landraces displaying different morphological characteristics; this is owing to the unique geology, diverse climates and plentiful rainfall of the region and to the cross-pollination nature of tea plants [6]. Large-scale elimination of tea species has not occurred, due to the slow pace of economic development and land use in the Guizhou Plateau.
Ancient tea plants belong to Sect. Thea (L.) Dyer and are defined as varieties grown for more than 100 years. Wild teas, including the wild type and the self-wild type, are valuable for scientific research and application, as they have mainly undergone natural selection and were only minimally affected by artificial selection. Analyzing genetic diversity and population genetic structure is important for depicting the domestication history and genetic relationships of tea plants. It is also helpful for expediting the development of breeding strategies [7]. Molecular markers have been a powerful tool for the genetic study of tea populations; these include RAPD [8], nSSR [1, 9], gSSR [2], SSR [10, 11], SNP [12], AFLP [13], ISSR [14], and EST-SSR markers [15, 16]. As revealed by these studies, current tea populations evolved from a single species in the Yunnan-Guizhou (Yun-Gui) Plateau. However, the tea populations used in these previous studies had either a small sample size or a narrow geographic distribution, covering only 14 tea-producing regions in Yunnan [17], Guangxi [18] or across China.
LD is defined as the non-random association of alleles at different loci within a given population. Understanding the LD pattern is crucial for tea breeding [19,20,21]. GBS has emerged as a useful tool for linkage map construction and the extensive identification of polymorphisms [21, 23,24,25,26,27,28]. It has also been widely used in population structure and genetic diversity studies [29,30,31,32,33]. To our knowledge, the LD pattern, population structure, and genetic diversity of tea germplasm have not been examined using GBS in previous studies. In addition, very few studies have focused on the tea population in the Guizhou Plateau [22]. Therefore, we employed the GBS approach and performed a genetic analysis on a large tea population consisting of 415 accessions, including wild varieties, ancient landraces and modern landraces from the Guizhou Plateau, as well as cultivated varieties from Zhejiang, Fujian, Hunan, and Guizhou. We aim to (1) identify SNPs at the genome level; (2) analyze the population structure and genetic diversity; and (3) characterize the LD patterns in different varieties. Our findings will facilitate future genome-wide association mapping and marker-assisted selection in tea.
Genome-wide SNP discovery and GBS analysis
GBS was performed on 415 tea accessions using the Illumina HiSeq X Ten platform. After the primary quality-filtering step, 390.3 Gb of clean data were obtained, with an average of 0.94 Gb of clean data per accession (Additional file 1: Table S1). An average of 65% of the total reads were successfully mapped onto the tea genome (Additional file 1: Table S1). The SNPs were detected and genotyped by GATK (version 3.7.0) based on the reference genome [34]. We identified a total of 1,001,372 SNPs with a minimal set of initial quality filters. By restricting the filter conditions, the number of SNPs was subsequently reduced to 287,408, with an average SNP density of one per 10.5 kb and an average quality value of 41,262 (data not shown). The average individual heterozygosity was 17.84% (Additional file 1: Table S2). Furthermore, 79,016 high-quality SNPs were identified, and an average individual heterozygosity of 19.21% was observed (Additional file 1: Table S3). All 79,016 SNPs were physically mapped across all scaffolds, with an average density of one SNP per 38.24 kb and an average quality value of 41,394 (Additional file 1: Table S3). We found more transitions (62,962 loci, 79.68%) than transversions (15,650 loci, 19.81%), and the transition/transversion ratio was 4.02. C/T transitions and C/G transversions occurred at the highest and lowest frequencies, respectively. The frequencies of A/G and C/T transitions were similar (39.83 and 39.85%, respectively), and the four types of transversions also occurred at similar frequencies: 5.89% for A/T, 5.01% for A/C, 3.81% for G/C and 5.09% for G/T (Table 1).
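As an illustration of how a transition/transversion tally such as the one reported above can be computed, the following minimal Python sketch classifies bi-allelic SNPs by their reference and alternate alleles; the SNP list in the example is hypothetical and not taken from the study.

```python
from collections import Counter

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def classify(ref: str, alt: str) -> str:
    """Return 'transition' if both alleles are purines or both pyrimidines, else 'transversion'."""
    pair = {ref.upper(), alt.upper()}
    if pair <= PURINES or pair <= PYRIMIDINES:
        return "transition"
    return "transversion"

# Hypothetical (ref, alt) allele pairs, not data from the paper.
snps = [("A", "G"), ("C", "T"), ("A", "T"), ("G", "C"), ("C", "T"), ("A", "C")]
counts = Counter(classify(r, a) for r, a in snps)
ts, tv = counts["transition"], counts["transversion"]
print(counts, "Ts/Tv = %.2f" % (ts / tv))
```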
Table 1 Percentage of transition and transversion SNPs identified using genotyping-by-sequencing
Estimation of genetic diversity
The average genetic diversity (GD), observed heterozygosity (Ho) and polymorphism information content (PIC) of the 415 tea accessions were 0.257, 0.247 and 0.214, respectively (Table 2). The percentage of polymorphic loci (PPL) was significantly higher in the cultivation type than in the wild type (Table 2; Additional file 1: Table S5), and significantly higher in the Pure Cultivation Type (GP03) than in the Admixed Wild Type (GP02) and the Pure Wild Type (GP01) (Table 3). Among the six zones, PPL was significantly higher in Ia than in Ic, II and III (Additional file 5). GD, Ho, and PIC were significantly higher in the cultivation type than in the wild type (Table 2; Additional file 1: Table S5), and significantly higher in the Pure Cultivation Type (GP03) than in the Admixed Wild Type (GP02) and the Pure Wild Type (GP01). GD, Ho, and PIC also showed significantly higher diversity in Ia, Ib, Ic and II than in III and IV (Table 2; Additional file 1: Table S5; Additional file 5).
Table 2 Genetic diversity parameters of 415 tea accessions in Guizhou Plateau
Table 3 Genetic differentiation of inferred populations of tea plants in Guizhou Plateau
Population structure analysis
We used STRUCTURE and PCA to analyze the genetic structure of the tea accessions. Both analyses were performed using 1,135 LD-pruned SNPs. Based on the genetic distance matrix of the 415 tea accessions, we used TASSEL v.5.2.37 to build an UPGMA tree.
The number of clusters was first estimated based on the ΔK method [35, 36] and the plateau criterion [37] in STRUCTURE. The results showed that ΔK had its maximum value at K = 2 (Fig. 1a), and on this basis two ancestral groups were identified (Fig. 1b). Accessions with a membership score higher than 0.80 were assigned to a pure group, while those with scores lower than 0.80 were assigned to the admixed group. The first pure group (referred to as the 'Pure Wild Type' or 'GP01' from now on) consisted of 52 accessions, all of which were wild type belonging to Camellia tachangensis F.C. Zhang, mostly from zones IV, III and II (Additional file 2). One hundred accessions (approximately 24% of the 415 accessions) exhibited admixed ancestry. In the admixed cluster (referred to as the 'Admixed Wild Type' or 'GP02' from now on), 95% were wild type, including 45 Camellia tachangensis from Ia, 50 Camellia remotiserrata Zhang from Ia, and five uncertain species (Additional file 2). The second pure group (referred to as the 'Pure Cultivation Type' or 'GP03' from now on) consisted of 263 accessions, of which 98% were cultivation type belonging to Camellia sinensis (including the ancient landraces and modern landraces).
The genetic clusters inferred using STRUCTURE. a Graphical method allowing the detection of the number of groups K using ∆K and LnP(K). ∆K and LnP(K) are shown in blue and red, respectively. b Inferred population structure of the collection using STRUCTURE software. Bar plot of individual ancestry proportions for the genetic clusters inferred using STRUCTURE (K = 2). Individual ancestry proportions (q values) are sorted within each cluster. Admixture model, independent frequencies, 30,000 burn-in iterations, and 100,000 Markov Chain Monte Carlo iterations were used for this analysis. Cultivation type and wild type ancestral populations are shown in red and blue, respectively
The PCA results were highly consistent with those of STRUCTURE (Fig. 2). PCA revealed two main clusters that correspond to the two ancestral groups identified using STRUCTURE. The Pure Cultivation Type cluster was more scattered than the Pure Wild Type cluster, and the Admixed Wild Type was dispersed between these two clusters along the left side of the PC2 or PC3 axis (Fig. 2). The UPGMA tree also agreed with the STRUCTURE results, although some subgroups were formed within the Pure Cultivation Type cluster (K = 2) (Fig. 3b). Furthermore, the UPGMA tree results were almost concordant with the growth habits (wild type and cultivation type) (Fig. 3a), the cultivation status (modern landraces, ancient landraces and wild tea trees) (Fig. 3c) and the classification (C. tachangensis, C. sinensis and C. remotiserrata) (Fig. 3d) of the tea accessions.
Principal component analysis (PCA) of 415 tea accessions. PCA using 1135 selected SNPs with no linkage disequilibrium in the set of 415 tea accessions. GP03 identified in STRUCTURE is shown in green, GP01 in red and GP02 in blue. First and second components (a) and first and third components (b) of the PCA analyses are shown
Cluster analysis based on genetic distance using an UPGMA tree. a UPGMA cluster tree compared with growth habits: wild type (red) and cultivation type (green). b UPGMA cluster tree compared with the STRUCTURE results (K = 2): Pure Wild Type (red), Pure Cultivation Type (green) and Admixed Wild Type (yellow). c UPGMA cluster tree compared with cultivation status: modern landraces (red), ancient landraces (green) and wild (yellow). d UPGMA cluster tree compared with the classification results: C. tachangensis (red), C. sinensis (green), C. remotiserrata (yellow) and uncertain species (blue). e UPGMA cluster tree including the four inferred groups: GP01 (red), GP02 (yellow), GP03–1 (green) and GP03–2 (purple)
The plateau criterion was also used to estimate the number of clusters [37,38,39,40]. As shown in Fig. 1, the mean log-likelihood (LnP(K)) curve reached a stable value at around K = 3 ~ 4 [20]. Therefore, we further analyzed the 263 accessions of the GP03 ancestral group to explore whether subgroups could be identified using STRUCTURE, as reported by Campoy et al. [20]. The 52 accessions in the GP01 ancestral cluster and the 100 accessions in the GP02 cluster were excluded from these further analyses (Additional file 2). Within the GP03 group of 263 accessions, we identified two subgroups at K = 2 (Additional file 3: Figures S1 and S2) based on Evanno's ΔK (accessions were assigned to the two groups using an estimated membership score of 0.5). The first subgroup included 213 Pure Cultivation Type accessions, of which 78% were ancient landraces (referred to as the 'ancient landraces' or 'GP03–1' hereafter). The second subgroup was smaller, containing only 50 Pure Cultivation Type accessions, of which 92% were modern landraces (referred to as 'modern landraces' or 'GP03–2' hereafter) and 8% were breeding varieties (Additional file 2). Overall, the 415 accessions were clustered into three groups, including two main groups (GP01 and GP03) and an admixed group (GP02), and the GP03 group could be further divided into two subgroups (GP03–1 and GP03–2). This result was confirmed by both the UPGMA tree (Fig. 3e) and PCA (Fig. 4) (Additional file 3: Figure S3).
Principal component analysis (PCA) of 415 tea accessions. PCA using 1135 selected SNPs with no linkage disequilibrium in the set of 415 tea accessions. The GP01 cluster identified in STRUCTURE is shown in red, the GP02 cluster in blue, GP03–1 in purple and GP03–2 in green. First and second components (a) and first and third components (b) of the PCA analyses are shown
LD analysis
In this study, the extent of LD was evaluated in the 415 tea accessions using 143,041 non-LD-pruned SNPs across all scaffolds longer than 500 kb (Fig. 5a). LD declined rapidly with increasing physical distance. The studied population had an overall low LD, and most r2 values were below 0.16 (Fig. 5a). On average, LD declined rapidly, with the r2 value falling below 0.08 within approximately 2 kb (Fig. 5b).
Linkage disequilibrium decay for all scaffolds longer than 500 kb. a Scatter plot of LD decay (r2) against the genetic distance for pairs of linked SNP across all scaffolds longer than 500 kb. b Zoom-in scatter plot of LD decay (r2) against the genetic distance
LD decay in the four inferred groups was also estimated (Additional file 4: Figure S1). The slowest LD decay was observed in GP01, where r2 reached 0.08 (the threshold) at approximately 35 kb. Conversely, LD declined most rapidly in GP02, where r2 = 0.08 corresponded to a physical distance of approximately 1 kb, followed by subgroup GP03–1, in which r2 = 0.08 corresponded to approximately 2 kb. The LD of subgroup GP03–2 declined below r2 = 0.08 at approximately 25 kb.
Genetic differentiation analysis
Genetic variation was calculated for the four inferred groups (Table 3). The percentage of polymorphic loci (PPL) was significantly lower in GP01 than in GP02, GP03–1 and GP03–2 (Table 3). We detected no significant differences in PPL among GP02, GP03–1, and GP03–2. The genetic variation in GP02 and GP03–1 was significantly higher than in GP01 and GP03–2, with GP01 showing the lowest genetic variation (Table 3). Fis in all four inferred populations was significantly different from zero (Table 3): Fis in GP02, GP03–1 and GP03–2 was significantly lower than zero, whereas Fis in GP01 was significantly higher than zero.
The pairwise Fst values ranged from 0.054 to 0.178, with a mean value of 0.101 (Table 4). The lowest level of differentiation was observed between GP03–1 and GP03–2, whereas GP01 and GP03–2 were the most differentiated. An intermediate level of differentiation was observed between GP01 and GP03–1 (Table 4). The Fst results were confirmed by the pairwise genetic distances calculated in the R package adegenet (Table 4).
Table 4 Fst and pairwise genetic distance among four inferred populations of tea plant in Guizhou Plateau
In this study, we report the first genetic diversity analysis of a tea population using GBS, a simple and cost-effective approach [41,42,43,44]. We generated 390.30 Gb of clean reads and identified 79,016 high-quality SNPs using stringent filtering criteria. The number of SNPs identified in the present study was higher than in previous studies [38, 39, 45, 46], suggesting that the GBS approach is powerful for genetic diversity analyses of tea species.
Previous studies have shown that breeding practices have a greater effect on reducing genetic diversity than domestication, leading to a lower level of genetic diversity in cultivated germplasm compared with wild varieties [7]. Interestingly, our genetic diversity analysis of the Guizhou Plateau tea varieties shows the opposite: we observed a significantly higher level of genetic diversity in the cultivation type than in the wild type, which differs from the results reported in previous studies [40, 41]. A plausible explanation for this counterintuitive finding is the existence of ancient landraces in the cultivation type. The ancient landraces were derived from early landraces and their natural offspring; they grow on the edges of terraced fields to prevent soil erosion or are used as fences to separate the fields owned by different farmers, and such human activities were not for breeding purposes. The cross-pollination characteristics of tea species have also contributed to the large genetic variation in the cultivation type. The relatively isolated natural environment of the Guizhou Plateau may have reduced the genetic perturbation of the wild type group by other tea varieties. Consistent with our hypothesis, a narrow genetic diversity of tea cultivars has been reported in tea-producing regions worldwide where several clonal tea cultivars dominate the local populations [32, 33]. This will not only impose limitations on tea breeding but also increase the risk of natural hazards, because wild tea plants and landraces provide valuable genetic resources for tea breeding [40]. Such a scenario is especially relevant for the Guizhou Plateau, which has many ancient landraces and Pure Wild Type accessions, both of which can be used for tea breeding. Therefore, future studies should focus more on the tea germplasm in the Guizhou Plateau.
In this study, we used three different approaches (STRUCTURE, PCA, and UPGMA) to analyze the population structure of the 415 tea accessions, and the results we obtained complement those of previous studies. STRUCTURE could effectively identify global clusters, which were subsequently validated by PCA. However, the two parameters we used to determine the number of clusters in STRUCTURE yielded different K values: Evanno's ΔK method identified K = 2 when analyzing the entire germplasm collection and the cryptic structure. Evanno's method focuses exclusively on the change in slope; therefore, it estimates the uppermost level of structure in the data, which may cause ΔK to be artificially maximal at K = 2 in some cases, as reported previously by Campoy et al. [20]. We used the maximum likelihood parameter in our analyses as recommended by Pritchard [37], in which K was set to three. K = 3 appeared to fit the origin and the pedigree of the accessions in the Guizhou Plateau. Therefore, the 263 accessions in GP03 obtained with STRUCTURE at K = 2 were further analyzed. The clustering of the tea accessions correlated well with cultivation status and origin at K = 2, as revealed by Evanno's ΔK method: the 415 accessions were clustered into four populations, including two main populations (GP01 and GP02) and two subgroups (GP03–1 and GP03–2). All accessions in GP01, the Pure Wild Type group, were C. tachangensis; the Admixed Wild Type group GP02 contained C. tachangensis and C. remotiserrata varieties; GP03–1 represented ancient landraces, all of which are C. sinensis; and GP03–2 consisted of cultivated varieties including modern landraces and breeding varieties, most of which are C. sinensis.
We detected the lowest genetic differentiation and genetic distance between the modern and ancient landraces. The Pure Wild Type and modern landraces exhibited the largest genetic differentiation and genetic distance, followed by that between the Pure Wild Type and ancient landraces, and that between the Admixed Wild Type and ancient landraces. These results support the notion that the evolution of tea plants was related to the history of tea cultivation in the Guizhou Plateau. The Pure Wild Type is the most primitive resource that originated in the region, and its species purity has been retained owing to the isolated ecological environment. The ancient landraces and the Admixed Wild Type likely emerged in the Ming Dynasty, when local landraces, introduced landraces, and wild species were co-cultivated. The co-cultivation facilitated cross-pollination among different germplasms, which reduced the genetic distance and differentiation between the ancient landraces and the Admixed Wild Type and significantly increased the diversity of these two groups relative to the other inferred groups. Most modern landraces and breeding varieties were assigned to GP03–2, reflecting a narrowed genetic basis of the modern landraces due to breeding practices.
We observed the lowest genetic differentiation between GP03–1 and GP03–2, suggesting that human activities may have caused frequent gene exchange between these two subgroups. GP01 and GP03–2 showed the highest level of genetic differentiation and distance, implying that geographic isolation has restricted gene flow among populations. This observation could also be a result of reproductive isolation between species. According to our results, GP03–1 and GP02 exhibited a higher genetic diversity than GP01 and GP03–2; therefore, varieties in GP03–1 and GP02 can be used for tea improvement. As revealed by our data, the differences between species did not affect the clustering, which reflects the complexity and uncertainty of the tea classification systems. Thus, it is necessary to establish a more scientific classification system. In addition, natural hybridization between tea species may be another explanation for the results mentioned above (Additional file 1: Table S6; Additional file 1: Table S7).
Linkage disequilibrium
LD decays more rapidly in cross-pollinated species like tea plants than in self-pollinated species, due to less effective recombination in the latter [49, 50]. We observed a rapid LD decay in the 415 accessions: LD declined below r2 = 0.08 at approximately 2 kb, lower than that observed in Prunus [20] and melon [21]. This can be attributed to the self-incompatibility of the tea plant [48]. The rapid LD decay and the high proportion of SNPs in LD suggest that GWAS can be used to inform the breeding of the tea varieties in the Guizhou Plateau. These findings are not consistent with those of Jin et al. [5], which may be caused by differences in the genetic backgrounds among different varieties within each species. In cross-pollinated species, LD can be affected by extreme genetic drift during domestication and breeding [20]. Thus, we investigated LD decay among the subgroups to provide valuable genetic information for future studies [21]. Subgroups GP01 and GP03–2 displayed a much slower LD decay than GP02 and GP03–1, which is likely because the modern landraces had experienced artificial selection pressure and the Pure Wild Type had experienced extreme genetic drift, leading to the fixation of a higher number of LD blocks. The slow LD decay in the Admixed Wild Type group and ancient landraces facilitates the identification of markers associated with desirable traits, as a relatively small number of markers could cover the entire genome. The Admixed Wild Type group and ancient landraces are ideal populations that can be directly used for breeding; varieties from the Pure Wild Type group can be crossed with modern landraces to achieve heterosis, owing to the relatively large genetic distance between these two groups.
Genome-wide SNPs in various tea varieties from the Origin Center, Guizhou Plateau, were identified in this study using GBS. These SNPs were used to analyze the genetic diversity, population structure, and LD pattern of the 415 tea accessions. Our results showed that the 415 accessions could be clustered into four populations, including two main populations (GP01 and GP02) and two subpopulations (GP03–1 and GP03–2). The ancient landrace group was found to have a more complex genetic structure than the wild and modern landraces. These data will inform the collection, conservation, and application of the tea varieties in the Guizhou Plateau.
Plant materials
A total of 415 samples, including 159 wild varieties and 256 cultivated varieties (174 ancient landraces, 77 modern landraces and five breeding varieties), were included in this study (Additional file 5; Additional file 2). According to the classification systems reported by Chen et al. [52] and Min [53], 251 Camellia sinensis (L.) O. Ktze, 100 Camellia tachangensis (F.C. Zhang), 59 Camellia remotiserrata (Zhang) and five accessions near Camellia taliensis (W.W. Smith) were identified (Additional file 2). Hereafter, samples from wild tea trees that are more than 100 years old and their natural offspring are referred to as the "wild type"; samples from cultivated tea varieties more than 100 years old are referred to as "ancient landraces", and samples from garden tea landraces are referred to as "modern landraces" (Additional file 2). The "ancient landraces", "modern landraces" and "breeding varieties" that had undergone artificial selection are all referred to as the "cultivation type".
We collected the samples from different tea-growing areas with different climates (Additional file 5). Specifically, a total of 276 samples were collected from tea varieties growing in areas of Guizhou with very suitable climates; these include 168, 51 and 57 accessions from northern (Ia), eastern (Ib) and southern Guizhou (Ic), respectively. Eighty-three samples were harvested from central Guizhou, where the climate is suitable for tea growth (II). Forty-one samples were collected from areas in western Guizhou with a marginally suitable climate (III), and 10 samples were from areas in western Guizhou with an unsuitable climate (IV). One variety was collected from Guizhou. Four varieties were collected from other provinces: two from Fujian, one from Zhejiang, and one from Hunan (Additional file 5; Additional file 2) [35]. The samples were planted in the city of Guiyang, China. Fresh leaves harvested from each accession were snap-frozen in liquid nitrogen and stored at − 80 °C until use.
DNA extraction
We used the Plant Genomic DNA Rapid Extraction kit (Biomed Gene Technology) to isolate genomic DNA from the samples. DNA integrity was assessed on a 1% agarose gel, and DNA purity was assessed and the DNA quantified using a Qubit Fluorometer (Invitrogen).
Library preparation and sequencing
We used 5 U of SacI and MseI (NEB) and 1 × restriction buffer in a 25 μl reaction to digest 100 ng genomic DNA. After digestion, SacAD and MseAD adaptors were ligated to the digested DNA fragments; 12 samples were pooled in equal volumes and purified using the QIAquick PCR Purification Kit (Qiagen) [47]. We then used the PCR Primer Cocktail and PCR Master Mix to amplify the purified DNA fragments. Amplicons of 500–550 bp (including the 120 bp adaptor) were retrieved through electrophoresis using 2% agarose gel and purified using the QIAquick Gel Extraction Kit (Qiagen) [47]. The Agilent DNA 12,000 kit and 2100 Bioanalyzer system (Agilent) were used to determine the average length of DNA fragments, and the resulting DNA libraries were quantified using real-time PCR with a TaqMan probe and sequenced on the Illumina HiSeq X ten platform with the paired-end 150 (PE150) sequencing strategy. Each library contains 48 samples, and we matched the clean reads individually to the barcodes and remnant restriction sites at both ends [47].
Sequence alignment and SNP identification
The barcodes were used to de-multiplex the raw DNA reads, and a custom Perl script was used to trim the adaptors. Only reads with quality values > 5 were retained as clean data and then aligned to the reference genome (http://www.plantkingdomgdb.com/tea_tree/) [3] using BWA-MEM (version 0.7.10) with the parameters '-T 20 -k 30' [54]. GATK (version 3.7.0) was used to call SNPs.
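A hedged sketch of how the alignment step just described could be scripted is shown below; the file names are placeholders (not from the paper), and it assumes bwa is on the PATH and that the reference has already been indexed with 'bwa index'. Only the parameters '-T 20 -k 30' are taken from the text.

```python
import subprocess

ref = "tea_reference.fa"                              # placeholder reference path
fq1, fq2 = "acc001_R1.fq.gz", "acc001_R2.fq.gz"       # placeholder paired-end reads

# BWA-MEM with the reported minimum output score (-T 20) and minimum seed length (-k 30);
# the SAM output for this accession is written to a file.
with open("acc001.sam", "w") as sam:
    subprocess.run(["bwa", "mem", "-T", "20", "-k", "30", ref, fq1, fq2],
                   stdout=sam, check=True)
```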
The SNPs were filtered according to the methods used by Hussain et al. [23], Chen et al. [19] and Eltaher et al. [28], based on the following criteria: (1) variants had to be bi-allelic SNPs; (2) the expression "QUAL < 50.0 || QD < 2.0 || FS > 60.0 || MQ < 40.0 || MQRankSum < -12.5 || ReadPosRankSum < -8.0" was used in the VariantFiltration step of GATK (version 3.7.0) to filter the SNPs; (3) SNPs with a minor allele frequency (MAF) lower than 0.05 or a missing-data rate higher than 20% were filtered out by VCFtools (version 0.1.15); (4) the SNPs were pruned with a window of 50 SNPs, a step size of 10 SNPs, and an r2 threshold of 0.2 using PLINK (v1.9). After filtering, 415 accessions and 79,016 SNPs were retained and used for further analysis.
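For illustration, the MAF and missing-rate thresholds in criterion (3) can be applied to a genotype matrix as in the simplified Python sketch below (toy data coded as 0/1/2 alternate-allele dosages, with -1 for missing calls); this is a sketch, not the published VCFtools/PLINK pipeline.

```python
import numpy as np

# rows = SNPs, columns = accessions (toy data)
geno = np.array([[0, 1, 2, -1, 0],
                 [2, 2, 2,  2, 2],
                 [0, 0, 1,  1, -1]])

def keep_snp(row, maf_min=0.05, max_missing=0.20):
    """Keep a SNP if its missing rate is at most 20% and its minor allele frequency is at least 0.05."""
    called = row[row >= 0]
    if called.size == 0 or 1 - called.size / row.size > max_missing:
        return False
    p = called.sum() / (2 * called.size)   # alternate-allele frequency
    return min(p, 1 - p) >= maf_min

mask = np.array([keep_snp(r) for r in geno])
print("kept SNP indices:", np.nonzero(mask)[0])   # rows 0 and 2 pass; row 1 is monomorphic
```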
Analysis of genetic diversity
The polymorphism information content (PIC) values for the SNP data were calculated using the following equation [19], where \(n\) is the number of alleles at a locus and \(P_i\) and \(P_j\) are the frequencies of the \(i\)-th and \(j\)-th alleles:
$$ \mathrm{PIC}=1-\sum_{i=1}^{n}P_i^2-\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}2P_i^2P_j^2 $$
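A minimal Python sketch of this formula, applied to a single locus, is given below; the allele frequencies in the example are hypothetical, whereas the study computed PIC across the 79,016 SNPs.

```python
def pic(freqs):
    """Polymorphism information content of one locus from allele frequencies summing to 1."""
    s1 = sum(p ** 2 for p in freqs)
    s2 = sum(2 * freqs[i] ** 2 * freqs[j] ** 2
             for i in range(len(freqs) - 1)
             for j in range(i + 1, len(freqs)))
    return 1 - s1 - s2

# Bi-allelic SNP with frequencies 0.6/0.4: 1 - (0.36 + 0.16) - 2*0.36*0.16 = 0.3648
print(pic([0.6, 0.4]))
```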
The mean number of observed alleles per locus and the observed heterozygosity (Ho) were calculated for each group using TASSEL v.5.2.37 [55]. Genetic diversity and inbreeding were calculated for each group using PowerMarker v3.25. Fst was calculated for each group using VCFtools [56].
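As a rough illustration of the per-locus statistics named above (computed in the study with TASSEL and PowerMarker), the sketch below derives observed heterozygosity and gene diversity from toy 0/1/2 dosage genotypes; it is a simplification, not the software's implementation.

```python
import numpy as np

geno = np.array([[0, 1, 2, 1, 0],
                 [1, 1, 0, 2, 1]])     # rows = loci, columns = accessions (toy data)

def ho_and_gd(row):
    ho = np.mean(row == 1)                 # observed heterozygosity: fraction of heterozygotes
    p = row.sum() / (2 * row.size)         # alternate-allele frequency
    gd = 1 - (p ** 2 + (1 - p) ** 2)       # gene diversity (expected heterozygosity), bi-allelic case
    return ho, gd

for i, row in enumerate(geno):
    print("locus %d: Ho = %.3f, GD = %.3f" % ((i,) + ho_and_gd(row)))
```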
Prior to the PCA and STRUCTURE analyses, we LD-pruned the SNPs again using PLINK (v1.9) [51] with a window of 50 SNPs, a step size of five markers, and an r2 threshold of 0.4. PLINK was also used to measure pairwise LD between SNPs [20, 54]. The pairwise LD between 143,041 genome-wide unpruned SNPs from sequences longer than 500 kb was calculated based on allele frequency correlations (r2) using the PopLDdecay program. To summarize the relationship between LD decay and physical distance, we fitted a locally weighted linear regression (loess) model to the r2 data [20, 57] using the R function 'loess' (http://www.R-project.org/) [58], with r2 summarizing both the recombinational and mutational history [59]. The LD decay plot was drawn using R.
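The two steps described here (pairwise r2 and a locally weighted fit of r2 against distance) can be sketched in Python as follows; the genotypes and positions are simulated, the dosage correlation is used as a proxy for the allele-frequency correlation, and the lowess smoother from statsmodels stands in for R's loess. This is an illustration, not the PopLDdecay/R workflow used in the study.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
pos = np.sort(rng.integers(0, 500_000, size=40))          # simulated SNP positions (one scaffold)
geno = rng.integers(0, 3, size=(40, 100)).astype(float)   # simulated 0/1/2 dosages, 100 accessions

dist, r2 = [], []
for i in range(len(pos)):
    for j in range(i + 1, len(pos)):
        r = np.corrcoef(geno[i], geno[j])[0, 1]            # dosage correlation between the two SNPs
        dist.append(pos[j] - pos[i])
        r2.append(r ** 2)

smoothed = lowess(r2, dist, frac=0.3, return_sorted=True)  # columns: distance, smoothed r2
print(smoothed[:5])
```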
Population structure was analyzed using the model-based Bayesian analysis implemented in STRUCTURE [37]. The number of subpopulations (K) was determined using the mean likelihood values in the ΔK method and the LnP(K) values [36, 59] calculated by Structure Harvester [60]. We estimated the variance between replicates by running K = 1–9 to determine the optimal population number [19]. The analysis was conducted with a burn-in of 30,000 iterations followed by 100,000 Markov Chain Monte Carlo (MCMC) replications in three independent runs. No prior information was used to define the clusters. We then fixed K at the inferred value to assess the clustering results. For each given K value, the run with the highest likelihood was used to cluster the accessions, and a membership threshold of 0.8 was used to distinguish between pure and admixed groups. PCA was performed using TASSEL v.5.2.37 [55]. The genetic distance among individuals was used for the PCA and for constructing the UPGMA tree. The UPGMA tree was generated using a simple matching coefficient in TASSEL v.5.2.37 [37]. Fst and pairwise genetic distances among the four inferred groups were calculated in the R package adegenet v.2.1.1 [61].
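For reference, Evanno's ΔK statistic used above is the mean absolute second-order rate of change of LnP(K) across replicate runs divided by the standard deviation of LnP(K); a small sketch with made-up likelihood values is shown below (the study used Structure Harvester for this calculation).

```python
import numpy as np

# rows = replicate STRUCTURE runs, columns = K = 1..5 (illustrative values only)
lnpk = np.array([[-9000.0, -8500.0, -8400.0, -8380.0, -8375.0],
                 [-9010.0, -8510.0, -8395.0, -8385.0, -8370.0],
                 [-8995.0, -8490.0, -8405.0, -8378.0, -8372.0]])

# |L(K+1) - 2*L(K) + L(K-1)| per replicate, averaged over replicates, divided by sd of L(K)
second_diff = np.abs(lnpk[:, 2:] - 2 * lnpk[:, 1:-1] + lnpk[:, :-2])
delta_k = second_diff.mean(axis=0) / lnpk.std(axis=0, ddof=1)[1:-1]

for k, dk in zip(range(2, lnpk.shape[1]), delta_k):
    print("K = %d, deltaK = %.2f" % (k, dk))
```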
The plant materials are grown in our resource nursery and are available from the corresponding author on reasonable request. The raw sequence data reported in this study have been deposited in the Genome Sequence Archive [62] at the BIG Data Center, Beijing Institute of Genomics (BIG), Chinese Academy of Sciences, under accession number CRA001438, which is publicly accessible at http://bigd.big.ac.cn/gsa. The genotypes of the 79,016 SNPs obtained by GBS in the 415 tea accessions have been deposited on the figshare website: https://doi.org/10.6084/m9.figshare.8343263.
Fst: Fixation index
GBS: Genotyping-by-sequencing
GD: Genetic diversity
GWAS: Genome-wide association studies
Ho: Observed heterozygosity
LD: Linkage disequilibrium
PCA: Principal component analysis
PIC: Polymorphism information content
PPL: Percentage of polymorphic loci
UPGMA: Unweighted pair group method with arithmetic mean
Wambulwa MC, Meegahakumbura MK, Kamunya S, Muchugi A, Moller M, Liu J, et al. Insights into the genetic relationships and breeding patterns of the African tea germplasm based on nSSR markers and cpDNA sequences. Front Plant Sci. 2016;7:1244.
Liu S, Liu H, Wu A, Hou Y, An Y, Wei C. Construction of fingerprinting for tea plant (Camellia sinensis) accessions using new genomic SSR markers. Mol Breeding. 2017;37(8):93.
Xia EH, Zhang HB, Sheng J, Li K, Zhang QJ, Kim C, et al. The tea tree genome provides insights into tea flavor and independent evolution of caffeine biosynthesis. Mol Plant. 2017;10(6):866–77.
Wei C, Yang H, Wang S, Zhao J, Liu C, Gao L, et al. Draft genome sequence of Camellia sinensis var. sinensis provides insights into the evolution of the tea genome and tea quality. Proc Natl Acad Sci U S A. 2018;115(18):E4151–E8.
Jin JQ, Yao MZ, Ma CL, Ma JQ, Chen L. Association mapping of caffeine content with TCS1 in tea plant and its related species. Plant Physiol Biochem. 2016;100:18–26.
Niu SZ. Studies on genetic diversity and resistance of wild tea germplasm (Camellia spp.) in Guizhou Province. Doctoral thesis. Guiyang: Guizhou University; 2014.
Chen L, Yang Y, Yu F. Genetic diversity, relationship and molecular discrimination of elite tea germplasms [Camellia sinensis (L.) O. Kuntze] revealed by RAPD markers. Mol Plant Breeding. 2004;2(3):385–90.
Kaundun SS, Zhyvoloup A, Park YG. Evaluation of the genetic diversity among elite tea (Camellia sinensis var. sinensis) genotypes using RAPD markers. Euphytica. 2012;115(1):7–16.
Meegahakumbura MK, Wambulwa MC, Thapa KK, Li MM, Möller M, Xu JC, et al. Indications for Three Independent Domestication Events for the Tea Plant (Camellia sinensis (L.) O. Kuntze) and New Insights into the Origin of Tea Germplasm in China and India Revealed by Nuclear Microsatellites. PLoS One. 2016;11(5):e0155369.
Fang W, Cheng H, Duan Y, Jiang X, Li X. Genetic diversity and relationship of clonal tea (Camellia sinensis) cultivars in China as revealed by SSR markers. Plant Syst Evol. 2011;298(2):469–83.
Tan L-Q, Peng M, Xu L-Y, Wang L-Y, Chen S-X, Zou Y, et al. Fingerprinting 128 Chinese clonal tea cultivars using SSR markers provides new insights into their pedigree relationships. Tree Genet Genomes. 2015;11(5):90.
Fang W, Meinhardt L, Tan H, Zhou L, Mischke S, Zhang D. Varietal identification of tea (Camellia sinensis) using nanofluidic array of single nucleotide polymorphism (SNP) markers. Hortic Res. 2014;1:14035.
Paul S, Wachira FN, Powell W, Waugh R. Diversity and genetic differentiation among populations of Indian and Kenyan tea (Camellia sinensis (L.) O. Kuntze) revealed by AFLP markers. Theor Appl Genet. 1997;94(2):255–63.
Yao MZ, Chen L, Liang YR. Genetic diversity among tea cultivars from China, Japan and Kenya revealed by ISSR markers and its implication for parental selection in tea breeding programmes. Plant Breed. 2008;127:166–72.
Yao M-Z, Ma C-L, Qiao T-T, Jin J-Q, Chen L. Diversity distribution and population structure of tea germplasms in China revealed by EST-SSR markers. Tree Genet Genomes. 2011;8(1):205–20.
Zhang Y, Zhang X, Chen X, Sun W, Li J. Genetic diversity and structure of tea plant in Qinba area in China by three types of molecular markers. Hereditas. 2018;155:22.
Zhao D, Yang J, Yang S, Kato K, Luo J. Genetic diversity and domestication origin of tea plant Camellia taliensis (Theaceae) as revealed by microsatellite markers. BMC Plant Biol. 2014;14(1):14.
Jiang C, Zhao W, Zeng Z, Lai X, Wu C, Yuan S, et al. A treasure reservoir of genetic resource of tea plant (Camellia sinensis) in Dayao Mountain. Genet Resour Crop Evol. 2018;65(1):217–27.
Chen W, Hou L, Zhang Z, Pang X, Li Y. Genetic diversity, population structure, and linkage disequilibrium of a core collection of Ziziphus jujuba assessed with genome-wide SNPs developed by genotyping-by-sequencing and SSR markers. Front Plant Sci. 2017;8:575.
Campoy JA, Lerigoleur-Balsemin E, Christmann H, Beauvieux R, Girollet N, Quero-García J, et al. Genetic diversity, linkage disequilibrium, population structure and construction of a core collection of Prunus avium L. landraces and bred cultivars. BMC Plant Biol. 2016;16(1):49.
Pavan S, Marcotrigiano AR, Ciani E, Mazzeo R, Zonno V, Ruggieri V, et al. Genotyping-by-sequencing of a melon (Cucumis melo L.) germplasm collection from a secondary center of diversity highlights patterns of genetic variation and genomic features of different gene pools. BMC Genomics. 2017;18(1):59.
Niu SZ, Song QF, Fan WG, Chen ZW. Effects of drought stress on leaf physiological characteristics and root growth of the clone seedlings of wild tea plants. Acta Ecologica Sinica. 2017;21(37):7333–41.
Hussain W, Baenziger P, Belamkar V, Guttieri M, Venegas J, Easterly A, et al. Genotyping-by-sequencing derived high-density linkage map and its application to QTL mapping of flag leaf traits in bread wheat. Sci Rep. 2017;7(1):16394.
Pucher A, Hash C, Wallace J, Han S, Leiser W, Haussmann B. Mapping a male-fertility restoration locus for the a cytoplasmic-genic male-sterility system in pearl millet using a genotyping-by-sequencing-based linkage map. BMC Plant Biol. 2018;18(1):65.
Zhang Z, Wei T, Zhong Y, Li X, Huang J. Construction of a high-density genetic map of Ziziphus jujuba Mill. using genotyping by sequencing technology. Tree Genet Genomes. 2016;4:1–10.
Ji F, Wei W, Liu Y, Wang G, Zhang Q, Xing Y, et al. Construction of a SNP-based high-density genetic map using genotyping by sequencing (GBS) and QTL analysis of nut traits in Chinese chestnut (Castanea mollissima Blume). Front Plant Sci. 2018;9:816.
Ma GJ, Song QJ, Markell SG, Qi LL. High-throughput genotyping-by-sequencing facilitates molecular tagging of a novel rust resistance gene, R15, in sunflower (Helianthus annuus L.). Theor Appl Genet. 2018;14:1–10.
Eltaher S, Sallam A, Belamkar V, Emara H, Nower A, Salem K, et al. Genetic diversity and population structure of F Nebraska winter wheat genotypes using genotyping-by-sequencing. Front Genet. 2018;9:76.
Burrell AM, Pepper AE, Hodnett G, Goolsby JA, Overholt WA, Racelis AE, et al. Exploring origins, invasion history and genetic diversity of Imperata cylindrica (L.) P. Beauv. (Cogongrass) in the United States using genotyping by sequencing. Mol Ecol. 2015;24(9):2177–93.
Kujur A, Bajaj D, Upadhyaya HD, Das S, Ranjan R, Shree T, et al. Employing genome-wide SNP discovery and genotyping strategy to extrapolate the natural allelic diversity and domestication patterns in chickpea. Front Plant Sci. 2015;6:162.
Gouesnard B, Negro S, Laffray A, Glaubitz J, Melchinger A, Revilla P, et al. Genotyping-by-sequencing highlights original diversity patterns within a European collection of 1191 maize flint lines, as compared to the maize USDA genebank. Theor Appl Genet. 2017;130(10):2165–89.
Schreiber M, Himmelbach A, Börner A, Mascher M. Genetic diversity and relationship of domesticated rye and its wild relatives as revealed through genotyping-by-sequencing. Evol Appl. 2019;12(1):66–77.
Korinsak S, Tangphatsornruang S, Pootakham W, Wanchana S, Plabpla A, Jantasuriyarat C, et al. Genome-wide association mapping of virulence gene in rice blast fungus Magnaporthe oryzae using a genotyping by sequencing approach. Genomics. 2018. https://doi.org/10.1016/j.ygeno.2018.05.011.
McKenna A, Hanna M, Banks E, Sivachenko A, Cibulskis K, Kernytsky A, et al. The genome analysis toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010;20:1297–303.
Yang SL, Huang ZY. The climatic superiority and regionalization of tea plant in Guizhou. Tillage Cultiv. 1984;(1):2–10 https://doi.org/10.13605/j.cnki.52-1065/s.1984.01.001.
Evanno G, Regnaut S, Goudet J. Detecting the number of clusters of individuals using the software STRUCTURE: a simulation study. Mol Ecol. 2005;14(8):2611–20.
Pritchard JK, Stephens M, Donnelly P. Inference of population structure using multilocus genotype data. Genetics. 2000;155(2):945–59.
Ravelombola W, Qin J, Shi A, Miller JC, Scheuring DC, Weng Y, et al. Population structure analysis and association mapping for iron deficiency chlorosis in worldwide cowpea (Vigna unguiculata (L.) Walp) germplasm. Euphytica. 2018;214(6):96.
Pootakham W, Jomchai N, Ruang-Areerate P, Shearman JR, Sonthirod C, Sangsrakru D, et al. Genome-wide SNP discovery and identification of QTL associated with agronomic traits in oil palm using genotyping-by-sequencing (GBS). Genomics. 2015;105(5–6):288–95.
Yao MZ, Ma CL, Qiao TT, Jin JQ, Chen L. Diversity distribution and population structure of tea germplasms in China revealed by EST-SSR markers. Tree Genet Genomes. 2012;8:205–20.
Wachira F, Tanaka J, Takeda Y. Genetic variation and differentiation in tea (Camellia sinensis) germplasm revealed by RAPD and AFLP variation. J Hortic Sci and Biotech. 2001;76(5):557–63.
Yang Z, Chen Z, Peng Z, Yu Y, Liao M, Wei S. Development of a high-density linkage map and mapping of the three-pistil gene (Pis1) in wheat using GBS markers. BMC Genomics. 2017;18(1):567.
Bhattarai U, Subudhi PK. Identification of drought responsive QTLs during vegetative growth stage of rice using a saturated GBS-based SNP linkage map. Euphytica. 2018;214(2):38.
Hackett CA, Milne L, Smith K, Hedley P, Morris J, Simpson CG, et al. Enhancement of Glen Moy x Latham raspberry linkage map using GbS to further understand control of developmental processes leading to fruit ripening. BMC Genet. 2018;19:59.
Gardner KM, Brown P, Cooke TF, Cann S, Costa F, Bustamante C, et al. Fast and cost-effective genetic mapping in apple using next-generation sequencing. G3-Genes Genom Genet. 2014;4(9):1681–7.
Palero F, Lopes J, Abelló P, Macpherson E, Pascual M, Beaumont M. Rapid radiation in spiny lobsters (Palinurus spp.) as revealed by classic and ABC methods using mtDNA and microsatellite data. BMC Evol Biol. 2009;9:263.
Elshire RJ, Glaubitz JC, Sun Q, Poland JA, Kawamoto K, Buckler ES, Mitchell SE. A robust, simple genotyping-by-sequencing (GBS) approach for high diversity species. PLoS One. 2011;6(5):e19379.
Gaut B, Long A. The lowdown on linkage disequilibrium. Plant Cell. 2003;15(7):1502–6.
Maruki T, Lynch M. Genome-wide estimation of linkage disequilibrium from population-level high-throughput sequencing data. Genetics. 2014;197(4):1303–13.
Zhu X, Dong L, Jiang L, Li H, Sun L, Zhang H, et al. Constructing a linkage-linkage disequilibrium map using dominant-segregating markers. DNA Res. 2016;23(1):1–10.
Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira M, Bender D, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007;81(3):559–75.
Chen L, Yu FL, Tong QQ. Discussions on phylogenetic classification and evolution of sect. Thea. J Tea Sci. 2000;20(2):89–94.
Min TL. A revision of Camellia sect. Thea. Acta Bot Yunnanica. 1992;14(2):115–32.
Li H. Aligning sequence reads, clone sequences and assembly contigs with BWA-MEM. arXiv Preprint at https://arxiv.org/abs/1303.3997. 2013.
Bradbury PJ, Zhang Z, Kroon DE, Casstevens TM, Ramdoss Y, Buckler ES. TASSEL: software for association mapping of complex traits in diverse samples. Bioinformatics. 2007;23(19):2633–5.
Danecek P, Auton A, Abecasis G, Albers CA, Banks E, Depristo MA, et al. The variant call format and VCFtools. Bioinformatics. 2011;27(15):2156–8.
Chao S, Dubcovsky J, Dvorak J, Luo MC, Baenziger SP, Matnyazov R, et al. Population and genome-specific patterns of linkage disequilibrium and SNP variation in spring and winter wheat (Triticumaestivum L.). BMC Genomics. 2010;11(1):727.
R Core Team. R: a language and environment for statistical computing. Computing. 2015;1:12–21.
Flint-Garcia SA, Thornsberry JM, Buckler ES. Structure of linkage disequilibrium in plants. Annu Rev Plant Biol. 2003;54(4):357–74.
Earl DA, Vonholdt BM. Structure harvester: a website and program for visualizing structure output and implementing the Evanno method. Conserv Genet Resour. 2012;4(2):359–61.
Jombart T, Ahmed I. Adegenet 1.3–1: new tools for the analysis of genome-wide SNP data. Bioinformatics. 2011. https://doi.org/10.1093/bioinformatics/btr521.
Wang Y, Song F, Zhu J, Zhang S, Yang Y, Chen T, Tang B, Dong L, Ding N, Zhang Q. GSA: Genome sequence archive*. Genom Proteom Bioinf. 2017;15(1):14–8.
We thank the tea offices of Guiding, Huishui, Liping, Renhuai, Sandu, Wuchuan, Daozhen, Dejiang, Duyun, Guian, Jinsha, Liuzhi, Nayong, Pu'an, Puding, Qinglong, Qixingguan, Shiqian, Shuicheng, Tongzi, Xingren, Xingyi, Xishui, Yanhe, Yinjiang, Yuqing, Zhenfeng and Zheng'an for their help in tea collection. We thank the College of Tea Science of Guizhou University and the Department of Horticultural Sciences of Texas A&M University for providing research and computing facilities.
This work was funded by the National Natural Science Foundation of China (31560222), the Science and Technology Plan Project of Guizhou Province, P.R. China ([2017]2558, [2019]1404, [2017]5788) and a USDA-NIFA SCRI grant (2017-51181-26834). The funding bodies played no role in the study design, the collection, analysis and interpretation of data, or the writing of the manuscript.
The Key Laboratory of Plant Resources Conservation and Germplasm Innovation in Mountainous Region (Ministry of Education), Institute of Agro-Bioengineering / College of Tea Science, Guizhou University, Guiyang, 550025, Guizhou Province, People's Republic of China
Suzhen Niu
, Qinfei Song
, Degang Zhao
& Xia Liu
Vegetable and Fruit Improvement Center, Department of Horticultural Sciences, Molecular and Environmental Plant Sciences Program, MS2133 Texas A&M University, College Station, TX, 77843-2133, USA
& Hisashi Koiwa
Institute of Tea, Guizhou Academy of Agricultural Sciences, Guiyang, 550006, Guizhou Province, People's Republic of China
, Dahe Qiao
& Zhengwu Chen
Institute of Agro-bioengineering/College of Life Science, Guizhou University, Huaxi Avenue, Guiyang, 550025, Guizhou Province, People's Republic of China
Xiaopeng Wen
Key Laboratory of Plant Resources Conservation and Germplasm Innovation in Mountainous Region (Ministry of Education), Guizhou University, Xiahui Road, Huaxi, Guiyang, 550025, Guizhou Province, People's Republic of China
Search for Suzhen Niu in:
Search for Qinfei Song in:
Search for Hisashi Koiwa in:
Search for Dahe Qiao in:
Search for Degang Zhao in:
Search for Zhengwu Chen in:
Search for Xia Liu in:
Search for Xiaopeng Wen in:
SZN, DGZ and ZWC conceived and supervised the study. QFS analyzed and interpreted the genetic diversity, linkage disequilibrium population structure. SZN and HK wrote and reviewed the manuscript. DHQ and XL performed the DNA extraction and filtered the genotyping data. XPW reviewed the manuscript. All authors read and approved the final version of the manuscript.
Correspondence to Degang Zhao or Zhengwu Chen.
Table S1. The quality control (QC) data of each sample. Table S2. Statistics of individual heterozygosity of 287,408 SNPs based on GBS. Table S3. Statistics of individual heterozygosity of 79,016 SNPs based on GBS. Table S4. SNP density of scaffolds based on GBS. Table S5. The p-value of genetic diversity parameters in Table 2 based on independent-samples t-test. Table S6. Genetic diversity parameters of three species of tea plants in Guizhou Plateau. Table S7. Fst and pairwise genetic distance among three species of tea plant in Guizhou Plateau (XLSX 117 kb)
Information of 415 tea accessions used in this study, including the accession/clone/collection, the accession name, the zone, the cultivation status, growth habits, the species, the STRUCTURE-based grouping (Qi ≥ 0.8) at K = 2, the notes, the source, and the inferred populations (XLSX 45 kb)
Figure S1. Graphical method allowing the detection of the number of groups using ∆K inferred population structure of the 263 Pure Cultivation Type. Figure S2. Inferred population structure of the 263 Pure Cultivation Type using STRUCTURE software. Bar plot of individual ancestry proportions for the genetic clusters inferred using STRUCTURE (K = 2) and the reduced dataset. Individual ancestry proportions (q values) are sorted within each cluster. Admixture model, independent frequencies, 30,000 burn-in iterations and 100,000 Markov Chain Monte Carlo iterations were used for this analysis. Ancient landraces (GP03–1) and modern landraces (GP03–2) are shown in yellow and green, respectively. Figure S3. Four inferred populations of the 415 tea accessions using STRUCTURE (K = 3). GP01 is shown in red, GP02 in red and blue, GP03–1 in blue, and GP03–2 in green. (PDF 207 kb)
Average LD decay (r2) estimated against the genetic distance for pairs of linked SNPs across all scaffolds longer than 500 kb in the 415 accessions (ALL) and four inferred groups (GP01, GP02, GP03–1 and GP03–2). (PDF 220 kb)
Geographic distribution of tea accessions analyzed in the current study according to the collection. (A) The geographical position of Guizhou Province in China. (B) Agricultural climate regionalization map for tea plant growth in Guizhou Plateau [35]. Ia: Area with a very suitable climate for tea plant growth in the north of Guizhou; Ib: Area with a very suitable climate for tea plant growth in the east of Guizhou; Ic: Area with a very suitable climate for tea plant growth in the south of Guizhou; II: Area with a suitable climate for tea plant growth in the center of Guizhou; III: Area with a marginally suitable climate for tea plant growth in the west of Guizhou; IV: Area with an unsuitable climate for tea plant growth in the west of Guizhou. (PDF 157 kb)
Niu, S., Song, Q., Koiwa, H. et al. Genetic diversity, linkage disequilibrium, and population structure analysis of the tea plant (Camellia sinensis) from an origin center, Guizhou plateau, using genome-wide SNPs developed by genotyping-by-sequencing. BMC Plant Biol 19, 328 (2019). https://doi.org/10.1186/s12870-019-1917-5
Tea plant
Origin center
Guizhou plateau
EMG Signal Feature Extraction, Normalization and Classification for Pain and Normal Muscles Using Genetic Algorithm and Support Vector Machine
Reema Jain* | Vijay Kumar Garg
Department of Computer Application, LPU, Phagwara, Punjab 144411, India
Department of Computer Science and Engineering, LPU, Phagwara, Punjab 144411, India
[email protected]
Electromyography (EMG) is the process of measuring the neuromuscular activity generated during the contraction and relaxation of muscles throughout the body. The potential is recorded either by inserting a needle or by placing electrodes on the surface of the body. In this research, an automatic EMG signal classification system is developed using a machine-learning-oriented Support Vector Machine (SVM). The collected data are selected using a Genetic Algorithm (GA), whose purpose is to select those rows of the dataset that contain the potentials or electrical activities recorded while the patient is in motion. The selected features are then normalized using the critic method. To improve the row selection, cosine similarity is used to determine an average similarity value, which also helps with data reduction. Based on the average similarity values, the SVM is trained and then used for classification during the testing phase. The experiment has been performed in the MATLAB tool, and classification accuracies of 91.3% and 92.4% are achieved for the normal and pain EMG signals, respectively.
electromyography, normalization, genetic algorithm, cosine similarity, support vector mechanism
Electromyography (EMG) is a diagnostic method by which specialists evaluate the functional state of skeletal muscles and peripheral nerve endings. The assessment is based on the level of their electrical activity. To conduct EMG, an electromyograph is used: an apparatus that amplifies and records the biopotentials of the neuromuscular system [1]. Modern computer-based devices record even the minimum values of electrical impulses, automatically read the amplitude and frequency of periods, and also perform their spectral analysis. The device consists of a complete computer system capable of recording certain signals (biopotentials) of muscle tissue [2]. Using the device, the biopotentials are amplified, which helps doctors to determine the degree of damage to muscle tissue without a surgical diagnostic operation. Diodes attached to the computer system record deviations from the norm. The signal is amplified, and an image is displayed on the screen that shows the state of the muscle tissue and the peripheral nerves of the body area under study [3]. Modern devices display the image directly on the monitor, whereas older-generation electromyographs recorded the received pulses on paper. It has been observed that several techniques exist to process the complex EMG signal, with classification performed by methods such as Artificial Neural Networks (ANN), Multi-Layer Perceptron (MLP), Support Vector Machines (SVM), Linear Discriminant Analysis (LDA) and K-Nearest Neighbor (KNN). Researchers have put considerable effort into classifying EMG data and have identified the major issues in classification as
a) Preprocessing and Data Selection
b) Training and Classification
Following aspects have been discovered in order to classify an EMG signal.
The signal contains many artefacts for the same type of disease, because it varies with the small movements of the subject during the recording process.
The presence of noise also increases the complexity and hence makes it difficult to classify the signal.
An optimal set of data for each class leads to better classification accuracy.
Machine learning schemes require signals with optimal and refined data so that training, and then classification, can be performed effectively [4].
To solve the problems defined above, a new system has been designed using signal attribute selection and feature optimization together with classification techniques.
Figure 1. (a) Non-invasive (b) Invasive techniques
EMG detects muscular information, based on the contraction and relaxation of muscles, through electrodes placed on the surface of the human body. The process of gathering this information can be invasive or non-invasive. Non-invasive surface EMG is performed using surface electrodes, whereas invasive EMG requires a needle that is inserted into the patient's body to collect the muscle-related information. Both processes are shown in Figure 1.
The EMG model can be simply represented by Eq. (1).
$y(n)=\sum_{t=0}^{N-1} h(t) e(n-t)+x(n)$ (1)
y(n)$\rightarrow$ Modelled EMG signal.
e(n)$\rightarrow$ Firing impulse train.
h(t)$\rightarrow$ Motor Unit Action Potentials (MUAPs), which provide a significant source of information useful for the diagnosis of neuromuscular disorders.
x(n)$\rightarrow$ White Gaussian noise.
n $\rightarrow$ Number of motor units.
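To make the model in Eq. (1) concrete, the following Python sketch (not from the paper) simulates an EMG-like trace by convolving a hypothetical MUAP waveform h(t) with a sparse firing-impulse train e(n) and adding white Gaussian noise x(n); the pulse shape, firing rate and noise level are illustrative assumptions only.

import numpy as np

def simulate_emg(n_samples=2000, fs=1000, firing_rate=20.0, noise_std=0.05, seed=0):
    """Toy simulation of Eq. (1): y(n) = sum_t h(t) e(n - t) + x(n)."""
    rng = np.random.default_rng(seed)

    # e(n): sparse firing impulse train (motor unit firings), assumed Poisson-like
    e = (rng.random(n_samples) < firing_rate / fs).astype(float)

    # h(t): hypothetical biphasic MUAP waveform (purely illustrative shape)
    t = np.arange(0, 0.02, 1 / fs)                       # 20 ms support
    h = np.sin(2 * np.pi * 100 * t) * np.exp(-t / 0.005)

    # x(n): white Gaussian noise
    x = rng.normal(0.0, noise_std, n_samples)

    # y(n): convolution of the firing train with the MUAP plus noise
    y = np.convolve(e, h)[:n_samples] + x
    return y

if __name__ == "__main__":
    y = simulate_emg()
    print(y.shape, float(y.std()))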
This research deals with attribute selection and the classification of the EMG signal into two categories: pain or normal muscle activity.
2. Related Work
Research has been done by a number of authors to enhance the attribute selection and classification rate of the EMG signal using different techniques. A survey has been conducted to learn about the traditional techniques used by previous authors and how the classification rate can be improved. Robot learning was applied by Stival to EMG signals in a subject-independent framework [5]. EMG classification can be utilized in distinct medical domains such as neuromuscular disorder diagnosis [6, 7], knee pathology detection [8], motion recognition [9], muscle fatigue analysis [10], and prosthesis control [11].
Mishra et al. classified EMG signals collected from the biceps muscles of different categories of patients: normal, myopathic and neuropathic. The time- and frequency-domain parameters of the MUAPs were analysed and optimized using a soft-computing approach. The classification was performed with RBFN, K-NN and SVM techniques, and it was observed that SVM performed best among them with an accuracy of 95.25% [12]. de Dieu Uwisengeyimana and Ibrikci [13] diagnosed knee-related problems using KNN and ANN classifiers. The data were collected from the four muscles surrounding the knee, and about 500 samples were prepared. From the experiment it was concluded that knee pathology can be better analysed using ANN, with a detection accuracy of 91.3%. Lin et al. [14] presented an attribute selection approach for EMG signals that helps to classify them. Initially, the data are pre-processed using normalization in order to minimize the inter- and intra-participant variability that arises while collecting the signal through sensors. Three types of normalization were applied: (i) channel-wise, (ii) motion-wise and (iii) participant-wise. Down-sampling was also applied to remove unwanted or overlapping data points. Using a base classifier such as ANN provided an accuracy of $83 \pm 6$% [14].
Pancholi and Joshi [15] studied EMG signals of the upper limb. The data were collected from five different arm positions during the exercise period. The palpation method was used for the selection of muscles; with this method, the nerves were selected based on the blood flow. The signal was acquired from 29 subjects, and the data were divided into time-domain and frequency-domain features. For classification, different classifiers such as Random Forest (RF), k-nearest neighbours (k-NN), linear discriminant analysis (LDA), Support Vector Machine (SVM) and Random Tree (RT) were used, and the detection accuracy ranged from 57.69% to 99.92% [15].
Morbidoni et al. [16] worked on the classification of the stance and swing gait phases from surface EMG signals. The designed system was tested using Multi-Layer Perceptron techniques, and the examined accuracy lay between 92.6% and 97.2%. The study suggested that ANN can be an appropriate tool for the automatic classification of EMG signals [16].
3. Proposed Work
The entire workflow is shown in Figure 2 and consists of four main parts: attribute row selection using the Genetic Algorithm (GA), similarity measurement using cosine similarity, application of the critic method to normalize the features, and classification using the Support Vector Machine (SVM).
3.1 Dataset
The dataset was collected from https://www.kaggle.com/nccvector/electromyography-emg-dataset. The considered dataset contains pain and normal muscular data, with 1000 rows for each category and seven different un-named attributes. The observed electric potential is presented in Table 1.
Table 1. Dataset
Patient Number
where t is the time interval, ranging from t1 to t8, measured in milliseconds.
Figure 2. Flow of proposed work
3.2 Attribute selection
Attribute selection of the EMG signal is performed on the data recorded for each movement per session using the electrodes placed on the human body. The amplitude variation in the EMG is high whenever a movement occurs; otherwise the signals are at rest and the amplitude variation is reduced. Each voltage-time amplitude value of the EMG signal needs attribute selection to find the relevant features of painful or normal data and to achieve better classification accuracy. The benchmark for selection and rejection is relative to the value being used; that is, selection and rejection depend on the other relative values available in that class for other patients.
In this research, attribute selection of the uploaded EMG signal is performed using nature inspired Genetic Algorithm. GA selects rows among the available dataset as per the designed fitness function represented by Eq. (2).
$\text{Fitness function}=\begin{cases}1 & (1-e)\times F_{s}>F_{t}\\ 0 & \text{otherwise}\end{cases}$ (2)
where, e = generated mutation error,
Fs = current attribute (row) value,
Ft = threshold derived from all attribute row values.
Each row is tested using Eq. (2). If the row satisfies the fitness function, it is categorized as either pain or normal; otherwise, the process is repeated for the next row. Through this process, unwanted signal rows are removed and the data are obtained in a reduced, desired form. The GA process is shown in Figure 3.
Figure 3. GA process
Step 1: Initialization of population (rows): Initialize the population strings (raw EMG data), known as chromosomes. The problem to be resolved, namely selecting the EMG signal recorded during motion and rejecting the EMG signal recorded at the rest position, is addressed based on the designed fitness function.
Step 2: Selection: In this step, signals whose values are higher than the fitness value are eliminated, and those less than or equal to the fitness value are selected. The rows with the lowest values are known as parents and contribute to the generation of new members, named children.
Step 3: Mutation: helps the search by refining the row selection based on a mutation threshold.
Step 4: Termination: The attribute selection process terminates when the desired rows have been selected and categorized as pain or normal signals [17, 18].
Feature selection using GA
Required Input:
EMG Feature Data$\leftarrow$Extracted feature from used EMG Dataset for Pain & Normal Categories
Fitness Function$\leftarrow$Designed fitness function for feature selection
Fitness function = 1 if (1 - e) × Fs > Ft
                 = 0 otherwise
Where, e = It is the generated mutation error
Fs = Current Attributes
Ft = All Attributes Row Values
Obtained Output:
OEMG-FD$\leftarrow$Optimized EMG Feature Data
Start GA
Load Dataset, EMGFeature Data (EMG-FD) = Load feature attributes
To optimized the EMG-FD, GA is used
Set GA Parameters: Population Size (P) – Based on the number of properties
CO – Crossover Operators
MO – Mutation Operators
Calculate the length of EMG-FD as Len
Set optimized EMG feature data, OEMG-FD = []
For I = 1 $\rightarrow$ Len
    Fs = EMG-FD(I)  // currently selected EMG attribute row
    Ft = Threshold over attributes = $\sum_{i=1}^{R}$ EMG-FD(i)  // from all attribute rows
    F(f) = FitFun(e, Fs, Ft)
    Nvar = number of variables
    BestProp = OEMG-FD = GA(F(f), T, Nvar, GA parameters)
End-For
Return: OEMG-FD as an Optimized EMG Feature Data
End – Function
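As a minimal illustration of this row-selection step, the sketch below applies the fitness test of Eq. (2) directly to every row. Since the paper does not fix these details, it assumes that Fs is the sum of the current row's attribute values, Ft the mean of those sums over all rows, and e a small constant mutation error; a full GA implementation would additionally apply crossover and mutation over candidate selections.

import numpy as np

def select_rows_by_fitness(data: np.ndarray, e: float = 0.05) -> np.ndarray:
    """Keep rows whose fitness from Eq. (2) equals 1.

    Assumptions (hypothetical, not stated explicitly in the paper): Fs is the
    sum of the current row's attribute values, Ft the mean of those sums over
    all rows, and e a small mutation-error constant.
    """
    row_sums = data.sum(axis=1)           # Fs for every row
    ft = row_sums.mean()                  # Ft: threshold from all attribute rows
    fitness = ((1.0 - e) * row_sums > ft).astype(int)
    return data[fitness == 1]             # rows kept for further processing

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    emg_rows = rng.random((1000, 7))      # 1000 rows x 7 attributes, as in the dataset
    selected = select_rows_by_fitness(emg_rows)
    print(selected.shape)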
3.3 Cosine similarity
Cosine similarity is applied to the selected rows and returns a single similarity index for each row. Suppose, for example, that 700 rows have been selected by the GA from the available 1000 rows for both the pain and normal EMG signals. The similarity is then measured between each row and the remaining 699 rows, and a single average value is obtained using Eq. (3).
Avg $\operatorname{sim}=\frac{\sum_{i=1}^{n} \cos _{\operatorname{sim}}}{n}$ (3)
where, n is total number of similarities
After obtaining the average value, 20% of it is added to and subtracted from the average similarity value, giving the bounds denoted by $B_{1}$ and $B_{2}$ [19]. Mathematically, this can be represented by Eq. (4):
$B_{1}=\mathrm{Avg\,Sim}+\frac{\mathrm{Avg\,Sim\,No.}}{100}$
$B_{2}=\mathrm{Avg\,Sim}-\frac{\mathrm{Avg\,Sim\,No.}}{100}$ (4)
If a row's similarity value lies between $B_{2}$ and $B_{1}$, it is used for training and classification with the SVM approach; otherwise the data is dropped [20].
Cosine similarity
OEMG-FD $\leftarrow$ Optimized EMG Feature Data
SimCos$\leftarrow$ Cosine similarity between OEMG-FD
Avg Sim $\leftarrow$ Average Similarity
Create an empty array to store similarity, SimCos = []
Sim-count = 0
For I = 1 $\rightarrow$ Length (OEMG-FD)
Current_Data = OEMG-FD (I)
For J = I+1 $\rightarrow$ Length (OEMG-FD)
L = |Cos (Current_Data) - Cos (Data (J))|
SimCos [sim_count, 1] = Current_Data
SimCos [sim_count, 2]= Data(J)
SimCos [sim_count, 3]=L
Increment in array, Sim-count = Sim-count+1
End-For
End – For
Calculate the average similarity
Avg sim $=\frac{\sum_{i=1}^{n} \mathrm{SimCos}_{i}}{n}$  // where n is the total number of similarities
Return: SimCos as an output in terms of cosine similarity between OEMG-FD and Avg Sim as an Average Similarity
End-Function
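A hedged Python sketch of this similarity step follows: it computes each row's average cosine similarity to the other rows, forms the B1/B2 band around the overall average, and keeps only the rows whose similarity falls inside the band. The band width is exposed as a parameter because the text mentions 20% while Eq. (4) divides by 100, so the exact factor is an interpretation, and the function name is hypothetical.

import numpy as np

def cosine_band_filter(data: np.ndarray, band_pct: float = 20.0):
    """Average pairwise cosine similarity per row plus B1/B2 band filtering.

    Interpretation (assumption): each row gets the mean cosine similarity to
    all other rows; rows whose value lies between B2 and B1, a +/- band_pct
    band around the overall average similarity, are kept for SVM training.
    """
    # Row-normalize, then the Gram matrix gives pairwise cosine similarities
    norms = np.linalg.norm(data, axis=1, keepdims=True)
    unit = data / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T
    np.fill_diagonal(sim, 0.0)

    n = data.shape[0]
    row_avg = sim.sum(axis=1) / (n - 1)        # average similarity of each row
    avg_sim = row_avg.mean()                   # Eq. (3)

    b1 = avg_sim + avg_sim * band_pct / 100    # upper bound, cf. Eq. (4)
    b2 = avg_sim - avg_sim * band_pct / 100    # lower bound, cf. Eq. (4)
    keep = (row_avg > b2) & (row_avg < b1)
    return data[keep], row_avg[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    rows = rng.random((700, 7))                # e.g. 700 GA-selected rows
    kept, sims = cosine_band_filter(rows)
    print(kept.shape, round(float(sims.mean()), 3))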
3.4 Critical method
The critic method is used to normalize the feature vector obtained from the application of the Genetic Algorithm [21]. The critic method acts on three elements, as follows:
The current attribute value
The maximum value of the section (column)
The minimum value of the section (column)
Following pseudo code is applied in order to implement the critic method.
Application of Critic
For each selected row
    For each column (col) in the selected row
        Cr_FeatureValue = Selected_row,col.Feature
        Threshold_Value = $\frac{\sum_{i=1}^{n}\text{Feature}_{\text{Value},i}}{n}$
        If Cr_FeatureValue $\geq$ Threshold_Value
            Do nothing
        Else
            Find max_v = maximum value of this column over all rows
            Find min_v = minimum value of this column over all rows
            $Rv=\frac{Cr_{\text{FeatureValue}}-\min_{v}}{\max_{v}+\min_{v}}$
            Replace Cr_FeatureValue by Rv
The critic method takes the current attribute value and compares it with the average value of the other members, which serves as the threshold in this case. If the attribute value is less than the threshold value, the method finds the maximum and minimum values of this attribute across all available rows of the category. The minimum value is subtracted from the current attribute value, and the result is divided by the sum of the maximum and minimum values. The outcome replaces the current attribute value.
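A short sketch of this normalization, assuming the per-column threshold is the column mean (the pseudocode's "average value"), is given below; entries below the threshold are replaced by (value - min) / (max + min) exactly as in the formula above, and entries at or above it are left untouched.

import numpy as np

def critic_normalize(data: np.ndarray) -> np.ndarray:
    """Critic-style normalization following the pseudocode above.

    Assumption: the per-column threshold is the column mean. Values below the
    threshold are replaced by (value - min) / (max + min), as in the paper's
    formula; values at or above the threshold are kept as-is.
    """
    out = data.astype(float).copy()
    col_mean = out.mean(axis=0)
    col_min = out.min(axis=0)
    col_max = out.max(axis=0)

    below = out < col_mean                       # entries that get rescaled
    denom = col_max + col_min
    denom = np.where(denom == 0, 1.0, denom)     # guard against division by zero
    scaled = (out - col_min) / denom
    out[below] = scaled[below]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    features = rng.random((10, 7))
    print(critic_normalize(features).round(3))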
3.5 Support Vector Machine (SVM)
SVM is a machine learning approach used here to classify the EMG signal as pain or normal. With this approach, a hyperplane is constructed to distinguish two different classes of data, normal and pain in this case. To train the SVM, let the input training data be: $\left(a_{1}, b_{1}\right),\left(a_{2}, b_{2}\right), \ldots,\left(a_{m}, b_{m}\right) \in P^{N} \times\{-1,+1\}$.
$a_{i} \rightarrow$ input value,
$b_{i} \rightarrow$ Assigned class to which input belongs {-1,+1}.
If the input data are not linearly separable, a transform $\varphi: P^{N} \rightarrow P^{M}$ is used, mapping to a new feature space represented by $P^{M}$.
Using this function, the separating hyperplane can be written as in Eq. (5):
$\omega \times \varphi(a)+b=0$
$\omega \in P^{M}$ and $b \in P$ (5)
The training can be considered best when the hyperplane is optimal and the error is minimal. If the signals are too close to or overlap with each other, a kernel function is used to separate the data. The kernel function may be a Radial Basis Function (RBF), polynomial, linear or Gaussian kernel, etc. [1, 22].
The training and testing using SVM is shown in Figure 4.
Classification using SVM
OEMG-FD $\leftarrow$Training Data as an Optimized EMG Feature Data
C $\leftarrow$Target/Category in terms of Pain and Normal
RBF $\leftarrow$ Radial Basis Function as a Kernel Function
SimCos $\leftarrow$ Cosine similarity between OEMG-FD
SVM-Structure $\leftarrow$ Trained SVM Structure
Calculate $B_{1}=\mathrm{Avg\,Sim}+\frac{\mathrm{Avg\,Sim\,No.}}{100}$ and $B_{2}=\mathrm{Avg\,Sim}-\frac{\mathrm{Avg\,Sim\,No.}}{100}$
If Sim Value < B1 and Sim Value > B2
Initialize the SVM with training data OEMG-FD with RBF as Kernel function
If OEMG-FD (I) == Pain
Cat (1) = OEMG-FD (I)
End – If
Drop OEMG-FD
SVM-Structure = SVMTRAIN (OEMG-FD, Cat, Kernel function)
Return: SVM-Structure as a Trained SVM structure
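The sketch below reproduces the spirit of this training step with scikit-learn's SVC and an RBF kernel; the library, the synthetic two-class data and the hyperparameters are stand-ins for the paper's MATLAB SVMTRAIN setup, not the authors' exact configuration.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_emg_svm(features: np.ndarray, labels: np.ndarray):
    """Train an RBF-kernel SVM to separate 'pain' (1) from 'normal' (0) rows.

    This mirrors the SVMTRAIN step in spirit only; scikit-learn and the chosen
    hyperparameters are assumptions, not the authors' exact setup.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.3, random_state=0, stratify=labels)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    return clf, acc

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    # Synthetic stand-in: two slightly shifted clusters for normal vs. pain rows
    normal = rng.normal(0.0, 1.0, (500, 7))
    pain = rng.normal(0.8, 1.0, (500, 7))
    X = np.vstack([normal, pain])
    y = np.array([0] * 500 + [1] * 500)
    _, acc = train_emg_svm(X, y)
    print(f"held-out accuracy: {acc:.3f}")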
Figure 4. Training and testing of EMG Signal using SVM
4. Result and Discussions
The designed attribute selection and classification EMG system was evaluated in the MATLAB simulator. The aim of this research is to distinguish pain and normal muscle signals using GA as the attribute selection approach. To enhance the training of the SVM, cosine similarity is applied, which further reduces the irregular and noisy signals present in the available segmented EMG data. Because the signal is filtered by two techniques, GA and cosine similarity, the achievable detection accuracy also increases. The values of precision, recall, F-measure and classification accuracy are computed using Eq. (6), Eq. (7), Eq. (8) and Eq. (9), respectively.
Precision$=\frac{T_{p}}{T_{p}+F_{p}}$ (6)
Recall$=\frac{T_{p}}{T_{p}+F_{n}}$ (7)
$F\text{-Measure}=\frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision}+\text{Recall}}$ (8)
Accuracy$=\frac{T_{p}+T_{N}}{T_{p}+F_{p}+F_{n}+T_{N}}$ (9)
Here, $T_{p} \rightarrow$ the EMG signals that actually belong to the pain or normal category and are also predicted as such.
$F_{n} \rightarrow$ the EMG signals that are predicted as real but are in fact noise or unwanted signals recorded at the rest position.
$F_{p} \rightarrow$ the EMG signals that are actually real but are predicted as undesired or noisy signals.
$T_{n} \rightarrow$ the number of appropriately predicted real signals.
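Eq. (6)-(9) can be computed directly from the four confusion counts, as in the short sketch below; the counts used in the example are purely illustrative, not values reported in the paper.

def classification_metrics(tp: int, fp: int, fn: int, tn: int):
    """Precision, recall, F-measure and accuracy as in Eq. (6)-(9)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f_measure, accuracy

if __name__ == "__main__":
    # Illustrative counts only, not the paper's reported results
    p, r, f, a = classification_metrics(tp=85, fp=8, fn=10, tn=97)
    print(f"precision={p:.3f} recall={r:.3f} F={f:.3f} accuracy={a:.3f}")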
The precision values analysed for GA with SVM and for GA + Cosine Similarity + SVM on the normal and pain muscle signals are shown in Figure 5, with the values listed in Table 2.
From Figure 5 it is clearly seen that the maximum precision values are obtained by the proposed (GA + Cosine Similarity + SVM) approach for the normal EMG signal, followed by GA + Cosine Similarity + SVM for the pain EMG signal. The average precision values examined using GA with SVM and GA + Cosine Similarity + SVM for the normal EMG signal are 0.8456 and 0.9804, respectively. Similarly, the precision values analysed for the pain EMG signal using GA with SVM and GA + Cosine Similarity + SVM are 0.815 and 0.943, respectively. We observed that the precision rate is improved by using the cosine similarity measurement technique in combination with GA and SVM. An improved precision rate indicates that the EMG attributes are selected better during the classification process and that the rate of true features is high, owing to better training of the system.
The recall values for the pain and normal muscle signals analysed with the automatic classification system are summarized in Table 3 and compared graphically in Figure 6. The average recall values measured for the normal EMG signal using GA with SVM and GA + Cosine Similarity + SVM are 0.7456 and 0.7501, respectively. Similarly, the average recall values examined for the pain muscle signal using GA with SVM and GA + Cosine Similarity + SVM are 0.7451 and 0.838, respectively. As with precision, the recall rate is also improved by using the cosine similarity measurement technique together with GA and SVM. The recall rate reflects the selection of appropriate features according to the training of the system, and the improved recall rate means that the proposed system achieved good performance.
Table 2. Computed precision
Number of Iterations
GA with SVM
GA+ Cosine Similarity +SVM
Figure 5. Precision analysis
Table 3. Computed recall
Figure 6. Recall analysis
The F-score data, shown in Figure 7 with the values listed in Table 4, are a collective representation of precision and recall. The average F-score values analysed for the two EMG categories (pain and normal) using GA with SVM and GA with Cosine Similarity and SVM are represented by the orange, yellow, blue and grey bars, respectively. The F-score is essentially the harmonic mean of the precision and recall rates, as in Eq. (8), and for a better system it should be high. From these observations we conclude that both the precision and the recall rates are better when the cosine similarity measurement technique is included.
The classification accuracies examined by the proposed work for the pain and normal EMG signals are listed in Table 5 and illustrated graphically in Figure 8. The figure clearly shows that the average accuracy for the painful EMG signal is higher than 70%. The pain and normal EMG signals are classified with average accuracies of 92.4% and 91.3%, respectively.
Table 4. Computed F-score
Figure 7. F-score analysis
Table 5. Computed classification accuracy
Figure 8. Classification accuracy analysis
Figure 9. Accuracy comparison with the existing work
Table 6. Comparison of computed accuracy with the existing work
Proposed Work
Existing Work [23]
To show the effectiveness of the proposed work, the comparison of the examined classification accuracy is given in Table 6 and plotted in Figure 9. The graph shows that, whether the signal is a normal or a pain EMG signal, the proposed algorithm performs well compared with the existing ANN classifier. The percentage increase in the classification rate of the proposed work over that of Jiang et al. 2019 [23] is 11.15% for the normal EMG signal, whereas for the painful signal the classification accuracy is increased by 5.52%. This enhancement is obtained because of the proper selection of appropriate EMG data, which in turn improves the training and hence the classification during the testing process.
An automatic attribute selection and classification system for EMG signals has been designed using GA and SVM as the attribute selection and classification techniques, respectively. The results show that the proposed model works well, with a higher classification rate for both pain and normal EMG signals. An appropriate selection of the EMG signal has been performed using GA with cosine similarity, which also reduces the available data, minimizing the training error and hence improving the classification rate. This research can provide a better understanding of the EMG signal attribute selection and classification procedure. The classification accuracies observed for the normal and painful EMG signals are 91.3% and 92.4%, respectively, and improvements of about 11.15% and 5.52% over the existing work have been examined for the normal and painful EMG signals. In future, we plan to use an artificial neural network as the classification approach, comparing the results of SVM and ANN in order to assess the efficiency of the classifiers in terms of classification accuracy.
[1] Kaur, G., Arora, A.S., Jain, V. (2009). Multi-class support vector machine classifier in EMG diagnosis. WSEAS Transactions on Signal Processing, 5(12): 379-389.
[2] Chan, F.H., Yang, Y.S., Lam, F.K., Zhang, Y.T., Parker, P.A. (2000). Fuzzy EMG classification for prosthesis control. IEEE Transactions on Rehabilitation Engineering, 8(3): 305-311. https://doi.org/10.1109/86.867872
[3] Campos, D.P., Abatti, P.J., Bertotti, F.L., Gomes, O.A., Baioco, G.L., Hill, J.A.G., da Silveira, A.L.F. (2019). Ingestive pattern recognition on cattle using EMG segmentation and feature extraction. In: Costa-Felix, R., Machado, J., Alvarenga, A. (eds) XXVI Brazilian Congress on Biomedical Engineering. IFMBE Proceedings, pp. 281-288. https://doi.org/10.1007/978-981-13-2517-5_43
[4] Phinyomark, A., Campbell, E., Scheme, E. (2020). Surface electromyography (EMG) signal processing, classification, and practical considerations. In: Naik G. (eds) Biomedical Signal Processing. Series in BioEngineering. Springer, Singapore. https://doi.org/10.1007/978-981-13-9097-5_1
[5] Stival, F. (2018). Subject-independent frameworks for robotic devices: Applying robot learning to EMG signals. Padova Digital University Archive.
[6] Bhuvaneswari, P., Kumar, J.S. (2016). Electromyography based detection of neuropathy disorder using reduced cepstral feature. Indian Journal of Science and Technology, 9(8): 1-4. https://doi.org/10.17485/ijst/2016/v9i8/87899
[7] Benazzouz, A., Guilal, R., Amirouche, F., Slimane, Z.E.H. (2019). EMG feature selection for diagnosis of neuromuscular disorders. 2019 International Conference on Networking and Advanced Systems (ICNAS), Annaba, Algeria, pp. 1-5. https://doi.org/10.1109/ICNAS.2019.8807862
[8] Naik, G.R., Selvan, S.E., Arjunan, S.P., Acharyya, A., Kumar, D.K., Ramanujam, A., Nguyen, H.T. (2018). An ICA-EBM-based sEMG classifier for recognizing lower limb movements in individuals with and without knee pathology. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 26(3): 675-686. https://doi.org/10.1109/TNSRE.2018.2796070
[9] Huang, Y., Liu, H. (2016). Performances of surface EMG and ultrasound signals in recognizing finger motion. 2016 9th International Conference on Human System Interactions (HSI), Portsmouth, pp. 117-122. https://doi.org/10.1109/HSI.2016.7529618
[10] Zoppirolli, C., Pellegrini, B., Bortolan, L., Schena, F. (2016). Effects of short-term fatigue on biomechanical and physiological aspects of double poling in high-level cross-country skiers. Human Movement Science, 47: 88-97. https://doi.org/10.1016/j.humov.2016.02.003
[11] Pancholi, S., Joshi, A.M. (2018). Portable EMG data acquisition module for upper limb prosthesis application. IEEE Sensors Journal, 18(8): 3436-3443. https://doi.org/10.1109/JSEN.2018.2809458
[12] Mishra, B., Wadhwani, A.K., Singh, S. (2019). EMG signal classification for neuromuscular disorder using soft-computing techniques. IJIRMPS-International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences, 7(1).
[13] de Dieu Uwisengeyimana, J., Ibrikci, T. (2017). Diagnosing knee osteoarthritis using artificial neural networks and deep learning. Biomedical Statistics and Informatics, 2(3): 95-102. https://doi.org/10.11648/j.bsi.20170203.11
[14] Lin, J.F.S., Samadani, A.A., Kulić, D. (2016). Segmentation by data point classification applied to forearm surface EMG. In: Leon-Garcia A. et al. (eds) Smart City 360°. SmartCity 360 2016, SmartCity 360 2015. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 166. Springer, Cham. https://doi.org/10.1007/978-3-319-33681-7_13
[16] Morbidoni, C., Principi, L., Mascia, G., Strazza, A., Verdini, F., Cucchiarelli, A., Di Nardo, F. (2019). Gait phase classification from surface EMG signals using Neural Networks. In: Henriques J., Neves N., de Carvalho P. (eds) XV Mediterranean Conference on Medical and Biological Engineering and Computing – MEDICON 2019. MEDICON 2019. IFMBE Proceedings, vol 76. Springer, Cham. https://doi.org/10.1007/978-3-030-31635-8_9
[17] Ambikapathy, B., Kirshnamurthy, K., Venkatesan, R. (2018). Assessment of electromyograms using genetic algorithm and artificial neural networks. Evolutionary Intelligence, 1-11. https://doi.org/10.1007/s12065-018-0174-0
[18] Karimi, M., Pourghassem, H., Shahgholian, G. (2011). A novel prosthetic hand control approach based on genetic algorithm and wavelet transform features. 2011 IEEE 7th International Colloquium on Signal Processing and its Applications, Penang, pp. 287-292. https://doi.org/10.1109/CSPA.2011.5759889
[19] Hunter, P. (2016). Margin of error and confidence levels made simple.
[20] Sidorov, G., Gelbukh, A., Gómez-Adorno, H., Pinto, D. (2014). Soft similarity and soft cosine measure: Similarity of features in vector space model. Computación y Sistemas, 18(3): 491-504.
[21] Alinezhad, A., Khalili, J. (2019). CRITIC method. In: New Methods and Applications in Multiple Attribute Decision Making (MADM). International Series in Operations Research & Management Science, vol 277. Springer, Cham. https://doi.org/10.1007/978-3-030-15009-9_26
[22] Alkan, A., Günay, M. (2012). Identification of EMG signals using discriminant analysis and SVM classifier. Expert systems with Applications, 39(1): 44-47. https://doi.org/10.1016/j.eswa.2011.06.043
[23] Jiang, M., Mieronkoski, R., Syrjälä, E., Anzanpour, A., Terävä, V., Rahmani, A.M., Salanterä, S., Aantaa, R., Hagelberg, N., Liljeberg, P. (2019). Acute pain intensity monitoring with the classification of multiple physiological parameters. Journal of Clinical Monitoring and Computing, 33(3): 493-507. https://doi.org/10.1007/s10877-018-0174-8
Omid satellite. Iran is the 9th country to put a domestically built satellite into orbit using its own launcher and the sixth to send animals into space.
Iran has made considerable advances in science and technology through education and training, despite international sanctions in almost all aspects of research during the past 30 years. Iran's university population swelled from 100,000 in 1979 to 2 million in 2006.[citation needed] In recent years, the growth in Iran's scientific output is reported to be the fastest in the world.[1][2][3] Iran has made great strides in different sectors, including aerospace, nuclear science, medical development, as well as stem cell and cloning research.[4]
Throughout history, Iran was always a cradle of science, contributing to medicine, mathematics, astronomy and philosophy. Trying to revive the golden time of Iranian science, Iran's scientists now are cautiously reaching out to the world. Many individual Iranian scientists, along with the Iranian Academy of Medical Sciences and Academy of Sciences of Iran, are involved in this revival.[citation needed]
1 Science in ancient and Medieval Iran (Persia)
1.1 Ancient technology in Iran
1.3 Medicine
1.4 Astronomy
1.6 Chemistry
1.7 Physics
2 Science policy
2.1 Human resources
2.1.1 Student enrollment trends
2.1.2 Trends in researchers
2.2 Research expenditure
2.2.1 Funding the transition to a knowledge economy
3 Technology parks
5 Private sector
6 Science in modern Iran
6.1 Medical sciences
6.2 Biotechnology
6.3 Physics and materials
6.4 Computer science, electronics and robotics
6.5 Chemistry and nanotechnology
6.6 Aviation and space
6.8 Energy
6.9 Armaments
7 Scientific collaboration
8 Contribution of Iranians and people of Iranian origin to modern science
9 International rankings
10 Iranian journals listed in the Institute for Scientific Information (ISI)
11.1 General
11.2 Prominent organizations
Science in ancient and Medieval Iran (Persia)
See also: Islamic science, Inventions in medieval Islam, Timeline of Islamic science and technology, and Academy of Gundishapur
Science in Persia evolved in two main phases separated by the arrival and widespread adoption of Islam in the region.
References to scientific subjects such as natural science and mathematics occur in books written in the Pahlavi languages.
Ancient technology in Iran
The Qanat (a water management system used for irrigation) originated in pre-Achaemenid Iran. The oldest and largest known qanat is in the Iranian city of Gonabad, which, after 2,700 years, still provides drinking and agricultural water to nearly 40,000 people.[5]
Iranian philosophers and inventors may have created the first batteries (sometimes known as the Baghdad Battery) in the Parthian or Sassanid eras. Some have suggested that the batteries may have been used medicinally. Other scientists believe the batteries were used for electroplating—transferring a thin layer of metal to another metal surface—a technique still used today and the focus of a common classroom experiment.[6]
Windwheels were developed by the Babylonians ca. 1700 BC to pump water for irrigation. In the 7th century, Iranian engineers in Greater Iran developed a more advanced wind-power machine, the windmill, building upon the basic model developed by the Babylonians.[7][8]
Further information: Mathematics in medieval Islam
Manuscript of Abdolrahman Sufi's Depiction of Celestial Constellations
{\displaystyle {\begin{matrix}&&&&&1\\&&&&1&&1\\&&&1&&2&&1\\&&1&&3&&3&&1\\&1&&4&&6&&4&&1\end{matrix}}}
The first five rows of Khayam-Pascal's triangle
The 9th century mathematician Muhammad ibn Musa al-Khwarizmi created the logarithm table, developed algebra and expanded upon Persian and Indian arithmetic systems. His writings were translated into Latin by Gerard of Cremona under the title: De jebra et almucabola. Robert of Chester also translated it under the title Liber algebras et almucabala. The works of Kharazmi "exercised a profound influence on the development of mathematical thought in the medieval West".[9]
The Banū Mūsā brothers ("Sons of Moses"), namely Abū Jaʿfar, Muḥammad ibn Mūsā ibn Shākir (before 803 – February 873), Abū al‐Qāsim, Aḥmad ibn Mūsā ibn Shākir (d. 9th century) and Al-Ḥasan ibn Mūsā ibn Shākir (d. 9th century), were three 9th-century Persian[10][11] scholars who lived and worked in Baghdad. They are known for their Book of Ingenious Devices on automata and mechanical devices and their Book on the Measurement of Plane and Spherical Figures.[12]
Other Iranian scientists included Abu Abbas Fazl Hatam, Farahani, Omar Ibn Farakhan, Abu Zeid Ahmad Ibn Soheil Balkhi (9th century AD), Abul Vafa Bouzjani, Abu Jaafar Khan, Bijan Ibn Rostam Kouhi, Ahmad Ibn Abdul Jalil Qomi, Bu Nasr Araghi, Abu Reyhan Birooni, the noted Iranian poet Hakim Omar Khayyam Neishaburi, Qatan Marvazi, Massoudi Ghaznavi (13th century AD), Khajeh Nassireddin Tusi, and Ghiasseddin Jamshidi Kashani.
Main article: Ancient Iranian Medicine
See also: Academy of Guneshapur, Bimarestan, Medicine in medieval Islam, and Islamic medicine
The practice and study of medicine in Iran has a long and prolific history. Situated at the crossroads of the East and West, Persia was often involved in developments in ancient Greek and Indian medicine; pre- and post-Islamic Iran have been involved in medicine as well.
For example, the first teaching hospital where medical students methodically practiced on patients under the supervision of physicians was the Academy of Gundishapur in the Persian Empire. Some experts go so far as to claim that: "to a very large extent, the credit for the whole hospital system must be given to Persia".[13]
The idea of xenotransplantation dates to the days of Achaemenidae (the Achaemenian dynasty), as evidenced by engravings of many mythologic chimeras still present in Persepolis.[14]
From: Mansur ibn Ilyas: Tashrīḥ-e badan-e ensān. تشريح بدن انسان (dissection of human body). Manuscript, ca. 1450, U.S. National Library of Medicine.
A 500-year-old Latin translation of the Canon of Medicine by Avicenna.
Several documents still exist from which the definitions and treatments of the headache in medieval Persia can be ascertained. These documents give detailed and precise clinical information on the different types of headaches. The medieval physicians listed various signs and symptoms, apparent causes, and hygienic and dietary rules for prevention of headaches. The medieval writings are both accurate and vivid, and they provide long lists of substances used in the treatment of headaches. Many of the approaches of physicians in medieval Persia are accepted today; however, still more of them could be of use to modern medicine.[15]
In the 10th century work of Shahnameh, Ferdowsi describes a Caesarean section performed on Rudabeh, during which a special wine agent was prepared by a Zoroastrian priest and used to produce unconsciousness for the operation.[16] Although largely mythical in content, the passage illustrates working knowledge of anesthesia in ancient Persia.
Later in the 10th century, Abu Bakr Muhammad Bin Zakaria Razi is considered the founder of practical physics and the inventor of the special or net weight of matter. His student, Abu Bakr Joveini, wrote the first comprehensive medical book in the Persian language.
After the Islamic conquest of Iran, medicine continued to flourish with the rise of notables such as Rhazes and Haly Abbas, albeit Baghdad was the new cosmopolitan inheritor of Sassanid Jundishapur's medical academy.
An idea of the number of medical works composed in Persian alone may be gathered from Adolf Fonahn's Zur Quellenkunde der Persischen Medizin, published in Leipzig in 1910. The author enumerates over 400 works in the Persian language on medicine, excluding authors such as Avicenna, who wrote in Arabic. Author-historians Meyerhof, Casey Wood, and Hirschberg also have recorded the names of at least 80 oculists who contributed treatises on subjects related to ophthalmology from the beginning of 800 AD to the full flowering of Muslim medical literature in 1300 AD.
Aside from the aforementioned, two other medical works attracted great attention in medieval Europe, namely Abu Mansur Muwaffaq's Materia Medica, written around 950 AD, and the illustrated Anatomy of Mansur ibn Muhammad, written in 1396 AD.
Modern academic medicine began in Iran when Joseph Cochran established a medical college in Urmia in 1878. Cochran is often credited for founding Iran's "first contemporary medical college".[17] The website of Urmia University credits Cochran for "lowering the infant mortality rate in the region"[18] and for founding one of Iran's first modern hospitals (Westminster Hospital) in Urmia.
Iran started contributing to modern medical research late in the 20th century. Most publications were from pharmacology and pharmacy labs located at a few top universities, most notably Tehran University of Medical Sciences. Ahmad Reza Dehpour and Abbas Shafiee were among the most prolific scientists in that era. Research programs in immunology, parasitology, pathology, medical genetics, and public health were also established in the late 20th century. In the 21st century, there has been a large surge in the number of publications in medical journals by Iranian scientists in nearly all areas of basic and clinical medicine. Interdisciplinary research was introduced during the 2000s, and dual-degree programs including Medicine/Science, Medicine/Engineering and Medicine/Public Health programs were founded. Alireza Mashaghi was one of the main figures behind the development of interdisciplinary research and education in Iran.
Further information: Astronomy in medieval Islam
An 18th century Persian astrolabe
In 1000 AD, Biruni wrote an astronomical encyclopaedia that discussed the possibility that the earth might rotate around the sun. This was before Tycho Brahe drew the first maps of the sky, using stylized animals to depict the constellations.
In the tenth century, the Persian astronomer Abd al-Rahman al-Sufi cast his eyes upwards to the awning of stars overhead and was the first to record a galaxy outside our own. Gazing at the Andromeda galaxy he called it a "little cloud" – an apt description of the slightly wispy appearance of our galactic neighbour.[19]
Further information: Early Islamic philosophy
Further information: Alchemy and chemistry in medieval Islam
Tusi believed that a body of matter is able to change but is not able to disappear entirely. He wrote "a body of matter cannot disappear completely. It only changes its form, condition, composition, color, and other properties, and turns into a different complex or elementary matter". Five hundred years later, Mikhail Lomonosov (1711–1765) and Antoine-Laurent Lavoisier (1743–1794) created the law of conservation of mass, setting down this same idea.[20] However, Tusi argued for evolution within a firmly Islamic context—he did not, like Darwin, draw materialist conclusions from his theories. Moreover, unlike Darwin, he was arguing hypothetically: he did not attempt to provide empirical data for his theories. Nonetheless his arguments, which in some ways prefigure natural selection, are still considered remarkably 'advanced' for their time.
Jaber Ibn Hayyan, the famous Iranian chemist who died in 804 at Tous in Khorasan, was the father of a number of discoveries recorded in an encyclopaedia and of many treatises covering two thousand topics, and these became the bible of European chemists of the 18th century, particularly of Lavoisier. These works had a variety of uses including tinctures and their applications in tanning and textiles; distillations of plants and flowers; the origin of perfumes; therapeutic pharmacy, and gunpowder, a powerful military instrument possessed by Islam long before the West. Jabir ibn Hayyan, is widely regarded as the founder of chemistry, inventing many of the basic processes and equipment still used by chemists today such as distillation.[19]
Further information: Physics in medieval Islam
Kamal al-Din al-Farisi's autograph manuscript in Optics, Tanqih al-Manazir, 1309 A.D., Adilnor Collection.
Biruni was the first scientist to formally propose that the speed of light is finite, before Galileo tried to experimentally prove this.
Kamal al-Din Al-Farisi (1267–1318) born in Tabriz, Iran, is known for giving the first mathematically satisfactory explanation of the rainbow, and an explication of the nature of colours that reformed the theory of Ibn al-Haytham. Al-Farisi also "proposed a model where the ray of light from the sun was refracted twice by a water droplet, one or more reflections occurring between the two refractions."[citation needed] He verified this through extensive experimentation using a transparent sphere filled with water and a camera obscura.
See also: Higher education in Iran, List of Iranian Research Centers, and List of countries by research and development spending
The Iranian Research Organization for Science and Technology and the National Research Institute for Science Policy come under the Ministry of Science, Research and Technology. They are in charge of establishing national research policies.
The government first set its sights on moving from a resource-based economy to one based on knowledge in its 20-year development plan, Vision 2025, adopted in 2005. This transition became a priority after international sanctions were progressively hardened from 2006 onwards and the oil embargo tightened its grip. In February 2014, the Supreme Leader Ayatollah Ali Khamenei introduced what he called the 'economy of resistance', an economic plan advocating innovation and a lesser dependence on imports that reasserted key provisions of Vision 2025.[21]
Vision 2025 challenged policy-makers to look beyond extractive industries to the country's human capital for wealth creation. This led to the adoption of incentive measures to raise the number of university students and academics, on the one hand, and to stimulate problem-solving and industrial research, on the other.[21]
Iran's successive five-year plans aim to realize collectively the goals of Vision 2025. For instance, in order to ensure that 50% of academic research was oriented towards socio-economic needs and problem-solving, the Fifth Five-Year Economic Development Plan (2010–2015) tied promotion to the orientation of research projects. It also made provision for research and technology centres to be set up on campus and for universities to develop linkages with industry. The Fifth Five-Year Economic Development Plan had two main thrusts relative to science policy. The first was the "islamization of universities', a notion that is open to broad interpretation. According to Article 15 of the Fifth Five-Year Economic Development Plan, university programmes in the humanities were to teach the virtues of critical thinking, theorization and multidisciplinary studies. A number of research centres were also to be developed in the humanities. The plan's second thrust was to make Iran the second-biggest player in science and technology by 2015, behind Turkey. To this end, the government pledged to raise domestic research spending to 3% of GDP by 2015.[21] Yet, R&D's share in the GNP is at 0.06% in 2015 (where it should be at least 2.5% of GDP)[22][23] and industry-driven R&D is almost non‑existent.[24]
Vision 2025 fixed a number of targets, including that of raising domestic expenditure on research and development to 4% of GDP by 2025. In 2012, spending stood at 0.33% of GDP.[21]
In 2009, the government adopted a National Master Plan for Science and Education to 2025 which reiterates the goals of Vision 2025. It lays particular stress on developing university research and fostering university–industry ties to promote the commercialization of research results.[21][25][26][27][28][29]
In early 2018, the Science and Technology Department of the Iranian President's Office released a book to review Iran's achievements in various fields of science and technology during 2017. The book, entitled "Science and Technology in Iran: A Brief Review", provides the readers with an overview of the country's 2017 achievements in 13 different fields of science and technology.[30]
See also: Economy of Iran § Labor force, and Iran's brain drain
In line with the goals of Vision 2025, policy-makers have made a concerted effort to increase the number of students and academic researchers. To this end, the government raised its commitment to higher education to 1% of GDP in 2006. After peaking at this level, higher education spending stood at 0.86% of GDP in 2015. Higher education spending has held up better than public expenditure on education overall. The latter peaked at 4.7% of GDP in 2007 before slipping to 2.9% of GDP in 2015. Vision 2025 fixed a target of raising public expenditure on education to 7% of GDP by 2025.[21]
Student enrollment trends
See also: Women in Iran
Students enrolled in Iranian universities, 2007 and 2013. Source: UNESCO Science Report: towards 2030 (2015)
The result of greater spending on higher education has been a steep rise in tertiary enrollment. Between 2007 and 2013, student rolls swelled from 2.8 million to 4.4 million in the country's public and private universities. Some 45% of students were enrolled in private universities in 2011. Women outnumbered men among students in 2007, but their share has since slipped back slightly to 48%.[21]
Enrollment has progressed in most fields. The most popular in 2013 were social sciences (1.9 million students, of which 1.1 million women) and engineering (1.5 million, of which 373 415 women). Women also made up two-thirds of medical students. One in eight bachelor's students goes on to enroll in a master's/PhD programme. This is comparable to the ratio in the Republic of Korea and Thailand (one in seven) and Japan (one in ten).[21]
The number of PhD graduates has progressed at a similar pace as university enrollment overall. Natural sciences and engineering have proved increasingly popular among both sexes, even if engineering remains a male-dominated field. In 2012, women made up one-third of PhD graduates, being drawn primarily to health (40% of PhD students), natural sciences (39%), agriculture (33%) and humanities and arts (31%). According to the UNESCO Institute for Statistics, 38% of master's and PhD students were studying science and engineering fields in 2011.[21]
PhD graduates in Iran by field of study and gender, 2007 and 2012. Source: UNESCO Science Report: towards 2030 (2015)
There has been an interesting evolution in the gender balance among PhD students. Whereas the share of female PhD graduates in health remained stable at 38–39% between 2007 and 2012, it rose in all three other broad fields. Most spectacular was the leap in female PhD graduates in agricultural sciences from 4% to 33% but there was also a marked progression in science (from 28% to 39%) and engineering (from 8% to 16% of PhD students). Although data are not readily available on the number of PhD graduates choosing to stay on as faculty, the relatively modest level of domestic research spending would suggest that academic research suffers from inadequate funding.[21]
The Fifth Five-Year Economic Development Plan (2010–2015) fixed the target of attracting 25 000 foreign students to Iran by 2015. By 2013, there were about 14 000 foreign students attending Iranian universities, most of whom came from Afghanistan, Iraq, Pakistan, Syria and Turkey. In a speech delivered at the University of Tehran in October 2014, President Rouhani recommended greater interaction with the outside world. He said that
'scientific evolution will be achieved by criticism [...] and the expression of different ideas. [...] Scientific progress is achieved, if we are related to the world. [...] We have to have a relationship with the world, not only in foreign policy but also with regard to the economy, science and technology. [...] I think it is necessary to invite foreign professors to come to Iran and our professors to go abroad and even to create an English university to be able to attract foreign students.'[21]
One in four Iranian PhD students were studying abroad in 2012 (25.7%). The top destinations were Malaysia, the US, Canada, Australia, UK, France, Sweden and Italy. In 2012, one in seven international students in Malaysia was of Iranian origin. There is a lot of scope for the development of twinning between universities for teaching and research, as well as for student exchanges.[21]
Trends in researchers
According to the UNESCO Institute for Statistics, the number of (full-time equivalent) researchers rose from 711 to 736 per million inhabitants between 2009 and 2010. This corresponds to an increase of more than 2 000 researchers, from 52 256 to 54 813. The world average is 1 083 per million inhabitants. One in four (26%) Iranian researchers is a woman, which is close to the world average (28%). In 2008, half of researchers were employed in academia (51.5%), one-third in the government sector (33.6%) and just under one in seven in the business sector (15.0%). Within the business sector, 22% of researchers were women in 2013, the same proportion as in Ireland, Israel, Italy and Norway. The number of firms declaring research activities more than doubled between 2006 and 2011, from 30 935 to 64 642. The increasingly tough sanctions regime oriented the Iranian economy towards the domestic market and, by erecting barriers to foreign imports, encouraged knowledge-based enterprises to localize production.[21]
Research expenditure
See also: Public budget in Iran
Iran's national science budget was about $900 million in 2005 and it had not been subject to any significant increase for the previous 15 years.[31] In 2001, Iran devoted 0.50% of GDP to research and development. Expenditure peaked at 0.67% of GDP in 2008 before receding to 0.33% of GDP in 2012, according to the UNESCO Institute for Statistics.[32] The world average in 2013 was 1.7% of GDP. Iran's government has devoted much of its budget to research on high technologies such as nanotechnology, biotechnology, stem cell research and information technology (2008).[33] In 2006, the Iranian government wiped out the financial debts of all universities in a bid to relieve their budget constraints.[34] According to the UNESCO science report 2010, most research in Iran is government-funded with the Iranian government providing almost 75% of all research funding.[35] Domestic expenditure on research stood at 0.7% of GDP in 2008 and 0.3% of GDP in 2012. Iranian businesses contributed about 11% of the total in 2008. The government's limited budget is being directed towards supporting small innovative businesses, business incubators and science and technology parks, the type of enterprises which employ university graduates.[21]
According to the same report, the share of private businesses in total national R&D funding is very low, at just 14%, compared with 48% in Turkey. The remaining roughly 11% of funding comes from the higher education sector and non-profit organizations.[36] A limited number of large enterprises (such as IDRO, NIOC, NIPC, DIO, Iran Aviation Industries Organization, Iranian Space Agency, Iran Electronics Industries or Iran Khodro) have their own in-house R&D capabilities.[37]
Funding the transition to a knowledge economy
See also: Foreign direct investment in Iran, Economy of Iran, and Venture capital in Iran
Trends in Iranian scientific publications, 2005–2014. Source: UNESCO Science Report: towards 2030 (2015)
Vision 2025 foresaw an investment of US$3.7 trillion by 2025 to finance the transition to a knowledge economy. It was intended for one-third of this amount to come from abroad but, so far, FDI has remained elusive. It has contributed less than 1% of GDP since 2006 and just 0.5% of GDP in 2014. Within the country's Fifth Five-Year Economic Development Plan (2010–2015), a National Development Fund has been established to finance efforts to diversify the economy. By 2013, the fund was receiving 26% of oil and gas revenue.[21]
Much of the US$3.7 trillion earmarked in Vision 2025 is to go towards supporting investment in research and development by knowledge-based firms and the commercialization of research results. A law passed in 2010 provides an appropriate mechanism, the Innovation and Prosperity Fund. According to the fund's president, Behzad Soltani, 4600 billion Iranian rials (circa US$171.4 million) had been allocated to 100 knowledge-based companies by late 2014. Public and private universities wishing to set up private firms may also apply to the fund.[21]
Some 37 industries trade shares on the Tehran Stock Exchange. These industries include the petrochemical, automotive, mining, steel, iron, copper, agriculture and telecommunications industries, 'a unique situation in the Middle East'. Most of the companies developing high technologies remain state-owned, including in the automotive and pharmaceutical industries, despite plans to privatize 80% of state-owned companies by 2014. It was estimated in 2014 that the private sector accounted for about 30% of the Iranian pharmaceutical market.[21]
Iranian publications by field of science, 2008–2014. Source: UNESCO Science Report: towards 2030 (2015)
The Industrial Development and Renovation Organization (IDRO) controls about 290 state-owned companies. IDRO has set up special purpose companies in each high-tech sector to coordinate investment and business development. These entities are the Life Science Development Company, Information Technology Development Centre, Iran InfoTech Development Company and the Emad Semiconductor Company. In 2010, IDRO set up a capital fund to finance the intermediary stages of product- and technology-based business development within these companies.[21]
See also: Technology start-ups in Iran, Industry of Iran, Foreign Direct Investment in Iran, and List of research parks
As of 2012, Iran officially had 31 science and technology parks nationwide.[38] Furthermore, as of 2014, 36 science and technology parks hosting more than 3,650 companies were operating in Iran.[39] These firms have directly employed more than 24,000 people.[39] According to the Iran Entrepreneurship Association, there are ninety-nine (99) science and technology parks in total that operate without official permits. Twenty-one of those parks are located in Tehran and are affiliated with University Jihad, Tarbiat Modares University, Tehran University, the Ministry of Energy (Iran), the Ministry of Health and Medical Education, and Amir Kabir University, among others. Fars Province, with 8 parks, and Razavi Khorasan Province, with 7 parks, rank second and third after Tehran, respectively.[40]
Park's name | Specialization | Location
Guilan Science and Technology Park | Agro-Food, Biotechnology, Chemistry, Electronics, Environment, ICT, Tourism[41] | Guilan
Pardis Technology Park | Advanced Engineering (mechanics and automation), Biotechnology, Chemistry, Electronics, ICT, Nano-technology[41] | 25 km north-east of Tehran
Tehran Software and Information Technology Park (planned)[42] | ICT[43] | Tehran
Tehran University and Science Technology Park[44] | | Tehran
Khorasan Science and Technology Park (Ministry of Science, Research and Technology) | Advanced Engineering, Agro-Food, Chemistry, Electronics, ICT, Services[41] | Khorasan
Sheikh Bahai Technology Park (also known as "Isfahan Science and Technology Town") | Materials and Metallurgy, Information and Communications Technology, Design & Manufacturing, Automation, Biotechnology, Services[41] | Isfahan
Semnan Province Technology Park | | Semnan
East Azerbaijan Province Technology Park | | East Azerbaijan
Yazd Province Technology Park | | Yazd
Mazandaran Science and Technology Park | | Mazandaran
Markazi Province Technology Park | | Arak
"Kahkeshan" (Galaxy) Technology Park[45] | Aerospace | Tehran
Pars Aero Technology Park[46] | Aerospace & Aviation | Tehran
Energy Technology Park (planned)[47] | Energy | N/A
See also: Intellectual property in Iran and Venture capital in Iran
Economic complexity index for Iran (1964–2014).
As of 2004, Iran's national innovation system (NIS) had not experienced a serious entrance to the technology creation phase and mainly exploited the technologies developed by other countries (e.g. in the petrochemicals industry).[48]
In 2016, Iran ranked second in the percentage of graduates in science and engineering in the Global Innovation Index. Iran also ranked fourth in tertiary education, 26th in knowledge creation, 31st in gross percentage of tertiary enrollment, 41st in general infrastructure, 48th in human capital and research, and 51st in innovation efficiency ratio.[49]
In recent years, several drugmakers in Iran have gradually been developing the ability to innovate, moving away from the production of generic drugs.[50]
According to the State Registration Organization of Deeds and Properties, a total of 9,570 national inventions were registered in Iran during 2008. Compared with the previous year, there was a 38-percent increase in the number of inventions registered by the organization.[51]
Iran has several funds to support entrepreneurship and innovation:[40]
Innovation and Flourishing/Prosperity Fund of the Directorate of Science and Technology of the Presidential Office;
National Researchers and Industrialists Support Fund;
Nokhbegan Technology Development Institute;
Nanotechnology Fund;
Novin Technology Development Fund;
Sharif Export Development Research and Technology Fund;
Support Fund of Researchers and Technologists;
Payambar Azam (the great prophet) Scientific and Technological Award;
Student Entrepreneurs Support Fund;
Over 6,000 private interest-free funds and 3 venture capital funds (Shenasa, Simorgh and Sarava Pars). See also: Banking in Iran.
See also: Economy of Iran, Industrial Development and Renovation Organization of Iran, and Industry of Iran § Small and medium enterprises
IKCO's Samand LX
The 5th Development Plan (2010–15) requires the private sector to communicate its research needs to universities so that universities can coordinate research projects in line with these needs, with expenses shared by both sides.[47]
Because it is weak or absent, the support industry contributes little to innovation and technology development activities. Supporting the development of small and medium-sized enterprises in Iran would greatly strengthen the supplier network.[37]
As of 2014, Iran had 930 industrial parks and zones, of which 731 are ready to be ceded to the private sector.[52] The government of Iran has plans for the establishment of 50–60 new industrial parks by the end of the fifth Five-Year Socioeconomic Development Plan (2015).[53]
As of 2016, Iran had nearly 3,000 knowledge-based companies.[54]
A 2003-report by the United Nations Industrial Development Organization regarding small and medium-sized enterprises (SMEs)[55] identified the following impediments to industrial development:
Lack of monitoring institutions;
Inefficient banking system;
Insufficient research & development;
Shortage of managerial skills;
Corruption;
Inefficient taxation;
Socio-cultural apprehensions;
Absence of social learning loops;
Shortcomings in international market awareness necessary for global competition;
Cumbersome bureaucratic procedures;
Shortage of skilled labor;
Lack of intellectual property protection;
Inadequate social capital, social responsibility and socio-cultural values.
Iran's economic complexity ranking has improved by one place over the past 50 years, from 66th in 1964 to 65th in 2014.[56] According to UNCTAD in 2016, private companies in Iran need better marketing strategies with an emphasis on innovation.[57][54]
Despite these problems, Iran has progressed in various scientific and technological fields, including petrochemical, pharmaceutical, aerospace, defense, and heavy industry. Even in the face of economic sanctions, Iran is emerging as an industrialized country.[58]
Parallel to academic research, several companies have been founded in Iran during the last few decades. For example, CinnaGen, established in 1992, is one of the pioneering biotechnology companies in the region. CinnaGen won the Biotechnology Asia 2005 Innovation Awards for its achievements and innovation in biotechnology research. In 2006, Parsé Semiconductor Co. announced it had designed and produced a 32-bit computer microprocessor inside the country for the first time.[59] Software companies are growing rapidly. At CeBIT 2006, ten Iranian software companies introduced their products.[60][61] Iran's National Foundation for Computer Games unveiled the country's first online video game in 2010, capable of supporting up to 5,000 users at the same time.[62]
Science in modern Iran
See also: National Research Institute for Science Policy, Iran National Science Foundation, Education in Iran, and Higher education in Iran
Iran University of Science and Technology entrance.
Theoretical and computational sciences are highly developed in Iran.[63] Despite limitations in funds, facilities and international collaboration, Iranian scientists have been very productive in several experimental fields such as pharmacology, pharmaceutical chemistry, and organic and polymer chemistry. Iranian biophysicists, especially molecular biophysicists, have gained international reputations since the 1990s[citation needed]. High-field nuclear magnetic resonance facilities, microcalorimetry, circular dichroism and instruments for single-protein-channel studies have become available in Iran during the past two decades. Tissue engineering and research on biomaterials have just started to emerge in biophysics departments.
Despite the country's brain drain and its poor political relationship with the United States and some other Western countries, Iran's scientific community remains productive, even while economic sanctions make it difficult for universities to buy equipment or to send people to the United States to attend scientific meetings.[64] Furthermore, Iran considers scientific backwardness one of the root causes of political and military bullying of developing states by developed countries.[65][66] After the Iranian Revolution, there were efforts by religious scholars to reconcile Islam with modern science, and some see this as the reason behind Iran's recent successes in augmenting its scientific output.[67] Iran currently pursues a national goal of self-sufficiency in all scientific arenas.[68][69] Many individual Iranian scientists, along with the Iranian Academy of Medical Sciences and the Academy of Sciences of Iran, are involved in this revival. The Comprehensive Scientific Plan has been devised on the basis of about 51,000 pages of documents and includes 224 scientific projects to be implemented by the year 2025.[70][71]
See also: Healthcare in Iran and Pasteur Institute of Iran
With over 400 medical research facilities and 76 medical journal indexes available in the country, Iran ranks 19th in medical research and is set to become 10th within 10 years (2012).[72][73] Clinical sciences receive heavy investment in Iran. In areas such as rheumatology, hematology and bone marrow transplantation, Iranian medical scientists publish regularly.[74] The Hematology, Oncology and Bone Marrow Transplantation Research Center (HORC) of Tehran University of Medical Sciences at Shariati Hospital was established in 1991. Internationally, this center is one of the largest bone marrow transplantation centers and has carried out a large number of successful transplantations.[75] According to a study conducted in 2005, specialized pediatric hematology and oncology (PHO) services exist in almost all major cities throughout the country, where 43 board-certified or board-eligible pediatric hematologist–oncologists care for children suffering from cancer or hematological disorders. Three children's medical centers at universities have approved PHO fellowship programs.[76] Besides hematology, gastroenterology has recently attracted many talented medical students. The gastroenterology research center based at Tehran University of Medical Sciences has produced increasing numbers of scientific publications since its establishment.
Prof Moslem Bahadori, one of the pioneering figures in modern Iranian medicine
Modern organ transplantation in Iran dates to 1935, when the first cornea transplant in Iran was performed by Professor Mohammad-Qoli Shams at Farabi Eye Hospital in Tehran. The Shiraz Nemazi transplant center, one of the pioneering transplant units of Iran, performed the first Iranian kidney transplant in 1967 and the first Iranian liver transplant in 1995. The first heart transplant in Iran was performed in 1993 in Tabriz. The first lung transplant was performed in 2001, and the first heart and lung transplants were performed in 2002, both at Tehran University of Medical Sciences.[77] Iran developed its first artificial lung in 2009, joining the five other countries in the world that possess such technology.[78] Currently, renal, liver and heart transplantations are routinely performed in Iran. Iran ranks fifth in the world in kidney transplants.[79] The Iranian Tissue Bank, which began operating in 1994, was the first multi-facility tissue bank in the country. In June 2000, the Organ Transplantation Brain Death Act was approved by Parliament, followed by the establishment of the Iranian Network for Transplantation Organ Procurement. This act helped to expand the heart, lung and liver transplantation programs. By 2003, Iran had performed 131 liver, 77 heart, 7 lung, 211 bone marrow, 20,581 cornea and 16,859 renal transplantations. Of these, 82 percent came from living unrelated donors, 10 percent from cadavers and 8 percent from living related donors. The 3-year renal transplant patient survival rate was 92.9%, and the 40-month graft survival rate was 85.9%.[77]
Neuroscience is also emerging in Iran.[80] A few PhD programs in cognitive and computational neuroscience have been established in the country during recent decades.[81] Iran ranks first in the Middle East and the region in ophthalmology.[82][83]
Iranian surgeons treating wounded Iranian veterans during the Iran–Iraq War invented a new neurosurgical treatment for brain-injured patients that superseded the previously prevalent technique developed by US Army surgeon Dr Ralph Munslow. This new surgical procedure helped devise new guidelines that have decreased death rates for comatose patients with penetrating brain injuries from 55% in 1980 to 20% in 2010. It has been said that these new treatment guidelines benefited US congresswoman Gabrielle Giffords, who had been shot in the head.[84][85][86]
See also: Pharmaceuticals in Iran and Agribusiness in Iran
Inside AryoGen's production line
Iran has a biotechnology sector that is one of the most advanced in the developing world.[87][88] The Razi Institute for Serums and Vaccines and the Pasteur Institute of Iran are leading regional facilities in the development and manufacture of vaccines. In January 1997, the Iranian Biotechnology Society (IBS) was created to oversee biotechnology research in Iran.[87]
Agricultural research has been successful in releasing high-yielding varieties with greater stability and tolerance to harsh weather conditions. Agricultural researchers are working jointly with international institutes to find the best procedures and genotypes to overcome crop failure and to increase yields. In 2005, Iran's first genetically modified (GM) rice was approved by national authorities and is being grown commercially for human consumption. In addition to GM rice, Iran has produced several GM plants in the laboratory, such as insect-resistant maize; cotton; potatoes and sugar beets; herbicide-resistant canola; salinity- and drought-tolerant wheat; and blight-resistant maize and wheat.[89] The Royan Institute engineered Iran's first cloned animal; the sheep was born on 2 August 2006 and survived the critical first two months of its life.[90][91]
In the last months of 2006, Iranian biotechnologists announced that they had become the third manufacturer in the world to bring CinnoVex (a recombinant form of interferon beta-1a) to market.[92] According to a study by David Morrison and Ali Khademhosseini (Harvard-MIT and Cambridge), stem cell research in Iran is amongst the top 10 in the world.[93] Iran will invest 2.5 billion dollars in the country's stem cell research over the five years 2008–2013.[94] Iran ranks 2nd in the world in transplantation of stem cells.[95]
In 2010, Iran began mass-producing ocular bio-implants named SAMT.[96] Iran began investing in biotechnological projects in 1992, and this is the country's tenth such facility. 'Lifepatch' is the fourth bio-implant mass-produced by Iran, after bone, heart valve and tendon bio-implants.[96] Iran is one of only 12 countries in the world that produce biotech drugs.[72] According to Scopus, Iran ranked 21st in biotechnology by producing nearly 4,000 related scientific articles in 2014.[97]
Ali Javan first proposed and co-invented the gas laser. Laser optics via fiber optics is a key technology used in the Internet today.[98]
In 2010, AryoGen Biopharma established the biggest and most modern knowledge-based facility for the production of therapeutic monoclonal antibodies in the region. As of 2012, Iran produced 15 types of monoclonal antibody drugs. These anti-cancer drugs are otherwise produced by only two to three Western companies.[99]
In 2015, Noargen[100] was established as the first officially registered contract research organization (CRO) and contract manufacturing organization (CMO) in Iran. Its main activity is providing CRO and CMO services to Iran's biopharmaceutical sector, filling a gap and helping to develop biotech ideas and products toward commercialization.
Physics and materials
See also: Iranian nuclear program, IR-40, and Bushehr nuclear power plant
Iran has had some significant successes in nuclear technology during recent decades, especially in nuclear medicine. However, little connection exists between Iran's scientific community and the country's nuclear program. Iran is the 7th country to produce uranium hexafluoride (UF6).[101] Iran now controls the entire cycle for producing nuclear fuel.[102] Iran is among the 14 countries in possession of nuclear [energy] technology.[103] In 2009, Iran was developing its first domestic linear particle accelerator (LINAC).[104]
It is among the few countries in the world that have the technology to produce zirconium alloys.[105][106] Iran produces a wide range of lasers in demand within the country in medical and industrial fields.[98] In 2011, Iranian scientists at the Atomic Energy Organization of Iran (AEOI) designed and built a nuclear fusion device named IR-IECF.[107] Iran is the 6th country with such technology.[107] In 2018, Iran inaugurated its first laboratory for quantum entanglement at the National Laser Center.[108]
Computer science, electronics and robotics
See also: Communications in Iran and Iran Electronics Industries
The Center of Excellence in Design, Robotics, and Automation was established in 2001 to promote educational and research activities in the fields of design, robotics and automation. Besides these professional groups, several robotics groups work in Iranian high schools.[109] The "Sorena 2" robot, designed by engineers at the University of Tehran, was unveiled in 2010. The robot can be used for handling sensitive tasks without the need for human cooperation. It walks with slow, human-like steps and moves its hands and feet in a coordinated, human-like manner.[110][111][112] Next, the researchers plan to develop speech and vision capabilities and greater intelligence for the robot.[113] The Institute of Electrical and Electronics Engineers (IEEE) has placed Surena among the five prominent robots of the world after analyzing its performance.[114] In 2010, Iranian researchers developed, for the first time in the country, ten robots for the nation's automotive industry using domestic know-how.[115]
The Ultra Fast Microprocessors Research Center at Tehran's Amirkabir University of Technology successfully built a supercomputer in 2007.[116] The supercomputer's maximum processing capacity is 860 billion operations per second. Iran's first supercomputer, launched in 2001, was also fabricated by Amirkabir University of Technology.[117] In 2009, a SUSE Linux-based HPC system made by the Aerospace Research Institute of Iran (ARI) was launched with 32 cores and now runs 96 cores. Its performance was pegged at 192 GFLOPS.[118] Iran's National Super Computer, made by the Iran Info-Tech Development Company (a subsidiary of IDRO), was built from 216 AMD processors. The Linux-cluster machine has a reported "theoretical peak performance of 860 gig-flops".[119] The Routerlab team at the University of Tehran successfully designed and implemented an access router (RAHYAB-300) and a 40 Gbit/s high-capacity switch fabric (UTS).[120] In 2011, Amirkabir University of Technology and Isfahan University of Technology produced two new supercomputers with a processing capacity of 34,000 billion operations per second.[121] The supercomputer at Amirkabir University of Technology is expected to be among the 500 most powerful computers in the world.[121]
Chemistry and nanotechnology
See also: National Petrochemical Company § Research and development
Number of Iranian articles on nanotechnology in 2014. Source: UNESCO Science Report: towards 2030 (2015)
Iran is ranked 12th in the field of chemistry (2018).[122] In 2007, Iranian scientists at the Medical Sciences and Technology Center succeeded in mass-producing an advanced scanning microscope, the scanning tunneling microscope (STM).[123] By 2017, Iran ranked 4th in ISI-indexed nano-articles.[124][125][126][127][128] Iran has designed and mass-produced more than 35 kinds of advanced nanotechnology devices, including laboratory equipment, antibacterial strings, power station filters and construction-related equipment and materials.[129]
Research in nanotechnology has taken off in Iran since the Nanotechnology Initiative Council (NIC) was founded in 2002. The council determines the general policies for the development of nanotechnology and co-ordinates their implementation. It provides facilities, creates markets and helps the private sector to develop relevant R&D activities. In the past decade, 143 nanotech companies have been established in eight industries. More than one-quarter of these are found in the health care industry, compared to just 3% in the automotive industry.[21]
Today, five research centres specialize in nanotechnology, including the Nanotechnology Research Centre at Sharif University, which established Iran's first doctoral programme in nanoscience and nanotechnology a decade ago. Iran also hosts the International Centre on Nanotechnology for Water Purification, established in collaboration with UNIDO in 2012. In 2008, NIC established an Econano network to promote the scientific and industrial development of nanotechnology among fellow members of the Economic Cooperation Organization, namely Afghanistan, Azerbaijan, Kazakhstan, Kyrgyzstan, Pakistan, Tajikistan, Turkey, Turkmenistan and Uzbekistan.[21]
Industries in which Iranian nanotech companies are active. Source: UNESCO Science Report: towards 2030
Iran recorded strong growth in the number of articles on nanotechnology between 2009 and 2013, according to Thomson Reuters' Web of Science. By 2013, Iran ranked seventh for this indicator. The number of articles per million population has tripled to 59, overtaking Japan in the process. As yet, however, few nanotechnology patents are being granted to Iranian inventors. The ratio of nanotechnology patents to articles was 0.41 per 100 articles for Iran in 2015.[21]
Aviation and space
See also: Iran Aviation Industries Organization and Iranian Space Agency
Simorgh launch. Iranian Space Agency.
On 17 August 2008, the Iranian Space Agency proceeded with the second test launch of the three-stage Safir SLV from a site south of Semnan in the northern part of the Dasht-e-Kavir desert. The Safir (Ambassador) satellite carrier successfully launched the Omid satellite into orbit in February 2009.[130][131][132] Iran is the 9th country to put a domestically built satellite into orbit since the Soviet Union launched the first in 1957.[133] Iran is among a handful of countries in the world capable of developing satellite-related technologies, including satellite navigation systems.[134] Iran's first astronaut is to be sent into space on board an Iranian shuttle by 2019.[135][136] Iran is also the sixth country to have sent animals into space. Iran is one of the few countries capable of producing 20–25-ton sea patrol aircraft.[137] In 2013, Iran constructed its first hypersonic wind tunnel for testing missiles and conducting aerospace research.[138] Iran is the 8th country capable of manufacturing jet engines.[139]
The Iranian government has committed 150 billion rials (roughly 16 million US dollars)[140] for a telescope, an observatory and a training program, all part of a plan to build up the country's astronomy base. Iran wants to collaborate internationally and become internationally competitive in astronomy, says the University of Michigan's Carl Akerlof, an adviser to the Iranian project. "For a government that is usually characterized as wary of foreigners, that's an important development".[141] In July 2010, Iran unveiled its largest domestically manufactured telescope, dubbed "Tara".[142] In 2016, Iran unveiled a new optical telescope for observing celestial objects as part of APSCO; it will be used to understand and predict the physical location of natural and man-made objects in orbit around the Earth.[143]
See also: Energy in Iran, Petroleum industry in Iran, List of power stations in Iran, and Industry of Iran
Iran is ranked 12th in the field of energy (2018).[144] Iran has achieved the technical expertise to set up hydroelectric, gas and combined-cycle power plants.[145][146] Iran is among the four countries in the world capable of manufacturing advanced V94.2 gas turbines.[147] Iran is able to produce all the parts needed for its gas refineries[148] and is now the third country in the world to have developed gas-to-liquids (GTL) technology.[149][150] Iran produces 70% of its industrial equipment domestically, including various turbines, pumps, catalysts, refineries, oil tankers, oil rigs, offshore platforms and exploration instruments.[151][152][153][154][155][156][151] Iran is among the few countries that have mastered the technology and know-how for drilling in deep waters.[157] Iran's indigenously designed Darkhovin Nuclear Power Plant is scheduled to come online in 2016.[158]
Average citations of Iranian nanotech articles, in comparison with those of other leading countries, 2013. Source: UNESCO Science Report: towards 2030 (2015)
Main articles: Defense industry of Iran and List of military equipment manufactured in Iran
See also: Defense Industries Organization, Iran Electronics Industries, and Iran Aviation Industries Organization
Iran possesses the technology to launch superfast anti-submarine rockets that can travel at a speed of 100 meters per second underwater, making the country second only to Russia in possessing this technology.[159][160] Iran is among the five countries in the world to have developed ammunition with laser-targeting technology.[161] Iran is among the few countries that possess the technological know-how to build unmanned aerial vehicles (UAVs) fitted with scanning and reconnaissance systems.[162] Iran is among the 12 countries with missile technology and advanced mobile air defense systems.[103] Over the past years, Iran has made important breakthroughs in its defense sector and attained self-sufficiency in producing important military equipment and systems.[163] Since 1992, it has also produced its own tanks, armored personnel carriers, sophisticated radars, guided missiles, a submarine, and fighter planes.[164]
Scientific collaboration
See also: Foreign relations of Iran, Iran's brain drain, and Iranian citizens abroad
Iran annually hosts international science festivals. The International Kharazmi Festival in Basic Science and The Annual Razi Medical Sciences Research Festival promote original research in science, technology, and medicine in Iran. There is also an ongoing R&D collaboration between large state-owned companies and the universities in Iran.
Iran welcomes scientists from all over the world to visit and take part in seminars and collaborations. Many Nobel laureates and influential scientists such as Bruce Alberts, F. Sherwood Rowland, Kurt Wüthrich, Stephen Hawking, and Pierre-Gilles de Gennes have visited Iran since the Iranian revolution. Some universities have also hosted American and European scientists as guest lecturers during recent decades.
Although sanctions have caused a shift in Iran's trading partners from West to East, scientific collaboration has remained largely oriented towards the West. Between 2008 and 2014, Iran's top partners for scientific collaboration were the US, Canada, the UK and Germany, in that order. Iranian scientists co-authored almost twice as many articles with their counterparts in the USA (6 377) as with their next-closest collaborators in Canada (3 433) and the UK (3 318).[21] Iranian and U.S. scientists have collaborated on a number of projects.[165]
Malaysia is Iran's fifth-closest collaborator in science and India ranks tenth, after Australia, France, Italy and Japan. One-quarter of Iranian articles had a foreign co-author in 2014, a stable proportion since 2002. Scientists have been encouraged to publish in international journals in recent years, a policy that is in line with Vision 2025.[21]
The volume of scientific articles authored by Iranians in international journals has increased considerably since 2005, according to Thomson Reuters' Web of Science (Science Citation Index Expanded). Iranian scientists now publish widely in international journals in engineering and chemistry, as well as in the life sciences and physics. Women contribute about 13% of articles, with a focus on chemistry, medical sciences and social sciences. Contributing to this trend is the fact that PhD programmes in Iran now require students to have publications in the Web of Science.
Iran has submitted a formal request to participate in a project which is building an International Thermonuclear Experimental Reactor (ITER) in France by 2018. This megaproject is developing nuclear fusion technology to lay the groundwork for tomorrow's nuclear fusion power plants. The project involves the European Union, China, India, Japan, Republic of Korea, Russian Federation and USA. A team from ITER visited Iran in November 2016 to deepen its understanding of Iran's fusion-related programmes.[21][166]
Iran hosts several international research centres, including the following established between 2010 and 2014 under the auspices of the United Nations: the Regional Center for Science Park and Technology Incubator Development (UNESCO, est. 2010), the International Center on Nanotechnology (UNIDO, est. 2012) and the Regional Educational and Research Center for Oceanography for Western Asia (UNESCO, est. 2014).[21]
Iran is stepping up its scientific collaboration with developing countries. In 2008, Iran's Nanotechnology Initiative Council established an Econano network to promote the scientific and industrial development of nanotechnology among fellow members of the Economic Cooperation Organization, namely Afghanistan, Azerbaijan, Kazakhstan, Kyrgyzstan, Pakistan, Tajikistan, Turkey, Turkmenistan and Uzbekistan. The Regional Centre for Science Park and Technology Incubator Development is also initially targeting these same countries. It is offering them policy advice on how to develop their own science parks and technology incubators.[21]
Iran is an active member of COMSTECH and collaborates on its international projects. The coordinator general of COMSTECH, Dr. Atta ur Rahman has said that Iran is the leader in science and technology among Muslim countries and hoped for greater cooperation with Iran in different international technological and industrialization projects.[167] Iranian scientists are also helping to construct the Compact Muon Solenoid, a detector for the Large Hadron Collider of the European Organization for Nuclear Research (CERN) that is due to come online in 2008[citation needed]. Iranian engineers are involved in the design and construction of the first regional particle accelerator of the Middle East in Jordan, called SESAME.[168]
Since the lifting of international sanctions, Iran has been developing scientific and educational links with Kuwait, Switzerland, Italy, Germany, China and Russia.[169][170][171][172][173]
Contribution of Iranians and people of Iranian origin to modern science
Main article: List of contemporary Iranian scientists, scholars, and engineers
Ahmad Reza Dehpour, Iran's most prolific researcher of the year 2006
Scientists with an Iranian background have made significant contributions to the international scientific community. In 1960, Ali Javan co-invented the first gas laser. In 1973, fuzzy set theory was developed by Lotfi Zadeh. Iranian cardiologist Tofy Mussivand invented the first artificial cardiac pump, a precursor of the artificial heart, and afterwards developed it further. HbA1c was discovered by Samuel Rahbar and introduced to the medical community. The Vafa-Witten theorem was proposed by Cumrun Vafa, an Iranian string theorist, and his co-worker Edward Witten. The Kardar-Parisi-Zhang (KPZ) equation is named in part after Mehran Kardar, a notable Iranian physicist. Other examples of notable discoveries and innovations by Iranian scientists and engineers (or of Iranian origin) include:
Siavash Alamouti and Vahid Tarokh: invention of space–time block code
Moslem Bahadori: reported the first case of plasma cell granuloma of the lung.
Nader Engheta, inventor of "invisibility shield" (plasmonic cover) and research leader of the year 2006, Scientific American magazine,[174] and winner of a Guggenheim Fellowship (1999) for "Fractional paradigm of classical electrodynamics"
Reza Ghadiri: invention of a self-organized replicating molecular system, for which he received 1998 Feynman prize
Maysam Ghovanloo: inventor of Tongue-Drive Wheelchair.[175]
Alireza Mashaghi: made the first single-molecule observation of protein folding, for which he was named the Discoverer of the Year in 2017.[176][177]
Karim Nayernia: discovery of spermatogonial stem cells
Afsaneh Rabiei: inventor[178] of an ultra-strong and lightweight material known as composite metal foam (CMF).[179]
Mohammad-Nabi Sarbolouki, invention of dendrosome[180]
Ali Safaeinili: co-inventor of Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS)[181]
Mehdi Vaez-Iravani: invention of shear force microscopy
Rouzbeh Yassini: inventor of the cable modem
Many Iranian scientists have received internationally recognised awards. Examples include:
Maryam Mirzakhani: In August 2014, Mirzakhani became the first-ever woman, as well as the first-ever Iranian, to receive the Fields Medal, the highest prize in mathematics, for her contributions to topology.[182]
Cumrun Vafa, 2017 Breakthrough Prize in Fundamental Physics[183]
Shekoufeh Nikfar: named among the top women scientists by TWAS-TWOWS-Scopus in the field of medicine in 2009.[184][185]
Ramin Golestanian: In August 2014, Ramin Golestanian won the Holweck Prize for his research work in physics.[186]
Shirin Dehghan: 2006 Women in Technology Award[187]
Mohammad Abdollahi: laureate of the IAS-COMSTECH 2005 Prize in the field of pharmacology and toxicology and an IAS Fellow. He is ranked among the top 1% of outstanding scientists of the world in the field of pharmacology and toxicology according to the Essential Science Indicators from Thomson Reuters ISI.[188] He is also known as one of the outstanding leading scientists of OIC member countries.[189]
See also: International Rankings of Iran in Science and Technology
According to Scopus, Iran ranked 17th in the world in terms of science production in 2012, with 34,155 articles, ahead of Switzerland and Turkey.[190]
According to the Institute for Scientific Information (ISI), Iran increased its academic publishing output nearly tenfold from 1996 to 2004 and was ranked first globally in terms of output growth rate (followed by China with a threefold increase).[191][192] In comparison, the only G8 countries in the top 20 ranking for fastest performance improvement are Italy, at tenth, and Canada, at 13th globally.[191][192][193] Iran, China, India and Brazil are the only developing countries among the 31 nations that produce 97.5% of the world's total scientific output. The remaining 162 developing countries contribute less than 2.5% of the world's scientific output.[194] Despite a massive improvement, from 0.0003% of global scientific output in 1970 to 0.29% in 2003, Iran's share of the world's total output remained small.[195][196] According to Thomson Reuters, Iran demonstrated remarkable growth in science and technology over the past decade, increasing its science and technology output fivefold from 2000 to 2008. Most of this growth was in engineering and chemistry, producing 1.4% of the world's total output in the period 2004–2008. By 2008, Iranian science and technology output accounted for 1.02% of the world's total output (that is, roughly 340,000% growth over the 37 years from 1970 to 2008).[197] 25% of scientific articles published by Iran in 2008 were international co-authorships. The top five countries co-authoring with Iranian scientists are the US, UK, Canada, Germany and France.[198][199]
A 2010 report by the Canadian research firm Science-Metrix put Iran first globally in terms of growth in scientific productivity, with a growth index of 14.4, followed by South Korea with a growth index of 9.8.[200] Iran's growth rate in science and technology was 11 times the average growth of the world's output in 2009, and in terms of total output per year Iran has already surpassed the total scientific output of countries such as Sweden, Switzerland, Israel, Belgium, Denmark, Finland, Austria and Norway.[201][202][203] With a yearly growth rate of 25% in science and technology, Iran is doubling its total output every three years and, at this rate, will reach the level of Canadian annual output in 2017.[204] The report further notes that Iran's scientific capability build-up has been the fastest of the past two decades and is partly a response to the Iraqi invasion of Iran, the subsequent bloody Iran–Iraq War and Iran's high casualties under the international sanctions in effect on Iran, as compared with the international support Iraq enjoyed. The then technologically superior Iraq and its use of chemical weapons on Iranians led Iran to embark on a very ambitious science development program, mobilizing scientists in order to offset its international isolation; this is most evident in the country's advances in the nuclear sciences, which have grown by 8,400% over the past two decades, compared with 34% for the rest of the world. The report further predicts that, although Iran's scientific advancement in response to its international isolation may remain a cause of concern for the world and may lead to a higher quality of life for the Iranian population, it will simultaneously and paradoxically isolate Iran even more because of the world's concern over its technological advances. Other findings of the report point out that the fastest-growing sectors in Iran are physics, public health sciences, engineering, chemistry and mathematics. Overall, the growth has mostly occurred after 1980 and has been accelerating especially since 1991, with a significant acceleration in 2002 and an explosive surge since 2005.[4][200][201][205][206][207] It has been argued that scientific and technological advancement, beyond the nuclear program itself, is the main reason for the United States' worry about Iran, which may become a superpower in the future.[208][209][210] Some in the Iranian scientific community see sanctions as a Western conspiracy to stop Iran's rising rank in modern science and allege that some (Western) countries want to monopolize modern technologies.[67]
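As a rough arithmetic check of the doubling-time claim above (this calculation is not taken from the cited report): a constant 25% annual growth rate implies a doubling time of

$T_{2} = \dfrac{\ln 2}{\ln 1.25} \approx 3.1 \ \text{years},$

which is consistent with the statement that total output doubles roughly every three years at that growth rate.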
According to the US government report on science and engineering titled "Science and Engineering Indicators: 2010", prepared by the National Science Foundation, Iran has the world's highest growth rate in science and engineering article output, with an annual growth rate of 25.7%. The report is introduced as a factual and policy-neutral "...volume of record comprising the major high-quality quantitative data on the U.S. and international science and engineering enterprise". It also notes that Iran's very rapid growth rate within the wider region was led by its growth in scientific instruments, pharmaceuticals, communications and semiconductors.[211][212][213][214][215]
The subsequent National Science Foundation report, published by the US government in 2012 under the name "Science and Engineering Indicators: 2012", put Iran first globally in terms of growth in science and engineering article output in the first decade of this millennium, with an annual growth rate of 25.2%.[216]
The latest National Science Foundation report, published by the US government in 2014 and titled "Science and Engineering Indicators 2014", has again ranked Iran first globally in terms of growth in science and engineering article output, at an annualized growth rate of 23.0%, with 25% of Iran's output produced through international collaboration.[217][218]
Iran ranked 49th for citations, 42nd for papers, and 135th for citations per paper in 2005.[219] Its publication rate in international journals has quadrupled during the past decade. Although this is still low compared with the developed countries, it puts Iran in first place among Islamic countries.[64] According to a British government study (2002), Iran ranked 30th in the world in terms of scientific impact.[220]
According to a report by SJR (a Spanish-sponsored scientific data portal), Iran ranked 25th in the world in scientific publications by volume in 2007 (a huge leap from 40th a few years before).[221] According to the same source, Iran ranked 20th and 17th by total output in 2010 and 2011, respectively.[222][223]
In a 2008 report by the Institute for Scientific Information (ISI), Iran ranked 32nd, 46th and 56th in chemistry, physics and biology, respectively, among all science-producing countries.[224] Iran ranked 15th in 2009 in the field of nanotechnology in terms of articles published.[127]
Science Watch reported in 2008 that Iran has the world's highest growth rate for citations in the medical, environmental and ecological sciences.[225] According to the same source, during the period 2005–2009 Iran produced 1.71% of the world's total engineering papers, 1.68% of the world's chemistry papers and 1.19% of the world's materials science papers.[203]
According to the sixth report on the "international comparative performance of the UK research base", prepared in September 2009 by the Britain-based research firm Evidence and the Department for Business, Innovation and Skills, Iran increased its total output from 0.13% of the world's output in 1999 to almost 1% in 2008. According to the same report, Iran doubled its biological sciences and health research output in just two years (2006–2008). The report further notes that by 2008 Iran had increased its output in the physical sciences as much as tenfold in ten years, and its share of the world's total output had reached 1.3%, compared with a US share of 20% and a Chinese share of 18%. Similarly, Iran's engineering output had grown to 1.6% of the world's output, greater than that of Belgium or Sweden and just below Russia's output at 1.8%. During the period 1999–2008, Iran improved its science impact from 0.66 to 1.07, above the world average of 0.7 and similar to Singapore's. In engineering, Iran improved its impact and is already ahead of India, South Korea and Taiwan in engineering research performance. By 2008, Iran's share of the most cited top 1% of the world's papers was 0.25% of the world's total.[226]
According to the French government report "L'Observatoire des sciences et des techniques (OST) 2010", Iran had the world's fastest growth rate in scientific article output over the 2003–2008 period, at +219%, producing 0.8% of the world's total materials science output in 2008, the same as Israel. The fastest-growing scientific field in Iran was medical sciences, at 344%, and the slowest was chemistry, at 128%, with growth in other fields being biology 342%, ecology 298%, physics 182%, basic sciences 285%, engineering 235% and mathematics 255%. According to the same report, among the countries producing less than 2% of the world's science and technology, Iran, Turkey and Brazil showed the most dynamic growth in scientific output, with Turkey and Brazil growing by more than 40% and Iran by more than 200%, compared with growth rates of 31% and 37% for South Korea and Taiwan, respectively. Iran was also among the countries whose scientific visibility was growing fastest in the world, alongside China, Turkey, India and Singapore, though all were growing from a low-visibility base.[227][228][229]
According to the latest French government report, "L'Observatoire des sciences et des techniques (OST) 2014", Iran had the world's fastest growth rate in scientific production in the period between 2002 and 2012, having increased its share of the world's total scientific output by +682% over that period, producing 1.4% of the world's total science and ranking 18th globally in terms of total scientific output. Iran also ranks first globally for having increased its share of the world's high-impact (top 10%) publications, by +1338% between 2002 and 2012, and likewise ranks first for increasing its global scientific visibility, its share of international citations having grown by +996% over the same period. Iran also ranks first in this report for growth in scientific production by individual field, having increased its output in biology by +1286%, medicine by +900%, applied biology and ecology by +816%, chemistry by +356%, physics by +577%, space sciences by +947%, engineering sciences by +796% and mathematics by +556%.[230][231][232]
A bibliometric analysis of the Middle East, titled "Global Research Report Middle East", was released by a professional division of Thomson Reuters in 2011, comparing scientific research in Middle Eastern countries with that of the world over the first decade of this century. The study ranks Iran second after Turkey in terms of total scientific output, with Turkey producing 1.9% of the world's total science output and Iran 1.3%. The total scientific output of the 14 countries surveyed, including Bahrain, Egypt, Iran, Iraq, Jordan, Kuwait, Lebanon, Oman, Qatar, Saudi Arabia, Syria, Turkey, the United Arab Emirates and Yemen, was just 4% of the world's total, with Turkey and Iran producing the bulk of scientific research in the region. In terms of growth in scientific research, Iran ranked first, with a 650% increase in its share of the world's output, and Turkey second, with growth of 270%. Turkey increased its research publication rate from 5,000 papers in 2000 to nearly 22,000 in 2009, while Iran's research publications started from a lower point of 1,300 papers in 2000 and grew to 15,000 papers in 2009, with a notable surge in Iranian growth after 2004. In terms of production of highly cited papers, 1.7% of all Iranian papers in mathematics and 1.3% of papers in engineering fields attained highly cited status, defined as the most cited top 1% of the world's publications, exceeding the world average in citation impact for those fields. Overall, Iran produces 0.48% of the world's highly cited output across all fields, about half of what would be expected for parity at 1%. Comparative figures for other countries in the region following Iran are: Turkey, producing 0.37% of the world's highly cited papers; Jordan, 0.28%; Egypt, 0.26%; and Saudi Arabia, 0.25%. External scientific collaboration accounted for 21% of the total research projects undertaken by researchers in Iran, with the largest collaborators being the United States at 4.3%, the United Kingdom at 3.3%, Canada at 3.1%, Germany at 1.7% and Australia at 1.6%.[233]
In 2011, the Royal Society, the world's oldest scientific society and Britain's leading academic institution, in collaboration with Elsevier published a study titled "Knowledge, networks and nations" surveying the global scientific landscape. According to this survey, Iran had the world's fastest growth rate in science and technology, having increased its scientific output 18-fold over the period 1996–2008.[25][26][28][29][234][235][236][237][238][239][240]
According to WIPO's report "World Intellectual Property Indicators 2013", Iran ranked 90th for patents generated by Iranian nationals worldwide, 100th in industrial designs and 82nd in trademarks, placing it below Jordan and Venezuela in this regard but above Yemen and Jamaica.[241][242]
Iranian journals listed in the Institute for Scientific Information (ISI)
See also: Media of Iran
According to the Institute for Scientific Information (ISI), Iranian researchers and scientists published a total of 60,979 scientific studies in major international journals over the 19 years from 1990 to 2008.[243][244] Iran's growth in science production (as measured by the number of publications in science journals) was reportedly the "fastest in the world", followed by Russia and China (2017/18).[245]
Scientific growth in Iran[190][203][243][244][246][245]
Iranian neuroscientists have also published in highly acclaimed journals; one Nature paper, for example, reports research carried out by Iranians who received the majority of their training and did their research in Iran.
Acta Medica Iranica
Applied Entomology and PhytoPathology
Archives of Iranian Medicine
DARU Journal of Pharmaceutical Sciences
Iranian Biomedical Journal
Iranian Journal of BioTechnology
Iranian Journal of Chemistry & Chemical Engineering
Iranian Journal of Fisheries Sciences-English
Iranian Journal of Plant Pathology
Iranian Journal of Science and Technology
Iranian Polymer Journal
Iranian Journal of Public Health
Iranian Journal of Pharmaceutical Research
Iranian Journal of Reproductive Medicine
Iranian Journal of Veterinary Medicine
Iranian Journal of Fuzzy Systems
Journal of Entomological Society of Iran
Plant Pests & Diseases Research Institute Insect Taxonomy Research Department Publication
The Journal of the Iranian Chemical Society
Rostaniha (Botanical Journal of Iran)
Iran portal
Science portal
Technology portal
Higher Education in Iran
List of Iranian Research Centers
List of contemporary Iranian scientists, scholars, and engineers (modern era)
List of Iranian scientists
Economy of Iran
Industry of Iran
Iran's brain drain
International rankings of Iran
Intellectual Movements in Iran
Base isolation from Iran
Science in newly industrialized countries
Composite Index of National Capability
Persian philosophy
Prominent organizations
Institute of Standards and Industrial Research of Iran
Atomic Energy Organization of Iran
Iranian Space Agency
Iranian Chemists Association
The Physical Society of Iran
HORCSCT
Iranian Research Organization for Science and Technology
Iran National Science Foundation
This article incorporates text from a free content work licensed under CC BY-SA IGO 3.0: UNESCO Science Report: towards 2030, pp. 387–409, UNESCO Publishing.
^ Andy Coghlan. "Iran is top of the world in science growth". New Scientist. Archived from the original on 27 April 2015. Retrieved 2 April 2016.
^ "Archived copy" (PDF). Archived from the original (PDF) on 14 July 2011. Retrieved 21 October 2011. CS1 maint: archived copy as title (link)
^ "Archived copy" (PDF). Archived (PDF) from the original on 1 December 2017. Retrieved 20 January 2011. CS1 maint: archived copy as title (link)
^ a b "Iran's science progress fastest in world: Canadian report". Presstv.com. 19 February 2010. Archived from the original on 6 August 2012. Retrieved 21 October 2011.
^ Ward English, Paul (21 June 1968). "The Origin and Spread of Qanats in the Old World". Proceedings of the American Philosophical Society. 112 (3): 170–181. JSTOR 986162.
^ "Riddle of 'Baghdad's batteries'". BBC News. 27 February 2003. Archived from the original on 3 September 2017. Retrieved 23 May 2010.
^ "Intute: Science, Engineering and Technology". Psigate.ac.uk. Archived from the original on 26 September 2006. Retrieved 21 October 2011.
^ "Internet Archive Wayback Machine". 27 October 2009. Archived from the original on 27 October 2009. Retrieved 7 February 2012.
^ Hill, Donald. Islamic Science and Engineering. 1993. Edinburgh University Press. ISBN 0-7486-0455-3 p.222
^ Scott L. Montgomery; Alok Kumar (12 June 2015). A History of Science in World Cultures: Voices of Knowledge. Routledge. pp. 221–. ISBN 978-1-317-43906-6.
^ Bennison, Amira K. (2009). The great caliphs : the golden age of the 'Abbasid Empire. New Haven: Yale University Press. p. 187. ISBN 978-0-300-15227-2.
^ Casulleras, Josep (2007). "Banū Mūsā". In Thomas Hockey (ed.). The Biographical Encyclopedia of Astronomers. New York: Springer. pp. 92–4. ISBN 978-0-387-31022-0.
^ C. Elgood. A Medical history of Persia. Cambridge Univ. Press. p. 173
^ Transplantation Activities in Iran Archived 28 September 2007 at the Wayback Machine, Behrooz Broumand
^ Gorji A; Khaleghi Ghadiri M (December 2002). "History of headache in medieval Persian medicine". Lancet Neurology. 1 (8): 510–5. doi:10.1016/S1474-4422(02)00226-0. PMID 12849336.
^ Edward Granville Browne. Islamic Medicine, Goodword Books, 2002, ISBN 81-87570-19-9. p. 79
^ "Archives Of Iranian Medicine". Ams.ac.ir. 18 August 1905. Archived from the original on 28 September 2011. Retrieved 21 October 2011.
^ "Introduction to Urmia University". Archived from the original on 8 June 2007.
^ a b Gemson, Claire (13 October 2007). "1,001 inventions mark Islam's role in science". The Scotsman. Edinburgh, UK.
^ "9.2 A 13th-Century Darwin? – Tusi's Views on Evolution – Farid Alakbarov". Azer.com. Archived from the original on 13 December 2010. Retrieved 21 October 2011.
^ a b c d e f g h i j k l m n o p q r s t u v w x y z Ashtarian, Kioomars (2015). Iran. In: UNESCO Science Report: towards 2030 (PDF). Paris: UNESCO. pp. 389–407. ISBN 978-92-3-100129-1. Archived (PDF) from the original on 30 June 2017. Retrieved 6 June 2017.
^ "Memorandum of the foreign trade regime of Iran" (PDF). Ministry of Commerce. November 2009. Archived from the original (PDF) on 13 July 2011. Cite journal requires |journal= (help)
^ "Govt. Favors weaning research from national budget". Tehran Times. 29 July 2015. Archived from the original on 10 June 2016. Retrieved 18 May 2016.
^ "Iran's Neoliberal Austerity-Security Budget". Hooshang Amirahmadi. Payvand.com. 16 February 2015. Archived from the original on 21 August 2016. Retrieved 21 February 2015.
^ a b "- Royal Society" (PDF). Archived (PDF) from the original on 3 June 2011. Retrieved 2 April 2016.
^ a b "GLOBAL: Strong science in Iran, Tunisia, Turkey". University World News. Archived from the original on 4 October 2011. Retrieved 21 October 2011.
^ "IRAN: 20-year plan for knowledge-based economy". University World News. Archived from the original on 4 October 2011. Retrieved 21 October 2011.
^ a b "China marching ahead in science". Archived from the original on 1 April 2011.
^ a b "China leads challenge to 'scientific superpowers' – Technology & science – Science". MSNBC. 28 March 2011. Archived from the original on 1 April 2011. Retrieved 21 October 2011.
^ "Science and Technology in Iran: A Brief Review". IFPNews. 21 January 2018. Archived from the original on 20 May 2019. Retrieved 16 March 2019.
^ Stone, Richard (16 September 2005). "An Islamic Science Revolution?". Science. 309 (5742): 1802–1804. doi:10.1126/science.309.5742.1802. PMID 16166490. Archived from the original on 16 April 2008. Retrieved 2 April 2016.
^ "GERD/GDP ratio in Iran". UNESCO Institute for Statistics. 6 June 2017. Archived from the original on 23 October 2016.
^ "Iran: Huge Investments On Nanotech". Zawya. 30 October 2008. Archived from the original on 30 July 2012. Retrieved 21 October 2011.
^ editorial (17 August 2006). "Revival in Iran". Nature. 442 (7104): 719–720. Bibcode:2006Natur.442R.719.. doi:10.1038/442719b. PMID 16915244.
^ Source: Unescopress. "Asia leaping forward in science and technology, but Japan feels the global recession, shows UNESCO report | United Nations Educational, Scientific and Cultural Organization". Unesco.org. Archived from the original on 3 April 2011. Retrieved 21 October 2011.
^ "UNESCO science report, 2010: the current status of science around the world; 2010" (PDF). Archived (PDF) from the original on 15 November 2011. Retrieved 21 October 2011.
^ a b "Archived copy" (PDF). Archived (PDF) from the original on 21 October 2012. Retrieved 11 February 2014. CS1 maint: archived copy as title (link)
^ "Scientific Ranking of Iran". Archived from the original on 28 September 2012. Retrieved 26 September 2012.
^ a b "How sanctions helped Iranian tech industry". Al-Monitor. 4 February 2016. Archived from the original on 29 March 2016. Retrieved 2 April 2016.
^ a b Entrepreneurship Ecosystem in Iran cgiran.org
^ a b c d "Iran". unido.org. Archived from the original on 27 September 2011. Retrieved 21 October 2011.
^ "Economist Intelligence Unit". Cite journal requires |journal= (help); |contribution= ignored (help)
^ [1][dead link]
^ "Tehran University Science and Technology Park unveils products". 10 June 2014. Archived from the original on 16 June 2014. Retrieved 12 June 2014.
^ "Iran breaks ground in tech. park project". PressTV. Archived from the original on 3 January 2012. Retrieved 7 February 2012.
^ "Farsnews". Archived from the original on 14 December 2013. Retrieved 14 December 2013.
^ a b "Iran to establish Energy Technology Park". 16 July 2014. Archived from the original on 21 July 2014. Retrieved 21 July 2014.
^ "A National System of Innovation in the Making : An Analysis of the Role of Government with Respect to Promoting Domestic Innovations in the Manufacturing Sector of Iran". ResearchGate. Retrieved 2 April 2016.
^ "Iran ranked 2nd in percentage of science, engineering graduates". August 2016. Archived from the original on 26 August 2016. Retrieved 28 August 2016.
^ Greg Palast. "Pharmaceuticals: Afghan Ufficiale: La NATO Airstrike Uccide 14". OfficialWire. Archived from the original on 22 February 2012. Retrieved 5 February 2012.
^ "Iran Registered 9,000 Inventions Last Year". Archived from the original on 15 April 2009.
^ "دسترسی غیر مجاز". Archived from the original on 16 July 2014. Retrieved 17 July 2014.
^ [2] retrieved 12 February 2008[dead link]
^ a b "Archived copy" (PDF). Archived (PDF) from the original on 20 December 2016. Retrieved 7 December 2016. CS1 maint: archived copy as title (link)
^ Iran's Small and Medium Enterprises Archived 3 September 2013 at the Wayback Machine. The United Nations Industrial Development Organization (2003). Retrieved 2 February 2010.
^ "OEC – Iran (IRN) Exports, Imports, and Trade Partners". Archived from the original on 20 December 2016. Retrieved 15 December 2016.
^ "PressTV-Iran can become global tech player: UN". Archived from the original on 7 December 2016. Retrieved 7 December 2016.
^ Torbat, Akbar (27 September 2010). "Industrialization and Dependency: the Case of Iran". Economic Cooperation Organization. Archived from the original on 26 July 2011. Retrieved 5 February 2011.
^ "Iran develops 32-bit processor". Eetimes.com. Archived from the original on 29 September 2007. Retrieved 21 October 2011.
^ "BBCPersian.com". BBC. Archived from the original on 18 July 2009. Retrieved 21 October 2011.
^ Sanaray
^ "Iran unveils first online video game". Presstv.com. 30 June 2010. Archived from the original on 2 November 2013. Retrieved 21 October 2011.
^ "Increase in Scientific Research". Archived from the original on 20 June 2009.
^ a b Nature (21 June 2006). "Education and training put Iran ahead of richer states". Nature. 441 (7096): 932. Bibcode:2006Natur.441..932M. doi:10.1038/441932d. PMID 16791171.
^ "'Arrogant powers fear Iran's progress'". PressTV. 22 December 2010. Archived from the original on 26 August 2011. Retrieved 21 October 2011.
^ a b Bahari, Maziar (22 May 2009). "Quarks and the Koran: Iran's Islamic Embrace of Science". Newsweek. Archived from the original on 21 December 2013. Retrieved 21 October 2011.
^ Science and Technology and the Future Development of Societies: International Workshop Proceedings. Nap.edu. 23 January 2001. doi:10.17226/12185. ISBN 978-0-309-11927-6. Archived from the original on 17 October 2012. Retrieved 21 October 2011.
^ Science and Technology and the Future Development of Societies: International Workshop Proceedings. Nap.edu. 11 February 2007. doi:10.17226/12185. ISBN 978-0-309-11927-6. Archived from the original on 17 October 2012. Retrieved 21 October 2011.
^ "Iran unveils Comprehensive Scientific Plan". Payvand.com. 4 January 2011. Archived from the original on 28 June 2011. Retrieved 21 October 2011.
^ "Iran and Global scientific collaboration in the 21st century". Payvand.com. 29 March 2011. Archived from the original on 5 October 2011. Retrieved 21 October 2011.
^ a b "Iran making advancements In biosimilar medicines". Presstv.ir. 20 January 2012. Archived from the original on 23 January 2012. Retrieved 7 February 2012.
^ "Iran´s significant advances in Biosimilar & Biotechnology Medicines". YouTube. Archived from the original on 6 December 2015. Retrieved 7 February 2012.
^ "Iranian Medical Breakthroughs Outstanding". Archived from the original on 23 May 2009.
^ "Hematology – Oncology and BMT Research Center". Archived from the original on 2 November 2004.
^ Alebouyeh M (1993). "Pediatric hematology and oncology in Iran: past and present state". Pediatric Hematology and Oncology. 10 (4): 299–301. doi:10.3109/08880019309029505. PMID 8292512.
^ a b "::: Experimental and Clinical Tranplantation". Ectrx.org. Archived from the original on 26 July 2011. Retrieved 21 October 2011.
^ "Iran makes first artificial lung". Presstv.com. 16 August 2009. Archived from the original on 30 January 2016. Retrieved 21 October 2011.
^ http://roozonline.com/english/016441.shtml. Retrieved 9 July 2006.[dead link]
^ "Iran neuroscience more progressive than Germany, China". Mehr News Agency. 3 March 2014. Archived from the original on 18 April 2015. Retrieved 2 April 2016.
^ "Iran, Russia seek cooperation in cognitive sciences". Mehr News Agency. 18 April 2015. Archived from the original on 18 April 2015. Retrieved 2 April 2016.
^ "Iran ranks first in Mideast and region in ophthalmology". Retrieved 7 February 2012. [dead link]
^ Ayse, Valentine; Nash, Jason John; Leland, Rice (January 2013). The Business Year 2013: Iran. London, UK: The Business Year. p. 157. ISBN 978-1-908180-11-7. Archived from the original on 27 December 2016. Retrieved 16 March 2014.
^ Healy, Melissa (24 January 2011). "Advances in treatment help more people survive severe injuries to the brain". Los Angeles Times. Archived from the original on 28 January 2011. Retrieved 25 January 2011.
^ "Advances in treatment help more people survive severe injuries to the brain". Archived from the original on 29 April 2011.
^ Healy, Melissa (24 January 2011). "Brain injuries: Changes in the treatment of brain injuries have improved survival rate". Baltimore Sun. Archived from the original on 28 June 2011. Retrieved 21 October 2011.
^ a b "Iran". nti.org. Archived from the original on 13 November 2015. Retrieved 2 April 2016.
^ Mahboudi, F; Hamedifar, H; Aghajani, H (2012). "Medical Biotechnology Trends and Achievements in Iran". Avicenna Journal of Medical Biotechnology. 4 (4): 200–5. PMC 3558225. PMID 23407888.
^ "Iranian scientists produce GM rice: Middle East Onlypunjab.com- Onlypunjab.com Latest News". Archived from the original on 7 April 2008. Retrieved 19 May 2006.
^ "BBCPersian.com". BBC. Archived from the original on 10 February 2009. Retrieved 21 October 2011.
^ "Middle East Online". Middle East Online. 30 September 2006. Archived from the original on 28 October 2011. Retrieved 21 October 2011.
^ "Archived copy". Archived from the original on 21 February 2014. Retrieved 10 February 2014. CS1 maint: archived copy as title (link)
^ "Archived copy" (PDF). Archived (PDF) from the original on 2 October 2008. Retrieved 8 October 2008. CS1 maint: archived copy as title (link)
^ "Iran invests $2.5b in stem cell research". payvand.com. Archived from the original on 4 March 2016. Retrieved 2 April 2016.
^ "Fars News Agency :: Iran Ranks 2nd in World in Transplantation of Stem Cells". English.farsnews.com. 28 April 2012. Archived from the original on 28 April 2012. Retrieved 19 January 2013.
^ a b "Iran mass-produces ocular bio-implants". Presstv.ir. 17 October 2010. Archived from the original on 5 October 2012. Retrieved 21 October 2011.
^ "Iran ranks 21st in biotech scientific productions". Mehr News Agency. 7 July 2015. Archived from the original on 4 March 2016. Retrieved 2 April 2016.
^ a b "Iran holds 1st fully domestic laser exhibit". Presstv.com. 8 February 2010. Archived from the original on 30 January 2016. Retrieved 21 October 2011.
^ "Fars News Agency:: Ahmadinejad Stresses Iran's Growing Medical Tourism Industry". English.farsnews.com. 17 January 2012. Archived from the original on 13 February 2012. Retrieved 7 February 2012.
^ "Noargen". noargen.com. Archived from the original on 17 October 2018. Retrieved 30 July 2016.
^ "Iran, 7th in UF6 production – IAEO official". Payvand.com. Archived from the original on 19 October 2011. Retrieved 21 October 2011.
^ https://news.yahoo.com/s/ap/20090411/ap_on_re_mi_ea/ml_iran_nuclear_4. Retrieved 13 April 2009.[dead link]
^ a b "Iran only country to stand up to US, Israel: Velayati". Presstv.ir. 10 May 2012. Archived from the original on 14 June 2012. Retrieved 19 January 2013.
^ "Iranians Master Linac Know-How". Archived from the original on 26 April 2009.
^ John Pike. "Esfahan / Isfahan – Iran Special Weapons Facilities". Globalsecurity.org. Archived from the original on 20 September 2011. Retrieved 21 October 2011.
^ John Pike. "Iran: Nuclear Expert Expresses Worry Over Political Developments". Globalsecurity.org. Archived from the original on 2 November 2012. Retrieved 21 October 2011.
^ a b "Iran builds nuclear fusion reactor". PressTV. 10 February 2011. Archived from the original on 17 September 2011. Retrieved 21 October 2011.
^ "Archived copy". Archived from the original on 21 July 2019. Retrieved 21 July 2019. CS1 maint: archived copy as title (link)
^ "Iranian High Schools Establish Robotics Groups". Payvand.com. Archived from the original on 29 June 2011. Retrieved 21 October 2011.
^ "Iran unveils human-like robot: report". The Sydney Morning Herald. 4 July 2010. Archived from the original on 28 September 2018. Retrieved 28 September 2018.
^ "No. 3720 | Front page | Page 1". Irandaily. Archived from the original on 29 June 2011. Retrieved 21 October 2011.
^ "Iran Has a Dancing, Humanoid Robot". Fox News. 17 August 2010. Archived from the original on 19 August 2010. Retrieved 18 August 2010.
^ Guizzo, Erico (16 August 2010). "Iran's Humanoid Robot Surena Walks, Stands on One Leg". IEEE Spectrum. Archived from the original on 23 October 2011. Retrieved 21 October 2011.
^ "Iran develops auto industry robots". Presstv.com. 14 August 2010. Archived from the original on 30 January 2016. Retrieved 21 October 2011.
^ Beck, Jonathan (25 December 2007). "Report says Iran has built a supercomputer | Iranian – Iran News". Jerusalem Post. Archived from the original on 17 September 2011. Retrieved 21 October 2011.
^ "Tehran Produces Mideast's Most Powerful Supercomputer". Archived from the original on 10 July 2007.
^ "Archived copy". Archived from the original on 25 June 2009. Retrieved 30 June 2009. CS1 maint: archived copy as title (link)
^ FaraKaraNet Web Design Dept. "Iran Information Technology Development Company". En.iraninfotech.com. Archived from the original on 15 August 2011. Retrieved 21 October 2011.
^ "Router Lab, University of Tehran – Home". Web.ut.ac.ir. Archived from the original on 28 September 2011. Retrieved 21 October 2011.
^ a b "Iran unveils indigenous supercomputers". Payvand.com. Archived from the original on 28 June 2011. Retrieved 21 October 2011.
^ "International Science Ranking". scimagojr.com. Retrieved 20 September 2019.
^ "STM Production On Mass Level". Archived from the original on 10 July 2007.
^ "ISI indexed nano-articles ( Article ) | Countries Report". statnano.com. Archived from the original on 10 January 2014. Retrieved 30 December 2013.
^ "Press Release: "Iran Stands 10th in World Ranking of Nanoscience Production "". Nanotechnology Now. Archived from the original on 19 August 2012. Retrieved 19 January 2013.
^ "Iran Nanotechnology Initiative Council". En.nano.ir. Archived from the original on 7 October 2011. Retrieved 21 October 2011.
^ a b "Iran Ranks 15th In Nanotech Articles". Bernama. 9 November 2009. Archived from the original on 10 December 2011. Retrieved 21 October 2011.
^ StatNano Annual Report 2017, StatNano Publications, http://statnano.com/publications/4679 Archived 12 November 2018 at the Wayback Machine, March 2018.
^ "Iran mass producing over 35 nano-tech laboratory equipments". Payvand.com. Archived from the original on 17 October 2012. Retrieved 19 January 2013.
^ "Iran says it has put first dummy satellite in orbit". Reuters. 17 August 2008. Archived from the original on 12 September 2018. Retrieved 12 September 2018.
^ "Iran's Kavoshgar I lifts off for space". Press TV. 4 February 2008. Archived from the original on 8 December 2008. Retrieved 12 November 2008.
^ "Iran sends first homemade satellite into orbit". The Guardian. London. 3 February 2009. Archived from the original on 6 September 2013. Retrieved 23 May 2010.
^ "Mass Production of Zafar Missile Begins". Archived from the original on 4 March 2012. Retrieved 13 April 2009.
^ "Iran unveils domestically manufactured satellite navigation system". Payvand.com. Archived from the original on 18 June 2013. Retrieved 19 January 2013.
^ "Iran to put astronaut in space in 2017". Presstv.com. 5 August 2010. Archived from the original on 9 October 2010. Retrieved 21 October 2011.
^ "Iran to send man into space by 2019". Presstv.com. 23 June 2010. Archived from the original on 30 January 2016. Retrieved 21 October 2011.
^ "Iran receives two major quality control certificates for two of its aviation products". PressTV. 1 May 2012. Archived from the original on 2 May 2012. Retrieved 19 January 2013.
^ "PressTV". Archived from the original on 6 February 2013. Retrieved 6 February 2013.
^ "Iran Reveals Powerhouse Turbo Engine – Prepares Mass Production for Air Force". Archived from the original on 29 August 2016. Retrieved 28 August 2016.
^ "Iran Currency Rate-Iranian Rial Dollar Euro Exchange Rates". Irantour.org. Archived from the original on 25 October 2011. Retrieved 21 October 2011.
^ "Iran Invests in Astronomy". Physics Today. July 2004. Archived from the original on 20 October 2004.
^ "Iran's biggest telescope unveiled". Presstv.com. 27 July 2010. Archived from the original on 30 January 2016. Retrieved 21 October 2011.
^ "Iran and APSCO to create a space situational awareness network". 21 September 2016. Archived from the original on 24 September 2016. Retrieved 24 September 2016.
^ "Iran's electricity output increases by %30". Presstv.com. 29 October 2009. Archived from the original on 13 March 2014. Retrieved 7 February 2012.
^ "Iran plans 1,000MW gas power plant". PressTV. 27 October 2010. Archived from the original on 24 February 2012. Retrieved 7 February 2012.
^ "No. 3914 | Domestic Economy | Page 4". Irandaily. 21 March 2010. Archived from the original on 4 May 2012. Retrieved 7 February 2012.
^ "Self-Sufficiency in Refinery Parts Production". Zawya. 31 May 2011. Retrieved 7 February 2012.
^ "Iran, Besieged by Gasoline Sanctions, Develops GTL to Extract Gasoline from Natural Gas". Oilprice.com. Archived from the original on 7 February 2012. Retrieved 7 February 2012.
^ "Archived copy". Archived from the original on 13 March 2014. Retrieved 13 March 2014. CS1 maint: archived copy as title (link)
^ a b https://www.msn.com/en-xl/news/other/iran-fulfills-dream-as-it-unveils-first-homemade-oil-rig/ar-BB10aF1h
^ [4] Archived 29 March 2011 at the Wayback Machine
^ "Iran Daily – Domestic Economy – 04/29/07". 12 June 2008. Archived from the original on 12 June 2008. Retrieved 7 February 2012.
^ "::.. NIORDC – National Iranian Oil Refining & Distribution Company." Niordc.ir. 14 July 2010. Archived from the original on 9 March 2012. Retrieved 7 February 2012.
^ SHANA (18 July 2010). "Share of domestically made equipments on the rise". Shana.ir. Archived from the original on 9 March 2012. Retrieved 7 February 2012.
^ "PressTV-Iran joins water turbine manufacturers club". Archived from the original on 26 July 2015. Retrieved 27 July 2015.
^ Oil Minister: Iran Self-Sufficient in Drilling Industry Archived 6 June 2013 at the Wayback Machine. Fars News Agency. Retrieved 13 January 2012.
^ Baldwin, Chris (8 February 2008). "Iran starts second atomic power plant: report". Reuters. Archived from the original on 25 March 2008. Retrieved 7 February 2012.
^ "Persian Gulf missile will spoil enemy tactics: Iran Cmdr". PressTV. 24 April 2012. Archived from the original on 25 April 2012. Retrieved 19 January 2013.
^ "Iran displays supercavitating torpedo and semi-submersible". Archived from the original on 4 March 2016. Retrieved 2 April 2016.
^ "Iran unveils new smart weapons system called "BASIR"". PressTV. 30 January 2012. Archived from the original on 29 April 2012. Retrieved 19 January 2013.
^ "Iran to enhance exports of military equipment next year". PressTV. Archived from the original on 14 March 2012. Retrieved 19 January 2013.
^ "Iran Launches Production of Stealth Sub". Fox News. 30 November 2011. Archived from the original on 8 February 2011. Retrieved 25 April 2012.
^ Jillson, Irene (18 March 2013). "The United States and Iran". Science & Diplomacy. 2 (1). Archived from the original on 20 July 2017. Retrieved 18 March 2013.
^ "Iran joins research team for nuclear fusion project". Payvand.com. Archived from the original on 11 March 2013. Retrieved 19 January 2013.
^ "Fars News Agency:: OIC Official Hails Iran's Leading Role in Science, Technology". English.farsnews.com. 16 June 2010. Archived from the original on 1 March 2012. Retrieved 21 October 2011.
^ Sigfried, Tom (2009). "SESAME opens doors to international collaboration". Science News. 175 (2). Washington, DC: Science News Service (published 17 January 2009). p. 32. doi:10.1002/scin.2009.5591750224. Archived from the original on 21 April 2009. Retrieved 24 January 2009.
^ "Kuwait-Iran to review setting up joint university". 7 November 2015. Archived from the original on 6 November 2016. Retrieved 6 November 2016.
^ "Iran, Italy ink MoU on university coop". 21 June 2016. Archived from the original on 6 November 2016. Retrieved 6 November 2016.
^ "University of Tehran, Russia's SPSU to form joint academy". 5 November 2016. Archived from the original on 6 November 2016. Retrieved 6 November 2016.
^ "Joint university of Iran, Germany planned". 13 February 2016. Archived from the original on 6 November 2016. Retrieved 6 November 2016.
^ "Iran-Switzerland to ink academic coop". 23 February 2016. Archived from the original on 6 November 2016. Retrieved 6 November 2016.
^ "Scientific American 50: SA 50 Winners and Contributors". Scientific American. 12 November 2006. Archived from the original on 11 October 2012. Retrieved 20 February 2013.
^ "Maysam Ghovanloo". Google Scholar.
^ "A Rubik's cube at the nanoscale: proteins puzzle with amino acid chains". Archived from the original on 6 January 2019. Retrieved 5 January 2019.
^ "Universal clamping protein stabilizes folded proteins: New insight into how the chaperone protein Hsp70 works". Archived from the original on 6 January 2019. Retrieved 5 January 2019.
^ "US Patent 7641984 – Composite metal foam and methods of preparation thereof". PatentStorm. 5 January 2010. Archived from the original on 5 February 2010. Retrieved 29 March 2010.
^ "Dr. Afsaneh Rabiei". Mae.ncsu.edu. 25 April 2011. Archived from the original on 16 June 2010. Retrieved 21 October 2011.
^ Sarbolouki, Mohammad N.; Sadeghizadeh, Majid; Yaghoobi, Mohammad M.; Karami, Ali; Lohrasbi, Tahmineh (9 May 2000). "Dendrosomes: a novel family of vehicles for transfection and therapy – Sarbolouki – 2000 – Journal of Chemical Technology and Biotechnology – Wiley Online Library". Journal of Chemical Technology & Biotechnology. 75 (10): 919–922. doi:10.1002/1097-4660(200010)75:10<919::AID-JCTB308>3.0.CO;2-S.
^ "First-of-Its-Kind Antenna to Probe the Depths of Mars". Mars.jpl.nasa.gov. 4 May 2005. Archived from the original on 18 October 2011. Retrieved 21 October 2011.
^ Ben Mathis-Lilley (12 August 2014). "A Woman Has Won the Fields Medal, Math's Highest Prize, for the First Time". Slate. Graham Holdings Company. Archived from the original on 14 August 2014. Retrieved 14 August 2014.
^ "2017 Breakthrough Prize in Fundamental Physics". Archived from the original on 6 January 2019. Retrieved 5 January 2019.
^ "Shekoufeh Nikfar A-4370-2009". ResearcherID.com. 11 March 1994. Retrieved 21 October 2011.
^ "Malaysian Biotechnology Information Centre". Bic.org.my. 10 November 2009. Archived from the original on 2 October 2011. Retrieved 21 October 2011.
^ "نشان 'هولوک' برای فیزیکدان ایرانی مقیم بریتانیا". BBC Persian. Archived from the original on 25 August 2014. Retrieved 2 April 2016.
^ "'Top technology' woman announced". BBC News. 3 November 2006. Archived from the original on 3 January 2008. Retrieved 21 October 2011.
^ "Mohammad Abdollahi B-9232-2008". ResearcherID.com. Archived from the original on 23 July 2009. Retrieved 21 October 2011.
^ "Islamic Academy of Sciences IAS- Ibrahim Award Laureates". Ias-worldwide.org. Archived from the original on 28 September 2011. Retrieved 21 October 2011.
^ a b "PressTV". Archived from the original on 3 March 2016. Retrieved 3 February 2013.
^ a b "2005 OST PSA report" (PDF). Archived from the original (PDF) on 2 March 2012. Retrieved 21 October 2011.
^ a b "2005 OST PSA report" (PDF). Archived from the original (PDF) on 14 July 2011. Retrieved 21 October 2011.
^ "Which nation's scientific output is rising fastest? « Soft Machines". Softmachines.org. 29 March 2006. Archived from the original on 27 September 2011. Retrieved 21 October 2011.
^ David Dickson (16 July 2004). "China, Brazil and India lead southern science output". SciDev.Net. Archived from the original on 24 July 2011. Retrieved 21 October 2011.
^ Rezaei, Nima; Mahmoudi, Maryam; Moin, Mostafa (January 2005). "Scientific output of Iran at the threshold of the 21st century". Scientometrics. 62 (2): 239–248. doi:10.1007/s11192-005-0017-5.
^ Nancy Imelda Schafer, ISI (14 March 2002). "Middle Eastern Nations Making Their Mark". Archive.sciencewatch.com. Archived from the original on 3 October 2011. Retrieved 21 October 2011.
^ "Field rankings for Iran". Times Higher Education. 4 March 2010. Archived from the original on 29 August 2012. Retrieved 21 October 2011.
^ "S&E Indicators 2010 – Chapter 5. Academic Research and Development". National Science Foundation (NSF). Archived from the original on 9 January 2012. Retrieved 21 October 2011.
^ "tt05-B". Search.nsf.gov. 4 December 2009. Retrieved 21 October 2011.
^ a b "30 years in science: Secular movements in knowledge creation" (PDF). Science-Metrix. 31 August 2015. Archived from the original (PDF) on 13 September 2012. Retrieved 2 April 2016.
^ a b http://www.science-metrix.com/30years/index.html# Archived 20 February 2010 at the Wayback Machine
^ "Cellcom CEO: Iran's bomb isn't made by peasants". Globes. 14 December 2010. Archived from the original on 2 April 2012. Retrieved 21 October 2011.
^ a b c "2010 Nov/Dec – Middle East Revisited: Iran's Steep Climb – ScienceWatch.com – Thomson Reuters". ScienceWatch.com. 10 January 2011. Archived from the original on 7 November 2011. Retrieved 21 October 2011.
^ "Scientific Collaboration between Canada and Developing Countries" (PDF). Archived (PDF) from the original on 1 December 2017. Retrieved 21 October 2011.
^ "Wall, war, wealth: 30 years in science". Eurekalert.org. 17 February 2010. Archived from the original on 7 June 2011. Retrieved 21 October 2011.
^ "Iran showing fastest scientific growth of any country – science-in-society – 18 February 2010". New Scientist. Archived from the original on 21 October 2011. Retrieved 21 October 2011.
^ Rezaei, Nima; Mahmoudi, Maryam; Moin, Mostafa (January 2005). "Scientometrics, Volume 62, Number 2". Scientometrics. 62 (2): 239–248. doi:10.1007/s11192-005-0017-5.
^ "Archived copy". Archived from the original on 5 April 2010. Retrieved 4 April 2010. CS1 maint: archived copy as title (link)
^ Soroor Ahmed (9 May 2010). "Iran, Turkey Break Scientific Monopoly Has Islam Anything to Do With It?". Radianceweekly.com. Archived from the original on 1 October 2011. Retrieved 21 October 2011.
^ "Israeli study exposes fallacy of Iran threat". Geopolitical Monitor. 12 May 2009. Archived from the original on 14 March 2012. Retrieved 21 October 2011.
^ "S&E Indicators 2010 – Chapter 5. Academic Research and Development – Outputs of S&E Research: Articles and Patents – US National Science Foundation (NSF)". nsf.gov. Archived from the original on 9 June 2011. Retrieved 21 October 2011.
^ "S&E Indicators 2010 – Chapter 5. Academic Research and Development – Sidebars – US National Science Foundation (NSF)". nsf.gov. Archived from the original on 9 January 2012. Retrieved 21 October 2011.
^ "S&E Indicators 2010 – Front Matter – About Science & Engineering Indicators – US National Science Foundation (NSF)". nsf.gov. Archived from the original on 18 October 2011. Retrieved 21 October 2011.
^ "S&E Indicators 2010 – Chapter 6. Industry, Technology, and the Global Marketplace – Worldwide Distribution of Knowledge- and Technology-Intensive Industries – US National Science Foundation (NSF)". nsf.gov. Archived from the original on 9 June 2011. Retrieved 21 October 2011.
^ "Science and Engineering Indicators 2010" (PDF). Archived from the original (PDF) on 6 November 2011. Retrieved 21 October 2011.
^ "Archived copy" (PDF). Archived from the original (PDF) on 19 May 2018. Retrieved 6 April 2018. CS1 maint: archived copy as title (link)
^ "Archived copy" (PDF). Archived (PDF) from the original on 1 May 2018. Retrieved 6 April 2018. CS1 maint: archived copy as title (link)
^ "nsf.gov - S&E Indicators 2014 - US National Science Foundation (NSF)". Archived from the original on 2 April 2016. Retrieved 2 April 2016.
^ "Essential Science Indicators". In-cites.com. Archived from the original on 21 June 2009. Retrieved 21 October 2011.
^ "Archived copy" (PDF). Archived from the original (PDF) on 2 October 2008. Retrieved 26 July 2008. CS1 maint: archived copy as title (link)
^ http://www.scimagojr.com/countryrank...=0&min_type=it. Retrieved 26 July 2008.[dead link]
^ "International Science Ranking". Scimagojr.com. Archived from the original on 6 October 2011. Retrieved 21 October 2011.
^ "International Science Ranking". scimagojr.com. Archived from the original on 30 April 2016. Retrieved 2 April 2016.
^ "Iranian science according to ISI (2008)". Mehrnews.ir. Archived from the original on 27 September 2011. Retrieved 21 October 2011.
^ "September 2008 – Rising Stars". ScienceWatch.com. 7 June 2010. Archived from the original on 17 September 2011. Retrieved 21 October 2011.
^ "Archived copy". Archived from the original on 25 July 2011. Retrieved 26 January 2011. CS1 maint: archived copy as title (link)
^ "Archived copy" (PDF). Archived (PDF) from the original on 20 July 2013. Retrieved 10 May 2014. CS1 maint: archived copy as title (link)
^ "Archived copy". Archived from the original on 13 May 2014. Retrieved 10 May 2014. CS1 maint: archived copy as title (link)
^ "Archived copy" (PDF). Archived (PDF) from the original on 24 September 2015. Retrieved 10 May 2014. CS1 maint: archived copy as title (link)
^ "Observatoire des Sciences et Techniques". Archived from the original on 13 May 2014. Retrieved 10 May 2014.
^ "Archived copy" (PDF). Archived (PDF) from the original on 26 June 2011. Retrieved 8 March 2011. CS1 maint: archived copy as title (link)
^ "The future science will be set by China and Iran". 30 March 2011. Archived from the original on 3 April 2011.
^ "/ Technology / Science – Emerging world on science fast-track". Financial Times. 28 March 2011. Archived from the original on 19 October 2011. Retrieved 21 October 2011.
^ Brown, Mark (5 August 2011). "China, Turkey and Iran emerge as scientific giants (Wired UK)". Wired.co.uk. Archived from the original on 16 October 2011. Retrieved 21 October 2011.
^ "New countries emerge as major players in science". Science Business. 29 March 2011. Archived from the original on 3 October 2011. Retrieved 21 October 2011.
^ "China and Iran challenging science "superpowers" of US and Britain". Daily Mirror. UK. Archived from the original on 20 September 2011. Retrieved 21 October 2011.
^ "Knowledge, networks and nations report". Royal Society. 28 March 2011. Archived from the original on 2 September 2011. Retrieved 21 October 2011.
^ "Iran is top of the world in science growth – science-in-society – 28 March 2011". New Scientist. Archived from the original on 5 October 2011. Retrieved 21 October 2011.
^ "Archived copy" (PDF). Archived from the original (PDF) on 10 June 2014. Retrieved 30 May 2014. CS1 maint: archived copy as title (link)
^ "Launch of World Intellectual Property Indicators – 2015 Edition". Archived from the original on 12 April 2016. Retrieved 2 April 2016.
^ a b "پژوهشگاه مطالعات وزارت آموزش و پرورش". Rie.ir. 17 June 2009. Archived from the original on 17 March 2012. Retrieved 21 October 2011.
^ a b "JamejamOnline.ir". JamejamOnline.ir. Archived from the original on 29 September 2011. Retrieved 21 October 2011.
^ a b "Iran science production shows world's fastest growth". 28 July 2018. Archived from the original on 28 July 2018. Retrieved 28 July 2018.
^ a b "Fars News Agency:: VP Stresses Iran's Astonishing Scientific Achievements". English.farsnews.com. Archived from the original on 13 February 2012. Retrieved 7 February 2012.
Ministry of Science, Research and Technology Of Iran Official Website
Ministry of Information and Communications Technology of Iran Official Website
Iranian scientific publications online digital archive
Best of Iran's 2011 research and technology
Science, Technology and Innovation Policy Review - Iran. United Nations Conference on Trade and Development (2005)
Major Scientific Developments in Iran – Part I Part II Part III (2010 PressTV)
Iran's scientific achievements (2011 PressTV)
Laser Technology advancements in Iran – Part I Part II Part III (2010 PressTV)
Iran's comprehensive scientific plan (2011 PressTV)
Nanotechnology in Iran (July 2011, PressTV)
Nanotechnology in Iran (October 2011, PressTV)
Iran surgical society (2011 PressTV)
A Review of Iran's Scientific Achievement in 2011 (March 2012, PressTV)
Scientific Ranking of Iran (2012 PressTV)
Iran's scientific breakthroughs (2016 PressTV)
|
CommonCrawl
|
October 2008, 20(4): 961-974. doi: 10.3934/dcds.2008.20.961
Lyapunov exponents and the dimension of the attractor for 2d shear-thinning incompressible flow
P. Kaplický 1, and Dalibor Pražák 2,
Charles University, Faculty of Mathematics and Physics, Department of Mathematical Analysis, Sokolovská 83, 186 75 Prague 8
Charles University in Prague, Faculty of Mathematics and Physics, Dept. of Mathematical Analysis, Sokolovská 83, 186 75 Praha 8, Czech Republic
Received December 2006 Revised October 2007 Published January 2008
The equations describing planar motion of a homogeneous, incompressible generalized Newtonian fluid are considered. The stress tensor is given constitutively as $\mathbf{T}=\nu(1+\mu|\mathbf{D}u|^2)^{\frac{p-2}{2}}\mathbf{D}u$, where $\mathbf{D}u$ is the symmetric part of the velocity gradient. The equations are complemented by periodic boundary conditions.
For the solution semigroup the Lyapunov exponents are computed using a slightly generalized form of the Lieb-Thirring inequality and consequently the fractal dimension of the global attractor is estimated for all $p\in(4/3,2]$.
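For orientation, the classical two-dimensional Lieb-Thirring inequality on which such dimension estimates are typically built (the authors use a slightly generalized form, which is not reproduced here) states that for a family $\{\phi_j\}_{j=1}^n \subset H_0^1(\Omega)$, $\Omega \subset \mathbb{R}^2$, orthonormal in $L^2(\Omega)$, and $\rho(x)=\sum_{j=1}^n |\phi_j(x)|^2$, there is a constant $\kappa$ independent of $n$ such that $$ \int_\Omega \rho(x)^2 \, dx \le \kappa \sum_{j=1}^n \int_\Omega |\nabla \phi_j(x)|^2 \, dx. $$ Bounds of this type control the trace of the linearized solution operator, hence the sums of the global Lyapunov exponents, and in turn the fractal dimension of the attractor.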
Keywords: global attractor, Lieb-Thirring inequality, power-law fluids, Lyapunov exponents, fractal dimension, shear-thinning fluids.
Mathematics Subject Classification: Primary: 37L30; Secondary: 35K2.
Citation: P. Kaplický, Dalibor Pražák. Lyapunov exponents and the dimension of the attractor for 2d shear-thinning incompressible flow. Discrete & Continuous Dynamical Systems - A, 2008, 20 (4) : 961-974. doi: 10.3934/dcds.2008.20.961
M. Bulíček, Josef Málek, Dalibor Pražák. On the dimension of the attractor for a class of fluids with pressure dependent viscosities. Communications on Pure & Applied Analysis, 2005, 4 (4) : 805-822. doi: 10.3934/cpaa.2005.4.805
Hyeong-Ohk Bae, Hyoungsuk So, Yeonghun Youn. Interior regularity to the steady incompressible shear thinning fluids with non-Standard growth. Networks & Heterogeneous Media, 2018, 13 (3) : 479-491. doi: 10.3934/nhm.2018021
Alberto Gambaruto, João Janela, Alexandra Moura, Adélia Sequeira. Shear-thinning effects of hemodynamics in patient-specific cerebral aneurysms. Mathematical Biosciences & Engineering, 2013, 10 (3) : 649-665. doi: 10.3934/mbe.2013.10.649
Marilena Filippucci, Andrea Tallarico, Michele Dragoni. Simulation of lava flows with power-law rheology. Discrete & Continuous Dynamical Systems - S, 2013, 6 (3) : 677-685. doi: 10.3934/dcdss.2013.6.677
José A. Carrillo, Yanghong Huang. Explicit equilibrium solutions for the aggregation equation with power-law potentials. Kinetic & Related Models, 2017, 10 (1) : 171-192. doi: 10.3934/krm.2017007
Frank Jochmann. Power-law approximation of Bean's critical-state model with displacement current. Conference Publications, 2011, 2011 (Special) : 747-753. doi: 10.3934/proc.2011.2011.747
Luis Barreira, César Silva. Lyapunov exponents for continuous transformations and dimension theory. Discrete & Continuous Dynamical Systems - A, 2005, 13 (2) : 469-490. doi: 10.3934/dcds.2005.13.469
Asim Aziz, Wasim Jamshed. Unsteady MHD slip flow of non Newtonian power-law nanofluid over a moving surface with temperature dependent thermal conductivity. Discrete & Continuous Dynamical Systems - S, 2018, 11 (4) : 617-630. doi: 10.3934/dcdss.2018036
M. Bulíček, P. Kaplický. Incompressible fluids with shear rate and pressure dependent viscosity: Regularity of steady planar flows. Discrete & Continuous Dynamical Systems - S, 2008, 1 (1) : 41-50. doi: 10.3934/dcdss.2008.1.41
Muhammad Mansha Ghalib, Azhar Ali Zafar, Zakia Hammouch, Muhammad Bilal Riaz, Khurram Shabbir. Analytical results on the unsteady rotational flow of fractional-order non-Newtonian fluids with shear stress on the boundary. Discrete & Continuous Dynamical Systems - S, 2020, 13 (3) : 683-693. doi: 10.3934/dcdss.2020037
Neal Bez, Sanghyuk Lee, Shohei Nakamura, Yoshihiro Sawano. Sharpness of the Brascamp–Lieb inequality in Lorentz spaces. Electronic Research Announcements, 2017, 24: 53-63. doi: 10.3934/era.2017.24.006
Konstantina Trivisa. Global existence and asymptotic analysis of solutions to a model for the dynamic combustion of compressible fluids. Conference Publications, 2003, 2003 (Special) : 852-863. doi: 10.3934/proc.2003.2003.852
Bernard Ducomet, Eduard Feireisl, Hana Petzeltová, Ivan Straškraba. Global in time weak solutions for compressible barotropic self-gravitating fluids. Discrete & Continuous Dynamical Systems - A, 2004, 11 (1) : 113-130. doi: 10.3934/dcds.2004.11.113
Shengfan Zhou, Min Zhao. Fractal dimension of random attractor for stochastic non-autonomous damped wave equation with linear multiplicative white noise. Discrete & Continuous Dynamical Systems - A, 2016, 36 (5) : 2887-2914. doi: 10.3934/dcds.2016.36.2887
Paolo Secchi. An alpha model for compressible fluids. Discrete & Continuous Dynamical Systems - S, 2010, 3 (2) : 351-359. doi: 10.3934/dcdss.2010.3.351
Peter Constantin. Transport in rotating fluids. Discrete & Continuous Dynamical Systems - A, 2004, 10 (1&2) : 165-176. doi: 10.3934/dcds.2004.10.165
Y. Charles Li. Chaos phenotypes discovered in fluids. Discrete & Continuous Dynamical Systems - A, 2010, 26 (4) : 1383-1398. doi: 10.3934/dcds.2010.26.1383
D. Bresch, B. Desjardins, D. Gérard-Varet. Rotating fluids in a cylinder. Discrete & Continuous Dynamical Systems - A, 2004, 11 (1) : 47-82. doi: 10.3934/dcds.2004.11.47
Yukun Song, Yang Chen, Jun Yan, Shuai Chen. The existence of solutions for a shear thinning compressible non-Newtonian models. Electronic Research Archive, 2020, 28: 47-66. doi: 10.3934/era.2020004
Artur Avila. Density of positive Lyapunov exponents for quasiperiodic SL(2, R)-cocycles in arbitrary dimension. Journal of Modern Dynamics, 2009, 3 (4) : 631-636. doi: 10.3934/jmd.2009.3.631
|
CommonCrawl
|
the smallest particle of an element that still has the properties of that element
negatively charged particle that orbits the nucleus of an atom
a pure substance made of one kind of atom
anything that has mass and takes up space (volume)
the particle with no charge in the nucleus
The center core of an atom that contains the protons and neutrons
a chart where all the elements are organized into periods and groups according to their properties
positively charged particle in the nucleus of an atom
3 parts of atom
proton (nucleus)
neutron (nucleus)
electron (orbits nucleus)
Protons and neutrons
What gives an atom its mass?
Protons and electrons
A balanced (neutrally charged) atom has equal numbers of ____ and _____
Number of protons and also number of electrons in a neutral atom
Atomic mass or Atomic weight
Number of protons and neutrons
Charge of the nucleus is positive because of the charge of the protons
Charge of the electron cloud is negative because of the charge of the electrons
Number of electrons
Same as number of protons in a neutral atom
Number of neutrons
Atomic mass minus Atomic number
Number of protons in an atom of magnesium (Mg)
Number of valence electrons in boron, aluminum, gallium, indium
Number of neutrons in a sulfur atom
Atoms of the same element (same number of protons) that have different numbers of neutrons
Positively or negatively charged atoms, formed when the number of electrons differs from the number of protons
Positive ion
An atom that has lost some of its electrons; an atom with more protons than electrons
Negative ion
An atom that has gained extra electrons; an atom with more electrons than protons
1 amu
atomic mass unit - approximately the mass of a proton or a neutron
Valence electrons
Electrons on the outermost energy level of an atom
Number of valence electrons
Elements in the same column have the same number of valence electrons, between 1-8 valence electrons.
The minimum mass of fissionable material needed in a reactor or nuclear bomb that will sustain a chain reaction
What relationship between electron orbits and light emission did Bohr postulate?
What is the ionising power for a beta particle like?
What modalities is pair production used in?
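To make the arithmetic in the cards above concrete (for example, neutrons = atomic mass minus atomic number in a neutral atom), here is a minimal Python sketch; the magnesium values are illustrative, not part of the original card set:

# Particle counts for a neutral atom, using the flashcard rules above.
atomic_number = 12   # magnesium: number of protons
mass_number = 24     # rounded atomic mass (protons + neutrons)

protons = atomic_number
electrons = atomic_number               # neutral atom: electrons = protons
neutrons = mass_number - atomic_number  # atomic mass minus atomic number

print(protons, neutrons, electrons)     # -> 12 12 12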
Birds of prey typically rise upward on thermals. The paths these birds take may be spiral-like. You can model the spiral motion as uniform circular motion combined with a constant upward velocity. Assume that a bird completes a circle of radius 6.00 m every 5.00 s and rises vertically at a constant rate of 3.00 m/s. Determine (a) the bird's speed relative to the ground; (b) the bird's acceleration (magnitude and direction); and (c) the angle between the bird's velocity vector and the horizontal.
To keep the calculations fairly simple, but still reasonable, we shall model a human leg that is 92.0 cm long (measured from the hip joint) by assuming that the upper leg and the lower leg (which includes the foot) have equal lengths and that each of them is uniform. For a 70.0-kg person, the mass of the upper leg would be 8.60 kg, while that of the lower leg (including the foot) would be 5.25 kg. Find the location of the center of mass of this leg, relative to the hip joint, if it is (a) stretched out horizontally and (b) bent at the knee to form a right angle with the upper leg remaining horizontal.
Escape speed at a distance d from the center of a body of mass M is $$ v_{\text{escape}} = \sqrt{\frac{2GM}{d}}. $$ Calculate the escape speed from the moon's surface (moon radius $= 1.74 \times 10^6 \text{ m}$, moon mass $= 7.35 \times 10^{22} \text{ kg}$).
A sprinkler mounted on the ground sends out a jet of water at a 30° angle to the horizontal. The water leaves the nozzle at a speed of 11 m/s. How far does the water travel before it hits the ground?
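As a quick numerical check of two of these exercises, a short Python sketch using the escape-speed formula quoted above and the standard level-ground projectile range formula; the value of the gravitational constant is the usual one and is not given in the problems:

import math

# Escape speed from the Moon's surface: v = sqrt(2*G*M/d)
G = 6.674e-11        # gravitational constant, N m^2 kg^-2
M_moon = 7.35e22     # kg, given in the problem
R_moon = 1.74e6      # m, given in the problem
v_escape = math.sqrt(2 * G * M_moon / R_moon)
print(f"Escape speed from the Moon: {v_escape:.0f} m/s")    # about 2.4 km/s

# Range of the sprinkler jet on level ground: R = v0^2 * sin(2*theta) / g
v0 = 11.0                     # m/s
theta = math.radians(30.0)
g = 9.8                       # m/s^2
horizontal_range = v0 ** 2 * math.sin(2 * theta) / g
print(f"Range of the water jet: {horizontal_range:.1f} m")  # about 10.7 m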
|
CommonCrawl
|
Heterogeneous rates of genome rearrangement contributed to the disparity of species richness in Ascomycota
Ahmad Rajeh 1,2, Jie Lv 3 & Zhenguo Lin 1 (ORCID: orcid.org/0000-0002-8400-9138)
Chromosomal rearrangements have been shown to facilitate speciation through creating a barrier to gene flow. However, it is not known whether heterogeneous rates of chromosomal rearrangement at the genome scale contributed to the huge disparity of species richness among different groups of organisms, which is one of the most remarkable and pervasive patterns on Earth. The largest fungal phylum Ascomycota is an ideal study system to address this question because it comprises three subphyla (Saccharomycotina, Taphrinomycotina, and Pezizomycotina) whose species numbers differ by two orders of magnitude (approximately 1000, 150, and 59,000, respectively).
We quantified rates of genome rearrangement for 71 Ascomycota species that have well-assembled genomes. The rates of inter-species genome rearrangement, which were inferred based on the divergence rates of gene order, are positively correlated with species richness at both the subphylum and class ranks in Ascomycota. This finding is further supported by our quantification of intra-species rearrangement rates based on paired-end genome sequencing data of 216 strains from three representative species, suggesting a difference in intrinsic genome instability among Ascomycota lineages. Our data also show that different rates of imbalanced rearrangements, such as deletions, are a major contributor to the heterogeneous rearrangement rates.
Various lines of evidence in this study support that a higher rate of rearrangement at the genome scale might have accelerated the speciation process and increased species richness during the evolution of Ascomycota species. Our findings provide a plausible explanation for the species disparity among Ascomycota lineages, which will be valuable to unravel the underlying causes for the huge disparity of species richness in various taxonomic groups.
Chromosomal rearrangements, such as translocation, inversion, duplication or deletion events, have profound effects on organismal phenotype through impacting gene expression and disrupting the function of genes [1]. It is a long-held view that chromosomal rearrangements are generally deleterious [2]. Many studies found that chromosomal rearrangements reduced gene flow between populations in a wide range of taxonomic groups, such as sunflowers [3, 4], oilseed rape (Brassica napus) [5], fruit flies [6], shrews [7], mosquitoes [8], house mouse [9] and yeasts [10,11,12,13]. For example, crosses between different natural isolates of fission yeast Schizosaccharomyces pombe with different karyotypes displayed significantly lower hybrid viability than those with similar karyotypes [12]. Other studies also supported that chromosomal translocation is an important contributor to the yeast speciation process [11, 14, 15]. Therefore, the chromosomal speciation theory proposed that chromosomal rearrangements contribute to the speciation process through restricting gene flow between populations [16,17,18,19,20]. Two main models (hybrid-sterility models and suppressed recombination models) have been proposed to explain the mechanisms of chromosomal rearrangements in the process of speciation [21]. A natural question following the chromosomal speciation theory is whether the rates of chromosomal rearrangement at a genome-scale correlate with the rates of speciation, or species richness, among different groups of organisms. The huge disparity in species richness across the tree of life is one of the most remarkable and pervasive patterns on Earth [22]. Some groups, like beetles and flowering plants, are well-known for their enormous species diversity, while most other groups contain far fewer species [23]. It has been proposed that the species richness of a lineage depends on the interplay between evolutionary and ecological processes [24], such as ages of clades [25], net diversification rates (speciation minus extinction) [26], or ecological limits [27]. However, the impact of different rates of genome rearrangement in the formation of species richness disparity has not been systematically investigated.
Compared to the animals and plants, the fungal phylum Ascomycota can serve as an ideal system to study the connection between the rates of genome rearrangement and disparity of species richness. Ascomycota is one of the most diverse and ubiquitous phyla of eukaryotes with ~ 64,000 known species that accounts for approximately 75% of all described fungi [28]. Ascomycota comprises three subphyla (or subdivisions): Saccharomycotina (e.g., Saccharomyces, Pichia, Candida), Taphrinomycotina (e.g., Schizosaccharomyces, Pneumocystis), and Pezizomycotina (e.g., Aspergillus, Neurospora, Peziza) [29]. The species numbers of the three Ascomycota subphyla differ by at least two orders of magnitude. Pezizomycotina is the most species-rich subphylum, comprising nearly 59,000 known species [28]. Saccharomycotina contains ~ 1000 known species that are distributed in 12 families [30]. In contrast, Taphrinomycotina includes only six genera and 150 species [31]. Because the three subphyla have similar ages, which is ~ 500 million years [32], the huge disparity of species richness among them appears to be due to non-age factors, which remains to be elucidated.
The genomes of many Ascomycota species have been sequenced and well assembled, which make it possible to investigate the rates of genome rearrangement in each subphylum and to determine whether they are associated with the disparity in species richness. In addition, at least one well-studied model organism can be found in each Ascomycota subphylum, such as the budding yeast Saccharomyces cerevisiae of Saccharomycotina, Sch. pombe of Taphrinomycotina and Neurospora crassa of Pezizomycotina. The genomes of many populations or strains of the three species have been sequenced by Illumina paired-end sequencing, which can be used to quantify the rates of genome rearrangement under much smaller evolutionary timescales [33,34,35,36]. The rates of genome rearrangement inferred between different species and within a species can provide reliable measurements of genome instability and, together, provide the opportunity to test the correlation between genome instability and species richness. In this study, we used genomes of 71 Ascomycota species to estimate the rates of genome rearrangement between different species in each subphylum and used paired-end sequencing data from 216 strains to calculate rates of genome rearrangement within a species for the three model organisms. We found that the rates of genome rearrangement are positively correlated with species richness at both ranks of subphylum and class. Therefore, our study provides the first genome-scale evidence to support an important role of genome rearrangement in promoting species richness, and suggests that different rates of genome rearrangement at least partly explain the species richness disparity among different Ascomycota lineages. Our findings also provide a new direction in investigating the underlying causes for the disparity of species richness in many other lineages of organisms, such as insects, fishes, and flowering plants.
Inference of orthologous groups and evolutionary history of Ascomycota species examined
Chromosomal rearrangement events inevitably change the order of genes on a chromosome. Therefore, the degree of gene order divergence (GOD) reflects the rate of chromosomal rearrangement [37]. Using GOD also allows us to measure the degree of genome rearrangement between evolutionarily distantly related species [38]. Considering that the divergence times between many species examined in this study may exceed 300 million years [32], using GOD to estimate the degree of genome rearrangement between species is a reasonable and feasible approach. Inference of GOD between two species requires accurate annotation of gene locations in the genome and identification of orthologous genes. To provide an accurate estimation of the rates of genome rearrangement, we only used genomes that are well assembled (fewer than 50 supercontigs) and well annotated (with complete coordinate annotation of protein-coding sequences). A total of 71 genomes, including 39 Pezizomycotina species, 27 Saccharomycotina species, and 5 Taphrinomycotina species, met the above criteria and were retrieved from the NCBI RefSeq database for subsequent analyses (Additional file 1: Table S1). Orthologous groups between every pair of species were identified using InParanoid [39].
To infer the evolutionary relationships of the 71 Ascomycota species examined, we reconstructed a species phylogenetic tree through coalescent-based phylogenetic analyses using one-to-one orthologous groups (see Methods). The Basidiomycota species Ustilago maydis was included as an outgroup for species phylogeny inference. A total of 160 one-to-one orthologous groups (Additional file 2: Table S2) were identified using InParanoid [39]. Three major monophyletic groups corresponding to the three subphyla can be identified from the coalescent-based species tree (Fig. 1). The subphylum Taphrinomycotina appears to be the first lineage to have diverged from the other two subphyla, which is consistent with previous work [40].
Phylogenetic relationships among the 71 Ascomycota species examined. The phylogenetic relationships were inferred from a coalescent-based analysis of 160 orthologous gene sets. The Basidiomycota species Ustilago maydis was used as an outgroup. Only bootstrap support values < 100 are shown. Branch lengths are not drawn to scale. The species numbers of major clades were obtained from [28]. The green dot indicates the occurrence of the whole genome duplication (WGD)
A prerequisite for calculating the rates of genome rearrangement between two species is knowing their divergence time. Due to the lack of fossil records, dating divergence times between fungal species is difficult and inconsistent among studies [41]. The divergence of protein sequences has been commonly used to represent the evolutionary divergence time between two species, based on the assumption that the difference between amino acid sequences increases approximately linearly with time [42]. In addition, it is more accurate to estimate the divergence time between two species using the sequence divergence of a concatenation of many protein sequences than using a single sequence or the average distance over all proteins [43]. Therefore, to approximate the divergence times of all species examined, we calculated sequence distances using the concatenated protein sequences of the 160 orthologous groups (see Methods, Additional file 3: Table S3).
The relationships between gene order divergence and sequence distance in Ascomycota
We first estimated the degree of GOD between two species by calculating the proportion of gene orders, or gene neighborhoods, that are not conserved (pGOD), obtained by dividing the number of lost gene neighborhoods by the total number of gene neighborhoods in the two species (see Methods). Within each subphylum, the pGOD values vary greatly between different species pairs (Additional file 3: Table S3). Specifically, the pGOD values range from 0.03 to 0.796 between the 39 Pezizomycotina species, from 0.012 to 0.966 between the 27 Saccharomycotina species, and from 0.193 to 0.857 between the 5 Taphrinomycotina species. As the divergence times between these species range from several to hundreds of millions of years, a wide range of pGOD values is expected. Considering that the conservation of gene order between the most distantly related species within a subphylum is already close to nonexistent, we did not calculate cross-subphylum gene order divergence.
To infer the relationships between pGOD values and divergence times, we plotted pGOD values against their corresponding sequence distances, which were calculated based on the 160 concatenated protein sequences. As a general pattern, pGOD values increase with sequence distance (Fig. 2). However, the trend of increase differs among the three subphyla. In Pezizomycotina and Saccharomycotina, we observed a non-linear correlation between pGOD and sequence distance. The increase of pGOD plateaus when the sequence distance is large, which is an indication of saturation of pGOD. Such patterns can be fitted by a logarithmic regression model: y = 0.236ln(x) + 1.055 in Pezizomycotina, and y = 0.366ln(x) + 0.911 in Saccharomycotina. In contrast, pGOD values in Taphrinomycotina form a linear correlation with sequence distance (y = 0.7211x + 0.0678, r² = 0.992). Based on the three regression models, the sequence distance required to lose 50% of gene order, or the gene order half-life, is 0.095 in Pezizomycotina, 0.325 in Saccharomycotina and 0.599 in Taphrinomycotina. If we use sequence distance as a proxy for divergence time, the gene order half-life of Pezizomycotina species is ~3.4 times shorter than that of Saccharomycotina species, and ~6.3 times shorter than that of Taphrinomycotina species. Therefore, the large differences in gene order half-life indicate that the rates of gene order divergence are heterogeneous among the three Ascomycota subphyla, and that species-rich lineages have much shorter gene order half-lives than species-poor lineages.
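As a small worked example, the gene order half-life values quoted above can be reproduced directly from the reported regression coefficients. The sketch below is only an illustration of that calculation, not part of the original analysis pipeline; the model names and function are ours.

```python
import math

# Regression models of pGOD (y) on sequence distance (x) reported in the text:
# logarithmic fits for Pezizomycotina and Saccharomycotina, a linear fit for Taphrinomycotina.
models = {
    "Pezizomycotina":   ("log",    0.236,  1.055),   # y = 0.236*ln(x) + 1.055
    "Saccharomycotina": ("log",    0.366,  0.911),   # y = 0.366*ln(x) + 0.911
    "Taphrinomycotina": ("linear", 0.7211, 0.0678),  # y = 0.7211*x + 0.0678
}

def gene_order_half_life(kind, a, b, y=0.5):
    """Sequence distance at which half of the gene neighborhoods are lost."""
    if kind == "log":      # y = a*ln(x) + b  ->  x = exp((y - b)/a)
        return math.exp((y - b) / a)
    return (y - b) / a     # y = a*x + b      ->  x = (y - b)/a

for name, (kind, a, b) in models.items():
    print(f"{name}: half-life ≈ {gene_order_half_life(kind, a, b):.3f}")
# Prints ~0.095, ~0.325 and ~0.599, matching the values quoted in the text.
```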
The correlation between gene order divergence (pGOD) and sequence distance in the three Ascomycota subphyla. Each dot represents a pair of species compared. Protein sequence distance was measured based on concatenated protein sequence alignments of the 160 orthologous groups
Rates of genome rearrangement correlate with species richness among Ascomycota subphyla
The saturated accumulation of gene order divergence in Pezizomycotina and Saccharomycotina suggests that multiple breakages of a gene neighborhood may have occurred between distantly related species. Therefore, the degree of GOD could be underestimated, particularly for distantly related species, if multiple breakages of a gene neighborhood are not considered. If we assume for simplicity that the rates of gene order loss are the same for all neighborhoods, the number of loss events at a given gene neighborhood follows the Poisson distribution [44]. However, this assumption does not hold, because significant variation of pGOD among different chromosomal regions was observed in all subphyla based on our sliding-window analysis of gene order divergence (Additional file 4: Figure S1). Therefore, a correction model also needs to take into consideration the variation of pGOD across different chromosomal regions, similar to the rate variation of amino acid substitutions. It has been recognized that the gamma distribution can effectively model the realistic variation in mutation rates of molecular sequences [45]. Therefore, we applied the gamma distribution to estimate the degree of GOD, called here the gamma distance of GOD (dGOD). The shape (gamma) parameter α was estimated based on the distributions of pGOD values across different chromosomal regions. Three model organisms (S. cerevisiae, N. crassa, and Sch. pombe) were used as representative species to estimate the α parameter for each subphylum (see Methods). The estimated α values were relatively consistent among different comparisons and subphyla, ranging from 2.29 to 3.86 (Additional file 6: Table S4). The median α value of each species (N. crassa: 2.83, S. cerevisiae: 2.69, Sch. pombe: 3.10) was used to calculate dGOD values for each subphylum.
In addition, because the variance of dGOD increases with gene order divergence, the dGOD values for distantly related species may be inaccurate. Therefore, we only included species pairs with a sequence distance < 0.6, which covers most species pairs examined within each class of Ascomycota. By plotting the dGOD values against sequence distance, we found that dGOD correlates linearly with sequence distance in all three subphyla (Fig. 3a). Based on the linear regression models, the rate of genome rearrangement in Pezizomycotina (y = 8.40x - 0.44, r² = 0.84) is 3.31 times higher than in Saccharomycotina (y = 2.54x - 0.001, r² = 0.30), and 8.48 times higher than in Taphrinomycotina (y = 0.99x + 0.086, r² = 0.96), which is similar to the results based on gene order half-life.
Heterogeneous rates of gene order divergence among Ascomycota subphyla. a A linear correlation between the gamma distance of gene order divergence (dGOD) and sequence distance in all three subphyla. b Boxplot showing the different rates of dGOD among the three Ascomycota subphyla. The rate of dGOD was calculated as dGOD per unit of protein sequence distance
To quantify the degree of GOD per unit of divergence time for each subphylum, we normalized dGOD by sequence distance for each pair of species compared. Highly heterogeneous rates of dGOD were detected among the three groups (one-way ANOVA, p < 0.001, Fig. 3b). The average dGOD per unit of genetic distance in Pezizomycotina is 7.26 ± 1.32, which is significantly higher than that of Saccharomycotina (2.54 ± 0.79, p < 0.001, Tukey post hoc test). The average dGOD per unit of genetic distance in Saccharomycotina is also significantly higher than that of Taphrinomycotina (1.40 ± 0.57, p < 0.001), supporting a positive correlation between rates of genome rearrangement and species richness among the three subphyla of Ascomycota.
Rates of genome rearrangement positively correlated with species richness at the rank of class
Our data support a strong correlation between rearrangement rates and species richness at the subphylum rank in Ascomycota. To determine whether the same pattern is also present at lower taxonomic ranks, we compared the rearrangement rates between different classes of Ascomycota species. To reduce the potential impact of small sample size, we only compared classes with at least four species examined in this study. In Pezizomycotina, three classes meet this threshold: Eurotiomycetes, Sordariomycetes and Dothideomycetes (Fig. 1 and Additional file 1: Table S1). The numbers of documented species in the three Pezizomycotina classes are 3400, 10,564, and 19,010, respectively [28]. All the Saccharomycotina species examined belong to Saccharomycetes, the only class of this subphylum, which comprises ~1000 known species [30]. In Taphrinomycotina, only the class Schizosaccharomycetes meets the criterion. Only four species (Schizosaccharomyces pombe, Sch. japonicus, Sch. octosporus and Sch. cryophilus) have been described in Schizosaccharomycetes [46]. It has been suggested that Schizosaccharomycetes diverged from other Taphrinomycotina lineages nearly 500 MYA [46], indicating extremely limited species diversification. As shown in Fig. 4a, the most species-rich class, Dothideomycetes, has the highest rearrangement rate among all classes examined, while the most species-poor class, Schizosaccharomycetes, has the lowest rearrangement rate. By plotting the number of species against the median rearrangement rate of each class (Fig. 4b), a significant positive correlation can be observed between the two variables (Pearson correlation coefficient r = 0.89), supporting that rearrangement rates are also strongly correlated with species richness at the class level in Ascomycota.
Heterogeneous rates of gene order divergence within subphyla. a Rates of genome rearrangement positively correlate with species richness at the class level in Ascomycota. The rates of genome rearrangement were calculated as dGOD per unit of protein sequence distance. b A scatter plot of the species number and the median dGOD per unit of protein sequence distance in the five Ascomycota classes. A positive correlation can be observed between the two variables (Pearson correlation coefficient r = 0.89)
The impacts of whole genome duplication and lifestyle on rates of genome rearrangement
The scatter plot of dGOD against sequence distance shows that the rates of gene order divergence vary noticeably among Saccharomycetes species (Fig. 3a), which is consistent with a previous study [37]. To infer other factors that might influence the rearrangement rates in Saccharomycetes, we further divided the Saccharomycetes species examined into different groups based on their evolutionary relationships. Two monophyletic clades with more than four species can be identified from the species tree in Fig. 1. One of them includes many pathogenic Candida species as well as the non-pathogenic yeast Debaryomyces hansenii; this is the so-called CTG group, named for the reassignment of the CUG codon [47]. The second monophyletic clade, which includes the model organism S. cerevisiae, belongs to the Saccharomyces complex [48]. A whole genome duplication (WGD) occurred in the Saccharomyces complex about 100 MYA [49, 50]. Previous studies have shown that extensive genome rearrangement events have shaped yeast genomes since the WGD [51, 52]. Therefore, we divided the Saccharomyces complex into two groups, WGD and non-WGD, to better understand the impact of WGD on genome stability. The WGD group has a significantly higher rate of gene order divergence than the other two groups (p < 0.001, Fig. 5), while the CTG group has a much higher rate of dGOD than the non-WGD group. Therefore, our results support the idea that whole genome duplication, as well as a pathogenic lifestyle, may have elevated the rates of rearrangement, consistent with previous studies of Candida albicans [37] and pathogenic bacteria [53].
Heterogeneous rates of gene order divergence in the class of Saccharomycetes. The rates of genome rearrangement were calculated as dGOD per unit of protein sequence distance. The Saccharomycetes species that have experienced an ancient whole genome duplication have higher rates of genome rearrangement than the CTG group and non-WGD group
Imbalanced rearrangement as an important contributor to the heterogeneous rates of genome rearrangement
Gene order can be changed by both balanced and imbalanced genome rearrangements. Unlike balanced rearrangements (e.g., inversions and reciprocal translocations), imbalanced rearrangements (deletions and duplications) also change gene dosage or gene content due to the gain or loss of gene copies. To better understand the underlying causes of the heterogeneous rearrangement rates, we estimated the relative contribution of different types of genome rearrangement in each subphylum. If the loss of gene order between two species is due to the absence of one or both orthologous genes in the other species, we considered it a deletion, or imbalanced rearrangement. If the orthologs of two neighboring genes are located on different chromosomes in the other species, we considered it an inter-chromosomal translocation. If the orthologs of two neighboring genes are located on the same chromosome but are not neighboring genes in the other species, the loss is likely due to other balanced rearrangements, such as inversion or intra-chromosomal translocation, which we defined as the "Others" type (a sketch of this classification logic is given below). We quantified the contributions of the three types of rearrangements for all pairwise genome comparisons in each subphylum (Fig. 6a and Additional file 3: Table S3). In most cases, deletions account for over 50% of gene order divergence, suggesting that imbalanced rearrangements play a major role in genome instability. Furthermore, deletions contribute more to gene order divergence in Pezizomycotina (70.5 ± 4.4% on average) than in Saccharomycotina (56.5 ± 6.67%) and Taphrinomycotina (53.2 ± 5.85%). To infer whether the increased contribution of deletions is due to a high rate of gene loss, we calculated the rate of gene loss per unit of sequence distance for each pairwise comparison. In Pezizomycotina, the average rate of gene loss is 1.37 ± 0.63 per unit of sequence distance, which is much higher than in Saccharomycotina (0.61 ± 0.15) and Taphrinomycotina (0.39 ± 0.15) (Fig. 6b). Lineage-specific gene losses have been shown to have the largest effect in lowering the meiotic fertility of hybrids between Saccharomyces sensu stricto species and other yeasts that have inherited the same genome duplication [54]. Therefore, the elevated rate of deletions, or imbalanced rearrangements, in Pezizomycotina species is an important factor underlying their higher rates of genome rearrangement.
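The classification rule described above can be summarized as a small decision function. The sketch below is a simplified illustration of that logic, assuming orthologs have already been mapped and that each gene is represented by its chromosome and ordinal position; the function name and data structures are ours, not taken from the original scripts.

```python
def classify_neighborhood_loss(gene_i, gene_j, ortholog_pos_in_B):
    """
    Classify why the neighborhood of genes i and j (neighbors in species A)
    is not conserved in species B.
    `ortholog_pos_in_B` maps a species-A gene ID to a (chromosome,
    ordinal_position) tuple in species B, or to None if no ortholog exists.
    Returns 'deletion', 'translocation', 'others', or 'conserved'.
    """
    ortho_i = ortholog_pos_in_B.get(gene_i)
    ortho_j = ortholog_pos_in_B.get(gene_j)
    if ortho_i is None or ortho_j is None:
        return "deletion"          # imbalanced rearrangement (gene loss)
    (chrom_i, pos_i), (chrom_j, pos_j) = ortho_i, ortho_j
    if chrom_i != chrom_j:
        return "translocation"     # inter-chromosomal rearrangement
    if abs(pos_i - pos_j) == 1:
        return "conserved"         # neighborhood retained in species B
    return "others"                # inversion or intra-chromosomal movement

# Toy usage: gene b's ortholog moved to another chromosome in species B.
orthologs_B = {"a": ("chr1", 12), "b": ("chr2", 3)}
print(classify_neighborhood_loss("a", "b", orthologs_B))  # -> translocation
```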
Gene loss as a major contributor to the heterogeneous rates of genome rearrangement among Ascomycota subphyla. a Boxplot showing the proportions of the three types of rearrangements that contribute to gene order divergence in each subphylum. b Pezizomycotina species have the highest rate of gene loss per unit of sequence distance among the three subphyla, while Taphrinomycotina species have the lowest rate. Outliers are not drawn in b for better readability
Pezizomycotina has the highest rearrangement rates within a species
The heterogeneous rates of genome rearrangement between different Ascomycota subphyla could be due to differences in intrinsic genome instability, as well as to the constraints of different environmental niches and lifestyles. As the divergence times of different populations within a species are much shorter than those between different species, the impacts of environmental constraints on the rates of genome rearrangement among populations are significantly reduced. Therefore, the rates of genome rearrangement between closely related strains or populations can be used to measure the intrinsic genome instability of a species. Genome rearrangement events between closely related organisms can be identified using paired-end mapping (PEM) based on high-quality paired-end sequencing data [33,34,35,36]. Because paired-end sequencing data for many strains are available for the three well-studied representative organisms, S. cerevisiae in Saccharomycotina, Sch. pombe in Taphrinomycotina and N. crassa in Pezizomycotina, they were used to obtain a reliable measurement of intrinsic genome instability for the three Ascomycota subphyla.
We identified structural variants (SVs) based on Illumina paired-end reads by combining split-read, read-depth, and local-assembly evidence (see Methods). We identified 15,251 SVs from 29 N. crassa strains (525.90 SVs/strain), 13,647 SVs from 155 S. cerevisiae strains (88.05 SVs/strain) and 1218 SVs from 32 Sch. pombe strains (38.06 SVs/strain) (Additional file 7: Table S5 and Additional file 8: Table S6). Considering that the genome sizes of the three species are different (40 Mb in N. crassa and ~12 Mb in S. cerevisiae and Sch. pombe) (Additional file 7: Table S6), and that the divergence times between strains could also be different, the numbers of SVs need to be normalized by genome size and divergence time to compare the rates of genome rearrangement between strains. As the divergence times between most strains are not available, we used their genetic distance as a proxy. The genetic distance was calculated as the frequency of single nucleotide polymorphisms (SNPs) based on their sequencing reads (see Methods). For each strain, we calculated the number of SV breakpoints per 1 million base pairs (Mbp) per unit of genetic distance to infer its rate of intra-species genome rearrangement. Highly heterogeneous rates of intra-species genome rearrangement are observed among the three species (Fig. 7a). Specifically, N. crassa shows significantly faster intra-species genome rearrangement than S. cerevisiae (p < 0.001, Student's t-test), and S. cerevisiae shows significantly faster genome rearrangement than Sch. pombe (p < 0.001). In addition, similar to the results for inter-species rearrangement, deletions account for most of the SVs between different strains in each species (Fig. 7b). Therefore, the patterns of intra-species genome rearrangement in the three subphyla are consistent with the inter-species gene order divergence, suggesting that the heterogeneous rates of genome rearrangement among the three Ascomycota subphyla are likely due to differences in intrinsic genome instability.
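The normalization described above can be written compactly: for each strain, divide the number of SV breakpoints by the genome size in Mbp and by the genetic distance to the reference. The sketch below illustrates this calculation with made-up input values; it is not the original script.

```python
def sv_rate(n_breakpoints, genome_size_bp, genetic_distance):
    """SV breakpoints per Mbp of genome per unit of genetic distance."""
    genome_size_mbp = genome_size_bp / 1e6
    return n_breakpoints / genome_size_mbp / genetic_distance

# Hypothetical example values (not the measured ones): a strain with
# 120 SV breakpoints, a 12-Mb genome and a genetic distance of 0.005
# to the reference genome.
print(sv_rate(120, 12_000_000, 0.005))  # -> 2000.0 breakpoints / Mbp / unit distance
```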
Different intra-species rates of genome rearrangement in the three representative species. The structural variants (SVs) of each strain were identified based on Illumina paired-end sequencing reads and validated by local assembly. a Normalized SV densities support the highest rate of intra-species rearrangement in N. crassa. b Deletion is the most abundant SV type in all three species. DEL: deletion; DUP: tandem duplication; INS: insertion; INV: inversion; TRA: translocation
Transposable elements contributed differently to genome rearrangement between species
Transposable elements (TEs) have been shown to play a crucial role in genome shaping through recombination and expansion events, leading to chromosomal rearrangements and new gene neighborhoods [55,56,57]. In many pathogenic fungi, invasion and expansion of transposable elements have facilitated chromosomal rearrangements and gene duplications [57,58,59]. Recombination between transposable elements is a source of chromosomal rearrangements in the budding yeast S. cerevisiae [60]. Moreover, large genomic changes caused by transposons have been shown to contribute to rapid adaptation to changing environments [56]. Therefore, we investigated the contributions of TEs in the genomes of the 216 strains examined. Most TEs found in fungal genomes are long terminal repeat (LTR) retrotransposons [61, 62]. Unlike animal and plant genomes, most fungal genomes have low TE content. One hundred ninety complete LTR retrotransposons or LTR fragments were identified in N. crassa, which account for only 1.7% of its genome [62]. About 3% of the budding yeast S. cerevisiae genome consists of transposable elements. In the fission yeast Sch. pombe, transposable elements account for only 1.18% of the genome. Massive loss of transposable elements was observed in three fission yeast genomes after their split from Sch. japonicus [46].
In S. cerevisiae, 8331 of the 13,647 SVs (61.1%) were found within 100 bp of LTR retrotransposons or LTR fragments (Additional file 8: Table S6). Among them, 5585 SVs are located within 100 bp of the 50 complete LTR retrotransposons, accounting for 40.9% of all SVs identified in the 155 S. cerevisiae strains. The substantial portion of SVs associated with LTRs in S. cerevisiae is consistent with a previous study based on a survey of spontaneous mutations [63]. In Sch. pombe, only 24.6% (300) of SVs were found within 100 bp of LTRs. This proportion is further reduced to 1.47% (225 SVs) in N. crassa, suggesting that TEs have contributed quite differently to genome rearrangement in the three species. Therefore, TEs might play an important role in generating genome instability in S. cerevisiae, but their role is limited in the other two species, particularly in N. crassa. Furthermore, because TE numbers are highly dynamic even between fungal species within a subphylum [62], the number of TEs is probably not a leading factor for the heterogeneous rates of genome rearrangement among the three Ascomycota subphyla.
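The association reported above amounts to asking, for each SV breakpoint, whether an annotated LTR element lies within 100 bp. A minimal way to express this check, assuming simple per-chromosome coordinate lists rather than the tooling actually used in the study, is sketched below.

```python
def near_ltr(chrom, pos, ltr_annotations, window=100):
    """
    True if the breakpoint at (chrom, pos) lies within `window` bp of any
    annotated LTR interval. `ltr_annotations` maps a chromosome name to a
    list of (start, end) tuples; a linear scan is sufficient at the low TE
    counts typical of fungal genomes.
    """
    return any(start - window <= pos <= end + window
               for start, end in ltr_annotations.get(chrom, []))

# Hypothetical LTR annotations, for illustration only.
ltrs = {"chrI": [(1000, 6000), (20000, 25500)]}
print(near_ltr("chrI", 6080, ltrs))   # True  (80 bp downstream of an LTR)
print(near_ltr("chrI", 15000, ltrs))  # False
```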
In this study, we found that the rates of genome rearrangement are highly heterogeneous among different lineages of fungal species and that there is a positive correlation between the rates of genome rearrangement and species richness. These results offer a plausible explanation for the huge disparity of species richness among the three Ascomycota subphyla and between different classes. Therefore, our study extends the chromosomal theory of speciation to the genome scale. Specifically, the level of chromosomal rearrangement at the genome scale could impact species richness, providing a clue for studying the underlying genetic basis of species richness variation among taxonomic groups. Species richness disparity is a pervasive phenomenon observed in many different lineages [23]. The underlying causes of the disparity of species richness in other lineages of organisms, such as insects, fishes, and flowering plants, remain to be elucidated. Here, we provide several lines of evidence supporting an important role for rates of genome rearrangement in promoting species richness. With the rapid accumulation of genome sequencing data, it will soon become possible to determine the extent to which the heterogeneity of genome rearrangement rates has contributed to the species richness disparity in those animal and plant lineages.
On the other hand, our study also raises some questions for future research. The first question is which major factors have resulted in the highly heterogeneous rates of chromosomal rearrangement among the three Ascomycota lineages. We showed here that the occurrence of whole genome duplication and a pathogenic lifestyle might have elevated gene order divergence and rates of genome rearrangement (Fig. 5). Nearly 90% of the duplicate genes generated by the WGD have been lost since its occurrence [49, 50], which inevitably led to the breakage of a large number of gene neighborhoods and increased the divergence of gene order. Pathogenic species, such as C. albicans, may have accumulated more rearrangements because of selective sweeps during adaptation to narrow ecological niches, or because of less efficient selection due to smaller population sizes [37]. The rates of gene order divergence for the non-WGD, non-pathogenic budding yeasts, such as Kluyveromyces lactis and Zygosaccharomyces rouxii, are not very different from those of fission yeasts, supporting a substantial impact of WGD and pathogenic lifestyle on genome stability. Recombination between non-allelic homologous loci, particularly between transposable elements, is a major underlying mechanism of chromosomal rearrangements [64]. The three Ascomycota subphyla display sharp differences in the abundance of transposable elements. However, as mentioned above, the different abundance of TEs is unlikely to be a leading factor because TE numbers also differ greatly among fungal species within a subphylum [62]. Therefore, it remains largely unclear why the Pezizomycotina species have significantly higher rates of genome rearrangement than the other two lineages.
The second question is how chromosomal rearrangements become fixed in populations, considering their deleterious effects on sexual reproduction. Avelar et al. demonstrated that the deleterious effect of chromosomal rearrangements on sexual reproduction in fission yeast can be compensated by a strong growth advantage during asexual reproduction, the dominant form of reproduction in yeasts, in certain environments [12]. Thus, the fixation of chromosomal rearrangements can be promoted in a local population [65]. Furthermore, the natural life cycle of budding yeasts includes only one sexual cycle every 1000 asexual generations [66], which makes them particularly susceptible to random drift. The genomes of budding yeasts have undergone repeated bottlenecks due to the expansion of local populations [67]. Therefore, we speculate that the fixation of chromosomal rearrangements by random drift may serve as a mechanism to facilitate species diversification. This hypothesis can be tested by future studies using experimental evolution approaches.
Based on comparative analyses of the genomes of 71 Ascomycota species and 216 strains, we found that the rates of genome rearrangement are highly heterogeneous among Ascomycota lineages. The rates of genome rearrangement positively correlate with species richness at both the subphylum and class ranks. Furthermore, our data suggest that different rates of imbalanced rearrangements, such as deletions, are a major contributor to the heterogeneous rearrangement rates. This study supports the idea that a higher rate of genome rearrangement might have accelerated the speciation process and increased species richness during the evolution of Ascomycota species. Our findings provide a plausible explanation for the species richness disparity among Ascomycota lineages, which will be valuable for unraveling the underlying causes of the species richness disparity in many other taxonomic groups.
The genomic sequences, protein sequences and genome annotation of fungal species examined were retrieved from the NCBI Reference Sequence Database (RefSeq) (Additional file 1: Table S1). Raw reads and genome assemblies for 155 S. cerevisiae strains were obtained from Gallone et al. [68]. Raw sequencing reads of 32 Sch. pombe and 29 N. crassa strains were downloaded from the NCBI SRA database (Additional file 7: Table S5).
Identification of orthologous groups and phylogenetic inference of species tree
Pairwise orthologous groups between two species were identified using InParanoid 8 [39]. We identified 160 sets of 1:1 orthologous protein groups from the 71 Ascomycota species and the Basidiomycota species Ustilago maydis, which was used as an outgroup (Additional file 2: Table S2). A 1:1 orthologous protein group was defined here as a gene family that contains only a single copy in each of the 72 species. Multiple sequence alignments were generated using MUSCLE [69]. Poorly aligned regions were trimmed using trimAl v1.2 [70]. A maximum likelihood (ML) analysis was performed for each of the 160 orthologous groups using RAxML v8.2.10 with 100 bootstrap replicates [71] under the PROTGAMMAIJTTF model, as recommended by ProtTest 3.4.2 [72]. Phylogenetic reconstruction was performed with all gene sets using the coalescent method implemented in ASTRAL v5.5.6 [73]. The genetic distance between two species was calculated from the concatenation of the 160 alignments using PHYLIP [74] with the Jones-Taylor-Thornton (JTT) substitution model (Additional file 3: Table S3).
Quantifying gene order divergence
To calculate the divergence of gene order, we first assigned a number to each gene based on its coordinates from the 5′ end to the 3′ end of each chromosome. Specifically, the genome coordinates of genes i and j on the same chromosome of species A are denoted L_Ai and L_Aj, respectively. For example, the first and second genes located on chromosome 1 of species A are given the genome coordinates L_A1 = 10,001 and L_A2 = 10,002. If genes i and j are neighboring genes, their gene order distance in species A is the absolute difference of their genome coordinates, D_Aij = |L_Ai - L_Aj| = 1. Similarly, the gene order distance of the orthologs of genes i and j in species B is D_Bij = |L_Bi - L_Bj|. Therefore, if the threshold to define a conserved gene order is D_ij = 1 and D_Bij = 1, the gene order of i and j between species A and B is considered conserved (c_ij = 1). If D_Bij > 1, their gene order is considered divergent or lost (c_ij = 0). Different conservation thresholds (D_ij = 1–5) were examined and similar patterns were observed; thus, we only present the results based on the threshold D_ij = 1. The proportion of gene order divergence (pGOD) between two genomes was calculated as the ratio of lost gene neighborhoods among all gene neighborhoods:
$$ pGOD = 1 - \frac{\sum c_{ij}}{\left(N_{1} + N_{2} - n_{1} - n_{2}\right)/2}, $$
where N_1 and N_2 are the numbers of genes in the two genomes examined, and n_1 and n_2 are the numbers of chromosomes in the two genomes.
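For concreteness, the pGOD calculation described above can be sketched as follows. The input format (per-species dictionaries of gene coordinates plus an ortholog map) is our own simplification of the InParanoid output, and conserved neighborhoods are counted from the perspective of species A; this is an illustration, not the original code.

```python
def pGOD(genome_A, genome_B, orthologs, threshold=1):
    """
    Proportion of gene order divergence between two genomes.
    genome_A / genome_B: dict gene_id -> (chromosome, ordinal_position).
    orthologs: dict mapping species-A gene IDs to species-B gene IDs.
    """
    def neighborhoods(genome):
        by_chrom = {}
        for gene, (chrom, pos) in genome.items():
            by_chrom.setdefault(chrom, []).append((pos, gene))
        pairs = []
        for genes in by_chrom.values():
            genes.sort()
            pairs.extend((g1, g2) for (_, g1), (_, g2) in zip(genes, genes[1:]))
        return pairs

    pairs_A, pairs_B = neighborhoods(genome_A), neighborhoods(genome_B)

    conserved = 0
    for g1, g2 in pairs_A:                      # c_ij, evaluated from species A
        o1, o2 = orthologs.get(g1), orthologs.get(g2)
        if o1 in genome_B and o2 in genome_B:
            (c1, p1), (c2, p2) = genome_B[o1], genome_B[o2]
            if c1 == c2 and abs(p1 - p2) <= threshold:
                conserved += 1

    # Denominator of the pGOD formula: (N1 + N2 - n1 - n2) / 2, i.e. the
    # average number of gene neighborhoods, equals the mean pair count.
    return 1 - conserved / ((len(pairs_A) + len(pairs_B)) / 2)
```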
Although the loss of a gene neighborhood occurs at a very low rate per generation, multiple breakages in the same gene neighborhood may have occurred if the divergence time between two species is sufficiently long. Moreover, the rates of gene order divergence are heterogeneous across different chromosomal regions, and the rate of gene order divergence at a given neighborhood is assumed to follow the gamma distribution. Therefore, the gamma distance of gene order (dGOD) can be estimated by Eq. 2:
$$ dGOD=\alpha \left[{\left(1- pGOD\right)}^{-1/\alpha }-1\right], $$
where α is the shape, or gamma, parameter. The α values were estimated based on the distribution of pGOD values across all chromosomal regions. Specifically, we used a sliding-window analysis to obtain the pGOD values of all chromosomal regions between two genomes. To mitigate large variations due to small sample size, we used a window size of 50 genes with a step of 25 genes. The α value was then calculated using the MASS package in R (Additional file 6: Table S4).
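A minimal sketch of this correction, assuming the sliding-window pGOD values are already available, is given below. It uses scipy's gamma fit as a stand-in for the MASS fitting routine in R used in the actual analysis, and the input values are hypothetical.

```python
import numpy as np
from scipy import stats

def estimate_alpha(window_pGOD_values):
    """Fit a gamma distribution to per-window pGOD values and return the shape parameter."""
    alpha, loc, scale = stats.gamma.fit(window_pGOD_values, floc=0)
    return alpha

def dGOD(pGOD, alpha):
    """Gamma distance of gene order divergence (Eq. 2)."""
    return alpha * ((1 - pGOD) ** (-1 / alpha) - 1)

# Hypothetical per-window pGOD values for one pair of genomes (illustration only).
windows = np.array([0.22, 0.31, 0.18, 0.40, 0.27, 0.35, 0.25, 0.30])
a = estimate_alpha(windows)
print(round(a, 2), round(dGOD(0.28, a), 3))
```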
Sequencing read processing, genome assembly, and estimation of genetic distances between genomes
We assessed the quality of the raw reads using FastQC v0.11.3 (https://www.bioinformatics.babraham.ac.uk/projects/fastqc/). BBtools v35.51 (http://jgi.doe.gov/data-and-tools/bbtools/) was used to filter reads with low-quality bases. Both read-ends were trimmed by 5 bp. 3′-ends were trimmed until there were at least 5 consecutive bases with quality above 20. We filtered any reads with average quality below 20, more than 3 uncalled bases, or length shorter than 50 after trimming. De novo assembly of each strain's genome was carried out using SPAdes v3.6.2 [75]. We only used strains with sequencing coverage higher than 50X (Additional file 7: Table S5). Genetic distance (Additional file 7: Table S5) between each strain and the reference genome of respective species was estimated from genome assembly using Mash v1.1.1 [76].
Identification and validation of structural variations based on paired-end sequencing data
Paired-end reads were aligned to the reference genomes using BWA-MEM v0.7.15 [77]. Only uniquely mapped reads, defined here as having a mapping quality above 20, were used. Initial structural variants (SVs) were identified using GRIDSS v1.4.0 [78], which utilizes local-assembly, split-read, and read-depth evidence. SV calls meeting one or more of the following criteria were filtered out: size less than 100 bp, GRIDSS quality score less than 1000, left end not assembled, right end not assembled, or location within 30 kbp of a telomeric or centromeric region. Because many deletions and insertions only included transposable elements, we also filtered out deletion, insertion, inversion and duplication calls that had 90% or more reciprocal overlap with a transposable element, using BEDtools v2.26.0 [79] and a custom script.
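The filtering criteria above translate into a simple predicate over each SV call. The sketch below is an illustrative re-statement of those rules, not the actual filtering script; the record field names and the assumption of a single breakpoint position per call are ours.

```python
def keep_sv(sv, excluded_regions, te_overlap_fraction):
    """
    Return True if an SV call passes the filters described in the text.
    `sv` is a dict with keys: size, quality, left_assembled, right_assembled,
    chrom, pos, svtype. `excluded_regions` lists (chrom, start, end) tuples for
    telomeres/centromeres; `te_overlap_fraction` is the precomputed reciprocal
    overlap of the call with transposable elements.
    """
    if sv["size"] < 100 or sv["quality"] < 1000:
        return False
    if not (sv["left_assembled"] and sv["right_assembled"]):
        return False
    for chrom, start, end in excluded_regions:
        if sv["chrom"] == chrom and start - 30_000 <= sv["pos"] <= end + 30_000:
            return False                      # within 30 kbp of a telomere/centromere
    if sv["svtype"] in {"DEL", "INS", "INV", "DUP"} and te_overlap_fraction >= 0.9:
        return False                          # call is mostly a transposable element
    return True
```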
To further filter false-positive SV calls and delineate breakpoints, we performed local assembly for all candidate SVs, inspired by Malhotra et al. [80]. Read pairs within 1 kbp of candidate breakpoints were extracted using SAMtools v1.3.1 [81] and re-synchronized using a custom script. De novo assembly of breakpoint-spanning contigs was performed using the overlap-based (OLC) assembler Fermi-lite [82], considering that the number of reads in a 2 kbp window can be relatively small. Contigs were aligned to the reference using YAHA v0.1.83 [83], which is optimized for finding split alignments. Split alignments were allowed 75% overlap in the contig. SV validity was then inferred from the alignment results. A deletion was considered valid if the distance between split alignments was larger in the reference than in the contig by at least 100 bp. Similarly, an insertion was considered valid if the distance between split alignments was larger in the contig than in the reference by at least 100 bp. An inversion was considered valid if a sequence larger than 100 bp aligned to its reverse complement. A duplication was judged valid if the split alignments had an overlap in the reference at least 100 bp larger than their overlap in the contig. A translocation was judged valid if the split alignments came from two different chromosomes. Secondary alignments were considered when validating duplications and translocations (YAHA parameter "-FBS Y"). For deletions, insertions and tandem duplications, we required that breakpoints reported by local assembly fall within +/− 100 bp of the GRIDSS breakpoints. For translocations, we required that one breakpoint reported by local assembly fall within +/− 100 bp of a GRIDSS breakpoint, and that the other breakpoint reported by local assembly be on the same chromosome as the other GRIDSS breakpoint.
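These validation rules can likewise be expressed as a small decision function over the split-alignment geometry. The representation below (reference gap, contig gap, overlaps, reverse-aligned length) is a simplification we introduce for illustration; it is not the validation script itself.

```python
def validate_sv(svtype, ref_gap=0, contig_gap=0, same_chromosome=True,
                reverse_aligned_len=0, ref_overlap=0, contig_overlap=0):
    """
    Decide whether a candidate SV is supported by the split alignment of a
    locally assembled contig, following the rules described in the text.
    ref_gap / contig_gap: distance between the two split alignments in the
    reference and in the contig; ref_overlap / contig_overlap: their overlap;
    reverse_aligned_len: length of contig sequence aligned to the reverse
    complement of the reference.
    """
    if svtype == "DEL":
        return ref_gap - contig_gap >= 100
    if svtype == "INS":
        return contig_gap - ref_gap >= 100
    if svtype == "INV":
        return reverse_aligned_len > 100
    if svtype == "DUP":
        return ref_overlap - contig_overlap >= 100
    if svtype == "TRA":
        return not same_chromosome
    return False

# Example: a 350-bp gap in the reference spanned by a contiguous contig
# supports a deletion call.
print(validate_sv("DEL", ref_gap=350, contig_gap=0))  # -> True
```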
GOD:
Gene order divergence
WGD:
Whole genome duplication
Petkov PM, Graber JH, Churchill GA, DiPetrillo K, King BL, Paigen K. Evidence of a large-scale functional organization of mammalian chromosomes. PLoS Biol. 2007;5(5):e127. author reply e128
Stankiewicz P, Lupski JR. Genome architecture, rearrangements and genomic disorders. Trends Genet. 2002;18(2):74–82.
Rieseberg LH, Whitton J, Gardner K. Hybrid zones and the genetic architecture of a barrier to gene flow between two sunflower species. Genetics. 1999;152(2):713–27.
Lai Z, Nakazato T, Salmaso M, Burke JM, Tang S, Knapp SJ, Rieseberg LH. Extensive chromosomal repatterning and the evolution of sterility barriers in hybrid sunflower species. Genetics. 2005;171(1):291–303.
Pires JC, Zhao J, Schranz ME, Leon EJ, Quijada PA, Lukens LN, Osborn TC. Flowering time divergence and genomic rearrangements in resynthesized Brassica polyploids (Brassicaceae). Biol J Linn Soc. 2004;82(4):675–88.
Masly JP, Jones CD, Noor MA, Locke J, Orr HA. Gene transposition as a cause of hybrid sterility in drosophila. Science (New York, NY). 2006;313(5792):1448–50.
Basset P, Yannic G, Brunner H, Hausser J. Restricted gene flow at specific parts of the shrew genome in chromosomal hybrid zones. Evolution. 2006;60(8):1718–30.
Stump AD, Pombi M, Goeddel L, Ribeiro JM, Wilder JA, della Torre A, Besansky NJ. Genetic exchange in 2La inversion heterokaryotypes of Anopheles gambiae. Insect Mol Biol. 2007;16(6):703–9.
Panithanarak T, Hauffe HC, Dallas JF, Glover A, Ward RG, Searle JB. Linkage-dependent gene flow in a house mouse chromosomal hybrid zone. Evolution. 2004;58(1):184–92.
Ryu SL, Murooka Y, Kaneko Y. Reciprocal translocation at duplicated RPL2 loci might cause speciation of Saccharomyces bayanus and Saccharomyces cerevisiae. Curr Genet. 1998;33(5):345–51.
Delneri D, Colson I, Grammenoudi S, Roberts IN, Louis EJ, Oliver SG. Engineering evolution to study speciation in yeasts. Nature. 2003;422(6927):68–72.
Avelar AT, Perfeito L, Gordo I, Ferreira MG. Genome architecture is a selectable trait that can be maintained by antagonistic pleiotropy. Nat Commun. 2013;4:2235.
Zanders SE, Eickbush MT, Yu JS, Kang JW, Fowler KR, Smith GR, Malik HS. Genome rearrangements and pervasive meiotic drive cause hybrid infertility in fission yeast. elife. 2014;3:e02630.
Fischer G, James SA, Roberts IN, Oliver SG, Louis EJ. Chromosomal evolution in Saccharomyces. Nature. 2000;405(6785):451–4.
Liti G, Barton DB, Louis EJ. Sequence diversity, reproductive isolation and species concepts in Saccharomyces. Genetics. 2006;174(2):839–50.
White MJD. Modes of speciation. San Francisco, CA: W. H. Freeman; 1978.
Noor MA, Grams KL, Bertucci LA, Reiland J. Chromosomal inversions and the reproductive isolation of species. Proc Natl Acad Sci U S A. 2001;98(21):12084–8.
Rieseberg LH. Chromosomal rearrangements and speciation. Trends Ecol Evol. 2001;16(7):351–8.
Navarro A, Barton NH. Chromosomal speciation and molecular divergence--accelerated evolution in rearranged chromosomes. Science (New York, NY). 2003;300(5617):321–4.
Coyne JA, Orr HA. Speciation. Sunderland, Mass: Sinauer Associates; 2004.
Faria R, Navarro A. Chromosomal speciation revisited: rearranging theory with pieces of evidence. Trends Ecol Evol. 2010;25(11):660–9.
Kreft H, Jetz W. Global patterns and determinants of vascular plant diversity. Proc Natl Acad Sci U S A. 2007;104(14):5925–30.
Rabosky DL, Slater GJ, Alfaro ME. Clade age and species richness are decoupled across the eukaryotic tree of life. PLoS Biol. 2012;10(8):e1001381.
Butlin R, Bridle J, Schluter D. Speciation and patterns of diversity. Cambridge, UK. New York: Cambridge University Press; 2009.
McPeek MA, Brown JM. Clade age and not diversification rate explains species richness among animal taxa. Am Nat. 2007;169(4):E97–106.
Rabosky DL, Donnellan SC, Talaba AL, Lovette IJ. Exceptional among-lineage variation in diversification rates during the radiation of Australia's most diverse vertebrate clade. Proc Biol Sci R Soc. 2007;274(1628):2915–23.
Rabosky DL. Ecological limits and diversification rate: alternative paradigms to explain the variation in species richness among clades and regions. Ecol Lett. 2009;12(8):735–43.
Ainsworth GC, Bisby GR, Kirk PM, et al. Ainsworth & Bisby's dictionary of the fungi. 10th ed. Wallingford, Oxon: CABI; 2008.
Hibbett DS, Binder M, Bischoff JF, Blackwell M, Cannon PF, Eriksson OE, Huhndorf S, James T, Kirk PM, Lucking R, et al. A higher-level phylogenetic classification of the Fungi. Mycol Res. 2007;111(Pt 5):509–47.
Hittinger CT, Rokas A, Bai FY, Boekhout T, Goncalves P, Jeffries TW, Kominek J, Lachance MA, Libkind D, Rosa CA, et al. Genomics and the making of yeast biodiversity. Curr Opin Genet Dev. 2015;35:100–9.
Schoch CL, Sung GH, Lopez-Giraldez F, Townsend JP, Miadlikowska J, Hofstetter V, Robbertse B, Matheny PB, Kauff F, Wang Z, et al. The Ascomycota tree of life: a phylum-wide phylogeny clarifies the origin and evolution of fundamental reproductive and ecological traits. Syst Biol. 2009;58(2):224–39.
Prieto M, Wedin M. Dating the diversification of the major lineages of Ascomycota (Fungi). PLoS One. 2013;8(6):e65576.
Medvedev P, Stanciu M, Brudno M. Computational methods for discovering structural variation with next-generation sequencing. Nat Methods. 2009;6(11 Suppl):S13–20.
Chen K, Wallis JW, McLellan MD, Larson DE, Kalicki JM, Pohl CS, McGrath SD, Wendl MC, Zhang Q, Locke DP, et al. BreakDancer: an algorithm for high-resolution mapping of genomic structural variation. Nat Methods. 2009;6(9):677–81.
Hormozdiari F, Alkan C, Eichler EE, Sahinalp SC. Combinatorial algorithms for structural variation detection in high-throughput sequenced genomes. Genome Res. 2009;19(7):1270–8.
Korbel JO, Abyzov A, Mu XJ, Carriero N, Cayting P, Zhang Z, Snyder M, Gerstein MB. PEMer: a computational framework with simulation-based error models for inferring genomic structural variants from massive paired-end sequencing data. Genome Biol. 2009;10(2):R23.
Fischer G, Rocha EPC, Brunet F, Vergassola M, Dujon B. Highly variable rates of genome rearrangements between hemiascomycetous yeast lineages. PLoS Genet. 2006;2(3):253–61.
Huynen MA, Bork P. Measuring genome evolution. Proc Natl Acad Sci U S A. 1998;95(11):5849–56.
Sonnhammer EL, Ostlund G. InParanoid 8: orthology analysis between 273 proteomes, mostly eukaryotic. Nucleic Acids Res. 2015;43(Database issue):D234–9.
James TY, Kauff F, Schoch CL, Matheny PB, Hofstetter V, Cox CJ, Celio G, Gueidan C, Fraker E, Miadlikowska J, et al. Reconstructing the early evolution of Fungi using a six-gene phylogeny. Nature. 2006;443(7113):818–22.
Rolland T, Dujon B. Yeasty clocks: dating genomic changes in yeasts. C R Biol. 2011;334(8-9):620–8.
Nei M. Molecular evolutionary genetics. New York: Columbia University Press; 1987.
Nei M, Xu P, Glazko G. Estimation of divergence times from multiprotein sequences for a few mammalian species and several distantly related organisms. Proc Natl Acad Sci U S A. 2001;98(5):2497–502.
Nei M, Kumar S. Molecular evolution and phylogenetics. Oxford. New York: Oxford University Press; 2000.
Uzzell T, Corbin KW. Fitting discrete probability distributions to evolutionary events. Science (New York, NY). 1971;172(3988):1089–96.
Rhind N, Chen Z, Yassour M, Thompson DA, Haas BJ, Habib N, Wapinski I, Roy S, Lin MF, Heiman DI, et al. Comparative functional genomics of the fission yeasts. Science (New York, NY). 2011;332(6032):930–6.
Butler G, Rasmussen MD, Lin MF, Santos MA, Sakthikumar S, Munro CA, Rheinbay E, Grabherr M, Forche A, Reedy JL, et al. Evolution of pathogenicity and sexual reproduction in eight Candida genomes. Nature. 2009;459(7247):657–62.
Kurtzman CP, Robnett CJ. Phylogenetic relationships among yeasts of the 'Saccharomyces complex' determined from multigene sequence analyses. FEMS Yeast Res. 2003;3(4):417–32.
Kellis M, Birren BW, Lander ES. Proof and evolutionary analysis of ancient genome duplication in the yeast Saccharomyces cerevisiae. Nature. 2004;428(6983):617–24.
Wolfe KH, Shields DC. Molecular evidence for an ancient duplication of the entire yeast genome. Nature. 1997;387(6634):708–13.
Gordon JL, Byrne KP, Wolfe KH. Additions, losses, and rearrangements on the evolutionary route from a reconstructed ancestor to the modern Saccharomyces cerevisiae genome. PLoS Genet. 2009;5(5):e1000485.
Seoighe C, Wolfe KH. Extent of genomic rearrangement after genome duplication in yeast. Proc Natl Acad Sci U S A. 1998;95(8):4447–52.
Liang Y, Hou X, Wang Y, Cui Z, Zhang Z, Zhu X, Xia L, Shen X, Cai H, Wang J, et al. Genome rearrangements of completely sequenced strains of Yersinia pestis. J Clin Microbiol. 2010;48(5):1619–23.
Scannell DR, Byrne KP, Gordon JL, Wong S, Wolfe KH. Multiple rounds of speciation associated with reciprocal gene loss in polyploid yeasts. Nature. 2006;440(7082):341–5.
Rachidi N, Barre P, Blondin B. Multiple ty-mediated chromosomal translocations lead to karyotype changes in a wine strain of Saccharomyces cerevisiae. Mol Gen Genet. 1999;261(4-5):841–50.
Crombach A, Hogeweg P. Chromosome rearrangements and the evolution of genome structuring and adaptability. Mol Biol Evol. 2007;24(5):1130–9.
Raffaele S, Kamoun S. Genome evolution in filamentous plant pathogens: why bigger can be better. Nat Rev Microbiol. 2012;10(6):417–30.
de Jonge R, Bolton MD, Kombrink A, van den Berg GCM, Yadeta KA, Thomma BPHJ. Extensive chromosomal reshuffling drives evolution of virulence in an asexual pathogen. Genome Res. 2013;23(8):1271–82.
Manning VA, Pandelova I, Dhillon B, Wilhelm LJ, Goodwin SB, Berlin AM, Figueroa M, Freitag M, Hane JK, Henrissat B, et al. Comparative genomics of a plant-pathogenic fungus, Pyrenophora tritici-repentis, reveals transduplication and the impact of repeat elements on pathogenicity and population divergence. G3 (Bethesda, Md). 2013;3(1):41–63.
Mieczkowski PA, Lemoine FJ, Petes TD. Recombination between retrotransposons as a source of chromosome rearrangements in the yeast Saccharomyces cerevisiae. DNA Repair (Amst). 2006;5(9-10):1010–20.
Bleykasten-Grosshans C, Neuveglise C. Transposable elements in yeasts. C R Biol. 2011;334(8-9):679–86.
Muszewska A, Hoffman-Sommer M, Grynberg M. LTR retrotransposons in fungi. PLoS One. 2011;6(12):e29425.
Lynch M, Sung W, Morris K, Coffey N, Landry CR, Dopman EB, Dickinson WJ, Okamoto K, Kulkarni S, Hartl DL, et al. A genome-wide view of the spectrum of spontaneous mutations in yeast. Proc Natl Acad Sci U S A. 2008;105(27):9272–7.
Chen KS, Manian P, Koeuth T, Potocki L, Zhao Q, Chinault AC, Lee CC, Lupski JR. Homologous recombination of a flanking repeat gene cluster is a mechanism for a common contiguous gene deletion syndrome. Nat Genet. 1997;17(2):154–63.
Lande R. The fixation of chromosomal rearrangements in a subdivided population with local extinction and colonization. Heredity (Edinb). 1985;54(Pt 3):323–32.
Tsai IJ, Bensasson D, Burt A, Koufopanou V. Population genomics of the wild yeast Saccharomyces paradoxus: quantifying the life cycle. Proc Natl Acad Sci U S A. 2008;105(12):4957–62.
Dujon B. Yeast evolutionary genomics. Nat Rev Genet. 2010;11(7):512–24.
Gallone B, Steensels J, Prahl T, Soriaga L, Saels V, Herrera-Malaver B, Merlevede A, Roncoroni M, Voordeckers K, Miraglia L, et al. Domestication and divergence of Saccharomyces cerevisiae beer yeasts. Cell. 2016;166(6):1397–410. e1316
Edgar RC. MUSCLE: a multiple sequence alignment method with reduced time and space complexity. BMC Bioinformatics. 2004;5:113.
Capella-Gutierrez S, Silla-Martinez JM, Gabaldon T. trimAl: a tool for automated alignment trimming in large-scale phylogenetic analyses. Bioinformatics. 2009;25(15):1972–3.
Stamatakis A. RAxML-VI-HPC: maximum likelihood-based phylogenetic analyses with thousands of taxa and mixed models. Bioinformatics. 2006;22(21):2688–90.
Abascal F, Zardoya R, Posada D. ProtTest: selection of best-fit models of protein evolution. Bioinformatics. 2005;21(9):2104–5.
Mirarab S, Reaz R, Bayzid MS, Zimmermann T, Swenson MS, Warnow T. ASTRAL: genome-scale coalescent-based species tree estimation. Bioinformatics. 2014;30(17):i541–8.
Felsenstein J. PHYLIP - phylogeny inference package (version 3.2). Cladistics. 1989;5:164–6.
Bankevich A, Nurk S, Antipov D, Gurevich AA, Dvorkin M, Kulikov AS, Lesin VM, Nikolenko SI, Pham S, Prjibelski AD, et al. SPAdes: a new genome assembly algorithm and its applications to single-cell sequencing. J Comput Biol. 2012;19(5):455–77.
Ondov BD, Treangen TJ, Melsted P, Mallonee AB, Bergman NH, Koren S, Phillippy AM. Mash: fast genome and metagenome distance estimation using MinHash. Genome Biol. 2016;17(1):132.
Li H, Durbin R. Fast and accurate long-read alignment with Burrows-Wheeler transform. Bioinformatics. 2010;26(5):589–95.
Cameron DL, Schroder J, Penington JS, Do H, Molania R, Dobrovic A, Speed TP, Papenfuss AT. GRIDSS: sensitive and specific genomic rearrangement detection using positional de Bruijn graph assembly. Genome Res. 2017; 27(12):2050–2060.
Quinlan AR, Hall IM. BEDTools: a flexible suite of utilities for comparing genomic features. Bioinformatics. 2010;26(6):841–2.
Malhotra A, Lindberg M, Faust GG, Leibowitz ML, Clark RA, Layer RM, Quinlan AR, Hall IM. Breakpoint profiling of 64 cancer genomes reveals numerous complex rearrangements spawned by homology-independent mechanisms. Genome Res. 2013;23(5):762–76.
Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, Marth G, Abecasis G, Durbin R, 1000 Genome Project Data Processing Subgroup. The sequence alignment/map format and SAMtools. Bioinformatics. 2009;25(16):2078–9.
Li H. Exploring single-sample SNP and INDEL calling with whole-genome de novo assembly. Bioinformatics. 2012;28(14):1838–44.
Faust GG, Hall IM. YAHA: fast and flexible long-read alignment with optimal breakpoint detection. Bioinformatics. 2012;28(19):2417–24.
We would like to thank Dr. Wen-Hsiung Li for his valuable comments on improving the manuscript. This study is supported by a Saint Louis University start-up fund to Z. L.
The work was supported by Saint Louis University startup fund to Z.L.
The data sets supporting the results of this article are included in the article and its Additional files 1, 2, 3, 4, 5, 6, 7 and 8.
Department of Biology, Saint Louis University, St. Louis, MO, 63103, USA
Ahmad Rajeh & Zhenguo Lin
Department of Computer Science, Saint Louis University, St. Louis, MO, 63103, USA
Ahmad Rajeh
Department of BioSciences, Rice University, Houston, TX, 77005, USA
Jie Lv
Zhenguo Lin
Conceived and designed the study: ZL. Analyzed the data: ZL, AR and JL. Wrote the manuscript: ZL and AR. All authors have read and approved the manuscript.
Correspondence to Zhenguo Lin.
This study does not involve humans, human data, or animals, nor does it report health-related outcomes.
Table S1. List of species examined in this study and genome assembly version. (XLSX 16 kb)
Table S2. List of genes in 160 orthologous groups identified in this study. (XLSX 134 kb)
Table S3. Raw data of gene order divergence and sequence distance. (XLSX 294 kb)
Figure S1. Examples of significant variation of pGOD among different chromosomal regions in Saccharomyces cerevisiae, Schizosaccharomyces pombe and Neurospora crassa. A sliding-window analysis was performed to calculate the pGOD values in different chromosomal regions. Each window includes 50 genes and moves by 25 genes. (PNG 240 kb)
Figure S2. Examples of distribution of pGOD values in Saccharomyces cerevisiae, Schizosaccharomyces pombe and Neurospora crassa. The α value was calculated using the MASS package in R. (PNG 80 kb)
Table S4. Gamma parameters estimated in all pairwise comparison between Saccharomyces cerevisiae, Schizosaccharomyces pombe and Neurospora crassa and other species. (XLSX 11 kb)
Table S5. List of sequencing data information, assembly, and genetic distance for 216 strains in Saccharomyces cerevisiae, Schizosaccharomyces pombe and Neurospora crassa. (XLSX 25 kb)
Table S6. List of structural variants identified from the genome sequencing data of 216 strains. (XLSX 2424 kb)
Rajeh, A., Lv, J. & Lin, Z. Heterogeneous rates of genome rearrangement contributed to the disparity of species richness in Ascomycota. BMC Genomics 19, 282 (2018). https://doi.org/10.1186/s12864-018-4683-0
Chromosomal rearrangements
Species richness
Taphrinomycotina
Pezizomycotina
Saccharomycotina
Decline of protein structure rigidity with interatomic distance
Oliviero Carugo, ORCID: orcid.org/0000-0002-2924-9016
Protein structural rigidity was analyzed in a non-redundant ensemble of high-resolution protein crystal structures by means of the Hirshfeld test, according to which the components (uX and uY) of the B-factors of two atoms (X and Y) along the interatomic direction are related to their degree of rigidity: the atoms may move as a rigid body if uX = uY, and they cannot if uX ≠ uY.
It was observed that the degree of rigidity diminishes as the number of covalent bonds intercalated between the two atoms (d_seq) increases, while it is rather independent of the Euclidean distance between the two atoms (d): for a given value of d_seq, the difference between uX and uY does not depend on d. No additional rigidity decline is observed when d_seq ≥ ~30, and the corresponding upper limit of the difference is very modest, close to 0.015 Å.
This suggests that protein flexibility is not fully described by B-factors, which capture only partially the wide range of distortions that proteins can undergo.
Molecular flexibility is inherent to thermodynamic stability and chemical reactivity [1]. In globular proteins, for example, the residual mobility of solvent-exposed side chains and loops may provide a favorable entropic contribution to the folding free energy [2, 3], and it may tune the thermodynamics of substrate access into active sites, and of course of product release [4, 5], as well as of binding partner recognition [2, 6].
Studies on protein flexibility have addressed numerous molecular features by means of several methodological approaches. Atomic resolution crystallography allowed the characterization of conformationally disordered atoms [7,8,9,10]. Time resolved crystallography provided three-dimensional models of dynamical changes that occur during chemical reactions [11]. Molecular dynamics studies allowed simulations of macromolecular movements in silico [12, 13] and the estimation of thermodynamic state functions [14]. Other computational approaches, like normal mode analysis, have been used to identify the structural distortions of a protein about an equilibrium position [15].
Another source of information about protein flexibility is provided by the atomic displacement parameters, usually referred to as B-factors (B), which monitor the positional displacements of the atoms around their equilibrium positions [16, 17]. B-factors have been used in numerous studies to analyze protein dynamics [18, 19]. Although they are, in general, determined and refined isotropically, they are particularly informative in atomic resolution protein crystal structures, where they can be refined anisotropically due to the abundance of experimental diffraction data [20].
Here a new and hitherto unexplored aspect is considered: how flexibility decreases when the separation between atoms increases. It can be expected that flexibility is minimal for covalently bound atoms and, more generally, for atoms close to each other, since close interatomic contacts tend to be rigid [21]; this is reflected in molecular modelling by the attribute of hardness given to covalent bonds and angles [22]. On the contrary, distant atoms are not expected to behave as a rigid body and their movements can be, to some extent at least, uncorrelated.
The degree of flexibility can be monitored by means of the Hirshfeld test [23], which employs the B-factors: for a rigid contact between two atoms X and Y, the components of the B-factors of the two atoms along the interatomic direction (uX and uY) must be identical. This means that their difference (Delta-u) must be equal to zero Å:
$$\Delta u = \left| u_{X} - u_{Y} \right| = 0~\text{\AA}$$
On the contrary, Delta-u far from zero Å is expected for atoms that do not behave as a rigid body and have displacements and dispersions around their average locations independent of each other.
Atom pair separation is defined in two different ways. On the one hand, it is the Euclidean distance (d) between the atoms and, on the other, it is the number of covalent bonds intercalated between the atoms (covalent separation, d_seq).
It is observed that Delta-u values increase if d or d_seq increases. However, the dependence of Delta-u on d is likely to be due to the fact that d is correlated with d_seq. In fact, for a given value of d_seq, Delta-u does not depend on d.
Moreover, it is observed that Delta-u tends to reach its maximal value at d_seq ≈ 30 and to be nearly constant for d_seq > 30. This maximal value is considerably smaller if the Delta-u values are computed with anisotropic B-factors than with isotropic B-factors, suggesting that isotropic B-factors overestimate protein flexibility.
The maximal Delta-u values are however very modest, close to 0.015 Å, indicating that B-factors are rather unrelated, on average, to the stereochemical rearrangements that are known to confer high flexibility to proteins, for example when buried water molecules are exchanged with the external solvent.
Delta-u values, Euclidean distances and covalent separations were computed for 6,794,404 pairs of atoms in 30 crystal structures, with covalent separation up to 50.
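The covalent separation d_seq can be obtained as the length of the shortest path between the two atoms in the covalent-bond graph. The Python sketch below illustrates one possible way of doing this with a breadth-first search; it is not the locally written software used in this study, and the bond-list input and names are purely illustrative.

from collections import deque

def covalent_separation(bonds, source, target):
    # bonds: iterable of (atom_i, atom_j) covalent bonds; returns the number of
    # bonds along the shortest path between source and target, or None if disconnected.
    graph = {}
    for a, b in bonds:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    distances = {source: 0}
    queue = deque([source])
    while queue:
        atom = queue.popleft()
        if atom == target:
            return distances[atom]
        for neighbour in graph.get(atom, ()):
            if neighbour not in distances:
                distances[neighbour] = distances[atom] + 1
                queue.append(neighbour)
    return None

# Toy example: in a linear chain A-B-C-D, d_seq(A, D) = 3.
print(covalent_separation([("A", "B"), ("B", "C"), ("C", "D")], "A", "D"))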
The relationships between Delta-u and Euclidean distance or covalent separation are shown in Fig. 1. Several interesting observations can be made.
Fig. 1 Relationships between isotropic and anisotropic Delta-u (Å) and Euclidean distance (Å; a) and covalent separation (b), and relationship between Euclidean distance (Å) and covalent separation (c; error bars show the estimated standard deviations)
First, the flexibility of atom pairs is clearly overestimated by isotropic Delta-u. This is not unexpected, since anisotropically refined B-factors represent better the positional scatter of the atoms. It is however surprising that the difference between isotropic and anisotropic Delta-u is so large: for atoms 30–35 Å apart, the isotropic Delta-u (ca. 0.08–0.09 Å) is about 4 times larger than its anisotropic counterpart (ca. 0.02 Å); and for atoms separated by 30 covalent bonds it (ca. 0.065 Å) is about 4 times larger than the anisotropic Delta-u (ca. 0.015 Å).
Second, a difference between Euclidean distance and covalent separation appears too. The Delta-us, both isotropic and anisotropic, tend to increase with Euclidean distance, and the increase is rather linear for Euclidean distances larger than 10 Å (Fig. 1a). On the contrary, they do not increase monotonically when the covalent separation increases (Fig. 1b): in this case, the Delta-us reach a plateau when the covalent separation exceeds 25–30 covalent bonds. The different relationships between Delta-u and Euclidean distance, on the one hand, and covalent separation, on the other, might reflect the fact that the relationship between Euclidean distance and covalent separation is not linear (Fig. 1c).
Third, and this is not surprising, the rigidity of atom pairs decreases when the distance—either the Euclidean or the covalent separation—between them increases. It is obviously expected that covalently bound atoms present a rigid body behavior while distant atoms may present a considerable flexibility, limited by the natural compactness of the globular proteins.
Detailed data on the relationships of anisotropic Delta-u with Euclidean distance and covalent separation are shown in Table 1 (an analogous table is not reported here for isotropic Delta-u, since the same trends are observed). It appears that the dependence of Delta-u on the two distances is different. Given a certain covalent separation, Delta-u is substantially independent of the Euclidean distance. For example, at a short covalent separation of 6, Delta-u oscillates only slightly, between 0.007 and 0.008 Å, as the Euclidean distance goes from 3.5 to 7.5 Å; and at a longer covalent separation of 20, Delta-u oscillates only between 0.010 and 0.013 Å as the Euclidean distance goes from 3.5 to 21.5 Å.
Table 1 Anisotropic Delta-u values (× 1000; Å) as a function of the Euclidean distance (horizontal, Å) and of the covalent separation (vertical)
This suggests that the rigidity decline is strongly connected to the covalent separation and its dependence on Euclidean distance is simply a consequence of the fact that Euclidean distance is somehow related to covalent separation.
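For illustration only, a Table 1-like summary can be produced by binning the atom pairs by covalent separation and Euclidean distance and averaging Delta-u within each bin; the Python sketch below assumes a pandas DataFrame with hypothetical column names (delta_u in Å, d in Å, d_seq) and is not the software used for the published table.

import numpy as np
import pandas as pd

def table1_layout(pairs, d_bin_width=2.0):
    # pairs: DataFrame with columns 'delta_u' (Å), 'd' (Å) and 'd_seq' (covalent separation).
    binned = pairs.copy()
    binned["d_bin"] = np.floor(binned["d"] / d_bin_width) * d_bin_width
    table = binned.pivot_table(index="d_seq", columns="d_bin",
                               values="delta_u", aggfunc="mean")
    return 1000.0 * table   # Table 1 reports Delta-u values multiplied by 1000

# Usage sketch (file name is hypothetical): table1_layout(pd.read_csv("atom_pairs.csv"))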
To verify that these trends are significant, although this is an observational study based on data available at the Protein Data Bank, the 30 crystal structures examined in this manuscript were randomly divided into three equally populated groups. The relationships between Delta-u and covalent separation determined in the three subsets (Additional file 1: Figure S1) are very similar. This strongly supports the validity of the trends described above, though any deeper interpretation is hindered, at least in part, by the fact that the estimated errors of the B-factors deposited in the Protein Data Bank are unknown—as are the estimated errors on the atomic coordinates.
The level of rigidity of protein structures can be estimated by the variable Delta-u (see Eqs. 3 and 5), the value of which is expected to be equal to zero for atom pairs that behave as a rigid body. Obviously, this occurs when the two atoms are covalently bound and very close to each other, while Delta-u values larger than zero are expected for atoms very distant from each other.
Actually, Delta-u values are observed to increase progressively as the interatomic distance increases, whether the interatomic distance is measured as the Euclidean distance (Fig. 1a) or as the number of covalent bonds intercalated between the two atoms (Fig. 1b).
However, the dependence of Delta-u on Euclidean distance is probably a consequence of the fact that Euclidean distance depends on covalent separation (Fig. 1c). In fact, as shown in Table 1, Delta-u is rather independent of Euclidean distance at each value of covalent separation—each line in the table. This suggests that protein rigidity is largely due to the covalent structure and less to non-bonding interactions amongst moieties far from each other along the sequence. Certainly, covalent connections between atoms separated by numerous backbone covalent bonds can exist, for example disulfide bonds or contacts mediated by metal cations, and they contribute to confer some rigidity to the protein. However, most of the contacts between atoms separated by numerous backbone covalent bonds involve van der Waals interactions, which apparently do not confer much rigidity to the protein despite the high protein packing efficiency. Further studies are nevertheless necessary to reach a deeper understanding of this phenomenon.
At large distances, Delta-u approaches an upper value close to 0.06–0.07 Å when computed with isotropic B-factors (Eq. 5), which is considerably larger than the upper value close to 0.015–0.02 Å computed with anisotropic B-factors (Eq. 3). This clearly indicates that protein flexibility is enormously overestimated by isotropic B-factors.
These Delta-u values are nevertheless remarkably small. This is quite surprising since globular proteins are known to be quite flexible, even if they are compact. For example, water molecules buried in the protein core easily exchange with the bulk solvent by opening transient channels that allow the entrance/exit of water [24, 25]. Also, aromatic side-chains are known to flip, with 180° rotation, at high flip rates [26].
All these processes require atomic displacements that are considerably larger than the upper Delta-u limits observed in the present communication.
It can be hypothesized that these considerable local deformations, which allow water molecules to enter and exit the protein core and allow aromatic ring flipping, are due to conformational transitions that do not depend on progressive rigidity loss. For example, it is possible to imagine side-chains that pass from one stable, rotameric conformation to another, both being relatively rigid; or it is possible to imagine a rearrangement of the hydrogen bond network, with stable hydrogen bonds being broken and replaced by equally stable, new hydrogen bonds. The classic hinge motions of rigid structural moieties might also be disconnected from B-factors [27].
Therefore, even if B-factors have long been known to monitor conformational strain [28], with larger B-factors being associated with dihedral angles far from their stable values, it is possible to hypothesize that B-factors cannot provide information about transitions from a stable structure to a similarly stable but different conformation, which are often referred to as conformational sub-states [29,30,31].
A metaphor for this phenomenon can be an auditorium, all the seats of which are occupied by spectators that can exchange their seats: before and after the exchange, the ensemble of spectators is rather compact and rigid, while a large flexibility is observed when the spectators move from one seat to another, exchanging their positions.
Interestingly, this trend seems to be independent of protein dimension, type of fold, secondary structure composition or biochemical function. As an example, Fig. 2 shows the relationship between Delta-u and covalent separation for three proteins, two of which are enzymes (human aldose reductase, 1us0, and human parvulin, a small peptidyl-prolyl isomerase, 3ui4) and one of which is not (Trichoderma reesei hydrophobin, 2b97, a small fungal protein that spontaneously forms amphiphilic monolayers). They adopt different fold types, a TIM-barrel for 1us0, essentially a β-barrel for 2b97, and an α-β-α roll for 3ui4, and one of them, 1us0, is much larger than the others. These proteins show similar trends and there are no enormous differences between them; furthermore, the difference between the two enzymes is comparable to their difference from hydrophobin, and the largest protein (1us0) is intermediate between the other two.
Fig. 2 Relationship between isotropic and anisotropic Delta-u and inter-atomic covalent separation for three proteins: chain A of 1us0 (human aldose reductase in complex with NADP (NDP) and an inhibitor (LFT)), chain A of 2b97 (Trichoderma reesei hydrophobin), and chain A of 3ui4 (human parvulin 14)
Crystallographic B-factors are largely unable to monitor transitions amongst conformational sub-states. This has been observed, implicitly, in some previous studies. For example, according to a recent study, protein conformational entropy, defined as the movements of certain groups in proteins, is not monitored quantitatively by crystallographic B-factors [32]. Also, it was observed that crystallographic B-factors underestimate the positional heterogeneity in protein crystals [33].
These observations can be explained as follows. Crystal structures show the dominant and most stable protein conformation while alternative sub-states remain undetected, especially at low resolution. Some conformational disorder can be observed and refined experimentally only at high resolution [7,8,9,10]. B-factors therefore describe the positional scattering around one conformation and do not reflect the more complex conformational flexibility of proteins. Moreover, B-factors do not monitor only the atomic oscillations around equilibrium positions but depend also on crystal heterogeneity in space and time. Crystal structures are in effect representations of the electron density maps of the asymmetric unit, which are the average electron density maps computed (1) on all the asymmetric units present in the crystal and (2) with diffraction data measured over a certain time lapse.
As a consequence, B-factors can be computed quite successfully in very small-molecule crystals, independently of diffraction data, where they monitor atomic fluctuations quite effectively. The vibrational component of the atomic displacement parameter can be computed with quantum chemistry computations in crystals with very small asymmetric units. For example, density functional theory (DFT)-based methods were used for crystalline l-alanine and crystalline urea [34], and density functional perturbation theory was applied to stishovite and quartz [35]. Recently, B-factors have been computed from ab initio phonon frequencies and displacements for elemental crystals of magnesium, ruthenium, cadmium and silicon [36].
On the contrary, protein crystallographic B-factors are affected by too many non-vibrational components and cannot be predicted by computing the energy of the environment of the atoms by means of quantum chemistry approaches, though it has been shown that protein B-factors are somehow correlated to packing density [37]. In this regard, it is noteworthy that B-factors have also been used to estimate atomic coordinate errors [38, 39], based on the diffraction precision index of Cruickshank [40]. Consequently, they cannot be reproduced reliably in silico, independently of diffraction data.
It must also be remembered that most protein crystal structure information is produced at low temperature—100 K—and that a different flexibility might be detected at room temperature or at physiological temperature [41]. However, cryo-crystallography is the predominant form of macromolecular crystallography, given its advantages in reducing radiation damage, especially at modern, high-brilliance synchrotron beam lines [42,43,44].
The above discussion does not imply that crystallographic B-factors are of limited value and disconnected from the physicochemical nature of proteins. For example, information about local flexibility can be extracted from B-factor analyses, for example for protein-DNA complexes [45], cold adaptation of psychrophilic enzymes has been shown to be closely related to B-factors [46, 47], and a procedure called B-Fit has been proposed for increasing the thermostability of enzymes and allows their use in chemistry and biotechnology [19]. More in general, protein regions characterized by large B-factors can be considered to be very mobile, though low B-factors do not necessarily indicate rigidity; it clearly appears that protein flexibility is not fully described by B-factors, which capture only partially the wide range of distortions that proteins can afford.
While covalently bound atoms form a rigid structural unit, this rigidity, monitored through the Hirshfeld Delta-u [23], is progressively lost as the number of covalent bonds intercalated between two atoms increases, up to about 30 covalent bonds, after which Delta-u is rather constant, close to 0.065 Å if the rigidity is estimated with isotropic B-factors, or close to 0.015 Å if the rigidity is estimated with anisotropic B-factors. On the one hand, this clearly shows how rigidity is underestimated in isotropically refined crystal structures and, on the other hand, both upper Delta-u values are smaller than expected, suggesting that B-factors capture only partially the wide range of distortions that proteins can afford.
Thirty crystal structures were extracted from the Protein Data Bank [48, 49] according to the following criteria: redundancy was reduced to 40% pairwise sequence identity [50, 51] in a set of crystal structures determined at 90–110 K and refined at a resolution of 0.8 Å or better (Additional file 1: Table S1).
The Delta-u values were computed with anisotropic B-factors (U)
$$\mathbf{U} = \begin{bmatrix} U_{11} & U_{12} & U_{13} \\ U_{21} & U_{22} & U_{23} \\ U_{31} & U_{32} & U_{33} \end{bmatrix}$$
$$\Delta u = \left| \mathbf{n}^{T} \mathbf{U}_{X} \mathbf{n} - \mathbf{n}^{T} \mathbf{U}_{Y} \mathbf{n} \right|$$
where n is the unit vector from atom X to atom Y. These values are referred to as anisotropic Delta-u, to distinguish them from the isotropic Delta-u, computed with the isotropic B-factor equivalent, defined as
$$B = 8\pi^{2}\,\frac{U_{11} + U_{22} + U_{33}}{3},$$
by means of the following expression.
$$\Delta u = \left| u_{X} - u_{Y} \right| = \left| \sqrt{\frac{B_{X}}{8\pi^{2}}} - \sqrt{\frac{B_{Y}}{8\pi^{2}}} \right|$$
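For illustration, the two expressions translate almost literally into code. The Python sketch below is not the locally written software mentioned in the next sentence; it assumes that the atomic coordinates and the 3 × 3 anisotropic displacement matrices (in Å²) have already been parsed from the deposited structure files.

import numpy as np

def anisotropic_delta_u(U_X, U_Y, r_X, r_Y):
    # Component of each U along the X->Y direction, then their absolute difference.
    n = np.asarray(r_Y, dtype=float) - np.asarray(r_X, dtype=float)
    n /= np.linalg.norm(n)                       # unit vector from atom X to atom Y
    return abs(n @ np.asarray(U_X) @ n - n @ np.asarray(U_Y) @ n)

def isotropic_delta_u(B_X, B_Y):
    # |u_X - u_Y| with u = sqrt(B / (8 pi^2)), using the isotropic B-factor equivalents.
    return abs(np.sqrt(B_X / (8 * np.pi**2)) - np.sqrt(B_Y / (8 * np.pi**2)))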
All computations were performed with locally written software.
All data generated or analysed during this study are included in this published article [and its Additional file 1].
Zhao R, Qi F, Zhang R-Q, Van Hove MA. How does the flexibility of molecules affect the performance of molecular rotors? J Phys Chem. 2018;122:25067–74.
Landry SJ, Taher A, Georgopoulos C, van de Vies SM. Interplay of structure and disorder in cochaperonin mobile loops. Proc Natl Acad Sci USA. 1996;93:11622–7.
Vihinen M. Relationship of protein flexibility to thermostability. Protein Eng. 1987;1:477–80.
Heringa J, Argos P. Strain in protein structures as viewed through nonrotameric side chains: I. Their position and interaction. Proteins. 1999;37:30–43.
Daniel RM, Dunn RV, Finney JL, Smith JC. The role of dynamics in enzyme activity. Annu Rev Biophys Biomol Struct. 2003;32:69–92.
Forrey C, Douglas JF, Gilson MK. The fundamental role of flexibility on the strength of molecular binding. Soft Matter. 2012;8:6385–92.
Longhi S, Czjzek M, Cambillau C. Messages from ultrahigh resolution crystal structures. Curr Opin Struct Biol. 1998;8:730–7.
Longhi S, Czjzek M, Lamzin V, Nicolas A, Cambillau C. Atomic resolution (1.0 Å) crystal structure of Fusarium solani cutinase: stereochemical analysis. J Mol Biol. 1997;8:730–7.
Dauter Z, Lamzin VS, Wilson KS, Dauter Z, Wilson KS. The benefits of atomic resolution. Curr Opin Struct Biol. 1997;7:681–8.
Sevcik J, Lamzin VS, Dauter Z, Wilson KS. Atomic resolution data reveal flexibility in the structure of RNase Sa. Acta Crystallogr. 2002;D58:1307–13.
Orville AM. Recent results in time resolved serial femtosecond crystallography at XFELs. Curr Op Struct Biol. 2020;65:193–208.
Roux B, Allen T, Bernèche S, Im W. Theoretical and computational models of biological ion channels. Q Rev Biophys. 2004;37:15–103.
Stank A, Kokh DB, Fuller JC, Wade RC. Protein Binding Pocket Dynamics. Acc Chem Res. 2016;49:809–15.
Polyansky AA, Zubac R, Zagrovic B. Estimation of conformational entropy in protein-ligand interactions: a computational perspective. Methods Mol Biol. 2012;819:327–53.
Bauer JA, Pavlovic J, Bauerova-Hlinkova V. Normal mode analysis as a routine part of a structural investigation. Molecules. 2019;24:3293.
Dunitz JD, Shomaker V, Trueblood KN. Interpretation of atomic displacement parameters from diffraction studies of crystals. J Phys Chem. 1988;92:856–67.
Trueblood KN, Bürgi H-B, Burzlaff H, Dunitz JC, Gramaccioli CM, Schulz HH, et al. Atomic displacement parameter nomenclature. Report of a subcommittee on atomic displacement parameter nomenclature. Acta Cryst. 1996;A52:770–81.
Carugo O. Atomic displacement parameters in structural biology. Amino Acids. 2018;50:775–86. https://doi.org/10.1007/s00726-018-2574-y.
Sun Z, Liu Q, Qu G, Feng Y, Reetz MT. Utility of B-factors in protein science: interpreting rigidity, flexibility, and internal motion and engineering thermostability. Chem Rev. 2019;119:1626–65.
Merritt EA. Expanding the model: anisotropic displacement parameters in protein structure refinement. Acta Cryst. 1999;D55:1109–17.
Slater JC. Quantum theory of matter. New York: McGraw-Hill; 1968.
Holtje H-D, Sippl W, Rognan D, Folkers G. Molecular Modelling. Basic Principles and Applications. Weinheim: Wiley-VCH Verlag; 2003.
Hirshfeld FL. Can X-ray data distinguish bonding effects from vibrational smearing? Acta Cryst. 1976;A32:239–44.
Carugo O. Structure and function of water molecules buried in the protein core. Curr Protein Pept Sci. 2015;16:259–65.
Carugo O. Statistical survey of the buried waters in the Protein Data Bank. Amino Acids. 2016;48:193–202. https://doi.org/10.1007/s00726-015-2064-4.
Weininger U, Moding K, Akke M. Ring flips revisited: (13)C relaxation dispersion measurements of aromatic side chain dynamics and activation barriers in basic pancreatic trypsin inhibitor. Biochemistry. 2014;53:4519–25.
Gerstein M, Lesk AM, Chothia C. Structural mechanisms for domain movements in proteins. Biochemistry. 1994;33:6739–49.
Carugo O, Argos P. Correlation between side chain mobility and conformation in protein structures. Protein Eng. 1997;10:777–87.
Hartmann H, Parak F, Steigemann W, Petsko GA, Ponzi DR, Frauenfelder H. Conformational substates in a protein: structure and dynamics of metmyoglobin at 80 K. Proc Natl Acad Sci USA. 1982;79:4967–71.
Stein DL. A model of protein conformational substates. Proc Natl Acad Sci USA. 1985;82:3670–2.
Ramanathan A, Savol A, Burger V, Chennubhotla CS, Agarwal PK. Protein conformational populations and functionally relevant substates. Acc Chem Res. 2014;47:149–56.
Caldararu O, Kumar R, Oksanen E, Logan DT, Ryde U. Are crystallographic B-factors suitable for calculating protein conformational entropy? Phys Chem Chem Phys. 2019;21:18149.
Kuzmanic A, Pannu NS, Zagrovic B. X-ray refinement significantly underestimates the level of microscopic heterogeneity in biomolecular crystals. Nat Commun. 2014;5:3220.
Madsen AØ, Civalleri B, Ferrabone M, Pascale F, Erba A. Anisotropic displacement parameters for molecular crystals from periodic Hartree-Fock and density functional theory calculations. Acta Cryst. 2013;A69:309–21.
Lee C, Gonze X. Ab initio calculation of the thermodynamic properties and atomic temperature factors of SiO2 α-quartz and stishovite. Phys Rev. 1995;B51:8610–3.
Malica C, Dal Corso A. Temperature dependent atomic B factor: an ab initio calculation. Acta Cryst. 2019;A75:624–32.
Weiss MS. On the interrelationship between atomic displacement parameters (ADPs) and coordinates in protein structures. Acta Crystallogr. 2007;D63:1235–42.
Gurusaran M, Shankar M, Nagarajan R, Helliwell JR, Sekar K. Do we see what we should see? Describing non-covalent interactions in protein structures including precision. IUCrJ. 2014;1:74–81.
Dinesh Kumar KS, Gurusaran M, Satheesh SN, Radha P, Pavithra S, Thulaa Tharshan KPS, et al. Online_DPI: a web server to calculate the diffraction precision index for a protein structure. J Appl Cryst. 2015;48:939–42.
Cruickshank DWJ. Remarks about protein structure precision. Acta Cryst. 1999;D55:583–93.
Fenwick RB, van den Bedem H, Fraser JS, Wright PE. Integrated description of protein dynamics from room-temperature X-ray crystallography and NMR. Proc Natl Acad Sci USA. 2014;111:E445–54.
Garman E, Owen RL. Cryocrystallography of macromolecules: practice and optimization. Methods Mol Biol. 2007;364:1–18.
Garman E. "Cool" crystals: macromolecular cryocrystallography and radiation damage. Curr Op Struct Biol. 2003;13:545–51.
Carugo O, Djinovic-Carugo K. When X-rays modify the protein structure: radiation damage at work. Trends Biochem Sci. 2005;30:213–9.
Schneider B, Gelly J-C, de Brevern AG, Cerny J. Local dynamics of proteins and DNA evaluated from crystallographic B factors. Acta Cryst. 2014;D70:2413–9.
Kim S-Y, Hwang KY, Kim S-H, Sung H-C, Han YS, Cho Y. Structural basis for cold adaptation sequence, biochemical properties, and crystal structure of malate dehydrogenase from a psychrophile Aquaspirillium arcticum. J Biol Chem. 1999;274:11761–7.
Merlino A, Krauss IR, Castellano I, Vendittis ED, Rossi B, et al. Structure and flexibility in cold-adapted iron superoxide dismutases: the case of the enzyme isolated from Pseudoalteromonas haloplanktis. J Struct Biol. 2010;172:343–52.
Bernstein FC, Koetzle TF, Williams GJB, Meyer EFJ, Brice MD, Rodgers JR, et al. The Protein Data Bank: a computer-based archival file for macromolecular structures. J Mol Biol. 1977;112:535–42.
Berman HM, Westbrook J, Feng Z, Gilliland G, Bhat TN, Weissig H, et al. The protein data bank. Nucleic Acids Res. 2000;28:235–42.
Li W, Godzik A. Cd-hit: a fast program for clustering and comparing large sets of protein or nucleotide sequences. Bioinformatics. 2006;22:1658–9.
Fu L, Niu B, Zhu Z, Wu S, Li W. CD-HIT: accelerated for clustering the next generation sequencing data. Bioinformatics. 2012;28:3150–2.
Prof. K. Djinović is gratefully acknowledged for her kind hospitality and for fruitful discussions. Constant support by prof. B. Galuppi is also gratefully acknowledged.
No external funding was used for this study. I gratefully acknowledge internal funding from the University of Pavia and the University of Vienna.
Department of Chemistry, University of Pavia, Pavia, Italy
Oliviero Carugo
Department of Structural and Computational Biology, University of Vienna, Campus Vienna Biocenter 5, 1030, Vienna, Austria
OC, the sole author of the manuscript, is responsible for every aspect of it (Conceptualization; Methodology; Software; Validation; Formal Analysis; Investigation; Resources; Data Curation; Writing—Original Draft Preparation; Writing—Review and Editing; Visualization; Supervision; Project Administration; Funding Acquisition). All authors read and approved the final manuscript.
Correspondence to Oliviero Carugo.
The author does not declare any conflict of interest.
Additional file 1. Table S1:
List of the entries of the Protein Data Bank examined in the present article. Figure S1: Relationship between Delta-u and covalent separation in three equally populated subsets of the structures examined in the present communication.
Carugo, O. Decline of protein structure rigidity with interatomic distance. BMC Bioinformatics 22, 466 (2021). https://doi.org/10.1186/s12859-021-04393-0
B-factor
Hirshfeld test
Protein rigidity
Moment generating function of the inner product of two gaussian random vectors
Can anybody please suggest how I can compute the moment generating function of the inner product of two Gaussian random vectors, each distributed as $\mathcal N(0,\sigma^2)$, independent of each other? Is there some standard result available for this? Any pointer is highly appreciated.
normal-distribution mathematical-statistics multivariate-analysis moments moment-generating-function
kjetil b halvorsen♦
abhibhat
First let's address the case $\Sigma = \sigma^2\mathbb{I}$. At the end is the (easy) generalization to arbitrary $\Sigma$.
Begin by observing the inner product is the sum of iid variables, each of them the product of two independent Normal$(0,\sigma)$ variates, thereby reducing the question to finding the mgf of the latter, because the mgf of a sum is the product of the mgfs.
The mgf can be found by integration, but there's an easier way. When $X$ and $Y$ are standard normal,
$$XY = ((X+Y)/2)^2 - ((X-Y)/2)^2$$
is a difference of two independent scaled Chi-squared variates. (The scale factor is $1/2$ because the variances of $(X\pm Y)/2$ equal $1/2$.) Because the mgf of a chi-squared variate is $1/\sqrt{1 - 2\omega}$, the mgf of $((X+Y)/2)^2$ is $1/\sqrt{1-\omega}$ and the mgf of $-((X-Y)/2)^2$ is $1/\sqrt{1+\omega}$. Multiplying, we find that the desired mgf equals $1/\sqrt{1-\omega^2}$.
(For later reference, notice that when $X$ and $Y$ are rescaled by $\sigma$, their product scales by $\sigma^2$, whence $\omega$ should scale by $\sigma^2$, too.)
This should look familiar: up to some constant factors and a sign, it looks like the probability density for a Student t distribution with $0$ degrees of freedom. (Indeed, if we had been working with characteristic functions instead of mgfs, we would obtain $1/\sqrt{1 + \omega^2}$, which is even closer to a Student t PDF.) Never mind that there is no such thing as a Student t with $0$ dfs--all that matters is that the mgf be analytic in a neighborhood of $0$ and this clearly is (by the Binomial Theorem).
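As a quick numerical sanity check (not part of the original answer), the empirical mean of $e^{\omega X Y}$ over simulated standard Normal draws should approach $1/\sqrt{1-\omega^2}$; $\omega$ is kept below $1/2$ here so that the Monte Carlo estimator itself has finite variance.

import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal((2, 1_000_000))
for omega in (0.1, 0.25, 0.4):
    empirical = np.mean(np.exp(omega * x * y))   # Monte Carlo estimate of the mgf at omega
    exact = 1.0 / np.sqrt(1.0 - omega**2)
    print(f"omega={omega}: empirical {empirical:.4f}  exact {exact:.4f}")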
It follows immediately that the distribution of the inner product of these iid Gaussian $n$-vectors has mgf equal to the $n$-fold product of this mgf,
$$(1 - \omega^2 \sigma^4)^{-n/2}, \quad n=1, 2, \ldots.$$
By looking up the characteristic function of the Student t distributions, we deduce (with a tiny bit of algebra or an integration to find the normalizing constant) that the PDF itself is given by
$$f_{n,\sigma}(x) = \frac{2^{\frac{1-n}{2}}\, |x|^{\frac{n-1}{2}}\, K_{\frac{n-1}{2}}\!\left(\frac{|x|}{\sigma^2}\right)}{\sqrt{\pi}\, \sigma^{n+1}\, \Gamma\!\left(\frac{n}{2}\right)}$$
($K$ is a Bessel function).
For instance, here is a plot of that PDF superimposed on the histogram of a random sample of $10^5$ such inner products where $\sigma=1/2$ and $n=3$:
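A rough version of that comparison can be scripted in a few lines (a sketch only, assuming NumPy and SciPy are available; this is not the code that produced the figure):

import numpy as np
from scipy.special import kv, gamma

def f(x, n, sigma):
    # Bessel-type PDF of the inner product, as given above.
    x = np.abs(np.asarray(x, dtype=float))
    return (2**((1 - n) / 2) * x**((n - 1) / 2) * kv((n - 1) / 2, x / sigma**2)
            / (np.sqrt(np.pi) * sigma**(n + 1) * gamma(n / 2)))

n, sigma, N = 3, 0.5, 100_000
rng = np.random.default_rng(1)
dots = np.sum(rng.normal(0, sigma, (N, n)) * rng.normal(0, sigma, (N, n)), axis=1)
density, edges = np.histogram(dots, bins=80, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(density - f(centres, n, sigma))))   # discrepancy is binning/Monte Carlo error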
It's harder to confirm the accuracy of the mgf from a simulation, but note (from the Binomial Theorem) that
$$(1 - t^2 \sigma^4)^{-3/2} = 1+\frac{3 \sigma^4 t^2}{2}+\frac{15 \sigma^8 t^4}{8}+\frac{35 \sigma^{12} t^6}{16}+\frac{315 \sigma^{16} t^8}{128}+\ldots,$$
from which we may read off the moments (divided by factorials). Due to the symmetry about $0$, only the even moments matter. For $\sigma=1/2$ we obtain the following values, to be compared to the raw moments of this simulation:
k mgf simulation/k!
2 0.09375 0.09424920
4 0.00732422 0.00740436
10 2.58 e-6 2.17 e-6
As to be expected, the high moments of the simulation will begin departing from the moments given by the mgf; but at least up through the tenth moment, there is excellent agreement.
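For completeness, here is a small script (again only a sketch, assuming NumPy) that reads the coefficients off the series above and compares them with simulated raw moments divided by $k!$:

import numpy as np
from math import factorial

sigma, n, N = 0.5, 3, 100_000
rng = np.random.default_rng(2)
dots = np.sum(rng.normal(0, sigma, (N, n)) * rng.normal(0, sigma, (N, n)), axis=1)
coefficients = {2: 3 * sigma**4 / 2, 4: 15 * sigma**8 / 8, 6: 35 * sigma**12 / 16}
for k, c in coefficients.items():
    print(k, c, np.mean(dots**k) / factorial(k))   # series coefficient vs simulation/k!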
Incidentally, when $n=2$ the distribution is bi-exponential.
To handle the general case, begin by noting that the inner product is a coordinate-independent object. We may therefore take the principal directions (eigenvectors) of $\Sigma$ as coordinates. In these coordinates the inner product is the sum of independent products of independent Normal variates, each component distributed with a variance equal to its associated eigenvalue. Thus, letting the nonzero eigenvalues be $\sigma_1^2, \sigma_2^2, \ldots, \sigma_d^2$ (with $0 \le d \le n$), the mgf must equal
$$\left(\prod_{i=1}^d (1 - \omega^2\sigma_i^4)\right)^{-1/2}.$$
To confirm that I made no error in this reasoning, I worked out an example where $\Sigma$ is the matrix
$$\left( \begin{array}{ccc} 1 & \frac{1}{2} & -\frac{1}{8} \\ \frac{1}{2} & 1 & -\frac{1}{4} \\ -\frac{1}{8} & -\frac{1}{4} & \frac{1}{2} \end{array} \right)$$
and computed that its eigenvalues are
$$\left(\sigma_1^2, \sigma_2^2, \sigma_3^2\right) = \left(\frac{1}{16} \left(17+\sqrt{65}\right),\frac{1}{16} \left(17-\sqrt{65}\right),\frac{3}{8}\right)\approx \left(1.56639,0.558609,0.375\right).$$
It was possible to compute the PDF by numerically evaluating the Fourier Transform of the characteristic function (as derived from the mgf formula given here): a plot of this PDF is shown in the following figure as a red line. At the same time, I generated $10^6$ iid variates $X_i$ from the Normal$(0,\Sigma)$ distribution and another $10^6$ iid variates $Y_i$ in the same way, and computed the $10^6$ dot products $X_i\cdot Y_i$. The plot shows the histogram of these dot products (omitting some of the most extreme values--the range was from $-12$ to $15$):
As before, the agreement is excellent. Furthermore, the moments match well through the eighth and reasonably well even at the tenth:
k mgf simulation/k!
2 1.45313 1.45208
8 11.0994 11.3115
10 24.4166 22.9982
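A minimal check of the general formula (a sketch assuming NumPy) compares the coefficient of $\omega^2$ in the mgf, namely $\tfrac{1}{2}\sum_i \sigma_i^4$, with the simulated second raw moment of $X\cdot Y$ divided by $2!$, using the $\Sigma$ given above:

import numpy as np

Sigma = np.array([[1.0, 0.5, -0.125],
                  [0.5, 1.0, -0.25],
                  [-0.125, -0.25, 0.5]])
sigma_sq = np.linalg.eigvalsh(Sigma)                    # the sigma_i^2
print("sum(sigma_i^4)/2! :", np.sum(sigma_sq**2) / 2)   # ~1.45313, as in the table

rng = np.random.default_rng(3)
X = rng.multivariate_normal(np.zeros(3), Sigma, size=1_000_000)
Y = rng.multivariate_normal(np.zeros(3), Sigma, size=1_000_000)
dots = np.sum(X * Y, axis=1)
print("simulation        :", np.mean(dots**2) / 2)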
(Added 9 August 2013.)
$f_{n,\sigma}$ is an instance of the variance-gamma distribution, which originally was defined as "the normal variance-mean mixture where the mixing density is the gamma distribution." It has a standard location ($0$), asymmetry parameter of $0$ (it is symmetric), scale parameter $\sigma^2$, and shape parameter $n/2$ (according to the Wikipedia parameterization).
whuber♦
$\begingroup$ Hello whuber, thanks a lot for the detailed explanation. I have one doubt, though. When $\Sigma$ is general, the terms in the sum expansion of the inner product are not iid anymore; hence the mgf of the sum is no more the product of the mgfs. Then, how do we generalize the above analysis to a more general Sigma? $\endgroup$
– abhibhat
$\begingroup$ I added a new section to provide some of the (easy) details of this generalization, to make it clear that nothing new is involved here. You can also use basic properties of mgfs to write down the mgf in the case where the data have nonzero means, too, thereby resolving the problem in full generality. $\endgroup$
– whuber ♦
A Systematic Review: Family Support Integrated To Diabetes Self-Management among Glycemic Uncontrolled Type II DM Patients
Rian Adi Pamungkas, Kanittha Chamroonsawasdi, Paranee Vatanasomcoon
Subject: Behavioral Sciences, Other Keywords: diabetes self-management; family support; glycemic uncontrolled; type 2 DM; systematic review
Abstract Background: Diabetes mellitus is increasing dramatically worldwide. The management of diabetes care emphasizes integrating self-management education and support into patient and family care. Objective: to review and synthesize the effectiveness of DSME strategies involving the family as a key person to provide social support for diabetes mellitus self-management of glycemic uncontrolled patients. Method: Three databases, PubMed, CINAHL, and Scopus, were searched to identify relevant articles, using the following search terms: "type 2 diabetes," "self-management," "family support," and "glycemic uncontrolled." We summarized details of family support on self-management among glycemic uncontrolled patients for 14 existing studies. Results: A total of 22 intervention studies were identified. Those studies were heterogeneous in education strategies, perceived support, follow-up strategies and outcomes among type 2 DM patients. Family integration in diabetes self-management education (DSME) has a positive impact on several outcomes, including self-care behaviors, psychological outcomes, self-efficacy and clinical outcomes. Conclusions: This systematic review found robust data related to the integration of family support on diabetes self-management among glycemic uncontrolled patients. Consequently, improvement in outcomes was identified. Implications: The findings suggest that a model of family engagement is needed to sustain diabetes care in the long term.
Apoptosis in Type 2 Diabetes, Can It be Prevented?
Agnieszka Kilanowska, Agnieszka Ziółkowska
Subject: Biology, Animal Sciences & Zoology Keywords: Apoptosis; preclinical research; diabetes type 2; HIPPO pathway
Diabetes mellitus is a heterogeneous disease of complex etiology and pathogenesis. Hyperglycemia leads to many serious complications, but it also directly initiates the process of β cell apoptosis. A potential strategy for the preservation of pancreatic β cells in diabetes may be to inhibit the implementation of pro-apoptotic pathways or to enhance the action of pancreatic protective factors. The Hippo signaling pathway is proposed and selected as a target for manipulating the activity of its core proteins in therapy-oriented basic research. MST1 and LATS2, as major upstream signaling kinases of the Hippo pathway, are considered target candidates for pharmacologically induced tissue regeneration and inhibition of apoptosis. Manipulating the activity of components of the Hippo pathway offers a wide range of possibilities, and thus is a potential tool in the treatment of diabetes and the regeneration of β cells. Therefore, it is important to fully understand the processes involved in apoptosis in diabetic states and to fully characterize the role of this pathway in diabetes. Therapy that slows down or stops the mechanisms of apoptosis may become an important direction of diabetes treatment in the near future.
A Topological Perspective for Interval Type-2 Fuzzy Hedges
Hime Oliveira
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: type-2 fuzzy sets; fiber bundles; differential topology
Online: 17 August 2019 (04:14:51 CEST)
Type-2 fuzzy sets were introduced by L. Zadeh aiming at modelling some settings in which fuzzy sets (usually called type-1 fuzzy sets) are not sufficient to reflect certain uncertainty degrees - loosely speaking, they are fuzzy sets whose membership degrees are ordinary fuzzy sets. On the other hand, fiber bundles are topological entities of extreme importance in Mathematics itself and many other scientific areas, like Physics (General Relativity, Field Theory etc.), finance modelling, and statistical inference. The present work introduces a conceptual link between the two ideas and conjectures about the potential mutual benefits that can be obtained from this viewpoint. As an objective and usable product of the presented ideas, a framework is described for defining type-2 fuzzy hedges, proper to operate on interval type-2 fuzzy sets.
Association Between Transcription Factor 7-Like-2 Polymorphisms and Type 2 Diabetes Mellitus in a Ghanaian Population
Christian Obirikorang, Evans Asamoah Adu, Enoch Odame Odame, Emmanuel Acheampong, Lawrence Quaye, Brodrick Yeboah Amoah, Max Efui Annani-Akollor, Aaron Siaw. Kwakye, Foster Fokuoh, Michael Appiah, Eric Nana Yaw Nyarko, Freeman Aidoo, Eric Adua, Eben AFRIFA-YAMOAH, Lois Balmer, Wei Wang
Subject: Keywords: TCF7L2, type 2 diabetes mellitus, cardiometabolic risk factors, single nucleotide polymorphisms
Type-2 diabetes mellitus (T2DM) has been strongly associated with single nucleotide polymorphisms (SNPs) in the TCF7L2 gene. This study investigated the association between rs12255372, rs7903146 and T2DM in a Ghanaian population. A case-control study design was used for this study. A total of 106 T2DM patients and 110 control participants were selected. Basic data collected included body mass index, blood pressure and socio-demographics. Fasting blood samples were collected and used for serum lipid analysis, HbA1c, plasma glucose estimation and DNA extraction. Common and allele-specific primers were designed for genotyping using the Modified Tetra-Primer Amplification assay. Associations were evaluated using logistic regression models. The rs7903146 risk variant was significantly associated with 2.16 vs 4.06 increased odds for T2DM in patients <60 years vs ≥60 years. Both rs7903146 and rs12255372 are significantly associated with increased odds of T2DM in women, the overweight/obese, those with a negative family history of T2DM (T2DM-NFH) and those with low HDL-C. In a multivariate model, rs7903146 but not rs12255372 was significantly associated with 2.18, 5.01 and 2.25 increased odds of T2DM under the codominant, recessive and additive models, respectively (p<0.05). The association of rs7903146 and rs12255372 with T2DM is stronger in subgroups (women and those with T2DM-NFH) that nevertheless carry cardiometabolic risk.
Differential Evolution With Shadowed and General Type-2 Fuzzy Systems for Dynamic Parameter Adaptation in Optimal Design of Fuzzy Controllers
Patricia Ochoa, Oscar Castillo, Patricia Melin, José Soria
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Shadowed Type-2 Fuzzy Sets; Generalized Type-2 Fuzzy Systems; Differential Evolution algorithm
This work is mainly focused on improving the differential evolution algorithm by using shadowed and general type-2 fuzzy systems to dynamically adapt one of the parameters of the evolutionary method. In this case, the mutation parameter is dynamically adjusted during the evolution process by shadowed and general type-2 fuzzy systems. The main idea of this work is to make a performance comparison between using shadowed and general type-2 fuzzy systems as controllers of the mutation parameter in differential evolution. The performance is compared on the problem of optimizing fuzzy controllers for a D.C. motor. Simulation results show that general type-2 fuzzy systems are better when higher levels of noise are considered in the controller.
Association of KCNJ11 Genetic Variations with Risk of Type 2 Diabetes Mellitus (T2DM) in North Indian Population
Vasiuddin Khan, Deepti Bhatt, Shahbaz Khan, AMIT KUMAR VERMA, Rameez Hasan, Sahar Rafat, Yamini Goyal, Prahlad Singh Bharti, Mohammad Yaqub Shareef, Mohammed A. Alsahli, Arshad Husain Rahmani, Kapil Dev
Subject: Life Sciences, Endocrinology & Metabolomics Keywords: type 2 diabetes; KCNJ11; RFLP; SNP
Online: 5 July 2019 (04:53:09 CEST)
Type 2 diabetes mellitus (T2DM) is a polygenic metabolic disease characterized by hyperglycemia, which is caused by insulin resistance or reduced insulin secretion. Interaction between various genetic variants and environmental factors triggers T2DM. The main aim of this study was to assess the risk associated with the genetic variant (rs5210) of the KCNJ11 gene in the development of T2D in an Indian population. A total of 300 T2D cases and 100 control samples were studied to detect the polymorphism in KCNJ11 through PCR-RFLP. The genotype and allele frequencies in T2DM cases were significantly different from those in the control population. We found a significant association of the KCNJ11 (rs5210) gene polymorphism with T2DM in North Indian patients, indicating the role of this variant in developing risk for T2DM.
The Impact of a Continuous Care Intervention for Treatment of Type 2 Diabetes on Health Care Utilization
Zachary Wagner, Nasir H. Bhanpuri, James P. McCarter, Neeraj Sood
Subject: Medicine & Pharmacology, Other Keywords: Type 2 diabetes, health care utilization
Introduction: Type 2 diabetes (T2D) is a major driver of health care costs, thus treatments enabling T2D reversal may reduce expenditures. We examined the impact of a T2D continuous care intervention (CCI) on health care utilization. Previous research documented that CCI, including individualized nutrition supported by remote care, simultaneously reduced hemoglobin A1c and medication use and improved cardiovascular status after two years; however, the impact on utilization is unknown. Methods: This study used four years of data (two years pre-intervention, two years post-intervention) from the Indiana Network for Patient Care (INPC) health record. Two methods estimated the impact of CCI on utilization. First, an interrupted time series (ITS) including only CCI participants (n=193) compared post-intervention utilization to expected utilization had the pre-intervention trend persisted. Deviation from the trend was estimated non-parametrically for each 6-month interval after the implementation of CCI . Second, a 1:3 matched comparator group (n=579) was constructed and used for a difference-in-differences (DiD) analysis. The primary outcome was annualized outpatient encounters. Secondary outcomes included emergency encounters and hospitalizations. Results: In two years prior to intervention, CCI participants had a mean of 5.77 annualized encounters (5.62 outpatient, 0.04 hospitalizations, 0.11 emergency). The CCI group showed a reduction in outpatient utilization after intervention. In ITS analysis, 1.6 to 1.9 fewer annualized outpatient encounters occurred in each 6-month interval post-intervention relative to expected utilization based on pre-intervention trends (p<0.01 each 6-month period; 28-33% reduction). The DiD analysis suggested a larger reduction; 5 fewer annualized outpatient encounters in the quarter after intervention, diminishing to 2.5 fewer after 2 years (p<0.01 each quarter). The study was underpowered to draw conclusions about hospitalization and emergency encounters due to the limited number of CCI patients and the rarity of encounters. Conclusions: Outpatient encounters were significantly reduced for a T2D patient population up to 2 years after receiving an individualized intervention supporting nutrition and behavior change through remote care.
Antibiotic Consumption Patterns in European Countries Might Be Associated with the Prevalence of Diabetes Type-1-2 (T1D, T2D)
Gábor Ternák, Márton Németh, Martin Rozanovic, Lajos Bogár
Subject: Medicine & Pharmacology, Pathology & Pathobiology Keywords: diabetes type-1; T1D; diabetes type-2; T2D; antibiotics; antibiotic classes; microbiome; dysbiosis; prevalence; concordance
Online: 3 December 2021 (12:45:23 CET)
Abstract: Several publications have raised the issue that the development of diabetes is preceded by alteration of the microbiome (dysbiosis) and hence the role of environmental factors triggering dysbiosis should be considered. Antibiotics are powerful agents inducing dysbiosis, and the authors wanted to explore the possible relationship between the consumption of different major classes of antibiotics and the prevalence of diabetes (type-1 /T1D/ and type-2 /T2D/) in thirty European countries. According to our hypothesis, if such an association exists, the dominant use of certain major antibiotic classes might be reflected in the prevalence of T1D and T2D in different countries. Comparisons were performed between the prevalence of diabetes (T1D and T2D) estimated for 2019 and featured in the Diabetes Atlas and the average yearly consumption of major antibiotic classes of the previous 10 years (2010-19) extracted from the ECDC yearly reports on antibiotic consumption in Europe. Pearson correlation and variance analysis were used to estimate the possible relationship. Strong, positive (enhancer) associations were found between the prevalence of T1D and the consumption of tetracycline (J01A /p: 0.001/) and the narrow-spectrum penicillins (J01CE /p: 0.006/, CF /p: 0.018/). Strong negative (inhibitor) associations were observed with broad-spectrum, beta-lactamase-resistant penicillin (J01CR /p: 0.003/), macrolide (J01F /p: 0.008/) and quinolone (J01M /p: 0.001/). T2D showed significant positive associations with cephalosporin (J01D /p: 0.048/) and quinolone (J01M /p: 0.025/), and a non-significant negative association was detected with broad-spectrum, beta-lactamase-sensitive penicillin (J01CA /p: 0.67/). Countries with the highest prevalence of diabetes (first 10 positions) showed concordance with the higher consumption of "enhancer" and the lower consumption of "inhibitor" antibiotics (first 10 positions) as indicated by variance analysis. Countries with a high prevalence of T1D showed high consumption of tetracycline (p: 0.015) and narrow-spectrum, beta-lactamase-sensitive penicillin (p: 0.008), and low consumption of "inhibitor" antibiotics (broad-spectrum, beta-lactamase-resistant, combination penicillin (p: 0.005), cephalosporin (p: 0.036), and quinolone (p: 0.003)). Countries with a high prevalence of T2D consumed more cephalosporin (p: 0.084) and quinolone (p: 0.54), and less broad-spectrum, beta-lactamase-sensitive penicillin (p: 0.012) than other countries. Conclusion/Interpretation: The development of diabetes-related dysbiosis might be linked to higher consumption of specific classes of antibiotics showing positive (enhancer) associations with the prevalence of diabetes, and to lower consumption of other classes of antibiotics showing negative (inhibitory) associations. Those groups of antibiotics are different in T1D and T2D.
A Novel Kind of the Type 2 Poly-Fubini Polynomials and Numbers
Waseem Khan
Subject: Keywords: Polyexponential functions, type 2 poly-Fubini polynomials, unipoly functions.
Motivated by the definition of the type 2 poly-Bernoulli polynomials introduced by Kim-Kim [9], in the present paper we consider a new class of generating functions for the Fubini polynomials, called the type 2 poly-Fubini polynomials, by means of the polyexponential function. Then, we derive some useful relations and properties. We show that the type 2 poly-Fubini polynomials equal a linear combination of the classical Fubini polynomials and Stirling numbers of the first kind. In a special case, we give a relation between the type 2 poly-Fubini polynomials and Bernoulli polynomials of order r. Moreover, inspired by the definition of the unipoly-Bernoulli polynomials introduced by Kim-Kim [9], we introduce the type 2 unipoly-Fubini polynomials by means of the unipoly function and give multifarious properties including derivative and integral properties. Furthermore, we provide a correlation between the unipoly-Fubini polynomials and the classical Fubini polynomials.
Vibration Control Design for a Plate Structure with Electrorheological ATVA Using Interval Type-2 Fuzzy System
Chih-Jer Lin, Chun-Ying Lee, Ying Liu
Subject: Engineering, Mechanical Engineering Keywords: Electro-Rheological fluid; Semi-active vibration control; tunable vibration absorber; type-1 fuzzy control; interval type-2 fuzzy control
This study presents vibration control using actively tunable vibration absorbers (ATVA) to suppress vibration of a thin plate. The ATVA is made of a sandwich hollow structure embedded with electrorheological fluid (ERF). ERF is considered to be one of the most important smart fluids and is suitable to be embedded in a smart structure due to its controllable viscosity. ERF's apparent viscosity can be controlled in response to the electric field and the change is reversible within 10 microseconds. Therefore, the physical properties of the ERF-embedded smart structure, such as the stiffness and damping coefficients, can be changed in response to the applied electric field. A mathematical model is difficult to obtain to describe the exact characteristics of the ERF-embedded ATVA because of the nonlinearity of ERF's viscosity. Therefore, fuzzy modeling and experimental validations of ERF-based ATVA from stationary random vibrations of thin plates are presented in this study. Because type-2 fuzzy sets generalize type-1 fuzzy sets so that more modelling uncertainties can be handled, a semi-active vibration controller is proposed based on type-2 fuzzy sets. To investigate the different performances obtained by using different types of fuzzy controllers, the experimental measurements employing type-1 fuzzy and interval type-2 fuzzy controllers are implemented on the CompactRIO embedded system. The fuzzy modeling framework and solution methods presented in this work can be used for design, performance analysis, and optimization of ATVA from stationary random vibration of thin plates.
Effects of Type 2 Diabetes Mellitus on Osteoclast Differentiation, Activity, and Cortical Bone Formation in Postmenopausal MRONJ Patient
Sung-Min Park, Jae-Hoon Lee
Subject: Medicine & Pharmacology, Dentistry Keywords: Type 2 Diabetes; Osteoporosis; Bisphosphonate; MRONJ; Osteoclast
Online: 31 March 2022 (13:44:57 CEST)
Osteoporosis is a common metabolic bone disease in patients with diabetes, which can develop simultaneously with Type 2 diabetes (T2D) in postmenopausal women. Bisphosphonate (BP) is administered to patients with both conditions and may cause medication-related osteonecrosis of the jaws (MRONJ). It affects the differentiation and function of osteoclasts as well as the thickness of cortical bone through bone mineralization. Therefore, this study aimed to investigate the effects of T2D on osteoclast differentiation and activity as well as cortical bone formation in postmenopausal patients with MRONJ. Tissue samples were collected from 10 patients diagnosed with T2D and Stage III MRONJ in the experimental group and from 10 patients without T2D in the control group. Histological examination was conducted, and expression of dendritic cell-specific transmembrane protein (DC-STAMP) and tartrate resistant acid phosphatase (TRAP) was assessed. Cortical bone formation was analyzed using CBCT images. The number of TRAP-positive osteoclasts and DC-STAMP-positive mononuclear cells was significantly lower in the experimental group (p < 0.05). Furthermore, the thickness and ratio of cortical bone were significantly greater in the experimental group (p < 0.05). In conclusion, T2D decreased the differentiation and function of osteoclasts, and increased cortical bone formation in postmenopausal patients with MRONJ.
Walking Speed is the Sole Determinant of Mild Cognitive Impairment in Japanese Patients with type 2 Diabetes Mellitus
Noritaka Machii, Akihiro Kudo, Haruka Saito, Hayato Tanabe, Mariko Iwasaki, Hiroyuki Hirai, Hiroaki Masuzaki, Michio Shimabukuro
Subject: Medicine & Pharmacology, General Medical Research Keywords: type 2 diabetes mellitus; walking speed; sarcopenia
Diabetes is a risk factor for mild cognitive impairment (MCI) and dementia. However, how the clinical characteristics of type 2 diabetic patients with MCI are linked to sarcopenia and/or its criteria remains to be elucidated. Japanese patients with type 2 diabetes mellitus were categorized into the MCI group for MoCA-J (the Japanese version of the Montreal cognitive assessment) score <26, and into the non-MCI group for MoCA-J ≥26. Sarcopenia was defined by a low skeletal mass index along with low muscle strength (handgrip strength) or low physical performance (walking speed <1.0 m/s). Univariate and multivariate-adjusted odds ratio models were used to determine the independent contributors to MoCA-J <26. Among 438 participants, 221 (50.5%) and 217 (49.5%) comprised the non-MCI and MCI groups, respectively. In the MCI group, age (61 ± 12 vs. 71 ± 10 years, p < 0.01) and duration of diabetes (14 ± 9 vs. 17 ± 9 years, p < 0.01) were higher than those in the non-MCI group. Patients in the MCI group exhibited lower hand grip strength, walking speed, and skeletal mass index, but a higher prevalence of sarcopenia. Only walking speed (rather than muscle loss or muscle weakness) was found to be an independent determinant of MCI after adjusting for multiple factors, such as age, gender, BMI, duration of diabetes, hypertension, dyslipidemia, smoking, drinking, eGFR, HbA1c, and history of coronary heart diseases and stroke. In subgroup analysis, a group consisting of male patients aged ≥65 years, with BMI <25, showed a significant OR for walking speed. This is the first study to show that slow walking speed is a sole determinant of the presence of MCI in patients with type 2 diabetes. It is suggested that walking speed is an important factor in the prediction and prevention of MCI development in patients with diabetes mellitus.
High Performance n-Type Ag2Se Film on Nylon Membrane for Flexible Thermoelectric Power Generator
Yufei Ding, Yang Qiu, Kefeng Cai, Qin Yao, Song Chen, Lidong Chen, Jiaqing He
Subject: Materials Science, Other Keywords: thermoelectric; flexible; n-type; Ag2Se; hot-pressing
Research on flexible thermoelectric (TE) materials usually focuses on conducting polymers (CPs) and CP-based composites; however, it is a great challenge to obtain TE properties comparable to those of inorganic counterparts. Here, we report an n-type Ag2Se film on a flexible nylon membrane with an ultrahigh power factor of ~987.4 ± 104.1 μWm−1K−2 at 300 K and excellent flexibility (93% of the original electrical conductivity is retained after 1000 bending cycles around an 8-mm diameter rod). The flexibility is attributed to a synergetic effect of the nylon membrane and the Ag2Se film intertwined with numerous high-aspect-ratio Ag2Se grains. A TE prototype composed of 4 legs of the hybrid film generates a voltage and a maximum power of 19 mV and 460 nW, respectively, at a temperature difference of 30 K. This work opens opportunities for searching for high-performance TE films for flexible TE devices.
Examining the Causal Link between and Air Pollution, Tuberculosis Type 2 Diabetes Mellitus
Purva Bhatter, Nerges Mistry
Subject: Biology, Other Keywords: Tuberculosis; Type 2; diabetes mellitus PM2.5; air pollution; inflammation
Rapid urbanization, a growing population, and increased industrialization to meet the demands of that population have imposed a huge environmental cost. The significantly deteriorated air quality across the globe has both a direct and an indirect impact on public health. While associated disorders such as chronic obstructive pulmonary disease and heart failure are well documented, less is known about the biological basis of the process. We hypothesize that worsening air quality may impact common systemic inflammatory processes, thus driving communicable and non-communicable diseases alike. Receptor-mediated entry of particulate matter (PM2.5) results in the activation of signaling cascades that culminate in the production of inflammatory chemokines and cytokines, which traverse the blood and impact not only other organs but also the microflora, causing dysbiosis. For the purpose of this review, we chose tuberculosis (TB) as a model communicable infectious disease and type 2 diabetes mellitus (T2DM) as a marker of non-communicable disorders. The increasing prevalence of these co-morbidities and the burden they place on public health systems justify this choice. However, the hypothesis may also be applicable to other inflammation-driven disorders.
Class I MHC Polymorphisms Associated with Type 2 Diabetes in Mexican Population
Paola Mendoza Ramirez, Mildred Alejandra López-Olaiz, Adriana Lizeth Morales-Fernandez, Maria Isabel Flores-Echiveste, Antonio de Jesus Casillas-Navarro, Marco Andrés Pérez-Rodríguez, Felipe de Jesús Orozco-Luna, Celso Cortés-Romero, Laura Yareni Zuñiga, María Guadalupe Sanchez Parada, Luis Daniel Hernandez-Ortega, Arieh Roldan Sesma Mercado, Raúl Cuauhtémoc Baptista Rosas
Subject: Life Sciences, Immunology Keywords: HLA; MHC class I; polymorphism; variant; type 2 diabetes; mexican
Online: 15 March 2022 (03:49:43 CET)
Type 2 diabetes has been linked to the expression of human leukocyte antigens, principally to the Major Histocompatibility Complex class II, with only scarce reports on Major Histocompatibility Complex class I in specific populations. The objective of the present work was to explore the presence of polymorphisms in MHC class I related to type 2 diabetes in the Mexican population using the GWAS SIGMA database. This database contains information on 3,848 Mexican individuals with type 2 diabetes and 4,366 control individuals from the same population without a clinical or hereditary history of the disease. The search criteria were a P value < 0.005 and an odds ratio (OR) > 1.0. Ten novel statistically significant nucleotide variants were identified: four polymorphisms associated with HLA-A (A*03:01:01:01) and six with HLA-C (C*01:02:01:01). These alleles have a high prevalence in Latin American populations and could potentially be associated with autoimmunity mechanisms related to the development of type 2 diabetes complications.
Mitochondrial-derived Peptide Single-nucleotide Polymorphisms Associated with Cardiovascular Complications in Type 2 Diabetes
Enrique García Gaona, Alhelí García Gregorio, Camila García Jiménez, Mildred Alejandra López-Olaiz, Paola Mendoza-Ramírez, Daniel Fernandez-Guzman, Laura Yareni Zuñiga, María Guadalupe Sánchez-Parada, Ana Elizabeth González Santiago, Luis Miguel Román Pintos, Rolando Castañeda Arellano, Luis Daniel Hernández-Ortega, Arieh Roldan Mercado Sesma, Felipe de Jesús Orozco-Luna, Raul C. Baptista Rosas
Subject: Life Sciences, Genetics Keywords: mitochondria; Type 2 diabetes; MDP; MOTS-c; Humanin; SHLP
Online: 5 October 2022 (03:43:17 CEST)
Since the discovery of mitochondrial-derived peptides (MDPs), participation in cellular metabolism is no longer considered the sole function of the mitochondria; importance is also attached to their role as a source of protective factors against metabolic stress. These peptides are encoded in the mitochondrial genome and translated in the mitochondria or cytoplasm, to signal within the cell or be released and bind to membrane receptors. The objective of this work was to explore and compare the frequency of MT-RNR1 and MT-RNR2 variants in sequences obtained from T2D individuals and a control population. We analyzed 213 different mitochondrial polymorphisms previously reported in the literature as associated with T2D and cardiovascular diseases. We found three variants in MT-RNR1 not related to the MOTS-c coding sequence: m.1189T>C (rs28358571), m.1420T>C (rs111033356), and m.1438A>G (rs2001030); and three polymorphisms associated with MT-RNR2: m.2667T>C (rs878870626) related to humanin, m.1811A>G (rs28358576) in SHLP3, and m.3027T>C (rs199838004) in SHLP6, which showed statistically significant differences between the T2D and control groups. All of these variants have previously been related to cardiovascular complications in the literature and, to our knowledge, are reported here for the first time in diabetic patients.
High Energy and Carbohydrate Consumption among Mayan Community Women with Type 2 Diabetes
Karen Castillo-Hernández, Alan Espinosa-Marrón, Fernanda Molina-Seguí, Rossana Caracashian-Díaz, Hugo Laviada-Molina
Subject: Medicine & Pharmacology, Nutrition Keywords: diet composition; food culture; mayan community; type 2 diabetes mellitus.
Online: 14 July 2019 (17:29:13 CEST)
Aim: To perform a descriptive analysis of the eating patterns and biophysical conditions of previously diagnosed and currently treated individuals from a semi-urban Mayan community of Yucatan, and to contrast them with T2DM therapeutic guidelines. Methods: The present study is derived from a randomized clinical trial conducted at Komchen, Yucatan. Participants diagnosed with T2DM were included. A 24-hour dietary recall, anthropometric parameters (weight, visceral fat, height, and waist circumference), biochemical (HbA1c) and clinical (blood pressure) variables were evaluated and compared via hypothesis tests with T2DM treatment cut-off points (based on World Health Organization criteria). Results: Anthropometric characteristics differed significantly from the ideal criteria. Obesity prevalence among women with T2DM was 92.9%. Only 21% of the participants were under T2DM control (HbA1c ≤7%). Energy and carbohydrate consumption significantly exceeded therapeutic guidelines, whereas protein, fat, and fiber intakes were lower than the recommendations. Conclusions: Komchen's diet, together with its food characteristics, could be related to poor glycemic control. There is a disproportion in macronutrient consumption in favor of carbohydrates, probably associated with socioeconomic limitations, food availability, and price. The development of nutritional assistance programs that take cultural and economic factors into account should be considered for this Mayan population.
Thermoelectric Properties of Bi2Te3: CuI and the Effect of Its Doping with Pb Atoms
Mi-Kyung Han, Yingshi Jin, Da-Hee Lee, Sung-Jin Kim
Subject: Chemistry, Inorganic & Nuclear Chemistry Keywords: Bi2Te3; Thermoelectric properties; co-doping; n-type
In order to understand the effect of Pb-CuI co-doping on the thermoelectric performance of Bi2Te3, n-type Bi2Te3 co-doped with x at% CuI and 1/2x at% Pb (x = 0, 0.01, 0.03, 0.05, 0.07, and 0.10) was prepared via high-temperature solid-state reaction and consolidated using spark plasma sintering. Electron and thermal transport properties, i.e., electrical conductivity, carrier concentration, Hall mobility, Seebeck coefficient, and thermal conductivity, of CuI-Pb co-doped Bi2Te3 were measured in the temperature range from 300 K to 523 K and compared to the corresponding x% CuI-doped Bi2Te3 and undoped Bi2Te3. The addition of a small amount of Pb significantly decreased the carrier concentration, which could be attributed to the holes from Pb atoms; thus, the CuI-Pb co-doped samples show a lower electrical conductivity and a higher Seebeck coefficient compared to CuI-doped samples with similar x values. The incorporation of Pb into CuI-doped Bi2Te3 barely changed the power factor because of the trade-off between the electrical conductivity and the Seebeck coefficient. The total thermal conductivity (κtot) of the co-doped samples (κtot ~1.4 W/m∙K at 300 K) is slightly lower than that of 1% CuI-doped Bi2Te3 (κtot ~1.5 W/m∙K at 300 K) and undoped Bi2Te3 (κtot ~1.6 W/m∙K at 300 K) due to alloy scattering. The 1% CuI-Pb co-doped Bi2Te3 sample shows the highest ZT value of 0.96 at 370 K. All data on electrical and thermal transport properties suggest that the thermoelectric properties of Bi2Te3 and its operating temperature can be controlled by co-doping.
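For orientation, the power factor and figure of merit quoted in the thermoelectric entries above follow the standard definitions PF = S²σ and ZT = S²σT/κ. The sketch below uses illustrative values of roughly the right order of magnitude for doped Bi2Te3, not measured data from either study.

```python
# Minimal sketch (illustrative values, not measured data from these studies):
# power factor PF = S^2 * sigma and figure of merit ZT = PF * T / kappa.
def power_factor(seebeck_V_per_K, sigma_S_per_m):
    """Power factor S^2 * sigma in W m^-1 K^-2."""
    return seebeck_V_per_K**2 * sigma_S_per_m

def figure_of_merit(seebeck_V_per_K, sigma_S_per_m, kappa_W_per_mK, T_K):
    """Dimensionless thermoelectric figure of merit ZT."""
    return power_factor(seebeck_V_per_K, sigma_S_per_m) * T_K / kappa_W_per_mK

# Hypothetical values of the order reported for doped Bi2Te3:
S = -200e-6      # Seebeck coefficient, V/K (negative for an n-type material)
sigma = 9.0e4    # electrical conductivity, S/m
kappa = 1.4      # total thermal conductivity, W/(m K)
print(power_factor(S, sigma))                 # ~3.6e-3 W m^-1 K^-2
print(figure_of_merit(S, sigma, kappa, 370))  # ~0.95, i.e., ZT close to 1
```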
In Silico Study of Immune System-Associated Genes in Type 2 Diabetes with Insulin Action and Resistance, and in Obesity
Basmah Medhat Eldakhakhny, Hadeel Al Sadoun, Hani Choudhry, Mohammad Mobashir
Subject: Life Sciences, Biochemistry Keywords: Type 2 diabetes; cancer; shared pathways; shared genes and proteins; relationship between cancer and type 2 diabetes
Online: 15 October 2020 (09:47:35 CEST)
Obesity, type 2 diabetes, and various forms of cancer are among the leading human diseases and are highly complex in terms of diagnostic and therapeutic approaches. Based on epidemiological evidence, patients suffering from diabetes are considered to be at significantly higher risk for a number of cancer types. Both diseases are highly complex and heterogeneous in nature. Several lines of evidence support the hypothesis that these diseases are interlinked, and that obesity may aggravate the risk of both type 2 diabetes and different types of cancer. Multi-level unwanted alterations, such as (epi-)genetic alterations, changes at the transcriptional level, and altered signaling pathways (at the receptor, cytoplasmic, and nuclear levels), are the major sources that promote a number of complex diseases, and such heterogeneous levels of complexity are considered the major barrier to the development of therapeutics. With so many known challenges, it is critical to understand the relationships and the shared causes between type 2 diabetes and cancer, which are difficult to unravel. A further complexity arises from contested evidence that specific drugs, individually or in combination, used in the management of type 2 diabetes may increase or decrease cancer risk or affect cancer prognosis. In this review article, we present the most recent evidence from studies in which the origin of, biological background of, and correlation between these diseases have been examined or proved. Furthermore, we summarize the methodological challenges that are frequently encountered, outline the physiological links between type 2 diabetes and cancer, and summarize the hallmarks of both diseases.
Glomerular Collagen Deposition and Lipocalin-2 Expression Are Early Signs of Renal Injury in Prediabetic Obese Rats
Eva Nora Bukosza, Tamás Kaucsár, Mária Godó, Enikő Lajtár, Pál Tod, Gábor Koncsos, Viktor Zoltán Varga, Tamás Baranyai, Minh Tu Nguyen, Helga Schachner, Csaba Sőti, Péter Ferdinandy, Zoltán Giricz, Gábor Szénási, Peter Hamar
Subject: Medicine & Pharmacology, Other Keywords: obesity; renal injury; lipocalin-2; collagen type IV; inflammation
Rats fed a high-fat diet with a single streptozotocin (STZ) injection developed obesity, prediabetes, cardiac hypertrophy and diastolic dysfunction. Here we aimed to explore the renal consequences of prediabetes in the same groups of rats. Male Long-Evans rats were fed normal chow (CON; n = 9) or a high-fat diet containing 40% lard and were administered STZ at 20 mg/kg (i.p.) at week four (prediabetic rats, PRED, n = 9). At week 21, cardiac functions were examined (Koncsos et al., 2016) and blood and urine samples were taken. Kidney samples were collected for histology, immunohistochemistry and analysis of gene expression. The high-fat diet and streptozotocin increased body weight gain, visceral adiposity and plasma leptin, elevated fasting blood glucose levels, and impaired glucose and insulin tolerance; despite hyperleptinemia, plasma C-reactive protein concentration decreased in PRED rats. Immunohistochemistry revealed elevated collagen IV protein expression in the glomeruli, and Lcn2 mRNA expression increased, while Il-1β mRNA expression decreased, in both the renal cortex and medulla in PRED vs. CON rats. Kidney histology, urinary protein excretion, plasma creatinine, glomerular Feret diameter, desmin protein expression and cortical and medullary mRNA expression of TGF-β1, Nrf2 and PPARγ were similar in CON and PRED rats. Reduced phosphorylation of AMPKα and of the autophagy regulator Akt was the first sign of liver damage, while serum lipid and liver enzyme levels were similar. In conclusion, glomerular collagen deposition and increased lipocalin-2 expression were the early signs of kidney injury, while most biomarkers of inflammation, oxidative stress and fibrosis were negative in the kidneys of obese, prediabetic rats with mild heart and liver injury.
Role of Advanced Glycation End Products on Aortic Calcification in Patients with Type 2 Diabetes Mellitus
Pilar Sanchis, Rosmeri Rivera, Regina Fortuny, Carlos Rio, Miguel Mas-Gelabert, Marta Gonzalez-Freire, Felix Grases, Luis Masmiquel
Subject: Life Sciences, Endocrinology & Metabolomics Keywords: AGEs; aortic calcification; type 2 diabetes mellitus; diabetes-related complications
The aim of this study was to evaluate the relationship between serum levels of advanced glycation end products (AGEs) and abdominal aortic calcification (AAC) in patients with type 2 diabetes mellitus (DM2). This was a prospective cross-sectional study conducted from January 2017 to June 2018. One hundred and four consecutive patients with DM2 underwent lateral lumbar X-rays in order to quantify AAC. Circulating levels of AGEs and classical cardiovascular risk factors were determined, and clinical history was also registered. Patients with higher AGE values had higher grades of aortic calcification and a higher number of diabetes-related complications. Multivariate logistic regression analysis showed that being older, being male, and having high levels of AGEs and triglycerides were independent risk factors associated with moderate-to-severe AAC compared to no-to-mild AAC. Our results suggest that AGEs play a role in the pathogenesis of aortic calcification. In addition, the measurement of AGE levels may be useful for assessing the severity of AAC in the setting of diabetic complications.
Factors Associated With Chronic Kidney Disease in Patients With Type 2 Diabetes in Bangladesh
Sheikh Mohammed Shariful Islam, Masudus Salehin, Sojib Bin Zaman, Tania Tansi, Rajat Das Gupta, Lingkan Barua, Palash C Banik, Riaz Uddin
Subject: Medicine & Pharmacology, General Medical Research Keywords: Type 2 Diabetes Mellitus; Chronic Kidney Diseases; Hypertension; Risk Factors; Bangladesh
Diabetes and chronic kidney disease (CKD) are major public health burdens in low- and middle-income countries. This study aimed to explore factors associated with CKD in patients with type 2 diabetes (T2D) in Bangladesh. A cross-sectional study was conducted among 315 adults with T2D presenting at the outpatient department of the Bangladesh Institute of Health Sciences (BIHS) hospital between July 2013 and December 2013. CKD was diagnosed based on the estimated glomerular filtration rate, using the 'Modification of Diet in Renal Disease' equations, and the presence of albuminuria estimated by the albumin-to-creatinine ratio. Multivariate logistic regression analysis was used to determine the factors associated with CKD. The overall prevalence of CKD among patients with T2D was 21.3%. In the unadjusted model, factors associated with CKD were: age 40-49 years (OR: 5.7, 95% CI: 1.3-25.4), age 50-59 years (7.0, 1.6-39), age ≥60 years (7.6, 1.7-34), being female (2.2, 1.2-3.8), being hypertensive (1.9, 1.1-3.5), and household income between 128.2 and 256.4 US$ (2.9, 1.0-8.2) compared with income ≤128.2 US$. However, after adjustment for other covariates, only duration of hypertension and household income (128.2-256.4 US$) remained statistically significant. There is a need to implement policies and programs for the early detection and management of hypertension and CKD in T2D patients in Bangladesh.
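The eGFR value behind a CKD diagnosis is computed from serum creatinine, age, sex, and race. Below is a minimal sketch of one commonly used form of the 4-variable MDRD equation; it is stated for orientation only and is not guaranteed to be the exact variant applied in this study.

```python
# Minimal sketch, not the study's code: the 4-variable MDRD estimate of GFR
# (IDMS-traceable form). An eGFR below 60 mL/min/1.73 m^2 is commonly used to flag CKD.
def egfr_mdrd(serum_creatinine_mg_dl, age_years, female, black):
    """Estimated GFR in mL/min/1.73 m^2 from the 4-variable MDRD equation."""
    egfr = 175.0 * serum_creatinine_mg_dl**-1.154 * age_years**-0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

print(egfr_mdrd(1.4, 58, female=True, black=False))  # roughly 40 mL/min/1.73 m^2
```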
A Note on Type 2 Degenerate Poly-Frobenius-Euler Polynomials
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: polylogarithm function; Frobenius-Euler polynomials; type 2 degenerate poly-Frobenius-Euler polynomials; unipoly functions
In this paper, we construct the degenerate poly-Frobenius-Euler polynomials, called the type 2 degenerate poly-Frobenius-Euler polynomials, by means of the polyexponential function. We derive explicit expressions and some identities for these polynomials. In the last section, we introduce the type 2 degenerate unipoly-Frobenius-Euler polynomials by means of the unipoly function and derive various explicit properties.
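The abstract does not reproduce the definition of the polyexponential function it builds on. For orientation only (an assumption based on the form commonly used in recent work on type 2 poly-special polynomials, not a quotation from this preprint), the index-k polyexponential function is usually taken to be

```latex
% Polyexponential function of index k (stated for orientation; an assumption,
% not quoted from this preprint):
\mathrm{Ei}_k(x) \;=\; \sum_{n=1}^{\infty} \frac{x^{n}}{(n-1)!\, n^{k}},
\qquad \mathrm{Ei}_1(x) = e^{x}-1 .
```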
A Note on Type 2 Degenerate Poly-Fubini Polynomials and Numbers
Subject: Keywords: Modified degenerate polyexponential function; Fubini polynomials; type 2 degenerate poly-Fubini polynomials; unipoly functions
In this paper, we construct the degenerate poly-Fubini polynomials, called the type 2 degenerate poly-Fubini polynomials, by using the modified degenerate polyexponential function, and derive several properties of the degenerate poly-Fubini polynomials and numbers. In the last section, we introduce type 2 degenerate unipoly-Fubini polynomials attached to an arithmetic function, by using the modified degenerate polyexponential function, and investigate some identities for those polynomials. Furthermore, we give some new explicit expressions and identities for degenerate unipoly polynomials related to special numbers and polynomials.
Dyslipidemia in Adults with Type 2 Diabetes in a Rural Community in Ganadougou, Mali: A Cross-Sectional Study
Abdoulaye Diawara, Djibril Mamadou Coulibaly, Drissa Kone, Mama A. Traore, Dicko S. Bazi, Oumar Kassogue, Djeneba Sylla, Fatoumata Gniné Fofana, Oudou Diabaté, Mamadou Sangaré, Ibrahim Antoine Nieantao, Kaly Keїta, Mamadou Diarra, Jian Li, Cheickna Cisse, Talib Yusuf Abbas, Crystal Zheng, Segun Fatumo, Kassim Traore, Mamadou Wele, Mahamadou Diakité, Seydou O. Doumbia, Jeffrey G. Shaffer
Subject: Medicine & Pharmacology, General Medical Research Keywords: cholesterol; cross-sectional study; dyslipidemia; lipids; Mali; type 2 diabetes
Dyslipidemia is a disorder in which abnormal lipid concentrations circulate in the bloodstream. The disorder is common in type 2 diabetes (T2D) and is linked with T2D comorbidities, particularly cardiovascular disease. Dyslipidemia in T2D is typically characterized by elevated plasma triglyceride and low high-density lipoprotein cholesterol (HDL-C) levels. There is a significant gap in the literature regarding dyslipidemia in rural parts of Africa, where lipid profiles may not be routinely captured through standard surveillance activities. This study aimed to characterize the prevalence and demographic profile of dyslipidemia in T2D patients in the rural community of Ganadougou, Mali. We performed a cross-sectional study of 104 subjects with T2D in Ganadougou between November 2021 and March 2022. Demographic and lipid profiles were collected through cross-sectional surveys and blood tests. The overall prevalence of dyslipidemia in T2D patients was 87.5% (91/104), which did not differ by sex (p = .368). High low-density lipoprotein cholesterol (LDL-C) was the most common lipid abnormality (78.9%, [82/104]). Dyslipidemia was associated with age and hypertension status (p = .013 and p = .036, respectively). High total cholesterol and high LDL-C were significantly associated with hypertension (p = .029 and p = .006, respectively). In low-resource settings such as rural Mali, there is a critical need to improve infrastructure for routine dyslipidemia screening to guide prevention and intervention approaches. The high rates of dyslipidemia observed in Ganadougou, consistent with concomitant increases in cardiovascular disease in Africa, suggest that lipid profile assessments should be incorporated into routine medical care for T2D patients in African rural settings.
The Association between Non-Alcoholic Fatty Liver Disease and Dynapenia in Men Diagnosed with Type 2 Diabetes Mellitus
Atilla Bulur, Ridvan Sivritepe
Subject: Life Sciences, Other Keywords: type 2 diabetes mellitus; non-alcoholic fatty liver disease; dynapenia
Background: Dynapenia and non-alcoholic fatty liver disease (NAFLD) are common, especially in the middle and advanced-age diabetic male population. We aimed to examine the clinical features, NAFLD severity, and parameters associated with the presence of dynapenia in type 2 diabetes mellitus (T2DM) cases. Material and Methods: One hundred thirty-five male patients diagnosed with T2DM between 45 and 65 years of age were included. Patients were staged by ultrasonography according to NAFLD status. Results: There were significant differences in muscle strength, upper arm circumference, calf circumference, and up-and-go test scores between the NAFLD groups (p<0.001 for all). The frequency of dynapenia was lower, and arm and calf circumferences were higher in patients without NAFLD. The muscle strength, upper arm circumference, calf circumference, and up-and-go test scores were significantly lower in the dynapenic group compared to the non-dynapenic group (p<0.005 for all). The prevalence of dynapenia increased along with the increase in NAFLD stages (p<0.001). Conclusions: We detected a significant association between NAFLD and dynapenia in middle-aged men with T2DM. As muscle strength decreases, the amount of fat in the liver increases, and as the fat in the liver increases, muscle strength decreases.
Optimal Type 2 Diabetes Mellitus Management and Active Ageing
Alessia Maria Calabrese, Valeria Calsolaro, Sara Rogani, Chukwuma Okoye, Nadia Caraccio, Monzani Fabio
Subject: Medicine & Pharmacology, Allergology Keywords: type 2 diabetes mellitus (T2DM); older people; frailty; antidiabetic drugs; comprehensive geriatric assessment; therapeutic targets; hypoglycemia.
Online: 3 August 2021 (09:07:51 CEST)
Type two diabetes mellitus (T2DM) represents a chronic condition with increasing prevalence worldwide among the older population. T2DM increases the risk of micro- and macrovascular complications as well as the risk of geriatric syndromes such as falls, fractures and cognitive impairment. The management of T2DM in the older population represents a challenge for the clinician, and a Comprehensive Geriatric Assessment should always be prioritized, in order to tailor the glycated haemoglobin target according to functional and cognitive status, comorbidities, life expectancy and type of therapy. According to the most recent guidelines, older adults with T2DM should be categorized into three groups: healthy patients with good functional status, patients with complications and reduced functionality, and patients at the end of life; for each group the target for glycemic control is different, also according to the type of treatment drug. The therapeutic approach should always begin with lifestyle changes; after that, several lines of therapy are available, with different mechanisms of action and potential effects other than glucose level reduction. Particular interest is growing around sodium-glucose cotransporter-2 inhibitors, due to their effect on the cardiovascular system. In this review we evaluate the therapeutic options available for the treatment of older diabetic patients, to ensure a correct treatment approach.
Serum 25-hydroxyvitamin D Levels and Youth-onset Type 2 Diabetes: A Two-sample Mendelian Randomization Study
Benjamin De La Barrera, Despoina Manousaki
Subject: Medicine & Pharmacology, Pediatrics Keywords: vitamin D; pediatric type 2 diabetes; Mendelian randomization; GWAS; causal inference
Observational studies have linked vitamin D insufficiency to pediatric type 2 diabetes (T2D), but evidence from vitamin D supplementation trials is sparse. Given the rising prevalence of pediatric T2D in all ethnicities, determining a protective role of vitamin D has significant public health importance. We tested whether serum 25-hydroxyvitamin D (25OHD) levels are causally linked to youth-onset T2D risk using Mendelian randomization (MR). We selected 54 single nucleotide polymorphisms (SNPs) associated with 25OHD in a European genome-wide association study (GWAS) of 443,734 individuals and obtained their effects on pediatric T2D from the multi-ethnic PRODIGY GWAS (3,006 cases/6,061 controls). We applied inverse variance weighted (IVW) MR and a series of MR methods to control for pleiotropy. We undertook sensitivity analyses in ethnic sub-cohorts of PRODIGY, and using SNPs in core vitamin D genes or ancestry-informed 25OHD SNPs. Multivariable MR accounted for mediating effects of body mass index. We found that a standard deviation increase in 25OHD on the logarithmic scale did not affect youth-onset T2D risk (IVW MR odds ratio (OR) = 1.04, 95% CI = 0.96-1.13, P = 0.35) in the multi-ethnic analysis, and sensitivity, ancestry-specific and multivariable MR analyses showed consistent results. Our study had limited power to detect small or moderate effects of 25OHD (OR of pediatric T2D < 1.39 to 2.1). In conclusion, 25OHD levels are unlikely to have large effects on the risk of youth-onset T2D across different ethnicities.
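The headline IVW estimate combines per-SNP ratio estimates weighted by their inverse variance. A minimal sketch is shown below (not the authors' pipeline; the summary statistics are hypothetical).

```python
# Minimal sketch (not the authors' pipeline): the fixed-effect inverse-variance
# weighted (IVW) MR estimate from per-SNP summary statistics. Arrays are hypothetical.
import numpy as np

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Return the IVW causal effect estimate and its standard error."""
    w = beta_exposure**2 / se_outcome**2                 # inverse-variance weights
    beta_ivw = np.sum(w * beta_outcome / beta_exposure) / np.sum(w)
    se_ivw = np.sqrt(1.0 / np.sum(w))
    return beta_ivw, se_ivw

beta_x = np.array([0.10, 0.08, 0.12])   # SNP effects on 25OHD (hypothetical)
beta_y = np.array([0.01, -0.02, 0.00])  # SNP effects on pediatric T2D (hypothetical)
se_y = np.array([0.02, 0.03, 0.02])

beta, se = ivw_estimate(beta_x, beta_y, se_y)
print(np.exp(beta), np.exp(beta - 1.96 * se), np.exp(beta + 1.96 * se))  # OR and 95% CI
```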
Relationships between Urine Albumin Excretion and Dietary Habits in Patients with Type 2 Diabetes
Sadako Matsui, Yasuhisa Someya, Hiroshi Yoshida
Subject: Medicine & Pharmacology, Nutrition Keywords: Type 2 diabetes; urine albumin excretion; food frequency questionnaire survey; β-cryptoxanthin; fruits
Background: The dietary factors and nutrients contributing to the prevention of microalbuminuria in type 2 diabetic nephropathy are unclear, so we investigated dietary factors affecting urinary albumin excretion in patients with type 2 diabetes. Methods: 42 patients with type 2 diabetes participated; the subjects were divided into a normal albuminuria group (urinary albumin/creatinine ratio of less than 30 mg/g Cr) and a microalbuminuria group (30 mg/g to 299 mg/g Cr). We performed casual blood sampling and conducted a food frequency questionnaire survey. Results: There were no significant differences in age, BMI, other physiological and biochemical data, or the average daily intake of energy and most nutrients, whereas β-cryptoxanthin intake was significantly lower in the microalbuminuria group than in the normal group (506.4 ± 793.9 μg/day vs. 715.3 ± 500.3 μg/day, p < 0.05). Analysis of the daily intake of 17 food groups showed that fruit intake was significantly lower in the microalbuminuria group than in the normal group (76.9 ± 134.1 g vs. 111.9 ± 84.5 g, p < 0.05). Conclusion: These results suggest that fruits and foods rich in β-cryptoxanthin may help prevent the progression of diabetic nephropathy.
Room-Temperature H2 Gas Sensing Characterization of Graphene-Doped Porous Silicon via a Facile Solution Dropping Method
Nu Si A Eom, Hong-Baek Cho, Yoseb Song, Woojin Lee, Tohru Sekino, Yong-Ho Choa
Subject: Materials Science, Other Keywords: graphene-doped porous silicon; p-type silicon; hydrogen sensor; sensing mechanism
In this study, a graphene-doped porous silicon (G-doped/p-Si) substrate for low-ppm H2 gas detection via an inexpensive synthesis route is proposed as a potential novel graphene-based gas sensor material, and its sensing mechanism is examined. The G-doped/p-Si gas sensor was synthesized by a simple capillary-force-assisted solution dropping method on p-Si substrates, whose porosity was generated through an electrochemical etching process. G-doped/p-Si was fabricated with various graphene concentrations and exploited as an H2 sensor operated at room temperature. A sensing mechanism for the sensor with and without graphene decoration on p-Si is proposed to elucidate the synergetic gas-sensing effect generated at the interface between the graphene and the p-type silicon.
Efficacy and Safety of Nutrient Supplements for Glycemic Control and Insulin Resistance in Type 2 Diabetes: An Umbrella Review and Hierarchical Evidence Synthesis
Charmie Fong, Simon Alesi, Aya Mousa, Lisa Moran, Gary Deed, Suzanne Grant, Kriscia Tapia, Carolyn Ee
Subject: Medicine & Pharmacology, Nutrition Keywords: type 2 diabetes; glycemic control; insulin resistance; nutrients; umbrella review
Online: 9 May 2022 (05:05:33 CEST)
Background: Nutrient supplements are widely used for type 2 diabetes (T2D), yet evidence-based guidance for clinicians is lacking. Methods: We searched four electronic databases from November 2015 to December 2021. The most recent, most comprehensive, highest-ranked systematic reviews, meta-analyses and/or umbrella reviews of randomised controlled trials in adults with T2D were included. Data were extracted on study characteristics, aggregate outcome measures per group (glycemic control, measures of insulin sensitivity and secretion), adverse events, and GRADE assessments. Quality was assessed using AMSTAR-2. Results: Twelve meta-analyses and one umbrella review were included. There was very low certainty evidence that chromium, vitamin C and omega-3 polyunsaturated fatty acids (ω-3 PUFAs) were superior to placebo for the primary outcome of HbA1c (MD -0.54%, -0.54% and ES -0.27, respectively). Probiotics were superior to placebo for HbA1c (WMD -0.43%). There was very low certainty evidence that vitamin D was superior to placebo for lowering HbA1c in trials of <6 months (MD -0.17%). Magnesium, zinc, vitamin C, probiotics and polyphenols were superior to placebo for FBG. Vitamin D was superior to placebo for insulin resistance. Data on safety were limited. Conclusions: Future research should identify who may benefit from nutrient supplementation, safety, and optimal regimens and formulations.
Association of Short Stature with An Increased Risk of End-Stage Renal Disease in Type 2 Diabetic Patients: A Nationwide Population-Based Cohort Study
Yu Ah Hong, Kyung-Do Han, Jae-Seung Yun, Eun Sil Koh, Seung-Hyun Ko, Sungjin Chung
Subject: Medicine & Pharmacology, Other Keywords: short stature; type 2 diabetes; end-stage renal disease; mortality
Online: 16 December 2019 (11:12:15 CET)
Short stature has been associated with an increased risk of various diseases and all-cause death, but no reliable data exist on the association between height and end-stage renal disease (ESRD) in diabetic patients. We investigated the relationship between short stature, development of ESRD, and mortality in type 2 diabetes. This study analyzed clinical data using the National Health Insurance Database in Korea. Height was stratified into five groups according to age and sex. The risks of ESRD and all-cause mortality were analyzed with Cox proportional hazards models. During a 6.9-year follow-up period, 220,457 subjects (8.4%) died and 28,704 subjects (1.1%) started dialysis. Short stature significantly increased the incidence of ESRD and all-cause mortality in the overall cohort analysis. In multivariable analysis, the hazard ratio (HR) for development of ESRD comparing the highest versus lowest quartiles of adult height was 0.86 (95% confidence interval (CI), 0.83–0.89). All-cause mortality was also lower for the highest height compared to patients with the lowest height after full adjustment for confounding variables (HR 0.79, 95% CI, 0.78–0.81). Adult height had an inverse relationship with newly diagnosed ESRD and all-cause mortality in both males and females. Short stature is strongly associated with an increased risk of ESRD and all-cause mortality in type 2 diabetes.
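As a hedged sketch of the kind of Cox proportional hazards fit that yields hazard ratios such as those reported above, the following uses the lifelines package; the data file and column names are hypothetical, not the study's actual variables.

```python
# Minimal sketch (not the study's code): a Cox proportional hazards model with
# lifelines, yielding hazard ratios for height groups. Column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("nhid_cohort.csv")   # hypothetical extract with follow-up data
# Expected numeric columns: follow_up_years, esrd_event (0/1), height_group (0..4),
# plus confounders such as age, sex, bmi.
cph = CoxPHFitter()
cph.fit(df[["follow_up_years", "esrd_event", "height_group", "age", "sex", "bmi"]],
        duration_col="follow_up_years", event_col="esrd_event")
cph.print_summary()                   # the exp(coef) column gives the hazard ratios
```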
An Integrated Best-Worst and Interval Type-2 Fuzzy TOPSIS Methodology for Green Supplier Selection
Melih Yucesan, Suleyman Mete, Faruk Serin, Erkan Celik, Muhammet Gul
Subject: Mathematics & Computer Science, Other Keywords: MCDM; BWM, interval type-2 fuzzy sets; TOPSIS; green supplier selection, plastic injection molding
Supplier selection is one of the most important multi-criteria decision-making (MCDM) problems for decision makers in a competitive market. Today's organizations are seeking new ways to reduce their negative effects on the environment and to reach a greener system. At this point, the green supplier selection concept has gained great importance through its ability to incorporate environmental or green criteria into classical supplier selection practices. Therefore, in this study, we propose a multi-phase MCDM model based on the Best-Worst Method (BWM) and the interval type-2 fuzzy technique for order preference by similarity to ideal solution (TOPSIS). A case study in a plastic injection molding facility in Turkey is performed to show the applicability of the proposed integrated methodology. The paper provides insights into decision making, methodology, and managerial implications. The results of the case study are examined, and suggestions for future research are provided.
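For readers unfamiliar with TOPSIS, the sketch below shows the crisp core of the method that the interval type-2 fuzzy variant extends (normalization, weighting, distances to the ideal and anti-ideal solutions, closeness coefficient). It is an illustration with hypothetical scores, not the paper's IT2F formulation.

```python
# Minimal sketch of crisp TOPSIS (not the paper's interval type-2 fuzzy version).
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives; benefit[j] is True for benefit criteria, False for cost criteria."""
    X = np.asarray(decision_matrix, dtype=float)
    R = X / np.linalg.norm(X, axis=0)            # vector normalization
    V = R * np.asarray(weights, dtype=float)     # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)               # closeness coefficient, higher is better

scores = topsis([[7, 5, 3], [6, 8, 4], [8, 6, 5]],    # 3 suppliers x 3 green criteria
                weights=[0.5, 0.3, 0.2],
                benefit=np.array([True, True, False]))
print(scores.argsort()[::-1])                    # supplier ranking, best first
```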
A Meta-analysis of Comorbidities in COVID-19: Which Diseases Increase the Susceptibility to SARS-CoV-2 Infection?
Srinivasan Ramachandran, Manoj Kumar Singh, Ahmed Mobeen, Amit Chandra, Sweta Joshi
Subject: Life Sciences, Immunology Keywords: COVID-19; comorbidity; SARS-CoV-2; leukemia; NAFLD; psoriasis; cancer; type II diabetes
Background: Comorbidities have been frequently reported in COVID-19 patients and often lead to more severe outcomes. The underlying molecular mechanisms behind these clinical observations have not yet been explained. Herein, we investigated the disease-specific gene expression signatures that may induce susceptibility to SARS-CoV-2 infection. Methods: We studied 30 frequently occurring acute, chronic, or infectious diseases of recent times that have shown comorbidity with one or another respiratory disease(s) caused by pathogenic human-infecting coronaviruses, especially SARS-CoV-2. We retrieved array-based gene expression data for each disease and control from relevant datasets. Subsequently, all the datasets were quantile normalized, and log2-transformed data were used for analysis. Results: The expression of the ACE2 receptor and the host proteases FURIN and TMPRSS2, which are essential for cellular entry of SARS-CoV-2, was upregulated in all six studied subtypes of leukemia (hereafter referred to as leukemia). The expression of ACE2 was also increased in psoriasis, lung cancer, non-alcoholic fatty liver disease (NAFLD), breast cancer, and pulmonary arterial hypertension patients. The expression of FURIN was higher in psoriasis, NAFLD, lung cancer, and type II diabetic liver, whereas it was lower in breast cancer. Similarly, the expression of TMPRSS2 was increased in lung cancer and type II diabetes, and decreased in psoriasis, NAFLD, lung cancer, breast cancer, and cervical cancer. Furthermore, heightened expression of genes involved in the immune response was observed in leukemia patients, as shown by the higher expression of IFNA2, IFNA8, IFNA10, IFNA14, IFNA16, IFNA21, IFNB1, CXCL10, and IL6. The expression of JAK1, STAT1, IL6, and CXCL10 was higher in NAFLD. In addition, JAK1 and STAT1 were upregulated in type II diabetic muscles, and most of the upregulated genes in COVID-19 patients showed a similar trend in leukemia, NAFLD, and psoriasis. Furthermore, SARS-CoV-2, SARS-CoV and MERS-CoV were found to commonly alter two genes, namely CARBONIC ANHYDRASE 11 and CLUSTERIN. Conclusions: The genes that may confer susceptibility to SARS-CoV-2 infection are mostly upregulated in leukemia patients; hence, leukemia patients may be relatively more susceptible to developing COVID-19, followed by other chronic disorders such as NAFLD, type II diabetes, psoriasis, and hypertension. This study identifies key genes that are altered in the studied disease types, which may aid the infection of SARS-CoV-2 and underlie COVID-19-associated comorbidities.
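A minimal sketch of the preprocessing described (quantile normalization followed by a log2 transform) is shown below; it is an illustration with random data, not the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): quantile normalization of an
# expression matrix (genes x samples), then a log2 transform. Data are random.
import numpy as np
import pandas as pd

def quantile_normalize(df):
    """Give every sample (column) the same empirical distribution of values."""
    mean_of_sorted = np.sort(df.values, axis=0).mean(axis=1)   # mean k-th smallest value
    ranks = df.rank(method="first").astype(int).values - 1     # 0-based rank per column
    return pd.DataFrame(mean_of_sorted[ranks], index=df.index, columns=df.columns)

expr = pd.DataFrame(np.random.lognormal(size=(1000, 6)),
                    columns=[f"sample_{i}" for i in range(6)])
log2_expr = np.log2(quantile_normalize(expr) + 1.0)            # +1 avoids log2(0)
print(log2_expr.describe())
```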
Proposing a Framework for Airline Service Quality Evaluation Using Type-2 Fuzzy TOPSIS and Non-parametric Analysis
Navid Haghighat
Subject: Social Sciences, Microeconomics And Decision Sciences Keywords: airline service quality; passenger satisfaction; non-parametric analysis; Type-2 Fuzzy Set; Fuzzy TOPSIS
This paper focuses on evaluating airline service quality from the perspective of passengers. Many studies of airline service quality evaluation have been performed worldwide, but little research has been conducted in Iran. In this research, a framework for measuring airline service quality in Iran is proposed. After reviewing airline service quality criteria, the SSQAI model was selected because of its comprehensiveness in covering airline service quality dimensions. The SSQAI questionnaire items were redesigned to fit Iranian airlines' requirements and environmental circumstances in Iran's economic and cultural context. This study draws on fuzzy decision-making theory, considering the possible fuzzy subjective judgment of the evaluators during airline service quality evaluation. Fuzzy TOPSIS was applied to rank the airlines' service quality performance. Three major Iranian airlines, which have the largest passenger transfer volumes on domestic and foreign flights, were chosen for evaluation in this research. The results demonstrated that Mahan airline attained the best service quality performance rank among the three major Iranian airlines, gaining passengers' satisfaction by delivering high-quality services. IranAir and Aseman airlines placed second and third, respectively, according to passengers' evaluations. Statistical analyses were used to analyze passenger responses. Due to the non-normality of the data, non-parametric tests were applied. To demonstrate airline ranks for each criterion separately, the Friedman test was performed. Analysis of variance and the Tukey test were applied to study the influence of increasing passenger age and educational level on their degree of satisfaction with airline service quality. The results showed that age has no significant relationship with passenger satisfaction, whereas an increase in educational level had a negative impact on passengers' satisfaction with airline service quality.
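A hedged sketch of the Friedman test used to compare the airlines across criteria is shown below, using SciPy; the per-criterion scores are hypothetical.

```python
# Minimal sketch (not the study's analysis): a Friedman test across three airlines'
# per-criterion satisfaction scores with SciPy. Scores are hypothetical.
from scipy.stats import friedmanchisquare

# Each list holds one airline's mean score on the same set of service-quality criteria.
mahan   = [4.2, 4.0, 3.9, 4.1, 4.3]
iranair = [3.8, 3.7, 3.9, 3.6, 3.8]
aseman  = [3.5, 3.6, 3.4, 3.7, 3.5]

stat, p = friedmanchisquare(mahan, iranair, aseman)
print(stat, p)   # a small p-value suggests the airlines' rankings differ across criteria
```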
miR-221-3p/222-3p Cluster Expression in Human Adipose Tissue is Related to Obesity and Type 2 Diabetes
Adriana-Mariel Gentile, Said Lhamyani, Mercedes Clemente-Postigo, Enrique Estepa, Maria Mengual-Mesa, Francisco J. Bermúdez-Silva, Alberto Rodríguez-Cañete, Francisco J. Tinahones, Gabriel Olveira, Rajaa El Bekay
Subject: Medicine & Pharmacology, Other Keywords: miR-221-3p/222-3p cluster; human adipose tissue; obesity; type 2 diabetes
Online: 24 February 2022 (09:54:33 CET)
Background: The course of obesity and type 2 diabetes (T2D) development is highly dependent on adipose tissue (AT) angiogenesis. Moreover, angiogenic microRNAs (miRNAs) play a pivotal role in AT functionality. The aim of this study was to analyze the relationship of the human AT miR-221-3p/222-3p cluster and its regulatory network with obesity and T2D. Methods: miR-221-3p/222-3p and their target gene (TG) expression levels were measured in visceral and subcutaneous AT from patients classified according to their BMI and to their glycemic status, including a high degree of insulin resistance (IR) and T2D. In silico analyses of miR-221-3p/222-3p and their TGs were performed to identify relevant signaling pathways. Results: A multivariate analysis, including the simultaneous expression of miR-221-3p and miR-222-3p as dependent variables, showed significant differences when tissue depot, obesity, IR and T2D were considered together as independent variables. In addition, the miRNAs and their TGs were differentially expressed according to obesity degree, glycemic status, and AT depot type. Our in silico analysis showed that miR-221-3p/222-3p cluster TGs are mostly involved in angiogenesis, the WNT signaling pathway, and apoptosis. Conclusion: These findings suggest that the miR-221-3p/222-3p cluster and its related regulatory networks could represent tangible targets for the management of obesity and associated metabolic disorders.
Improved Rapid Visual Earthquake Hazard Safety Evaluation of Existing Buildings Using Type-2 Fuzzy Logic Model
Ehsan Harirchian, Tom Lahmer
Subject: Engineering, Civil Engineering Keywords: seismic vulnerability; fuzzy logic system; Interval Type-2 Fuzzy logic; retrofit prioritization; damage category classification
Rapid Visual Screening (RVS) is a procedure that estimates structural scores for buildings and prioritizes their retrofit and upgrade requirements. Despite the speed and simplicity of RVS, many of the collected parameters are non-commensurable and include subjectivity due to visual observation. This can introduce uncertainty into the evaluation, which motivates the use of a fuzzy-based method. This study proposes a novel RVS methodology based on an interval type-2 fuzzy logic system (IT2FLS) to set the priority of vulnerable buildings for detailed assessment while covering uncertainties and minimizing their effects during evaluation. The proposed method estimates the vulnerability of a building, in terms of a Visual Damage Index, considering the number of stories, age of the building, plan irregularity, vertical irregularity, building quality, and peak ground velocity as inputs, with a single output variable. The applicability of the proposed method has been investigated using a post-earthquake damage database of 28 reinforced concrete buildings from the Bingöl earthquake in Turkey.
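A minimal sketch of the kind of membership function an IT2FLS relies on is shown below: a Gaussian interval type-2 set with uncertain mean, returning lower and upper memberships. Parameter values are illustrative, not the paper's.

```python
# Minimal sketch (not the paper's model): lower/upper membership functions of a
# Gaussian interval type-2 fuzzy set with uncertain mean in [m1, m2].
import numpy as np

def it2_gaussian_uncertain_mean(x, m1, m2, sigma):
    """Return (lower, upper) memberships for a Gaussian IT2 set with mean in [m1, m2]."""
    x = np.asarray(x, dtype=float)
    g = lambda m: np.exp(-0.5 * ((x - m) / sigma) ** 2)
    upper = np.where(x < m1, g(m1), np.where(x > m2, g(m2), 1.0))  # plateau between means
    lower = np.minimum(g(m1), g(m2))                               # more conservative bound
    return lower, upper

low, up = it2_gaussian_uncertain_mean(np.linspace(0, 10, 11), m1=4.0, m2=6.0, sigma=1.5)
print(np.round(low, 3))
print(np.round(up, 3))
```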
Variability in the Control of Type 2 Diabetes in Primary Care and Its Association with Hospital Admissions for Vascular Events. The APNA Study.
Sara Guillen-Aguinaga, Luis Forga, Antonio Brugos-Larumbe, Francisco Guillen-Grima, Laura Guillen-Aguinaga, Ines Aguinaga-Ontoso
Subject: Medicine & Pharmacology, General Medical Research Keywords: Healthcare Disparities; Diabetes Mellitus, Type 2; Vascular Diseases; Primary Health Care; Cohort
Type 2 diabetes (T2D) is associated with increased cardiovascular morbidity, mortality, and hospital admissions. There is variability in clinical practice. The objectives are to analyze the variability in the control of blood pressure (BP), HbA1c, and LDL-C in T2D patients and its influence on admissions due to cardiovascular events (CVE). Methods: We analyzed the electronic records of primary care health centers in Navarra (Spain) and hospital admissions for CVE. We followed 480,637 people from 2012 to 2016. We calculated indicators of control for patients with T2D for each year: the percentage with HbA1c < 7%; HbA1c ≥ 9%; BP < 140/90 mmHg; and LDL-C < 100 mg/dl. We used logistic and Cox regression. Results: Patients in the best-control cluster of GP practices are 2.5 times more likely to have HbA1c < 7% [OR: 2.46 (95% CI: 2.29-3.64)]. Poor HbA1c control (≥ 9%) is more likely in the worst-control cluster [OR: 1.73 (95% CI: 1.63-1.83)]. The probability of admission for CVE increases with age, being male, low income, obesity, history of CVE, having HbA1c ≥ 9%, and belonging to a GP practice in the worst-control cluster for HbA1c ≥ 9%. In contrast, it decreases in patients with HbA1c < 7%, BP < 140/90 mmHg, and LDL-C < 100 mg/dl.
Anti-diabetic Effects of Wild Soybean (Glycine soja) Seed Extract on Type 2 Diabetic Mice and Insulin Resistance-Induced Human Hepatocytes
Eunjung Son, Hye Jin Choi, Seung-Hyung Kim, Dong-Gyu Jang, Jimin Cha, Jeong June Choi, Mee Ree Kim, Dong-Seon Kim
Subject: Life Sciences, Biochemistry Keywords: glycine soja seed; type 2 diabetes mellitus; antidiabetic; AMPK; Akt; PPAR-γ
Anti-diabetic effects of Glycine soja seed extract (GS) on a type 2 diabetes mellitus mouse model and on insulin-resistant human hepatocytes were investigated. Three-week-old db/db mice were divided into 5 groups (n = 6), including two control groups and 3 GS-treated groups with different doses. Oral administration of GS for 6 weeks to diabetic db/db mice reduced the blood glucose level significantly in a dose-dependent manner, by 44.7% (300 mg/kg/day), 30.9% (150 mg/kg/day) and 21.1% (75 mg/kg/day). GS treatment also significantly lowered plasma levels of HbA1c, insulin, IGF-1 and leptin, and increased that of adiponectin. GS treatment activated AMPK and down-regulated GLUT2 in liver tissue, while up-regulating GLUT4 in muscle tissue. In an in vitro study with insulin-resistance-induced human hepatocytes, GS treatment increased glucose uptake and increased the activities of Akt and PPAR-γ in response to insulin. GS treatment appears to reduce blood glucose levels by positively regulating energy metabolism through various metabolic pathways and by reducing insulin resistance in type 2 diabetes mellitus.
The Role of Natural Factors that Optimize Redox Status in Combating Type 2 Diabetes
Dawn S. Tuell, Evan A. Los, George A. Ford, William L. Stone
Subject: Medicine & Pharmacology, General Medical Research Keywords: antioxidants; oxidative stress; reactive oxygen species; type 2 diabetes; pediatrics; redox; glycemic control; exercise; vitamin E; glutathione
The worldwide prevalence of type 2 diabetes (T2D) and prediabetes is rapidly increasing, particularly in children, adolescents, and young adults. Oxidative stress (OxS) has emerged as a likely initiating factor in T2D. The role of natural antioxidant products in combating T2D is best evaluated in the context of the complex physiological processes that modulate T2D-OxS such as glycemic control and exercise. The role of natural antioxidant compounds such as vitamin E in T2D must likewise be considered beyond their roles as inhibitors of OxS. In addition to antioxidant properties, vitamin E vitamers (tocopherols and tocotrienols) also exhibit distinct abilities to regulate cellular signal transduction pathways important to T2D progression. Most research on the role of vitamin E in T2D or prediabetes has been limited to tocopherols (Ts) but emerging trials with tocotrienols (T3s) show promise. Minimizing factors that induce chronic damaging OxS and maximizing natural antioxidant protective factors may provide a means of preventing or slowing T2D progression. This "optimal redox" (OptRedox) approach also provides a framework in which to discuss the potential benefits of natural antioxidant factors such as antioxidant products. Since early, effective intervention is critical, the OptRedox strategy would be optimally effective if implemented in the pediatric population.
Modeling of Renewable Energy Systems by a Self-Evolving Nonlinear Consequent Part Recurrent Type-2 Fuzzy System (NCPRT2FS) for Power Prediction
Jafar Tavoosi, Amir Abolfazl Suratgar, Mohammad Bagher Menhaj, Amir Mosavi, Ardashir Mohammadzadeh, Ehsan Ranjbar
Subject: Engineering, Control & Systems Engineering Keywords: self-evolving; recurrent type-2 fuzzy; nonlinear consequent part; convergence analysis; renewable energy
Online: 5 March 2021 (09:57:24 CET)
This paper presents a novel type-2 fuzzy system for the identification and behavior prediction of an experimental solar cell set and a wind turbine, and it also puts forward a technique to acquire an optimal number of membership functions and the corresponding rules. It proposes a seven-layered NCPRT2FS. For fuzzification in the first two layers, Gaussian type-2 fuzzy membership functions with uncertainty in the mean are exploited. The third layer comprises rule definition and the fourth performs type reduction. In the last three layers, the resultant left and right firing points, the two end-points, and the output are computed, respectively. Recurrent feedback at the fifth layer applies delayed outputs, improving the efficiency of the suggested NCPRT2FS. Later in the paper, a structural learning scheme based on type-2 fuzzy clustering is put forward. A back-propagation algorithm with an adaptive learning rate is extended to adjust the parameters, and convergence is ensured. Finally, a photovoltaic solar cell set and a wind turbine are considered as case studies; the experimental data are exploited, and the resulting performance is persuasive.
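A minimal sketch of one ingredient of such a system is shown below: the firing interval of an interval type-2 fuzzy rule computed with the product t-norm. The membership values are illustrative, and this is not the NCPRT2FS itself.

```python
# Minimal sketch (not the NCPRT2FS itself): the firing interval of one interval
# type-2 fuzzy rule, from per-antecedent lower/upper memberships (product t-norm).
import numpy as np

def firing_interval(lower_memberships, upper_memberships):
    """Return (f_lower, f_upper) for a rule given memberships of each antecedent."""
    return np.prod(lower_memberships), np.prod(upper_memberships)

f_low, f_up = firing_interval([0.4, 0.7], [0.6, 0.9])
print(f_low, f_up)   # 0.28, 0.54
```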
Combination of Aronia, Red Ginseng and Shiitake Mushroom Potentiated Insulin Secretion and Reduced Insulin Resistance with Improving Gut Microbiome Dysbiosis in Insulin Deficient Type 2 Diabetic Rats
Hye Jeong Yang, Min Jung Kim, Dae Young Kwon, Da Sol Kim, Ting Zhang, Chulgyu Ha, Sunmin Park
Subject: Life Sciences, Endocrinology & Metabolomics Keywords: aronia; ginseng; mushroom; pancreatectomy; type 2 diabetes; gut microbiome; insulin secretion
The combination of freeze-dried aronia, red ginseng, ultraviolet-irradiated shiitake mushroom and nattokinase (AGM; 3.4 : 4.1 : 2.4 : 0.1) was examined to evaluate its effects on insulin resistance, insulin secretion and the gut microbiome in a non-obese type 2 diabetic animal model. Pancreatectomized (Px) rats were provided high-fat diets supplemented with one of 1) 0.5 g AGM (AGM-L), 2) 1 g AGM (AGM-H), 3) 1 g dextrin (control), or 4) 1 g dextrin with 120 mg metformin (positive-control) per kg body weight for 12 weeks. AGM (1 g) contained 6.22 mg cyanidin-3-galactose, 2.5 mg ginsenoside Rg3 and 0.6 mg β-glucan. Px rats had decreased bone mineral density in the lumbar spine and femur and decreased lean body mass in the hip and leg compared to the normal-control group, and AGM-L and AGM-H prevented these decreases. Visceral fat mass was lower in the control group than in the normal-control group, and its decrease was attenuated by AGM-L and AGM-H. HOMA-IR decreased in the order of the control, positive-control, AGM-L, AGM-H and normal-control groups. Glucose tolerance deteriorated in the control group and was improved by AGM-L and AGM-H more than in the positive-control group. Glucose tolerance is associated with insulin resistance and insulin secretion. Insulin tolerance testing indicated that insulin resistance was severe in the diabetic rats, but it improved progressively across the positive-control, AGM-L and AGM-H groups. Insulin secretion capacity, measured by hyperglycemic clamp, was much lower in the control group than in the normal-control group and improved progressively across the positive-control, AGM-L and AGM-H groups. Diabetes modulated the composition of the gut microbiome, and AGM prevented this modulation. In conclusion, AGM improved glucose metabolism by potentiating insulin secretion and reducing insulin resistance in insulin-deficient type 2 diabetic rats. The improvement of diabetic status alleviated body composition changes and prevented changes in gut microbiome composition.
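HOMA-IR, used above as the insulin-resistance index, is the standard fasting-glucose-times-fasting-insulin formula; a one-line sketch with illustrative values (not the study's data) is:

```python
# Minimal sketch (illustrative values, not the study's data): the standard HOMA-IR index
# from fasting glucose (mmol/L) and fasting insulin (uU/mL).
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

print(homa_ir(7.5, 12.0))   # 4.0, consistent with marked insulin resistance
```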
Hsa_circ_0054633 in Peripheral Blood Can be Used as a Diagnostic Biomarker of Pre-Diabetes and Type 2 Diabetes Mellitus
Muwei Li, Zhenzhou Zhao, Xuejie Li, Chuanyu Gao, Dongdong Jian, Peiyuan Hao, Lixin Rao
Subject: Medicine & Pharmacology, Cardiology Keywords: circular RNAs (circRNAs); circulating circRNA; type 2 diabetes mellitus (T2DM); pre-diabetes; microarray analysis; biomarker
The purpose of the current study was to investigate the expression characteristics of circular RNAs (circRNAs) in the peripheral blood of type 2 diabetes mellitus (T2DM) patients and their potential as diagnostic biomarkers for pre-diabetes and T2DM. In the present study, circRNAs in the peripheral blood from 6 healthy individuals and 6 T2DM patients were collected for microarray analysis. The results indicated that there were 489 differentially expressed circRNAs, of which 78 were upregulated and 411 were downregulated in the T2DM group. We then selected 5 circRNAs as candidate biomarkers under stricter screening criteria and further verified them in another cohort (control group, n=20; pre-diabetes group, n=20; T2DM group, n=20). 3 of the 5 circRNAs presented upregulated expression in the experimental groups, including 2 circRNAs with higher expression in the T2DM group than in the pre-diabetes group. Hsa_circ_0054633 was identified as having the largest area under the curve (AUC). In another independent cohort (control group, n=60; pre-diabetes group, n=63; T2DM group, n=64), the diagnostic capacity of hsa_circ_0054633 was tested. The results showed that the AUC for the diagnosis of pre-diabetes was 0.751 (95% confidence interval=[0.666-0.835], P<0.001), while it was 0.793 ([0.716-0.871], P<0.001) for the diagnosis of T2DM. After including the risk factors of T2DM, the AUC increased to 0.841 ([0.773-0.910], P<0.001) and 0.834 ([0.762-0.905], P<0.001), respectively. Hsa_circ_0054633 showed diagnostic capability for pre-diabetes and T2DM.
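A hedged sketch of how AUCs like those reported (biomarker alone, and biomarker plus risk factors) can be estimated is shown below; the expression values and risk factors are simulated, not the study's data.

```python
# Minimal sketch (not the authors' code): AUC of a single biomarker, and of a
# biomarker-plus-risk-factors model, with scikit-learn. All inputs are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LogisticRegression

y = np.array([0] * 60 + [1] * 64)                    # 0 = control, 1 = T2DM
expr = np.random.lognormal(mean=np.r_[np.zeros(60), 0.5 * np.ones(64)])  # circRNA level
risk = np.random.normal(size=(124, 3))               # hypothetical T2DM risk factors

print(roc_auc_score(y, expr))                        # biomarker alone

X = np.column_stack([expr, risk])                    # biomarker + risk factors
model = LogisticRegression(max_iter=1000).fit(X, y)
print(roc_auc_score(y, model.predict_proba(X)[:, 1]))  # combined model (in-sample AUC)
```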
Impact of Glucose-Lowering Medications on Cardiometabolic Risk in Type 2 Diabetes
Angelo Maria Patti, Ali A Rizvi, Rosaria Vincenza Giglio, Anca Pantea Stoian, Daniela Ligi, Ferdinando Mannello
Subject: Medicine & Pharmacology, Other Keywords: cardiovascular risk; dipeptidyl peptidase-4 inhibitors; glucagon like peptide-1 receptor agonists; sodium glucose cotransporter-2 inhibitors; type 2 diabetes mellitus
Type 2 Diabetes Mellitus (T2DM) is associated with a high risk of atherosclerotic cardiovascular (CV) disease. Contributing pathophysiologic factors include endothelial dysfunction caused by excessive production of reactive oxygen species (ROS), increased activity of nuclear factor kB (NFkB), altered macrophage polarization, and reduced synthesis of endothelial progenitor cells (EPC). Consequently, there can be a potentially rapid progression of the atherosclerotic disease with a higher propensity to unstable plaque, leading to increased cardiovascular mortality. Management is aimed at prevention, early diagnosis, and treatment of hyperglycemia and vascular complications. Innovative therapeutic approaches for T2DM seek to customize the antidiabetic treatment to each patient in order to optimize glucose-lowering effects, minimize hypoglycemia and adverse effects, and prevent cardiovascular events. The newer drugs (Glucagon Like Peptide-1 Receptor Agonists, GLP-1 RAs; Sodium GLucose coTransporter-2 inhibitors, SGLT2is; DiPeptidyl Peptidase-4 inhibitors, DPP4is) impact body weight, lipid parameters, and blood pressure, as well as endothelial function, inflammatory markers, markers of oxidative stress, and subclinical atherosclerosis. The present review summarizes the results of trials that evaluated the cardiovascular safety of these drugs and found them to be safe from the CV standpoint.
Preprint CASE REPORT | doi:10.20944/preprints202107.0661.v1
Missense Variation in TPP1 Gene causes Neuronal Ceroid Lipofuscinosis Type 2 in a Family from Jammu and Kashmir-India
Arshia Angural, Kalaiarasan Ponnusamy, Diksha Langeh, Mamta Kumari, Akshi Spolia, Ekta Rai, Ankush Sharma, Kamal Kishore Pandita, Swarkar Sharma
Subject: Life Sciences, Genetics Keywords: CLN2; epilepsy; Jammu and Kashmir; loss of ambulation; neuronal ceroid lipofuscinoses type 2; neuroregression; seizures; TPP1
We report the diagnosis of neuronal ceroid lipofuscinosis type 2 (CLN2), a rare hereditary neurodegenerative disease of childhood, in a four-and-a-half-year-old girl, the first child of non-consanguineous parents with no family history. Despite extensive efforts by the parents, her clinical condition remained undiagnosed and unmanaged until recently. Our published "Bottom-up Approach", based on comprehensive and multidisciplinary clinical, pathological, radiographical and genetic evaluations, played a key role in the diagnosis of the disease. Detailed analyses involving next-generation sequencing confirmed a missense variation NC_00011.10:g.6616374C>T (NP_000382.3:p.Arg339Gln; rs765380155) in exon 8 of the TPP1 gene. In silico analyses predicted it to be highly pathogenic. Further family screening of the identified variation through Sanger sequencing (including both her unaffected parents and her asymptomatic one-year-old younger sister) revealed a perfect autosomal recessive segregation in the family. This study is the first case report of classic CLN2 from Jammu and Kashmir, India. It also indicates the effectiveness of our "Bottom-up Approach" in understanding rare disorders in low-resource regions and the importance of timely diagnosis. Had the diagnosis been established earlier in the proband, the family might have benefitted, at least with respect to their second child, through counselling programmes.
Notes on $q$-Hermite Based Unified Apostol Type Polynomials
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: $q$-Hermite type polynomials, $q$-unified Apostol type polynomials, $q$-Hermite based unified Apostol type polynomials.
In this article, a new class of $q$-Hermite based unified Apostol type polynomials is introduced by means of a generating function and series representation. Several important formulas and recurrence relations for these polynomials are derived via different generating methods. We also introduce a $q$-analog of the Stirling numbers of the second kind of order $\nu$, by which we construct a relation involving the aforementioned polynomials.
On Approximation Methods in Some Geodesic Spaces without the Nice Projection Property
Pongsakorn Yotkaew
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: Browder's type iteration; CAT(1) space; fixed point; Halpern's type iteration; Moudafi's viscosity type method; nonexpansive mapping
The purpose of this paper is to prove strong convergence theorems for Browder's type iterations and Halpern's type iterations of a family of nonexpansive mappings in a complete geodesic space with curvature bounded above by a positive number. Moudafi's viscosity type methods are also discussed without the nice projection property.
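For orientation, these two schemes are usually written as follows in a geodesic space, where $\oplus$ denotes the geodesic convex combination, $T$ is a nonexpansive mapping, $u$ is a fixed anchor point, and $t_n,\alpha_n\in(0,1)$; this is the standard textbook form, and the paper's precise parameter conditions and its viscosity variant are not reproduced here:
$$ x_n = t_n u \oplus (1-t_n)\,T x_n \quad\text{(Browder type)}, \qquad x_{n+1} = \alpha_n u \oplus (1-\alpha_n)\,T x_n \quad\text{(Halpern type)}. $$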
Extension of eigenvalue problems on Gauss map of ruled surfaces
Miekyung Choi, Young Ho Kim
Subject: Mathematics & Computer Science, Geometry & Topology Keywords: ruled surface; pointwise $1$-type Gauss map; generalized $1$-type Gauss map; conical surface of $G$-type
A finite-type immersion or smooth map is a useful tool for classifying submanifolds of Euclidean space, arising from the eigenvalue problem of the immersion. The notion of a generalized 1-type Gauss map is a natural generalization of 1-type in the usual sense and of pointwise 1-type. We classify ruled surfaces with generalized 1-type Gauss map as part of a plane, a circular cylinder, a cylinder over a base curve of infinite type, a helicoid, a right cone or a conical surface of $G$-type.
Patient and Provider Dilemmas of type 2 Diabetes Self-Management: a Qualitative Study in Socioeconomically Disadvantaged Communities in Stockholm
Juliet Aweko, Jeroen De Man, Pilvikki Absetz, Claes-Göran Östenson, Stefan Swartling Peterson, Helle Mölsted Alvesson, Meena Daivadanam
Subject: Behavioral Sciences, Other Keywords: Self-management, type 2 diabetes, immigrants, health systems, chronic diseases, qualitative study, lifestyle change, thematic analysis, socioeconomically disadvantaged, Stockholm
Studies comparing provider and patient views and experiences of self-management within primary healthcare are particularly scarce in disadvantaged settings. In this qualitative study, patient and provider perceptions of self-management were investigated in five socio-economically disadvantaged communities in Stockholm. Twelve individual interviews and three group interviews were conducted. Semi-structured interview guides included questions on perceptions of diabetes diagnosis, diabetes care services available at primary health care centers, patient and provider interactions, and self-management support. Data was analysed using thematic analysis. Two overarching themes were identified. These were characterized by inherent dilemmas representing confusions and conflicts that patients and providers experienced in their daily life or practice respectively: adopting and maintaining new routines through practical and appropriate lifestyle choices (patients); and balancing expectations and pre-conceptions of self-management (providers). Patients found it difficult to tailor information and lifestyle advice to fit their daily life. Healthcare providers recognized that patients were in need of support to change behavior, but saw themselves as inadequately equipped to deal with the different cultural and social aspects of self-management. This study highlights patient and provider dilemmas that influence the interaction and collaboration between patients and providers with respect to communication and uptake of self-management advice.
Logarithmic-Time Addition for BIT-Predicate With Applications for a Simple and Linear Fast Adder And Data Structures
Juan Ramírez
Subject: Mathematics & Computer Science, Logic Keywords: Structuralism; Set Theory; Type Theory; Arithmetic Model; Data Type; Tree; Group
A construction of the systems of natural and real numbers is presented in Zermelo-Fraenkel Set Theory that allows for simple proofs of the properties of these systems, as well as practical and mathematical applications. A practical application is discussed in the form of a Simple and Linear Fast Adder (Patent Pending). Applications to finite group theory and analysis are also presented. A method is illustrated for finding the automorphisms of any finite group $G$, which consists of defining a canonical block form for finite groups. Examples are given to illustrate the procedure for finding all groups of $n$ elements along with their automorphisms, and the canonical block form of the symmetry group $\Delta_4$ is provided along with its automorphisms. The construction of the natural numbers is naturally generalized to provide a simple and sound construction of the continuum with order and addition properties, in which a real number is an infinite set of natural numbers. A basic outline of analysis is proposed with a fast derivative algorithm. Under this representation, a countable sequence of real numbers is represented by a single real number, and an infinite $\infty\times\infty$ real-valued matrix is represented by a single real number. A real function is represented by a set of real numbers, and a countable sequence of real functions is also represented by a set of real numbers. In general, mathematical objects can be represented using the smallest possible data type, and these representations are calculable. In the last section, mathematical objects of all types are assigned to tree structures in a proposed type hierarchy.
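As background for the BIT-predicate viewpoint, the following minimal Python sketch shows the classical Ackermann correspondence between hereditarily finite sets and natural numbers, under which set membership is exactly the BIT predicate. It illustrates that standard encoding only; it is not the author's specific construction or the fast-adder circuit, and the function names are ours.

```python
def bit(i, n):
    """BIT predicate: True iff bit i of the natural number n is 1."""
    return (n >> i) & 1 == 1

def ack(s):
    """Ackermann encoding: hereditarily finite set -> natural number.
    Membership x in s corresponds exactly to bit(ack(x), ack(s))."""
    return sum(1 << ack(x) for x in s)

# 3 = {0, 1, 2} in the von Neumann construction of the naturals
zero  = frozenset()
one   = frozenset({zero})
two   = frozenset({zero, one})
three = frozenset({zero, one, two})

print(ack(three))                 # 11 = 2^0 + 2^1 + 2^3
print(bit(ack(two), ack(three)))  # True: 2 is a member of 3
```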
Galway Point Mutation (FecXG) in the Bone Morphogenetic Protein 15 Gene (BMP15) Is Associated With Prolificacy in the Sudanese Desert Sheep Ecotypes
Amani Z. Abdelgader, Lutfi M. A. Musa, Mitsuru Tsubo, Faisal M. El-Hag, Aubai O. Saleem, Yasunori Kurosaki, Khaleel I. Jawasreh, Mohammed-Khair A. Ahmed
Subject: Biology, Anatomy & Morphology Keywords: BMP15 gene; Ewe; Sudanese Sheep; Residue; Wild type; Mutant type; dryland
This study tested the association between the FecXG point mutation located in exon 2 of the BMP15 gene and the prolificacy of the Dubasi, Shugor and Watish sheep ecotypes under dryland farming in Sudan. Blood samples were randomly collected from 100 unrelated ewes (Dubasi: n = 30, Shugor: n = 30, Watish: n = 40). The bone morphogenetic protein 15 (BMP15) gene was amplified and analysed using PCR-RFLP. Two genotypes (heterozygous and wild type) were found in all studied ecotypes. The calculated total genotype frequencies of the BB, Bb and bb genotypes were 0.31, 0.69 and 0.00, respectively, while allele frequencies were 0.66 for B and 0.34 for b. Litter size was influenced by BMP15 genotype, parity and subtype (p<0.05), being highest for Watish and the 4th parity. Alignment of the BMP15 samples with the database reference sequence revealed that most sequence regions were identical except for one variable nucleotide at position 111 bp, where a guanine (G) was replaced by adenine (A) in the Watish and Shugor samples. All amino acids were the same at residue 275, and the Watish and Shugor ecotypes are more closely related. The study concluded that the presence of one copy of the FecXG point mutation of the BMP15 gene increased litter size by 0.17 lambs in the studied ecotypes.
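The reported allele frequencies follow directly from the genotype frequencies by standard gene counting; as a quick check (rounded as in the abstract):
$$ p_B = f_{BB} + \tfrac{1}{2} f_{Bb} = 0.31 + \tfrac{1}{2}(0.69) \approx 0.66, \qquad p_b = f_{bb} + \tfrac{1}{2} f_{Bb} = 0.00 + \tfrac{1}{2}(0.69) \approx 0.34. $$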
Classification Theorems in Minkowski 3-space
Subject: Mathematics & Computer Science, Geometry & Topology Keywords: ruled surface; null scroll; Minkowski space; pointwise 1-type Gauss map; generalized $1$-type Gauss map; conical surface of $G$-type
By generalizing the notion of a pointwise 1-type Gauss map, the generalized 1-type Gauss map has recently been introduced. Without any additional assumption, we classify all possible ruled surfaces with generalized 1-type Gauss map in 3-dimensional Minkowski space. In particular, null scrolls do not have a proper generalized 1-type Gauss map; in fact, their Gauss map is harmonic.
Role of Structure and Composition on the Performances of p-Type Tin Oxide Thin-Film Transistors Processed at Low-Temperatures
Raquel Barros, Kachirayil J. Saji, João C. Waerenborgh, Pedro Barquinha, Luis Pereira, Rodrigo Martins, Elvira Fortunato
Subject: Materials Science, Other Keywords: p-type TFT; p-type oxide semiconductors; SnO electrical properties; oxide structure analysis
Online: 3 January 2019 (10:13:59 CET)
This work reports the role of structure and composition in determining the performance of p-type SnOx TFTs deposited by rf magnetron sputtering at room temperature and post-annealed up to 200 °C. Films were grown at oxygen partial pressures (Opp) between 0% and 20%, but p-type conduction was only observed between 2.8% and 3.8%. The roles of structure and composition were evaluated by XRD and Mössbauer spectroscopic studies. The study allowed us to identify the best phases/compositions and thicknesses (around 12 nm) for producing TFTs in a bottom-gate configuration, on glass coated with conductive Indium Tin Oxide followed by an Aluminium Titanium Oxide dielectric layer, with a saturation mobility of 4.6 cm2V−1s−1 and an on-off ratio above 7 × 104, operating in enhancement mode with a saturation voltage of −10 V.
A New Class of Integrals Involving Extended Mittag–Leffler Functions
Gauhar Rahman, Abdul Ghaffar, Kottakkaran Sooppy Nisar, Shahid Mubeen
Subject: Mathematics & Computer Science, Analysis Keywords: extended Mittag–Leffler function; Wright-type hypergeometric functions; extended Wright-type hypergeometric functions
The main aim of this paper is to establish two generalized integral formulas involving the extended Mittag–Leffler function, based on the well-known Lavoie and Trottier integral formula; the obtained results are expressed in terms of the extended Wright-type function. We also establish certain special cases of our main results.
Pharmacophore Based Screening & Modification of Amiloride Analogs for Targeting the NhaP-type Cation-Proton Antiporter in Vibrio cholerae
Muntahi Mourin, Arittra Bhattacharjee, Alvan Wai, George Hausner, Joe O'Neil, Pavel Dibrov
Subject: Life Sciences, Biochemistry Keywords: NhaP2-type cation-proton antiporter; Vibrio cholerae; Amiloride Analogs; Inhibitors against NhaP-type antiporters
The genome of Vibrio cholerae contains three structural genes for the NhaP-type cation-proton antiporter paralogues Vc-NhaP1, 2 and 3, which mediate the exchange of K+ and/or Na+ for protons across the membrane. Based on phenotype analysis of chromosomal Vc-NhaP1, 2 and 3 triple deletion mutants, we suggested that the Vc-NhaP paralogues might play a role in the Acid Tolerance Response (ATR) of V. cholerae as it passes through the gastric acid barrier of the stomach. Comparison of the biochemical properties of the Vc-NhaP isoforms revealed that Vc-NhaP2 is the most active of the three paralogues. The Vc-NhaP2 antiporter is therefore a plausible therapeutic target for developing novel inhibitors of these ion exchangers. Our structural and mutational analysis of Vc-NhaP2 identified a putative cation-binding pocket formed by antiparallel extended regions of two transmembrane segments (TMSs V/XII) along with TMS VI. Molecular Dynamics (MD) simulations suggested that the flexibility of TMS V/XII is crucial for the intra-molecular conformational events in Vc-NhaP2. In this study, we developed putative Vc-NhaP2 inhibitors from amiloride analogs (AAs); amiloride is a potent inhibitor of the human Na+/H+ exchanger-1 (NHE1). Based on pharmacokinetic properties and potential binding affinity scores, we chose six AAs showing high binding affinity for Vc-NhaP2. In silico, the six AAs interacted with functionally important amino acid residues located in TMSs III, IV, V, VI, VIII and IX, either from the cytoplasmic side (three AAs) or the periplasmic side (three AAs) of Vc-NhaP2. Four AAs were modified to reduce their toxicity relative to the original AAs. Molecular docking of the modified AAs revealed promising binding. The four selected compounds interacted with functionally important amino acid residues located on the cytoplasmic side of TMS VI, the extended chain region of TMSs V and XII, and the loop region between TMSs VIII and IX. Molecular dynamics simulations revealed that binding of the selected compounds destabilized Vc-NhaP2 and altered the flexibility of the functionally important TMS VI.
COVID-19: Considerations for Children and Adolescents with Diabetes
Devi Dayal
Subject: Medicine & Pharmacology, Pediatrics Keywords: Coronavirus disease 2019; COVID-19; children; diabetes; type 1 diabetes; type 2 diabetes; recommendations
Recent reports suggest that the clinical course of coronavirus disease 2019 (COVID-19) in previously healthy children is usually milder as compared to adults. However, children with comorbid conditions such as diabetes are at increased risk of infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) as well as morbidity and mortality due to COVID-19. Experience in adults with diabetes shows that they are prone to faster metabolic decompensation, develop diabetes-related complications, and have a poor prognosis when hospitalized with COVID-19. Data on children are limited. The aim of this mini-review is to discuss the possible risks to children and adolescents with diabetes during the current pandemic and the special considerations in management in those affected with COVID-19. The challenges for children who develop new-onset type 1 diabetes during the COVID-19 lockdown, especially in accessing healthcare, are also discussed.
Type 2 Diabetes Mellitus Increases The Risk of Late-Onset Alzheimer's Disease: Ultrastructural Remodeling of the Neurovascular Unit and Diabetic Gliopathy
Melvin Hayden
Subject: Medicine & Pharmacology, Clinical Neurology Keywords: Aging; Alzheimer's disease; brain insulin resistance; db/db diabetic mouse model; diabetic cognopathy; insulin resistance; metabolic syndrome; mixed dementia; obesity; type 2 diabetes mellitus
Type 2 diabetes mellitus (T2DM) and late-onset Alzheimer's disease-dementia (LOAD) are increasing in global prevalence, and current predictions indicate they will continue to increase over the coming decades. These increases may be a result of the concurrent rises in obesity and aging. T2DM is associated with cognitive impairment linked to metabolic factors and increases cellular vulnerability, contributing to the age-related risk of LOAD. This review addresses possible mechanisms due to obesity and aging, the multiple intersections between T2DM and LOAD, and mechanisms for the continuum of progression. Multiple ultrastructural images from female diabetic db/db models are utilized to demonstrate marked cellular remodeling of mural and glial cells and to support the discussion of functional changes in T2DM. Throughout this review, we endeavor to demonstrate how T2DM increases the vulnerability of the brain's neurovascular unit (NVU), neuroglia and neurons. Five major intersecting links are considered: i. aging (chronic age-related diseases); ii. metabolic (hyperglycemia, advanced glycation end-products and their receptor (AGE/RAGE) interactions, and hyperinsulinemia-insulin resistance, a linking linchpin); iii. oxidative stress (reactive oxygen-nitrogen species); iv. inflammation (peripheral macrophages and central brain microglia); v. vascular (macrovascular accelerated atherosclerosis with vascular stiffening and microvascular NVU/neuroglial remodeling), with resulting impaired cerebral blood flow.
Recent Expeditious Growth of Type 1 Diabetes in the Gulf Arab Countries
Mohamed Jahromi, Mona Al Sheikh, Jaakko Tuomilehto
Subject: Life Sciences, Molecular Biology Keywords: Type 1 diabetes; human leukocyte antigen; Kuwait Type 1 Diabetes Study; Islet autoantibodies; Insulin; prediction
The incidence of Type 1 Diabetes (T1D) in the Arab world, particularly the oil- and gas-rich Gulf Cooperation Council (GCC) countries, has more than doubled in the last twenty years. There is therefore a dire need for careful, systematic familial cohort studies, especially in high-risk populations. Several immunogenetic factors affect the pathogenesis of the disease: genes in the human leukocyte antigen (HLA) region account for the major genetic susceptibility, and triggering agents initiate disease onset through destruction of pancreatic β-cells. The autoantibodies against glutamic acid decarboxylase (GADA), insulinoma antigen-2 (IA-2A), insulin (IAA), and zinc transporter-8 (ZnT-8A) comprise the most reliable biomarkers for T1D in both children and adults. Although three of the GCC countries, namely Kuwait, Saudi Arabia and Qatar, are among the top 10 countries with the highest incidence rates of T1D, no proper diagnostic and prediction tools have been applied in the region. Understanding the disease sequelae in a homogeneous gene pool with high consanguinity in the GCC could help solve the challenges in understanding pathogenesis, as well as hasten the prevention of T1D. Arab states must incorporate T1D prediction and intervention policies on a war footing to minimize the burden of this serious disease.
Intravenous Immunoglobulin for Treating Bacterial Infections: One More Mechanism of Action
Teiji Sawa, Mao Kinoshita, Keita Inoue, Junya Ohara, Kiyoshi Moriyama
Subject: Life Sciences, Immunology Keywords: immunoglobulin; IVIG; LcrV; PcrV; translocation; type III secretory toxin; type III secretion system; V-antigen
The mechanisms underlying the effects of γ-globulin therapy for bacterial infections are thought to involve bacterial cell lysis via complement activation, phagocytosis via bacterial opsonization, toxin neutralization, and antibody-dependent cell-mediated cytotoxicity. Nevertheless, recent advances in the study of pathogenicity in gram-negative bacteria have raised the possibility of an association between γ-globulin and bacterial toxin secretion. Over time, new toxin secretion systems like the type III secretion system have been discovered in many pathogenic gram-negative bacteria. With this system, the bacterial toxins are directly injected into the cytoplasm of the target cell through a special secretory apparatus without any exposure to the extracellular environment and, therefore, with no opportunity for antibodies to neutralize the toxin. However, because antibodies against the V-antigen, which is located on the needle-shaped tip of the bacterial secretion apparatus, can inhibit toxin translocation, this raises the hope that the toxin might be susceptible to antibody targeting. Because multi-drug resistant bacteria are now prevalent, inhibiting this secretion mechanism is attractive as an alternative or adjunctive therapy against lethal bacterial infections. Thus, it would not be unreasonable to define the blocking effect of anti-V-antigen antibodies as the fifth mechanism for immunoglobulin action against bacterial infections.
Canonical Description of Group Theory: A Linear Order on All Finite Groups
Juan Pablo Ramirez
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Finite Group; Finite Permutation; Set Theory; Mathematical Structuralism; Type Theory; Tree; Data Type; Benacerraf's Identification Problem
We provide an axiomatic base for the set of natural numbers that has been proposed as a canonical construction, and use this definition of $\mathbb N$ to obtain several results in finite group theory. Every finite group $G$ is well represented by a natural number $N_G$; if $N_G=N_H$ then $H$ and $G$ are in the same isomorphism class. We obtain a linear order on the quotient space of isomorphism classes of finite groups that is well behaved with respect to cardinality. If $H,G$ are two finite groups such that $|H|=m<n=|G|$, then $H<\mathbb Z_n\leq G\leq\mathbb Z_{p_1}^{n_1}\oplus\mathbb Z_{p_2}^{n_2}\oplus\cdots\oplus\mathbb Z_{p_k}^{n_k}$, where $n=p_1^{n_1}p_2^{n_2}\cdots p_{k}^{n_k}$ is the prime factorization of $n$. We find a canonical order for the objects of $G$ and define equivalent objects of $G$, thus finding the automorphisms of $G$. The Cayley table of $G$ takes a canonical block form, and we are provided with a minimal set of independent equations that define the group. We show how to find all groups of order $n$ and how to order them. We give examples using all groups of order smaller than $10$, and we find the canonical block form of the symmetry group $\Delta_4$. In the next section, we extend our results to the infinite case, which defines a real number as an infinite set of natural numbers. A real function is a set of real numbers, and a sequence of real functions $f_1,f_2,\ldots$ is likewise well represented by a set of real numbers. In general, we represent mathematical objects using the smallest possible data type. In the last section, mathematical objects are assigned to tree structures. We conclude with brief comments on type theory and future work on computational aspects of these representations.
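One simple way to realize an isomorphism-invariant, comparable key for a finite group, in the spirit of ordering isomorphism classes, is the brute-force sketch below. It is only an illustration of the idea (minimize the relabelled Cayley table), not the paper's canonical block form or its encoding as a natural number, and it is practical only for very small groups.

```python
from itertools import permutations

def canonical_key(table):
    """Isomorphism-invariant key for a finite group given by its Cayley
    table (table[i][j] = index of g_i * g_j, indices 0..n-1).
    The key is the lexicographically smallest relabelled table, so two
    groups share a key exactly when they are isomorphic."""
    n = len(table)
    best = None
    for perm in permutations(range(n)):        # perm[new_label] = old_label
        inv = [0] * n
        for new, old in enumerate(perm):
            inv[old] = new                     # inv[old_label] = new_label
        relab = tuple(inv[table[perm[i]][perm[j]]]
                      for i in range(n) for j in range(n))
        if best is None or relab < best:
            best = relab
    return best

# Z4 and the Klein four-group get different keys, hence a strict order.
z4    = [[(i + j) % 4 for j in range(4)] for i in range(4)]
klein = [[i ^ j for j in range(4)] for i in range(4)]
print(canonical_key(z4) != canonical_key(klein))   # True
```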
Multifarious Results for q-Hermite Based Frobenius Type Eulerian Polynomials
Waseem Khan, Idrees Ahmad Khan, Mehmet Acikgoz, Ugur Duran
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Hermite polynomials, Frobenius type Eulerian polynomials, Hermite based Frobenius type Eulerian polynomials, q-numbers, q-polynomials.
In this paper, a new class of q-Hermite based Frobenius type Eulerian polynomials is introduced by means of a generating function and series representation. Several fundamental formulas and recurrence relations for these polynomials are derived via different generating methods. Furthermore, diverse correlations with the q-Apostol-Bernoulli polynomials, the q-Apostol-Euler polynomials, the q-Apostol-Genocchi polynomials and the q-Stirling numbers of the second kind are also established by means of their generating functions.
A Novel Kind of Hermite Based Frobenius Type Eulerian Polynomials
Waseem Ahmad Khan, Kottakkaran Sooppy Nisar, Mehmet Acikgoz, Ugur Duran
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: hermite polynomials; Frobenius type Eulerian polynomials; Hermite-based Frobenius type Eulerian polynomials; summation formulae; symmetric identities
We introduce a new kind of extended Hermite-based Frobenius type Eulerian polynomials and then derive diverse explicit and implicit summation equations, including some symmetric formulas, by utilizing the series manipulation method. Multifarious summation formulas and identities given earlier for some well-known polynomials, such as Eulerian polynomials and Frobenius type Eulerian polynomials, are generalized.
Viral Innate Immune Evasion and the Pathogenesis of Emerging RNA Virus Infections
Tessa Nelemans, Marjolein Kikkert
Subject: Life Sciences, Virology Keywords: positive-sense single-stranded rna viruses; innate immune evasion; type 1 interferon; viral pathogenesis; type 3 interferon
Positive-sense single-stranded RNA (+ssRNA) viruses comprise many (re-)emerging human pathogens that pose a public health problem. Our innate immune system, and in particular the interferon response, forms an important first line of defense against these viruses. Given their genetic flexibility, these viruses have developed multiple strategies to evade the innate immune response in order to optimize their replication capacity. Many molecular mechanisms of innate immune evasion by +ssRNA viruses have already been identified. However, research addressing the effect of host innate immune evasion on the pathology caused by viral infection is less prevalent in the literature, though very relevant and interesting. Since interferons have been implicated in inflammatory diseases and immunopathology in addition to their protective role in infection, antagonizing the immune response may have an ambiguous effect on the clinical outcome of the viral disease. Therefore, this review discusses what is currently known about the role of interferons and host immune evasion in the pathogenesis of emerging viruses belonging to the coronaviruses, alphaviruses and flaviviruses.
CoMnO2-Decorated Polyimide-Based Carbon Fiber Electrodes for Wire-Type Asymmetric Supercapacitor Applications
Young-Hun Cho, Jae-Gyoung Seong, Jae-Hyun Noh, Da-Young Kim, Yong-Sik Chung, Tae Hoon Ko, Byoung-Suhk Kim
Subject: Materials Science, Nanotechnology Keywords: carbon fiber; wire-type; CoMn3O4; supercapacitor electrodes
In this work, we report carbon fiber-based wire-type asymmetric supercapacitors (ASCs). Highly conductive carbon fibers were prepared by carbonization and graphitization using polyimide (PI) as the carbon fiber precursor. To assemble the ASC device, CoMnO2-coated and Fe2O3-coated carbon fibers were used as the cathode and anode materials, respectively. FE-SEM analysis confirmed that the CoMnO2-coated carbon fiber electrode exhibited porous, hierarchically interconnected nanosheet structures, depending on the amount of ammonium persulfate (APS) added as an oxidizing agent, and that the Fe2O3-coated carbon fiber electrode showed a uniform distribution of porous Fe2O3 nanorods over the surface of the carbon fibers. The nanostructured CoMnO2 was deposited directly onto the carbon fibers by a chemical oxidation route without high-temperature treatment. In particular, the CoMnO2-coated carbon fiber prepared with 6 mmol APS presented enhanced electrochemical activity, probably due to its porous morphology and good conductivity. Further, to reduce the interfacial contact resistance and improve the adhesion between the transition metal nanostructures and the carbon fibers, the carbon fibers were pre-coated with a Ni seed layer using an electrochemical deposition method. The fabricated ASC device delivered a specific capacitance of 221 F g-1 at 0.7 A g-1 and good rate capability of 34.8% at 4.9 A g-1. Moreover, the wire-type device displayed a superior energy density of 60.16 Wh kg-1 at a power density of 490 W kg-1 and excellent capacitance retention of 95% over 3,000 charge/discharge cycles.
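As a rough consistency check (our arithmetic, under the assumption that the quoted specific capacitance and energy density are normalized to the same mass), the usual relation $E = \tfrac{1}{2}CV^2$ applied to the reported figures implies a cell voltage of about
$$ V \approx \sqrt{\frac{2E}{C}} = \sqrt{\frac{2\times 60.16\ \mathrm{Wh\,kg^{-1}}\times 3600\ \mathrm{J\,Wh^{-1}}}{221\times 10^{3}\ \mathrm{F\,kg^{-1}}}} \approx 1.4\ \mathrm{V}, $$
a typical operating window for an aqueous asymmetric device; the actual voltage window is not stated in the abstract.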
Fundamental Elucidation of HLA in Type 1 Diabetes of Arab Populations
Mohamed Mirza Jahromi
Subject: Medicine & Pharmacology, Other Keywords: HLA; type 1 diabetes; ethnicity; screening; haplotype
Aims/Hypothesis: Type 1 diabetes is an immune-mediated disease with destruction of the pancreatic β-cells, a process that is conditioned by multiple genes and other factors, with HLA as the major susceptibility locus. Significant variations in HLA genetic susceptibility to type 1 diabetes between Caucasian, African, Asian and other ethnic groups may have led to the global variation in the incidence of type 1 diabetes, and type 1 diabetes is characterized on the basis of HLA identification. In this chapter we discuss global variations in HLA genetic susceptibility to type 1 diabetes, with particular attention to Arab populations. Methods: Haplotype configurations of HLA class I A, B, C and class II DR/DQ/DP were studied in Caucasian, African, Asian and Arab populations to see whether they are responsible for the exponential rise in the rate of type 1 diabetes. Results: Although Arabs have some of the highest global incidence and prevalence rates of type 1 diabetes, there is unfortunately a dearth of information regarding HLA genetic susceptibility to type 1 diabetes in the Arab world. HLA haplotype configurations contribute to its risk value; however, the few available studies include examples of misjudging HLA risk according to HLA alleles rather than haplotypes. Conclusion: To date, HLA remains central to the characterization of type 1 diabetes, and ethnic differences in HLA characteristics are responsible for variation in type 1 diabetes. Although Arab populations have contributed heavily to the rising burden of type 1 diabetes, there is a significant dearth of studies on HLA in Arab populations. Any future prediction, prevention or cure of the disease will be based on HLA genetics, so there is a dire need for systematic HLA screening of Arab populations with type 1 diabetes, identification of Arab HLA risk values, and identification of those who are prone to developing the disease.
Membranous Glomerulonephritis: Overview of the Role of Serum and Urine Biomarkers in the Management
Sadiq Mu'azu Maifata, Rafidah Hod, Nor Fadhina Zakaria, Fauzah Abd Ghani
Subject: Medicine & Pharmacology, General Medical Research Keywords: M-type phospholipase A2 receptor; Thrombospondin type-1 domain-containing protein 7A; Retinol-binding protein; Beta-2 microglobulin; membranous glomerulonephritis; neutral endopeptidase
Detection of PLA2R and THSD7A among primary membranous glomerulonephritis (MGN) patients has transformed diagnosis, treatment monitoring and prognosis. Anti-PLA2R can be detected in 70-90% of primary MGN patients, and anti-THSD7A in 2-3% of anti-PLA2R-negative primary MGN patients, depending on the technique used. Serum and urine samples are minimally invasive and non-invasive, respectively, and can detect the presence of anti-PLA2R and anti-THSD7A with high sensitivity and specificity, which is significant for patient monitoring and prognosis and preferable to exposing patients to frequent biopsy, an invasive procedure. Different techniques for detecting PLA2R and THSD7A in patients' urine and sera were reviewed with the aim of providing newer and alternative techniques. We propose the use of these biomarkers (PLA2R and THSD7A) in making the diagnosis, deciding on treatment and following up patients with primary MGN. We also review other prognostic renal biomarkers, such as retinol-binding protein (RBP) and beta-2 microglobulin, in order to detect the progression of renal damage for early intervention.
Interplay between Autophagy and Herpes Simplex Virus Type 1: ICP34.5, One of the Main Actors
Inés Ripa, Sabina Andreu, José Antonio López-Guerrero, Raquel Bello-Morales
Subject: Biology, Other Keywords: ICP34.5; autophagy; herpes simplex virus type 1; neurovirulence
Herpes simplex virus type 1 (HSV-1) is a neurotropic virus that may occasionally spread to the central nervous system (CNS), being the most common cause of sporadic encephalitis. One of the main neurovirulence factors of HSV-1 is the protein ICP34.5, which initially seemed to be relevant only in neuronal infections but can also promote viral replication in non-neuronal cells. New ICP34.5 functions have been discovered in recent years, and some of them have been questioned. This review describes the mechanisms by which ICP34.5 controls cellular antiviral responses and debates its most controversial functions. One of the most discussed roles of ICP34.5 is autophagy inhibition. Although autophagy is considered a defense mechanism against viral infections, current evidence suggests that this antiviral function is only one side of the coin. Different types of autophagic pathways interact with HSV-1, impairing or enhancing the infection, and both the virus and the host cell modulate these pathways to tip the scales in their own favor. In this review, we summarize recent progress on the interplay between autophagy and HSV-1, focusing on the intricate role of ICP34.5 in modulating this pathway to win the battle against cellular defenses.
Phylogenomic Placement of American Southwest-Associated Clinical and Veterinary Isolates Expands Evidence for Distinct Cryptococcus gattii VGVI
Juan Monroy-Nieto, Jolene Bowers, Parker Montfort, Guillermo Adame, Constanza Giselle Taverna, Hayley Yaglom, Jane E. Sykes, Shane Brady, A. Brian Mochon, Wieland Meyer, Kenneth Komatsu, David M. Engelthaler
Subject: Medicine & Pharmacology, Other Keywords: Cryptococcus; Whole-Genome Sequencing; VGVI; phylogenomics; Molecular Type
Whole-genome sequencing has advanced our understanding of the population structure of the pathogenic species complex Cryptococcus gattii, which has allowed for the phylogenomic specification of previously described major molecular type groupings and novel lineages. Recently, isolates collected in Mexico in the 1960s were determined to be genetically distant from other known molecular types and were classified as VGVI. We sequenced four clinical isolates and one veterinary isolate collected in the southwestern U.S. and Argentina during 2012-2021. Phylogenomic analysis groups these genomes with those of the Mexican VGVI isolates, expanding VGVI into a clade and establishing this molecular type as a clinically important population. These findings also potentially expand the known Cryptococcus ecological range with a previously unrecognized endemic area.
Does Botulinum Toxin Type A Improve Mandibular Motion and Muscle Sensibility in Myofascial Pain TMD Subjects? A Randomized, Controlled Clinical Trial
Rodrigo Lorenzi Poluha, Célia Mariza Rizzatti-Barbosa, Natalia Alvarez Pinzón, Bruno Rodrigues Da Silva, Andre Mariz Almeida, Malin Ernberg, Ana Cristina Manso, Leonardo Rigoldi Bonjardim, Giancarlo De la Torre Canales
Subject: Medicine & Pharmacology, Dentistry Keywords: Botulinum toxin type A; Myofascial pain; Temporomandibular disorders
Online: 9 June 2022 (10:59:51 CEST)
OBJECTIVE: To demonstrate whether botulinum toxin type A (BoNT-A) improves mandibular range of motion and muscle sensibility to palpation in refractory myofascial pain (MFP) patients. METHODS: Eighty consecutive female subjects with refractory MFP were randomly divided into four equal groups (n=20): BoNT-A low (BoNTA-L: 10 U in each temporalis and 30 U in each masseter), BoNT-A medium (BoNTA-M: 20 U in each temporalis and 50 U in each masseter), BoNT-A high (BoNTA-H: 25 U in each temporalis and 75 U in each masseter) and 0.9% saline solution (SS, placebo control group: 0.4 mL in each temporalis and 0.6 mL in each masseter). Clinical measurements of mandibular movement included pain-free opening, maximum unassisted and assisted opening, and right and left lateral movements. Palpation tests were performed bilaterally on the masseter and temporalis muscles. Results were expressed as median, minimum, maximum, and mean ± standard deviation (SD). The Chi-Square test was used to compare differences among groups, and a 5% probability level was considered significant in all tests. RESULTS: Regardless of dose, all parameters of mandibular range of motion improved significantly after 180 days in the BoNT-A groups compared to the control group. Pain on muscle palpation was significantly reduced in all BoNT-A groups, regardless of dose, compared to the control group after 28 and 180 days of treatment. CONCLUSIONS: Independent of dose, BoNT-A improved mandibular range of motion and muscle sensibility to palpation in refractory MFP patients compared to SS injections.
Accuracy Assessment of the GlucoMen® Day CGM System in Individuals with Type 1 Diabetes: A Pilot Study
Daniel A Hochfellner, Amra Simic, Marlene T Taucher, Lea S Sailer, Julia Kopanz, Tina Pöttler, Julia K Mader
Subject: Medicine & Pharmacology, Other Keywords: Diabetes Technology; CGM; Accuracy; Type 1 Diabetes; Sustainability
The aim of this study was to evaluate the accuracy and usability of a novel continuous glucose monitoring (CGM) system designed for needle-free insertion and reduced environmental impact. We assessed sensor performance of two GlucoMen® Day CGM systems worn simultaneously by eight participants with type 1 diabetes. Self-monitoring of blood glucose (SMBG) was performed regularly over 14 days at home. Participants underwent two standardized 5-hour meal challenges with frequent plasma glucose (PG) measurements using a laboratory reference instrument at the research center. When comparing CGM to PG, the overall mean absolute relative difference (MARD) was 9.7 [2.6-14.6]%. The overall MARD of CGM vs SMBG was 13.1 [3.5-18.6]%. In the consensus error grid (CEG) analysis, 98% of both CGM/PG and CGM/SMBG pairs were in the clinically acceptable zones A and B. The analysis confirms that GlucoMen® Day CGM meets the clinical requirements for state-of-the-art CGM. The needle-free insertion technology is well tolerated by users and reduces medical waste compared to conventional CGM systems.
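MARD is simply the mean of the absolute relative differences between paired sensor and reference readings, expressed as a percentage. A minimal sketch using the standard definition follows; the glucose values shown are hypothetical, not data from this study.

```python
import numpy as np

def mard(cgm, reference):
    """Mean absolute relative difference (%) between paired CGM and
    reference glucose readings (same units, e.g. mg/dL)."""
    cgm = np.asarray(cgm, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(cgm - reference) / reference)

# hypothetical paired readings, mg/dL
cgm_vals = [102, 145, 180, 95, 250]
ref_vals = [110, 150, 170, 100, 240]
print(f"MARD = {mard(cgm_vals, ref_vals):.1f} %")
```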
Mesenchymal Stem Cells Remodeling of Adsorbed Type I Collagen – Effect of Collagen Oxidation
Regina Komsa Penkova, Galya Stavreva, Stanimir Kyurkchiev, Kalina Belemezova, Svetla Todinova, George Altankov
Subject: Life Sciences, Biotechnology Keywords: Mesenchymal stem cells; collagen type I; remodeling; oxidation
The effect of collagen type I (Col I) oxidation on its remodeling by Adipose Tissue-Derived Mesenchymal Stem Cells (ADMSCs) is described as a model for acute oxidative stress. Morphologically, remodeling was evidenced by mechanical rearrangement of adsorbed FITC-Col I and a trend toward its organization in a fibril-like pattern, a process strongly abrogated in oxidized samples but without visible changes in cell morphology. The cellular proteolytic activity was quantified in multiple samples utilizing fluorescence de-quenching (FRET effect). In the presence of ADMSCs, a significant increase in native FITC-Col I fluorescence was observed, which was almost absent in the oxidized samples. Parallel studies in cell-free systems confirmed the enzymatic de-quenching of native FITC-Col I by clostridial collagenase, again with significant inhibition in oxidized samples. The structural changes in oxidized Col I were further studied by Differential Scanning Calorimetry: an additional endotherm at 33.6 °C, along with the one typical for native Col I at 40.5 °C with sustained enthalpy (∆H), was observed in oxidized samples. Collectively, the evidence shows that remodeling of Col I by ADMSCs is altered upon oxidation due to intrinsic changes in the protein structure, presenting a novel mechanism for the control of stem cell behavior toward collagen.
Copper Metabolism of Newborns Is Adapted to Milk Ceruloplasmin As a Nutritive Source of Copper
Ludmila V. Puchkova, Polina S. Babich, Yulia A. Zatulovskaia, Ekaterina Y. Ilyechova, Francesca Di Sole
Subject: Life Sciences, Biochemistry Keywords: embryonic type copper metabolism; milk ceruloplasmin; baby formula
Copper, which can potentially be a highly toxic agent, is an essential nutrient due to its role as a co-factor for cuproenzymes and its participation in signaling pathways. In mammals, the liver is the central organ that controls copper turnover throughout the body: copper absorption, distribution, and excretion. In ontogenesis, there are two types of copper metabolism, embryonic and adult, which maintain copper balance in each of these periods, respectively. In liver cells, these types are characterized by specific expression patterns and activity levels of the genes encoding ceruloplasmin, which is the main extracellular ferroxidase and copper transporter, and of the proteins mediating ceruloplasmin metalation. In newborns, the molecular-genetic mechanisms responsible for copper homeostasis and the ontogenetic switch from embryonic to adult copper metabolism are highly adapted to milk ceruloplasmin as a dietary source of copper. In mammary gland cells, the level of ceruloplasmin gene expression and the alternative splicing of its pre-mRNA govern the amount of ceruloplasmin in milk, and thus the amount of copper absorbed by the newborn is controlled. In newborns, the absorption, distribution, and accumulation of copper are adapted to milk ceruloplasmin. In newborns that are not breast-fed during the early stages of postnatal development, this control of alimentary copper balance is absent. We focus on the neonatal consequences of disturbed copper balance in the mother/newborn system. Although there is still much to be learned, it is time to pay attention to this problem, because neonatal copper misbalance may provoke the development of copper-related disorders in later life.
Neurocognitive Dynamics of Prosodic Salience over Semantics during Explicit and Implicit Processing of Basic Emotions in Spoken Words
Yi Lin, Xinran Fan, Yueqi Chen, Hao Zhang, Fei Chen, Hui Zhang, Hongwei Ding, Yang Zhang
Subject: Arts & Humanities, Linguistics Keywords: emotional speech processing; communication channel; emotion category; task type
How language mediates emotional perception and experience is poorly understood. The present event-related potential (ERP) study examined the explicit and implicit processing of emotional speech to differentiate the relative influences of communication channel, emotion category and task type in the prosodic salience effect. Thirty participants (15 women) were presented with spoken words denoting happiness, sadness and neutrality in either the prosodic or the semantic channel. They were asked to judge the emotional content (explicit task) and the speakers' gender (implicit task) of the stimuli. Results indicated that emotional prosody (relative to semantics) triggered larger N100 and P200 amplitudes with greater delta, theta and alpha inter-trial phase coherence (ITPC) values in the corresponding early time windows, and continued to produce larger LPC amplitudes and faster responses during late stages of higher-order cognitive processing. The relative salience of prosody and semantics was modulated by emotion and task, though such modulatory effects varied across different processing stages. The prosodic salience effect was reduced for sadness processing and in the implicit task during early auditory processing and decision-making, but reduced for happiness processing in the explicit task during conscious emotion processing. Additionally, across-trial synchronization of the delta, theta and alpha bands predicted the ERP components, with higher ITPC values significantly associated with stronger N100, P200 and LPC enhancement. These findings reveal the neurocognitive dynamics of emotional speech processing, with prosodic salience tied to stage-dependent emotion- and task-specific effects, offering insights for research reconciling language and emotion processing from cross-linguistic/cultural and clinical perspectives.
Association between Diabetes Status and Breast Cancer in US Adults: A Cross-Sectional Study
Xingyu Sun, Pengcheng Hu, Xiaozhu Liu, Jialing Liu, Yulu Yan, Chenyu Sun, Vicky Yau, Scott Lowe, Muzi Meng, Ziru Liu
Subject: Life Sciences, Endocrinology & Metabolomics Keywords: diabetes status; prediabetes; type 2 diabetes; breast cancer; NHANES
Objectives: The purpose of this study was to determine whether breast cancer and diabetes status are related in adult Americans. Methods: We conducted a cross-sectional study of 7,599 individuals from the National Health and Nutrition Examination Survey (NHANES). Diabetes status was classified as type 2 diabetes, prediabetes or non-diabetes; both prediabetes and diabetes were diagnosed according to ADA 2014 guidelines. Multiple logistic regression analysis was used to explore the relationship between diabetes status and breast cancer. Results: We found that prediabetes (OR = 0.60, 95% CI: 0.40-0.88, P = 0.009613) and non-diabetes (OR = 0.53, 95% CI: 0.34-0.83, P = 0.006014) were associated with a reduced risk of breast cancer in comparison with type 2 diabetes (reference). Prediabetes in non-Hispanic blacks was associated with a reduced risk of breast cancer (OR = 0.55, 95% CI: 0.40-0.75, P < 0.001). Using two segmented linear regression models to fit the relationship between BMI and breast cancer, we found that the relationship was nonlinear, with a threshold effect; the threshold effect analysis found that BMI affected breast cancer at an inflection point of 26.3 kg/m2. The adjusted ORs (95% CI) on the two sides of the turning point were 1.0799 (1.0029, 1.1629) and 0.9873 (0.9638, 1.0115), respectively. Conclusions: Diabetes status is associated with the risk of breast cancer development, and the risk of developing breast cancer increased steadily from non-diabetes to prediabetes and type 2 diabetes. In addition, the prevalence of breast cancer showed a gradual increase with increasing BMI up to 26.3 kg/m2, where the prevalence was highest, giving an inverse U-shaped relationship between BMI and breast cancer prevalence.
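A threshold (segmented) analysis of this kind can be sketched as a two-segment logistic model with a fixed knot; the minimal example below uses synthetic data and the statsmodels package, with the knot set at the 26.3 kg/m2 inflection point quoted above. It is illustrative only and is not the authors' NHANES analysis, covariate set, or knot-search procedure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical data: BMI and a binary breast-cancer indicator
n = 5000
bmi = rng.normal(28, 5, n)
knot = 26.3                                   # fixed inflection point
eta = -3 + 0.08 * np.minimum(bmi - knot, 0) - 0.01 * np.maximum(bmi - knot, 0)
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))   # risk rises below the knot, flattens above

# Two-segment (piecewise-linear) logistic model around the knot
X = sm.add_constant(np.column_stack([np.minimum(bmi - knot, 0),
                                     np.maximum(bmi - knot, 0)]))
fit = sm.Logit(y, X).fit(disp=False)

# Per-unit odds ratios on each side of the knot, with 95% CIs
print(np.exp(fit.params[1:]))
print(np.exp(fit.conf_int()[1:]))
```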
Type I interferon receptor subunit 1 deletion attenuates experimental abdominal aortic aneurysm formation
Takahiro Shoji, Jia Guo, Yingbin Ge, Yankui Li, Gang Li, Toru Ikezoe, Wei Wang, Xiaoya Zheng, Sihai Zhao, Naoki Fujimura, Jianhua Huang, Baohui Xu, Ronald L. Dalman
Subject: Medicine & Pharmacology, Pathology & Pathobiology Keywords: Abdominal aortic aneurysm; Type I interferon receptor; Leukocytes; Angiogenesis
Objective: Type I interferon receptor (IFNAR) signaling contributes to several autoimmune and vascular diseases such as atherosclerosis and stroke. The purpose of this study was to assess the influence of IFNAR1 deficiency on the formation and progression of experimental abdominal aortic aneurysms (AAAs). Methods: AAAs were induced in type I interferon receptor subunit 1 (IFNAR1)-deficient and wild-type control male mice via intra-infrarenal aortic infusion of porcine pancreatic elastase. Immunostaining for IFNAR1 was evaluated in experimental and clinical aneurysms. The initiation and progression of experimental AAAs were assessed via ultrasound imaging prior to (day 0) and 3, 7, and 14 days following elastase infusion. Aneurysmal histopathology was analyzed at sacrifice. Results: Increased aortic medial and adventitial IFNAR1 expression was present in both clinical AAAs harvested at surgery and experimental AAAs. Following AAA initiation, wild-type mice experienced progressive, time-dependent infrarenal aortic enlargement. This progression was substantially attenuated in IFNAR1-deficient mice. On histological analyses, medial elastin degradation, smooth muscle cell depletion, leukocyte accumulation and neoangiogenesis were markedly diminished in IFNAR1-deficient as compared to wild-type mice. Conclusion: IFNAR1 deficiency limited experimental AAA progression in response to intra-aortic elastase infusion. Combined with clinical observations, these results suggest a regulatory role for IFNAR1 activity in AAA pathogenesis.
The Potential of Exosomes in Allergy Immunotherapy
Paul Engeroff, Monique Vogel
Subject: Life Sciences, Immunology Keywords: Type I hypersensitivity; IgE; AIT; SIT; extracellular vesicles; vaccine
Allergic diseases represent a global health and economic burden of increasing significance. The lack of disease-modifying therapies besides specific allergen immunotherapy (AIT), which is not available for all types of allergies, necessitates the study of novel therapeutic approaches. Exosomes are small endosome-derived vesicles delivering cargo between cells and thus allowing inter-cellular communication. Since immune cells make use of exosomes to boost, deviate, or suppress immune responses, exosomes are intriguing candidates for immunotherapy. Here, we review the role of exosomes in allergic sensitization and inflammation, and we discuss the mechanisms by which exosomes could be used in immunotherapeutic approaches for the treatment of allergic diseases. We propose the following approaches: a) mast cell-derived exosomes expressing the IgE receptor FcεRI could absorb IgE and down-regulate systemic IgE levels; b) tolerogenic exosomes could suppress allergic immune responses via induction of regulatory T cells; c) exosomes could promote TH1-like responses towards an allergen; d) exosomes could modulate IgE-facilitated antigen presentation.
Eight-Year Retrospective Study of Young Adults in a Diabetes Transition Clinic
Aarooran Sritharan, Uchechukwu Levi Osuagwu, Manjula Ratnaweera, David Simmons
Subject: Medicine & Pharmacology, Psychiatry & Mental Health Studies Keywords: Diabetic Ketoacidosis; Mental health; Type 1 diabetes; Transition; Glycemia
The transition of people from paediatric to adult diabetes services is associated with worsening glycaemia and increased diabetes-related hospitalisation. This study compared the clinical characteristics of those with and without mental health conditions among attenders at a young adult diabetes clinic before and after changes in service delivery. We retrospectively reviewed 200 people with diabetes attending a Sydney public hospital over eight years, corresponding to the periods before (2012-2016) and after (2017-2018) restructuring of a clinic for young adults aged 16-25 years. Characteristics of those with and without mental health conditions (depression, anxiety, diabetes-related distress, eating disorders) were compared. Among clinic attenders (type 1 diabetes n=184, 83.2%), 40.5% (n=89) had a mental health condition, particularly depression (n=57, 64%), which was higher among Indigenous than non-Indigenous people (5.6% vs 0.8%, p=0.031) but similar between diabetes types. Over the eight years, those with a mental health condition, compared with those without, had higher HbA1c at the last visit (9.4% [79 mmol/mol] vs 8.7% [71 mmol/mol], p=0.027), and the proportions with diabetic ketoacidosis (DKA; 60.7% vs 42.7%, p=0.009), smoking (38.4% vs 13.6%, p=0.009), retinopathy (9.0% vs 2.3%, p=0.025) and multiple DKAs (28.4% vs 16.0%, p=0.031) were significantly higher. Having a mental health condition was associated with a 2.02-fold (95% confidence interval 1.1-3.7) increased risk of HbA1c ≥ 9.0% [75 mmol/mol]. Changes to the clinic were not associated with improvements in the prevalence of mental health conditions (39.0% vs 32.4%, p=0.096). In conclusion, we found that mental health conditions, particularly depression, are common in this population and are associated with diabetes complications. Diabetes type and clinic changes did not affect the reported mental health conditions. Additional strategies are required to reduce complication risks among those with mental health conditions.
The Structural, Elastic, Electronic, Vibrational and Gravimetric Hydrogen Capacity Properties of the Perovskite Type Hydrides: DFT Study
Ülkü Bayhan, İnanç Yilmaz
Subject: Physical Sciences, General & Theoretical Physics Keywords: Hydrogen; DFT; Electronic Properties; Energy Storage; Perovskite type Hydrides
The structural, elastic, anisotropic elastic, electronic and vibrational properties of the perovskite type hydrides RbXH3 (X = Be, Ca, Mg) were calculated with the Vienna Ab initio Simulation Package (VASP), based on Density Functional Theory (DFT). Our results show good agreement with previous calculations and experiments for each compound. The Generalized Gradient Approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) functional was used to determine the physical properties of RbXH3 in this study. The compounds were found to be mechanically stable, and their gravimetric hydrogen storage capacities were investigated. RbBeH3 and RbMgH3 have indirect band gaps of 0.274 eV and 2.209 eV, respectively, while RbCaH3 has a direct band gap of 3.274 eV; these compounds therefore show semiconducting behaviour at equilibrium. In addition, the directional dependence of the anisotropic properties was visualized by representing their maximum and minimum values.
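For context, the gravimetric hydrogen storage capacity referred to here is conventionally defined from the formula mass; using standard atomic masses (our numbers, not values taken from the paper), RbMgH3 for example gives roughly
$$ C_{\mathrm{wt}} = \frac{n_{\mathrm H}\,M_{\mathrm H}}{M_{\mathrm{RbXH_3}}}\times 100\%, \qquad C_{\mathrm{wt}}(\mathrm{RbMgH_3}) = \frac{3\times 1.008}{85.47 + 24.31 + 3\times 1.008}\times 100\% \approx 2.7\ \mathrm{wt\%}. $$
The capacities reported in the paper may differ slightly depending on the atomic masses used.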
The Role of Probiotic Supplementation on Insulin Resistance in Obesity-Associated Diabetes: A Mini-Review
Seeme Saha, Nazmun Nahar Alam, S M Niazur Rahman
Subject: Medicine & Pharmacology, Other Keywords: Probiotics; Gut microbiota; Obesity; Insulin resistance; Type 2 Diabetes
Background: Obesity and diabetes are two metabolic disorders linked by an inflammatory process named insulin resistance (IR). Research on the role of gut microbiota in the development of obesity and its associated disorders has led to growing interest in probiotic supplementation. Considering the life-threatening complications of diabesity, this mini-review explores the effects of probiotic supplementation on IR in obesity-associated diabetes. Method: This review is based on recent articles from 2005-2020 studying the role of probiotic supplementation on glucose and insulin parameters in healthy and diabetic mouse models. Result: Probiotic supplementation altered the gut microbiota composition, increased short-chain fatty acid production, and decreased pro-inflammatory cytokines. Additionally, it decreased intestinal permeability, circulating lipopolysaccharide, and metabolic endotoxemia, and hence improved insulin sensitivity and reduced obesity. Although multi-strain probiotic supplementation showed greater benefits than single-strain interventions, variations in the concentration of probiotics used and the duration of treatment also influenced the results. Conclusion: Probiotic supplementation could manipulate the gut microbiota by reducing intestinal permeability and inflammation and ameliorate IR and obesity-associated diabetes in animal models, which requires further long-term clinical studies in humans.
High Diversity of Leptospira Species Infecting Bats Captured in the Uraba Region (Antioquia-Colombia)
Fernando P Monroy, Sergio Solari, Juan Alvaro Lopez, Piedad Agudelo-Florez, Ronald Guillermo Pelaez-Sanchez
Subject: Life Sciences, Microbiology Keywords: Leptospira; bats; Colombia; leptospirosis; species; type; 16S ribosomal gene
Leptospirosis is a globally distributed zoonotic disease caused by pathogenic bacteria of the genus Leptospira. This zoonotic disease affects humans and domestic or wild animals. Colombia is considered an endemic country for leptospirosis, and Antioquia is the department with the second-highest number of reported leptospirosis cases in Colombia. Many studies currently report bats as reservoirs of Leptospira spp., but the prevalence in these mammals is unknown. In the present study, we aimed to better understand the role of bats as reservoir hosts of Leptospira species and to evaluate the genetic diversity of circulating Leptospira species in Antioquia, Colombia. We captured 206 bats in the municipalities of Chigorodó (43 bats), Carepa (43 bats), Apartadó (39 bats), Turbo (40 bats), and Necoclí (41 bats) in the Urabá region (Antioquia-Colombia). Twenty bats were positive for Leptospira spp. infection (20/206, 9.70%), and the infected bat species were Carollia perspicillata, Dermanura rava, Glossophaga soricina, Molossus molossus, Artibeus planirostris, and Uroderma convexum. These species have different feeding strategies, such as frugivory, insectivory, and nectarivory. The infecting Leptospira species identified were Leptospira borgpetersenii (3/20, 15%), Leptospira alexanderi (2/20, 10%), Leptospira noguchii (6/20, 30%), Leptospira interrogans (3/20, 15%), and Leptospira kirschneri (6/20, 30%). The results of this research show the importance of bats in the epidemiology, ecology and evolution of Leptospira in this host-pathogen association. This is the first step in deciphering the role played by bats in the epidemiology of human leptospirosis in the endemic region of Urabá (Antioquia-Colombia).
The microRNA Landscape of Acute Beta Cell Destruction in Type 1 Diabetic Recipients of Intraportal Islet Grafts
Geert Antoine Martens, Geert Stange, Lorenzo Piemonti, Jasper Anckaert, Zhidong Ling, Daniel Pipeleers, Frans Gorus, Pieter Mestdagh, Dieter De Smet, Jo Vandesompele, Bart Keymeulen, Sarah Roels
Subject: Life Sciences, Biochemistry Keywords: beta cell, type 1 diabetes, islet transplantation, biomarkers, microRNA
Ongoing beta cell death in type 1 diabetes (T1D) can be detected using biomarkers selectively discharged by dying beta cells into plasma. MicroRNA-375 (miR-375) ranks among top biomarkers based on studies in animal models and human islet transplantation. Our objective was to identify additional microRNAs that are co-released with miR-375 proportionate to the amount of beta cell destruction. RT-PCR profiling of 733 microRNAs in a discovery cohort of T1D patients 1 hour before/after islet transplantation indicated increased plasma levels of 22 microRNAs. Sub-selection for beta cell selectivity resulted in 15 microRNAs that were subjected to double-blinded multicenter analysis. This led to identification of 8 microRNAs that were consistently increased during early graft destruction: besides miR-375, these included miR-132/204/410/200a/429/125b, microRNAs with known function and enrichment in beta cells. Their potential clinical translation was investigated in a third independent cohort of 46 transplant patients, by correlating post-transplant microRNA levels to C-peptide levels 2 months later. Only miR-375 and miR-132 had prognostic potential for graft outcome and none of the newly identified microRNAs outperformed miR-375 in multiple regression. In conclusion, this study reveals multiple beta cell-enriched microRNAs that are co-released with miR-375 and can be used as complementary biomarkers of beta cell death.
Influences of CO2 on the Microstructure in Sheared Olivine Aggregates
Huihui Zhang, Ningli Zhao, Chao Qi, Xiaoge Huang, Greg Hirth
Subject: Earth Sciences, Atmospheric Science Keywords: olivine aggregates; CO2; crystallographic preferred orientation; AG-type fabric
Shear deformation of a solid-fluid, two-phase material induces a fluid segregation process that produces fluid-enriched bands and fluid-depleted regions, and a crystallographic preferred orientation (CPO) characterized by girdles of [100] and [001] axes sub-parallel to the shear plane and a cluster of [010] axes sub-normal to the shear plane, namely the AG-type fabric. Based on experiments on two-phase aggregates of olivine + basalt, a two-phase flow theory and a CPO-formation model were established to explain these microstructures. Here, we investigate the microstructure in a two-phase aggregate with supercritical CO2 as the fluid phase and test the theory and model, as CO2 differs from basaltic melt in its rheological properties. We conducted high-temperature, high-pressure shear deformation experiments at 1 GPa and 1100 °C in a Griggs-type apparatus on samples made of olivine + dolomite, which decomposed into carbonate melt and CO2 at experimental conditions. After deformation, CO2 segregation and an AG-type fabric occurred in these CO2-bearing samples, consistent with basaltic melt-bearing samples. The SPO-induced CPO model was used to explain the formation of the fabric. Our results suggest that the influence of CO2 as a fluid phase on the microstructure of a two-phase olivine aggregate is similar to that of basaltic melt and can be explained by the CPO-formation model for the solid-fluid system.
Immunohistochemical Detection of Enteroviruses in Pancreatic Tissues of Patients with Type 1 Diabetes Using a Polyclonal Antibody Against 2A Protease of Coxsackievirus
Erika Jimbo, Tetsuro Kobayashi, Akira Takeshita, Keiichiro Mine, Seiho Nagafuchi, Tomoyasu Fukui, Soroku Yagihashi
Subject: Biology, Anatomy & Morphology Keywords: Enterovirus; Coxsackievirus; 2A protease; polyclonal antibody; type 1 diabetes
The need for antisera for immunohistochemical (IHC) detection of enterovirus (EV) in formaldehyde-fixed, paraffin-embedded (FFPE) specimens is increasing. The standard monoclonal antibody against the EV envelope protein VP1, clone 5D8/1, has been shown to cross-react with other proteins. Another candidate marker of EV proteins is 2A protease (2Apro), which is encoded by the EV genome and translated by host cells during EV replication. We raised polyclonal antiserum by immunizing rabbits with an 18-mer peptide of Coxsackievirus B1 (CVB1) 2Apro and examined its specificity and sensitivity for EV in FFPE tissue samples. ELISA showed a high antibody titer against CVB1 2Apro. IHC demonstrated that the antiserum against 2Apro reacted with CVB1-infected Vero cells. Confocal microscopy demonstrated that 2Apro labelled by the antiserum was located in the same cells as VP1 stained with 5D8/1. IHC demonstrated dense positive reactions in pancreatic islets of EV-associated fulminant type 1 diabetes (FT1DM), located in the same cells that stained positive with 5D8/1. The specificity of IHC staining of the FT1DM pancreas was confirmed by absorption with an excess of the immunizing peptide. In conclusion, our study provides a new polyclonal antiserum against CVB1 2Apro which may be useful for detecting EV infection in archived FFPE human tissue samples.
Antieukaryotic Type Six Secretion System Virulence Factors of Bacteria
Silindile Maphosa, Lucy Moleleki, Thabiso Motaung
Subject: Life Sciences, Microbiology Keywords: Type VI Secretion System; antieukaryotic effectors; interkingdom competition; virulence
The type VI secretion system (T6SS) is prevalently utilized by Gram-negative bacteria to compete for resources and space. Upon activation, toxic effectors from this secretion system are translocated into a competitor prokaryote or eukaryote in a contact-dependent manner. While much has been reported on T6SS-mediated prokaryotic competition, very little is understood about the mechanisms of bacterial interactions with eukaryotic hosts; moreover, many of the virulent T6SS effectors known to date are antibacterial. In recent years, however, evidence has emerged on numerous T6SS effectors that interact with related immunity proteins in a range of eukaryotic hosts. Insights into how this effector-immunity pairing alters the physiological responses of the recipient organism might provide opportunities for agricultural and biotherapeutic applications of the T6SS. We therefore summarize the impacts of T6SS effectors, with a special focus on bacterial interactions with animals, plants, and fungi, and briefly discuss pipelines that are currently used to characterize antieukaryotic T6SS effectors.
Spatial Construction, Form and Effectiveness Analysis of Large-Scale Waterfront Park System in Island-type Cities——The Case of Xiamen, China
Xiaolei Sang, Chih-Hong Huang
Subject: Social Sciences, Other Keywords: island-type city; city park; waterfront area; space syntax
The bay is both a spatial barrier to the development of island-type cities and a high-quality waterfront landscape resource. This study takes Xiamen, a typical island-type city in China, as an example. First, using satellite remote sensing combined with GIS software and space syntax, we summarize, at both the material-space and social-space levels, the rapid urbanization of the city from 1990 to 2018, focusing on the construction of its three large-scale waterfront park systems during the transition period of inter-island development and comparing the similarities and differences of their spatial forms. Further, from the choice and integration results of the axis model, we discuss the changes in spatial efficiency. The construction of the three major bay waterfront park systems in this city reflects a major shift in development pattern from lagging construction, through synchronous planning, to advanced layout, providing a continuous and adaptable spatial form for the development of the bay region and improving spatial efficiency, which is one of the important ways to develop and transform island-type cities. We hope this provides a reference for the development, including the sustainable development, of other island cities around the world.
Bioguided Fractionation of Hypoglycaemic Component in Methanol Extract of Vernonia amygdalina: An In Vivo Study
Stanley Irobekhian Reuben Okoduwa, Isma'ila A. Umar, Dorcas B. James, Hajiya M. Inuwa, James D. Habila, Alessandro Venditti
Subject: Life Sciences, Biochemistry Keywords: Anti-diabetic; hyperglycaemia; hypoglycaemic; Vernonia amygdalina; Type-2 diabetes
Nine components (C1-C9) were isolated from the chloroform fraction of fractionated methanol extracts of Vernonia amygdalina leaves (FMEVA) by column chromatography. All the components, C1 to C9, were purified and screened for hypoglycaemic activity in type-2 diabetic rats. The structure of the most potent hypoglycaemic component was elucidated on the basis of extensive spectroscopic (1D- and 2D-NMR, GC-MS, FTIR) data analysis. Component C5 was found to be the most potent, reducing blood glucose by 12.55 ± 3.55% at 4 h post-oral administration, compared with the positive (18.07 ± 1.20%) and negative (-1.99 ± 0.43%) controls. The spectroscopic data analysis reveals that the isolated compound has a structure consistent with 11β,13-dihydrovernolide. The isolated compound is among the hypoglycaemic components present in V. amygdalina leaves that are responsible for its anti-diabetic activities. Further research is needed on the development of this compound or its derivatives for pharmaceutical use.
Description and Genome Analysis of Methylotetracoccus aquaticus sp. nov., a Novel Tropical Wetland Methanotroph, with the Amended Description of Methylotetracoccus gen. nov.
Monali Rahalkar, Kumal Khatri, Jyoti Mohite, Pranitha Pandit, Rahul Bahulikar
Subject: Life Sciences, Microbiology Keywords: wetlands; methanotrophs; India; tropical; novel species; Type Ib; Methylotetracoccus
We enriched and isolated a novel gammaproteobacterial methanotroph, strain FWC3, from a tropical freshwater wetland near Nagaon beach, Alibag, India. FWC3 is a coccoid, flesh-pink/peach-pigmented, non-motile methanotroph whose cells occur in pairs and as tetracocci. The culture can grow on methane (20%) as well as on a wide range of methanol concentrations (0.02%-5%). Based on comparison of genome data, FAME analysis, and morphological and biochemical characters, FWC3 belongs to the newly, but not yet validly, described genus 'Methylotetracoccus', of which only a single species, Methylotetracoccus oryzae C50C1, has been described. The ANI index between the FWC3 and C50C1 strains is 94%, and the DDH value is 55.7%, less than the respective cut-off values of 96% and 70%. The genome of FWC3 is smaller (3.4 Mbp) than that of C50C1 (4.8 Mbp). Additionally, the FAME profile of FWC3 shows differences in cell-wall fatty acids compared with Methylotetracoccus oryzae C50C1, and there are further differences at the morphological, physiological and genomic levels. We propose FWC3 as a member of a novel species of the genus Methylotetracoccus, for which the name Methylotetracoccus aquaticus is proposed. An amended description of the genus Methylotetracoccus gen. nov. is also given here. FWC3 is available in two international culture collections under the accession numbers MCC 4198 (Microbial Culture Collection, India) and JCM 33786 (Japan Collection of Microorganisms, Japan).
Influence of Disease Duration on Circulating Levels of miRNAs in Children and Adolescents with New Onset Type 1 Diabetes
Nasim Samandari, Aashiq H Mirza, Simranjeet Kaur, Philip Hougaard, Lotte Broendum Nielsen, Siri Fredheim, Henrik B Mortensen, Flemming Pociot
Subject: Life Sciences, Molecular Biology Keywords: children, immunology, miRNA, partial remission phase, type 1 diabetes
The objective of this study was to identify circulating miRNAs affected by disease duration in newly diagnosed children with type 1 diabetes. Forty children and adolescents from the Danish Remission Phase Cohort were followed, with blood samples drawn at 1, 3, 6, 12 and 60 months after diagnosis. Pancreatic autoantibodies were measured at each visit; cytokines were measured only during the first year. miRNA expression profiling was performed by RT-qPCR and quantified for 179 human plasma miRNAs. The effect of disease duration was analyzed by mixed models for repeated measurements, adjusted for sex and age. Eight miRNAs (hsa-miR-10b-5p, hsa-miR-17-5p, hsa-miR-30e-5p, hsa-miR-93-5p, hsa-miR-99a-5p, hsa-miR-125b-5p, hsa-miR-423-3p and hsa-miR-497-5p) were found to change expression significantly (adjusted p-value < 0.05) with disease progression. Three pancreatic autoantibodies (ICA, IA-2A and GADA65) and four cytokines (IL-4, IL-10, IL-21 and IL-22) were associated with the miRNAs at different time points. Pathway analysis revealed associations with various immune-mediated signaling pathways. In conclusion, eight miRNAs involved in immunological pathways changed expression levels during the first five years after diagnosis in children with type 1 diabetes and were associated with variations in cytokines and pancreatic autoantibodies, suggesting a possible effect on the immunological processes in the early phase of the disease.
Failure Modes, Effects and Criticality Analysis for Wind Turbines Considering Climatic Regions and Comparing Geared and Direct Drive Wind Turbines
Samet Ozturk, Vasilis Fthenakis, Stefan Faulstich
Subject: Engineering, Civil Engineering Keywords: Reliability; FMEA; wind turbines; climatic conditions; wind turbine type
The wind industry is looking for ways to accurately predict the reliability and availability of newly installed wind turbines. Failure modes, effects and criticality analysis (FMECA) is a technique utilized for determining the critical subsystems of wind turbines. Several studies in the literature have applied FMECA to wind turbines, but none so far has considered different weather conditions or climatic regions. Furthermore, various design types of wind turbines have been analyzed with FMECA, but no study so far has applied FMECA to compare the reliability of geared and direct-drive wind turbines. We propose to fill these gaps by using Köppen-Geiger climatic regions and two turbine models representing the direct-drive and geared-drive concepts. A case study is applied to German wind farms utilizing the WMEP database, which contains wind turbine failure data from 1989 to 2008. The proposed methodology increases the accuracy of reliability and availability predictions, allows different wind turbine design types to be compared, and avoids underestimating the impact of different weather conditions.
Olive Oil and Diabetes: From Molecules to Lifestyle Disease Prevention
Ahmad Alkhatib, Catherine Tsang, Jaakko Tuomilehto
Subject: Medicine & Pharmacology, Nutrition Keywords: olive nutraceuticals; functional foods; exercise; nutrition; type-2 diabetes
Lifestyle is the primary prevention of diabetes, especially type-2 diabetes (T2D). Nutritional intake of olive oil (OO), the key Mediterranean-diet component, has been associated with the prevention and management of many chronic diseases, including T2D. Several OO bioactive compounds, such as monounsaturated fatty acids and key polyphenols including hydroxytyrosol and oleuropein, have been associated with preventing inflammation and cytokine-induced oxidative damage, lowering glucose, reducing carbohydrate absorption, and increasing insulin sensitivity and related gene expression. However, research into the interaction of OO nutraceuticals with lifestyle components, especially physical activity, is lacking. Promising postprandial effects have been reported when OO or similar monounsaturated fatty acids were the main dietary fat compared with other diets. Animal studies have shown a potential anabolic effect of oleuropein. Such effects could be further potentiated via exercise, especially strength training, which is an essential exercise prescription for individuals with T2D. There is also evidence from in vitro, animal and limited human studies for a dual preventative role of OO polyphenols in diabetes and cancer, especially as the two diseases share similar risk factors. Putative anti-oxidative and anti-inflammatory mechanisms, and the associated gene expression resulting from OO phenolics, have produced paradoxical results, making inferences about dual prevention of T2D and cancer difficult. Well-designed human interventions and clinical trials are needed to decipher the potential dual anti-cancer and anti-diabetic effects of OO nutraceuticals. Exercise combined with OO consumption, individually or as part of a healthy diet, is likely to induce reciprocal action for T2D prevention outcomes.
High Bacterial Agglutination Activity in a Single-CRD C-Type Lectin from Spodoptera exigua (Lepidoptera: Noctuidae)
Leila Gasmi, Juan Ferré, Salvador Herrero
Subject: Life Sciences, Biotechnology Keywords: C-type lectin; agglutination; CRD; bacterial detection; E. coli
Lectins are carbohydrate-interacting proteins that play a pivotal role in multiple physiological and developmental aspects of all organisms. They can specifically interact with different bacterial and viral pathogens through their carbohydrate-recognition domains (CRDs). In addition, lectins are of biotechnological interest because of their potential use as biosensors for capturing and identifying bacterial species. In this work, we have characterized the bacterial agglutination properties of three C-type lectins from the lepidopteran Spodoptera exigua. One of these lectins, BLL2, was able to agglutinate cells from a broad range of bacterial species at an extremely low concentration, making it a very interesting protein for use as a biosensor or in other biotechnological applications involving bacterial capture.
Certain Results on (p,q)-Hermite Based Apostol Type Frobenius-Euler Polynomials
Waseem Khan, Idrees Ahmad Khan, Ugur Duran, Mehmet Acikgoz
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Hermite polynomials, Apostol type Frobenius-Euler polynomials, Hermite based Apostol type Frobenius Euler polynomials, Stirling numbers of the second kind, (p,q)-numbers.
In the present paper, the (p,q)-Hermite based Apostol type Frobenius-Euler polynomials and numbers are first considered, and then diverse basic identities and properties for these polynomials and numbers, including addition theorems, difference equations, integral representations, derivative properties and recurrence relations, are derived. Moreover, we provide summation formulas and relations associated with the Stirling numbers of the second kind.
Dual Variational Formulations for a Large Class of Non-Convex Models in the Calculus of Variations
Fabio Botelho
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Duality principles; Generalized method of lines; Ginzburg-Landau type equations
This article develops dual variational formulations for a large class of models in variational optimization. The results are established through basic tools of functional analysis, convex analysis and duality theory. The main duality principle is developed as an application to a Ginzburg-Landau type system in superconductivity in the absence of a magnetic field. In the first sections, we develop new general dual convex variational formulations, more specifically, dual formulations with a large region of convexity around the critical points which are suitable for the non-convex optimization for a large class of models in physics and engineering. Finally, in the last section we present some numerical results concerning the generalized method of lines applied to a Ginzburg-Landau type equation.
A Multi-agent Approach to Predict Long-Term Glucose Oscillation in Individuals with Type 1 Diabetes
João Paulo Aragão Pereira, Anarosa Alves Franco Brandão, Joyce da Silva Bevilacqua, Maria Lúcia Cardillo Côrrea-Giannella
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Glucose Oscillation; Prediction; Multi-agent; Type 1 Diabetes; Personalized; Recommendation
The glucose-insulin regulatory system and its glucose oscillations are a recurring theme in the literature because of their impact on human lives, mostly those affected by diabetes mellitus. Several approaches have been proposed, from mathematical to data-based models, with the aim of modeling the glucose oscillation curve. Having such a curve, it is possible to predict when to inject insulin in type 1 diabetes (T1D) individuals. However, the literature presents prediction horizons no longer than 6 hours, which can be a problem considering sleeping time. This work presents Tesseratus, a model that adopts a multi-agent approach to combine machine learning and mathematical modeling to predict glucose oscillation up to 8 hours ahead. Tesseratus uses the pharmacokinetics of insulins and data collected from T1D individuals. Its outcome can support endocrinologists in prescribing daily treatment for T1D individuals and provide personalized recommendations to keep their glucose concentration in the ideal range. Tesseratus brings pioneering results for prediction horizons of 8 hours at nighttime, in an experiment with seven real T1D individuals. It is our claim that Tesseratus will be a reference for the classification of glucose prediction models, supporting the mitigation of short- and long-term complications in T1D individuals.
Cocoa Extract Exerts Sex-Specific Effects in an Aggressive Hyper-Glycemia Model: A Pilot Study
Kathryn C. Racine, Lisard Iglesias-Carres, Jacob A. Herring, Mario G. Ferruzzi, Colin D. Kay, Jeffery S. Tessem, Andrew P. Neilson
Subject: Medicine & Pharmacology, Nutrition Keywords: BTBR; ob/ob; type-2 diabetes; flavanol; insulin; beta cell
Type 2 diabetes (T2D) is characterized by hyperglycemia and insulin resistance. Cocoa may slow T2D development and progression. This study employed male and female BTBR.Cg-Lepob/ob/WiscJ mice and wild-type (WT) controls to assess the potential for cocoa to ameliorate progressive T2D and to compare responses between sexes. Mice received diet without (WT, ob/ob) or with cocoa extract (ob/ob + c) for 10 weeks. Glucose and insulin tolerance tests (GTT/ITT) were conducted at weeks 1 and 5 and at weeks 2 and 6, respectively. Cocoa provided mild, non-significant protection against weight gain vs. the ob/ob control in males but not females. Male ob/ob + c mice had increasing fasting glucose at the week 1 and week 5 GTTs, with significantly higher fasting glucose than the ob/ob control at week 5; this was not seen in females. Cocoa protected against elevated 4-hour fasting glucose at the week 2, but not the week 6, ITT. Cocoa partly suppressed hyperinsulinemia in males but significantly amplified it in females, and protected against loss of beta cell area in females only. The mechanisms of these sex-specific effects remain to be elucidated. This study informs additional experiments with larger sample sizes and demonstrates that sex differences must be considered when designing dietary interventions for T2D.
Pumping Schedule Optimization in Acid Fracturing Treatment by Unified Fracture Design
Rahman Lotfi, Mostafa Hosseini*, Davood Aftabi, Alireza Baghbanan, Guanshui Xu
Subject: Engineering, Energy & Fuel Technology Keywords: Acid fracturing; UFD; Optimization; Fracture geometry; Acid type; Design parameters
Acid fracturing simulation is widely used to optimize carbonate reservoirs and improve acid fracturing treatment performance. In this study, a method was used to minimize the risk of the acid fracturing treatment. First, the optimal fracture geometry parameters are calculated with the UFD method. After that, the design components are changed until the fracture geometry parameters reach their optimal values. The results showed that a high flow rate is needed to achieve the optimal fracture geometry parameters as the acid volume increases. Sensitivity analysis was performed on controllable and reservoir parameters. It was observed that a high flow rate should be applied for a low fluid viscosity to achieve the optimization goals. Straight acid reaches optimal conditions at a high flow rate and low volume; for retarded acids these conditions appear only at a low flow rate and high volume. The study of acid concentration for gelled acid showed that as the concentration increased, the required flow rate and volume increased. Besides, for a low-permeability formation, a large fracture half-length and a small fracture width are desirable, in which case a higher flow rate is required. The sensitivity analysis also showed that, for a high Young's modulus, the optimum flow rate increases and the optimum acid volume decreases. The effect of closure stress was investigated as well, and it was observed that for a sample with high closure stress, a low flow rate and a high acid volume are required.
Tuning the Electrical Parameters of p-NiOx Based Thin Film Transistors (Tfts) by Pulsed Laser Irradiation
Poreddy Manojreddy, Srikanth Itapu, Jammalamadaka Krishna Ravali, Selvendran S
Subject: Materials Science, Biomaterials Keywords: Laser irradiation; p-type NiO; SiO2 layer; Thin film transistor
We utilized laser irradiation as a potential technique for tuning the electrical performance of NiOx/SiO2 thin film transistors (TFTs). By optimizing the laser fluence and the number of laser pulses, the TFT performance is evaluated in terms of mobility, threshold voltage, on/off current ratio and subthreshold swing, all derived from the transfer and output characteristics. The NiOx/SiO2 TFT irradiated with 500 laser pulses exhibited an enhanced mobility of 3 cm2/V-s, up from 1.25 cm2/V-s for the as-deposited NiOx/SiO2 TFT. The laser-irradiated NiOx material likely has a significant concentration of defect gap states, which could also be involved in light-absorption processes. Second, and more importantly, the excess energy of the photogenerated charge carriers (due to the significant difference between the photon energy and the bandgap of NiOx), combined with the very high light intensity, would result in complex thermal and photothermal changes, thus producing the enhanced electrical performance of the p-type NiOx/SiO2 TFT structure.
Impact of Neck and/or Shoulder Pain on Headache
Charly Gaul, Martin C. Michel, Heidemarie Gräter, Anette Lampert, Manuel Plomer, Thomas Weiser, Stefanie Förderreuther
Subject: Medicine & Pharmacology, Allergology Keywords: tension-type headache; migraine; neck and shoulder pain; ibuprofen; caffeine
As neck and/or shoulder pain (NSP) frequently occurs together with tension-type headache (TTH) and migraine, we explored how concomitant NSP affects perceived treatment responses to an analgesic. An anonymous survey was performed among 895 TTH and migraine sufferers who used the analgesic 400 mg ibuprofen/100 mg caffeine. NSP was relatively abundant among patients (42.4% for TTH; 39.2% for migraine) and was associated with more than one additional day with headache per month. Reported pain reduction was independent of NSP for both TTH and migraine. More patients became pain-free at 2 h in migraine with NSP (42.9%) than in migraine without NSP (32.2%), in contrast to TTH with NSP (60.6%) versus TTH without NSP (71.4%). For both migraine and TTH, recurrence of headache on the same day was more prevalent in those with concomitant NSP, leading to a greater likelihood of taking a second dose of the analgesic. NSP frequently occurs in TTH and migraine patients. In migraine, NSP seems to be associated with a better treatment response at 2 h. The more frequent recurrence of pain in those with concomitant NSP indicates that NSP makes both headache types worse. Further studies are needed to substantiate these effects.
Coronavirus 2019 Infection in People with Associated Co-Morbidities: Case Fatality and ACE2 Inhibitors Treatment Concerns
Maryam Honardoost, Rokhsareh Aghili, Mohammad E. Khamseh
Subject: Medicine & Pharmacology, General Medical Research Keywords: COVID-19; associated comorbidities; treatment; ACE2 inhibitors; Type 2 diabetes
The Coronavirus Disease 2019 (COVID-19) outbreak is becoming pandemic, with the highest mortality in people with associated comorbidities. The causative RNA virus contains four structural proteins and uses its spike protein to enter the host cell. It has been demonstrated that Angiotensin Converting Enzyme 2 (ACE2), a part of the renin-angiotensin-aldosterone system (RAAS), acts as a host receptor for the virus and is therefore a main target of therapeutic approaches. However, medications acting on the RAAS can lead to serious complications, especially in people with diabetes and hypertension. To avoid this, other potential treatment modalities should be considered in COVID-19 patients with associated comorbidities.
Origin of Q for the set of rational numbers?
It seems many sources$^1$ attribute the use of the letter "Q" to represent the rationals to the N. Bourbaki group (in the 1930's); however, the Wikipedia entry on rational numbers claims that Giuseppe Peano introduced the notation in 1895 (unfortunately, no link to support that claim & no mention of the title).
The Peano etymology suggests it is from the Italian quoziente (see the Wikipedia article quoted above) whereas the Bourbaki etymology is traced to the German quotient (see Wolfram Mathworld link, below).
I suspect that the letter 'Q' was in use prior to Bourbaki, and that Bourbaki was responsible for the introduction of the double struck blackboard bold $\mathbb{Q}$, and that this was conflated with the introduction of the letter Q, itself, but have not found any sources to confirm this.
Any suggestions for sources would be greatly appreciated.
$^1$: For example:
https://jeff560.tripod.com/nth.html
https://mathworld.wolfram.com/RationalNumber.html
Screenshot of Wikipedia paragraph referring to Peano:
Based on @user6530's citations (at least of the Formulario Mathematico), it would seem that the claim that Peano introduced the use of 'Q' for the rationals is inaccurate.
notation set-theory
Rax Adaam
Dedekind was the first to use a letter (R) for the set of rational numbers, in 1872; then, starting from 1895, Peano began to use the letter r (lowercase) to denote the same set (and, from 1889, R for the set of positive rationals). Other authors proposed different letters, and only in the early forties did Bourbaki introduce the letter Q (not the blackboard bold $\mathbb{Q}$). The blackboard bold version was possibly introduced in the late fifties or (more likely) in the early sixties, but that is another question.
Now, some references.
Dedekind used the letter R (uppercase) for the set of rational numbers in Stetigkeit und irrationale Zahlen (1872), $\S 3$, page 16 ("die Gerade L ist unendlich viel reicher an Punkt-Individuen, als das Gebiet R der rationalen Zahlen an Zahl-Individuen", i.e. "the straight line L is infinitely richer in point-individuals than the domain R of rational numbers in number-individuals", here the English translation).
About Peano, Wikipedia "clearly" refers to somewhere in the Formulaire de mathématiques/Formulaire Mathématique/Formulario Mathematico, where Peano actually used letters extensively to denote sets (classe) of numbers. There's no doubt about this, as this is the only work of Peano concerning the subject for the year 1895 (see here for the complete bibliography of Peano; the works for the year 1895 start at page 45). The problem is that (see the wiki page)
the five editions of the Formulario [are not] editions in the usual sense of the word. Each is essentially a new elaboration, although much material is repeated. Moreover, the title and language varied: the first three, titled Formulaire de Mathématiques, and the fourth, titled, Formulaire Mathématique, were written in French, while Latino sine flexione, Peano's own invention, was used for the fifth edition, titled Formulario Mathematico. ... Ugo Cassina lists no less than twenty separately published items as being parts of the 'complete' Formulario!
and moreover the Formulario was written by many of Peano's collaborators, such as Giovanni Vailati, Mario Pieri, Alessandro Padoa, Giovanni Vacca, Vincenzo Vivanti, Gino Fano and Cesare Burali-Forti, so when one writes "in the Formulario Peano says..." one has to understand that one is actually referring to Peano or to one of his collaborators.
The following quotes are taken from the Formulario Mathematico edited in 1908, for the reason that it is the clearer and more complete exposition of the subject. One can read the 1908 Formulario here, while different editions of the Formulaire de mathématiques are on Gallica, for example here.
First Peano writes (in I.$\S 1$, of course) the symbol $N_0$, together with $0$ and $+$, as "idea primitivo" [sic], i.e. undefined primitive ideas used to define all the other symbols of the "Arithmetica", and whose sense is determined by a system of propositions, the first of which is:
$\cdot 0\quad N_0\ \varepsilon\ \text{Cls}$
which he reads "$N_0$ is a class", and so on. In II.$\S 5$ we find $N_1=N_0+1$, so $N_1$ is the set of strictly positive natural numbers, and in II.$\S 6$ he uses $+N_0$ and $-N_0$ for positive and negative numbers, and $n$ for the union (so $n$ stands for $\mathbb{Z}$). Then in III.$\S 8$ we read
$\cdot 2 \quad R=N_1/N_1$
$R=$ "Numero rationale". Illo es omni expressione de forma $b/a$, ubi $a$ et $b$ es numero naturale...
i.e., $R$ denotes a rational number, and it is any expression of the form $b/a$, where $a$ and $b$ are natural numbers; clearly Peano also introduces $r=+R\cup -R\cup \iota 0$ (III.$\S 9 1\cdot 0$).
$\cdot 01\quad r=n/N_1$
$r=$ numero rationale relativo
and so $r$ stands for $\mathbb{Q}$. Finally, in III.$\S 12$ we find
$5\cdot 0\quad Q=l'`[\text{Cls}` R \cap u\ni(\exists u.\exists R=\eta u)])$
that Peano reads
$Q$, lege "quantitate reale positivo" es omni limite supero de aliquo classe $u$ de rationale, existente, et tale que existe aliquo rationale maiore de omni $u$
i.e., $Q$, which reads as "real positive numbers", is any supremum of some existing set (classe) $u$ of rationals, such that there exists some rational number greater than all elements of $u$; clearly in III.$\S 13$ Peano defines $q$ as $Q\cup -Q\cup \iota 0$, and so $q$ stands for our $\mathbb{R}$.
In the Formulaire de mathématiques one can read more or less the same things (in French, of course), in particular in the index at page 56 we find "$r$ nombre rationnel".
Anyway, Peano used letters for sets of numbers before 1895, see Arithmetices principia, nova methodo exposita (1889), the Signorum Tabula at page 13 ("$Q$ quantitas, sive numerus realis positivus", i.e. "$Q$ [stands for] quantity, that is, a real positive number", while $R$ is used for positive rationals, no symbol for the rationals, positive or negative).
Summing up: yes, Peano uses letters to denote classes of numbers, but no, his use is different from (indeed, antithetical to) the modern one; moreover, capital letters are used for sets of only positive or only negative numbers of some kind, while lowercase letters are used for both positive and negative numbers together ($n$ for $\mathbb{Z}$, $r$ for $\mathbb{Q}$, $q$ for $\mathbb{R}$ [sic]).
Furthermore, I cannot find any source independent of Wikipedia about the use of letter $Q$ from the Italian "quoziente", and moreover, in Italian the elements of $Q$ are called "frazioni" (fractions) or "numeri razionali" (rational numbers, from the Latin word "ratio"), while "quoziente" is the result of an operation. The choice of letters made by Peano is transparent: $n$ is for "numerus" (whole number, "numero" in Italian), "r" for "(numero) rationale" (rational number, but also "rapporto" in Italian, ratio in English), "q" for "quantitas" (Italian "quantità", English quantity).
Now, some words about Bourbaki. Yes, "they" use $Q$ for rational numbers, and no, they do not use the blackboard bold $\mathbb{Q}$ (at least in the 1940s papers). An early occurrence (maybe the earliest printed on paper) of $Q$ to denote the set of rational numbers is here, at page 3 of number 5 (7-10 December 1940) of La Tribu, Bourbaki's internal newsletter. We read
$Q$ est ordonné [...] Topologie de $Q$ [...] Complétion de $Q$ : nombres réels
so there is no doubt that here $Q$ refers to our $\mathbb{Q}$. Clearly, we find $Q$ for rational numbers in the 1942 Algèbre (from page 29).
Thank you for the detailed response. Re: your first statement "Wikipedia clearly refers to the Formulario Mathematico...", I revisited the Wiki article to check the source; however, I didn't find any reference to the text you quoted. I've updated the question with a screenshot of the paragraph that attributes (apparently without source) the notation to Peano (N.B. I checked the list of sources at the end of the article, and it isn't mentioned there, either).
– Rax Adaam
@RaxAdaam I edited my answer.
Thank you so much for taking the time to clarify! This is not a field I'm familiar with (hence not recognizing the work that the wiki was "clearly" quoting :D), so the edit was very clarifying for me. One question: you use "stays for" throughout your answer -- is this a technical sense of the word, or does it just mean "stands for"? Again, thank you for your time & sharing your knowledge!
@RaxAdaam no technical use, simply I'm not a native speaker :) I added a summary and some words about Dedekind, and corrected a mistake: the Arithmetices principia were edited in 1889, not 1902.
Please take my misunderstanding about "stays for" as a compliment: your mastery of the language is so thorough-going that, even though this possibility occurred to me, I was certain it must've been my own lack of knowledge. Thank you for the further additions. I greatly appreciate the time and effort you've put into answering this question.
How long until the next bomb? Why there's no reason to think that nuclear deterrence works
Every day one sees politicians on TV assuring us that nuclear deterrence works because no nuclear weapon has been exploded in anger since 1945. They clearly have no understanding of statistics.
With a few plausible assumptions, we can easily calculate that the time until the next bomb explodes could be as little as 20 years.
Be scared, very scared.
The first assumption is that bombs go off at random intervals. Since we have had only one so far (counting Hiroshima and Nagasaki as a single event), this can't be verified. But given the large number of small influences that control when a bomb explodes (whether in war or by accident), it is the natural assumption to make. The assumption is given some credence by the observation that the intervals between wars are random [download pdf].
If the intervals between bombs are random, that implies that the distribution of the length of the intervals is exponential in shape. The nature of this distribution has already been explained in an earlier post about the random lengths of time for which a patient stays in an intensive care unit. If you haven't come across an exponential distribution before, please look at that post before moving on.
All that we know is that 70 years have elapsed since the last bomb, so the interval until the next one must be greater than 70 years. The probability that a random interval is longer than 70 years can be found from the cumulative form of the exponential distribution.
If we denote the true mean interval between bombs as $\mu$ then the probability that an interval is longer than 70 years is
\[ \text{Prob}\left( \text{interval > 70}\right)=\exp{\left(\frac{-70}{\mu}\right)} \]
We can get a lower 95% confidence limit (call it $\mu_\mathrm{lo}$) for the mean interval between bombs by the argument used in Lectures on Biostatistics, section 7.8 (page 108). If we imagine that $\mu_\mathrm{lo}$ were the true mean, we want it to be such that there is a 2.5% chance that we observe an interval that is greater than 70 years. That is, we want to solve
\[ \exp{\left(\frac{-70}{\mu_\mathrm{lo}}\right)} = 0.025\]
That's easily solved by taking natural logs of both sides, giving
\[ \mu_\mathrm{lo} = \frac{-70}{\ln{\left(0.025\right)}}= 19.0\text{ years}\]
A similar argument leads to an upper confidence limit, $\mu_\mathrm{hi}$, for the mean interval between bombs, by solving
\[ \exp{\left(\frac{-70}{\mu_\mathrm{hi}}\right)} = 0.975\]
\[ \mu_\mathrm{hi} = \frac{-70}{\ln{\left(0.975\right)}}= 2765\text{ years}\]
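For anyone who wants to check the arithmetic, here is a minimal Python sketch. The only inputs taken from the argument above are the 70-year gap and the 2.5%/97.5% tail probabilities; everything else (names, printing) is illustrative.

```python
import math

GAP_YEARS = 70.0  # years since the last explosion

def mean_limit(tail_prob):
    """Solve exp(-GAP_YEARS / mu) = tail_prob for the mean interval mu."""
    return -GAP_YEARS / math.log(tail_prob)

mu_lo = mean_limit(0.025)  # lower 95% confidence limit for the mean interval
mu_hi = mean_limit(0.975)  # upper 95% confidence limit for the mean interval
print(f"mean interval between bombs: {mu_lo:.1f} to {mu_hi:.0f} years")
# prints: mean interval between bombs: 19.0 to 2765 years
```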
If the worst case were true and the mean interval between bombs were 19 years, then the distribution of the time to the next bomb would have an exponential probability density function, $f(t)$,
\[ f(t) = \frac{1}{19} \exp{\left(\frac{-t}{19}\right)} \]
There would be a 50% chance that the waiting time until the next bomb would be less than the median of this distribution, 19 ln(2) = 13.2 years.
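The 50% figure is just the statement that the median of an exponential distribution with mean 19 years is 19 ln(2), and it can be checked by simulation: draw random intervals with a 19-year mean and count how many fall below the median. The sketch below is only illustrative; the sample size and seed are arbitrary choices, not part of the argument.

```python
import math
import random

random.seed(1)               # arbitrary seed, for reproducibility only
MEAN = 19.0                  # worst-case mean interval between bombs (years)
MEDIAN = MEAN * math.log(2)  # median of the exponential distribution, about 13.2 years

# draw 100,000 random intervals and count how many are shorter than the median
draws = [random.expovariate(1 / MEAN) for _ in range(100_000)]
frac_below = sum(d < MEDIAN for d in draws) / len(draws)
print(f"median = {MEDIAN:.1f} years, fraction of intervals below it = {frac_below:.3f}")
# the fraction comes out close to 0.5, as expected
```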
In summary, the observation that there has been no explosion for 70 years implies that the mean time until the next explosion lies (with 95% confidence) between 19 years and 2765 years. If it were 19 years, there would be a 50% chance that the waiting time to the next bomb could be less than 13.2 years. Thus there is no reason at all to think that nuclear deterrence works well enough to protect the world from incineration.
Another approach
My statistical colleague, the ace probabilist Alan Hawkes, suggested a slightly different approach to the problem, via likelihood. The likelihood of a particular value of the mean interval between bombs, $\mu$, is defined as the probability of making the observation(s), given that value of $\mu$. In this case, there is one observation: that the interval between bombs is more than 70 years. The likelihood, $L\left(\mu\right)$, of any specified value of $\mu$ is thus
\[L\left(\mu\right)=\text{Prob}\left( \text{interval > 70 | }\mu\right) = \exp{\left(\frac{-70}{\mu}\right)} \]
A plot of this function (graph on right) shows that it increases with $\mu$ continuously, so the maximum likelihood estimate of $\mu$ is infinite. An infinite wait until the next bomb would be perfect deterrence.
But again we need confidence limits for this. Since the upper limit is infinite, the appropriate thing to calculate is a one-sided lower 95% confidence limit. This is found by solving
\[ \exp{\left(\frac{-70}{\mu_\mathrm{lo}}\right)} = 0.05\]
which gives
\[ \mu_\mathrm{lo} = \frac{-70}{\ln{\left(0.05\right)}}= 23.4\text{ years}\]
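The same kind of one-line check works here; a minimal Python sketch, assuming only the 70-year gap and the 5% tail probability:

```python
import math

# one-sided lower 95% confidence limit: solve exp(-70 / mu) = 0.05 for mu
mu_lo_one_sided = -70.0 / math.log(0.05)
print(f"one-sided lower 95% limit for the mean interval: {mu_lo_one_sided:.1f} years")
# prints about 23.4 years
```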
The first approach gives 95% confidence limits for the average time until we get incinerated as 19 years to 2765 years. The second approach gives the lower limit as 23.4 years. There is no important difference between the two methods of calculation. This shows that the bland assurances of politicians that "nuclear deterrence works" are not justified.
It is not the purpose of this post to predict when the next bomb will explode, but rather to point out that the available information tells us very little about that question. This seems important to me because it contradicts directly the frequent assurances that deterrence works.
The only consolation is that, since I'm now 79, it's unlikely that I'll live long enough to see the conflagration.
Anyone younger than me would be advised to get off their backsides and do something about it, before you are destroyed by innumerate politicians.
While talking about politicians and war it seems relevant to reproduce Peter Kennard's powerful image of the Iraq war.
and with that, to quote the comment made by Tony Blair's aide, Lance Price
It's a bit like my feeling about priests doing the twelve stations of the cross. Politicians and priests masturbating at the expense of kids getting slaughtered (at a safe distance, of course).
Regulation of alternative medicine: why it doesn't work, and never can
The Scottish Universities Medical Journal asked me to write about the regulation of alternative medicine. It's an interesting topic and not easy to follow because of the veritable maze of more than twenty overlapping regulators and quangos which fail utterly to protect the public against health fraud. In fact they mostly promote health fraud. The paper is now published, and here is a version with embedded links (and some small updates).
We are witnessing an increasing commercialisation of medicine. It's really taken off since the passage of the Health and Social Care Act into law. Not only does that mean having NHS hospitals run by private companies, but it means that "any qualified provider" can bid for just about any service. The problem lies, of course, in what you consider "qualified" to mean. Any qualified homeopath or herbalist will, no doubt, be eligible. University College London Hospital advertised for a spiritual healer. The "person specification" specified a "qualification", but only HR people think that a paper qualification means that spiritual healing is anything but a delusion.
The vocabulary of bait and switch
First, a bit of vocabulary. Alternative medicine is a term that is used for medical treatments that don't work (or at least haven't been shown to work). If they worked, they'd be called "medicine". The anti-malarial, artemisinin, came originally from a Chinese herb, but once it had been purified and properly tested, it was no longer alternative. But the word alternative is not favoured by quacks. They prefer their nostrums to be described as "complementary" – it sounds more respectable. So CAM (complementary and alternative medicine) became the politically-correct euphemism. Now it has gone a stage further, and the euphemism in vogue with quacks at the moment is "integrated" or "integrative" medicine. That means, very often, integrating things that don't work with things that do. But it sounds fashionable. In reality it is designed to confuse politicians who ask for, say, integrated services for old people.
Put another way, the salespeople of quackery have become rather good at bait and switch. The Wikipedia definition is as good as any.
Bait-and-switch is a form of fraud, most commonly used in retail sales but also applicable to other contexts. First, customers are "baited" by advertising for a product or service at a low price; second, the customers discover that the advertised good is not available and are "switched" to a costlier product.
As applied to the alternative medicine industry, the bait is usually in the form of some nice touchy-feely stuff which barely mentions the mystical nonsense. But when you've bought into it you get the whole panoply of nonsense. Steven Novella has written eloquently about the use of bait and switch in the USA to sell chiropractic, acupuncture, homeopathy and herbal medicine: "The bait is that CAM offers legitimate alternatives, the switch is that it primarily promotes treatments that don't work or are at best untested and highly implausible.".
The "College of Medicine" provides a near-perfect example of bait and switch. It is the direct successor of the Prince of Wales' Foundation for Integrated Health. The Prince's Foundation was a consistent purveyor of dangerous medical myths. When it collapsed in 2010 because of a financial scandal, a company was formed called "The College for Integrated Health". A slide show, not meant for public consumption, said "The College represents a new strategy to take forward the vision of HRH Prince Charles". But it seems that too many people have now tumbled to the idea that "integrated", in this context, means barmpottery. Within less than a month, the new institution was renamed "The College of Medicine". That might be a deceptive name, but it's a much better bait. That's why I described the College as a fraud and delusion.
Not only did the directors, all of them quacks, devise a respectable sounding name, but they also succeeded in recruiting some respectable-sounding people to act as figureheads for the new organisation. The president of the College is Professor Sir Graham Catto, emeritus professor of medicine at the University of Aberdeen. Names like his make the bait sound even more plausible. He claims not to believe that homeopathy works, but seems quite happy to have a homeopathic pharmacist, Christine Glover, on the governing council of his college. At least half of the governing Council can safely be classified as quacks.
So the bait is clear. What about the switch? The first thing to notice is that the whole outfit is skewed towards private medicine: see The College of Medicine is in the pocket of Crapita Capita. The founder, and presumably the main provider of funds (they won't say how much), is the huge outsourcing company, Capita. This is a company known in Private Eye as Crapita. Their inefficiency is legendary. They are the folks who messed up the NHS computer system and the courts computer system. After swallowing large amounts of taxpayers' money, they failed to deliver anything that worked. Their latest failure is the court translation service. The president (Catto), the vice president (Harry Brunjes) and the CEO (Mark Ratnarajah) are all employees of Capita.
The second thing to notice is that their conferences and courses are a bizarre mixture of real medicine and pure quackery. Their 2012 conference had some very good speakers, but then it had a "herbal workshop" with Simon Mills (see a video) and David Peters (the man who tolerates dowsing as a way to diagnose which herb to give you). The other speaker was Dick Middleton, who represents the huge herbal company, Schwabe (I debated with him on BBC Breakfast). In fact the College's Faculty of Self-care appears to resemble a marketing device for Schwabe.
Why regulation isn't working, and can't work
There are various levels of regulation. The "highest" level is the statutory regulation of osteopathy and chiropractic. The General Chiropractic Council (GCC) has exactly the same legal status as the General Medical Council (GMC). This ludicrous state of affairs arose because nobody in John Major's government had enough scientific knowledge to realise that chiropractic, and some parts of osteopathy, are pure quackery.
The problem is that organisations like the GCC function more to promote chiropractic than to regulate them. This became very obvious when the British Chiropractic Association (BCA) decided to sue Simon Singh for defamation, after he described some of their treatments as "bogus", "without a jot of evidence".
In order to support Singh, several bloggers assessed the "plethora of evidence" which the BCA said could be used to justify their claims. When, 15 months later, the BCA produced its "plethora" it was shown within 24 hours that the evidence was pathetic. The demolition was summarised by lawyer, David Allen Green, in The BCA's Worst Day.
In the wake of this, over 600 complaints were made to the GCC about unjustified claims made by chiropractors, thanks in large part to heroic work by two people, Simon Perry and Allan Henness. Simon Perry's Fishbarrel (a browser plugin) allows complaints to be made quickly and easily – try it. The majority of these complaints were rejected by the GCC, apparently on the grounds that chiropractors could not be blamed because the false claims had been endorsed by the GCC itself.
My own complaint was based on phone calls to two chiropractors; I was told such nonsense as "colic is down to, er um, faulty movement patterns in the spine". But my complaint never reached the Conduct and Competence committee because it had been judged by a preliminary investigating committee that there was no case to answer. The impression one got from this (very costly) exercise was that the GCC was there to protect chiropractors, not to protect the public.
The outcome was a disaster for chiropractors, who emerged totally discredited. It was also a disaster for the GCC, which was forced to admit that it hadn't properly advised chiropractors about what they could and couldn't claim. The recantation culminated in the GCC declaring, in August 2010, that the mythical "subluxation" is a "historical concept": "It is not supported by any clinical research evidence that would allow claims to be made that it is the cause of disease." Subluxation was a product of the fevered imagination of the founder of the chiropractic cult, D.D. Palmer. It referred to an imaginary spinal lesion that he claimed to be the cause of most diseases. Since 'subluxation' is the only thing that has distinguished chiropractic from any other sort of manipulation, the admission by the GCC that it does not exist, after a century of pretending that it does, is quite an admission.
The President of the BCA himself admitted in November 2011
"The BCA sued Simon Singh personally for libel. In doing so, the BCA began one of the darkest periods in its history; one that was ultimately to cost it financially,"
As a result of all this, the deficiencies of chiropractic, and the deficiencies of its regulator, were revealed, and advertisements for chiropractic are somewhat less misleading. But this change for the better was brought about entirely by the unpaid efforts of bloggers and a few journalists, and not at all by the official regulator, the GCC, which was part of the problem, not the solution. And it was certainly not helped by the organisation that is meant to regulate the GCC, the Council for Healthcare Regulatory Excellence (CHRE), which did nothing whatsoever to stop the farce.
At the other end of the regulatory spectrum, voluntary self-regulation is an even worse farce than the GCC. These bodies all have grand-sounding "Codes of Practice" which, in practice, they ignore totally.
The Society of Homeopaths is just a joke. When homeopaths were caught out recommending sugar pills for prevention of malaria, they did nothing (arguably such homicidal advice deserves a jail sentence).
The Complementary and Natural Healthcare Council (CNHC) is widely known in the blogosphere as Ofquack. I know about them from the inside, having been a member of their Conduct and Competence Committee. It was set up with the help of a £900,000 grant from the Department of Health to the Prince of Wales, to oversee voluntary self-regulation. It fails utterly to do anything useful. The CNHC code of practice, paragraph 15, states
"Any advertising you undertake in relation to your professional activities must be accurate. Advertisements must not be misleading, false, unfair or exaggerated".
When Simon Perry made a complaint to the CNHC about claims being made by a CNHC-registered reflexologist, the Investigating Committee upheld all 15 complaints. But it then went on to say that there was no case to answer because the unjustified claims were what the person had been taught, and were made in good faith.
This is precisely the ludicrous situation which will occur again and again if reflexologists (and many other alternative therapies) are "accredited". The CNHC said, correctly, that the reflexologist had been taught things that were not true, but then did nothing whatsoever about it apart from toning down the advertisements a bit. They still register reflexologists who make outrageously false claims.
Once again we see that no sensible regulation is possible for subjects that are pure make-believe.
The first two examples deal (or rather, fail to deal) with regulation of outright quackery. But there are dozens of other quangos that sound a lot more respectable.
The European Food Safety Authority (EFSA). One of the common scams is to have your favourite quack treatment classified as a food rather than as a medicine. The laws about what you can claim have been a lot laxer for foods. But the EFSA has done a pretty good job in stopping unjustified claims for health benefits from foods. Dozens of claims made by makers of probiotics have been banned. The food industry, needless to say, objects very strongly to being forced to tell the truth. In my view, the EFSA has not gone far enough. They recently issued a directive about claims that could legally be made. Some of these betray the previously high standards of the EFSA. For example you are allowed to say that "Vitamin C contributes to the reduction of tiredness and fatigue" (as long as the product contains above a specified amount of Vitamin C). I'm not aware of any trials that show vitamin C has the slightest effect on tiredness or fatigue. Although these laws do not come into effect until December 2012, they have already been invoked by the ASA as a reason not to uphold a complaint about a multivitamin pill which claimed that it "Includes 8 nutrients that can contribute to the reduction in tiredness and fatigue".
The Advertising Standards Authority (ASA). This is almost the only organisation that has done a good job on false health claims. Their Guidance on Health Therapies & Evidence says
"Whether you use the words 'treatment', 'treat' or 'cure', all are likely to be seen by members of the public as claims to alleviate effectively a condition or symptom. We would advise that they are not used"
"Before and after' studies with little or no control, studies without human subjects, self-assessment studies and anecdotal evidence are unlikely to be considered acceptable"
They are spot on.
The ASA's Guidance for Advertisers of Homeopathic Services is wonderful.
"In the simplest terms, you should avoid using efficacy claims, whether implied or direct,"
"To date, the ASA has have not seen persuasive evidence to support claims that homeopathy can treat, cure or relieve specific conditions or symptoms."
That seems to condemn the (mis)labelling allowed by the MHRA as breaking the rules. Sadly, though, the ASA has no powers to enforce its decisions and only too often they are ignored. The Nightingale Collaboration has produced an excellent letter that you can hand to any pharmacist who breaks the rules.
The ASA has also judged against claims made by "Craniosacral therapists" (that's the lunatic fringe of osteopathy). They will presumably uphold complaints about similar claims made (I'm ashamed to say) by UCLH Hospitals.
The private examination company Edexcel sets exams in antiscientific subjects, so miseducating children. The teaching of quackery to 16 year-olds has been approved by a maze of quangos, none of which will take responsibility, or justify their actions. So far I've located no fewer than eight of them: the Office of the Qualifications and Examinations Regulator (OfQual), Edexcel, the Qualifications and Curriculum Authority (QCA), Skills for Health, Skills for Care, National Occupational Standards (NOS), the private exam company VTCT and the schools inspectorate, Ofsted. Asking any of these people why they approve of examinations in imaginary subjects meets with blank incomprehension. They fail totally to protect the public from utter nonsense.
The Department for Education has failed to do anything about the miseducation of children in quackery. In fact it has encouraged it by, for the first time, giving taxpayers' money to a Steiner (Waldorf) school (at Frome, in Somerset). Steiner schools are run by a secretive and cult-like body of people (read about it). They teach about reincarnation, karma, gnomes, and all manner of nonsense, sometimes with unpleasant racial overtones. The teachers are trained in Steiner's Anthroposophy, so if your child gets ill at school they'll probably get homeopathic sugar pills. They might well get measles or mumps too, since Steiner people don't believe in vaccination.
Incredibly, the University of Aberdeen came perilously close to appointing a chair in anthroposophical medicine. This disaster was aborted by bloggers, and a last minute intervention from journalists. Neither the university's regulatory mechanisms, nor any others, seemed to realise that a chair in mystical barmpottery was a bad idea.
Trading Standards offices and the Office of Fair Trading.
It is the statutory duty of Trading Standards to enforce the Consumer Protection Regulations (2008). This European legislation is pretty good. It caused a lawyer to write "Has The UK Quietly Outlawed "Alternative" Medicine?". Unfortunately Trading Standards people have consistently refused to enforce these laws. The whole organisation is a mess. Its local office arrangement fails totally to deal with the age of the internet. The situation is so bad that a group of us decided to put them to the test. The results were published in the Medico-Legal Journal, Rose et al., 2012, "Spurious Claims for Health-care Products: An Experimental Approach to Evaluating Current UK Legislation and its Implementation". They concluded "EU directive 2005/29/EC is largely ineffective in preventing misleading health claims for consumer products in the UK".
Skills for Health is an enormous quango which produces HR-style "competences" for everything under the sun. They are mostly quite useless. But those concerned with alternative medicine are not just useless. They are positively harmful. Totally barmy. There are competences and National Occupational Standards for every lunatic made-up therapy under the sun. When I phoned them to discover who'd written them, I learned that they had been drafted by the Prince of Wales' Foundation for Magic Medicine. And when I joked by asking if they had a competence for talking to trees, I was told, perfectly seriously, "You'd have to talk to LANTRA, the land-based organisation, for that."
That was in January 2008. A lot of correspondence with the head of Skills for Health got nowhere at all. She understood nothing and it hasn't improved a jot.
This organisation costs a lot of taxpayers' money and it should have been consigned to the "bonfire of the quangos" (but of course there was no such bonfire in reality). It is a disgrace.
The Quality Assurance Agency (QAA) is supposed to ensure the quality of university courses. In fact it endorses courses in nonsense alternative medicine and so does more harm than good. The worst recent failure of the QAA was in the case of the University of Wales: see Scandal of the University of Wales and the Quality Assurance Agency. The university was making money by validating thousands of external degrees in everything from fundamentalist theology to Chinese Medicine. These validations were revealed as utterly incompetent by bloggers, and later by BBC Wales journalist Ciaran Jenkins (now working for Channel 4).
The mainstream media eventually caught up with bloggers. In 2010, BBC1 TV (Wales) produced an excellent TV programme that exposed the enormous degree validation scam run by the University of Wales. The programme can be seen on YouTube (Part 1, and Part 2). The programme also exposed, incidentally, the uselessness of the Quality Assurance Agency (QAA) which did nothing until the scam was exposed by TV and blogs. Eventually the QAA sent nine people to Malaysia to investigate a dodgy college that had been revealed by the BBC. The trip cost £91,000. It could have been done for nothing if anyone at the QAA knew how to use Google.
The outcome was that the University of Wales stopped endorsing external courses, and it was soon shut down altogether (though bafflingly, its vice-chancellor, Marc Clement, was promoted). The credit for this lies entirely with bloggers and the BBC. The QAA did nothing to help until the very last moment.
Throughout this saga Universities UK (UUK) has maintained its usual total passivity. They have done nothing whatsoever about their members who give BSc degrees in anti-scientific subjects. (UUK used to be known as the Committee of Vice-Chancellors and Principals.)
Council for Healthcare Regulatory Excellence (CHRE), soon to become the PSAHSC.
Back now to the CHRE, the people who failed so signally to sort out the GCC. They are being reorganised. Their consultation document says
"The Health and Social Care Act 20122 confers a new function on the Professional Standards Authority for Health and Social Care (the renamed Council for Healthcare Regulatory Excellence). From November 2012 we will set standards for organisations that hold voluntary registers for people working in health and social care occupations and we will accredit the register if they meet those standards. It will then be known as an 'Accredited Register'. "
They are trying to decide what the criteria should be for "accreditation" of a regulatory body. The list of those interested has some perfectly respectable organisations, like the British Psychological Society. It also contains a large number of crackpot organisations, like Crystal and Healing International, as well as joke regulators like the CNHC.
They already oversee the Health Professions Council (HPC) which is due to take over Herbal medicine and Traditional Chinese Medicine, with predictably disastrous consequences.
Two of the proposed criteria for "accreditation" appear to be directly contradictory.
Para 2.5 makes the whole accreditation pointless from the point of view of patients
2.5 It will not be an endorsement of the therapeutic validity or effectiveness of any particular discipline or treatment.
Since the only thing that matters to the patient is whether the therapy works (and is safe), accreditation of organisations that ignore this will merely give the appearance of official approval of crystal healing etc etc. This appears to contradict directly
A.7 The organisation can demonstrate that there either is a sound knowledge base underpinning the profession or it is developing one and makes that explicit to the public.
A "sound knowledge base", if it is to mean anything useful at all, means knowledge that the treatment is effective. If it doesn't mean that, what does it mean?
It seems that the official mind has still not grasped the obvious fact that there can be no sensible regulation of subjects that are untrue nonsense. If it is nonsense, the only form of regulation that makes any sense is the law.
Please fill in the consultation. My completed return can be downloaded as an example, if you wish.
Medicines and Healthcare products Regulatory Agency (MHRA) should be a top level defender of truth. Its strapline is
"We enhance and safeguard the health of the public by ensuring that medicines and medical devices work and are acceptably safe."
The MHRA did something (they won't tell me exactly what) about one of the most cruel scams that I've ever encountered, Esperanza Homeopathic Neuropeptide, peddled for multiple sclerosis at an outrageous price (£6,759 for 12 months' supply). Needless to say there was not a jot of evidence that it worked (and it wasn't actually homeopathic).
Astoundingly, Trading Standards officers refused to do anything about it.
The MHRA admit (when pushed really hard) that there is precious little evidence that any of the herbs work, and that homeopathy is nothing more than sugar pills. Their answer to that is to forget that bit about "ensuring that medicines … work".
Here's the MHRA's Traditional Herbal Registration Certificate for devil's claw tablets.
The wording "based on traditional use only" has to be included because of European legislation. Shockingly, the MHRA have allowed them to relegate that to small print, with all the emphasis on the alleged indications. The pro-CAM agency NCCAM rates devil's claw as "possibly effective" or "insufficient evidence" for all these indications, but that doesn't matter because the MHRA requires no evidence whatsoever that the tablets do anything. They should, of course, added a statement to this effect to the label. They have failed in their duty to protect and inform the public by allowing this labelling.
But it gets worse. Here is the MHRA's homeopathic marketing authorisation for the homeopathic medicinal product Arnicare Arnica 30c pillules
It is nothing short of surreal.
Since the pills contain nothing at all, they don't have the slightest effect on sprains, muscular aches or bruising. The wording on the label is exceedingly misleading.
If you "pregnant or breastfeeding" there is no need to waste you doctor's time before swallowing a few sugar pills.
"Do not take a double dose to make up for a missed one". Since the pills contain nothing, it doesn't matter a damn.
"If you overdose . . " it won't have the slightest effect because there is nothing in them
And it gets worse. The MHRA-approved label specifies ACTIVE INGREDIENT. Each pillule contains 30c Arnica Montana
No, they contain no arnica whatsoever.
It truly boggles the mind that men with dark suits and lots of letters after their names have sat for hours only to produce dishonest and misleading labels like these.
When this mislabelling was first allowed, it was condemned by just about every scientific society, but the MHRA did nothing.
The Nightingale Collaboration.
This is an excellent organisation, set up by two very smart skeptics, Alan Henness and Maria MacLachlan. Visit their site regularly, sign up for their newsletter, and help with their campaigns. Make a difference.
The regulation of alternative medicine in the UK is a farce. It is utterly ineffective in preventing deception of patients.
Such improvements as have occurred have resulted from the activity of bloggers, and sometimes the mainstream media. All the official regulators have, to varying extents, made things worse.
The CHRE proposals promise to make matters still worse by offering "accreditation" to organisations that promote nonsensical quackery. None of the official regulators seem to be able to grasp the obvious fact that it is impossible to have any sensible regulation of people who promote nonsensical untruths. One gets the impression that politicians are more concerned to protect the homeopathic (etc, etc) industry than they are to protect patients.
Deception by advocates of alternative medicine harms patients. There are adequate laws that make such deception illegal, but they are not being enforced. The CHRE and its successor should restrict themselves to real medicine. The money that they spend on pseudo-regulation of quacks should be transferred to the MHRA or a reformed Trading Standards organisation so they can afford to investigate and prosecute breaches of the law. That is the only form of regulation that makes sense.
The shocking case of the continuing sale of "homeopathic vaccines" for meningitis, rubella, pertussis etc was highlighted in an excellent TV programme by BBC South West. The failure of the MHRA and the GPC to take any effective action is yet another illustration of the failure of regulators to do their job. I have to agree with Andy Lewis when he concludes
"Children will die. And the fault must lie with Professor Sir Kent Woods, chairman of the regulator."
The General Election 2010: why it has to be Lib Dem this time
I voted Labour in every election (apart from my very first) up to and including 1997. This is about my feelings for the 2010 election. Make up your own mind (but don't let Rupert Murdoch manipulate you).
Downloadable button from Mark Golding at http://www.coia.org.uk
Don't Get Fooled Again "I agree with Rupert"
By the 2001 election, I had been forced to the conclusion that Tony Blair had views that were well to the right of Margaret Thatcher's, in many areas that mattered to me, so I voted Lib Dem. That was before 9/11. After that event, all doubt was gone, so in the 2005 election it was Lib Dem again.
I won't even consider the Conservative party much. I have never understood how anyone could vote for them, ever. The only choice for me is Lib Dem versus Labour. Let's try to be fair. Labour has done some good things (though most of them would probably have been done by Lib Dems too).
Minimum Wage Act 1998 was a great innovation
The Freedom of Information Act (2000) was a major step forward for openness and democracy.
Nursery school places have increased
Heating allowances for pensioners (though not sure that I should have got it)
The funding for the NHS was increased considerably and it has been very good for me (see Why I love the NHS).
Funding for science increased considerably
Against the big increases for the NHS must be set the huge increase in the number of highly-paid managers, relative to the number of nurses and doctors, that has occurred under Labour.
Bad things that Labour has done
It was obvious from an early stage that Labour were in favour of selective schools (but were not honest about it). They certainly favoured religious selective schools, and still do.
The explicit support of Tony Blair for creationist schools and his implicit support for homeopathy are distasteful, but not in themselves sufficient reason for voting against him. The decisive thing for me is the Labour government's careless attitude to human rights and free speech.
Nothing made that clearer than the Iraq war and its aftermath.
Saddam Hussein was a wicked dictator. Sadly the world has many wicked dictators. One wishes they would all go away. But only one of the world's wicked dictators was singled out to be invaded. It was already clear before 1997 that Iraq had been picked out by American neoconservatives as a 'special case'. They didn't get far until George Bush took office in 2001, and the tragedy of the twin towers, 9/11, gave them the chance they sought.
George Bush was perhaps the most extreme right-wing president in US history (as well as one of the most stupid). For someone who seemed to have difficulty in distinguishing between real life and a B-movie, his behaviour may not be surprising, but it brought shame on his country. His regime's legitimisation of torture is, to my mind, the greatest disgrace that has happened during my adult lifetime.
It was with increasing incredulity that I watched Tony Blair's poodle-like behaviour to Bush. It seemed incredible that any normal human, let alone a Labour prime minister, could behave like that. The sight of two such men, both believing that god was on their side, was scary in the extreme.
Some things are in danger of being forgotten with the passing of time. All these and much more were documented on my politics blog, up to the point when Blair left office.
Remember the US government's legalisation of torture. That caused no wavering in Blair's support.
Remember the plagiarised dossier? Any student would have been fired for that, but Blair shrugged it off.
Remember how the attorney general mysteriously changed his mind about the legality of the war?
Remember Abu Ghraib? If not, read Seymour Hersh.
Remember the ex-aide to Blair who said
"I couldn't help feeling TB was rather relishing his first blooding as PM, sending the boys into action. Despite all the necessary stuff about taking action 'with a heavy heart', I think he feels
it is part of his coming of age as a leader."
and how the government tried to tone down his remarks.
Remember David Kelly? The death of a good man must be largely the fault of Blair's government.
Remember how, eventually, generals and even neocons turned on Bush, but Blair would still not admit any mistake?
Remember the Hutton report, and the vicious attacks on the BBC's independence that followed it?
Remember the attempts to conceal 'rendition' (i.e. torture by proxy).
Remember the wonderful efforts of UCL lawyer, Philippe Sands, to expose illegal activities by both US and UK governments. He is someone of whom UCL can be very proud.
The good done by the Freedom of Information Act has to be set against their sloppy attitude to human rights, as evidenced by their constant attempts to extend detention without charge or trial. In 2004 I made the following poster, based on a dramatic front page of the Independent, 18th Dec. 2004. It is still relevant.
This followed the ruling of the Law Lords that the government's detention policy was illegal.
"The real threat to the life of the nation, in the sense of a people living in accordance with its traditional laws and political values, comes not from terrorism but from laws such as these. That is the true measure of what terrorism may achieve. It is for Parliament to decide whether to give the terrorists such a victory." Lord Justice Hoffmann, in the 8-1 ruling of the Law Lords that the UK government's policy of detention without charge is illegal. [Washington Post] , [Original report]
The sight of Blair acquiescing to the wish of the most right-wing neoconservative government in the western world sickened me unspeakably, and still does. The happy days of 1997 seemed to be a long way away.
That was Blair, but Gordon Brown and most of the Labour cabinet looked on and did nothing.
David Miliband said "You've punished us enough about Iraq". Well no, you haven't been punished at all. Yet. As someone said on twitter, resuscitate the 100,000 dead and we'll forgive you.
I'm still baffled about why the crowd that gathered in UCL's quad for the start of the second great march on 20th March 2003, were able to predict the outcome of the invasion so much more accurately than the government.
UCL quadrangle 20 March 2003
Apart from the war
Brown is guilty not only of supporting the war.
He has supported segregated religious schools and the reintroduction of "academy" schools, both being ways of surreptitiously re-introducing selection into the education system
He and Blair presided over an endless multiplication of box-ticking quangos. The intention was, no doubt, to increase quality, but the effect has been exactly the opposite. Just look, for example, at Skills for Health, the QAA, the QCA and a multitude of others.
These are some of the reasons that I cannot vote "Labour" this time. They have become, in many ways, a party of the right, barely distinguishable from the Conservative party (and in some respects, further to the right). Remember that the Conservatives supported Blair in his love affair with George Bush, they support selective schools, they support religious schools. And they are even more likely than Labour to sell their soul to Rupert Murdoch. Imagine Fox "News" coming to the UK and be afraid, very afraid.
Why Liberal Democrats?
Since I find it impossible to vote Labour this time, they are the only option. But I think one can be a bit more positive than that.
Some of the reasons why are listed in a letter in today's Guardian (the list of signatories is remarkable). The Lib Dem manifesto is here.
The Lib Dems are more likely than the other parties to roll back New Labour's attack on civil liberties
Lib Dems tax and green policies look pretty good to me.
The cost of replacing Trident missiles could be around £100 billion, and if that were spent it is doubtful whether what you get is useful under present conditions. Only Lib Dems would rethink this ghastly waste of money. Brown and Cameron prefer macho posturing.
Brown's judgment about banks was wrong, yet he still won't separate the casino banks and the savings banks. Lib Dem's would.
Lib Dems have been more open about how cuts would be made than other parties (if not 100%). Vince Cable for Chancellor.
Nick Clegg's response to the letter sent to party leaders by the Campaign for Science & Engineering (CaSE) was clearly better than the others, in many respects. See also Lib Dems science policy test.
Can you imagine a better science minister than Dr Evan Harris? I can't.
Management speak strikes again
Des Spence, a general practitioner in Glasgow, has revealed a memorandum that was allegedly leaked from the Department of Health. It was published in the British Medical Journal (17 June 2009, doi:10.1136/bmj.b2466, BMJ 2009;338:b2466). It seemed to me to deserve wider publicity, so with the author's permission, I reproduce it here. It may also provide a suitable introduction to a forthcoming analysis of a staff survey.
Re: The use of 'note pads' in the NHS and allied service based agencies.
Hi, all care providers, managers of care, care managers, professions allied to care providers, carers' carers, and stakeholders whose care is in our care. (And a big shout to all those service users who know me.)
We report the findings from a quality based review, with a strong strategic overview, on the use of "note pads" across all service user interfaces. This involved extensive consultation with focus groups and key stakeholders at blue sky thinking events (previously erroneously known as brain storming). This quality assured activity has precipitated some heavy idea showers, allowing opinion leaders to generate a national framework of joined-up thinking. This will take this important quality agenda forward. A 1000 page report is available to cascade to all relevant stakeholders.
The concentric themes underpinning this review are of confidentiality. Notes have been found on the visual interface devices on computers and writing workstations throughout the NHS work space. Although no actual breach of confidentiality has been reported, the independent external consultants reported that note pads "present a clear and present danger" to the NHS, and therefore there is an overarching responsibility to protect service users from scribbled messages in felt tip pen. Accordingly all types of note pads will be phased out in the near time continuum. A validated algorithm is also attached to aid this process going forward.
This modernising framework must deliver a paradigm shift in the use of note pads. Care provider leaders must employ all their influencing and leverage talents to win the hearts and minds of the early adopter. A holistic cradle to grave approach is needed, with ownership being key, and with a 360 degree rethink of the old think. All remaining note pads must be handed over in the next four week " note pad armistice" to be shredded by a facilitator (who is currently undergoing specialist training) and who will sign off and complete the audit trail.
(Please note that the NHS's email system blocks all attachments, so glossy, sustainable, wood based hard copies will be sent directly to everyone's waste recycling receptacles.)
Cite this as: BMJ 2009;338:b2466
Spence added a footnote, "Note: The BMJ's lawyers have insisted that I make it clear that this is a spoof, just in case you were wondering."
Here are a few more
There is an initiative underway to determine what we do as an organisation in the realms of drug discovery. The intention is to identify internal and appropriate external capabilities to foster a pipeline of competencies that enable some of our basic research outputs to better impact healthcare.
NICE falls for Bait and Switch by acupuncturists and chiropractors: it has let down the public and itself
First the MHRA lets down the public by allowing deceptive labelling of sugar pills (see here, and this blog). Now it is the turn of NICE to betray its own principles.
The National Institute for Health and Clinical Excellence (NICE) describes its job thus
"NICE is an independent organisation responsible for providing national guidance on promoting good health and preventing and treating ill health."
Its Guidance document on Low Back Pain will be published on Wednesday 27 May 2009, but the newspapers have already started to comment, presumably on the assumption that it will have changed little from the Draft Guidance of September 2008. These comments may have to be changed as soon as the final version becomes available.
The draft guidance, though mostly sensible, has two recommendations that I believe to be wrong and dangerous. The recommendations include (page 7) these three.
Consider offering a course of manual therapy including spinal manipulation of up to 9 sessions over up to 12 weeks.
Consider offering a course of acupuncture needling comprising up to 10 sessions over a period of up to 12 weeks.
Consider offering a structured exercise programme tailored to the individual.
All three of these options are accompanied by a footnote that reads thus.
"A choice of any of these therapies may be offered, taking into account patient preference."
On the face of it, this might seem quite reasonable. All three choices seem to be about as effective (or ineffective) as each other, so why not let patients choose between them?
Actually there are very good reasons, but NICE does not seem to have thought about them. In the past I have had a high opinion of NICE but it seems that even they are now getting bogged down in the morass of political correctness and officialdom that is the curse of the Department of Health. It is yet another example of DC's rule number one.
Never trust anyone who uses the word 'stakeholder'.
They do use it, often.
So what is so wrong?
For a start, I take it that the reference to "spinal manipulation" in the first recommendation is a rather cowardly allusion to chiropractic. Why not say so, if that's what you mean? Chiropractic is mentioned in the rest of the report but the word doesn't seem to occur in the recommendations. Is NICE perhaps nervous that it would reduce the credibility of the report if the word chiropractic were said out loud?
Well, they have a point, I suppose. It would.
That aside, here's what's wrong.
I take as my premise that the evidence says that no manipulative therapy has any great advantage over the others. They are all more or less equally effective. Perhaps I should say, more or less equally ineffective, because anyone who claims to have the answer to low back pain is clearly deluded (and I should know: nobody has fixed mine yet). So for effectiveness there are no good grounds to choose between exercise, physiotherapy, acupuncture or chiropractic. There is, though, an enormous cultural difference. Acupuncture and chiropractic are firmly in the realm of alternative medicine. They both invoke all sorts of new-age nonsense for which there isn't the slightest good evidence. That may not poison your body, but it certainly poisons your mind.
Acupuncturists talk about "Qi", "meridians", "energy flows". The fact that "sham" and "real" acupuncture consistently come out indistinguishable is surely all the evidence one needs to dismiss such nonsense. Indeed there is a small group of medical acupuncturists who do dismiss it. Most don't. As always in irrational subjects, acupuncture is riven by internecine strife between groups who differ in the extent of their mystical tendencies.
Chiropractors talk of "subluxations", an entirely imaginary phenomenon (but a cause of much unnecessary exposure to X-rays). Many talk of quasi-religious things like "innate energy". And Chiropractic is even more riven by competing factions than acupuncture. See, for example, Chiropractic wars Part 3: internecine conflict.
The bait and switch trick
This is the basic trick used by 'alternative therapists' to gain respectability.
There is a superb essay on it by the excellent Yale neurologist Steven Novella: The Bait and Switch of Unscientific Medicine. The trick is to offer some limited and reasonable treatment (like back manipulation for low back pain). This, it seems, is sufficient to satisfy NICE. But then, once you are in the showroom, you can be exposed to all sorts of other nonsense about "subluxations" or "Qi". Still worse, you will also be exposed to the claims of many chiropractors and acupuncturists to be able to cure all manner of conditions other than back pain. But don't even dare to suggest that manipulation of the spine is not a cure for colic or asthma or you may find yourself sued for defamation. The shameful legal action of the British Chiropractic Association against Simon Singh (follow it here) led to an addition to DC's Patients' Guide to Magic Medicine.
(In the face of such tragic behaviour, one has to be able to laugh).
Libel: A very expensive remedy, to be used only when you have no evidence. Appeals to alternative practitioners because truth is irrelevant.
NICE seems to have fallen for the bait and switch trick, hook, line and sinker.
The neglected consequences
Once again, we see the consequences of paying insufficient attention to the Dilemmas of Alternative Medicine.
The lying dilemma
If acupuncture is recommended we will have acupuncturists telling patients about utterly imaginary things like "Qi" and "meridians". And we will have chiropractors telling them about subluxations and innate energy. It is my opinion that these things are simply make-believe (and that is also the view of a minority of acupuncturists and chiropractors). That means that you have to decide whether the supposed benefits of the manipulation are sufficient to counterbalance the deception of patients.
Some people might think that it was worth it (though not me). What is unforgivable is not to consider even the question. The NICE guidance says not a word about this dilemma. Why not?
The training dilemma
The training dilemma is even more serious. Once some form of alternative medicine has successfully worked the Bait and Switch trick and gained a toehold in the NHS, there will be an army of box-ticking HR zombies employed to ensure that they have been properly trained in "subluxations" or "Qi". There will be quangos set up to issue National Occupational Standards in "subluxations" or "Qi". Skills for Health will issue "competences" in "subluxations" or "Qi" (actually they already do). There will be courses set up to teach about "subluxations" or "Qi", some even in 'universities' (there already are).
The respectability problem
But worst of all, it will become possible for acupuncturists and chiropractors to claim that they now have official government endorsement from a prestigious evidence-based organisation like NICE for "subluxations" or "Qi". Of course this isn't true. In fact the words "subluxations" or "Qi" are not even mentioned in the draft report. That is the root of the problem. They should have been. But omitting stuff like that is how the Bait and Switch trick works.
Alternative medicine advocates crave, above all, respectability and acceptance. It is sad that NICE seems to have given them more credibility and acceptance without having considered properly the secondary consequences of doing so.
How did this failure of NICE happen?
It seems to have been a combination of political correctness, failure to consider secondary consequences, and excessive influence of the people who stand to make money from the acceptance of alternative medicine.
Take, for example, the opinion of the British Pain Society. This organisation encompasses not just doctors. It includes "doctors, nurses, physiotherapists, scientists, psychologists, occupational therapists and other healthcare professionals actively engaged in the diagnosis and treatment of pain and in pain research for the benefit of patients". Nevertheless, their response to the draft guidelines pointed out that the manipulative therapies as a whole were over-represented.
The guidelines assess 9 large groups of interventions of which manual therapies are only one part. The full GDG members panel of 13 individuals included two proponents of spinal manipulation/mobilisation (P Dixon and S Vogel). In addition, the chair of the panel (M Underwood) is the lead author of the UKBEAM trial on which the positive recommendation for manipulation/mobilisation seems to predominately rest. Proponents of spinal manipulation/mobilisation were therefore over-represented in the generation of these guidelines, which, in turn could have generated the over-optimistic conclusion regarding this intervention.
It seems that the Pain Society were quite right.
LBC 97.3 Breakfast Show (25 May 2009) had a quick discussion on acupuncture (play mp3 file). After I had my say, the other side was put by Rosey Grandage. She has (among other jobs) a private acupuncture practice, so she is not quite as unbiassed as me. As usual, she misrepresents the evidence by failing to distinguish between blind and non-blind studies. She also misrepresented what I said by implying that I was advocating drugs. That was not my point and I did not mention drugs (they, like all treatments, have pretty limited effectiveness, and they have side effects too). She said "there is very good evidence to show they ['Qi' and 'meridians'] exist". That is simply untrue.
There can't be a better demonstration of the consequences of falling for bait and switch than the defence mounted by Rosey Grandage. NICE may not mention "Qi" and "meridians"; but the people they want to allow into the NHS have no such compunctions.
I first came across Rosey Grandage when I discovered her contribution to the Open University/BBC course K221. That has been dealt with elsewhere. A lot more information about acupuncture has appeared since then. She doesn't seem to have noticed it. Has she not seen the Nordic Cochrane Centre report? Nor read Barker Bausell, or Singh & Ernst? Has she any interest in evidence that might reduce her income? Probably not.
Where to find out more
An excellent review of chiropractic can be found at the Layscience site. It was written by the indefatigable 'Blue Wode' who has provided enormous amounts of information at the admirable ebm-first site (I am authorised to reveal that 'Blue Wode' is the author of that site). There you will also find much fascinating information about both acupuncture and about chiropractic.
I'm grateful to 'Blue Wode' for some of the references used here.
Prince of Wales Foundation for magic medicine: spin on the meaning of 'integrated'.
The Prince of Wales' Foundation for Integrated Health (FiH) is a propaganda organisation that aims to persuade people, and politicians, that the Prince's somewhat bizarre views about alternative medicine should form the basis of government health policy.
His attempts are often successful, but they are regarded by many people as being clearly unconstitutional.
The FiH's 2009 Annual Conference was held at The King's Fund, London, 13 – 14 May 2009. It was, as always, an almost totally one-sided affair devoted to misrepresentation of evidence and the promotion of magic medicine. But according to the FiH, at least, it was a great success. The opening speech by the Quacktitioner Royal can be read here. It has already been analysed by somebody who knows rather more about medicine than HRH. He concludes
"It is a shocking perversion of the real issues driven by one man; unelected, unqualified and utterly misguided".
We are promised some movie clips of the meeting. They might even make a nice UK equivalent of "Integrative baloney @ Yale".
This post is intended to provide some background information about the speakers at the symposium. But let's start with what seems to me to be the real problem. The duplicitous use of the word "integrated" to mean two quite different things.
The problem of euphemisms: spin and obfuscation
One of the problems of meetings like this is the harm done by use of euphemisms. After looking at the programme, it becomes obvious that there is a rather ingenious bit of PR trickery going on. It confuses (purposely?) the many different definitions of the word "integrative". One definition of "Integrative medicine" is this (my emphasis).
" . . . orienting the health care process to engage patients and caregivers in the full range of physical, psychological, social, preventive, and therapeutic factors known to be effective and necessary for the achievement of optimal health."
That is a thoroughly admirable aim. And that, I imagine, is the sense in which several of the speakers (Marmot, Chantler etc) used the term. Of course the definition is rather too vague to be very helpful in practice, but nobody would dream of objecting to it.
But another definition of the same term 'integrative medicine' is as a PR-friendly synonym for 'alternative medicine', and that is clearly the sense in which it is used by the Prince of Wales' Foundation for Integrated Health (FIH), as is immediately obvious from their web site.
The guide to the main therapies supports everything from homeopathy to chiropractic to naturopathy, in a totally uncritical way. Integrated service refers explicitly to integration of 'complementary' medicine, and that itself is largely a euphemism for alternative medicine. For example, the FIH's guide to homeopathy says
"What is homeopathy commonly used for?
Homeopathy is most often used to treat chronic conditions such as asthma; eczema; arthritis; fatigue disorders like ME; headache and migraine; menstrual and menopausal problems; irritable bowel syndrome; Crohn's disease; allergies; repeated ear, nose, throat and chest infections or urine infections; depression and anxiety."
But there is not a word about the evidence, and perhaps that isn't surprising because the evidence that it works in any of these conditions is essentially zero.
The FIH document Complementary Health Care: A Guide for Patients appears to have vanished from the web after its inaccuracy received a very bad press, e.g. in the Times, and also here. It is also interesting that the equally widely criticised Smallwood report (also sponsored by the Prince of Wales) seems to have vanished too.
The programme for the meeting can be seen here, for Day 1, and Day 2
Conference chair Dr Phil Hammond, GP, comedian and health service writer. Hammond asked the FIH if I could speak at the meeting to provide a bit of balance. Guess what? They didn't want balance.
Dr Michael Dixon OBE
09:30 Introduction: a new direction for The Prince's Foundation for Integrated Health and new opportunities in integrated health and care. Dr Michael Dixon, Medical Director, FIH
Michael Dixon is devoted to just about every form of alternative medicine. As well as being medical director of the Prince's Foundation he also runs the NHS Alliance. Despite its name, the NHS Alliance is nothing to do with the NHS and acts, among other things, as an advocate of alternative medicine on the NHS, about which it has published a lot.
Dr Dixon is also a GP at College Surgery, Cullompton, Devon, where his "integrated practice" includes dozens of alternative practitioners. They include not only disproven things like homeopathy and acupuncture, but also even more bizarre practitioners in 'Thought Field Therapy' and 'Frequencies of Brilliance'.
To take only one of these, 'Frequencies of Brilliance' is bizarre beyond belief. One need only quote its founder and chief salesperson.
"Frequencies of Brilliance is a unique energy healing technique that involves the activation of energetic doorways on both the front and back of the body."
"These doorways are opened through a series of light touches. This activation introduces high-level Frequencies into the emotional and physical bodies. It works within all the cells and with the entire nervous system which activates new areas of the brain."
Or here one reads
"Frequencies of Brilliance is a 4th /5th dimensional work. The process is that of activating doorways by lightly touching the body or working just above the body."
"Each doorway holds the highest aspect of the human being and is complete in itself. This means that there is a perfect potential to be accessed and activated throughout the doorways in the body."
Best of all, it can all be done at a distance (that must help sales a lot). One is reminded of the Skills for Health "competence" in distant healing (inserted on a government web site at the behest (you guessed it) of the Prince's Foundation, as related here)
"The intent of a long distance Frequencies of Brilliance (FOB) session is to enable a practitioner to facilitate a session in one geographical location while the client is in another.
A practitioner of FOB that has successfully completed a Stage 5 Frequency workshop has the ability to create and hold a stable energetic space in order to work with a person that is not physically present in the same room.
The space that is consciously created in the Frequencies of Brilliance work is known as the "Gap". It is a space of nonlinear time. It contains "no time and no space" or respectively "all time and all space". Within this "Gap" a clear transfer of the energies takes place and is transmitted to an individual at a time and location consciously intended. Since this dimensional space is in non-linear time the work can be performed and sent backward or forward in time as well as to any location.
The Frequencies of Brilliance work cuts through the limitations of our physical existence and allows us to experience ourselves in other dimensional spaces. Therefore people living in other geographic locations than a practitioner have an opportunity to receive and experience the work.
The awareness of this dimensional space is spoken about in many indigenous traditions, meditation practices, and in the world of quantum physics. It is referred to by other names such as the void, or vacuum space, etc."
This is, of course, preposterous gobbledygook. It, and other things in Dr Dixon's treatment guide, seem to be very curious things to impose on patients in the 21st century.
Latest news. The Mid-Devon Star announces yet more homeopathy in Dr Dixon's Cullompton practice. This time it comes in the form of a clinic run from the Bristol Homeopathic Hospital. I guess they must be suffering from reduced commissioning like all the other homeopathic hospitals, but Dr Dixon seems to have come to their rescue. The connection seems to be with Bristol's homeopathic consultant, Dr Elizabeth A Thompson. On 11 December 2007 I wrote to Dr Thompson, thus
In March 2006, a press release http://www.ubht.nhs.uk/press/view.asp?257 announced a randomised trial for homeopathic treatment of asthma in children.
This was reported also on the BBC http://news.bbc.co.uk/1/hi/england/bristol/4971050.stm .
I'd be very grateful if you could let me know when results from this trial will become available.
David Colquhoun
The reply, dated 11 December 2007, was unsympathetic:
I have just submitted the funders report today and we have set ourselves the deadline to publish two inter-related papers by March 1st 2007.
Can I ask why you are asking and what authority you have to gain this information. I shall expect a reply to my questions,
I answered this question politely on the same day but nevertheless my innocent enquiry drew forth a rather vitriolic complaint from Dr Thompson to the Provost of UCL (dated 14 December 2007). In this case, the Provost came up trumps. On 14 January 2008 he replied to Thompson: "I have looked at the email that you copied to me, and I must say that it seems an entirely proper and reasonable request. It is not clear to me why Professor Colquhoun should require some special authority to make such direct enquiries". Dr Thompson seems to be very sensitive. We have yet to see the results of her trial in which I'm still interested.
Not surprisingly, Dr Dixon has had some severe criticism for his views, not least from the UK's foremost expert on the evidence for efficacy, Prof Edzard Ernst. Accounts of this can be found in Pulse, and on Andrew Lewis's blog.
Dixon is now (in)famous in the USA too. The excellent Yale neurologist, Steven Novella, has written an analysis of his views on Science Based Medicine. He describes Dr. Michael Dixon as "A Pyromaniac In a Field of (Integrative) Straw Men"
Peter Hain
09:40 Politics and people: can integrated health and care take centre stage in 2009/2010? Rt Hon Peter Hain MP
It seems that Peter Hain was converted to alternative medicine when his first baby, Sam, was born with eczema. After (though possibly not because of) homeopathic treatment and a change in diet, the eczema got better. This caused Hain, while Northern Ireland Secretary, to spend £200,000 of taxpayers' money to set up a totally uninformative customer satisfaction survey, which is being touted elsewhere in this meeting as though it were evidence (see below). I have written about this episode before: see Peter Hain and Get Well UK: pseudoscience and privatisation in Northern Ireland.
I find it very sad that a hero of my youth (for his work in the anti-apartheid movement) should have sunk to promoting junk science, and even sadder that he does so at my expense.
There has been a report on Hain's contribution in Wales Online.
09:55 Why does the Health Service need a new perspective on health and healing? Sir Cyril Chantler, Chair, King's Fund, previous Dean, Guy's Hospital and Great Ormond Street
Cyril Chantler is a distinguished medical administrator. He also likes to talk and we have discussed the quackery problem several times. He kindly sent me the slides that he used. Slide 18 says that in order to do some good we "need to demonstrate that the treatment is clinically effective and cost effective for NHS use". That's impeccable, but throughout the rest of the slides he talks of integrating with "complementary" therapies, the effectiveness of which is either already disproved or simply not known.
I remain utterly baffled by the reluctance of some quite sensible people to grasp the nettle of deciding what works. Chantler fails to grasp the nettle, as does the Department of Health. Until they do so, I don't see how they can be taken seriously.
10.05 Panel discussion
10:20 Integrated Health Awards 2009 Introduction: a review of the short-listed applications
10:45 Presentations to the Award winners by the special guest speaker
11:00 Keynote address by special guest speaker
Getting integrated
Dr David Peters
12:00 Integration, long term disease and creating a sustainable NHS. Professor David Peters, Clinical Director and Professor of Integrated Healthcare, University of Westminster
I first met David Peters after Nature ran my article, Science Degrees without the Science. One of the many media follow-ups of that article was on Material World (BBC Radio 4). This excellent science programme, presented by Quentin Cooper, had a discussion between me and David Peters (listen to the mp3 file).
There was helpful intervention from Michael Marmot who had talked, in the first half of the programme, about his longitudinal population studies.
Marmot stressed the need for proper testing. In the case of homeopathy and acupuncture, that proper testing has largely been done. The tests were failed.
The University of Westminster has, of course, gained considerable notoriety as the university that runs more degree programmes in anti-scientific forms of medicine than any other. Their lecture on vibrational medicine teaches students that amethysts "emit high Yin energy so transmuting lower energies and clearing and aligning energy disturbances at all levels of being". So far their vice-chancellor, Professor Geoffrey Petts, has declined to answer enquiries about whether he thinks such gobbledygook is appropriate for a BSc degree.
But he did set up an internal enquiry into the future of their alternative activities. Sadly that enquiry seems to have come to the nonsensical conclusion that the problem can be solved by injection of good science into the courses, as reported here and in the Guardian.
It seems obvious that if you inject good science into their BSc in homeopathy the subject will simply vanish in a puff of smoke.
In 2007, the University of Westminster did respond to earlier criticism in Times Higher Education, but their response seemed to me to serve only to dig themselves deeper into a hole.
Nevertheless, Westminster has now closed down its homeopathy degree (the last in the country to go) and there is intense internal discussion going on there. I have the impression that Dr Peters' job is in danger. The revelation of more slides from their courses on homeopathy, naturopathy and Chinese herbal medicine shows that these courses are not only barmy, but also sometimes dangerous.
Professor Chris Fowler
12:10 Educating tomorrow's integrated doctors. Professor Chris Fowler, Dean for Education, Barts and The London School of Medicine and Dentistry
I first came across Dr Fowler when I noticed him being praised for his teaching of alternative medicine to students at Barts and the London Medical School on the web site of the Prince's Foundation. I wrote him a polite letter to ask if he really thought that the Prince of Wales was the right person to consult about the education of medical students. The response I got was, ahem, unsympathetic. But a little while later I noticed that two different Barts students had set up public blogs that criticised strongly the nonsense that was being inflicted on them.
At that point, I felt it was necessary to support the students who, it seemed to me, knew more about medical education than Professor Fowler. It didn't take long to uncover the nonsense that was being inflicted on the students: read about it here.
There is a follow-up to this story here. Fortunately, Barts' Director of Research, and, I'm told, the Warden of Barts, appear to agree with my view of the harm that this sort of thing can do to the reputation of Barts, so things may change soon.
Dame Donna Kinnair
12:30 Educating tomorrow's integrated nurses.
Dame Donna Kinnair, Director of Nursing, Southwark PCT
As far as I can see, Donna Kinnair has no interest in alternative medicine. She is director of nursing at Southwark primary care trust and was an adviser to Lord Laming throughout his inquiry into the death of Victoria Climbié. I suspect that her interest is in integrating child care services (they need it, judging by the recent death of 'Baby P'). Perhaps her presence shows the danger of using euphemisms like 'integrated medicine' when what you really mean is the introduction of unproven or disproved forms of medicine.
Michael Dooley
12:40 Integrating the care of women: an example of the new paradigm. Michael Dooley, Consultant Obstetrician and Gynaecologist
DC's rule 2. Never trust anyone who uses the word 'paradigm'. It is a sure-fire sign of pseudoscience. In this case, the 'new paradigm' seems to be the introduction of disproven treatment. Dooley is a gynaecologist and Medical Director of the Poundbury Clinic. His clinic offers a whole range of unproven and disproved treatments. These include acupuncture as an aid to conception in IVF. This is not recommended by the Cochrane review, and one report suggests that it hinders conception rather than helps.
12.40 Discussion
13.00 – 14.00 Lunch and Exhibition
15.30 Tea
Boo Armstrong and Get Well UK
16.00 Integrated services in action: The Northern Ireland experience: what has it shown us and what are its implications?
Boo Armstrong of Get Well UK with a team from the NI study
I expect that much will be made of this "study", which, of course, tells you absolutely nothing whatsoever about the effectiveness of the alternative treatments that were used in it. This does not appear to be the view of Boo Armstrong. On the basis of the "study", her company's web site proclaims boldly
"Complementary Medicine Works
Get Well UK ran the first government-backed complementary therapy project in the UK, from February 2007 to February 2008"
This claim appears, prima facie, to breach the Unfair Trading Regulations of May 2008. The legality of the claim is, at the moment, being judged by a Trading Standards Officer. In any case, the "study" was not backed by the government as a whole, but just by Peter Hain's office. It is not even clear that it had ethical approval.
The study consisted merely of asking people who had seen an alternative medicine practitioner whether they felt better or worse. There was no control group; no sort of comparison was made. It is surely obvious to the most naive person that a study like this cannot even tell you if the treatment has a placebo effect, never mind that it has any genuine effects of its own. To claim that it does so seems to be simply dishonest. There is no reason at all to think that the patients would not have got better anyway.
It is not only Get Well UK who misrepresent the evidence. The Prince's Foundation itself says
"Now a new, year long trial supported by the Northern Ireland health service has . . . demonstrated that integrating complementary and conventional medicine brings measurable benefits to patients' health."
That is simply not true. It is either dishonest or stupid. Don't ask me which, I have no idea.
This study is no more informative than the infamous Spence (2005) 'study' of the same type, which seems to be the only thing that homeopaths can produce to support their case.
There is an excellent analysis of the Northern Ireland 'study' by Andy Lewis, The Northern Ireland NHS Alternative Medicine 'Trial'. He explains patiently, yet again, what constitutes evidence and why studies like this are useless.
His analogy starts
" . . . the Apple Marketing Board approach the NHS and ask for £200,000 to do a study to show the truth behind the statement 'An apple a day keeps the doctor away'. The Minister, being particularly fond of apples, agrees and the study begins."
16.30 Social enterprise and whole systems integrated care. Dee Kyne, Sandwell PCT and a GP. Developing an integrated service in secondary care
Dee Kyne appears to be CEO of KeepmWell Ltd (a financial interest that is not mentioned).
Peter Mackereth, Clinical Lead, Supportive Services, Christie Hospital NHS Foundation Trust
I had some correspondence with Mackereth when the Times (7 Feb 2007) published a picture of the Prince of Wales inspecting an "anti-MRSA aromatherapy inhaler" in his department at the Christie. It turned out that the trial they were doing was not blind. No result has been announced anyway, and on enquiry, I find that the trial has not even started yet. Surprising, then, to find that the FIH is running the First Clinical Aromatherapy Conference at the Christie Hospital. What will there be to talk about?
Much of what they do at the Christie is straightforward massage, but they also promote the nonsensical principles of "reflexology" and acupuncture.
The former is untested. The latter is disproven.
Parallel Sessions
Developing a PCT funded musculoskeletal service Dr Roy Welford, Glastonbury Health Centre
Roy Welford is a Fellow of the Faculty of Homeopathy, and so promotes disproven therapies. The Glastonbury practice also advertises acupuncture (disproven), osteopathy and herbal medicine (largely untested so most of it consists of giving patients an unknown dose of an ill-defined drug, of unknown effectiveness and unknown safety).
Making the best of herbal self-prescription in integrated practice: key remedies and principles. Simon Mills, Project Lead: Integrated Self Care in Family Practice, Culm Valley Integrated Centre for Health, Devon
Simon Mills is a herbalist who now describes himself as a "phytotherapist" (it sounds posher, but the evidence, or lack of it, is not changed by the fancy name). Mills likes to say things like "there are herbs for heating and drying", "hot and cold" remedies, and to use meaningless terms like "blood cleanser", but he appears to be immune to the need for good evidence that herbs work before you give them to sick people. He says, at the end of a talk, "The hot and the cold remain the trade secret of traditional medicine". And this is the 21st Century.
Practical ways in which complementary approaches can improve the treatment of cancer. Professor Jane Plant, Author of "Your life in your hands" and Chief Scientist, British Geological Survey and Professor Karol Sikora, Medical Director, Cancer Partners UK
Jane Plant is a geologist who, through her own unfortunate encounter with breast cancer, became obsessed with the idea that a dairy-free diet cured her. Sadly there is no good evidence for that idea, according to the World Cancer Research Fund Report, led by Professor Sir Michael Marmot. No doubt her book on the subject sells well, but it could be held that it is irresponsible to hold out false hopes to desperate people. She is a supporter of the very dubious CancerActive organisation (also supported by Michael Dixon OBE –see above) as well as the notorious pill salesman, Patrick Holford (see also here).
Karol Sikora, formerly an oncologist at the Hammersmith Hospital, is now Dean of Medicine at the University of Buckingham (the UK's only private university). He is also medical director at CancerPartners UK, a private cancer company.
He recently shot to fame when he appeared in a commercial in the USA sponsored by "Conservatives for Patients' Rights", to pour scorn on the NHS, and to act as an advocate for the USA's present health system. A very curious performance. Very curious indeed.
His attitude to quackery is a mystery wrapped in an enigma. One was somewhat alarmed to see him sponsoring a course at what was, at first, called the British College of Integrated Medicine, and has now been renamed the Faculty of Integrated Medicine. That grand title makes it sound like part of a university. It isn't.
The alarm was a result of the alliance with Dr Rosy Daniel (who promotes an untested herbal concoction, Carctol, for 'healing' cancer) and Dr Mark Atkinson (a supplement salesman who has also promoted the Qlink pendant). The Qlink pendant is a simple and obvious fraud designed to exploit paranoia about WiFi killing you.
The first list of speakers on the proposed diploma in Integrated Medicine was an unholy alliance of outright quacks and commercial interests. It turned out that, although Karol Sikora is sponsoring the course, he knew nothing about the speakers. I did, and when I pointed this out to Terence Kealey, vice-chancellor of Buckingham, he immediately removed Rosy Daniel from directing the Diploma. At the moment the course is being revamped entirely by Andrew Miles. There is hope that he'll do a better job. It has not yet been validated by the University of Buckingham. Watch this space for developments.
Stop press. It is reported in the Guardian that Professor Sikora has been describing his previous job at Imperial College with less than perfect accuracy. Oh dear. More developments in the follow-up.
The role of happy chickens in healing: farms as producers of health as well as food – the Care Farm Initiative. Jonathan Dover, Project Manager, Care Farming, West Midlands.
"Care farming is a partnership between farmers, participants and health & social care providers. It combines the care of the land with the care of people, reconnecting people with nature and their communities."
Sounds lovely. I wonder how well it works?
What can the Brits learn from the Yanks when it comes to integrated health? Jack Lord, Chief Executive Humana Europe
It is worth noticing that the advisory board of Humana Europe includes Michael Dixon OBE, a well known advocate of alternative medicine (see above). Humana Europe is a private company, a wholly owned subsidiary of Humana Inc., a health benefits company with 11 million members and 22,000 employees and headquarters in Louisville, Kentucky. In 2005 it entered into a business partnership with Virgin Group. Humana was mentioned in the BBC Panorama programme "NHS for Sale". The company later asked that it be pointed out that they provide commissioning services, not clinical services [Ed. well not yet anyway].
Humana's document "Humana uses computer games to help people lead healthier lives" is decidedly bizarre. Hang on, it was only a moment ago that we were being told that computer games rewired your brain.
Day 2 Integrated health in action
09.00 Health, epidemics and the search for new solutions. Sir Michael Marmot, Professor of Epidemiology and Public Health, Royal Free and University College Medical School
It is a mystery to me that a distinguished epidemiologist should be willing to keep such dubious company. Sadly I don't know what he said, but judging by his publications and his appearance on Natural World, I can't imagine he'd have much time for homeopaths.
9.25 Improving health in the workplace. Dame Carol Black, National Director, Health and Work, Department of Health
This is not the first time that Dame Carol has been controversial.
9.45 Integrated health in focus: defeating obesity. Professor Chris Drinkwater, President, NHS Alliance.
The NHS Alliance was mentioned above. Enough said.
10.00 Integrated healthcare in focus: new approaches to managing asthma, eczema and allergy. Professor Stephen Holgate, Professor of Immunopharmacology, University of Southampton
10.15 Using the natural environment to increase activity. The Natural England Project: the results from year one. Dr William Bird and Ruth Tucker, Natural England.
10.45 Coffee
Self help in action
11.10 Your health, your way: supporting self care through care planning and the use of personal budgets. Angela Hawley, Self Care Lead, Department of Health
11.25 NHS Life Check: providing the signposts to integrated health. Roy Lambley, Project Director, NHS LifeCheck Programme
This programme was developed with the University of Westminster's "Health and Well-being Network". This group, with one exception, is separate from Westminster's extensive alternative medicine branch (it's mostly psychologists).
11.45 The agony and the ecstasy of helping patients to help themselves: tips for clinicians, practices and PCTs. Professor Ruth Chambers, FIH Foundation Fellow.
11.55 Providing self help in practice: Department of Health Integrated Self Help Information Project. Simon Mills, Project Lead: Integrated Self Care in Family Practice, Culm Valley Integrated Centre for Health, Devon and Dr Sam Everington, GP, Bromley by Bow.
The Culm Valley Integrated Centre for health is part of the College Surgery Partnership, associated with Michael Dixon OBE (yes, again!).
Simon Mills is the herbalist who says "The hot and the cold remain the trade secret of traditional medicine" .
Sam Everington, in contrast, seems to be interested in 'integration' in the real sense of the word, rather than quackery.
Integrated health in action
How to make sense of the evidence on complementary approaches: what works? What might work? What doesn't work?
Dr Hugh MacPherson, Senior Research Fellow in Health Sciences, York University and Dr Catherine Zollman, Bravewell Fellow
Hugh MacPherson's main interest is in acupuncture and he publishes in alternative medicine journals. Since the recent analysis in the BMJ from the Nordic Cochrane Centre (Madsen et al., 2009) it seems that acupuncture is finally dead. Even its placebo effect is too small to be useful. Catherine Zollman is a Bristol GP who is into homeopathy as well as acupuncture. She is closely connected with the Prince's Foundation via the Bravewell Fellowship. That fellowship is funded by the Bravewell Collaboration, which is run by Christie Mack, wife of John Mack ('Mack the Knife'), head of Morgan Stanley (amazingly, they still seem to have money). This is the group which, by sheer wealth, has persuaded so many otherwise respectable US universities to embrace every sort of quackery (see, for example, Integrative baloney @ Yale)
The funding of integrated services
14.15 How to get a PCT or practice- based commissioner to fund your integrated service. A PCT Chief Executive and a Practice-Based Commissioning lead.
14.30 How I succeeded: funding an integrated service. Dr John Ribchester, Whitstable
14.45 How we created an acupuncture service in St Albans and Harpenden PBC group. Mo Girach, Chief Executive, STAHCOM
Uhuh, acupuncture again. Have these people never read Bausell's book? Have they not read the BMJ? Acupuncture is now well-established to be based on fraudulent principles, and not even to have a worthwhile placebo effect. STAHCOM seem to be more interested in money than in what works.
Dragon's Den. Four pitchers lay out their stall for the commissioning dragons
And at this stage there is no prize for guessing that all four are devoted to trying to get funds for discredited treatments.
An acupuncture service for long-term pain. Mike Cummings Chair, Medical Acupuncture Association
Manipulation for the treatment of back pain. Simon Fielding, Founder Chairman of the General Osteopathic Council
Nigel Clarke, Senior Partner, Learned Lion Partners
Homeopathy for long term conditions
Peter Fisher, Director, Royal Homeopathic Hospital
Sadly it is not stated who the dragons are. One hopes they will be more interested in evidence than the supplicants.
Mike Cummings at least doesn't believe the nonsense about meridians and Qi. It's a pity he doesn't look at the real evidence though.
You can read something about him and his journal at BMJ Group promotes acupuncture: pure greed.
Osteopathy sounds a bit more respectable than the others, but in fact it has never shaken off its cult-like origins. Still, many osteopaths make absurd claims to cure all sorts of diseases. Offshoots of osteopathy like 'cranial osteopathy' are obvious nonsense. There is no reason to think that osteopathy is any better than any other manipulative therapy and it is clear that all manipulative therapies should be grouped into one.
Osteopathy and chiropractic provide the best ever examples of the folly of giving official government recognition to a branch of alternative medicine before the evidence is in.
Learned Lion Partners is a new one on me. It seems it is part of Madsen Gornall Ashe Chambers ('MGA Chambers'), "a grouping of top level, independent specialists who provide a broad range of management consultancy advice to the marketing community". It's a management consultancy and marketing outfit. So don't expect too much when it comes to truth and evidence. The company web site says nothing about alternative medicine, but only that Nigel Clarke
". . . has very wide experience of public affairs issues and campaigns, having worked with clients in many sectors in Europe, North America and the Far East. He has particular expertise in financial, competition and healthcare issues. "
However, all is revealed when we see that he is a Trustee of the Prince's Foundation where his entry says
"Nigel Clarke is senior partner of Learned Lion Partners. He is a director of Vidapulse Ltd, Really Easy Ltd, Newscounter Ltd and Advanced Transport Systems Ltd. He has worked on the interfaces of public policy for 25 years. He has been chair of the General Osteopathic Council since May 2001, having been a lay member since it was formed. He is now a member of the Council for Healthcare Regulatory Excellence"
The Council for Healthcare Regulatory Excellence is yet another quango that ticks boxes and fails absolutely to grasp the one important point: does it work? I came across them at the Westminster Forum, and they seemed a pretty pathetic way to spend £2m per year.
Peter Fisher is the last supplicant to the Dragons. He is clinical director of the Royal London Homeopathic Hospital (RLHH), and the Queen's homeopathic physician. It was through him that I got an active interest in quackery. The TV programme QED asked me to check the statistics in a paper of his that claimed that homeopathy was good for fibrositis (there was an elementary mistake and no evidence for an effect). Peter Fisher is also remarkable because he agreed with me that BSc degrees in homeopathy were not justified (on TV – see the movie). And he condemned homeopaths who were caught out recommending their sugar pills for malaria. To that extent Fisher represents the saner end of the homeopathic spectrum. Nevertheless he still maintains that sugar pills work and have effects of their own, and tries to justify the 'memory of water' by making analogies with a memory stick or CD. This is so obviously silly that no more comment is needed.
Given Fisher's sensible condemnation of the malaria fiasco, I was rather surprised to see that he appeared on the programme of a conference at the University of Middlesex, talking about "A Strategy To Research The Potential Of Homeopathy In Pandemic Flu". The title of the conference was Developing Research Strategies in CAM. A colleague, after seeing the programme, thought it was more like "a right tossers' ball".
Much of the homeopathy has now vanished from the RLHH as a result of greatly reduced commissioning by PCTs (read about it in Fisher's own words). And the last homeopathy degree in the UK has closed down. It seems an odd moment for the FIH to be pushing it so hard.
Stop press. It is reported in the Guardian (22 May 2009) that Professor Sikora has been describing his previous job at Imperial College with less than perfect accuracy. Oh dear, oh dear.
This fascinating fact seems to have been unearthed first by the admirable NHS Blog Doctor, in his post 'Imperial College confirm that Karol Sikora does not work for them and does not speak on their behalf'.
Tagged Academia, acupuncture, alternative medicine, badscience, Barts and the London, CAM, cancer, chiropractic, Cyril Chantler, Department of Health, Fair trading, homeopathy, HRH, Karol Sikora, Michael Dixon, Michael Marmot, nutritional therapy, Prince Charles, Prince of Wales, Prince's Foundation, quackery, Universities, Westminster university
The Prince of Wales joins the "Detox" fraud
It's only a matter of weeks since a lot of young scientists produced a rather fine pamphlet pointing out that the "detox" industry is simply fraud. They concluded
"There is little or no proof that these products work, except to part people from their cash."
With impeccable timing, Duchy Originals has just launched a "detox" product.
Duchy Originals is a company that was launched in 1990 by the Prince of Wales. Up to now, it has limited itself to selling overpriced and not particularly healthy stuff like Chocolate Butterscotch Biscuits and Sandringham Strawberry Preserve. Pretty yummy if you can afford them.
The move of HRH into herbal concoctions was first noted in the blogosphere (as usual) in December, by Quackometer. It was reported recently in the Daily Telegraph (23rd January).
Expect a media storm.
Aha, so it is a "food supplement", not a drug. Perhaps Duchy Originals have not noticed that there are now rather strict regulations about making health claims for foods?
And guess who's selling it? Yes, our old friend, for which no deception is too gross, Boots the Chemists.
That's £10 for 50 ml. Or £200 per litre.
And what's in it?
Problem 1. The word detox has no agreed meaning. It is a marketing word, designed to separate the gullible from their money.
Problem 2. There isn't the slightest reason to think that either artichoke or dandelion will help with anything at all. Neither appears at all in the Cochrane reviews. So let's check two sources that are both compiled by CAM sympathisers (just so I can't be accused of prejudice).
National Electronic Library of CAM (NELCAM) reveals nothing useful.
There is no good evidence that artichoke leaf extract works for lowering cholesterol. No other indications are mentioned.
Dandelion doesn't get any mention at all.
The US National Center for Complementary and Alternative Medicine (NCCAM) has spent almost $1 billion on testing alternative treatments. So far they have produced no good new remedies (see also Integrative baloney @ Yale). They publish a database of knowledge about herbs. This is what they say.
Dandelion. There is no compelling scientific evidence for using dandelion as a treatment for any medical condition.
Artichoke isn't even mentioned anywhere.
If "detox" is meant to be a euphemism for hangover cure, then look at the review by Pittler et al (2005), 'Interventions for preventing or treating alcohol hangover: systematic review of randomised controlled trials'.
"Conclusion No compelling evidence exists to suggest that any conventional or complementary intervention is effective for preventing or treating alcohol hangover. The most effective way to avoid the symptoms of alcohol induced hangover is to practise abstinence or moderation."
Problem 3. The claim that the product is "cleansing and purifying" is either meaningless or false. Insofar as it is meaningless, it is marketing jargon that is designed to deceive. The claim that it supports "the body's natural elimination and detoxification processes, and helps maintain healthy digestion" is baseless. It is a false health claim that, prima facie, is contrary to the Unfair Trading law and/or the European regulation on nutrition and health claims made on food, ref 1924/2006, and which should therefore result in prosecution.
Two more Duchy herbals
Duchy are selling also Echinacea and Hypericum (St John's Wort).
The evidence that Echinacea helps with colds is, to put it mildly, very marginal.
Of St John's Wort, NCCAM says
"There is some scientific evidence that St. John's wort is useful for treating mild to moderate depression. However, two large studies, one sponsored by NCCAM, showed that the herb was no more effective than placebo in treating major depression of moderate severity."
As well as having dubious effectiveness, it is well known that St John's Wort can interact with many other drugs, a hazard that is not mentioned by Duchy Originals.
These two are slightly different because they appear to have the blessing of the MHRA.
The behaviour of the MHRA in ignoring the little question of whether the treatment works or not has been condemned widely. But at least the MHRA are quite explicit. This is what the MHRA says of St. John's Wort (my emphasis).
"This registration is based exclusively upon evidence of traditional use of Hypericum perforatum L. as a herbal medicine and not upon data generated from clinical trials.. There is no requirement under the Traditional Herbal Registration scheme to prove scientifically that the product works."
But that bit about "There is no requirement under the Traditional Herbal Registration scheme to prove scientifically that the product works" does not appear in the Duchy Originals advertisement. On the contrary, this is what they say.
Yes, they claim that "the two tinctures [echinacea and hypericum] – in terms of their safety, quality and efficacy – by the UK regulatory authorities"
That is simply not true.
On the contrary, anyone without specialist knowledge would interpret bits like these as claims that there will be a health benefit.
That is a claim to benefit your health. So are these.
Michael McIntyre is certainly a high profile herbalist.
He was founder president of the European Herbal Practitioners Association and a trustee of the Prince of Wales's Foundation for Integrated Medicine. It seems that he is a great believer in the myth of "detox", judging by his appearance on the Firefly tonics web site. They will sell you
"Natural healthy energy" in a drink
That's what we wanted…
A Wake up for that drowsy afternoon… Detox for a dodgy Friday morning…
Sharpen up for that interminable meeting.
We left the herbs to our wonderful herbalists.
Their De-tox contains lemon, lime, ginger, sarsaparilla and angelica. I expect it tastes nice. All the rest is pure marketing rubbish. It does not speak very well of Michael McIntyre that he should lend his name to such promotions.
Nelsons, who actually make the stuff, are better known as a big player in the great homeopathic fraud business. They will sell you 30C pills of common salt at £4.60 for 84. Their main health-giving virtue is that they're salt free.
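For anyone wondering why "salt free" is a fair description, the arithmetic of a 30C dilution goes roughly like this (my own back-of-envelope sketch, not taken from Nelsons or from the original post):

```python
# Rough arithmetic for a "30C" homeopathic pill (my illustration only):
# 30C means diluting 1 part in 100, thirty times over, i.e. by 100**30 = 1e60.
# Avogadro's number is only about 6e23 molecules per mole, so even a full
# mole of salt at the start leaves essentially nothing in the final pill.
dilution_factor = 100 ** 30
avogadro = 6.022e23

molecules_left = avogadro / dilution_factor  # per mole of starting NaCl
print(f"Dilution factor at 30C: {dilution_factor:.0e}")
print(f"Expected molecules of salt remaining per mole of starting material: {molecules_left:.1e}")
# About 6e-37 molecules, i.e. effectively zero: the pills really are salt free.
```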
If you want to know what use they are, you are referred here, where it is claimed that it is "used to treat watery colds, headaches, anaemia, constipation, and backache". Needless to say there isn't a smidgeon of reason to believe it does the slightest good for them.
And remember what Nelson's advisor at their London pharmacy told BBC TV while recommending sugar pills to prevent malaria?
"They make it so your energy doesn't have a malaria-shaped hole in it so the malarial mosquitos won't come along and fill that in."
The Prince of Wales has some sensible things to say in other areas, such as the world's over-reliance on fossil fuels. Even his ideas about medicine are, no doubt, well-intentioned. It does seem a shame that he just can't get the hang of the need for evidence. Wishful thinking just isn't enough.
Some more interesting reading about the Prince of Wales.
Michael Baum's "An open letter to the Prince of Wales: with respect, your highness, you've got it wrong"
Gerald Weissman's essay Homeopathy: Holmes, Hogwarts, and the Prince of Wales.
Channel 4 TV documentary HRH "meddling in politics"
11 March 2009. The MHRA have censured Duchy Originals for the claims made for these products, and in May 2009 two complaints to the Advertising Standards Authority were upheld.
Tagged alternative medicine, Boots, CAM, Duchy Originals, herbal medicine, herbalism, Michael McIntyre, Prince Charles, Prince of Wales, Prince's Foundation
And then they came for me
One of the most extraordinary bits of journalism I've read for a long time appeared as an editorial in the Sri Lankan newspaper, the Sunday Leader, on Sunday January 11th 2009. It was reproduced in the Guardian on 13th January, and in The Times. It was written by Lasantha Wickrematunge, editor of the Sunday Leader, and it was the last thing he wrote. Days after writing it he was assassinated.
It is a plea for freedom of speech. In particular, for the freedom of journalists to tell the truth. It is deeply moving and it is also written in more beautiful English than many native speakers can manage. The second person to leave a comment in the Guardian said
"Extraordinary, humbling and deeply moving.
Cif Eds, please leave this at the top of the page for about a week, and then nail copies it to every available surface at Guardian HQ."
Writing blogs like this one (and a thousand others) needs some of the skills of investigative journalism. Those skills are not so different from those you need in science: curiosity, a willingness to look under stones, a preference for truth over myth, some skill with Google and a good deal of tenacity. You also need to be resilient to abuse and defamation by people who disagree with you. But you do not risk your life. It does not take much courage. That isn't true in large parts of the world.
Read it all. Here are a few quotations to persuade you it's worth the time.
"No other profession calls on its practitioners to lay down their lives for their art save the armed forces – and, in Sri
Lanka , journalism. In the course of the last few years, the independent media have increasingly come under attack. Electronic and print institutions have been burned, bombed, sealed and coerced. Countless journalists have been harassed, threatened and killed. It has been my honour to belong to all those categories, and now especially the last."
"The Sunday Leader has been a controversial newspaper because we say it like we see it: whether it be a spade, a thief or a murderer, we call it by that name. We do not hide behind euphemism. The investigative articles we print
are supported by documentary evidence thanks to the public-spiritedness of citizens who at great risk to themselves pass on this material to us. We have exposed scandal after scandal, and never once in these 15 years has anyone proved us wrong or successfully prosecuted us."
"The free media serve as a mirror in which the public can see itself sans mascara and styling gel. From us you learn the state of your nation, and especially its management by the people you elected to give your children a better future."
"It is well known that I was on two occasions brutally assaulted, while on another my house was sprayed with machine-gun fire. Despite the government's sanctimonious assurances, there was never a serious police inquiry into the perpetrators of these attacks, and the attackers were never apprehended.
In all these cases, I have reason to believe the attacks were inspired by the government. When finally I am killed, it will be the government that kills me."
"In the wake of my death I know you will make all the usual sanctimonious noises and call upon the police to hold a swift and thorough inquiry.
But like all the inquiries you have ordered in the past, nothing will come of this one, too. For truth be told, we both know who will be behind my death, but dare not call his name. Not just my life but yours too depends on it.
As for me, I have the satisfaction of knowing that I walked tall and bowed to no man. And I have not travelled this journey alone. Fellow journalists in other branches of the media walked with me: most are now dead, imprisoned without trial or exiled in far-off lands."
"People often ask me why I take such risks and tell me it is a matter of time before I am bumped off. Of course I know that: it is inevitable. But if we do not speak out now, there will be no one left to speak for those who cannot,
whether they be ethnic minorities, the disadvantaged or the persecuted. An example that has inspired me throughout my career in journalism has been that of the German theologian, Martin Niemöller. In his youth he was an antisemite and an admirer of Hitler. As nazism took hold of Germany, however, he saw nazism for what it was. It was not just the Jews Hitler sought to extirpate, it was just about anyone with an alternate point of view. Niemöller spoke out, and for his trouble was incarcerated in the Sachsenhausen and Dachau concentration camps from 1937 to 1945, and very nearly executed. While incarcerated, he wrote a poem that, from the first time I read it in my teenage years, stuck hauntingly in my mind:
First they came for the Jews
and I did not speak out because I was not a Jew.
Then they came for the Communists
and I did not speak out because I was not a Communist.
Then they came for the trade unionists
and I did not speak out because I was not a trade unionist.
Then they came for me and there was no one left to speak out for me.
If you remember nothing else, let it be this: the Leader is there for you, be you Sinhalese, Tamil, Muslim, low-caste, homosexual, dissident or disabled."
This man puts to shame those who won't speak out in the safety of the West, despite the fact that they have nothing to lose but their ministerial jobs or their knighthoods. Or running the risk of being sued by chiropractors.
How about some nominations for Western journalists who live up to these ideals? I'd start with Seymour Hersh and Paul Krugman in the USA, and our own Ben Goldacre. It's interesting though, that two of these three are not full time journalists. Blogs do rather better than most newspapers. They have become an important force for freedom of speech. That more than counterbalances the use of the web for promoting junk. It is a lot harder to keep a secret than it used to be.
There is an obituary of Lasantha Wickrematunge in the Sunday Leader, and a report from Amnesty International.
Tagged Freedom of speech, Journalism, Lasantha Wickrematunge, politics, Sri Lanka
Most alternative medicine is illegal
I'm perfectly happy to think of alternative medicine as being a voluntary, self-imposed tax on the gullible (to paraphrase Goldacre again). But only as long as its practitioners do no harm and only as long as they obey the law of the land. Only too often, though, they do neither.
When I talk about law, I don't mean lawsuits for defamation. Defamation suits are what homeopaths and chiropractors like to use to silence critics. Heaven knows, I've become accustomed to being defamed by people who are, in my view, fraudsters, but lawsuits are not the way to deal with it.
I'm talking about the Trading Standards laws. Everyone has to obey them, and in May 2008 the law changed in a way that puts the whole health fraud industry in jeopardy.
The gist of the matter is that it is now illegal to claim that a product will benefit your health if you can't produce evidence to justify the claim.
I'm not a lawyer, but with the help of two lawyers and a trading standards officer I've attempted a summary. The machinery for enforcing the law does not yet work well, but when it does, there should be some very interesting cases.
The obvious targets are homeopaths who claim to cure malaria and AIDS, and traditional Chinese Medicine people who claim to cure cancer.
But there are some less obvious targets for prosecution too. Here is a selection of possibilities to savour.
Universities such as Westminster, Central Lancashire and the rest, which promote the spreading of false health claims
Hospitals, like the Royal London Homeopathic Hospital, that treat patients with mistletoe and marigold paste. Can they produce any real evidence that they work?
Edexcel, which sets examinations in alternative medicine (and charges for them)
Ofsted and the QCA which validate these exams
Skills for Health and a whole maze of other unelected and unaccountable quangos which offer "national occupational standards" in everything from distant healing to hot stone therapy, thereby giving official sanction to all manner of treatments for which no plausible evidence can be offered.
The Prince of Wales Foundation for Integrated Health, which notoriously offers health advice for which it cannot produce good evidence
Perhaps even the Department of Health itself, which notoriously referred to "psychic surgery" as a profession, and which has consistently refused to refer dubious therapies to NICE for assessment.
The law, insofar as I've understood it, is probably such that only the first three or four of these have sufficient commercial elements for there to be any chance of a successful prosecution. That is something that will eventually have to be argued in court.
But lecanardnoir points out in his comment below that The Prince of Wales is intending to sell herbal concoctions, so perhaps he could end up in court too.
The laws
We are talking about The Consumer Protection from Unfair Trading Regulations 2008. The regulations came into force on 26 May 2008. The full regulations can be seen here, or download pdf file. They can be seen also on the UK Statute Law Database.
The Office of Fair Trading, and Department for Business, Enterprise & Regulatory Reform (BERR) published Guidance on the Consumer Protection from Unfair Trading Regulations 2008 (pdf file),
Statement of consumer protection enforcement principles (pdf file), and
The Consumer Protection from Unfair Trading Regulations: a basic guide for business (pdf file).
Has The UK Quietly Outlawed "Alternative" Medicine?
On 26 September 2008, Mondaq Business Briefing published this article by a Glasgow lawyer, Douglas McLachlan. (Oddly enough, this article was reproduced on the National Center for Homeopathy web site.)
"Proponents of the myriad of forms of alternative medicine argue that it is in some way "outside science" or that "science doesn't understand why it works". Critical thinking scientists disagree. The best available scientific data shows that alternative medicine simply doesn't work, they say: studies repeatedly show that the effect of some of these alternative medical therapies is indistinguishable from the well documented, but very strange "placebo effect" "
"Enter The Consumer Protection from Unfair Trading Regulations 2008(the "Regulations"). The Regulations came into force on 26 May 2008 to surprisingly little fanfare, despite the fact they represent the most extensive modernisation and simplification of the consumer protection framework for 20 years."
The Regulations prohibit unfair commercial practices between traders and consumers through five prohibitions:-
General Prohibition on Unfair Commercial Practices (Regulation 3)
Prohibition on Misleading Actions (Regulation 5)
Prohibition on Misleading Omissions (Regulation 6)
Prohibition on Aggressive Commercial Practices (Regulation 7)
Prohibition on 31 Specific Commercial Practices that are in all Circumstances Unfair (Schedule 1). One of the 31 commercial practices which are in all circumstances considered unfair is "falsely claiming that a product is able to cure illnesses, dysfunction or malformations". The definition of "product" in the Regulations includes services, so it does appear that all forms medical products and treatments will be covered.
Just look at that!
One of the 31 commercial practices which are in all circumstances considered unfair is "falsely claiming that a product is able to cure illnesses, dysfunction or malformations"
Section 5 is equally powerful, and also does not contain the contentious word "cure" (see note below)
Misleading actions
5.—(1) A commercial practice is a misleading action if it satisfies the conditions in either paragraph (2) or paragraph (3).
(2) A commercial practice satisfies the conditions of this paragraph—
(a) if it contains false information and is therefore untruthful in relation to any of the matters in paragraph (4) or if it or its overall presentation in any way deceives or is likely to deceive the average consumer in relation to any of the matters in that paragraph, even if the information is factually correct; and
(b) it causes or is likely to cause the average consumer to take a transactional decision he would not have taken otherwise.
These laws are very powerful in principle, but there are two complications in practice.
One complication concerns the extent to which the onus has been moved on to the seller to prove the claims are true, rather than the accuser having to prove they are false. That is a lot more favourable to the accuser than before, but it's complicated.
The other complication concerns enforcement of the new laws, and at the moment that is bad.
Who has to prove what?
That is still not entirely clear. McLachlan says
"If we accept that mainstream evidence based medicine is in some way accepted by mainstream science, and alternative medicine bears the "alternative" qualifier simply because it is not supported by mainstream science, then where does that leave a trader who seeks to refute any allegation that his claim is false?
Of course it is always open to the trader to show that his alternative therapy actually works, but the weight of scientific evidence is likely to be against him."
On the other hand, I'm advised by a Trading Standards Officer that "He doesn't have to refute anything! The prosecution have to prove the claims are false". This has been confirmed by another Trading Standards Officer who said
"It is not clear (though it seems to be) what difference is implied between "cure" and "treat", or what evidence is required to demonstrate that such a cure is false "beyond reasonable doubt" in court. The regulations do not provide that the maker of claims must show that the claims are true, or set a standard indicating how such a proof may be shown."
The main defence against prosecution seems to be the "Due diligence defence", in paragraph 17.
Due diligence defence
17. —(1) In any proceedings against a person for an offence under regulation 9, 10, 11 or 12 it is a defence for that person to prove—
(a) that the commission of the offence was due to—
(i) a mistake;
(ii) reliance on information supplied to him by another person;
(iii) the act or default of another person;
(iv) an accident; or
(v) another cause beyond his control; and
(b) that he took all reasonable precautions and exercised all due diligence to avoid the commission of such an offence by himself or any person under his control.
If "taking all reasonable precautions" includes being aware of the lack of any good evidence that what you are selling is effective, then this defence should not be much use for most quacks.
Douglas McLachlan has clarified, below, this difficult question.
False claims for health benefits of foods
A separate bit of legislation, European regulation on nutrition and health claims made on food, ref 1924/2006, in Article 6, seems clearer in specifying that the seller has to prove any claims they make.
Scientific substantiation for claims
1. Nutrition and health claims shall be based on and substantiated by generally accepted scientific evidence.
2. A food business operator making a nutrition or health claim shall justify the use of the claim.
3. The competent authorities of the Member States may request a food business operator or a person placing a product on the market to produce all relevant elements and data establishing compliance with this Regulation.
That clearly places the onus on the seller to provide evidence for claims that are made, rather than the complainant having to 'prove' that the claims are false.
On the problem of "health foods" the two bits of legislation seem to overlap. Both have been discussed in "Trading regulations and health foods", an editorial in the BMJ by M. E. J. Lean (Professor of Human Nutrition in Glasgow).
"It is already illegal under food labelling regulations (1996) to claim that food products can treat or prevent disease. However, huge numbers of such claims are still made, particularly for obesity "
"The new regulations provide good legislation to protect vulnerable consumers from misleading "health food" claims. They now need to be enforced proactively to help direct doctors and consumers towards safe, cost effective, and evidence based management of diseases."
In fact the European Food Safety Authority (EFSA) seems to be doing a rather good job at imposing the rules. This, predictably, provoked howls of anguish from the food industry. There is a synopsis here.
"Of eight assessed claims, EFSA's Panel on Dietetic Products, Nutrition and Allergies (NDA) rejected seven for failing to demonstrate causality between consumption of specific nutrients or foods and intended health benefits. EFSA has subsequently issued opinions on about 30 claims with seven drawing positive opinions."
". . . EFSA in disgust threw out 120 dossiers supposedly in support of nutrients seeking addition to the FSD's positive list.
If EFSA was bewildered by the lack of data in the dossiers, it needn't have been as industry freely admitted it had in many cases submitted such hollow documents to temporarily keep nutrients on-market."
Or, on another industry site, "EFSA's harsh health claim regime"
"By setting an unworkably high standard for claims substantiation, EFSA is threatening R&D not to mention health claims that have long been officially approved in many jurisdictions."
Here, of course,"unworkably high standard" just means real genuine evidence. How dare they ask for that!
Enforcement of the law
Article 19 of the Unfair Trading regulations says
19. —(1) It shall be the duty of every enforcement authority to enforce these Regulations.
(2) Where the enforcement authority is a local weights and measures authority the duty referred to in paragraph (1) shall apply to the enforcement of these Regulations within the authority's area.
Nevertheless, enforcement is undoubtedly a weak point at the moment. The UK is obliged to enforce these laws, but at the moment it is not doing so effectively.
A letter in the BMJ from Rose & Garrow describes two complaints under the legislation in which it appears that a Trading Standards office failed to enforce the law. They comment
" . . . member states are obliged not only to enact it as national legislation but to enforce it. The evidence that the government has provided adequate resources for enforcement, in the form of staff and their proper training, is not convincing. The media, and especially the internet, are replete with false claims about health care, and sick people need protection. All EU citizens have the right to complain to the EU Commission if their government fails to provide that protection."
This is not a good start. A lawyer has pointed out to me
"that it can sometimes be very difficult to get Trading Standards or the OFT to take an interest in something that they don't fully understand. I think that if it doesn't immediately leap out at them as being false (e.g "these pills cure all forms of cancer") then it's going to be extremely difficult. To be fair, neither Trading Standards nor the OFT were ever intended to be medical regulators and they have limited resources available to them. The new Regulations are a useful new weapon in the fight against quackery, but they are no substitute for proper regulation."
Trading Standards originated in Weights and Measures. It was their job to check that your pint of beer was really a pint. Now they are being expected to judge medical controversies. Either they will need more people and more training, or responsibility for enforcement of the law should be transferred to some more appropriate agency (though one hesitates to suggest the MHRA after their recent pathetic performance in this area).
Who can be prosecuted?
Any "trader", a person or a company. There is no need to have actually bought anything, and no need to have suffered actual harm. In fact there is no need for there to be a complainant at all. Trading standards officers can act on their own. But there must be a commercial element. It's unlikely that simply preaching nonsense would be sufficient to get you prosecuted, so the Prince of Wales is, sadly, probably safe.
Universities who teach that "Amethysts emit high Yin energy" make an interesting case. They charge fees and in return they are "falsely claiming that a product is able to cure illnesses".
In my view they are behaving illegally, but we shan't know until a university is taken to court. Watch this space.
The fact remains that the UK is obliged to enforce the law and presumably it will do so eventually. When it does, alternative medicine will have to change very radically. If it were prevented from making false claims, there would be very little of it left apart from tea and sympathy.
New Zealand must have similar laws.
Just as I was about to post this I found that in New Zealand a
"couple who sold homeopathic remedies claiming to cure bird flu, herpes and Sars (severe acute respiratory syndrome) have been convicted of breaching the Fair Trading Act."
They were ordered to pay fines and court costs totalling $23,400.
A clarification from Douglas McLachlan
On the difficult question of who must prove what, Douglas McLachlan, who wrote Has The UK Quietly Outlawed "Alternative" Medicine?, has kindly sent the following clarification.
"I would agree that it is still for the prosecution to prove that the trader committed the offence beyond a reasonable doubt, and that burden of proof is always on the prosecution at the outset, but I think if a trader makes a claim regarding his product and best scientific evidence available indicates that that claim is false, then it will be on the trader to substantiate the claim in order to defend himself. How will the trader do so? Perhaps the trader might call witness after witness in court to provide anecdotal evidence of their experiences, or "experts" that support their claim – in which case it will be for the prosecution to explain the scientific method to the Judge and to convince the Judge that its Study evidence is to be preferred.
Unfortunately, once human personalities get involved things could get clouded – I could imagine a small time seller of snake oil having serious difficulty, but a well funded homeopathy company engaging smart lawyers to quote flawed studies and lead anecdotal evidence to muddy the waters just enough for a Judge to give the trader the benefit of the doubt. That seems to be what happens in the wider public debate, so it's easy to envisage it happening in a courtroom."
The "average consumer".
The regulations state
(3) A commercial practice is unfair if—
(a) it contravenes the requirements of professional diligence; and
(b) it materially distorts or is likely to materially distort the economic behaviour of the average consumer with regard to the product.
It seems, therefore, that what matters is whether the "average consumer" would infer from what is said that a claim was being made to cure a disease. The legal view cited by Mojo (comment #2, below) is that expressions such as "can be used to treat" or "can help with" would be considered by the average consumer as implying successful treatment or cure.
The drugstore detox delusion. A nice analysis of "detox" at Science-Based Pharmacy.
Tagged Academia, alternative medicine, Anti-science, antiscience, CAM, Central Lancashire, chiropractic, Fair trading, herbalism, homeopathy, Law, New Zealand, nutribollocks, nutrition, nutritional therapy, Prince's Foundation, Trading Standards, Unfair Trading, Universities, Westminster university
Medicines that contain no medicine and other follies
The National Health Executive ("the Independent Journal for Senior Health Service Managers") asked for an article about quackery. This is a version of that article with live links.
Download the pdf version.
There is a Russian translation here (obviously I can't vouch for its accuracy).
On 23 May 2006 a letter was sent to the chief executives of 467 NHS Trusts. It was reported as a front page story in the Times, and it was the lead item on the Today programme. The letter urged the government not to spend NHS funds on "unproven and disproved treatments". Who can imagine anything more simple and self-evident than that? But in politics nothing is simple.
It turns out that quite a lot of patients are deeply attached to unproven and disproved treatments. They clamour for them and, since "patient choice" is high on the agenda at the moment, they quite often get them. Unproven and disproved treatments cost quite a lot of money that the NHS should be spending on things that work.
In January 2007, the Association of Directors of Public Health issued its own list of unproven and disproved treatments. It included, among others, tonsillectomy and adenoidectomy, carpal tunnel surgery and homeopathy. They all matter, but here I'll concentrate on alternative treatments, of which homeopathy is one of the most widespread.
It should be simple. We have a good mechanism for deciding which treatments are cost-effective, in the form of the National Institute for Clinical Excellence (NICE). If homeopathy and herbalism are not good ways to spend NHS money, why has NICE not said so? The answer to that is simple. NICE has not been asked. It can consider only those questions that are referred to it by the Department of Health (DoH).
The government often says that it takes the best scientific advice, but the DoH seems to have something of a blank spot when it comes to alternative medicine. Nobody knows why. Perhaps it is the dire lack of anyone with a scientific education in government. Or could there be something in the rumour that the DoH lives in terror of being at the receiving end of a rant from the general direction of Clarence House if it doesn't behave? Whatever the reason, the matter has still not been referred to NICE, despite many requests to do so.
A judgement from NICE would be useful, but it is hardly essential. It isn't hard to understand. At its simplest the whole problem can be summed up very briefly.
Homeopathy: giving patients medicines that contain no medicine whatsoever.
Herbal medicine: giving patients an unknown dose of a medicine, of unknown effectiveness and unknown safety.
Acupuncture: a rather theatrical placebo, with no real therapeutic benefit in most if not all cases.
Chiropractic: an invention of a 19th century salesman, based on nonsensical principles, and shown to be no more effective than other manipulative therapies, but less safe.
Reflexology: plain old foot massage, overlaid with utter nonsense about non-existent connections between your feet and your thyroid gland.
Nutritional therapy: self-styled 'nutritionists' making unjustified claims about diet to sell unnecessary supplements.
Of these, 'nutritional therapy', or 'nutritional medicine', is a relative newcomer. At their worst, they claim that Vitamin C can cure AIDS, and have been responsible for many deaths in Africa. There isn't the slightest need for them since the nutrition area is already covered by registered dietitians who have far better training.
There have been several good honest summaries of the evidence that underlies these interpretations, written in a style quite understandable by humanities graduates. Try, for example, Trick or Treatment (Singh & Ernst, Bantam Press 2008): a copy should be presented to every person in the DoH and every NHS manager. In some areas the evidence is now quite good. Homeopathy, when tested properly, comes out no different from placebo. That is hardly surprising because the 'treatment' pill contains no medicine so it is the same as the placebo pill.
Acupuncture has also been tested well in the last 10 years. A lot of ingenuity has been put into designing sham acupuncture to use as a control. There is still a bit of doubt in a few areas, but overwhelmingly the results show that real acupuncture is not distinguishable from sham. Acupuncture, it seems, is nothing more than a particularly theatrical placebo. All the stuff about meridians and "Qi" is so much mumbo-jumbo. In contrast, herbal medicines have hardly been tested at all.
It is quite easy to get an impression that some of these fringe forms of medicine work better than they do. They form efficient lobby groups and they have friends in high places. They long for respectability and they've had a surprising amount of success in getting recognised by the NHS. Some (like chiropractic) have even got official government recognition.
One can argue about whether it was money well-spent, but in the USA almost a billion dollars has been spent on research on alternative medicine by their National Center for Complementary and Alternative Medicine (NCCAM), which was set up as a result of political pressure from the (huge) alternative medicine industry. That has produced not a single effective alternative treatment, but at least it has shown clearly that most don't work.
The letter of 23 May 2006 proved to be remarkably effective. Tunbridge Wells Homeopathic Hospital has closed and commissioning of homeopathic services has fallen drastically. That has released money for treatments that work, and providing treatments that work is the job of the NHS.
It is sometimes asked, what is wrong with placebo effects as long as the patient feels better? First it must be said that much of the apparent benefit of placebos like homeopathy isn't a placebo effect, but merely spontaneous recovery. Echinacea cures your cold in only seven days when otherwise it would have taken a week. But when there is a genuine psychosomatic placebo effect, it can be a real benefit. As always, though, one must consider the cost as well as the benefit.
And there are a lot of hidden costs in this approach. One cost is the need to lie to patients to achieve a good placebo effect. That contradicts the trend towards more openness in medicine. And there is a major cost to the taxpayer in the training of people. If the NHS employs homeopaths or spiritual healers because they are nice people who can elicit a good placebo effect, the Human Resources department will insist that they are fully-qualified in myths: "Full National Federation of Spiritual Healer certificate, or a full Reiki Master qualification, and two years post certificate experience" (I quote). That is one reason why you can find in UK universities undergraduates being taught, at taxpayers' expense, that "amethysts emit high Yin energy".
There is a solution to all of this. There is room in the NHS for nice, caring people, to hold the hands of sick patients. They might be called 'healthcare workers in supportive and palliative care'. They could do a good job, without any of the nonsense of homeopathy or spiritualism. Likewise, manipulative therapists could get together to dispense with the nonsense elements in chiropractic, and to make a real attempt to find out what works best.
All that stands in the way of this common sense approach is the rigidity of Human Resources departments which demand formal qualifications in black magic before you can cheer up sick patients. The over-formalisation of nonsense has done great harm. You have only to note that Skills for Health has listed 'competences' in Distant Healing (in the presence of the client or in the absence of the client).
When I asked Skills for Health if they would be defining a 'competence' in talking to trees, I was told, in all seriousness, "You'd have to talk to LANTRA, the land-based organisation for that".
I'm not joking. I wish I were.
Tagged CAM, Department of Health, homeopathy, National Health Executive, National Health Service, NHS, Rachel Roberts, Society of Homeopaths
Teaching bad science to children: OfQual and Edexcel are to blame
It's hard enough to communicate basic ideas about how to assess evidence to adults without having the effort hindered by schools.
The teaching of quackery to 16 year-olds has been approved by a maze of quangos, none of which will take responsibility, or justify their actions. So far I've located no fewer than eight of them.
[For non-UK readers, quango = Quasi-Autonomous Non-Governmental Organisation].
A lot of odd qualifications are accredited by OfQual (see here). Consider, for example, Edexcel Level 3 BTEC Nationals in Health and Social Care (these exams are described here). Download the specifications here and check page 309.
Unit 23: Complementary Therapies for Health and Social Care
NQF Level 3: BTEC National
Guided learning hours: 60
Unit abstract
"In order to be able to take a holistic view towards medicine and health care, health and social care professionals need to understand the potential range of complementary therapies available and how they may be used in the support of conventional medicine."
Well, Goldacre has always said that homeopathy makes the perfect vehicle for teaching how easy it is to be deceived by bad science, so what's wrong? But wait
"Learners will consider the benefits of complementary therapies to health and wellbeing, as well as identifying any contraindications and health and safety issues in relation to their use."
Then later
"The holistic approach to illnesses such as cancer could be used as a focus here. For example, there could be some tutor input to introduce ideas about the role of complementary therapies in the treatment and management of cancer, this being followed up by individual or small group research by learners using both the internet and the services available locally/regionally. If available, a local homeopathic hospital, for example, would be an interesting place to visit."
It's true that to get a distinction, you have to "evaluate the evidence relating to the use of complementary therapies in contemporary society", but it isn't at all clear that this refers to evidence about whether the treatment works.
The really revealing bit comes when you get to the
"Indicative reading for learners
There are many resources available to support this unit.
www.acupuncture.org.uk British Acupuncture Council
www.bant.org.uk British Association for Nutritional Therapy
www.exeter.ac.uk/sshs/compmed Exeter University's academic department of Complementary medicine
www.gcc-uk.org General Chiropractic Council
www.nimh.org.uk National Institute of Medical Herbalists
www.nursingtimes.net The Nursing Times
www.osteopathy.org.uk General Osteopathic Council
www.the-cma.org.uk The Complementary Medical Association"
This list is truly astonishing. Almost every one of them can be relied on to produce self-serving inaccurate information about the form of "therapy" it exists to promote. The one obvious exception is the reference to Exeter University's academic department of complementary medicine (and the link to that one is wrong). The Nursing Times should be an exception too, but their articles about CAM are just about always written by people who are committed to it.
It is no consolation that the 2005 version was even worse. In its classification of 'therapies' it said "Pharmaceutically mediated: eg herbalism, homeopathy". Grotesque! And this is the examining body!
This particular educational disaster came to my attention when I had a letter from a teacher. She had been asked to teach this unit, and wanted to know if I could provide any resources for it. She said that Edexcel hadn't done so. She asked "Do you know of any universities that teach CT's [sic] so I could contact them about useful teaching resources?" She seemed to think that reliable information about homeopathy could be found from a 'university' homeopathy teacher. Not a good sign. It soon emerged why.
She said:
"My students are studying BTEC National Health Studies and the link is Edexcel BTEC National Complimentary [sic] studies."
"I am a psychotherapist with an MA in Education and Psychology. I am also trained in massage and shiatsu and have plenty of personal experience of alternative therapy"
Shiatsu, eh? It seems the teacher is already committed to placebo medicine. Nevertheless I spent some time looking for some better teaching material for 16-year-old children. There is good stuff at Planet Science, and in some of the pamphlets from Sense about Science, not least their latest, I've got nothing to lose by trying it – A guide to weighing up claims about cures and treatments. I sent all this stuff to her, and prefaced the material by saying
"First of all, I should put my cards on the table and say that I am quite appalled by the specification of Unit 23. In particular, it has almost no emphasis at all on the one thing that you want to know about any therapy, namely does it work? The reference list for reading consists almost entirely of organisations that are trying to sell you various sorts of quackery, There is no hint of balance; furthermore it is all quite incompatible with unit 22, which IS concerned with evidence."
At this point the teacher came clean too. As always, anyone who disagrees with the assessment (if any) of the evidence by a true believer is unmeasured and inflammatory.
"I have found your responses very unmeasured and inflammatory and I am sorry to say that this prejudicial attitude has meant that I have not found your comments useful."
shortly followed by
"I am not coming from a scientific background, neither is the course claiming to be scientific."
That will teach me to spend a couple of hours trying to help a teacher.
What does Edexcel say?
I wrote to Edexcel's science subject advisors with some questions about what was being taught. The response that I got was not from the science subject advisors but from the Head of Customer support, presumably a PR person.
From: (Bola Arabome) 12/11/2008 04.31 PM
Dear Professor Colquhoun
Thank you for email communication concerning the complementary therapies unit which is available in our BTEC National in Health and BTEC National in Health and Social Care qualifications. I have replied on behalf of Stephen Nugus, our science subject advisor, because your questions do not refer to a science qualification. I would like to answer your questions as directly as possible and then provide some background information relating to the qualifications.
The units and whole qualifications for all awarding bodies are accredited by the regulator, the Qualifications and Curriculum Authority. The resource reading list is also produced by us to help teachers and learners. The qualification as a whole is related to the National Occupational Standards for the vocational sectors of Health and Health and social care with consultation taken from the relevant sector skills councils . As you will be aware many of these complementary therapies are available in care centres and health centres under the NHS and in the private sector. The aim of BTEC qualifications is to prepare people for work in these particular sectors. Clearly a critical awareness is encouraged with reference to health and safety and regulation. There are other units, in some cases compulsory, within the qualification with a scientific approach.
' ' ' ' '
Head of Customer Support
Aha, so it seems that teaching people to treat sick patients is "not a science qualification". Just a business qualification perhaps? I haven't yet managed to reach the people who make these decisions, so I persisted with the PR man. Here is part of the next letter (Edexcel's reply in italic).
I find it quite fascinating that Edexcel regards the treatment of sick patients as not being part of science ("do not refer to a science qualification").
Does that mean Edexcel regard the "Health" part of "Health and Social Care" as being nothing to do with science, and that it therefore doesn't matter if Health Care is unscientific, or even actively anti-scientific?
I am sorry if my answer lacked clarity. My comment, that I had taken your enquiry on behalf of our Science Advisor because this was not a science qualification, was intended to explain why I was replying. It was not intended as a comment on the relationship between Health and Social Care and science. At Edexcel we use bureaucratic categories where we align our management of qualifications with officially recognised occupational sectors. Often we rely on sector bodies such as Sector Skills Councils to endorse or even approve the qualifications we offer. Those involved in production of our Science qualifications and our Health and Social care qualifications are, as far as I can ascertain, neither anti-scientific nor non-scientific in their approach.
(4) You say "The qualification as a whole is related to the National Occupational Standards for the vocational sectors of Health and Health and social care with consultation taken from the relevant sector skills councils". Are you aware that the Skills for Health specifications for Alternative medicine were written essentially by the Prince of Wales Foundation?
When I asked them if they would be writing a competence in talking to trees, they took the question totally seriously!! (You can see the transcript of the conversation at http://dcscience.net/?p=215 ).
The qualification was approved by both 'Skills for Health' and 'Skills for Care and Development' prior to being accredited by QCA. It uses the NOS in Health and Social Care as the basis for many of the mandatory units. The 'Complementary Therapies' NOS were not used. This was not a requirement of a 'Health and Social Care' qualification.
"Are the NOS in Health and Social Care that you mention the ones listed here? http://www.ukstandards.org/Find_Occupational_Standards.aspx?NosFindID=1&ClassificationItemId=174 If so, I can see nothing there about 'complementary therapies'. if I have missed it, I'd be very grateful if you could let me know where it is. If it is not there, I remain very puzzled about the provenance of Unit 23, since you say it is not based on Skills for Health."
Now we are immediately at sea, struggling under a tidal wave of acronyms for endless overlapping quangos. In this one short paragraph we have no fewer than four of them: 'Skills for Health', 'Skills for Care and Development', the Qualifications and Curriculum Authority (QCA) and NOS.
It seems that the specification of unit 23 was written by Edexcel, but Harris (25 Nov) declines to name those responsible:
"When I refer to our "Health and Social care team" I mean the mix of Edexcel Staff and the associates we employ on a contract basis as writers, examiners and external verifiers. The writers are generally recruited from those who are involved in teaching and assessment the subjects in schools and colleges. The editorial responsibility lies with the Edexcel Staff. I do not have access to the names of the writers and in any case would not be able to pass on this information. Specifications indicate the managers responsible for authorising publication"
"Edexcel takes full responsibility for its ethical position on this and other issues. However we can not accept responsibility for the opinions expressed in third party materials. There is a disclaimer to this effect at the beginning of the specification. "
" You have the correct link to the Health NOS . These are the standards, which where appropriate, influence our qualifications. However in the case of Unit 23 I understand that there is no link with the Health NOS. I don't know if the NOS cover the unit 23 content."
So, contrary to what I was told at first, neither Skills for Health nor NOS were involved. Or were they (see below)?
So who does take responsibility? Aha, that is secret. And the approval by the QCA is also secret.
"I cannot provide you with copies of any correspondence between Skills for Health and Edexcel. We regard this as confidential. "
What does the QCA say?
The strapline of the QCA is
"We are committed to building a world-class education and training framework. We develop and modernise the curriculum, assessments, examinations and qualifications."
Referring school children to the Society of Homeopaths for advice seems to be world-class bollocks rather than world-class education.
When this matter was brought to light by Graeme Paton in the Daily Telegraph, he quoted Kathleen Tattersall, CEO of the QCA. She said
"The design of these diplomas has met Ofqual's high standards. We will monitor them closely as they are delivered to make sure that learners get a fair deal and that standards are set appropriately."
Just the usual vacuous bureaucratic defensive sound-bite there. So I wrote to Kathleen Tattersall myself with some specific questions. The letter went on 2nd September 2008. Up to today, 26 November, I have had only letters saying
"Thank you for your email of 12 November addressed to Kathleen Tattersall, a response is being prepared which will be forwarded to you shortly."
"Thank you for your email of 25th November addressed to Kathleen Tattersall. A more detailed response is being prepared which will be sent to you shortly."
Here are some of the questions that I asked.
I wrote to Edexcel's subject advisors about unit 23 and I was told "your questions do not refer to a science qualification". This seems to mean that if it comes under the name "Health Care" then the care of sick patients is treated as though it were nothing to do with science. That seems to me to be both wrong and dangerous, and I should like to hear your view about that question.
Clearly the fundamental problem here is that the BTEC is intended as a vocational training for careers in alternative medicine. As a body concerned with education, surely you cannot ignore the view of 99% of scientists and doctors that almost all alternative medicine is fraud. That doesn't mean that you can't make a living from it, but it surely does create a dilemma for an educational organisation. What is your view of that dilemma?
Eventually, on 27th November, I get a reply (of sorts). It came not from Kathleen Tattersall of the QCA but from yet another regulatory body, OfQual, the Office of the Qualifications and Examinations Regulator. You'd think that they'd know the answers, but if they do they aren't telling. [Download the whole letter.] It is very short. The "more detailed response" says nothing.
Ofqual does not take a view on the detailed content of vocational qualifications as that responsibility sits with the relevant Sector Skills Council which represents employers and others involved in the sector. Ofqual accredits the specifications, submitted by sector-skilled professionals, after ensuring they meet National Occupational Standards. Ofqual relies on the professional judgement of these sector-skilled professionals to include relevant subjects and develop and enhance the occupational standards in their profession.
The accreditation of this BTEC qualification was supported by both Skills for Health, and Skills for Care and Development, organisations which represent the emerging Sector Qualifications Strategies and comply with the relevant National Occupational Standards
Isabel Nisbet
Acting Chief Executive
So no further forward. Every time I ask a question, the buck gets passed to another quango (or two, or three). This letter, in any case, seems to contradict what Edexcel said about the involvement of Skills for Health (that's the talking-to-trees outfit).
A nightmare maze of quangos
You may well be wondering what the relationship is between Ofqual and the QCA. There is an 'explanation' here.
Ofqual will take over the regulatory responsibilities of the Qualifications and Curriculum Authority (QCA), with stronger powers in relation to safeguarding the standards of qualifications and assessment and an explicit remit as a market regulator. The QCA will evolve into the Qualifications and Curriculum Development Agency (QCDA): supporting Ministers with advice and undertaking certain design and delivery support functions in relation to the curriculum, qualifications, learning and development in the Early Years Foundation Stage, and National Curriculum and Early Years Foundation Stage assessments.
Notice that the QCA won't be abolished. There will be yet another quango.
The result of all this regulatory bureaucracy seems to be worse regulation. Exactly the same thing happens with accreditation of dodgy degrees in universities.
At one time, a proposal for something like Unit 23 would have been shown to any competent science teacher, who would have said "you must be joking" and binned it. Now a few hundred bureaucrats tick their boxes and rubbish gets approved.
There seems to be nobody in any of these quangos with the education to realise that if you want to know the truth about homeopathy, the last person you ask is the Society of Homeopaths or the Prince of Wales.
So the mystery remains. I can't find out who is responsible for the provenance of the appallingly anti-science Unit 23, and I can't find out how it got approved. Neither can I get a straight answer to the obvious question about whether it is OK to encourage vocational qualifications for jobs that are bordering on being fraudulent.
All I can get is platitudes and bland assurances. Everything that might be informative is clouded in secrecy.
The Freedom of Information requests are in. Watch this space. But don't hold your breath.
Here are some attempts to break through the wall of silence.
Edexcel. I sent them this request.
I should like to see please all documents from Edexcel and OfQual or QCA (and communications between them) that concern the formulation and approval of Unit 23 (Complementary Therapies) in the Level 3 BTEC (page 309 in attached document). In view of the contentious nature of the subject matter, I believe that it is in the public interest that this information be provided.
The answer was quite fast, and quite unequivocal: buzz off.
Dear Mr Colquhoun,
Thank you of your e-mail of today's date. I note your request for information pursuant to The Freedom of Information Act. As you may know this Act only applies to public bodies and not to the private sector. Edexcel Limited is privately owned and therefore not subject to this Act. Edexcel is therefore not obliged to provide information to you and is not prepared to give you the information you seek.
Please do not hesitate to contact me again if you have any further queries.
Director of Legal Services
Pearson Assessments & Testing
One90 High Holborn, London, WC1V 7BH
T: +44 (0)20 7190 5157 / F: +44 (0)207 190 5478
Email: kate.gregory@pearson.com
This lack of public accountability just compounds their appalling inability to distinguish education from miseducation.
International Therapy Examination Council (ITEC)
Mojo's comment, below, draws attention to the Foundation degree in Complementary Therapies offered by Cornwall College, Camborne, Cornwall (as well as to the fact that the Royal National Lifeboat Institution has been wasting money on 'research' on homeopathy – write to them).
At least the courses are held on the Camborne campus of Cornwall College, not on the Duchy campus (do we detect the hand of the Quacktitioner Royal in all this nonsense?).
Cornwall College descends to a new level of barminess in its course Crystal Healing VTCT Level 3
"This course is designed to enhance the skills of the Holistic Therapist. Crystals may be used on their own in conjunction with other therapies such as Indian Head Massage, Aromatherapy and Reflexology. Due to the nature of the demands of the holistic programme this course is only suitable for students over the age of 18."
"What will I be doing on the course?
Students will study the art of Crystal healing which is an energy based treatment where crystals and gemstones are used to channel and focus various energy frequencies."
VTCT stands for the Vocational Training Charitable Trust.
It is yet another organisation that runs vocational exams, and it is responsible for this particular horror.
The crystals are here. I quote.
the use of interpersonal skills with client
how to complement other therapies with crystals
the types and effects of different crystals
uses of crystals including cleansing, energising, configurations
concepts of auras and chakras
This is, of course, pure meaningless nonsense. Utter bollocks being offered as further education.
Cornwall College has many courses run by ITEC.
The College says
"You will become a professional practitioner with the International Therapy Examination Council (ITEC), study a number of essential modules to give a vocational direction to your study that include: Homeopathy and its application,"
Who on earth, I hear you cry, are ITEC? That brings us to the seventh organisation in the maze of quangos and private companies involved in the miseducation of young people about science and medicine. It appears, like Edexcel, to be a private company though its web site is very coy about that.
After the foundation degree you can go on to "a brand new innovative BSc in Complementary Health Studies (from Sept 2009)"
The ITEC web site says
ITEC qualifications are accredited by the Office of the Qualifications and Examination Regulator (OFQUAL)
ITEC qualifications are funded in the UK on behalf of the Department for Innovation, Universities and Skills (DIUS)
ITEC qualifications have been mapped to the National Occupational Standards, where they exist
Oddly enough, there is no mention of accreditation by a University (not that that is worth much). So a few more Freedom of Information requests are going off, in an attempt to find out why our kids are being miseducated about science and medicine.
Meanwhile you can judge the effect of all that education in physiology by one of the sample questions for ITEC Unit 4, reflexology.
The pancreas reflex:
A Extends across both feet
B Is on the right foot only
C Is on the left foot only
D Is between the toes on both feet
Uhuh, they seem to have forgotten the option 'none of the above'.
Or how about a sample question from ITEC Unit 47 – Stone Therapy Massage
Which organ of the body is associated with the element fire?
B Liver
C Spleen
D Pancreas
Or perhaps this?
Which incantation makes hot stones work best?
A Incarcerous
B Avada Kedavra,
C Dissendium
D Expelliarmus.
(OK I made the last one up, with help from Harry Potter, but it makes just about as much sense as the real ones).
And guess what? You can't use the Freedom of Information Act to find out how this preposterous rubbish got into the educational system because "ITEC is a private organisation therefore does not come under this legislation". The ability to conduct business in secret is a side effect of the privatisation of public education, and is another reason why it's a bad idea.
Ofsted has inspected Cornwall College. They say "We inspect and regulate to achieve excellence in the care of children and young people, and in education and skills for learners of all ages." I can find no mention of this nonsense in their report, so I've asked them.
Ofsted has admitted a spectacular failure in its inspection of child care in the London Borough of Haringey. Polly Curtis wrote in the Guardian (6 Dec 2008) "We failed over Haringey – Ofsted head". It was the front page story. But of course Ofsted don't take the blame; they say they were supplied with false information.
That is precisely what happens whenever a committee or quango endorses rubbish. They look only at the documents sent to them and they don't investigate, don't engage their brains.
In the case of these courses in utter preposterous rubbish, it seems rather likely that the ultimate source of the misinformation is the Prince's Foundation for Integrated Health. The views of the Prince of Wales get passed on to the ludicrous Skills for Health and used as a criterion by all the other organisations, without a moment of critical appraisal intervening at any point.
2 December 2008. A link from James Randi has sent the hit rate for this post soaring. Someone there left a rather nice comment.
"A quango seems to be a kind of job creation for the otherwise unemployable 'educated '( degree in alternative navel contemplation) middle classes who can't be expected to do anything useful like cleaning latines ( the only other thing they seem qualified for ). I really hate to think of my taxes paying for this codswollop."
Another worthless validation: the University of Wales and nutritional therapy
It seems that validation committees often don't look beyond the official documents. As a result, the validations may not be worth the paper they are written on. Try this one.
One of the best bits of news recently was the downfall of Matthias Rath. He's the man who peddled vitamin pills for AIDS in Africa, and encouraged the AIDS denialists in the South African government. Thabo Mbeki and his Health Minister, Mrs Beetroot, have gone now, thank heavens.
Rath was one of the best illustrations of the murderous effect of selling ineffective treatments. The fact that nobody in the "nutritional therapy" industry has uttered a word of condemnation for this man illustrates better than anything one can imagine the corrupt state of "nutritional therapy". The people who kept silent include the British Association of Nutritional Therapists (BANT).
It might be surprising, then, to find the Northern College of Acupuncture proudly adding a course in alternative nutrition to its courses in acupuncture (now known to be a theatrical placebo) and Chinese herbal medicine (largely untested and sometimes toxic). It might be even more surprising to find the boast that the course is validated by the University of Wales. It seemed a good idea to find out a bit more about how this came about. Thanks to the Freedom of Information Act, some interesting things can be discovered.
Polly Toynbee's superb article, Quackery and superstition – available soon on the NHS, written in January 2008, mentioned diplomas and degrees in complementary therapies offered by, among others, the University of Wales. This elicited a letter of protest to Toynbee from the Vice-Chancellor of the University of Wales, Professor Marc Clement BSc, PhD, MInstP, CEng, CPhys, FIET. He invited her to visit the university to see their "validation and monitoring procedures (including the University's very specific guidelines on health studies disciplines)".
So let's take a look at these validation procedures and guidelines.
The validation process
The Northern College of Acupuncture submitted a 148-page proposal for the course in October 2007. The document has all the usual edu-bollocks jargon, but of course doesn't say much about clinical trials, though it does boast about an unblinded trial of acupuncture published in 2006 which, because of lack of appropriate controls, served only to muddy the waters. This submission was considered by the University's validation committee last December.
Panel of Assessors:
Professor Nigel Palastanga (Chair), Cardiff University
Dr Celia Bell (School of Health and Social Sciences Middlesex University)
Dr John Fish (Moderator designate) (Institute of Biological Sciences University of Wales, Aberystwyth)
Ms Rhiannon Harris (Centre for Nutrition & Dietetics, University of Wales Institute, Cardiff (UWIC))
Ms Felicity Moir (School of Integrated Health University of Westminster)
The whole validation document is only four pages long [download it]. The most interesting thing about it is that the words 'evidence' or 'critical' do not occur in it a single time. It has all the usual bureaucratic jargon of such documents but misses entirely the central point.
Does that mean that the University of Wales doesn't care about evidence or critical thinking? Well, not on paper. Two years previously a short document called Health Studies Guidelines had been written by Dr Brian Spriggs (Health Studies Validation Consultant, since retired) for the Health Studies Committee, and it was approved on 21 April 2005. It starts well.
"Degrees in the Health Studies field are expected to promote an understanding of the importance of the scientific method and an evidence-base to underpin therapeutic interventions and of research to expand that base."
It even goes on to say that a BSc degree in homeopathy is "unacceptable". Don't get too excited though, because it also says that acupuncture and Chinese herbal stuff is quite OK. How anyone can imagine they live up to the opening sentence beats me. And it gets worse. It says that all sorts of rather advanced forms of battiness are OK if they form only part of another degree. They include Homeopathy, Crystal therapy, Dowsing, Iridology, Kinesiology, Radionics, Reflexology, Shiatsu, Healing, and Maharishi Ayurvedic Medicine.
Dowsing? Crystal therapy? Just let me remind you. We are living in 2008. It is easy to forget that when ploughing through all this new age junk.
The Validation Handbook of Quality Assurance: Health Studies (2007) runs to an astonishing 256 pages [download the whole thing]. On page 12 we find the extent of the problem.
"The University of Wales validates a number of schemes in the Health Studies field. At the current time we have undergraduate and/or postgraduate degree schemes in Acupuncture, Animal Manipulation, Chiropractic, Herbal Medicine, Integrative Psychotherapy, Osteopathy, Osteopathic Studies, Traditional Chinese Medicine and Regulatory Affairs, both in the UK and overseas."
That sounds pretty shocking. Further down on page 12, though, we find this.
"Degrees in the Health Studies field are expected to promote an understanding of the importance of the scientific method and an evidence-base to underpin therapeutic interventions and of research to expand that base. The mission is to promote and require the critical evaluation of the practices, doctrines, beliefs, theories and hypotheses that underlie the taught therapeutic measures of the discipline."
They are indeed fine words. The problem is that I can detect no sign in the submission, nor in its consideration by the validation committee, that any attempt whatsoever was made to ensure that the course complied with these requirements.
The only sign of any concern about the quality of what was being taught came in a minute of the Health Studies Committee meeting on 24th April 2008.
"Members received a copy of an article entitled Quackery and superstition available soon on the NHS which appeared in The Guardian newspaper in January 2008, and a copy of the Vice- Chancellors response. Members agreed that this article was now historical but felt that if/when the issue were to arise again; the key matter of scientific rigour should be stressed. The Committee agreed that this was the most critical element of all degree schemes in the University of Wales portfolio of health studies schemes. It was felt it would be timely to re-examine the schemes within the portfolio as well as the guidelines for consideration of Health Studies schemes at the next meeting. The Committee might also decide that Institutions would be required to include literature reviews (as part of their validation submission) to provide evidence for their particular profession/philosophy. It was agreed that the guidelines would be a vital document in the consideration of new schemes and during preliminary visits to prospective Institutions. "
The Press Office had passed Polly Toynbee's article to them. Curiously the Health Studies Committee dismissed it as "historical", simply because it was written three months earlier. That is presumably "historical" in the sense that the public will have forgotten about it, rather than in the sense that the facts of the matter have changed since January. So, at least for the nutrition degree, Toynbee's comments were simply brushed under the carpet.
After a few cosmetic changes of wording the validation was completed on 16th January 2008. For example the word "diagnosis" was removed in 43 places and "rewritten in terms of evaluation and assessment". There was, needless to say, no indication that the change in wording would change anything in what was taught to students.
You may think that I am being a bit too harsh. Perhaps the course is just fine after all? The problem is that the submission and the reaction of the validation committee tell you next to nothing about what actually matters, and that is what is taught. There is only a vague outline of that in the submission (and part of it was redacted on the grounds that if it were made public somebody might copy it. Heaven forbid).
That is why I have to say, yet again, that this sort of validation exercise is not worth the paper it's written on.
How can we find out a bit more? Very easily as it happens. Just Google. What matters is not so much formal course outlines but who teaches them.
The nutrition course
The title of the course is just "Nutrition", not 'Nutritional Therapy' or 'Alternative Nutrition'. That sounds quite respectable but a glance at the prospectus shows immediately that it is full-blown alternative medicine.
Already in July 2007, the glowing press releases for the course had attracted attention from the wonderfully investigative web site HolfordWatch. I see no sign that the validation committee was aware of this. But if not, why not? I would describe it as a dereliction of academic duty.
"This pioneering course is unique in that it is firmly rooted in both Western nutritional science and naturopathic medicine and also covers concepts of nutrition within traditional Chinese, Japanese, Tibetan and Ayurvedic medicine.
This means that graduates will gain comprehensive understanding of both modern scientific knowledge and ancient wisdom concerning nutrition and dietetics."
Ancient wisdom, of course, means something that you are supposed to believe, though there is no good reason to think it's true. In the end, though, almost the only thing that really matters about any course is who is running it. The brochure shows that all of the people are heavily into every form of alternative nuttiness.
Course Director and Tutor: Jacqueline Young, nutritionist, naturopath, clinical psychologist and Oriental medical practitioner
Nutrition Tutors:
Elaine Aldred (qualified as a chiropractor with the Anglo European Chiropractic College, as an acupuncturist with the British College of Acupuncture and as a Western Medical Herbalist with the College of Phytotherapy. She recently also qualified in Chinese herbal medicine with the Northern College of Acupuncture.)
Sue Russell (3 year diploma in nutritional therapy at the Institute of Optimum Nutrition. She currently practises as a nutritional therapist and also works part-time as a manager at the Northern College of Homeopathic Medicine.)
Anuradha Sharma (graduated as a dietician from Leeds Metropolitan University in 2002 and subsequently completed a Naturopathy certificate and a post-graduate diploma in acupuncture).
Guest Lecturers include : Dr John Briffa, Professor Jane Plant, M.B.E. (a geochemist turned quack), and, most revealingly, none other than the UK's most notorious media celebrity and pill peddler, Patrick Holford.
So much has been written about Holford's appalling abuse of science, one would have thought that not even a validation committee could have missed it.
"The course has been created by Jacqueline Young", so let's look a bit further at her track record.
Jacqueline Young has written a book, 'Complementary Medicine for Dummies' [Ed: ahem shouldn't that be Dummies for Complementary Medicine?]. You can see parts of it on Google Books. Did the validation committee bother to look at it? As far as I can tell, the words 'randomised' or 'clinical trial' occur nowhere in the book.
The chapter on Tibetan medicine is not very helpful when it comes to evidence but for research we are referred to the Tibetan Medical and Astrology Institute. Guess what? That site gives no evidence either. So far not a single university has endorsed Astrology (there is a profitable niche there for some vice-chancellor).
Here are a few samples from the book. The advice seems to vary from the undocumented optimism of this
Well researched? No. Safe? Nobody knows. Or this
Mandarin peel prevents colds and flu? Old wives' tale. Then there are things that verge on the weird, like this one
or the deeply bizarre like this
The problem of Jacqueline Young's fantasy approach to facts was pointed out at least as far back as 2004 by Ray Girvan, who wrote about it again in May 2005. The problems were brought to wider attention when Ben Goldacre wrote two articles in his Badscience column, Imploding Researchers (September 2005), and the following week, Tangled Webs.
"we were pondering the ethics and wisdom of Jacqueline Young dishing out preposterous, made-up, pseudoscientific nonsense as if it was authoritative BBC fact, with phrases such as: "Implosion researchers have found that if water is put through a spiral its electrical field changes and it then appears to have a potent, restorative effect on cells." "
and later
"Take this from her article on cranial osteopathy, riddled with half truths: "Sutherland found that the cranial bones (the skull bones encasing the brain) weren't fused in adulthood, as was widely believed, but actually had a cycle of slight involuntary movement." In fact the cranial bones do fuse in adulthood.
She goes on: "This movement was influenced by the rhythmic flow of cerebrospinal fluid (the nourishing and protective fluid that circulates through the spinal canal and brain) and could become blocked." There have now been five studies on whether "cranial osteopaths" can indeed feel these movements, as they claim, and it's an easy experiment to do: ask a couple of cranial osteopaths to write down the frequency of the rhythmic pulses on the same person's skull, and see if they give the same answer. They don't. A rather crucial well-replicated finding to leave out of your story.
That was in 2005 and since then all of Young's "preposterous, made-up, pseudoscientific nonsense" (along with most of the other stuff about junk medicine) has vanished from the BBC's web site, after some people with a bit of common sense pointed out what nonsense it was. But now we see them resurfacing in a course validated by a serious university. The BBC had some excuse (after all, it is run largely by arts graduates). I can see no excuses for the University of Wales.
Incidentally, thanks to web archive you can still read Young's nonsense, long after the BBC removed it. Here is a quotation.
"Implosion researchers have found that if water is put through a spiral its ,field changes and it then appears to have a potent, restorative effect on cells. In one study, seedlings watered with spiralised water grew significantly faster, higher and stronger than those given ordinary water."
The vice-chancellor of the University of Wales, Marc Clement, is a physicist (Department of Electrical and Electronic Engineering), so can he perhaps explain the meaning of this?
Selection committees for jobs (especially senior jobs) and validation committees for courses, might make fewer mistakes if they didn't rely so much on formal documents and did a little more investigation themselves. That sort of thing is why the managerial culture not only takes a lot more time, but also gives a worse result.
It would have taken 10 minutes with Google to find out about Young's track record, but they didn't bother. As a result they have spent a long time producing a validation that isn't worth the paper it's written on. That makes the University of Wales a bit of a laughing stock. Worse still, it brings science itself into disrepute.
What does the University of Wales say? So far, nothing. Last week I sent brief and polite emails to Professor Palastanga and to Professor Clement to try to discover whether it is true that the validation process had indeed missed the fact that the course organiser's writings had been described as "preposterous, made-up, pseudoscientific nonsense" in the Guardian.
So far I have had no reply from the vice-chancellor, but on 26 October I did get an answer from Prof Palastanga.
As regards the two people you asked questions about – J.Young – I personally am not familiar with her book and nobody on the validation panel raised any concerns about it. As for P.Holford similarly there were no concerns expressed about him or his work. In both cases we would have considered their CV's as presented in the documentation as part of the teaching team. In my experience of conducting degree validations at over 16 UK Universities this is the normal practice of a validation panel.
I have to say this reply confirms my worst fears. Validation committees such as this one simply don't do their duty. They don't show the curiosity that is needed to discover the facts about the things that they are meant to be judging. How could they not have looked at the book by the very person that they are validating? After all that has been written about Patrick Holford, it is simply mind-boggling that the committee seems to have been quite unaware of any of it.
It is yet another example of the harm done to science by an unthinking, box-ticking approach.
Pharmacology. A Handbook for Complementary Healthcare Professionals
Elsevier were kind enough to send me an inspection copy of this book, which is written by one of the nutrition course tutors, Elaine Aldred. She admits that pharmacology is "considered by most students to be nothing more than a 'hoop-jumping' exercise in the process of becoming qualified". She also says, disarmingly, that "I was certainly not the most adept scientist at school and found my university course a trial".
The book has all the feel of a cut and paste job. It is mostly very simple (if not simplistic), though for no obvious reason it starts with a long (and very amateur) discussion of chemical bonding. Then molecules are admitted to be indivisible (but, guess what, the subject of homeopathy is avoided). There is a very short section on ion channels, though, bizarrely, it appears under the heading "How do drugs get into cells?". Since the author is clearly not able to make the distinction between volts and coulombs, the discussion is more likely to confuse the reader than to help.
Then a long section on plants. It starts off by asserting that "approximately a quarter of prescription drugs contain at least one chemical that was originally isolated and extracted from a plant". This cannot be even remotely correct. There are vast tables showing complicated chemical structures, but the usual inadequate list of their alleged actions. This is followed by a quick gallop through some classes of conventional drugs, illustrated again mainly by chemical structures, not data. Hormone replacement therapy is mentioned, but the chance to point out that it is one of the best illustrations of the need for RCTs is missed.
The one thing that one would really like to see in such a book is a good account of how you tell whether or not a drug works in man. This is relegated to five pages at the end of the book, and it is, frankly, pathetic. It is utterly uncritical in the one area that matters more than any other for people who purport to treat patients. All you get is a list of unexplained bullet points.
If this book is the source of the "scientific content" of the nutrition course, things are as bad as we feared.
Patent medicines in 1938 and now: A.J.Clark's book.
Alfred Joseph Clark FRS held the established chair of Pharmacology at UCL from 1919 to 1926, when he left for Edinburgh. In the 1920s and 30s, Clark was a great pioneer in the application of quantitative physical ideas to pharmacology. As well as his classic scientific works, like The Mode of Action of Drugs on Cells (1933), he wrote, and felt strongly, about the fraud perpetrated on the public by patent medicine salesmen. In 1938 (while in Edinburgh) he published a slim volume called Patent Medicines. The parallels with today are astonishing.
Alfred Joseph Clark FRS (1885 – 1941)
I was lucky to be given a copy of this book by David Clark, A.J. Clark's eldest son, who is now 88. I visited him in Cambridge on 17 September 2008, because he thought that, as holder of the A.J. Clark chair at UCL from 1985 to 2004, I'd be a good person to look after this and several other books from his father's library. They would have gone to the Department of Pharmacology if we still had one, but that has been swept away by mindless administrators with little understanding of how to get good science.
Quotations from the book are in italic, and are interspersed with comments from me.
The book starts with a quotation from the House of Commons Select Committee report on Patent Medicines. The report was submitted to the House on 4 August 1914, so there is no need to explain why it had little effect. The report differs from recent ones in that it is not stifled by the sort of political correctness that makes politicians refer to fraudsters as "professions".
"2.2 The situation, therefore, as regards the sale and advertisement of proprietary medicines and articles may be summarised as follows:
For all practical purposes British law is powerless to prevent any person from procuring any drug, or making any mixture, whether potent or without any therapeutical activity whatever (as long as it does not contain a scheduled poison), advertising it in any decent terms as a cure for any disease or ailment, recommending it by bogus testimonials and the invented opinions and facsimile signatures of fictitious physicians, and selling it under any name he chooses, on payment of a small stamp duty. For any price he can persuade a credulous public to pay.
Select Committee on Patent Medicines. 1914
"The writer has endeavoured in the present article to analyse the reasons for the amazing immunity of patent medicines form all attempts to curb their activity, to estimate the results and to suggest the obvious measures of reform that are needed."
Clark, writing in 1938, was surprised that so little had changed since 1914. What would he have thought if he had known that now, almost 100 years after the 1914 report, the fraudsters are still getting away with it?
Chapter 2 starts thus.
The Select Committee appointed by the House of Commons in 1914 'to consider and inquire into the question of the sale of Patent and Proprietary Medicines' stated its opinion in 28 pages of terse and uncompromising invective. Its general conclusions were as follows:
That the trade in secret remedies constituted a grave and widespread public evil.
That the existing law was chaotic and had proved inoperative and that consequently the traffic in secret remedies was practically uncontrolled.
In particular it concluded '"that this is an intolerable state of things and that new legislation to deal with it, rather than merely the amendment of existing laws, is urgently needed in the public interest."
The "widespread public evil"continues almost unabated, and rather than introduce sensible legislation to cope with it, the government has instead given a stamp of approval for quackery by introducing utterly ineffective voluntary "self-regulation".
Another Bill to deal with patent medicines was introduced in 1931, without success, and finally in 1936, a Medical and Surgical Appliances (Advertisement) Bill was introduced. This Bill had a very limited scope. Its purpose was to alleviate some of the worst abuses of the quack medicine trade by prohibiting the advertisement of cures for certain diseases such as blindness, Bright's disease [nephritis], cancer, consumption [tuberculosis], epilepsy, fits, locomotor ataxy, lupus or paralysis.
The agreement of many interests was secured for this measure. The president of the Advertising Association stated that the proposed Bill would not affect adversely any legitimate trade interest. Opposition to the Bill was, however, whipped up amongst psychic healers, anti-vivisectionists and other opponents of medicine and at the second reading in March 1936, the Bill was opposed and the House was counted out during the ensuing debate. The immediate reason for this fate was that the Bill came up for second reading on the day of the Grand National! This is only one example of the remarkable luck that has attended the patent medicine vendors.
(Page 14).
The "remarkable luck" of patent medicine vendors continues to this day, Although, in principle, advertisement of cures for venereal diseases was banned in 1917, and for cancer in 1939, it takes only a few minutes with Google to find that these laws are regularly flouted by quacks, In practice quacks get away with selling vitamin pills for AIDS, sugar pills for malaria and homeopathic pills for rabies, polio anthrax and just about anything else you can think of. Most of these advertisements are contrary to the published codes of ethics of the organisations to which the quack in question belongs but nothing ever happens.
Self-regulation simply does not work, and there is still no effective enforcement even of existing laws..
"It has already been stated that British law allows the advertiser of a secret remedy to tell any lie or make any claim that he fancies will sell his goods and the completeness of this licence is best illustrated by the consideration of a few specific points.
Advertisements for secret remedies very frequently contain a list of testimonials from medical men, which usually are in an anonymous form, stating that ………….. M.D., F.R.C.S., has found the remedy infallible. Occasionally, however, the name and address of a doctor is given and anyone unaware of the vagaries of English law would imagine that such use of a doctor's name and professional reputation could not be made with impunity without his consent. In 1899, however, the Sallyco Mineral Water Company advertised that 'Dr. Morgan Dochrill, physician to St. John's Hospital, London and many of the leading physicians are presenting 'Sallyco' as an habitual drink. Dr. Dochrill says nothing has done his gout so much good.'
Dr. Dochrill, whose name and title were correctly stated above, sued the company but failed in his case. "
"The statement that the law does not prevent the recommending of a secret remedy by the use of bogus testimonials and facsimile signatures of fictitious physicians is obviously an understatement since it is doubtful how far it interferes with the use of bogus testimonials from real physicians."
Dodgy testimonials are still a mainstay of dodgy salesmen. One is reminded of the unauthorised citation of testimonials from Dr John Marks and Professor Jonathan Waxman by Patrick Holford to aid his sales of unnecessary vitamin supplements. There is more on this at Holfordwatch.
The man in the street knows that the merits of any article are usually exaggerated in advertisements and is in the habit of discounting a large proportion of such claims, but, outside the realm of secret remedies, the law is fairly strict as regards definite misstatements concerning goods offered for sale and hence the everyday experience of the man in the street does not prepare him for dealing with advertisements which are not merely exaggerations but plain straightforward lies from beginning to end.
Scientific training is undoubtedly a handicap in estimating popular gullibility as regards nostrums. One imagines that no one today would be willing to spend money on pills guaranteed to prevent earthquakes but yet the claims of many of the remedies offered appear equally absurd to anyone with an elementary knowledge of physiology or even of chemistry. A study of the successes and failures suggests that success depends chiefly on not over-rating the public intelligence. (Page 34)
This may have changed a bit since A.J. Clark was writing in 1938. Now the main clients of quacks seem to be the well-off "worried-well". But it remains as true as ever that "Scientific training is undoubtedly a handicap in estimating popular gullibility as regards nostrums." In 2008, it is perhaps more a problem of Ben Goldacre's dictum "My basic hypothesis is this: the people who run the media are humanities graduates with little understanding of science, who wear their ignorance as a badge of honour."
Clark refers (page 36) to a successful conviction for fraud in the USA in 1917. The subject was a widely advertised 'get fat quick' pill that contained lecithin, proteins and sugar. The BMA analysis (in 1912) suggested that the cost of the ingredients in a box of 30 tablets sold for 4/6 was 1 1/4 d. [4/6 meant 4 shillings and six pence, or 22.5 pence since 1971, and 1 1/4 old pence, a penny farthing, is 0.52 new pence]. He comments thus.
The trial revealed many interesting facts. The formula was devised after a short consultation with the expert of one of the largest drug manufacturers in the U.S.A. This firm manufactured the tablets and sold them to the proprietary medicine company at about 3/- per 1000, whilst they were retailed to the public at the rate of £7 10s. per 1000. The firm is estimated to have made a profit of about $3,000,000.
These trials in the U.S.A. revealed the fact that in a considerable proportion of cases the 'private formula' department of the large and well known drug firm already mentioned had first provided the formula for the nostrum and subsequently had prepared it wholesale.
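For readers unused to pre-decimal currency, here is a minimal sketch (in Python, purely illustrative; it assumes nothing beyond the standard conversion of 12 pence to the shilling and 20 shillings to the pound, and uses only the prices quoted above) of what those figures imply about the mark-up.

```python
# Illustrative arithmetic only: convert pre-decimal British prices to old pence
# (12 pence = 1 shilling, 20 shillings = 1 pound) and compare cost with retail price.

def to_pence(pounds=0, shillings=0, pence=0.0):
    """Convert a pounds/shillings/pence price to old pence."""
    return pounds * 240 + shillings * 12 + pence

# BMA analysis (1912): a box of 30 tablets sold at 4/6 contained about 1 1/4 d. of ingredients.
retail_box = to_pence(shillings=4, pence=6)        # 54 old pence (22.5 new pence)
ingredients_box = 1.25                              # 1 1/4 old pence
print(f"Mark-up on a box: about {retail_box / ingredients_box:.0f}-fold")

# Trial figures: tablets made for about 3/- per 1000, retailed at £7 10s. per 1000.
cost_per_1000 = to_pence(shillings=3)               # 36 old pence
retail_per_1000 = to_pence(pounds=7, shillings=10)  # 1800 old pence
print(f"Wholesale-to-retail mark-up: about {retail_per_1000 / cost_per_1000:.0f}-fold")
```

Either way the pills were being sold at something like forty to fifty times what they cost to make, which is precisely the point Clark was making.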
Nothing much has changed here either. The alternative medicine industry (and it is a very big industry) is fond of denouncing the evils of the pharmaceutical industry, and sadly, occasionally they are right. One of the less honest practices of the pharmaceutical industry (though one never mentioned by quacks) is buying heavily into alternative medicine. Goldacre points out
"there is little difference between the vitamin and pharmaceutical industries. Key players in both include multinationals such as Roche and Aventis; BioCare, the vitamin pill producer that media nutritionist Patrick Holford works for, is part-owned by Elder Pharmaceuticals."
And then, of course, there is the deeply dishonest promotion by Boots the Chemists of homeopathic miseducation, of vitamins and of CoQ10 supplements.
The manner in which secret remedies can survive repeated exposure is shown by the following summary of the life history of a vendor of a consumption [tuberculosis] cure.
1904, 1906: Convicted of violating the law in South Africa.
1908: Exposed in British Medical Association report and also attacked by Truth.
1910: Sued by a widow. The judge stated: 'I think this is an intentional and well-considered fraud. It is a scandalous thing that poor people should be imposed upon and led to part with their money, and to hope that those dear to them would be cured by those processes which were nothing but quack remedies and had not the slightest value of any kind.'
1914: A libel action against the British Medical Association was lost.
1915 The cure was introduced into the United States.
1919 The cure was sold in Canada.
1924 Articles by men with medical qualifications appeared in the Swiss medical journal boosting the cure.
Secret remedies have a vitality that resembles that of the more noxious weeds and the examples mentioned suggest that nothing can do them any serious harm.
Most of the time, quacks get away with claims every bit as outrageous today. But Clark does give one example of a successful prosecution. It resulted from an exposé in the newspapers -wait for it -in the Daily Mail.
There is, however, one example which proves that a proprietary remedy can be squashed by exposure if this is accompanied by adequate publicity.
The preparation Yadil was introduced as an antiseptic and was at first advertised to the medical profession. The proprietor claimed that the remedy was not secret and that the active principle was 'tri-methenal allylic carbide'. The drug acquired popularity in the influenza epidemic of 1918 and the proprietor became more and more ambitious in his therapeutic claims. The special virtue claimed for Yadil was that it would kill any harmful organism that had invaded the body. A more specific claim was that consumption in the first stage was cured with two or three pints whilst advanced cases might require a little more. Other advertisements suggested that it was a cure for most known diseases from cancer downwards.
These claims were supported by an extraordinarily intense advertising campaign. Most papers, and even magazines circulating amongst the wealthier classes, carried full page and even double page advertisements. The Daily Mail refused these advertisements and in 1924 published a three column article by Sir William Pope, professor of Chemistry in the University of Cambridge. He stated that the name 'tri-methenal allylic carbide' was meaningless gibberish and was not the chemical definition of any known substance. He concluded that Yadil consisted of:
'About one per cent of the chemical compound formaldehyde.
About four per cent of glycerine.
About ninety-five per cent of water and, lastly, a smell.
He calculated that the materials contained in a gallon cost about 1/6, whilst the mixture was sold at £4 10s. per gallon.
This exposure was completely successful and the matter is of historic interest in that it is the only example of the career of a proprietary medicine being arrested by the action of the Press.
Clark goes on to talk of the law of libel.
"On the other hand the quack medicine vendor can pursue his advertising campaigns in the happy assurance that, whatever lies he tells, he need fear nothing from the interference of British law. The law does much to protect the quack medicine vendor because the laws of slander and libel are so severe."
The law of libel to this day remains a serious risk to freedom of speech of both individuals and the media. Its use by rogues to suppress fair comment is routine. My first encounter was when a couple of herbalists threatened to sue UCL because I said that the term 'blood cleanser' is gobbledygook. The fact that the statement was obviously true didn't deter them for a moment. The herbalists were bluffing no doubt, but they caused enough nuisance that I was asked to take my pages off UCL's server. A week later I was invited back but by then I'd set up a much better blog and the publicity resulted in an enormous increase in readership, so the outcome was good for me (but bad for herbalists).
It was also good in the end for Andy Lewis when his immortal page "The gentle art of homoeopathic killing" (about the great malaria scandal) was suppressed. The Society of Homeopaths' lawyers didn't go for him personally but for his ISP who gave in shamefully and removed the page. As a result the missing page reappeared in dozens of web sites round the world and shot to the top in a Google search.
Chiropractors are perhaps the group most likely to try to suppress contrary opinions by law, not argument. The only lawyers' letter that has been sent to me personally alleged defamation in an editorial that I wrote for the New Zealand Medical Journal. That was a little scary, but the journal stuck up for its right to speak and the threat went away after chiropractors were allowed right of reply (but we got the last word).
Simon Singh, one of the best science communicators we have, has not been so lucky. He is going to have to defend in court an action brought by the British Chiropractic Association because of innocent opinions expressed in the Guardian.
Chapter 6 is about "The harm done by patent medicines". It starts thus.
"The trade in secret remedies obviously represents a ridiculous waste of money but some may argue that, since we are a free country and it pleases people to waste their money in this particular way, there is no call for any legislative interference. The trade in quack medicines cannot, however, be regarded as a harmless one. The Poisons Acts fortunately prevent the sale of a large number of dangerous drugs, but there are numerous other ways in which injury can be produced by these remedies."
The most serious harm, he thought, resulted from self-medication, and he doesn't mince his words.
"The most serious objection to quack medicines is however that their advertisements encourage self-medication as a substitute for adequate treatment and they probably do more harm in this than in any other manner.
The nature of the problem can best be illustrated by considering a simple example such as diabetes. In this case no actual cure is known to medicine but, on the other hand, if a patient is treated adequately by insulin combined with appropriate diet, he can be maintained in practically normal health, in spite of his disability, for an indefinite period. The expectation of life of the majority of intelligent diabetics, who make no mistakes in their regime, is not much less than that of normal persons. The regime is both irksome and unpleasant, but anyone who persuades diabetics to abandon it, is committing manslaughter as certainly as if he fired a machine gun into a crowded street.
As regards serious chronic disease the influence of secret remedies may be said to range from murderous to merely harmful.
'Cures' for consumption, cancer and diabetes may fairly be classed as murderous, since they are likely to cause the death of anyone who is unfortunate enough to believe in their efficacy and thus delay adequate treatment until too late."
The phrase "'Cures' for consumption, cancer and diabetes may fairly be classed as murderous" made Clark himself the victim of suppression of freedom of speech by lawyers. His son, David Clark, wrote of his father in "Alfred Joseph Clark, A Memoir" (C. & J. Clark Ltd, 1985, ISBN 0-9510401-0-3)
"Although tolerant of many human foibles, A. J. had always disapproved fiercely of quacks, particularly the charlatans who sold fraudulent medicines. During his visits to London he met Raymond Postgate, then a crusading left wing journalist, who persuaded A.J. to write a pamphlet which was published in an ephemeral series called 'Fact' in March 1938. It was a lively polemical piece. . To A.J.'s surprise and dismay he was sued for libel by a notorious
rogue who peddled a quack cure for for tuberculosis. This man said that A.J.'s remarks (such as "'Cures' for consumption, cancer and diabetes may fairly be classed as murderous") were libellous and would damage his business. A.J. was determined to fight, and he and Trixie decided to put their savings at stake if necessary. The B.M.A. and the Medical Defence Union agreed to support him and they all went to lawyers. He was shocked when they advised him that he would be bound to lose for he had damaged the man's livelihood! Finally, after much heart searching, he made an apology, saying that he had not meant that particular man's nostrum"
Talk about déjà vu!
On page 68 there is another very familiar story. It could have been written today.
"The fact that the public is acquiring more knowledge of health matters and is becoming more suspicious of the cruder forms of lies is also helping to weed out the worst types of patent medicine advertisements. For example, in 1751 a bottle of oil was advertised as a cure for scurvy, leprosy and consumption but today such claims would not be effective in promoting the sale of a remedy. The modern advertiser would probably claim that the oil was rich in all the vitamins and the elements essential for life and would confine his claims to a statement that it would alleviate all minor forms of physical or mental ill-health.
The average patent medicine advertised today makes plausible rather than absurd claims and in general the advertisements have changed to conform with a change in the level of the public's knowledge.
It is somewhat misleading, however, to speak of this as an improvement, since the law has not altered and hence the change only means that the public is being swindled in a somewhat more skilful manner.
The ideal method of obtaining an adequate vitamin supply is to select a diet containing an abundant supply of fresh foods, but unfortunately the populace is accustomed to live very largely on preserved or partially purified food stuffs and such processes usually remove most of the vitamins."
The first part of the passage above is reminiscent of something that A.J. Clark wrote in the BMJ in 1927. Nowadays it is almost unquotable and I was told by a journal editor that it was unacceptable even with asterisks. That seems to me a bit silly. Words had different connotations in 1927.
"The less intelligent revert to the oldest form of belief and seek someone who will make strong magic for them and defeat the evil spirits by some potent charm. This is the feeling to which the quack appeals; he claims to be above the laws of science and to possess some charm for defeating disease of any variety.
The nature of the charm changes with the growth of education. A naked n****r howling to the beat of a tom-tom does not impress a European, and most modern Europeans would be either amused or disgusted by the Black mass that was popular in the seventeenth century. Today some travesty of physical science appears to be the most popular form of incantation."
A.J. Clark (1927) The historical aspect of quackery, BMJ October 1st 1927
Apart from some of the vocabulary, what better description could one have of the tendency of homeopaths to harp on meaninglessly about quantum theory or the "scienciness" and "referenciness" of
modern books on nutritional therapy?
So has anything changed?
Thus far, the outcome might be thought gloomy. Judging by Clark's account, remarkably little has changed since 1938, or even since 1914. The libel law in the UK is as bad now as it was then. Recently the United Nations Human Rights Committee said UK laws block matters of public interest and encourage libel tourism (report here, see also here). It is unfit for a free society and it should be changed.
But there are positive sides too. Firstly the advent of scientific bloggers has begun to have some real influence. People are no longer reliant on journalists to interpret (or, often, misinterpret) results for them. They can now get real experts and links to original sources. Just one of these, Ben Goldacre's badscience.net, and his weekly column in the Guardian has worked wonders in educating the public and improving journalism. Young people can, and do, contribute to the debate because they can blog anonymously if they are frightened that their employer might object.
Perhaps still more important, the law changed this year. Now, at last, it may be possible to prosecute successfully those who make fraudulent health claims. Sad to say, this was not an initiative of the UK government, which remains as devoted as ever to supporting quacks. Remember that, quite shamefully, the only reason the Medicines and Health Regulatory Authority (MHRA) gave for allowing false labelling of homeopathic pills was to support the "homeopathic industry". They suggested (falsely) that the EU required them to take this irresponsible step, which was condemned by just about every scientific organisation. But the new unfair trading regulations did come from the EU. Almost 100 years after the 1914 report, we have at last some decent legislation. Let's hope it's enforced.
Postscript
The back cover of the series of 'Fact' books in which A.J. Clark's article appeared is reproduced below, simply because of the historical portrait of the 1930s that it gives.
This post got a lot of hits from Ben Goldacre's miniblog which read
Prof David Colquhoun gets into a time machine and meets himself
A truly classic DC post.
Thanks, Ben.
Five good books and a bad one
During the last year, there has been a very welcome flurry of good and informative books about alternative medicine. They are all written in a style that requires little scientific background, even the one that is intended for medical students.
[Book covers: CAM (Cumming) | Trick or Treatment | Snake Oil Science | Testing Treatments | Suckers | Healing, Hype or Harm]
I'll start with the bad one, which has not been mentioned on this blog before.
Complementary and Alternative medicine. An illustrated text.
by Allan D. Cumming, Karen R. Simpson and David Brown (and 12 others). 94 pages, Churchill Livingstone; 1 edition (8 Dec 2006).
The authors of this book sound impressive
Allan Cumming, BSc(Hons), MBChB, MD, FRCP(E), Professor of Medical Education and Director of Undergraduate Learning and Teaching, and Honorary Consultant Physician, College of Medicine and Veterinary Medicine, University of Edinburgh, Edinburgh, UK;
Karen Simpson, BA(Hons), RN, RNT, Fellow in Medical Education, College of Medicine and Veterinary Medicine
David Brown, MBChB, DRCOG, General Practitioner, The Murrayfield Medical Centre, and Honorary Clinical Tutor, University of Edinburgh
Sadly, this is a book so utterly stifled by political correctness that it ends up saying nothing useful at all. The slim volume is, I have to say, quite remarkably devoid of useful information. Partly that is a result of out-of-date and selective references (especially in the chapters written by alternative practitioners).
But the lack of information goes beyond the usual distortions and wishful thinking. I get the strong impression that it results not so much from a strong commitment to alternative medicine (at least by Cumming) as from the fact that the first two authors are involved with medical education. It seems that they belong to that singularly barmy fringe of educationalists who hold that the teacher must not give information to a student for fear of imparting bias. Rather the student must be told how to find out the information themselves. There is just one little problem with this view. It would take about 200 years to graduate in medicine.
There is something that worries me about medical education specialists. Just look at the welcome given by Yale's Dean of Medical Education, Richard Belitsky, to Yale's own division of "fluid concepts of evidence", as described at Integrative baloney @ Yale, and as featured on YouTube. There are a lot of cryptic allusions to alternative forms of evidence in Cumming's book too, but nothing in enough detail to be useful to the reader.
What should a book about Alternative medicine tell you? My list would look something like this.
Why people are so keen to deceive themselves about the efficacy of a treatment
Why it is that we are so often deceived into thinking that something works when it doesn't
How to tell whether a medicine works better than placebo or not (see the sketch after this list)
Summaries of the evidence concerning the efficacy and safety of the main types of alternative treatments.
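As an illustration of the third point, here is a minimal sketch (my own invented numbers, not anything from the book) of the comparison a randomised trial ultimately boils down to: count responders in a treatment arm and in a placebo arm and ask whether the difference could plausibly be chance.

# Minimal two-arm comparison: did the treatment beat placebo?
# Invented numbers for illustration; a real trial needs randomisation, blinding and pre-registration.
from math import sqrt, erfc

def two_proportion_z(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))   # two-sided
    return z, p_value

# 60/100 improved on the remedy vs 52/100 on placebo: z ~ 1.1, p ~ 0.25 -- no good evidence of benefit.
print(two_proportion_z(60, 100, 52, 100))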
The Cumming book contains chapters with titles like these. It asks most of the right questions, but fails to answer any of them. There is, time and time again, the usual pious talk about the importance of evidence, but then very little attempt to tell you what the evidence says. When an attempt is made to mention evidence, it is usually partial and out of date. Nowhere are you told clearly about the hazards that will be encountered when trying to find out whether a treatment works.
The usual silly reflexology diagram is reproduced in Cumming's introductory chapter, but with no comment at all. The fact that it is obviously total baloney is carefully hidden from the reader. What is the poor medical student meant to think when they perceive that it is totally incompatible with all the physiology they have learned? No guidance is offered.
You will look in vain for a decent account of how to do a good randomised controlled trial, though you do get a rather puerile cartoon. The chapter about evidence is written by a librarian. Since the question of evidence is crucial, this is a fatal omission.
Despite the lack of presentation of evidence that any of it works, there seems to be an assumption throughout the book that it is desirable to integrate alternative medicine into clinical practice. In Cumming's chapter (page 6) we see
Since it would not be in the interests of patients to integrate treatments that don't work with treatments that do work, I see only two ways to explain this attitude. Either the authors have assumed that most alternative methods work (in which case they haven't read the evidence), or they think integration is a good idea even if the treatment doesn't work. Neither case strikes me as good medical education.
The early chapters are merely vague and uninformative. Some of the later chapters are simply a disgrace.
Most obviously the chapter on homeopathy is highly selective and inaccurate. That is hardly surprising because it was written by Thomas Whitmarsh, a consultant physician at Glasgow Homeopathic Hospital (one that has still survived). It has all the usual religious zeal of the homeopath. I honestly don't know whether people like Whitmarsh are incapable of understanding what constitutes evidence, or are simply too blinded by faith to even try. Since the only other possibility is that they are dishonest, I suppose it must be one of the former.
The chapter on "Nutritional therapy" is also written by a convert and is an equally misleading piece of special pleading.
The same is true of the chapter on Prayer and Faith Healing. This chapter reproduces the header of the Cochrane Review on "Intercessory prayer for the alleviation of Ill Health", but then proceeds to ignore entirely its conclusion ("Most of the studies show no real differences").
If you want to know about alternative medicine, don't buy this book. Although this book was written for medical students, you will learn a great deal more from any of the following books, all of which were written for the general public.
Trick or Treatment
by Simon Singh and Edzard Ernst, Bantam Press, 2008
Simon Singh is the author of many well-known science books, like Fermat's Last Theorem. Edzard Ernst is the UK's first professor of complementary and alternative medicine.
Ernst, unlike Cumming et al., is a real expert in alternative medicine. He practised it at an early stage in his career and has now devoted all his efforts to careful, fair and honest assessment of the evidence. That is what this book is about. It is a very good account of the subject and it should be read by everyone, and certainly by every medical student.
Singh and Ernst follow the sensible pattern laid out above. The first chapter goes in detail into how you distinguish truth from fiction (a little detail often forgotten in this area).
The authors argue, very convincingly, that the development of medicine during the 19th and 20th century depended very clearly on the acceptance of evidence not anecdote. There is a fascinating history of clinical trials, from James Lind (lemons and scurvy), John Snow and the Broad Street pump, Florence Nightingale's contribution not just to hygiene, but also to the statistical analysis that was needed to demonstrate the strength of her conclusions (she became the first female member of the Royal Statistical Society, and had studied under Cayley and Sylvester, pioneers of matrix algebra).
There are detailed assessments of the evidence for acupuncture, homeopathy, chiropractic and herbalism, and shorter synopses for dozens of others. The assessments are fair, even generous in marginal cases.
Acupuncture. Like the other good books (but not Cumming's), it is pointed out that acupuncture in the West is not so much the product of ancient wisdom (which is usually wrong anyway), but rather a product of Chinese nationalist propaganda engineered by Mao Tse-tung after 1949. It spread to the West after Nixon's visit. Their fabricated demonstrations of open heart surgery under acupuncture have been known since the 70s, but quite recently they managed again to deceive the BBC. It was Singh who revealed the deception. The conclusion is ". . . this chapter demonstrates that acupuncture is very likely to be acting as nothing more than a placebo . . ."
Homeopathy. "hundreds of trials have failed to deliver significant or convincing evidence to support the use of homeopathy for the treatment of any particular ailment. On the contrary, it would be fair to say that there is a mountain of evidence to suggest that homeopathic remedies simply do not work".
Chiropractic. Like the other good books (but not Cumming's) there is a good account of the origins of chiropractic (see, especially, Suckers). D.D. Palmer, grocer, spiritual healer, magnetic therapist and fairground quack, finally found a way to get rich by removing entirely imaginary 'subluxations'. They point out the dangers of chiropractic (the subject of court action), and they point out that physiotherapy is just as effective and safer.
Herbalism. There is a useful table that summarises the evidence. They conclude that a few work and most don't. Unlike homeopathy, there is nothing absurd about herbalism, but the evidence that most of them do any good is very thin indeed.
"We argue that it is now the time for the tricks to stop, and for the real treatments to take priority. In the name of honesty, progress and good healthcare, we call for scientific standards, evaluation and regulation to be applied to all types of medicine, so that patients can be confident that they are receiving treatments that demonstrably generate more harm than good."
Snake Oil Science, The Truth about Complementary and Alternative Medicine.
R. Barker Bausell, Oxford University Press, 2007
Another wonderful book from someone who has been involved himself in acupuncture research. Bausell is a statistician and experimental designer who was Research Director of a Complementary and Alternative Medicine Specialised Research Center at the University of Maryland.
This book gives a superb account of how you find out the truth about medicines, and of how easy it is to be deceived about their efficacy.
I can't do better than quote the review by Robert Park of the American Physical Society (his own book, Voodoo Science, is also excellent)
"Hang up your lantern, Diogenes, an honest man has been found. Barker Bausell, a biostatistician, has stepped out of the shadows to give us an insider's look at how clinical evidence is manipulated to package and market the placebo effect. Labeled as 'Complementary and Alternative Medicine', the placebo effect is being sold, not just to a gullible public, but to an increasing number of health professionals as well. Bausell knows every trick and explains each one in clear language"
Bausell's conclusion is stronger than that of Singh and Ernst.
"There is no compelling, credible scientific evidence to suggest that any CAM therapy benefits any medical condition or reduces any medical symptom (pain or otherwise) better than a placebo".
Here are two quotations from Bausell that I love.
[Page 22] "I seriously doubt, however, that there is a traditional Chinese medicine practitioner anywhere who ever stopped performing acupuncture on an afflicted body in the presence of similarly definitive negative evidence. CAM therapists simply do not value (and in most cases, in my experience, do not understand) the scientific process"
And even better,
[Page 39] "But why should nonscientists care one iota about something as esoteric as causal inference? I believe that the answer to this question is because the making of causal inferences is part of our job description as Homo Sapiens."
Testing Treatments
by Imogen Evans, Hazel Thornton and Iain Chalmers, British Library, 15 May 2006
You don't even need to pay for this excellent book (but buy it anyway, e.g. from Amazon). If you can't afford £15, then download it from the James Lind Library.
This book is unlike all the others, because it barely mentions alternative medicine. What it does, and does very well, is to describe the harm that can be done to patients when they are treated on the basis of guesswork or ideology, rather than on the basis of proper tests. This, of course, is true whether or not the treatment is labelled 'alternative'.
It is worth noting that one of the authors of this book is someone who has devoted much of his life to the honest assessment of evidence, Sir Iain Chalmers, one of the founders of the Cochrane Collaboration, and Editor of the James Lind Library.
A central theme is that randomised double-blind trials are essentially the only way to be sure you have the right answer. One of the examples that the authors use to illustrate this is Hormone Replacement Therapy (HRT). For over 20 years, women were told that HRT would reduce their risk of heart attacks and strokes. But when, eventually, proper randomised trials were done, it was found that precisely the opposite was true. The lives of many women were cut short because the RCT had not been done.
The reason why the observational studies gave the wrong answer is pretty obvious. HRT was used predominantly by the wealthier and better-educated women. Income is just about the best predictor of longevity. The samples were biased, and when a proper RCT was done it was revealed that the people who used HRT voluntarily lived longer despite the HRT, not because of it. It is worth remembering that there are very few RCTs that test the effects of diet. And diet differs a lot between rich and poor people. That, no doubt, is why there are so many conflicting recommendations about diet. And that is why "nutritional therapy" is little more than quackery. Sadly, the media just love crap epidemiology. One of the best discussions of this topic was in a Radio 4 programme, "The Rise of the Lifestyle Nutritionists", by Ben Goldacre.
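A toy simulation makes the confounding point concrete (a minimal sketch with invented parameters, not the actual HRT data): if wealth both raises the chance of taking HRT and improves survival, a naive observational comparison flatters HRT even when the drug itself does a little harm.

# Toy confounding demo: wealth drives both HRT use and longevity (invented parameters).
import random
random.seed(1)

def naive_difference(n=100_000, true_effect=-1.0):   # HRT itself assumed to cost ~1 year here
    with_hrt, without_hrt = [], []
    for _ in range(n):
        wealthy = random.random() < 0.5
        takes_hrt = random.random() < (0.7 if wealthy else 0.2)
        lifespan = 78 + (5 if wealthy else 0) + (true_effect if takes_hrt else 0) + random.gauss(0, 8)
        (with_hrt if takes_hrt else without_hrt).append(lifespan)
    return sum(with_hrt) / len(with_hrt) - sum(without_hrt) / len(without_hrt)

# Prints roughly +1.5 years in favour of HRT users, even though the causal effect was set to -1 year.
# Randomisation breaks the link between wealth and treatment and removes this bias.
print(naive_difference())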
One of the big problems in all assessment is the influence of money, in other words corruption. The alternative industry is entirely corrupt of course, but the pharmaceutical industry has been increasingly bad. Testing Treatments reproduces this trenchant comment.
Suckers. How Alternative Medicine Makes Fools of Us All
Rose Shapiro, Random House, London 2008
I love this book. It is well-researched, feisty and a thoroughly good read.
It was put well in the review by George Monbiot.
"A fascinating and excoriating book; witty, shocking and utterly convincing"
The chapters on osteopathy and chiropractic are particularly fascinating.
This passage describes the founder of the chiropractic religion.
"By the 1890s Palmer had established a magnetic healing practice in Davenport, Iowa, and was styling himself 'doctor'. Not everyone was convinced as a piece about him in an 1894 edition of the local paper, the Davenport Leader, shows."
"A crank on magnetism has a crazy notion that he can cure the sick and crippled with his magnetic hands. His victims are the weak-minded, ignorant and superstitious, those foolish people who have been sick for years and have become tired of the regular physician and want health by the short-cut method . . . he has certainly profited by the ignorance of his victims . . . His increase in business shows what can be done in Davenport, even by a quack"
Over 100 years later, it seems that the "weak-minded, ignorant and superstitious" include the UK's Department of Health, who have given these quacks a similar status to the General Medical Council.
The intellectual standards of a 19th Century mid-western provincial newspaper leader writer are rather better than the intellectual standards of the Department of Health, and of several university vice-chancellors in 2007.
Healing, Hype or Harm
Edited by Edzard Ernst, Imprint Academic (1 Jun 2008)
Download the contents page
My own chapter in this compilation of essays, "Alternative medicine in UK Universities", is an extended version of what was published in Nature last year (I don't use the term CAM because I don't believe anything can be labelled 'complementary' until it has been shown to work). Download a copy of the corrected proof of this chapter (pdf).
Perhaps the best two chapters, though, are "CAM and Politics" by Rose and Ernst, and "CAM in Court" by John Garrow.
CAM and politics gives us some horrifying examples of the total ignorance of almost all politicians and civil servants about the scientific method (and their refusal to listen to anyone who does understand it).
CAM in Court has some fascinating examples of prosecutions for defrauding the public. Recent changes in the law mean we may be seeing a lot more of these soon. Rational argument doesn't work very well with irrational people. But a few homeopaths in jail for killing people with malaria would probably be rather effective.
Healing, Hype or Harm has had some nice reviews. That isn't so surprising from the excellent Harriet Hall at Science-Based Medicine. The introduction to my chapter was a fable about the replacement of the Department of Physics and Astronomy by the new Department of Alternative Physics and Astrology. It was unashamedly based on Laurie Taylor's University of Poppleton column. Hall refers to it as "Crislip-style", a new term to me. I guess the incomparable Laurie Taylor is not well-known in the USA. Luckily Hall gives a link to Mark Crislip's lovely article, Alternative Flight.
"Americans want choice. Americans are increasingly using alternative aviation. A recent government study suggests that 75% of Americans have attempted some form of alternative flight, which includes everything from ultralights to falling, tripping and use of bungee cords."
"Current airplane design is based upon a white male Western European model of what powered flight should look like. Long metal tubes with wings are a phallic design that insults the sensibilities of women, who have an alternative, more natural, emotional, way of understanding airplane design. In the one size fits all design of allopathic airlines, alternative designs are ignored and airplane design utilizing the ideas and esthetics of indigenous peoples and ancient flying traditions are derided as primitive and unscientific, despite centuries of successful use."
Metapsychology Online Reviews doesn't sound like a promising title for a good review of Healing, Hype or Harm, but in fact their review by Kevin Purday is very sympathetic. I like the ending.
"One may not agree with everything that is written in this book but it is wonderful that academic honesty is still alive and well."
In-human resources, science and pizza
Jump to follow up
This is a fuller version, with links, of the comment piece published in Times Higher Education on 10 April 2008. Download newspaper version here.
If you still have any doubt about the problems of directed research, look at the trenchant editorial in Nature (3 April 2008). Look also at the editorial in Science by Bruce Alberts. The UK's establishment is busy pushing an agenda that is already fading in the USA.
Since this went to press, more sense about "Brain Gym" has appeared. First Jeremy Paxman had a good go on Newsnight. Skeptobot has posted links to the videos of the broadcast, which have now appeared on YouTube.
Then, in the Education Guardian, Charlie Brooker started his article about "Brain Gym" thus
"Man the lifeboats. The idiots are winning. Last week I watched, open-mouthed,
a Newsnight piece on the spread of "Brain Gym" in British schools "
Dr Aust's cogent comments are at "Brain Gym" loses its trousers.
The Times Higher's subeditor removed my snappy title and substituted this.
So here it is.
"HR is like many parts of modern businesses: a simple expense, and a burden on the backs of the productive workers", "They don't sell or produce: they consume. They are the amorphous support services" .
So wrote Luke Johnson recently in the Financial Times. He went on, "Training advisers are employed to distract everyone from doing their job with pointless courses". Luke Johnson is no woolly-minded professor. He is in the Times' Power 100 list; he organised the acquisition of PizzaExpress before he turned 30, and he now runs Channel 4 TV.
Why is it that Human Resources (you know, the folks we used to call Personnel) have acquired such a bad public image? It is not only in universities that this has happened. It seems to be universal, and worldwide. Well here are a few reasons.
Like most groups of people, HR is intent on expanding its power and status. That is precisely why they changed their name from Personnel to HR. As Personnel Managers they were seen as a service, and even, heaven forbid, on the side of the employees. As Human Resources they become part of the senior management team, and see themselves not as providing a service, but as managing people. My concern is the effect that change is having on science, but it seems that the effects on pizza sales are not greatly different.
The problem with having HR people (or lawyers, or any other non-scientists) managing science is simple. They have no idea how it works. They seem to think that every activity
can be run as though it was Wal-Mart. That idea is old-fashioned even in management circles. Good employers have hit on the bright idea that people work best when they are not constantly harassed and when they feel that they are assessed fairly. If the best people don't feel that, they just leave at the first opportunity. That is why the culture of managerialism and audit, though rampant, will do harm in the end to any university that embraces it.
As it happens, there was a good example this week of the damage that can be inflicted on intellectual standards by the HR mentality. As a research assistant, I was sent the Human Resources Division Staff Development and Training booklet. Some of the courses they run are quite reasonable. Others amount to little more than the promotion of quackery. Here are three examples. We are offered courses in "Self-hypnosis", in "Innovations for Researchers" and in "Communication and Learning: Recent Theories and Methodologies". What's wrong with them?
"Self-hypnosis" seems to be nothing more than a pretentious word for relaxation. The person who is teaching researchers to innovate left science straight after his PhD and then did courses in "neurolinguistic programming" and life-coaching (the Carole Caplin of academia perhaps?). How that qualifies him to teach scientists to be innovative in research may not be obvious.
The third course teaches, among other things, the "core principles" of neurolinguistic programming, the Sedona method ("Your key to lasting happiness, success, peace and well-being"), and, wait for it, Brain Gym. This booklet arrived within a day or two of Ben
Goldacre's spectacular demolition of Brain Gym: "Nonsense dressed up as neuroscience".
"Brain Gym is a set of perfectly good fun exercise break ideas for kids, which costs a packet and comes attached to a bizarre and entirely bogus pseudoscientific explanatory framework"
"This ridiculousness comes at very great cost, paid for by you, the taxpayer, in thousands of state schools. It is peddled directly to your children by their credulous and apparently moronic teachers"
And now, it seems, peddled to your researchers by your credulous and
moronic HR department.
Neurolinguistic programming is an equally discredited form of psycho-babble, the dubious status of which was highlighted in Beyerstein's 1995 review from Simon Fraser University.
" Pop-psychology. The human potential movement and the fringe areas of psychotherapy also harbor a number of other scientifically questionable panaceas. Among these are Scientology, Neurolinguistic Programming, Re-birthing and Primal Scream Therapy which have never provided a scientifically acceptable rationale or evidence to support their therapeutic claims."
The intellectual standards for many of the training courses that are inflicted on young researchers seem to be roughly on a par with the self-help pages of a downmarket women's magazine. It is the Norman Vincent Peale approach to education. Uhuh, sorry, not education, but training. Michael O'Donnell defined Education as "Elitist activity. Cost ineffective. Unpopular with Grey Suits. Now largely replaced by Training."
In the UK most good universities have stayed fairly free of quackery (the exceptions being the sixteen post-1992 universities that give BSc degrees in things like homeopathy). But now it is creeping in through the back door of credulous HR departments. Admittedly UCL Hospitals Trust recently advertised for spiritual healers, but that is the NHS, not a university. The job specification form for spiritual healers was, it's true, a pretty good example of the HR box-ticking mentality. You are in as long as you could tick the box to say that you have a "Full National Federation of Spiritual Healer certificate, or a full Reiki Master qualification, and two years post certificate experience". To the HR mentality, it doesn't matter a damn if you have a certificate in balderdash, as long as you have the piece of paper. How would they know the difference?
A lot of the pressure for this sort of nonsense comes, sadly, from a government that is obsessed with measuring the unmeasurable. Again, real management people have already worked this out. The management editor of the Guardian said
"What happens when bad measures drive out good is strikingly described in an article in the current Economic Journal. Investigating the effects of competition in the NHS, Carol Propper and her colleagues made an extraordinary discovery. Under competition, hospitals improved their patient waiting times. At the same time, the death-rate for emergency heart-attack admissions substantially increased."
Two new government initiatives provide beautiful examples of the HR mentality in action. They are Skills for Health, and the recently-created Complementary and Natural Healthcare Council (already dubbed OfQuack).
The purpose of the Natural Healthcare Council seems to be to implement a box-ticking exercise that will have the effect of giving a government stamp of approval to treatments that don't work. Polly Toynbee summed it up when she wrote about "Quackery and superstition – available soon on the NHS". The advertisement for its CEO has already appeared. It says that the main function of the new body will be to enhance public protection and confidence in the use of complementary therapists. Shouldn't it be decreasing confidence in quacks, not increasing it? But, disgracefully, they will pay no attention at all to whether the treatments work. And the advertisement refers you to
the Prince of Wales' Foundation for Integrated Health for more information (hang on, aren't we supposed to have a constitutional monarchy?).
Skills for Health, or rather that unofficial branch of government, the Prince of Wales' Foundation, had been busy making 'competences' for distant healing, with a helpful bulleted list.
"This workforce competence is applicable to:
healing in the presence of the client
distant healing in contact with the client
distant healing not in contact with the client"
And they have done the same for homeopathy and its kindred delusions. The one thing they never consider is whether they are writing 'competences' in talking gobbledygook. When I phoned them to try to find out who was writing this stuff (they wouldn't say), I made a passing joke about writing competences in talking to trees. The answer came back, in all seriousness,
"You'd have to talk to LANTRA, the land-based organisation for that",
"LANTRA which is the sector council for the land-based industries uh, sector, not with us sorry . . . areas such as horticulture etc.".
Anyone for competences in sense of humour studies?
The "unrepentant capitalist" Luke Johnson, in the FT, said
"I have radically downsized HR in several companies I have run, and business has gone all the better for it."
Now there's a thought.
Could the provost's newsletter for 24th June 2008 just be a delayed reaction to this piece? For no obvious reason, it starts thus.
"(1) what's management about?
Human resources often gets a bad name in universities, because as academics we seem to sense instinctively that management isn't for us. We are autonomous lone scholars who work hours well beyond those expected, inspired more by intellectual curiosity than by objectives and targets. Yet a world-class institution like UCL obviously requires high quality management, a theme that I reflect on whenever I chair the Human Resources Policy Committee, or speak at one of the regular meetings to welcome new staff to UCL. The competition is tough, and resources are scarce, so they need to be efficiently used. The drive for better management isn't simply a preoccupation of some distant UCL bureaucracy, but an important responsibility for all of us. UCL is a single institution, not a series of fiefdoms; each of us contributes to the academic mission and good management permeates everything we do. I despair at times when quite unnecessary functional breakdowns are brought to my attention, sometimes even leading to proceedings in the Employment Tribunal, when it is clear that early and professional management could have stopped the rot from setting in years before. UCL has long been a leader in providing all newly appointed heads of department with special training in management, and the results have been impressive. There is, to say the least, a close correlation between high performing departments and the quality of their academic leadership. At its best, the ethos of UCL lies in working hard but also in working smart; in understanding that UCL is a world-class institution and not the place for a comfortable existence free from stretch and challenge; yet also a good place for highly-motivated people who are also smart about getting the work-life balance right."
I don't know quite what to make of this. Is it really a defence of the Brain Gym mentality?
Of course everyone wants good management. That's obvious, and we really don't need a condescending lecture about it. The interesting question is whether we are getting it.
There is nothing one can really object to in this lecture, apart from the stunning post hoc ergo propter hoc fallacy implicit in "UCL has long been a leader in providing all newly appointed heads of department with special training in management, and the results have been impressive.". That's worthy of a nutritional therapist.
Before I started writing this response at 08.25 I had already got an email from a talented and hard-working senior postdoc. "Let's start our beautiful working day with this charging thought of the week:".
He was obviously rather insulted at the suggestion that it was necessary to lecture academics with words like " not the place for a comfortable existence free from stretch and challenge; yet also a good place for highly-motivated people who are also smart about getting the work-life balance right.". I suppose nobody had thought of that until HR wrote it down in a "competence"?
To provoke this sort of reaction in our most talented young scientists could, arguably, be regarded as unfortunate.
I don't blame the postdoc for feeling a bit insulted by this little homily.
So do I.
Now back to science.
Monday, June 30, 2014
The ISIL caliphate twist
The successes of the ISIL/ISIS, a terrorist organization working to establish the Islamic State in Iraq and the Levant, have brought a new twist (rooted in very old anti-civilizational delusions) to the unstable politics of the Middle East.
"Dr Ibrahim", the 40-or-so old "caliph" and the permanently masked chieftain of the ISIL/ISIS terrorists. Caliphates – territories led by a Muslim head – were founded after Mohammed's 632 AD death as a religious (and later political) institution. The previous (or so far latest) caliphate, the fifth one, was abolished by Turkish leader Atatürk in 1924 – he managed to convert Turkey to a nearly modern, almost secular country.
The last letter L/S in ISIL/ISIS is either Syria or the Levant. In effect, it may make a difference because "Syria" would be a relatively "modest" ambition of these bigots. On the other hand, the "Levant" refers to the Orient, the whole Eastern Mediterranean with its potentially flexible definition – see e.g. these diverse maps of the new black would-be state (including the Pooh bear on his trip to China).
Sunday, June 29, 2014
Franson's "breakthrough" concerning the speed of light
Increasingly pathetic crackpot papers are being promoted by the outlets calling themselves "scientific media" at an increasing frequency that has probably surpassed the value of "one crackpot paper per day" a long time ago.
In recent days, tons of journalists got obsessed with the theme that "the speed of light might be wrong". The places where you could read this stuff included The Daily Mail and Pakistan's The Nation claiming that Einstein was wrong all along in the very title, The Huffington Post, The Financial Express, Science Alert, and dozens of others.
Most shockingly, there is a website called The Physics arXiv Blog that praises this stuff as well and the wording looks similar to the "real" Physics arXiv Blog although I can't find it there.
Saturday, June 28, 2014
Sarajevo assassination: 100 years
Exactly 100 years ago, the Great War became unavoidable. (That's how the people called a world war before they were forced to realize that this exercise is repeatable.)
On Sunday, June 28th, 1914, the prospective Czech king – who also managed to be destined to become the Hungarian king and the emperor of the rest of Austria-Hungary, too – archduke Franz Ferdinand d'Este, along with his wife, Czech countess Sophie (Žofie, genetically a Czech aristocrat, culturally fully Germanized), who feared for her husband's safety (rightfully, it turned out, but her fear didn't help), was murdered during his visit to the Bosnian capital of Sarajevo, in a South Slavic region that belonged to Austria-Hungary at the time.
The assassin was Gavrilo Princip, a Bosnian Serb whose act was almost certainly coordinated by the Serbian secret services or parts of the Serbian military. Serbia had ambitions to destabilize the South Slavic regions of the Austrian Empire, Croatia, Bulgaria, and others and the principle or the principal goal of Princip's and other efforts was to create something like a Great Serbia. Well, let's use the word: they simply wanted to create Yugoslavia. ;-)
Bruno Zumino: 1923-2014
Sadly, Bruno Zumino, an Italian emeritus professor at UC Berkeley, died at the age of 91 on June 22nd, shortly after midnight.
His 100 or so papers have won him 20,000 citations or so, a sign he was a top physicist.
His papers include six very different articles with over 1,000 citations per paper. They cover the eras both before the discovery of supersymmetry and after the discovery of supersymmetry.
Friday, June 27, 2014
Have Australians and their photons legitimized time travel?
I've received links to this story approximately from 7 people so I will write a short blog post although I don't claim that it's a well-deserved honor for the authors. At any rate, the popular science media were full of the news that physicists managed to simulate time travel with photons, showed that there is nothing wrong with closed time-like curves at the quantum level, and so on.
See e.g. The Daily Mail – to be sure that at least one URL works for more than 30 days – and the press release at the University of Queensland, Doctor Who meets Professor Heisenberg.
All these wonderful things were not invented by the journalists. There is actually an article in Nature Communications
Experimental simulation of closed timelike curves
by Martin Ringbauer, a PhD student (!) and a lead author, and collaborators.
Thursday, June 26, 2014
Strings 2014: talks
Princeton University and the nearby IAS – in combination, the ultimate epicenter of string theory on this planet – are co-hosting Strings 2014 this week (Monday-Friday). By far the most useful page on that server is this
Talks at Strings 2014 (URLs of slides and videos)
You may see that the AdS/CFT (and, more generally, "holographic") talks represent the largest percentage of the contributions. That includes applications in condensed matter physics, topological metals, turbulence, various indices, spectral curves, and so on.
Several talks – including one by the BICEP2 boss John Kováč – are dedicated to inflation and primordial gravitational waves. This set includes Paul Steinhardt's monologue, Daniel Baumann's talk, Eva Silverstein's monodromy speech, Fernando Marchesano's thoughts about the same type of models, and Matias Zaldarriaga's musings about the dawn of B-modes (have I missed someone?).
Should BICEP2, Higgs have crushed the Universe?
Only if you believe that there can't be any saviors
Yo Yo and other readers were intrigued by the following cool yet slightly misleading article in the Daily Mail:
Big Bang controversy grows: Study claims universe would have collapsed 'a second after it formed' if Bicep2 results were true
Lots of science media are combining the July 2012 Higgs discovery and the March 2014 BICEP2 discovery in this apocalyptic way although some of them chose a more sensible – more correct and less catastrophic – title (I mean and praise the titles referring to "new physics"). The articles were sparked by the following paper
Electroweak Vacuum Stability in light of BICEP2 (arXiv)
by Malcolm Fairbairn and Robert Hogan from King's College London that was published in PRL one month ago (which is why the explosion of hype right now seems to be a bit late from any point of view).
To make the story short, if both the discovery of the \(125\GeV\) Higgs boson and the BICEP2 discovery of the primordial gravitational waves are valid, the Universe should have decayed – fled into an increasingly unlivable state incompatible with the particles as we know them – just a moment after the Big Bang. The following 13.8 billion years should have been impossible.
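A rough sketch of the numbers behind that claim (standard estimates, quoted approximately; this is my paraphrase, not a quote from the paper): the tensor-to-scalar ratio fixes the inflationary Hubble scale via\[
H_{\rm inf} \approx \pi M_{\rm Pl}\sqrt{\frac{A_s\, r}{2}} \approx 1\times 10^{14}\GeV
\] for \(r\approx 0.2\), \(A_s\approx 2.2\times 10^{-9}\), and the reduced Planck mass \(M_{\rm Pl}\approx 2.4\times 10^{18}\GeV\). For the measured Higgs and top masses, the Standard Model Higgs potential turns negative somewhere around \(10^{10}\)–\(10^{11}\GeV\), so the inflationary fluctuations of the Higgs field, of order \(H_{\rm inf}/2\pi\), would have kicked the field over the barrier, unless some "savior" new physics (a Higgs-inflaton coupling, a non-minimal coupling to gravity, supersymmetry, and so on) modifies the potential.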
Tuesday, June 24, 2014
Philosophy became a euphemism for crackpot physics
Sean Carroll attempted to defend philosophy against the physicists who rightfully point out that it is not the right path to learn how the world around us works:
Physicists Should Stop Saying Silly Things about Philosophy (Preposterous Universe)
This defense isn't surprising because Sean Carroll is an eminent example of a physics crackpot who uses philosophy and his personal links with philosophers to suggest that he is something else than a physics crackpot.
He lists three valid criticisms that physicists sometimes raise against philosophy and attempts to disagree with them:
"Philosophy tries to understand the universe by pure thought, without collecting experimental data."
"Philosophy is completely useless to the everyday job of a working physicist."
"Philosophers care too much about deep-sounding meta-questions, instead of sticking to what can be observed and calculated."
These criticisms – in one way or another articulated by folks like Weinberg, Hawking, Krauss, Tyson, and others – are true even though Weinberg in particular has stated the problems with philosophy much more crisply and accurately.
Try Wolfram Programming Cloud now
Sort of a remote online Mathematica+ for everyone
Stephen Wolfram and his folks have silently started an amazing thing:
WolframCloud.COM (click!)
See also Wolfram's blog post, Wolfram Programming Cloud is live, that offers you some cool examples what you may do right now.
I came to the web above, WolframCloud.COM, and tried a username/password combination I used to use with some Wolfram Alpha widgets or something. It has worked! You may create a new account over there, I guess.
What's really new and live on WolframCloud.COM is the "Wolfram Programming Cloud". You may click at many things over there, including "New". The latter gives you a Mathematica (more precisely: Wolfram Language) notebook interface of a sort. But everything is run on Wolfram's computers.
Lots of the functionalities you may expect from the cloud will satisfy you.
An LHC-friendly type IIA stringy braneworld
Dimitri Nanopoulos et al. have studied a different class of string compactifications capable of describing the Universe around us, at least as the first sketch. But today, they switched to type IIA braneworlds:
A Realistic Intersecting D6-Brane Model after the First LHC Run
Tianjun Li, D. V. Nanopoulos, Shabbar Raza, Xiao-Chuan Wang look at a particular model with D6-branes in type IIA string theory on the \(T^6/ \ZZ_2\times\ZZ_2\) orbifold.
This picture is originally from a paper about F-theory model building but the 2D illustrations aren't too different.
BICEP2 and PRL: journalists prove that they're trash
On Wednesday, Prof Knížák, a top Czech artist, told us that the journalists are wrong about everything. Whatever they write down is guaranteed to be wrong.
He has famously said that nowadays, the state of being informed is a sign of the lack of education because people are being mass-fed by distorted, irrelevant, and bogus stories. Students at schools are no longer able to think or debate. A cacophony of meaningless monologues has superseded a thoughtful dialogue. The Internet search engines have amplified the problem because especially young people are increasingly copying whole sentences and answers verbatim. They're no longer able to build any framework in their minds that could be used as a starting point for generating conclusions, predictions, or opinions.
When he was saying these things, I would think he was exaggerating. I have seen journalists who have written deep and sometimes even true and important things before, haven't I? However, since Wednesday or so, my impression has changed. I've been totally overwhelmed and repelled by the Internet and the media. The amount and intensity of recent junk and pure lies has exceeded some episodes I vaguely remember from the past.
Higgs correctly decays to bottoms, taus
The confidence level exactly matches the ATLAS contest top scores
The MIT released a cute press release 12 hours ago (which promotes a new paper in Nature Physics):
Fresh evidence suggests particle discovered in 2012 is the Higgs boson
Findings confirm that a particle decays to fermions, as predicted by the Standard Model.
Evidence for the direct decay of the \(125\GeV\) Higgs boson to fermions (Nature Physics)
Markus Klute of MIT and his collaborators looked for traces of a Higgs boson decaying to a pair of tau leptons,\[
h \to \tau^+\tau^-
\] See also a related CERN press release. Note that in July 2012, the Higgs boson was originally discovered by looking at processes when it was born and decayed either to two photons or two Z-bosons:\[
h \to \gamma\gamma, \quad h\to Z^0 Z^0
\] The processes involving a pair of fermions in the final state are a little bit less frequent. Note that the fermions get their masses from the God mechanism which means that the heavier fermions have stronger interactions with the Higgs. That's why the 3rd generation fermions are reasonably easy to be seen.
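For concreteness, the statement that heavier fermions interact more strongly with the Higgs is just the tree-level Yukawa relation (standard textbook formulas; the numbers are approximate and the phase-space factor is omitted):\[
y_f = \frac{\sqrt{2}\, m_f}{v},\qquad v\approx 246\GeV,\qquad
\Gamma(h\to f\bar f)\approx \frac{N_c\, y_f^2\, m_h}{16\pi} = \frac{N_c\, m_f^2\, m_h}{8\pi v^2},
\] so with \(m_\tau\approx 1.8\GeV\) and \(m_b\approx 4.2\GeV\) one gets \(y_\tau\approx 0.010\) and \(y_b\approx 0.024\), which is why the third-generation fermions are the easiest fermionic decay channels of the \(125\GeV\) Higgs to see.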
You may watch Particle Fever soon
If you have been thinking about watching the 99-minute movie about the LHC, Particle Fever (by Mark Levinson and David Kaplan), it's time for you to recheck the movie's website, which provides various ways to see the thriller.
What's your place where you usually get movies? Amazon? iTunes? Something else? ;-) Whatever it is, you should look there because chances are that you may see the movie very soon, sooner than you expect! You may find traces of the movie that you haven't found before when you tried. I hope that the hints have been sufficient. :-)
Siméon Denis Poisson: a birthday
Siméon Denis Poisson was born at the beginning of summer 1781, on June 21st (like today), to a French royal soldier. He would die in 1840, at the age of 58. He was a top French mathematician, geometer, and physicist of his era.
Both Lagrange and Laplace were his advisers. Liouville, Dirichlet, and Carnot (of the thermodynamic cycle fame) were among his students.
Revolutionary changes in France have influenced him in many professional ways. You may read it elsewhere.
A Czech anti-Maidan warrior
This guy came to Eastern Ukraine to protect Donbass and to fight against the Maidan regime:
I must say that I sort of admire him. His name is Ivo Stejskal and he is a teacher of physical education and civic education in Brno, Moravia, Czechia. He must be sort of inspiring for his (basic school, Novolíšeňská Street) students. The Czech media inform that he's a polite, likable person who gets the best ratings from his friends and colleagues.
BICEP2 gets published in PRL
Discovery upheld, paper nearly unchanged
Lots of sourballs and jealous experimenters (and theorists) have been trying to sling mud on the March 2014 discovery of the primordial gravitational waves by BICEP2. Some of them have been suggesting that it wasn't even kosher to write or talk about the discovery before it gets through the "peer review".
Well, those people associating the process of "peer review" with supernatural abilities have yet another reason to shut their mouth because the work was just published in the appropriately prestigious Physical Review Letters:
Detection of B-Mode Polarization at Degree Angular Scales by BICEP2 by Ade et al. (BICEP2)
Not just the abstract above but the whole paper is available for free. We should have entered the era in which almost all cosmo/astro/particle/theoretical physics papers should be available for free due to a contract.
Unless you have memorized individual sentences in the original draft (see arXiv) really carefully, you won't really find a difference. The paper claims the discovery of these waves. I can't even safely say whether it lists more methods to be confident that the discovery is real or fewer methods to do so.
The abstract claims that the null hypothesis is excluded at "more than five sigma" confidence level (and, later in the abstract, "seven sigma" confidence level) using the first method, that the dust is 5-10 times smaller than the observed signal if various models of the dust available in the literature are being used, and that cross-correlation arguments and the right spectral index exclude the dust at "three sigma" or "one point seven sigma" even without any models. It also says that if all these things are ignored, it's plausible that some completely new model of the dust could change the conclusion or at least the confidence level. What a surprise. Any development in any part of science may change things.
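For readers who want to translate those sigmas into probabilities, here is a minimal sketch using the usual Gaussian tail convention (nothing in it is specific to BICEP2's analysis):

# Convert an n-sigma claim into the corresponding Gaussian tail probability.
from math import erfc, sqrt

def p_value(n_sigma, two_sided=False):
    p = 0.5 * erfc(n_sigma / sqrt(2))   # one-sided tail beyond n sigma
    return 2 * p if two_sided else p

for n in (1.7, 3.0, 5.0, 7.0):
    print(n, p_value(n))
# one-sided: 1.7 sigma ~ 4.5e-2, 3 sigma ~ 1.3e-3, 5 sigma ~ 2.9e-7, 7 sigma ~ 1.3e-12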
Will millions of girls start to code for $50 million?
I don't think so but I wish Google good luck
These folks at Google must really believe the cause.
What you said was nice, girls, except that I would bet that these words were written by a man.
Google along with Chelsea Clinton and others are paying $50 million to an initiative (madewithcode.com) that should attract girls to coding, programming, and reduce the gender gap in information technologies.
Other texts on similar topics: computers, freedom vs PC, science and society
AMS-02: no cutoff in positron fraction up to \(400^+\GeV\)
Fer137 has pointed out that the AMS-02 collaboration published some new data a few days ago.
In contradiction with my previous sociological speculations 14 months ago, they still don't show any cutoff after 11 million electron-positron events have been taken into account.
Feynman was right: easily explainable theories can't be worth physics Nobel prize
Out of many quotes by Richard Feynman, Tommaso Dorigo picked a 1965 statement printed in the July 22nd 1985 issue of People Magazine:
Hell, if I could explain it to the average person, it wouldn't have been worth the Nobel prize.
Dorigo himself adds: "I sure cannot disagree more with Dick than on the above sentence!" It is not quite clear to me whether Dorigo "only" disagrees with the general idea that Feynman wanted to convey or whether he even disagrees with the statement that Quantum Electrodynamics cannot be explained to the average person. I am eagerly waiting for Dorigo's textbook explaining QED to the average person! One can't even explain the actual general postulates of quantum mechanics to an average physics PhD – there is a whole "discipline" in the Academia that says that it's perfectly OK not to understand them and to replace the proper knowledge of the universal postulates by a comparative-literature-department-inspired discussion blog – so be sure that the task I assigned to him is much harder.
Maybe I should have used a skyscraper instead?
Well, your humble correspondent does agree with Feynman. At least for several centuries, cutting-edge physics – especially theoretical physics – has been built on top of a tall pyramid of insights that depend on other insights. The newest developments, including the breakthroughs, may transform several floors at the top, perhaps many floors. But they just can't overthrow the pyramid so entirely that you wouldn't need at least a couple of floors at the bottom.
Other texts on similar topics: philosophy of science, science and society
An ex-presidential birthday party
I think that generic parties and similar events are not a good material for a weblog that is expected to be global in character. One reason is that most readers are not interested; another reason is that I am intrinsically an introvert and privacy is something I consider important.
But Czech ex-president Václav Klaus' 73rd birthday banquet may be different. It is a public event of a sort. I was invited and I went there yesterday. It was organized in the headquarters of the Václav Klaus Institute, in the Chateaux of Hanspaulka, the villa above.
Hanspaulka is "better than the adjacent places" villa quarter in Prague, Northwest from the Prague Castle, a subhabitat inside the villa neighborhoods called Dejvice, Střešovice, and Bubeneč. Prague dwellers will surely forgive me this outsider version of the Prague geography. I was walking a lot – 15 miles a day – and the "social shoes" are not good for that so some of my blisters are really bloody bastards. By the way, it was a nearly sunny, very warm, but fortunately not tropical, day.
Other texts on similar topics: climate, everyday life, politics
David Evans' notch-filter theory of the climate is infinitely fine-tuned
The required notch filter itself is the key disease showing that the particular solar model is almost certainly incorrect
More than two months ago, Jo Nova's partner David Evans sent a group of people including your humble correspondent impressive-looking and formally convincing documents about a new solar theory of the climate. I have spent many hours reading them and thinking about them, exchanging e-mails with David, and so on. Because the documents were rather long, I needed an hour at the very beginning to see what the model really says, but that was followed by many other hours of reading.
Sometime on the second day, I became pretty much certain that the model is wrong. At that time, I should have stopped all interactions because they were unlikely to be constructive and I was at risk that I wouldn't even be thanked for the intense hours even though David would tell me he was incorporating my feedback – and this worry seems to have materialized, indeed. Not that it's too important! ;-) I did stop spending my time a few days later, anyway.
More importantly, I think that the climate cannot work like that, and if you look at how the theory works and what is used as evidence in favor of the theory, it's very clear that there is no evidence at all. Now that the theory is no longer embargoed, see big news I and big news II on Jo Nova's blog, let me summarize the model a little bit concisely.
Other texts on similar topics: climate, heliophysics, mathematics, science and society
John Oliver funnily interviews Stephen Hawking
Stephen Hawking has demonstrated that he's a good comedian once again.
John Oliver asked many important questions about the Universe, the alien life, the dictatorship by robots, and his ability to date someone in a parallel universe.
Obama's commencement speech and the illusion of science literacy
Obama gave a commencement speech at UC Irvine:
He said some optimistic words, left-wing clichés about the inequality and the tautologically untrue propositions about the superior importance of the middle class, the need to welcome immigrants, and especially various words about the good quality of UC Irvine. This school sort of sucks but because they sent him 10,000 postcards to make him visit, they must be great.
I will discuss his comments about the climate and science in general. Go to 8:00 or so where this segment begins. This junk unfortunately goes on and on and on.
Other texts on similar topics: climate, education, politics, science and society
ATLAS race: conquering K2 for the first time
If someone happens to occasionally follow the ATLAS Higgs Contest Leaderboard, she could have noticed that among the 677 athletes or teams, your humble correspondent jumped to the 2nd place early in the morning. (Too bad, the T.A.G. team jumped above me an hour later, by a score higher by 0.00064 than mine, so I am third again.)
This is how I imagine the formidable competitors. Lots of powerful robotics and IT under the thick shields, boasting the ability to transform from one form to another, consuming terawatts of energy, and so on. Most of them would have competed in numerous similar contests. The only "big data" programming I have done in my life was a reformatting of the 80,000 Echo comments on this blog a few years ago, and I didn't really write too many smart programs in the last 25 years, and none of them was in the typical programming languages that contemporary programmers like to use.
But even if one is a programming cripple like that, he is allowed to compete. In this sense, ATLAS and Kaggle are more welcoming than a KFC branch in Missouri that ordered a 3-year-old girl injured by pitbulls to leave the restaurant because she was scaring the other consumers away.
Other texts on similar topics: computers, experiments, Kaggle, LHC, science and society, sports, string vacua and phenomenology
Windows 8.1, shape writing etc.
Last night, I wasn't sufficiently exhausted by the usual Sunday floorball match and card games so I decided to upgrade my Lumia 520 to Windows Phone 8.1 preview for developers. You register yourself as a developer and then "check for updates" offers you these beta updates, too. You can't return to 8.0 and may void some warranties. But you will be upgraded to the final version of Windows Phone 8.1 when it's out. The upgrade process was free of any glitches (you shouldn't try to interrupt the process by taking pictures etc.! Be sure that you don't run out of the battery in the middle!) but it took hours to be completed.
There are various improvements. You may see that the tiles may be transparent and show a background image behind them. You may increase the number of columns of tiles from two to three. FM radio is moved to a special application, a new file manager works, an Android-like "notification [action] center plus fast settings" is added via the top swipe. Cortana is a personal assistant, Microsoft's answer to Siri (it only works in English and Chinese so far). A battery supervisor is improved much like tons of other things. Applications and everything else may finally be saved to the removable SD card.
Other texts on similar topics: computers, everyday life
Has the extinction rate increased 1,000 times?
Lots of journalists happily spread the "gospel" about a recent paper in the Science Magazine,
The biodiversity of species and their rates of extinction, distribution, and protection
by S.L. Pimm and 8 co-authors (U.S., U.K., Brazil). The abstract suggests it is a rather careful, conservative paper with some interesting statistics. The summaries in the media are not so careful, I think.
The eye-catching figure is that it's being estimated – and as far as I see, the paper assumes it is essentially right – that the "number of species that go extinct per year" has increased by three orders of magnitude. That's huge and I would surely count myself as someone who cares about the biodiversity problem if the truth were close to this number.
I have no doubts that people have exterminated numerous species – usually by clearly hostile tools such as guns, not so much by some esoteric, hypothetical, and convoluted methods such as carbon dioxide emissions which are OK for everyone – but are the numbers so bad?
Other texts on similar topics: biology, science and society
James Clerk Maxwell: a birthday
Off-topic, geology: The world ocean may have just quadrupled. A Science article brings evidence that at the depth of 410-660 km, there is a huge amount of so far overlooked water hiding in ringwoodite, a sponge-like stone, whose volume is 3 times the surface oceans' combined.
James Clerk Maxwell was born on June 13th, 1831, i.e. 183 (similar digits) years ago, which was, fortunately for him, a Monday and not the Friday we have today ;-), to a wealthy advocate from a family of Conservative Party lawmakers, and he died on November 5, 1879 (deja vu without MathJax). He was probably the most influential 19th century string theorist even though he mostly cared about the low-energy limit, similarly to the supergravity theorists today.
Michael Mann's six new lies
BillMoyers.com interviewed the world's most notorious fraudulent climate fearmonger Michael Mann about
Six Things Michael Mann Wants You to Know About the Science of Global Warming.
Well, they mostly spoke in such a way that Michael Mann preached and BillMoyers.com obediently listened so I shouldn't have called it an interview.
His text is rather incredible. As Roger Pielke Jr observed five years ago, if Michael Mann did not exist, the skeptics would have to invent him.
According to Mann's latest tirade, everyone would be a fearmonger and a demagogue like himself if the public became more familiar with six propositions – various would-be facts and ideas. What are they? Are they true?
1. Climate Scientists are the Real Skeptics
No, they are not. Climate skeptics are known as "skeptics" for a good reason – because they are nothing else than the practitioners of scientific skepticism in the context of the remarkable claims about the climate. Climate fearmongers such as Michael Mann himself are those who are rejecting the rules of the scientific skepticism in a way that is completely analogous to the blunders committed by the advocates of paranormal phenomena and similar things.
Quantum contextuality is just another fancy word for Bohr's complementarity
People keep on rediscovering the old quantum wheel, while they produce and eat lots of šit, too
The popular science media were full of reports that a "magic word" has been found that will enable quantum computers, and the "magic word" is "contextuality". Quantum contextuality is a fancy word for the fact that quantum mechanics doesn't allow you to assume that the quantities you measure objectively had (in the classical sense) the sharp values you ultimately measured before the measurement.
Because of the quantum contextuality, what the measurements reveal depends on the character of the measurements, and not just some would-be objective reality that exists independently of the measurements.
All this journalistic excitement is based on a paper in Nature:
Contextuality supplies the magic for quantum computation by Howard, Wallman, Veitch, Emerson (arXiv, Nature)
Quantum computing: Powered by magic by Bartlett (Nature, semipopular)
... EurekAlert press release, Google News ...
The actual technical paper has a higher percentage of correct statements relative to the wrong statements than the typical papers published about "the foundations of quantum mechanics" these days. But it is still a bizarre mixture of popular-book-level hype and distortions with some potentially technical stuff in quantum computation.
Barack Obama passes the Turing test, too
The famous computer science pioneer Alan Turing decided to define "artificial intelligence" as the machine's ability to speak in such a way that it fools the people around it into thinking that he or she or it is an actual human being. I don't think that this very definition of intelligence is deep – this will be discussed later.
Barack Obama and his Japanese friend
But let's first cover the story. As the chatbot's namesake Eugene S told us, the media have been full of hype about a chatbot pretending to be a 13-year-old Ukrainian boy Eugene Goostman (see his or her or its website where you may chat with Eugene) who has tricked 1/3 of a London committee into believing the words were produced by a human. The programmer of the chatbot remained modest and he would probably agree that his program isn't dramatically more advanced than Eliza that was created half a century ago.
(I still remember my encounter with a 130-cm robot who came to me and shook my hand at the Rutgers Busch Campus Cafeteria sometime in 1999. The discussion with this robot – about Czechia, Werner Heisenberg, and other things – was much more inspiring than similar talks one may have with 99% of the people. For a day or so, I was stunned: has the artificial intelligence improved so much? Beware spoilers: After the day, I assured myself that the robot has had cameras, microphones, and speakers converting human voice to a funny robotic noise, and this "artificial personality" was controlled remotely from about a 50-meter-distant location.)
Here's my interview with another one that has tricked almost all Americans and people in the world into believing that his sentences are genuine human creations rather than decorated rhetorical patterns invented by semi-automatic politically correct speechwriters.
Motl: Did you know about the policy of selective targeting of conservative groups by the Internal Revenue Service?
Obama: Let me make sure that I answer your specific question. I can assure you that I certainly did not know anything about the IG report before the IG report had been leaked through the press.
Motl: But that wasn't my question. I was asking generally about the harassment of right-wingers, not about a report of yours.
Obama: Let me be clear. Now, could you tell me where you live?
Motl: Hmm. What about the relationships with Eastern Europe? Don't you think that America should support the independently working prosperity of countries such as Poland instead of their obsession with permanently viewing Russia as the culprit behind all their failures?
Obama: Let me make sure: Poland is one of our strongest and closest allies. Using a phrase from boxing, Poland punches above its weight. ;-D
Other texts on similar topics: computers, politics, science and society
Zeman's speech on Arabs, Islam, and Israel's independence
Czech president wins the hearts of some Israel supporters
Two weeks ago, I couldn't have missed Czech president Miloš Zeman's speech in Prague's Hilton – on Israel's Independence Day. Here is a translation from the Czech original.
Speech of the president of the republic at the Israel Independence Day banquet
thank you for your invitation to this celebration of Israel's Independence Day. In the Czech Republic, dozens of state holidays commemorating the independence are being marked every year. I may arrive to some of them, I am too busy to attend others, but the only holiday of independence which I can never leave out is the celebration of the independence of the Jewish State of Israel.
Other texts on similar topics: Czechoslovakia, Middle East, politics, religion
Wednesday, June 11, 2014
Entanglement and networks of wormholes
The newly realized relationship between the geometric connections in the spacetime and the standard quantum entanglement has been the topic of exciting papers in recent years. One aspect of the papers that have been written down so far made them simple and too special: the entangled systems were always pretty much pairs of degrees of freedom and the wormhole correspondingly looked bipartite, like a cylindrical tunnel connecting two pretty much identical throats at the ends.
A newly published 65-page-long hep-th preprint
Multiboundary Wormholes and Holographic Entanglement
by Balasubramanian (I don't need a clipboard, Vijay!), Hayden, Maloney (hi, Alex!), Marolf, and Ross from Upenn/CUNY-Stanford-McGill/Harvard-UCSB-Durham (yes, seven affiliations for five authors, guess why!) was written in order to transcend this limitation.
A cantor learns a lesson from a brat
Oops, native English speakers probably don't know that a "kantor" is a teacher or a schoolmaster in Germany, Czechia, or Central Europe in general, so please be aware that the title is wittier than it sounds LOL
America is thrilled by the victory of an unknown Tea Party candidate, Dave Brat, an economics instructor at an unknown college in Virginia, over Eric Cantor, the House Majority (=GOP) Leader, in the Virginian Republican primaries.
One has to go back a decade to find a majority leader (Daschle) who lost an election, and no one remembers a majority leader losing in the primary elections. No one remembers because it hasn't happened since 1899 when the post of the majority leader was created.
I think that Cantor is a smart and sensible chap and I could disagree with some beliefs of Brat but as far as I am concerned, the positive emotions outweigh the negative ones.
Basics of the ATLAS contest
Update 6/15: After several days, I returned to the top three out of the 656 competitors (or teams). 3.74428 would have been enough to lead a week ago but times are changing. We are dangerously approaching the 3.8 territory at which I am likely to lose a $100 bet that the final score won't surpass 3.8, and I am contributing to this potential loss myself. ;-)
...and some relativistic kinematics and statistics...
In the ATLAS machine learning contest, somebody jumped above me yesterday so I am at the fourth place (out of nearly 600 athletes) right now. Mathieu Cliche made Dorigo's kind article about me (yes, some lying anti-Lumo human trash has instantly and inevitably joined the comments) a little bit less justifiable. The leader's advantage is 0.02 relative to my score. I actually believe that up to 0.1 or so may easily change by flukes, so the whole top ten if not top hundred could be in a statistical tie – which means that the final score, using a different part of the dataset, may bring anyone from the group to the top.
(Correction in the evening. It's the fifth place now, BlackMagic got an incredible 3.76 or so. I am close to giving up because the standard deviation in the final score is about 0.04, I was told.)
I have both "experimental" and theoretical reasons to think that 0.1 score difference may be noise. Please skip this paragraph if it becomes too technical. Concerning the "experimental" case, well, I have run several modified versions of my code which were extremely similar to my near-record at AMS=3.709 but which seemed locally better, faster, less overfitted. The expected improvement of the score was up to 0.05 but instead, I got 0.15 deterioration. Concerning the theoretical case, I believe that there may be around 5,000 false negatives among the 80,000+ or so (out of 550,000) that the leaders like me are probably labeling as "signal". The root mean square deviation for 5,000 is \(\sqrt{5,000}\sim 70\) so statistically, \(5,000\) really means \(5,000\pm 70\) which is \(1.5\%\). That translates to almost \(1\%\) error in \(\sqrt{b}\) i.e. \(1\%\) error in \(s/\sqrt{b}\) (the quantity \(s\) probably has a much smaller relative statistical error because it's taken from the 75,000 base) which is 0.04 difference in the score.
It may be a good time to try to review some basics of the contest. Because the contest is extremely close to what the statisticians among the experimental particle physicists are doing (it's likely that any programming breakthrough you would make would be directly applicable), this review is also a review of basic particle physics and special relativity.
Other texts on similar topics: computers, experiments, Kaggle, LHC, science and society, string vacua and phenomenology
Lumia, Windows Phone: experience of the first hours
For a few hours, I've considered myself familiar with all the major mobile operating systems. I received an iPod Touch with iOS almost four years ago as a gift/compensation from Paul O. and I have played with an Android (ASUS Memo Pad Smart 10) tablet since October, while helping others with their Android phones (and another Android tablet I bought as a gift).
It was sort of inevitable that I wanted to try Windows Phone. Its users have been immensely satisfied. So today, I decided to replace my classic, reliable dumbphone Nokia 1600 with a Lumia. Even though I am Lumo, Microsoft failed to send me a Lumia for free. Just to be sure, Motlorola and others have failed, too. ;-) So I finally bought the cheapest one, a cyan Lumia 520 – although I still had a plan to buy a 625 last night. Its non-replaceable battery was a reason why I decided for something else.
Lumia 520 is the entry-level phone with Windows Phone 8 (which will be upgraded to Windows 8.1 in two months). I bought it for $130 today (CZK 2,599, not counting 2 times CZK 15 for my stupid useless connections to one-time cellular data haha: I hope that the cellular Internet is safely turned off for a while now) but in the U.S., you may have an unlocked one for $104, too. It's a good price for a smartphone that allows you to do so many things.
Other texts on similar topics: computers, markets
Inflation and BICEP2: Steinhardt is missing the whole point
If the BICEP2's discovery of the primordial gravitational waves is valid, and I am confident that the evidence still strongly suggests that it is, then Paul Steinhardt, Neil Turok, and Roger Penrose are perhaps the world's three sorest losers because the absence of such primordial gravitational waves was what these men self-confidently predicted as a consequence of their bold idiosyncratic "cosmologies".
However, Physics World hired Neil Turok, Science Friday interviewed Roger Penrose, and Nature now asked Paul Steinhardt to inform us about the status and the future of cosmology.
This is really amusing, shocking, or hysterical, depending on your temperament. It's like the following situation: It's April 1945. The Red Army arrives to Berlin and CNN, MSNBC, and the New York Times interview Adolf Hitler, Joseph Goebbels, and the Japanese emperor (the order isn't necessarily the same as the order of the three physicists at the top!) about their plans for the future of Europe, Asia, and the world. ;-)
Other texts on similar topics: astronomy, experiments, philosophy of science, science and society, stringy quantum gravity
New physics? LHCb insists on a flavor anomaly in \(B\) decays
The Symmetry Magazine has told us about intriguing claims at an ongoing conference in the New York City:
LHCb glimpses possible sign of new physics
Electroweak penguins at LHCb (slides from the talk)
The LHCb experiment is smaller than the two LHC bulls, ATLAS and CMS, but it is more careful when it comes to the analyses of particles including \(b\)-quarks. These quarks incorporate themselves into hadrons – most typically, the \(B\)-mesons. The latter are analogous to pions.
Because the \(b\)-quarks belong to the third generation of the Standard Model quarks, all three generations are involved in their life stories. It follows that processes where \(B\)-mesons do something interesting are also sensitive to the CP-violating phase of the CKM matrix. The phase is inconsequential for all phenomena that only involve at most two generations of fermions. In other words, the LHCb experiment is particularly sensitive to violations of the CP symmetry, the symmetry with respect to the transformation placing all particles in the mirror and replacing them by antiparticles at the same moment.
The Standard Model violates the CP symmetry by the CKM phase only. Experiments even imply that the \(\theta\)-angle in front of the QCD \(F\wedge F\) term pretty much vanishes – a puzzling result known as the strong CP-problem whose resolution requires axions, according to most phenomenologists. All older experiments are compatible with the assumption that this CKM CP-violating phase is indeed the only "CP offender" in Nature. However, LHCb seems to be carefully coming with a paradigm shift by daring to suggest that they sometimes see new sources of CP violation.
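To make the phrase "the only CP offender" a bit more tangible, here is a small sketch that builds the CKM matrix in the standard parameterization and evaluates the Jarlskog invariant, the single reparameterization-independent measure of CP violation in the quark sector. The angles and the phase below are rough, commonly quoted values inserted only for illustration; they are not numbers from the LHCb talk.

```python
# Sketch: the single CP-violating quantity of the Standard Model's quark sector.
# The CKM matrix is built in the standard parameterization; the angle values are
# rough, commonly quoted ones and serve only as an illustration.
import numpy as np

th12, th13, th23 = np.radians(13.0), np.radians(0.20), np.radians(2.4)
delta = 1.2                      # the CP-violating phase, roughly 69 degrees

s12, c12 = np.sin(th12), np.cos(th12)
s13, c13 = np.sin(th13), np.cos(th13)
s23, c23 = np.sin(th23), np.cos(th23)
e = np.exp(1j * delta)

V = np.array([
    [ c12*c13,                  s12*c13,                 s13*np.conj(e)],
    [-s12*c23 - c12*s23*s13*e,  c12*c23 - s12*s23*s13*e, s23*c13       ],
    [ s12*s23 - c12*c23*s13*e, -c12*s23 - s12*c23*s13*e, c23*c13       ],
])

# Jarlskog invariant J = Im(V_us V_cb V_ub^* V_cs^*); every CP-violating effect
# of the CKM phase is proportional to this one tiny number (roughly 3e-5).
J = np.imag(V[0, 1] * V[1, 2] * np.conj(V[0, 2]) * np.conj(V[1, 1]))
print(f"J ~ {J:.1e}")
```

Any genuinely new source of CP violation would have to show up as a deviation from the correlations controlled by this single tiny number.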
Tom Steyer's donations are crippling and corrupting America
His events have nothing to do with "climate change"
Tom Steyer is worth $1.6 billion or so. He made his fortune through hedge funds 25+ years ago when it was still possible to extract lots of money from the inefficiencies of the markets. These days, the hedge fund industry belongs to the sector of lotteries – and the investors pay hefty service fees to be sure that, on average, they will underperform the stock market.
At any rate, he is not only a billionaire but also a moron. The word "moron" understates what he is. He is not just a moron; he is – and I was afraid to say the E-word – an environmentalist. The San Francisco Chronicle told us about a new fund he is paying from his money:
Billionaire sets up fund for victims of climate change
His initial $2 million donation is supposed to grow and it should compensate victims of wildfires and other "extreme weather events". Additional funds should similarly go to victims of droughts, floods, and other natural disasters. All this stuff is painted as being linked to the "climate change". In reality, these events don't depend on any "climate change" at all. Precipitation, sunshine, wildfires, droughts, floods etc. are weather events that may be counted as business-as-usual weather that takes place in a normal, moderate, tropical, subtropical or almost any other climate. These phenomena have been the norm on our blue, not green planet for billions of years.
These donations are being presented as charity, something that helps the society. In reality, they contaminate and corrupt the society, make it dumber and less honest, and reduce the potential for the growth and prosperity in the future. These harmful effects occur because numerous sufficiently gullible people are being manipulated into believing claims that are patently false. And these claims are not just some academic questions. They influence policy in a way that costs hundreds of billions of dollars a year and this number may grow to trillions.
Other texts on similar topics: climate, markets, politics, weather records
Stops at \(200\GeV\), a \(W^+W^-\) anomaly may be screaming
While supersymmetry remains the most well motivated candidate for new physics, it seems that all the observations at the LHC up to the \(8\TeV\) run in 2012 show that no new physics is needed and all "excessively bold and obvious" proposals for new physics – those that could explain the unbearable lightness of Higgs' being – have been excluded.
The Standard Model seems to be a more "thrifty" theory than any other bottom-up effective phenomenological model, and because there's no significant contradiction between the Standard Model and the LHC data, one is expected to pick the 40-year-old theory of nearly everything as his preferred effective theory of choice. Empirically speaking, no other theory is doing better.
Well, the first two hep-ph papers on the arXiv today argue that this common wisdom may very well be wrong, in a very exciting way!
Natural SUSY in Plain Sight by Curtin, Meade, Tien [Stony Brook]
'Stop' that ambulance! New physics at the LHC? by Kim, Rolbiecki, Sakurai, Tattersall [Madrid/London/Heidelberg]
The second title probably jokingly refers to the theme of hospitals for theories.
I still think that the seven authors of the two papers must have been aware of each other's work because it would be rather unlikely for two groups to publish a paper with pretty much the same speculative claim on the same day, just minutes after one another. However, I would normally expect some comment of the type "While this paper was being completed, we learned about some damn competing bastards [83] who wanted to scoop us; we're better, faster, correcter, and prettier than them, but we're also generous to cite their future paper because such incomplete future citations aren't counted, anyway."
But I don't see any such comment in either of the two papers so it is in principle conceivable that the papers are independent!
MINOS disfavors nearly degenerate sterile neutrinos
The experiments that have been clearly accepted require at least three "flavors" of neutrinos, namely the \(SU(2)\) partners of the charged leptons called \(e^\pm, \mu^\pm,\tau^\pm\) that are called\[
\nu_e,\quad \nu_\mu, \quad \nu_\tau.
\] The Greek letter \(\nu\) ("nu") stands for a "neutrino" and it hasn't been copyrighted by an artist yet. A "neutrino" is a word invented by Enrico Fermi. This word may be translated from Italian to English as a "small stupid Italian neutral thing" (you see that the Italian language is more concise).
ATLAS contest, off-topic: Out of 521 competitors, your humble correspondent is back in top ten right now.
More precisely, the three mass eigenstates of these three neutrino species are some linear superpositions of the \(SU(2)\) partners of the charged lepton mass eigenstates – and it's the mass eigenstates that we call \(e,\mu,\tau\). The required mixing – the unitary transformation mapping the partners of the charged lepton eigenstates to the neutrino mass eigenstates – is called the PMNS matrix and it is mostly analogous to the CKM matrix for quarks.
A charged lepton such as the electron/positron is described by a complex 4-component Dirac spinor which is why we encounter both electrons and positrons (electrons' antiparticles) and each of them may be spinning up and down. Recall that \(2\times 2 = 4\).
However, the known neutrinos have a correlation between the helicity and their being antimatter: the neutrinos we may produce and see are always left-handed while the antineutrinos are right-handed. That's why a two-component Majorana (or, less appropriately, Weyl) spinor is enough for the description of a neutrino flavor. \(SO(10)\) and higher grand unified theories – and their stringy extensions – like to predict right-handed neutrinos, too. They are likely to exist because the existence of heavy right-handed neutrinos is capable of explaining the low mass of the known neutrinos via the "seesaw mechanism".
However, the masses of the "mostly right-handed" and "mostly left-handed" neutrino eigenstates are so different that the "in principle" 4-component Dirac spinor for neutrinos and antineutrinos (a spinor relevant if the right-handed neutrinos exist which is uncertain) is effectively split into two 2-component Majorana (or Weyl) spinors, anyway. At low energies, only the single well-known Majorana (or Weyl) effectively 2-component spinor is needed to explain all the known observations. If the additional, massive Majorana 2-component part of the Dirac spinor doesn't exist, the nonzero mass of neutrinos implies that the neutrino and the antineutrino are really the same particle and must be able to oscillate in between each other. (However, the knowledge of the angular momentum prohibits the most generic "transmutations" of this sort, anyway: a left-handed neutrino/antineutrino has to stay left-handed.)
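As a crude illustration of the seesaw mechanism mentioned above, here is a two-line estimate; the Dirac mass and the right-handed Majorana mass below are illustrative ballpark choices, not measured values.

```python
# Crude seesaw estimate: a light neutrino mass m_nu ~ m_D^2 / M_R emerges when a
# Dirac mass of electroweak size pairs with a huge right-handed Majorana mass.
# Both input values are illustrative ballpark choices.
m_D = 174.0          # GeV, an electroweak-scale Dirac mass
M_R = 3.0e14         # GeV, a GUT-ish right-handed neutrino mass

m_nu_GeV = m_D**2 / M_R
print(f"m_nu ~ {m_nu_GeV*1e9:.2f} eV")   # ~0.1 eV, the right ballpark for oscillations
```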
Other texts on similar topics: experiments, string vacua and phenomenology
New Russia can no longer reunify with a Kiev-led country
The fights in the Novorussian Confederacy – the newly declared union of the Donetsk People's Republic and the Luhansk People's Republic – continue and we have gotten used to this sad situation, to women and children who have to live in basements, to save their lives from hostile airstrikes organized by the Maidan regime that has declared the whole large ethnic population of what used to be Ukraine as terrorists and that is working 24/7 to invade the newly born republics and to violently force everyone to obedience.
Sad news in climate science: George Kukla, a Czech American who was a Nixon adviser and who suggested as early as 1972 that people should think about global warming, passed away on Monday. Later, he became a prominent climate skeptic and even an advocate of "global cooling". Kukla was close to ex-president Klaus and may also be viewed as one of the top five scientists who convinced the U.S. not to sign the Kyoto protocol.
Like Slobodan Miloševič, Saddam Hussein, and other villains of ethnic cleansing, it is very clear now that the likes of Arseniy Yatsenyuk, Oleksandr Turchynov, and now even Petro Poroshenko deserve nooses. It is also self-evident, however, that many people will have to die before the basic justice may be restored. The ethnic cleansing will continue for quite some time. The fascist junta in Kiev just boasted that it has murdered 300 citizens of New Russia just in the last 24 hours. 300 murders a day is what some American officials praise as "restraint".
Other texts on similar topics: Europe, politics, Russia
Biometric passport terror
An hour ago, a seemingly mundane procedure has reminded me why I hate the government so much. After a decade, I had to apply for a new "Citizen's ID card", the most widely used ID in Czechia, and the new passport.
I don't remember a bureaucratic procedure of this kind that would be quite smooth but the experience today was much worse than the average.
Other texts on similar topics: biology, everyday life, science and society
It's the female hurricanes that are destructive: paper
I have made the observation – not seriously – many times, especially after the 2005 Atlantic hurricane season. It seems that the women hurricanes such as Katrina, Rita, and Wilma are those that are strong and that destroy things. Finally, this observation was made independently and published in peer-reviewed literature.
Thanks to W.M. Briggs who commented on the paper critically while the media covered it uncritically.
The hole at the center of a hurricane is often called an "eye" but new research suggests it is a vagina.
Female hurricanes are deadlier than male hurricanes
by Jung and 3 co-authors from Illinois and Arizona has made it to the Proceedings of the National Academy of Sciences of the United States of America. Not bad.
Other texts on similar topics: climate, freedom vs PC, science and society, weather records
Pi on T-shirts illegal due to a registered trademark
As a kid, I thought that \(\pi\approx 3.14159265358979\) is a very important number, so I memorized the first 30 digits when I was 8. Because the amount of "wow" I could see at school was higher than the efforts I had to invest to learn the digits, I learned 100 digits when I was 10 – I can still tell you how it goes – even though I was already recognizing very clearly that there's no point in learning too many digits.
The number\[
\Huge \pi^{\rm TM}\approx 3.14
\] is arguably the most important irrational number in mathematics – possible competitors are \(e\) and \(\sqrt{2}\). It is very natural for a high percentage of geeky mugs and T-shirts to display this Greek lowercase letter.
Off-topic, programming: BTW, with some improved score around 3.67, I returned to top 15 (and top 3%) of the Higgs machine learning contest LOL.
Houston, we have a problem. A bizarre T-shirt company Pi Productions led by a Brooklyn street artist Paul Ingrisano has successfully registered \(\pi\)™ as the U.S. trademark registration 4,473,631. Congratulations or, even more precisely, fak you, aßhole.
Other texts on similar topics: mathematics, science and society
EPA's carbon planning emulates communism
The EPA, America's major gang of official bureaucratic green brains, decided to write down a plan that every American source of energy should obey:
U.S. to unveil sweeping rules to cut power plant pollution (Reuters)
Obama to announce controversial emissions limit on power plants (Fox News)
The word "pollution" in that sentence is a piece of a dirty toxic propaganda, of course. In reality, they talk about CO2 which is no pollution in any sense – it is a natural gas that unavoidably accompanies a big part of the essential economic activities in the modern world and that is the primary source of the biological material within plants – and therefore also animals.
By 2030, the coal burners have to emit 30% less carbon dioxide than in 2005, and so on. Will they? Is that possible? I don't know. America may need much more coal in 2030 than it needs today and it will produce more CO2 emissions because it's not economically feasible to filter them; America may need much less coal due to the fracking boom and other, known or unknown technological alternatives and other reasons. What's more important is that the stupid green brains don't know the answer at all. These people exhibit a hardwired hardcore communist way of thinking, or the lack of it.
The ability to draw infantile pictures isn't the same thing as the ability to do science or the ability to wisely manage the economy.
The idea is that a group of enlightened leftists sits down and thinks about the best numbers that everyone should achieve in 5, 10, or 20 years for everyone to be optimally happy. And everyone else is just obliged to realize what these superior brains have outlined. That's how countries are supposed to build a rosy future. Does it work?
Other texts on similar topics: climate, Kyoto, politics
Franson's "breakthrough" concerning the speed of l...
Have Australians and their photons legitimized tim...
AMS-02: no cutoff in positron fraction up to \(400...
Feynman was right: easily explainable theories can...
David Evans' notch-filter theory of the climate is...
Obama's commencement speech and the illusion of sc...
Quantum contextuality is just another fancy word f...
Zeman's speech on Arabs, Islam, and Israel's indep...
Inflation and BICEP2: Steinhardt is missing the wh...
New physics? LHCb insists on a flavor anomaly in \...
Tom Steyer's donations are crippling and corruptin...
Stops at \(200\GeV\), a \(W^+W^-\) anomaly may be ...
New Russia can no longer reunify with a Kiev-led c...
It's the female hurricanes that are destructive: p...
|
CommonCrawl
|
Gabdullin Mikhail Rashidovich
(recent publications)
1. Mikhail R. Gabdullin, "Trigonometric series with noninteger harmonics", J. Math. Anal. Appl., 508 (2022), 125792, 11 pp. (published online), arXiv: 2102.05698
2. Analytic and combinatorial number theory, Collected papers. On the occasion of the 130th birthday of Academician Ivan Matveevich Vinogradov, Trudy MIAN, 314, ed. D. V. Treschev, S. V. Konyagin, V. N. Chubarikov, M. A. Korolev, M. R. Gabdullin, MIAN, Moscow, 2021, 346 pp.
3. Kevin Ford, Mikhail R. Gabdullin, "Sets whose differences avoid squares modulo $m$", Proc. Amer. Math. Soc., 149 (2021), 3669–3682 ;
4. M. R. Gabdullin, "Lower Bounds for the Wiener Norm in $\mathbb Z_p^d$", Math. Notes, 107:4 (2020), 574–588
5. M. R. Gabdullin, Dokl. RAN. Math. Inf. Proc. upr., 491:1 (2020), 19–22
6. M. R. Gabdullin, S. V. Konyagin, "Stechkin's works in number theory", Chebyshevskii Sb., 21:4 (2020), 9–18
7. M. R. Gabdullin, "Sets in $\mathbb{Z}_m$ whose difference sets avoid squares", Sb. Math., 209:11 (2018), 1603–1610 (cited: 1)
8. M. R. Gabdullin, "Estimates for character sums in finite fields of order $p^2$ and $p^3$", Proc. Steklov Inst. Math., 303 (2018), 36–49
9. M. R. Gabdullin, "On squares in special sets of finite fields", Chebyshevskii Sb., 17:2 (2016), 56–63
10. M. R. Gabdullin, "On the Squares in the Set of Elements of a Finite Field with Constraints on the Coefficients of Its Basis Expansion", Math. Notes, 101:2 (2017), 234–249 (cited: 4)
11. M. R. Gabdullin, "On the Divergence of Fourier Series in the Spaces $\varphi(L)$ Containing $L$", Math. Notes, 99:6 (2016), 861–869
12. M. R. Gabdullin, "On the divergence of trigonometric Fourier series in classes $\varphi(L)$ contained in $L$", Proc. Steklov Inst. Math. (Suppl.), 297, suppl. 1 (2017), 81–87
13. M. R. Gabdullin, "An estimate of the geometric mean of the derivative of a polynomial in terms of its uniform norm on a closed interval", Trudy Inst. Mat. i Mekh. UrO RAN, 18, no. 4, 2012, 153–161
Full list of publications
|
CommonCrawl
|
Metric and Analytic Aspects of Moduli Spaces
Seminars (MAM)
Presentation Material
MAM 20th July 2015
14:00 to 15:00 Geometric invariant theory for graded unipotent group actions and applications
16:00 to 17:00 ALG and ALH spaces
MAM 21st July 2015
11:00 to 12:30 Physics of Moduli Space Dynamics of Solitons
15:30 to 16:30 Decay and Moduli of Yang Mills Instantons
MAM 22nd July 2015
11:00 to 12:30 Analysis on singular spaces
15:30 to 17:00 New results on euclidean monopole metrics
MAM 23rd July 2015
11:00 to 12:30 Hitchin's self-duality equation and limiting configurations
14:00 to 15:00 Ruling out non-collapsed singularities in Riemannian 4-manifolds via the symplectic geometry of their twistor spaces
16:00 to 17:00 The renormalized volume of hyperbolic 3 manifolds
11:00 to 12:30 Instanton and Bow Moduli Spaces
14:00 to 15:00 A Vasy Analysis of quantum N-body type problems
MAMW01 27th July 2015
10:00 to 11:00 Folded hyperKähler metrics
11:30 to 12:30 H Auvray An analytic construction of dihedral ALF gravitational instantons
14:30 to 15:30 P Boalch Non-perturbative hyperkahler manifolds
16:00 to 17:00 Kähler metrics and Chern forms on the moduli space of punctured Riemann surfaces
09:00 to 10:00 On the geometry of some Hyperkaehler manifolds
10:00 to 11:00 X Zhu Nodal degeneration of hyperbolic metrics and application to Weil-Petersson metric on the moduli space
11:30 to 12:30 Asymptotics of hyperbolic, Weil-Petersson and Takhtajan-Zograf metrics
14:30 to 15:30 Renormalized volume on the Teichmüller space of punctured Riemann surfaces
16:00 to 17:00 LD Saper Perverse sheaves on compactifications of locally symmetric spaces
09:00 to 10:00 R Bielawski Asymptotics and compactification of monopole moduli space
10:00 to 11:00 Coulomb branches of 3-dimensional $\mathcal N=4$ gauge theories
11:30 to 12:30 ALG and the SU($\infty$) Toda equation
16:00 to 17:00 'Breakout' Session
09:00 to 10:00 T Hausel Hyperkähler toy models
10:00 to 11:00 Quantization of integrable systems of periodic monopoles
11:30 to 12:30 Schiffer variations and Abelian differentials
14:30 to 15:30 The charge density of a monopole and its asymptotic tail
16:00 to 17:00 The elliptic genus - a view from conformal field theory
MAMW01 31st July 2015
09:00 to 10:00 Polynomial Pick forms for affine spheres, real projective polygons, and surface group representations in PSL(3,R).
10:00 to 11:00 Mass in Kaehler Geometry
11:30 to 12:30 Coulomb Branch and the Moduli Space of Instantons
MAM 3rd August 2015
14:45 to 15:45 A Dancer Symplectic and hyperkahler implosion
16:00 to 17:00 L Kamenova On Kobayashi's conjecture for K3 surfaces and hyperkähler manifolds
MAM 4th August 2015
11:00 to 12:00 Higgs bundles, spectral data, and fiber products of curves
11:00 to 12:00 L Fredrickson A construction of limiting solutions of Hitchin's equations
16:00 to 17:00 (Open) Problem Discussion Session
11:00 to 12:00 J Hurtubise Monopoles on circle bundles
11:00 to 12:00 Topology and Compactifications of Moduli Spaces
14:00 to 15:00 J Lotay The moduli space of hyperKaehler metrics on 4-manifolds with boundary
MAM 11th August 2015
14:00 to 15:00 R Maldonado Moduli space of periodic monopoles and Hitchin equations on a cylinder
14:00 to 15:00 HyperKaehler metrics with circle action
15:30 to 16:30 Asymptotically Conic Calabi-Yau Manifolds
15:00 to 16:00 Two predictions from physics relevant to metric and analytic aspects of moduli spaces
|
CommonCrawl
|
Friction forces and sliding slabs
I have 2 questions, one generalizing the other.
Question 1: Suppose we have 2 slabs resting horizontally on a table. Assume there is friction between the 2 slabs as well as between the bottom slab and the table and that all friction coefficients are different. Now we apply a horizontal force to the top slab. How do we figure out the direction of movement of the bottom slab?
Question 2: We now have a stack of n slabs resting horizontally on the table. All surfaces in contact have friction, and all friction coefficients are different. If we apply a horizontal force to the top slab, how can we predict the direction of movement of each slab in the stack?
For the first question, I am guessing that depending on the amount of force and the values of the friction coefficients, there can be multiple scenarios: the slabs won't move till the force overcomes the first static friction coefficient, then they might move together, then one might move in one direction, and the other... well, how does the bottom slab move exactly? That's where I am confused. It seems I have to know the direction of movement to set the sign of the friction force between the bottom slab and the table correctly, but I am not sure how to establish the force transmitted by the top slab to the bottom one. Is it just the friction force between top and bottom slab?
For the second question, I'd gladly apply the same method as for the first question repetitively, but lo and behold, I haven't solved the first question yet...
This is not for a "homework" and I am not a student trying to get his/her homework answered :-) Thanks!
newtonian-mechanics classical-mechanics friction
Frank
$\begingroup$ First remove friction. Which way things are going to move? Apply friction to oppose this motion. That is how you figure out the direction of friction. $\endgroup$ – ja72 Oct 1 '13 at 1:23
So you want the formal answer to question 2? Read on:
Let's say we have $k$ blocks, numbered $i=1 \ldots k$ with 1 on the bottom and $k$ on the top. The top block has an applied force $\mathcal{P}$ and each block has mass $m_i$ and friction coefficient with the previous block (or the ground) $\mu_i$. Also the movement of each block is characterized by the acceleration $\ddot{x}_i$. In matrix form the above define
$$ P=\begin{pmatrix}0\\ 0\\ \vdots\\ 0\\ \mathcal{P} \end{pmatrix} $$ $$ m=\begin{bmatrix}m_{1}\\ & m_{2}\\ & & \ddots\\ & & & m_{k-1}\\ & & & & m_{k} \end{bmatrix} $$ $$ \mu=\begin{bmatrix}\mu_{1}\\ & \mu_{2}\\ & & \ddots\\ & & & \mu_{k-1}\\ & & & & \mu_{k} \end{bmatrix} $$ $$ \ddot{x}=\begin{pmatrix}\ddot{x}_{1}\\ \ddot{x}_{2}\\ \vdots\\ \ddot{x}_{k-1}\\ \ddot{x}_{k} \end{pmatrix} $$
The weight on each block is $m_i g$ and the contact force with the previous block (or the ground) is $N_i$. Also the friction limit is $F_i \leq \mu_i N_i$. In matrix form the above is
$$ N=\begin{pmatrix}N_{1}\\ N_{2}\\ \vdots\\ N_{k-1}\\ N_{k} \end{pmatrix} $$
$$ F \leq \begin{bmatrix}\mu_{1}\\ & \mu_{2}\\ & & \ddots\\ & & & \mu_{k-1}\\ & & & & \mu_{k} \end{bmatrix}\begin{pmatrix}N_{1}\\ N_{2}\\ \vdots\\ N_{k-1}\\ N_{k} \end{pmatrix}=\begin{pmatrix}\mu_{1}N_{1}\\ \mu_{2}N_{2}\\ \vdots\\ \mu_{k-1}N_{k-1}\\ \mu_{k}N_{k} \end{pmatrix} $$
Why do we need all this? To make the equation of motion for the $i$-th block, which is $P_i - F_i + F_{i+1} = m_i \ddot{x}_i $
Look at the free body diagram above. By convention the $i$-th friction $F_i$ opposes the motion, which is to the right. The friction $F_{i+1}$ acting on the block above points to the left, so its reaction on this block points to the right. That is why the sum of the forces is $P_i + F_{i+1} - F_i$.
The balance in matrix form, using an adjacency matrix is
$$ A=\begin{bmatrix}1 & -1\\ & 1 & -1\\ & & \ddots & \ddots\\ & & & 1 & -1\\ & & & & 1 \end{bmatrix} $$ $$ P-A\, F=m\ddot{x} $$
which expands out to
$$\begin{pmatrix}0\\ 0\\ \vdots\\ 0\\ \mathcal{P} \end{pmatrix}+\begin{pmatrix}F_{2}-F_{1}\\ F_{3}-F_{2}\\ \vdots\\ F_{k}-F_{k-1}\\ -F_{k} \end{pmatrix}=\begin{pmatrix}m_{1}\ddot{x}_{1}\\ m_{2}\ddot{x}_{2}\\ \vdots\\ m_{k-1}\ddot{x}_{k-1}\\ m_{k}\ddot{x}_{k} \end{pmatrix}$$
Now the contact normal force is derived from the blocks above it with
$$ A\,N = m\,g $$ $$ N = A^{-1} m\,g $$ $$ \begin{pmatrix}N_{1}\\ N_{2}\\ \vdots\\ N_{k-1}\\ N_{k} \end{pmatrix}=\begin{bmatrix}1 & 1 & 1 & 1 & 1\\ & 1 & 1 & 1 & 1\\ & & \ddots & \vdots & \vdots\\ & & & 1 & 1\\ & & & & 1 \end{bmatrix}\begin{pmatrix}m_{1}g\\ m_{2}g\\ \vdots\\ m_{k-1}g\\ m_{k}g \end{pmatrix} $$
So all together $$ P - \left( A\,\mu A^{-1}\right) m\, g=m\ddot{x} $$
or with $ \mu_{SYS}=A\,\mu A^{-1} $
$$ \mu_{SYS}=\begin{bmatrix}1 & -1\\ & 1 & -1\\ & & \ddots & \ddots\\ & & & 1 & -1\\ & & & & 1 \end{bmatrix}\begin{bmatrix}\mu_{1}\\ & \mu_{2}\\ & & \ddots\\ & & & \mu_{k-1}\\ & & & & \mu_{k} \end{bmatrix}\begin{bmatrix}1 & 1 & 1 & 1 & 1\\ & 1 & 1 & 1 & 1\\ & & \ddots & \vdots & \vdots\\ & & & 1 & 1\\ & & & & 1 \end{bmatrix} \\ \mu_{SYS}=\begin{bmatrix}\mu_{1} & \mu_{1}-\mu_{2} & \cdots & \mu_{1}-\mu_{2} & \mu_{1}-\mu_{2}\\ & \mu_{2} & \cdots & \mu_{2}-\mu_{3} & \mu_{2}-\mu_{3}\\ & & \ddots & \vdots & \vdots\\ & & & \mu_{k-1} & \mu_{k-1}-\mu_{k}\\ & & & & \mu_{k} \end{bmatrix} $$
$$ P -\mu_{SYS} m\, g=m\ddot{x} $$ $$ \ddot{x} = m^{-1} \left(P-\mu_{SYS} m\, g \right) $$
So this is the motion once we have slipping. We need to reverse the equations and find the traction required when $\ddot{x}=0$, which ends up being
$$ \mu_i \geq \frac{\mathcal{P}}{g (\sum_{j=i}^k m_j)} $$
When the above is not satisfied the contact is slipping. Otherwise the system will have $\ddot{x}_i=0$ wherever the contact sticks.
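If you want to play with these formulas numerically, here is a minimal sketch of the matrix formulation above; the masses, coefficients, and applied force are made-up values, chosen so that every contact actually slips, which is the regime the final acceleration formula describes.

```python
# Minimal numerical sketch of the matrix formulation above.  Block 1 is at the
# bottom, block k at the top; the force P acts on the top block.  All numbers
# are made up and chosen so that every contact really slips.
import numpy as np

g = 9.81
m_vec  = np.array([1.0, 1.0, 1.0])   # m_1 ... m_k (bottom to top)
mu_vec = np.array([0.1, 0.2, 0.6])   # mu_i = coefficient of the contact *below* block i
Pmag   = 10.0                        # applied horizontal force on the top block
nb = len(m_vec)

P = np.zeros(nb); P[-1] = Pmag
A = np.eye(nb) - np.eye(nb, k=1)     # adjacency matrix: 1 on the diagonal, -1 above it
Ainv = np.linalg.inv(A)              # upper-triangular matrix of ones

N = Ainv @ (m_vec * g)               # N_i = weight of block i plus everything above it

# Stick criterion: contact i holds as long as mu_i >= P / (g * sum_{j>=i} m_j)
print("contacts able to hold under full stick:", mu_vec * N >= Pmag)

# If every contact slips, the acceleration formula above applies directly:
mu_sys = A @ np.diag(mu_vec) @ Ainv
xdd = np.linalg.solve(np.diag(m_vec), P - mu_sys @ (m_vec * g))
print("accelerations if all contacts slip:", xdd)   # ~[0.98, 1.96, 4.11] m/s^2
```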
Block Matrix Solution
Here are the steps needed to solve the above system
Stick all contacts with $\ddot{x}=0$ and find the friction needed $F^{\star}=A^{-1}P$. For example $$F^{\star}=\begin{bmatrix}1 & 1 & \cdots & 1 & 1\\ & 1 & \cdots & 1 & 1\\ & & \ddots & \vdots & \vdots\\ & & & 1 & 1\\ & & & & 1 \end{bmatrix}\begin{pmatrix}0\\ 0\\ \vdots\\ 0\\ \mathcal{P} \end{pmatrix}=\begin{pmatrix}\mathcal{P}\\ \mathcal{P}\\ \vdots\\ \mathcal{P}\\ \mathcal{P} \end{pmatrix}$$
Compose the system mass matrix $M=A^{-1}m$ such that the horizontal equations of motion are $\boxed{F^{\star}=M\ddot{x}+F}$
Compare friction needed to available traction with $F^{\star}<\mu N$. Construct two projection matrices $T$ and $U$ with $k$ rows and values as follows: For each block $i$ that is sliding add a column to $U$ with the i-th row element equal to 1 and all others 0. For each block $i$ that is sticking add a column to $T$ with the i-th row element equal to 1 and all others 0. For example if only the last element (top) slides then $$ \begin{aligned} T&=\begin{bmatrix}1\\ & 1\\ & & \ddots\\ & & & 1\\ & & & 0 \end{bmatrix}&U&=\begin{bmatrix}0\\ 0\\ \vdots\\ 0\\ 1 \end{bmatrix} \end{aligned}$$
Define the known motions (sticking blocks) with $T^{\top}\ddot{x}=0$ and the known friction (sliding blocks) with $f=U^{\top}F=U^{\top}\mu N$. With the example above then $$\begin{aligned} \begin{pmatrix}0\\ 0\\ \vdots\\ 0 \end{pmatrix}&=\begin{bmatrix}1\\ & 1\\ & & \ddots\\ & & & 1\\ & & & 0 \end{bmatrix}^{\top}\begin{pmatrix}\ddot{x}_{1}\\ \ddot{x}_{2}\\ \vdots\\ \ddot{x}_{k-1}\\ \ddot{x}_{k} \end{pmatrix}=\begin{pmatrix}\ddot{x}_{1}\\ \ddot{x}_{2}\\ \vdots\\ \ddot{x}_{k-1} \end{pmatrix}\\f&=\begin{bmatrix}0\\ 0\\ \vdots\\ 0\\ 1 \end{bmatrix}^{\top}\begin{pmatrix}\mu_{1}N_{1}\\ \mu_{2}N_{2}\\ \vdots\\ \mu_{k-1}N_{k-1}\\ \mu_{k}N_{k} \end{pmatrix}=\begin{pmatrix}\mu_{k}N_{k}\end{pmatrix} \end{aligned}$$
Define the unknown motions vector $a$ and unknown forces vector $R$ such that the block motion is $\ddot{x}=U\, a$ and the block friction $F=T\, R+M\, U\left(U^{\top}M\, U\right)^{-1}f$. Note that $U^{\top}F=f$ and $T^{\top}M^{-1}F=\left(T^{\top}M^{-1}T\right)\, R$.
The horizontal equations of motion are $\boxed{ F^{\star}=T\, R+M\, U\left(a+\left(U^{\top}M\, U\right)^{-1}f\right)}$ with $R$ and $a$ as unknowns.
Project to the sliding blocks with $U^{\top}F^{\star}=U^{\top}M\, U\left(a+\left(U^{\top}M\, U\right)^{-1}f\right)$ to get $\boxed{a=\left(U^{\top}M\, U\right)^{-1}\left(U^{\top}F^{\star}-f\right)}$
Project to the sticking blocks with $T^{\top}M^{-1}F^{\star}=\left(T^{\top}M^{-1}T\right)\, R$ to get $\boxed{R=\left(T^{\top}M^{-1}T\right)^{-1}T^{\top}M^{-1}F^{\star}}$
Back substitute the projections to get $\ddot{x}=U\, a$ and $F=F^\star - M \ddot{x}$.
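And here is a sketch of the boxed stick/slip steps; it uses a naive one-pass classification (a contact slips wherever $F^{\star}$ exceeds its static limit), so in general the stick/slip sets may need re-checking, and all numbers are made up.

```python
# Sketch of the boxed stick/slip steps above.  A naive one-pass classification is
# used (a contact slips wherever F* exceeds its static limit); in general the
# stick/slip sets may need re-checking.  All numbers are made up.
import numpy as np

g = 9.81
m_vec = np.array([2.0, 1.5, 1.0])    # m_1 ... m_k, bottom to top
mu_s  = np.array([0.6, 0.5, 0.3])    # static coefficients of each contact
mu_k  = np.array([0.5, 0.4, 0.25])   # kinetic coefficients (assumed a bit lower)
Pmag  = 5.0
nb = len(m_vec)

P = np.zeros(nb); P[-1] = Pmag
A = np.eye(nb) - np.eye(nb, k=1)
Ainv = np.linalg.inv(A)

N     = Ainv @ (m_vec * g)           # contact normal forces
Fstar = Ainv @ P                     # friction needed if every contact sticks
M     = Ainv @ np.diag(m_vec)        # system mass matrix:  F* = M xdd + F

slip = Fstar > mu_s * N              # contacts whose static limit is exceeded
U = np.eye(nb)[:, slip]              # projector onto the sliding contacts
f = (mu_k * N)[slip]                 # known kinetic friction at those contacts

# a = (U^T M U)^{-1} (U^T F* - f), then back-substitute
a   = np.linalg.solve(U.T @ M @ U, U.T @ Fstar - f) if slip.any() else np.zeros(0)
xdd = U @ a                          # accelerations (zero for the sticking blocks)
F   = Fstar - M @ xdd                # friction actually transmitted at each contact
print("slipping contacts:", slip)
print("accelerations    :", xdd)     # here only the top block moves, a ~ 2.5 m/s^2
print("contact friction :", F)
```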
ja72
$\begingroup$ I think W could be dropped: you are not using later in the derivation. Nice derivation :-) $\endgroup$ – Frank Oct 1 '13 at 3:40
$\begingroup$ Can you explain a bit more how that works in terms of directions of the friction forces? I see that you are using Pi - Fi + Fi+1 = mi d2(xi)/dt2, which seems to assume that Fi+1 has the same direction as Pi. Is that always going to work? $\endgroup$ – Frank Oct 1 '13 at 3:49
$\begingroup$ The equation with $\ddot{x}$ is never applicable right? $\endgroup$ – Brian Moths Oct 1 '13 at 12:54
$\begingroup$ @Frank W is used to derive N. The contact normal force is the sum of the weights of the blocks above it. $\endgroup$ – ja72 Oct 1 '13 at 14:51
$\begingroup$ I'll put it in Mathematica and try it for a couple of cases where I think I know the solution. Thanks a lot! $\endgroup$ – Frank Oct 1 '13 at 17:34
Just going by the definition of friction , it tends to oppose relative motion between two objects or in other words it opposes the TENDENCY of motion.
Ans 1) So if you apply a horizontal force to the right to the top slab, the top slab will definitely feel friction, let's say $fr_1$, in the backward direction (left). On the lower slab this $fr_1$ acts in the opposite direction with the same magnitude (remember Newton's 3rd law). For the lower surface of the lower block, $V_{relative}$ with respect to the ground frame is to the right. Again friction should oppose relative motion, hence $fr_2$ acts in the backward (left) direction. Hence $fr_1$ and $fr_2$ act in the right and left directions respectively on the lower slab, and $fr_1$ acts in the left direction on the upper slab.
Applying the same concept we get quite an interesting answer for the $n$ slabs. (Think!)
Simar
$\begingroup$ I think this is only half correct - IMHO you need to create scenarios depending on the values of the friction coefficients. For example, if the friction with the table is very small when the friction between the 2 slabs is high, depending on the force applied, the two slabs could move together towards the right. Also, agreed that there is fr1 and fr2, but depending on their magnitude, the net acceleration of the bottom slab could be either right or left, IMHO. $\endgroup$ – Frank Oct 1 '13 at 3:33
Let's consider the case where there are two blocks first. In this case, there are two interfaces, the first one is below the top block, and it is described by a coefficient of static friction $\mu_{ts}$. The second one is below the bottom block and is described by a coefficient of static friction $\mu_{bs}$.
If you apply a force $F$ to the top block, and if no motion is to happen, then the force from the bottom block on the top block must cancel $F$. The source of this force must be friction. The strength of friction can be as large as $m_t g \mu_{ts}$ where $m_t$ is the mass of the top block. Also, if no motion is to happen, the ground must provide a frictional force $F$ on the bottom block. This force can be as large as $(m_t + m_b) g \mu_{bs}$, where $m_b$ is the mass of the bottom block.
If you increase $F$ from zero, eventually something will move. The weakest interface will break, where "weakest" means it has the lowest maximum $F$ it can sustain. After the slip occurs, the force needed to be supplied by the other interfaces decreases. Say the top interface is the weaker one; then after the top interface breaks, the force needed to be supplied by the other interfaces will be $m_t g \mu_{tk}$, where now $\mu_{tk}$ is the coefficient of kinetic friction for the interface. Thus no additional slipping will occur (as long as we ignore inertia).
Now lets talk about the case of many blocks. Let $m_n$ be the mass of the $n$th block from the top. Let me $\mu_{ns}$ be the coefficient of static friction for the interface below the $n$th block. Then the maximum force that can be supported by the $n$th interface is $F_n = \sum_{i=1}^n m_i g \mu_{ns}$.
If you increase $F$ from zero, eventually something will move. The movement will occur at the interface that has the lowest maximum force it can support, that is, the interface with the lowest $F_n$. After that, the force at the other interfaces will drop to $\frac{\mu_{nk}}{\mu_{ns}}F_n$, and no further interfaces will break. This means all of the blocks above the broken interface will move at the same speed and all the blocks below the broken interface will remain stationary.
Brian Moths
$\begingroup$ I can't find any fault in what you are saying, but it doesn't IMHO answer the question: what direction does the bottom slab move, in the case of 2 slabs? $\endgroup$ – Frank Oct 1 '13 at 3:35
$\begingroup$ I am not sure that your reasoning for the n slabs case is valid: if you continue increasing the force after the weakest interface "broke", why wouldn't you reach a point where the second weakest interface can break? $\endgroup$ – Frank Oct 1 '13 at 3:36
$\begingroup$ @Frank, for your first comment about the case of two slabs, just reread the bottom paragraph with $n=2$, so the bottom slab doesn't move unless the bottom interface is the weaker one and the force is sufficiently high to break it. For your second comment, the forces at the interface decrease to $\frac{\mu_{nk}}{\mu_{ns}}F_n$ which is too weak to get any of the other interfaces to break, so only a maximum of one interface will break. $\endgroup$ – Brian Moths Oct 1 '13 at 12:57
$\begingroup$ Staying stationary feels incorrect to me: first, in the case of 2 slabs, if the bottom slab was moving in unison with the top block when F was not strong enough, why would the bottom slab suddenly stop? Second, even if an interface "breaks", it doesn't mean the friction coefficient there drops to zero. It changes to the kinetic friction coefficient, and there is still a force transmitted to the bottom slab via that friction. So, there are forces on the bottom slab, and hence possibly an acceleration. IMHO the bottom slab becomes stationary only if the forces on it precisely cancel. $\endgroup$ – Frank Oct 1 '13 at 14:18
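To make the bookkeeping in the answer above concrete, here is a minimal Python sketch of the weakest-interface rule $F_n = \sum_{i=1}^{n} m_i g \, \mu_{ns}$; the masses and friction coefficients below are illustrative assumptions, not values from the question.

```python
# Sketch: which interface slips first as the horizontal force F applied to the
# top block is increased from zero. Interface n sits below the n-th block from
# the top; its maximum static force is mu_n times the weight of all blocks
# resting on it (blocks 1..n). Masses and coefficients are illustrative only.

g = 9.81                     # m/s^2
masses = [2.0, 3.0, 5.0]     # kg, top block first
mu_static = [0.5, 0.3, 0.6]  # interface below block 1, 2, 3 (last one is the table)

def max_static_forces(masses, mu_static, g=9.81):
    """Maximum horizontal force each interface can sustain before slipping."""
    forces = []
    load = 0.0
    for m, mu in zip(masses, mu_static):
        load += m * g                # normal load carried by this interface
        forces.append(mu * load)
    return forces

F_max = max_static_forces(masses, mu_static, g)
weakest = min(range(len(F_max)), key=F_max.__getitem__)
print("Max sustainable force per interface (N):", [round(f, 1) for f in F_max])
print(f"Interface {weakest + 1} slips first once F exceeds {F_max[weakest]:.1f} N; "
      f"under the answer's rule, the blocks above it move together and those below stay put.")
```

With $n = 2$ this reduces to the two-block discussion: the top block slides alone if $m_t g \mu_{ts} < (m_t + m_b) g \mu_{bs}$, and otherwise both blocks slide together on the table.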
July 2017, 37(7): 4071-4089. doi: 10.3934/dcds.2017173
Dacorogna-Moser theorem on the Jacobian determinant equation with control of support
Centro de Matemática da Universidade do Porto, Rua do Campo Alegre, 687, 4169-007 Porto, Portugal
Received: August 2016. Revised: March 2017. Published: April 2017.
The original proof of the Dacorogna-Moser theorem on the prescribed Jacobian PDE, $\text{det}\, \nabla\varphi=f$, can be modified in order to obtain control of support of the solutions from that of the initial data, while keeping optimal regularity. Briefly, under the usual conditions, a solution diffeomorphism $\varphi$ satisfying $\text{supp}(f-1)\subset\varOmega\Longrightarrow\text{supp}(\varphi-\text{id})\subset\varOmega$ can be found, and $\varphi$ is still of class $C^{r+1,\alpha}$ if $f$ is $C^{r,\alpha}$, the domain of $f$ being a bounded connected open $C^{r+2,\alpha}$ set $\varOmega\subset\mathbb{R}^{n}$.
Keywords: Volume preserving diffeomorphism, volume correction, prescribed Jacobian PDE, control of support, optimal regularity.
Mathematics Subject Classification: Primary: 35F30.
Citation: Pedro Teixeira. Dacorogna-Moser theorem on the Jacobian determinant equation with control of support. Discrete & Continuous Dynamical Systems - A, 2017, 37 (7) : 4071-4089. doi: 10.3934/dcds.2017173
Figure 6.1. Finding $h_{\hat{t}}$ satisfying $\int_{\varOmega}(f/\widetilde{f})\,h_{\hat{t}} = \mathrm{meas}\,\varOmega$. The functions $h_t$ are seen in the background (bell shaped).
Figure 8.1. Extending $g \in C^{1}(U)$ to the whole $\varOmega$.
Path-dependent import-substitution policies: the case of Argentina in the twentieth century
Sebastián Galiani & Paulo Somaini (ORCID: orcid.org/0000-0001-9485-1866)
We use a simple three-sector model to narrate the economic history of Argentina during the twentieth century as seen through the prism of its integration into and dis-integration from the world economy. Assuming that capital moves between the primary and secondary sectors more slowly than labor moves between the secondary and tertiary sectors, we show that import-substitution policies exhibit path dependence. We contend that the endogenous industrialization of the inter-war period generated political changes that paved the way for import-substitution industrialization during the post-war period. Even if this inward-oriented strategy failed to spur economic growth, protectionist policies became entrenched. In the absence of mature political institutions, the liberalization process was delayed and, when it finally did occur, it was extremely costly.
Argentina tends to grow relatively faster when its economy is integrated into world markets. Why, then, did it remain closed to world trade for 60 years during the twentieth century? In this paper we contend, like many other authors have in the past (see, among others, Díaz-Alejandro 1970, 1984; Mallon and Sourrouille 1975; O'Donnell 1977; Waisman 1987; Rogowski 1989; Gerchunoff 1989; Taylor 1994; Gerchunoff and Llach 2004), that a severe distributional conflict lies at the core of this phenomenon. In Argentina, for a large part of the twentieth century, what was efficient was not popular. In the words of one insightful economic historian of the Argentine Republic:
"… Argentina is too transparently a Stolper–Samuelson country where a zero-sum view of economic policy is plausible in the short and even the medium term" (Díaz-Alejandro 1984).
The ideas behind the Stolper–Samuelson theorem explain the increasingly pronounced urban–rural political cleavage seen in the aftermath of the Second World War; however, they do not explain the process of integration into world markets. We show that these processes can be understood once we add a non-tradable sector and frictions in the mobility of capital across sectors. Under these conditions, free trade can benefit all factors of production. However, even if that is the case, protectionism may persist if political institutions are not able to enforce long-term agreements between political actors.
Up to the 1930s, Argentina was well integrated into the world economy and, although some protectionism naturally arose in the wake of the worldwide crisis of the 1930s, it was only after the Second World War that the country closed its economy off from world markets and then remained in a situation close to autarky until the mid-1970s. It was only after a long period of absolute economic decline and devastating hyperinflation that an intensive program of reform and integration into the world economy was adopted.
In this paper, we present a simple three-sector model to narrate the economic history of Argentina during the twentieth century as seen through the prism of its integration into and dis-integration from the world economy. In our model, the primary sector uses land and capital to produce agricultural goods; the secondary sector employs labor and capital to produce manufactured or industrial goods; and the tertiary sector uses only labor to produce services. We assume that (as in fact is the case) Argentina has a comparative advantage in the production of agricultural goods. Thus, the economy exports agricultural goods and imports manufactured goods; services are non-tradable and are always produced in equilibrium. The government's intervention in the economy is limited to taxing trade and distributing the proceeds among the relevant agents.
We characterize the steady-state equilibria of this economy and show that the economy could operate under specialization and trade, where neither labor nor capital is employed to produce manufactured goods; under diversification and trade, where the manufacturing sector is active in production; or under autarky, where there is no trade (for the sake of completeness, we also show that there are other equilibria where the patterns of trade reverse).
We focus on the functional distribution of income; therefore, we consider three socioeconomic groups: workers, landowners and capitalists. We use our model to characterize these different groups' demands for protectionist policies. Assuming that capital moves between the primary and secondary sectors more slowly than labor moves between the secondary and tertiary sectors, we show that import-substitution policies exhibit path dependence. Indeed, this is a very important insight into understanding the economic history of Argentina.
Using the insights derived from our model, we then argue that much of the distributional conflict that arose during that period was among owners of different production factors and that trade policies were widely used to shift income across groups. At the beginning of the century, the country specialized in the production of primary goods and was highly integrated into world trade. During the inter-war period, trade opportunities and the terms of trade worsened and this led to an incipient industrialization process. Argentina started the second half of the century with a very different economic configuration. Industrialization had come a long way, and integration into world markets was weak. These new economic conditions also changed the political equilibrium; urban workers employed in the manufacturing sector and industrialists were now major social actors and they were demanding protectionist policies. Traditional sectors comprising owners of factors employed in the primary sector, on the other hand, supported free trade policies. This distributional conflict surrounding trade policy shaped the politics of the second half of the century.
The years that followed the Second World War were a time of an extraordinarily rapid expansion of trade, in which Argentina was not an active participant. Instead, it embarked on an ambitious process of import-substitution industrialization that resulted in bumpy cycles of economic expansion followed by sharp recessions. Argentina had the opportunity to return to an export-led growth strategy, but the new political forces that emerged from the industrialization process during the inter-war period were able to block any attempt to liberalize.
Liberalization could have been achieved gradually, thus mitigating the losses of those with vested interests in protected activities. However, it would have required a set of political institutions capable of enforcing intertemporal agreements between political groups. Sadly, Argentina lacked such institutions (see Spiller and Tommasi 2009). Instead, the dismantlement of the import-substitution strategy came only after a substantial deterioration of economic and political conditions. The steps that were then taken toward liberalization were abrupt and applied as shock policies by political groups that had political power but that did not represent a consensus of the Argentine population. As a result, Argentina's integration into world markets proved to be extremely costly in terms of inequality and poverty.
Our main thesis is that the interplay of economic and political forces that were spurred by international conditions during the inter-war period trapped the country into an anti-trade equilibrium which limited economic growth. The conditions that generated the anti-trade trap in Argentina, however, should have also generated the same effect in other new-settler, land-rich economies. This poses a pressing question: Was Argentina the only economy that fell into an anti-trade trap? We argue that most economies that shared the endowment configuration of Argentina faced a distributional conflict of similar characteristics, but with different intensities and outcomes.
The rest of the chapter is organized as follows. In Sect. 2, we relate our work with the existing literature and explain why we focus on trade policy. In Sect. 3, we set up and solve the model. In Sect. 4, we interpret the economic history of Argentina during the twentieth century as seen through the prism of our model. In Sect. 5, we compare Argentina with another new-settler, land-rich economy: Australia. Finally, in Sect. 6, we present our conclusions.
Why is trade policy important?
There is a vast amount of literature on the decline of Argentina during the twentieth century, and a wide variety of factors have been identified as causes of its dismal economic performance. However, there is broad agreement in the literature that this period was marked by a severe distributional conflict that shaped the politics and the economics of the country (see, among others, Díaz-Alejandro 1970, 1984; Mallon and Sourrouille 1975; O'Donnell 1977; Waisman 1987; Rogowski 1989; Gerchunoff 1989; Taylor 1994; Gerchunoff and Llach 2004).
Essays on Argentine economic history usually describe, in more or less detail, the periods of economic crisis that alternated with stability and recovery; this is usually referred to as a "stop-and-go" process (see Díaz-Alejandro 1970; Mallon and Sourrouille 1975; Gerchunoff and Llach 2004). These authors note that the crises were usually caused by overvaluation of the domestic currency, high inflation and current account deficits, whereas stabilization generally involved some combination of fiscal austerity, devaluation and price controls. Once the economy had been stabilized, the government resumed its profligate behavior which led inevitably to yet another "stop". These stop-and-go cycles were closely linked to the real exchange rate or to the relative price of tradables versus non-tradables; stabilization required a real devaluation, whereas government deficits generated real appreciation.
We will focus on a different relative price: the terms of trade, i.e., the price of exports relative to the price of imports. We will also discuss the effect of protectionism on such relative prices as perceived by economic actors. To isolate the analysis from the effect of the real exchange rate, we build a model in which there is no debt and the trade balance has to be balanced in every single period.
The real exchange rate is a key element in analyzing short-term debt management problems, short-term capital flows and agents' perceived wealth (Heymann 1984). However, long-run trends in terms of trade and persistent trade policies are key to an understanding of long-term investment and capital reallocation in the economy. Ultimately, these factors are more influential in shaping the political and economic landscape. That is why our narrative deals with general developments over a span of decades rather than delving into the details of each one of the sudden stops that plagued Argentina during this period.
For at least 50 years, successive Argentine governments intentionally distorted producer prices by setting import tariffs and export duties and maintaining a dual exchange rate mechanism (see Brambilla et al. (2010) in this volume). These distortions altered the allocation of resources in the economy, which in turn affected the political equilibrium.
Finally, we do not minimize the role of organizations and institutions in shaping the course of history (North 1990; Cortés-Conde 1998). As we argue in this paper, once the import-substitution development strategy had proven to be inefficient, liberalization measures could have been instituted gradually to mitigate the losses of those with vested interests in protected activities. A gradual but steady process of liberalization would have required consensus among different interested groups and a mature institutional framework capable of limiting the incumbent government's ability to discretionally introduce major shifts in trade policy and benefit some groups at the expenses of others. Argentina lacked such institutions and as a result trade liberalization occurred abruptly, without consensus and too late.
A simple model
In this section, we introduce a simple model that we use to articulate the analytical discussion in the next section. We use a model with two tradable goods and one non-tradable good. The tradable goods are labeled as agricultural (a) and manufactured (m). The agricultural good is produced in the primary sector, using land and capital, while the manufactured good is produced in the secondary sector, using labor and capital. The non-tradable good (n) is labeled as a service and is produced using labor only. The economy is endowed with K units of capital, T units of land and L units of labor.
The tradable goods are produced using the following Cobb–Douglas production functions:Footnote 1
$$Y_{\text{a}} = AT^{1 - \alpha } K_{\text{a}}^{\alpha } ,$$
$$Y_{\text{m}} = ML_{\text{m}}^{1 - \beta } K_{\text{m}}^{\beta } .$$
The non-tradable good is produced with the following linear technology:
$$Y_{\text{n}} = L_{\text{n}} ,$$
where $Y_i$ is the total output of good $i$ and $K_i$ ($L_i$) is the amount of capital (labor) employed in sector $i \in \{a, m, n\}$. $A$ ($M$) is the total factor productivity in the primary (secondary) sector. We assume that capital is used more intensively in the secondary sector: $0 \le \alpha \le \beta \le 1$. We also assume that there are many competitive firms in each sector, which allows us to cast the model in terms of a representative firm of the sector that behaves competitively.
Since our focus is on the functional distribution of income, we consider three types of agents: workers, endowed with one unit of labor; landowners, endowed with equal shares of the total rewards to land; and capitalists, endowed with equal shares of total capital. Agents consume the three goods (a, m, n), for which they have identical preferences as represented by a Cobb–Douglas utility function:Footnote 2
$$U_{j} = \phi_{\text{a}} \ln c_{\text{a}j} + \phi_{\text{m}} \ln c_{\text{m}j} + \left( 1 - \phi_{\text{a}} - \phi_{\text{m}} \right) \ln c_{\text{n}j} ,$$
where $c_{ij}$ is the consumption by agent $j$ of good $i$. We will use $C_i$ to denote aggregate consumption for good $i$.
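For later reference, these Cobb–Douglas preferences pin down constant expenditure shares. A one-line sketch, writing $p_i$ for the price agent $j$ faces for good $i$ and $I_j$ for the agent's total income (including any lump-sum transfer):

$$\max_{\{c_{ij}\}} U_j \quad \text{s.t.} \quad \sum_{i \in \{a,m,n\}} p_i c_{ij} = I_j \;\;\Longrightarrow\;\; p_i c_{ij} = \phi_i I_j , \qquad \phi_{\text{n}} \equiv 1 - \phi_{\text{a}} - \phi_{\text{m}} .$$

These fixed expenditure shares reappear in the autarky allocation discussed below.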
We assume that the Argentine economy is a price-taker in world markets. Therefore, the international prices for the agricultural good $p_{\text{a}}$ and the manufactured good $p_{\text{m}}$ are considered exogenous. The terms of trade are denoted by $\pi = p_{\text{a}}/p_{\text{m}}$, i.e., the relative price of exports over imports. We also assume the absence of any international capital markets; therefore, trade should be balanced in equilibrium.
The government intervenes in the economy by taxing trade. Without loss of generality, we assume that the government introduces an ad valorem tax on exports at rate τ. We confine our attention to taxes on exports of the primary good. Since the equilibrium depends on relative prices, the effect of any tax on imports can be replicated by a tax on exports (Lerner symmetry result). Because we are interested in Argentina, which is a country with comparative advantages in the primary sector, we will not fully develop the case in which the pattern of trade reverses. If the economy reverses its pattern of trade, we assume that export taxes (on the manufactured good) are zero. The economic agents take the export tax, τ, as given. Unless the country is in autarky, domestic prices are given by $p_{\text{a}}^{d} = p_{\text{a}} (1 - \tau)$ and $p_{\text{m}}^{d} = p_{\text{m}}$, where the nominal exchange rate is normalized to 1. We assume that the government reinjects the tax proceeds into the economy via lump-sum transfers to agents.
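To see the Lerner symmetry claim in this notation (a sketch that abstracts from how the tax revenue is rebated), note that only the relative domestic price matters:

$$\frac{p_{\text{a}}^{d}}{p_{\text{m}}^{d}} = \frac{p_{\text{a}}(1-\tau)}{p_{\text{m}}} = \pi (1-\tau) \quad \text{under the export tax,} \qquad \frac{p_{\text{a}}}{p_{\text{m}}(1+t)} = \frac{\pi}{1+t} \quad \text{under an import tariff } t ,$$

so an export tax τ and an import tariff $t = \tau/(1-\tau)$ induce the same relative price and hence the same allocation.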
The long-run equilibrium
In the long-run equilibrium, firms hire capital and labor competitively and produce according to their production functions, while consumers sell their endowments to the firms and buy the produced goods with the proceeds. In the appendix, we solve for the long-run equilibrium of this economy (see Sect. 7.1). Here, we will highlight our results.
It will be useful, for our purposes, to consider the preference parameters ($\phi_i$), the technological parameters (α and β) and the endowments of the economy as being fixed. We will focus on the effects of changes in the terms of trade (π) and export duties (τ). As shown in the appendix, there are four types of long-run equilibria:
Specialization: the country produces only in the primary and tertiary sectors; it imports the manufactured good and exports the agricultural good.
Diversification and trade: the country produces in the three sectors; it imports the manufactured good and exports the agricultural good.
Autarky: the country produces in the three sectors; there is no trade.
Diversification and reversal of the pattern of trade: the country produces in the three sectors; it imports the agricultural good and exports the manufactured good.
Each pair (π, τ) is associated with one and only one of these equilibria; therefore, under the assumptions made, we can represent the areas or regions that correspond to each of these types of long-run equilibria in the (π, τ) plane:
Notice that, for a given tax rate τ, as the terms of trade worsen (π decreases), the economy moves from specialization to diversification and trade, to autarky and, finally, to a reversal of the patterns of trade. For higher levels of taxes τ, the autarky region is larger.
Consider the share of capital employed in the secondary sector: $\kappa = K_{\text{m}}/(K_{\text{m}} + K_{\text{a}})$. This is a measure of industrialization that will be useful in our discussion about preferences for protectionism. Figure 2 shows how this share varies in the long-run equilibrium for different configurations of terms of trade and taxes. A figure for $\lambda = L_{\text{m}}/(L_{\text{m}} + L_{\text{n}})$, the corresponding share of labor employed in the secondary sector, would look similar.
Notice that the specialization region in Fig. 1 coincides with the region where κ equals zero in Fig. 2. Under specialization and trade, capital and labor employment in the secondary sector are zero.
Fig. 1 The long run: four regions
Fig. 2 The long-run equilibrium, κ
In the autarky region, the tax rate is set high enough so that the country will not trade with the rest of the world; consequently, changes in π or τ will have no marginal effect on the resulting allocation of resources in the economy. For any point in the region, the factor allocation is the autarky allocation, which we denote as $\kappa_{\text{aut}}$ and $\lambda_{\text{aut}}$ (see Sect. 7.1.1 in the appendix). The autarky region in Fig. 1 coincides with the region with $\kappa = \kappa_{\text{aut}}$ in Fig. 2.
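For intuition, the autarky shares can be recovered in a few lines under the Cobb–Douglas assumptions (a sketch; the formal treatment is in the paper's appendix, which is not reproduced here). In autarky, the value of each sector's output equals its expenditure share of aggregate income $I$, i.e., $p_{\text{a}}Y_{\text{a}} = \phi_{\text{a}}I$, $p_{\text{m}}Y_{\text{m}} = \phi_{\text{m}}I$ and $p_{\text{n}}Y_{\text{n}} = (1-\phi_{\text{a}}-\phi_{\text{m}})I$, while competitive factor payments give $rK_{\text{a}} = \alpha p_{\text{a}}Y_{\text{a}}$, $rK_{\text{m}} = \beta p_{\text{m}}Y_{\text{m}}$, $wL_{\text{m}} = (1-\beta)p_{\text{m}}Y_{\text{m}}$ and $wL_{\text{n}} = p_{\text{n}}Y_{\text{n}}$. Hence

$$\kappa_{\text{aut}} = \frac{K_{\text{m}}}{K_{\text{m}}+K_{\text{a}}} = \frac{\beta\phi_{\text{m}}}{\beta\phi_{\text{m}}+\alpha\phi_{\text{a}}} , \qquad \lambda_{\text{aut}} = \frac{L_{\text{m}}}{L_{\text{m}}+L_{\text{n}}} = \frac{(1-\beta)\phi_{\text{m}}}{(1-\beta)\phi_{\text{m}}+1-\phi_{\text{a}}-\phi_{\text{m}}} ,$$

which depend only on the preference and technology shares, not on endowments or productivities. For example, with the illustrative values $\alpha = 0.3$, $\beta = 0.6$ and $\phi_{\text{a}} = \phi_{\text{m}} = 0.3$, one gets $\kappa_{\text{aut}} = 2/3$ and $\lambda_{\text{aut}} \approx 0.23$.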
In the diversification and trade region, the manufacturing sector employs capital and labor. As we move upward and to the left within this region, both κ and λ increase from zero, their value at the frontier with the specialization and trade region, up to $\kappa_{\text{aut}}$ and $\lambda_{\text{aut}}$ in the autarky region. The diversification and trade region in Fig. 1 coincides with the region where κ is increasing in Fig. 2.
Finally, in the reversal of patterns of trade region, the tax rate on agricultural exports has no effect on the real economy. As π decreases, the secondary sector grows and employs more resources. The reversal region in Fig. 1 coincides with the leftmost region in Fig. 2. As the terms of trade worsen, the share of capital in the secondary sector approaches one; however, the share of labor, λ, converges toward an upper bound that is less than one, since some workers are always employed in the tertiary sector.
It seems appropriate to make two remarks about our model and its usefulness in analyzing the Argentine economy. First, we have simplified the analysis to two tradable sectors. Therefore, our model does not allow for an equilibrium in which some manufactures are exported while others are imported. This is due to the assumption that manufactures are a homogeneous good. A careful interpretation of our model is nonetheless helpful in building our narrative of Argentina's economic history. The manufacturing sector should be interpreted as comprising the activities that compete with imports, the primary sector as the set of activities oriented toward the international market and the tertiary sector as the services and manufactures that are naturally protected from external competition. Thus, our model assumes that exportable activities are intensive in capital and land, import-competing manufactures in labor and capital, and non-tradables in labor.
Second, we should interpret the autarky equilibrium as representing a situation in which the economy has exhausted its possibilities of import substitution, rather than as an actual autarkic situation. During the period under consideration, Argentina was never in actual autarky; however, it took its import-substitution strategy almost all the way to its technological limit. Of course, there were some inputs that had to be imported because it was simply not feasible to produce them domestically.Footnote 3
Our assumption that each agent owns a single type of input allows us to group agents according to the input they own and the industry where they are employed. As we show below, the tax rate τ affects the real remuneration of each of these groups in a different way. Some groups will gain from an increase in protectionism (higher τ), while others will lose. Thus, there is a distributional conflict around protectionism.
Notice that no conflict would arise in an economy where each agent owns the same bundle of inputs. Yet, agents endowed with different resources have conflicting interests. The essence of the rivalry between proponents of free trade and advocates of protectionism lies in the assumption that each agent can be identified with one of the socioeconomic groups based on the inputs that the agent owns and the industry in which the agent is employed.
The reader will recall that we have assumed that tax revenues are distributed in lump-sum transfers to agents; thus, the agents' attitudes will also depend on the share of total tax revenues that each one of them expects to receive. Since we do not specify who the recipients of the lump-sum transfers are, we should bear in mind that, even if a group's real remuneration is reduced by an increase in export taxes, its overall utility might increase if the group receives a disproportionately large share of tax revenues. We should also bear in mind that, given the first welfare theorem, it is impossible to make each and every agent better off by increasing the tax rate and redistributing the revenues.
In analyzing the effect of changes of τ on each group's welfare, we consider the short-, medium- and long-run time horizons. In the short run, no reallocation of factors takes place. In the medium run, only labor is allowed to move between the secondary and tertiary sectors. In the long run, all factors can be reallocated, and the economy fully adjusts to its new long-run equilibrium.
In Appendix 1 (Sect. 7.2.1), we show that the diversification and trade region is particularly prone to distributional conflict. This is because, in the other regions, either all interests are aligned (under specialization) or a marginal change in the export tax rate has no real consequences (under reversal of the pattern of trade and autarky). Therefore, we will focus on pairs (π, τ) such that the economy will be in the diversification and trade region.
In the short run, protectionist policies will benefit owners of factors employed in the secondary sector and will harm those employed in the primary and tertiary sectors. Since the proportion of factors employed in the secondary sector increases as we move upward and toward the left in the diversification and trade region, protectionist policies have more short-run support as we move closer to the autarky region and less support as we move closer to the specialization area (see Proposition 3 in Appendix 1).
In the medium run, landlords and capitalists with investments in the primary sector will oppose protectionism, while capitalists with investments in the secondary sector will support it. Workers will now have a homogeneous attitude toward τ; either all workers will prefer protectionism or all of them will oppose it. We show that the pairs (π, τ) at which workers switch from opposing protectionist policies to supporting them lie in the diversification and trade region (see Proposition 5 in Appendix 1 and Fig. 3).
Fig. 3 Medium-run preferences over τ
In the long run, landlords will always oppose protectionist policies and will benefit from improvements in terms of trade (Proposition 6, Appendix 1). One of our key results is that workers will also prefer a zero tax rate if π is sufficiently high (Proposition 7, Appendix 1). In this case, workers prefer to be employed in the tertiary sector where they can take advantage of the high level of national income induced by high terms of trade. The result for capitalists is similar; for a sufficiently high π, farsighted capitalists will also support free trade policies.
The key insight that we want to convey here is that agents will support or oppose policies according to their source of income and their relevant time horizon. In the diversification and trade region, agents' attitudes toward protectionism exhibit an interesting pattern. Landlords oppose them in all cases; capitalists employed in the manufacturing sector support them both in the short and medium terms.Footnote 4 Who prevails in this struggle depends on several factors that are beyond the scope of this paper; however, our analytical model gives us some mileage in answering this question. It seems fairly reasonable that the size of the capitalist faction that supports protectionism will be positively correlated with the likelihood of these policies being enacted. Moreover, in a democracy, workers could be the pivotal faction that shifts the balance of power.
Clearly, as we move upward and to the left in the diversification and trade region, protectionist policies will enjoy wider support. As we move in this direction, both workers and capitalists will be more likely to advocate these policies. In the short run, there will be more workers and capitalists employed in the manufacturing sector. In the medium run, workers as a whole group are also more likely to prefer taxation.Footnote 5
This model can also generate endogenous pressure for the enactment of free trade policies in a protected economy that experiences favorable terms of trade or high levels of productivity in the primary sector. As π grows, farsighted workers will stand to benefit greatly from free trade policies. Landlords' remuneration under free trade is greater when π is large, and they will therefore support these policies more actively. Consequently, if the economy is trapped in the autarky equilibrium, higher π will intensify the distributional conflict because those who want to challenge the status quo have more incentives to do so.
Path-dependent import-substitution policies
We will now discuss how, starting from a situation of specialization, a significant and exogenous worsening of the terms of trade may lead to an incipient industrialization process, change the "political equilibrium" and lead to the introduction of an import-substitution policy. Interestingly enough for our case study, even if the terms of trade were to later rebound to the previous level at which the economy operated under specialization, new endogenous political forces may have developed that prevent the economy from returning to its initial stance. As in the cases of path dependence discussed in the literature on inefficient institutions (see, among others, North 1990), there are self-reinforcing mechanisms for the persistence of import-substitution policies.
Suppose that the economy is specialized in the primary sector. In that case, the preferences of all agents in the economy are aligned; they all agree on a zero export tax rate. Naturally, this does not mean that they agree on the level of redistribution by other means such as an income tax, but we are abstracting from the analysis of these issues here. Suppose that the terms of trade worsen significantly and that the country naturally initiates an incipient industrialization process, i.e., the economy moves into diversification and trade. Initially, protectionist policies will lack support, since most of the capital is still employed in the primary sector and most workers produce services. If workers take into account the medium-run prospects, they may favor an increase in τ; however, for most of them, it is likely that the short-run costs of a tax increase would outweigh the medium-run benefits.
As the process of industrialization deepens, either because of a further deterioration in the terms of trade or because of capital flows from the primary to the secondary sector, the short- and medium-run support for protectionist policies increases and eventually these policies may be implemented. Protectionism tends to be self-reinforcing, since now more capital and labor will flow to the secondary sector. New waves of demand for protectionism drive the economy toward autarky, which might be characterized as an import-substitution strategy. Notice, however, that for this to happen, the economy has to have a high level of capital—i.e., to be rich enough—to transfer capital from the primary sector to the manufacturing sector, and the shock has to be sufficiently long-lasting to allow the economy to accumulate enough capital in the manufacturing sector to give rise to a protectionist coalition.
Suppose now that, once the economy is industrialized and the import-substitution strategy has driven the economy close to autarky, the terms of trade improve. In the short run, this harms all the agents who have switched to the secondary sector. However, if these agents hold political power, they will not allow capital to flow back to the primary sector; instead, they will increase the export tax. If the tax is increased to levels that ensure autarky, the improvement in the terms of trade will not have any real effect. The economy will be trapped in a situation where every improvement in the terms of trade will be neutralized and nobody will gain (or lose) from it.
If the terms of trade improve, the distributional conflict becomes more intense. Workers may benefit from a reduction in the tax rate in the long run. Moreover, landlords' incentive to exert influence in the political arena will increase, because the benefit of reducing the level of protectionism increases with the terms of trade. They will be opposed by industrial capitalists and shortsighted workers who benefit from protectionism. This distributional conflict may grow in intensity, destabilizing the political equilibrium and, depending on how the conflict is resolved, spurring liberalization. Similarly, the distributional conflict will also become more severe if the productivity in the primary sector increases.
The next subsection deals with other forces that may give rise to trade liberalization, not through increased distributional conflict, but by weakening the protectionist political coalition of workers–capitalists.
Forces leading to trade liberalization
Events that reduce the proportion of workers and capital in the manufacturing sector will weaken the coalition that supports protectionist policies. We have discussed how an increase in the price or productivity of the agricultural sector may generate enough distributional conflict to prompt the formation of a coalition of landlords and longsighted workers that supports liberalization. In this subsection, we show what other kinds of events can shift employment and capital allocation when the economy has traveled far enough down the road of protectionism.
In our basic model, protectionism will lead the economy somewhere near autarky. The assumptions of Cobb–Douglas preferences and technology imply that the shares of labor and capital (λ and κ) in autarky depend only on the Cobb–Douglas shares (α, β, ϕ m and ϕ a) and not on factor endowments or productivity (see Sect. 7.1.1 in Appendix 1). This will not be the case if we relax the Cobb–Douglas assumption. We can first relax the assumption of unitary elasticity of substitution in preferences and technology. We can go even further and relax the homotheticity assumption. We note that, if preferences are elastic but technologies are not, the share of workers employed in the secondary sector decreases with both population growth and productivity in the primary sector.
Finally, we conjecture that labor unions that were created or empowered to maintain and support protectionist policies also generated frictions in the labor market that ended up depriving them of their most vital input: unionized workers.
Relaxing the Cobb–Douglas assumption
In this section, we analyze how shocks to factor endowments and productivity can change the factor allocation of an economy in autarky. As shown in Sect. 7.1.1 in the Appendix, if preferences and technology are Cobb–Douglas, then the shares of labor and capital (λ and κ) in autarky will depend only on the parameters (α, β, ϕ m and ϕ a), rather than on factor endowments or productivity. However, under more general preferences or technologies, capital and labor shares will depend on productivity and endowments.
In Sect. 7.3 in the appendix, we show how changes in endowments or productivity can shift the allocation of labor and capital if we relax the assumption of unitary elasticity of substitution. We could comment on many different shocks that, together with some assumptions about the elasticities of substitution (EoS), would result in a smaller share of workers employed in the manufacturing sector (lower λ); however, we will focus on just two shocks: population growth and technological improvements in the agricultural sector.
Population growth will decrease λ if the EoS in consumption is greater than the EoS in the production of manufactures. The intuition is that an increase in the number of workers will push wages down. As a result, both manufactures and services will become cheaper. However, the percentage fall in price will be sharper in services (i.e., services will become cheaper relative to manufactures) because services employ only labor. The increase in the demand for services will be directly related to consumers' elasticity of substitution. Because labor becomes cheaper, the manufacturing sector will become more labor intensive. The increase in demand for labor in the secondary sector will be related to the elasticity of factor substitution. If consumers' preferences exhibit more elasticity of substitution than manufacturing firms' technology, the share of workers employed in the service sector will increase. A similar argument shows that the shift in the share of capital, κ, will have an opposite sign from the shift in λ. Therefore, under these circumstances, we may expect to see that, as population grows, λ decreases and κ increases.
Higher productivity in the agricultural sector will reduce λ if the EoS in preferences is greater than 1 and greater than the EoS in the technology of manufactures.Footnote 6 Moreover, the share of capital, κ, will decrease if the EoS in preferences is greater than 1. The intuition is that an increase in productivity in the agricultural sector will depress the autarky price of the primary good and increase the return of capital. High substitution elasticity in consumption implies that consumers will increase the share of primary goods in their bundles and that capital will move from the secondary to the primary sector. Low elasticity of substitution in the manufacturing sector implies that the marginal productivity of labor in that sector will decrease rapidly as a consequence of decapitalization; therefore, labor will shift to the tertiary sector.
Alternatively, if preferences and technology are not homothetic, then it is possible to obtain decreasing λ and κ following exogenous shocks if they change the total income of the economy or total production of a particular good. For example, if the manufacturing sector becomes more capital intensive, then the autarky equilibrium will result in a smaller λ and a larger κ. Similarly, if preferences shift toward services as income grows, then neutral technological improvements or increases in all endowments will reduce λ as more workers become employed in the service sector. Moreover, if the share of total income represented by food expenditures tends to decrease and food is produced in the primary sector, then the primary sector will tend to shrink under autarky (i.e., κ increases). More importantly, since the primary good has less weight in the consumption bundle, the impact of trade liberalization on workers and industrial capitalists is less harmful.
Trade and unions
We have discussed how protectionist policies shift labor and capital employment to the secondary sector, which reinforces the political demand for protectionist policies. So far, we have abstracted from the institutions and organizations that might emerge to represent these demands. As we will argue later, labor unions were organized and empowered during the Peronist period and were key actors during the following 40 years. Labor unions' most visible role was not lobbying for protectionism, but intervening in the wage-setting and employment decisions of manufacturing firms to keep real wages high and avoid layoffs. In this section, we will explain why, if the number of workers in the economy is increasing, unions' zeal to prevent wage declines will lead to an increase in the share of workers employed in the service sector and to their ultimate loss of political power.
Labor unions can influence wages in two basic ways. First, by restricting the access of workers to the manufacturing sector (e.g., enforcing closed-shop agreements), they can prevent wage equalization between the secondary and tertiary sectors and maintain a positive industrial wage premium in the medium and long run. Second, through aggressive collective bargaining, they can obtain a higher share of total remuneration and reduce the return to capital in the sector in the medium run. In an environment where the relative supply of workers is increasing, unions will have to rely on some of these interventions if they are to keep real wages from falling.
If labor unions effectively restrict access to the manufacturing sector, the service sector will absorb a disproportionately high number of new workers in the medium run and long run. This will result in a growing share of workers employed in the service sector being opposed to the labor unions; they will be against both restricted access and protectionist policies.
On the other hand, if labor unions can use their market power to set wages above the value of the marginal product of labor, then the remuneration of capital in unionized activities will decrease. In the long run, capital will flow to alternative uses, such as agriculture or non-unionized manufacturing activities. Decapitalized, unionized manufacturing activities will not hire new employees and, as a result, union membership will decline in relative terms.
In both of the cases reviewed above, unions' objectives of keeping wages high and avoiding layoffs of union members run counter to their long-run survival in a context where population growth outpaces capital accumulation.
Lessons from the model
The key result of our model is the finding that protectionist policies are path-dependent. A land-rich economy that is well integrated into world markets may embark upon an industrialization process in response to poor terms of trade, especially if the new prices are not a transient shock. This incipient industrialization process is possible if the economy has enough capital—i.e., if it is rich enough—and labor; otherwise, the secondary sector will not be profitable and the economy will not be able to cushion the negative terms-of-trade shock.
Starting from the onset of the industrialization process, capitalists and workers recently employed in the industrial sector have incentives to lodge demands for protectionism. As the process advances, the political power of these groups grows and, eventually, their demands may be met. As a consequence, the industrial sector receives a new boost at the expense of the primary and tertiary sectors, and the economy gradually becomes closed to world markets. Moreover, the political coalition supporting protectionism gains power. As a result, anti-trade policies become entrenched and the economy moves closer to autarky. Even if the conditions that gave rise to the endogenous industrialization subside, the economy remains closed, since the alliance of capitalist and workers retains its power.
However, the anti-trade alliance is not unbreakable. Secular trends in labor supply, frictions between workers and capitalists or a strong improvement in terms of trade can push the economy back into a free trade equilibrium:
Under more general preferences and technology, population growth and higher productivity in the primary sector can shift the factor allocation and lead to increased demands for free trade. In both cases, under some conditions, a greater share of workers will be employed in the service sector. Therefore, more workers will support liberalization.
Similarly, if services gain in importance in the consumption bundle, more workers will be employed in the tertiary sector. As a result, there will be greater support for liberalization. Moreover, even the owners of inputs employed in the secondary sector will have weaker incentives to support protectionism if this shift toward services occurs at the expense of the consumption of the exportable good.
Once the economy is near autarky, capitalists and workers will not be able to use their coalition's political power to pursue further industrialization. Besides, they will be extremely vulnerable to negative shocks in industrial productivity (e.g., an increase in the price of a non-modeled importable input). Under these circumstances, unions may be tempted to use their power against capitalists, thereby weakening their alliance. We have discussed how unions, in their zeal to keep wages from falling in the short run, may introduce distortions that reduce their power in the long run.
Finally, an improvement in terms of trade or an increase in agricultural productivity increases the incentives for landlords to intervene in the political process. The economy will be able to escape the anti-trade trap if landlords are successful in challenging the coalition of industrial workers and capitalists.
Analytical narrative
Argentina did relatively well when it was integrated with world markets. Why, then, did it remain under autarky for approximately 60 years? We will now use the model outlined in the previous section to articulate an analytical narrative concerning the political economy of autarky during the twentieth century in Argentina.
The Belle Époque
In 1860, Argentina was a fairly empty land. As in the rest of Latin America, the pace and characteristics of Argentine expansion were fundamentally determined by the success with which some of its regions became exporters of primary products (see Cortés-Conde 1979). The period from 1870 to 1914 was one of free trade and market integration and during this period the country benefited from its marked comparative advantage in the primary sector due to its vast amount of highly fertile land (O'Rourke and Williamson 1999). The dramatic decline in transport costs during the late nineteenth century led to a trade boom and commodity price convergence internationally. In Argentina, the scarcity of labor and abundance of land, relative to Europe, induced a high marginal product of labor. The wage differential between Argentina and some European countries attracted a colossal flow of overseas immigrants, who came to constitute the majority of Argentina's labor force. A similar process also triggered a massive flow of capital into the country (see Cortés-Conde 1979).
During the second half of the nineteenth century, a large proportion of Argentine land was settled and divided up into latifundia (Adelman 1994). The sharp increase in the availability of land spurred an expansion in livestock raising, primarily because it was a non-labor-intensive activity that could be launched at a time when labor was a scarce resource.
With the pattern of land ownership determined by political history, and with prices of exports, imports and capital set by international markets, total rents depended on the labor supply. Therefore, immigration policy became the critical policy variable under the control of the government (Díaz-Alejandro 1984). Not surprisingly, the Argentine elite chose to promote immigration. The expansion of agricultural activities and a pro-immigration policy paved the way for a very substantial increase in the urban population, especially in Buenos Aires. In addition to its administrative functions as the capital of the country, this city developed an increasingly large and sophisticated service sector.
The export-oriented growth made possible by an expanding international market raised per capita income in a sustained and substantial way. Indeed, the growth process was closely related to successive booms in the exports of land-intensive commodities, with land having a very low opportunity cost. The economic usefulness of the pampas was not discovered overnight, as an oil deposit might be, but instead arose as the result of the combination of a growing European need for primary goods, technological progress in transport and an increasing interest on the part of Argentine policymakers in promoting exports, foreign investment and immigration. By the beginning of the twentieth century, however, the Argentine growth process had become less dependent on the discovery of new resource-based export commodities and on the performance of any one export. It still relied heavily, however, on a steady expansion of exports based on the growth of the world economy and on the completion of the adjustment by which primary production was being transferred from Europe to more recently settled countries (see Díaz-Alejandro 1970).
The early manufacturing sector was closely linked to the primary sector and supplied the domestic market with products that were naturally protected from external competition (e.g., wine, meat and flour). There also was a smaller industrial sector that competed with imports (e.g., clothes, cigarettes, perfumes). These industries were granted some degree of protection after the passage of the Customs Act of 1876. However, the level and extent of protectionism were rather limited compared to what was yet to come (Gómez-Galvarriato and Williamson 2009). First, the main goal of these customs duties was to obtain revenues for the government, which was a widely accepted practice in Latin America at the time (see Brambilla et al. in this volume). Second, the protected activities accounted for a small share of total economic activity and, to a large extent, the policy was geared toward protecting regional products as a means of preserving the federalist model adopted by the country. Thus, this specific departure from free trade can be more accurately interpreted as a means of securing revenues and of sustaining a political order that, on the whole, was pro-export oriented.
Thus, in our view, the period from 1870 to 1914 was one of specialization in production, with the country specializing in the production of primary goods, importing manufactured goods and employing its workers mainly in the primary sector and the services industry. This was therefore a period in which the political views of the majority of economic agents were aligned against protectionist policies.
Globalization backlash
It is not clear whether Argentina could have sustained its fast pace of growth under specialization (see Llach in this volume) if the world had remained widely integrated, as it was during the Belle Époque. However, there is no reason why it should not have diversified its production and exports of agricultural and manufactured goods under a policy of free trade. Had the terms of trade remained favorable for Argentina, even if the productivity of the primary sector had not kept increasing rapidly, some manufacturing sectors would have eventually become competitive and taken off. What is more, if the economy had continued to expand, it would have begun to meet an increasing (but previously nonexistent) domestic demand for many manufactured goods, thereby encouraging their domestic production, particularly in view of the existence of natural barriers. The same reasoning applies to services (see Galiani et al. 2008).
Instead, the country's fortune took a sharp turn for the worse in the 1930s. World trade collapsed after the Great Depression. The 1932 Ottawa Conference marked the end of multilateralism in international trade. Great Britain, Argentina's foremost trading partner, shifted its trade to members of the Commonwealth. A protectionist pandemic spread throughout the world. As a consequence, the ratio of world trade (exports plus imports) to GDP declined from 22% in 1913 to 9% in the 1930s. Though there was a recovery toward the end of the decade, international trade was again disrupted during the Second World War, when it was geared toward war requirements. Trade opportunities did not start to improve until after the Second World War under the Bretton Woods system and with the signing of the General Agreement on Tariffs and Trade (GATT). Then world trade began to recover and, by 1950, it had surpassed pre-war levels, mostly thanks to the growth of trans-Atlantic and intra-European trade.Footnote 7 There is a consensus that, after the Second World War, a second globalization era began (see, among others, Baldwin and Martin 1999; Williamson 2002). Nevertheless, the move toward multilateralism was gradual and was not achieved, for all practical purposes, until the 1990s (see Brambilla et al. in this volume for a fuller discussion of these issues).
The breakdown of the economic order was transmitted to Latin America first of all through a sharp change in relative prices: dollar export prices collapsed more steeply than dollar import prices. According to Clemens and Williamson (2002), the magnitude of the decline was around 30% for Asia and the Middle East and 40% for Latin America. This decline in the terms of trade was used as a strong argument in support of the move of the developing world toward autarky in the 1940s and 1950s, within the context of a highly interventionist industrialization strategy.
In Argentina, the terms of trade deteriorated considerably even before the collapse of the international economic order in the early 1930s (see Fig. 4). During the 1920s, on average, the terms of trade were approximately 30% below the pre-First-World-War level of 1913. Such a shock alone merits the label of a reversal of fortune. For a country with a ratio of exports to GNP of one-to-three, a 30% deterioration in the terms of trade represents a loss in real income of about one-tenth, assuming no change in physical output. The 1930s show some recovery in relative prices, which were still, on average, about 16% below their 1913 level. This reversal of fortune, with some pronounced fluctuations, continued throughout the rest of the twentieth century. Just to put this into perspective, the average terms of trade for the period 1930–1999 were 20% below the average relative prices for the period 1890–1913. Nevertheless, in recent years the terms of trade have improved substantially.
Fig. 4 Terms of trade, 1875–2006 (1993 = 100). Source: ECLAC Office in Buenos Aires
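As a rough check on the arithmetic in the paragraph above (the notation here is ours and is introduced only for this illustration), hold physical output and import prices fixed and let \(X/{\text{GNP}}\) denote the export share of income. The real income loss from a terms-of-trade decline \(\Delta{\text{ToT}}\) is then approximately
\[
\Delta y \approx \frac{X}{\text{GNP}}\times \Delta{\text{ToT}} \approx \frac{1}{3}\times 0.30 = 0.10,
\]
that is, a loss of roughly one-tenth of real income, which matches the figure quoted above.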
The protectionist measures enacted by most countries in the world and the increased risk of sending goods overseas during wartime reduced trade opportunities beyond what would be expected as a result of the terms of trade. To sum up, in the late nineteenth century, Argentina had highly auspicious opportunities to trade with the rest of the world: favorable terms of trade, peace and the application of free trade policies by its trading partners. The terms of trade did not start to decline until early in the twentieth century, and the decline was then followed by war and protectionist policies.
Endogenous industrialization
The deterioration in the terms of trade during the 1920s severely damaged the economy. At the same time that the profitability of the primary sector was plummeting because of low export prices, opportunities in the secondary sector flourished thanks to the natural protection provided by high import prices. As indicated by the research of Villanueva (1972), the 1920s were a particularly active period in terms of the development of the industrial sector in Argentina. International conditions worsened again in the 1930s, leading to another wave of endogenous industrialization. As the economy began to produce goods that it had imported in the past, it naturally began to close itself off from the world economy.Footnote 8
The decline in the terms of trade harmed both service workers and landowners. However, the situation was less appalling for workers, since capital and labor were shifting to the secondary sector. The flow of workers to the urban secondary sector was primarily composed of people from rural areas. Their welfare began to increase as capital was reallocated to its most productive uses and as new manufacturing activities prospered. In the model presented in the previous section, this is reflected by a shift from specialization in production toward diversification and trade.
The early industrialization process of the inter-war period was accompanied by the consolidation of the labor movement. Argentine unions date back to 1877, but active unionism did not start until the twentieth century. Union demands centered on basic improvements in working conditions, some sort of insurance for work-related injuries and the prohibition of child labor. As industry blossomed and wages rose during the 1920s, the unions succeeded in having their demands met (see Galiani and Gerchunoff 2003). The Great Depression put an end to the workers' bonanza, however. Unions tried, without much success, to prevent wages from falling, but they did succeed in retaining most of their achievements in terms of working conditions. The union movement was seen by employers as a lesser evil that would maintain industrial peace, while workers saw it as a reliable tool for protecting their rights. Unions thus emerged as an institutional device for coping with the conflict of interest between capitalists and workers in the incipient process of industrialization during the inter-war period. The battleground was the shop floor, and the conflicts were mainly about the improvement of working conditions and wage stability.
It is somewhat ironic that the debate about protectionism became a permanent fixture in the national dialog in the wake of the Roca–Runciman Treaty, which was devised to protect the Argentine primary sector and ensure exports to Great Britain. In exchange, Argentina promised to reduce tariffs on British imports and made other concessions to British companies that operated in the country. Although the treaty was not fully honored by Argentina, it did spur the debate about the role of industry. For the first time, industrialists began to call for economic independence, self-sufficiency and autarky as Argentina's answer to the new international order and continued to do so during the uncertain period of the Second World War.
This process of import substitution intensified during the Second World War under the shelter of the trade barriers associated with the war. By the end of the war, the manufacturing sector was playing a significant role in the economy, but manufacturers were arguing that a strong policy of commercial protection and subsidies was needed for them to survive, especially if the terms of trade were likely to improve. It was under the leadership of General Perón, in the midst of a major political shift, that these demands were to be fulfilled.
A new Argentina
The 1930s world economic crisis had profound effects on the economic and political life of Argentina. Certainly, much of the development of Argentine foreign trade seen during the 1930s, 1940s and early 1950s can be seen simply as a consequence of trade agreements and exogenous shocks coming from the rest of the world. The crisis and its immediate consequences were also a shock for the political life of the country. By the same token, the economic changes that were occurring also triggered major changes in the socioeconomic structure which ultimately created conditions conducive to the development of a populist mass movement.
Argentine politics was monopolized by the landowning elite until 1916, when a major political shift occurred thanks to an electoral reform law passed in 1912 which ushered in universal adult male suffrage (though it withheld the vote from the country's large number of unnaturalized immigrants), secret ballots and compulsory voting. Despite its apparently democratic implications, this reform was designed to perpetuate the prevailing oligarchic system by extending the vote to the urban middle class, whose members had taken part in the economic expansion in the sense that they were working in the service sector, although they had been excluded from the strongholds of power. Not surprisingly, the oligarchic elite that ruled the country believed that middle-class workers were committed to maintaining the existing political and economic structure.
This experiment in limited democracy (the new electoral law gave voting rights to nearly one million adult males, but this was no more than approximately 40% of the adult male population) was interrupted in 1930, when the army carried out a coup and installed itself as the dominant factor in Argentine politics. Over time, the popular base of the democratic system expanded. In 1946, 3.4 million adult males had voting rights (see Cantón 1968). Thus, the voice of the people in the Argentine political system grew substantially between 1916 and 1946, despite the intervening military coup. By 1946, the economic configuration had changed dramatically. The political alignment between landowners and workers had broken down. Instead, workers—now mainly employed in the secondary sector—found their perfect ally in the capitalists of the manufacturing sector, because their political preferences were aligned both in the short and in the medium terms (see Sect. 3.2). Under Peronist policies, more capital and labor shifted to the secondary sector, thereby furthering the process of industrialization and consolidating both this alliance and the urban–rural conflict.
At that point, distributive conflict between urban factors of production and landowners emerged and paved the way for the possibility of populism as an equilibrium point. Rogowski (1989), among others, argues that backward economies with abundant natural resource endowments in which both labor and capital are relatively scarce are likely to display political cleavages that are protectionist in nature. The urban manufacturing sector will seek to protect itself, by taxing both exports and imports, against rural activities. However, this analysis, which was widely applied to Argentina during the Perón era, is at best incomplete, as our model demonstrates. This prediction holds only for certain configurations of the parameters of the model and certain histories. In particular, we stress that protectionism and protectionist cleavages arise in resource-rich economies after the potentially protected activities are initiated spontaneously in response to changing market conditions (see also Galiani et al. (2010) for a discussion on the role of skilled labor and unskilled labor in the formation of political coalitions in this context).
By 1940, the labor movement had matured; moreover, industrial capitalists had been aspiring to self-sufficiency and economic independence ever since 1930. Conditions were therefore ripe for Perón to build a mass workers' movement. He started to engineer this when he was the Labor Secretary, right before he was elected President in 1946. Industry-wide bargaining was instituted; labor courts were set up to enforce the rather progressive new labor laws, social security coverage was greatly expanded, minimum wages were increased and the system of aguinaldo (1 month's extra pay at Christmas time) was introduced. Finally, the Professional Associations Act was adopted in 1945, which provided for the withholding of union dues by employers, recognition of only one union organization per branch of activity and direct union participation in political activity under state supervision. As a result, the growth of union density during the 1940s was astonishingly rapid, rising from 10% in 1936 to 40% in 1948 and to 49% in 1951 (see Galiani and Gerchunoff 2003).
In this manner, a new national populist coalition was brought to power in 1946 under the leadership of Perón. The Peronist coalition left behind the traditional dispute between radicals and conservatives that had marked the political arena since the electoral reform. This pattern of opposition was replaced by one which had a greater share of class content and was rooted in the expansion of social rights and the political and social integration of the working classes. Indeed, the political history of Argentina in the twentieth century is divided into two: before and after the emergence of Peronism (see Torre 2002).
The Peronist era (1946–1955)
By 1950, most of the countries of Latin America had implemented an import-substitution strategy. Although it was a pragmatic endogenous response to the conditions created by the Great Depression of the 1930s and the Second World War, this strategy was not necessarily the optimal response to the new international conditions of the post-war era. To a great extent, the decision as to what sort of strategy would be the best depended on what could be expected of the future evolution of the international economy. By the late 1930s, it was reasonably clear that the laissez-faire approach was finished in international economic relations. In this context, the import-substitution strategy can be seen as a defensive measure against an uncertain future of trading relations.
Clearly, world market conditions were more favorable to Argentina in 1943–1955 than in 1929–1943. After the war, policymakers had an option which they had not had during the Great Depression: to guide economic growth on the basis of expanding exports of both rural and manufactured products (see Díaz-Alejandro 1970). Indeed, this was explicitly attempted under the economic leadership of Federico Pinedo during the early 1940s. Pinedo's plan was a well thought out attempt to recover the dynamism of the agricultural sector and to promote export-led industrialization (see Llach 2002). However, Pinedo's strategy failed to take hold. One of the reasons for this failure is that it was opposed by the new dominant electoral coalition formed by urban capitalists and workers, who stood to benefit from a deepening of the import-substitution strategy (see Sect. 3.3). This electoral coalition would elect Juan Perón as President of the country in 1946 in what were arguably the first truly free and democratic elections with universal male suffrage.
Perón decided to consolidate the social base of his movement by redistributing income to the working classes. In fact, he saw industrialization as a means of achieving the goals of his nationalistic and populist policy of increasing the real consumption, employment and economic security of the masses of workers (see Gerchunoff 1989).
Indeed, as Fig. 5 shows, the share of wages in GDP peaked during the Peronist era. It is clear from the figure that the share of wages in GDP is lower when the economy is integrated into the international economy than under autarky once the secondary sector has exhausted its possibilities of import substitution. Notice that this stylized fact is consistent with our model. In the long run, the equilibrium workers' share is equal to \((1-\phi_a-\phi_m) + (1-\beta)(Y_m/{\text{GDP}})\), i.e., the share of services in consumer preferences plus the share of labor in the secondary sector times the share of industrial output in total GDP. Notice that in the long run, and perhaps even in the medium run, workers are not necessarily better off under autarky (see Sect. 3.3 and Proposition 7).
Fig. 5 Share of wages in GDP (index: 1884 = 100). Source: Gerchunoff and Llach (2004)
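To see what the long-run expression quoted above implies, consider purely hypothetical parameter values that are not taken from the paper: \(\phi_a = 0.3\), \(\phi_m = 0.3\), \(\beta = 0.4\) and \(Y_m/{\text{GDP}} = 0.25\). The long-run workers' share would then be
\[
(1-\phi_a-\phi_m) + (1-\beta)\frac{Y_m}{\text{GDP}} = 0.4 + 0.6\times 0.25 = 0.55 .
\]
The share thus depends on how much of GDP the secondary sector accounts for and on the weight of services in preferences, which is why, as noted above, workers are not necessarily better off under autarky once these magnitudes are allowed to adjust.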
The Peronist policy of import substitution was not an integrated, well-thought-out plan. Rather, there was a great deal of improvisation in its application as policymakers reacted to short-run economic and political pressures. Clearly, toward the end of the war and during the early post-war years, the government's main concern was to defend the industries that had arisen and expanded prior to and during the war, regardless of their efficiency (Díaz-Alejandro 1970). The protectionist measures that were used included not only high tariffs on imports of goods that were also produced domestically, but also the requirement that farmers sell their crops to a state trading monopolyFootnote 9 that would profit from the difference between world prices and the prices paid to producers.
Import substitution gave the Peronist state control over resource allocation in the economy. By deciding which industries to protect and where to channel national credit, the Peronist government was able to discipline industrialists and determine the destination of investment. Either industrialists complied with the demands of the government or they were forced out and their capital was nationalized. The nationalization of private capital and Perón's military ambitions explain why the government became so deeply involved in the economy. Labor was also kept in line by the Professional Associations Act. Only one union was allowed to operate in each branch of activity; obviously, the government was entitled to decide which one could do so if two or more unions vied for the same branch. Outlawed unions had their bank accounts frozen and their offices closed.
As a result, the Peronist government cemented a closed-economy and import-substitution model for the years to come. The most important government intervention during the period 1945–1975 was the introduction of a relative price system which favored industry (and particularly labor-intensive industry) at the expense of the agricultural sector. As a consequence, internal relative prices diverged from international market prices, thus generating a sharp differential (which put the agricultural sector at a disadvantage) between the internal and external terms of trade (see Díaz-Alejandro 1970; Mallon and Sourrouille 1975). The triumph of the industrialization model under a closed economy, over time, and even after the demise of Perón, led to the adoption of a scheme of industrial integration which consisted of completing every step of the production process, from capital goods and inputs to final goods, inside the country's borders, in evident contradiction with the post-war tendency of developed countries, whose trade was and continues to be mainly intra-industry (see Llach 2002).
Behind these economic policy decisions, there was an alliance of economic and political interests formed by unions, industrialists and the armed forces. Unions consolidated their power by delivering better wages, working conditions and social protection to their members. Industrialists had achieved a considerable level of protection from competition. Finally, the military took the development of the steel and oil industries under its wing. Although this alliance was evidently born during the Peronist years, it had sufficient resilience to last even through the military governments and the periods of political proscription of Peronism (see, among others, Halperín Donghi 1994; Llach 2002).Footnote 10
Up to now, we have been assuming that the economy operated near the efficiency frontier. This is reasonable if we assume that capital allocation and employment decisions were made in a decentralized way by profit-maximizing agents. However, during Peronism and the years that followed until the collapse of the import-substitution model, the assumption is hard to maintain. Capital was allocated on the basis of political rather than economic considerations. Labor allocation was no less distorted: public employment was used as a means of combatting unemployment; moreover, unions regulated quantities and prices in their members' labor markets to the extent that they were politically able to do so.
Not surprisingly, income redistribution and industrial promotion policies rapidly ran up against a formidable constraint: exports stagnated (see Brambilla et al. in this volume). It is true that the stagnation of Argentine exports can be partly attributed to the global closure of markets and to the protectionist policies applied by industrial countries in agriculture that favored self-sufficiency (especially in Europe). However, it is also true that Argentina underperformed even in comparison to other countries that shared the same markets.
Argentina accounted for more than one-third of all Latin American exports in 1928, one-fourth in 1938 and only one-eighth in 1954. It exported mainly primary goods: corn, wheat, linen, wool and meat. The joint share of these five goods in world trade declined from 8.6% in 1926–1929 to 3.9% in 1960. Nevertheless, the fact that Argentina's share of these markets was roughly halved over the same period provides evidence of Argentina's decline relative to other agricultural exporters. Overall, Argentina's exports of these five primary products fell from 1.8% of world trade in the late 1920s to only 0.4% in 1960. If we analyze export trends by product, we see that, in that same period, Argentina's market share in corn decreased from 57% to 21%, in wheat from 20% to 9%, in linen from 73% to 40% and in meat from 40% to 24%, while its market share in wool remained unchanged at around 6% (see Llach 2006). The stagnation of Argentine exports placed an inescapable constraint on the country's growth.
In sum, during Peronism Argentina embarked on an ambitious import-substitution industrialization process backed by a coalition of industrial capitalists and workers. In the language of our model, the protectionist policies drove the economy from the diversification and trade area to a near-autarky situation.
A nation in Deadlock (1955–1973)
Toward the end of the 1950s, it was becoming clear that the world was entering a new free trade era and that the woes of the inter-war mercantilist period were over. However, taking advantage of the new international conditions required a painful period of readjustment. In terms of our model, as capital flows back to the primary sector, industrial capitalists and workers suffer the most, whereas landowners benefit greatly. At the domestic level, it was also clear that the shift toward the consumption frontier for mass-produced, labor-intensive domestic goods had come to an end. Steel, machinery, motor vehicles and petroleum were the activities that were being protected and promoted during this new phase of import substitution in Argentina, and all of these industries were more capital intensive than those targeted during the initial stage of import substitution (see Mallon and Sourrouille 1975).
Perón himself, after being reelected by a landslide, was seeking an economic alternative that would have inevitably entailed major economic and social readjustments. Nonetheless, Perón had taken note of the political risks of departing from the path that had, until that point, driven him toward the amplification of redistributive policies and import-substitution strategies. Indeed, Perón was ready to abandon nationalism to attract the foreign capital needed to sustain the deepening of the import-substitution model, but not to reverse the improvement in the distribution of income achieved under that model. Under these conditions, the armed forces abandoned their alliance with the unions and industrialists. High-ranking officers were becoming increasingly worried about the path that Argentina was taking under Perón's rule. They silently plotted against Perón and forced him from power in 1955.
Interestingly enough, all the governments between 1955 and 1973 tried, to the extent of their possibilities, to deepen the import-substitution process, which was still backed by an increasingly weakened coalition of workers and industrialists. The social revolution embodied by Peronism created a new society that took on a life of its own and that, even though it had no way to survive, simply refused to die (Halperín Donghi 1994).
On average, export incentives were larger during the period 1955–1973 than during the first post-war decade. But the policy tilt toward import substitution and away from exports remained a feature of the Argentine economy during the period 1955–1976. Argentina's effective rates of protection remained the highest in Latin America (Díaz-Alejandro 1984). Protectionism and hostility toward the rural producers of the pampas were hardly limited to the Peronist movement, nor was a strong nationalist stance toward foreign capital. As with export incentives, governments zigzagged in their policies toward foreign capital during this period. However, foreign corporations were nonetheless used as key instruments in expanding industrial production in consumer durables and in intermediate and capital goods (Díaz-Alejandro 1984).
These years also saw a steep increase in the consumption of services, many of which were provided by highly educated workers, for whom there was a strong demand in this sector. These educated workers began to break down the rural–urban political cleavage (see Galiani et al. 2010). As a result, the shift toward the promotion of more capital-intensive industries and the growth of a services sector catering to high-income and upper-middle-income groups gradually eclipsed distributionist protectionism.
Over time, sustained growth required more government intervention. The state had to finance the deficits run by public-sector enterprises, subsidize the substitution of capital-intensive imports and promote non-traditional exports. Yet it became less and less able to do so as trade revenues began to shrink under increasing autarky and as the surplus enjoyed by the social security system created under Perón melted away, turning into a deficit by the mid-1960s. The inflation tax thus became the adjustment variable for an increasingly conflict-ridden and unviable society (see Mallon and Sourrouille 1975).
The alliance between industrialists and workers began to grow stale. Labor unions faced a dilemma, since preventing wages from going down required limiting the supply of workers, and they knew all too well that having fewer members implied less power. They also knew, of course, that new investment in unionized activities would allow them to achieve both higher employment and higher wages. In sum, they needed modern and capitalized industries, but their own power kept capitalists away. The solution to the dilemma was direct government intervention and direct investment in industrial activities.
The alliance between workers and industrialists was also unstable. They both wanted high protection for industry, and hence their interests were aligned in this respect. However, their interests conflicted with respect to real wages. Thus, from time to time, when the economy needed to adjust to its consumption possibilities, the alliance would break down for a time (see O'Donnell 1977).
To complete this dim picture, some workers became increasingly disappointed with their union leaders and found hope in the promises of a "socialist fatherland" made by leftist groups. These groups accused the landowners of serving foreign interests and being unpatriotic. To differing degrees, depending on each group's political orientation, they proposed various strategies, with the most extreme one being the outright expropriation of land and its redistribution among the people by means of revolutionary violence.
To sum up, chronic inflation and recurrent cycles of recession and recovery—associated with substantial changes in income distribution arbitrated by the state (see Mallon and Sourrouille 1975; O'Donnell 1977)—were salient economic features throughout this period (and even beyond it). At the same time, social and political divisions grew increasingly tense, reaching such a point that violence dominated the political and economic life of the country. As a result, Argentina failed to regain its prosperity and to achieve a consensual political order; instead, it was stumbling along in a volatile stalemate. The successive administrations proved unable to prevent the progressive institutional decay of the country. Nevertheless, the darkest hour for Argentina was yet to come.
Crisis and reforms (1973–2010)
The intervention of the state in the economy increased substantially during the Peronist era and the next 20 years. There is a stark contrast between the industrialization process of the period 1920–1945 and that of 1946–1975. In the former, the private sector reacted to the shortage of foreign manufactured goods and led the way toward endogenous industrialization. In the latter, the state took an active role in deepening the import-substitution process. This led to decisions based on political expediency rather than economic rationality.
The industrialization process was guided by an alternation of administrations with different strategic objectives, so it is not surprising that, overall, it failed to achieve self-sufficiency or even a reasonably rational and coherent pattern of industrialization. The result was an essentially unbalanced development process that promptly ran into binding constraints: (a) the inadequate growth of exports was a very serious obstacle to the industrialization process, which required growing imports of capital and intermediate goods; and (b) the intensification of the industrialization process, especially the development of heavy industry, required larger subsidies that needed to be financed in some way. The government's inability to accomplish this task with fiscal resources drove inflation up to levels that were inconsistent with a healthy economic performance.
A final populist experiment (under President Perón and then his wife) in the early 1970s ended up in economic and political disorder. On the political side, it failed to curb the spiral of violence that leftist guerrillas had ignited in the late 1960s. On the economic side, the oil crisis exposed the weakness of the import substitution strategy. The increase in the price of imported oil, a vital input of the manufacturing sector, fueled inflation and reduced real wages.Footnote 11
A top-down disciplinarian military administration then took its place. The main economic objective of this government was to reduce inflation. A significant, although gradual and partial, market-oriented financial and trade liberalization program was also implemented. This time, the military government was quite intransigent in its attitude toward the other groups within the weakened industrialist alliance. In disciplining the unions, the military government not only suppressed collective bargaining and other union rights, as it had at other times in the past, but actually used its military might against union leaders, some of whom became victims of kidnappings and forced disappearance at its hands. Nevertheless, the unions were not destroyed and, after the return to democracy some years later, they again became a very powerful social force in the country. Industrial businessmen were also disciplined through trade liberalization measures.
The discipline imposed on both labor and capital was not reflected in fiscal austerity. With favorable international conditions for credit, the military–industrial complex was empowered, and public spending on infrastructure soared. Large business groups were also able to modernize considerably thanks to their easy access to cheap credit. Over time, both inflation inertia and the prevalence of large fiscal deficits made the exchange rate system of pre-announced gradual devaluations, which had been adopted to control inflation, unsustainable. Between 1979 and 1981, capital flight amounted to around 20% of GDP, leaving the government (which absorbed private sector external debt) with a hefty external debt that has influenced the country's economic performance ever since.
The country's extraordinary level of indebtedness paved the way for a fiscal and balance-of-payments crisis that dominated the political and economic scene during the 1980s. Throughout the 1980s, the Argentine economy posted its worst performance since the end of the Second World War. Investment collapsed. Per capita GDP decreased by approximately 20% between 1980 and 1989. Inflation was above 100% in every year except 1986. Both the external debt and the debt-to-exports ratio rose at an ominous pace. The dollarization of the economy deepened, increasing its financial fragility. Ultimately, in the presence of severe uncertainty at a time when the country was making its first democratic transition in decades, its high inflation gave way to a short but devastating bout of hyperinflation.
It was only after a brutal episode of hyperinflation that a comprehensive reform process was adopted (see, among others, Acuña et al. 2007). In the wake of its trade and financial reforms of the 1970s, Argentina had embarked upon a process of integration into the international economy. This was substantially deepened during the 1990s, when the Peronist administration privatized state enterprises and drastically reduced import tariffs and export duties. Labor unions, which had blocked free trade policies since 1955, were unable to effectively oppose these reforms (see, however, Acuña et al. 2007, for a discussion of how the government seduced union leaders into supporting the reformist agenda).
Although not without large social costs, measured by a substantial increase in poverty and inequality (see Alvaredo et al. in this volume), this reform process finally moved the Argentine economy toward a rational form of integration into the world economy. The recovery of the agricultural sector and the growth of exports have been spectacular (see Brambilla et al. in this volume). The surviving industries are genuinely competitive and largely oriented toward the processing of the natural resources with which the country is abundantly endowed (see Brambilla et al. in this volume).
The Peronist party (Justicialist Party) continues to dominate the political arena, having held office for 18 years in the period 1990–2010. Its support base has changed somewhat though. Now, its supporters can be found not only among unionized workers and public employees, but also among a large number of informal service workers and small rural producers. The challenge of the twenty-first century for the Peronist party is to build an alliance with landowners and rural producers in the pursuit of an export-led form of growth without losing the support of the vast number of people living in poverty as a result of 50 years of economic stagnation and a painful trade liberalization process. The prospects are not bright—the Peronist party has increasingly used political clientelistic practices to retain the support of the poorest segments of the population.
In the language of our model, the reform process initiated in the 1990s redirected capital to the primary sector and labor to the tertiary sector within the area of diversification of production and trade. The balance of power shifted away from the industrialists and toward the coalition of agricultural producers and service providers. During the 2000s, the improvement in the terms of trade has helped them to consolidate their power. The distributional conflict has not disappeared; there are urban sectors that would benefit from an increase in protectionism. However, the pro-agricultural coalition appears stronger than in the past. Indeed, in March 2008 a government attempt to increase export taxes on soybeans and sunflower was met with a nationwide lockout by farming associations. The proposal was finally defeated in Congress after 4 months of large-scale demonstrations in urban areas and roadblocks in rural areas.Footnote 12 However, as we learned from the country's experiences in the early twentieth century, such a coalition between landowners and service workers is viable only under favorable external conditions and is politically weak.
Why Argentina?
We have analyzed the economic history of twentieth-century Argentina as seen through the prism of a model that is a tractable, yet seemingly adequate, simplification. The model allows us to derive the preferences or attitudes of each socioeconomic group regarding protectionism. Without being explicit about the political process that determines the taxes on international trade, we have been able to support our main claim: the negative external shocks faced by the economy during the first half of the century spurred an endogenous industrialization process that had a profound impact on the political landscape of the second half of the century. Over the first half, capital and labor were reallocated from the primary and tertiary sectors to the secondary sector, and this changed the attitudes of the majority of the population with respect to protectionism. The import-substitution industrialization process was, in part, a response to those attitudes.
The argument presented in our model is similar to the Stolper–Samuelson (1941) result: if labor is assumed to be employed less intensively in the production of the exportable good, then protection should increase its real remuneration. However, once we include the labor-intensive non-tradable sector, this prediction no longer holds; with favorable terms of trade, wages can be higher under free trade (see also Galiani et al. 2009).Footnote 13 In this case, path dependence is introduced by assuming that physical capital adjusts slowly and that impatient workers are the pivotal group in the political process. The attitude of labor toward protectionism depends on the allocation of capital, which is assumed to be fixed in the medium run. This is also very relevant because it helps to explain the entire economic history of Argentina between 1870 and the present within a unified framework. In contrast, in the previous literature, the widely used Stolper–Samuelson theorem only helps to understand the rise of the urban–rural political cleavage that appeared following the Second World War, but it cannot account for the periods of integration into world trade seen in the late nineteenth century and after the fall of the Berlin Wall.
At first sight, it seems that this type of path-dependent anti-trade trap could have appeared in any economy; however, we claim that this is not the case. It is true that endogenous protectionism can arise in almost any economy if we assume some adjustment costs and persistent external volatility in the terms of trade. However, if the underlying distributional conflict is not too intense, the economy can gradually steer itself toward a more efficient pattern of trade. It is the intensity of the distributional conflict—determined mainly by technology and factor endowments—and the inability to resolve it by institutional means that place Argentina in a special situation.
Our model has three features that generate both path dependence and intense distributional conflict. First, the production of the exportable good does not use the pivot input—labor—intensively. Otherwise, the pivot group would tend to support free trade policies in the short and medium run. Second, the exportable good is an important component of the consumption bundle. Otherwise, it is possible to show that, in the medium run, workers would prefer a tariff level that decreases with the terms of trade; in that case, workers would prefer gradual liberalization as the terms of trade improve. Third, at the point in time when the terms of trade worsen, the economy has to have enough capital to start the endogenous industrialization process. Poor economies that have not accumulated enough capital yet are less prone to the severe distributional conflict described here. These three conditions fit fairly well for Argentina and point to what other economies we should look at in an effort to discern protectionist traps. We focus on land-rich newly settled countries, particularly Australia, since there is a long tradition of comparing Argentina with Australia in the literature (see, among others, Díaz-Alejandro 1984; Gerchunoff and Fajgelbaum 2006).
Argentina and Australia
There are a number of similarities between these two economies that make this exercise of comparative history worthwhile. First, their initial endowments, that is, the scarcity of labor relative to land, determined their position as exporters of agricultural goods. Second, there is the natural emergence of manufacturing sectors in response to the natural protection provided by exogenous international conditions and the distance from the main industrial centers. Third, there is the demand for protectionism by urban manufacturing interests. As a result, both countries relied heavily on tariffs and quantitative restrictions on trade to provide protection for their manufacturing sectors. These policies were blamed for the relatively poor performance of these economies and were eventually abandoned by the end of the twentieth century, although not without opposition from vested interest groups.
Anderson (2002) states that "seven decades of import-substituting industrialization cost Australia dearly in terms of its comparative standard of living. In 1900, Australia was arguably the highest-income country in the world on a per capita basis. But by 1950 its rank had slipped to third; by 1970 it was eighth; and by the 1990s Australia was not even in the top twenty" and that "Australia's comparatively poor growth performance for most of the twentieth century contrasts with that of the final decade, when Australia out-performed all other advanced economies other than Ireland and Norway". The author claims that part of that success is attributable to the "belated opening of the Australian economy to the rest of the world".
The differences between these two cases start to appear when we focus on the intensity of the distributional conflict and the institutional settings where this conflict needed to be resolved. We claim that the Argentine distributional conflict was more intense and that its institutions were weaker. As a result, while Australia was able to overcome its conflict, Argentina was overwhelmed by it. Moreover, international and geopolitical conditions helped to ease the Australian anti-trade trap, but not the Argentine one. In what follows, we stress some key differences between these two economies and show how they contribute to our argument.
From endowments to institutions
Since its creation in 1901, the Australian Federation adopted protectionist trade policies that were strengthened during the course of the twentieth century up until 1973, when the country entered into a gradual but steady process of liberalization (see, among others, Anderson 1998, 2002; Anderson and Garnaut 1987; Corden 1996; Garnaut 2002).
The Australian gold rushes of the late nineteenth century sparked an early influx of immigrants who helped to consolidate a mining export sector. The mining sector had powerful forward and backward industrial linkages that generated interest in scientific and technical research, as well as giving rise to a unionized labor force across the economy. The trade unions and entrepreneurs involved with mining coalesced into political groups that opposed the creation of a ruling landowning elite.
In 1901, the Labor Party joined the Protectionist Party to form the first government of the Australian Federation. Two key issues on the political agenda were the level of protectionism and immigration policy. The government successfully passed the Immigration Restriction Act of 1901, which formed the basis for the White Australia Policy. However, the government had to reach a compromise with the Free Trade Party to set import tariffs in 1902.
Australian immigration policies have been substantially different from those of Argentina. As mentioned before, the Argentine elite chose to promote immigration. Argentina's population went from 1.35 million in 1861 to 11.28 million in 1928, while, in Australia, it went from 1.2 to 6.22 million. In Argentina, this decreased wages and increased the return on land. Indeed, Taylor (1997) calibrates a general equilibrium model to estimate the impact on wages of the massive flow of immigration to Argentina up to the First World War. His calibration suggests that the flow of immigration reduced real wages in Argentina by approximately 20% from what wage levels would have been if immigration had not taken place.
What is more, and in spite of similar factor endowments, land was more concentrated in Argentina than in Australia, where family-operated, medium-sized farms were relatively more common. As a consequence, landowners in Australia did not constitute an oligarchy as they did in Argentina; they were a broad social group and were not a ruling class. Landlords in Australia never controlled the governmental machinery as they did in Argentina (see Hirst 1979).
To sum up, by the beginning of the twentieth century, the Australian labor movement was already mature and consolidated, had an active role in the policymaking process and had successfully demanded protection and restrictions on the flow of immigrants. However, it was not a hegemonic party; it had to make compromises with the Free Trade Party, which represented the interests of the agricultural sector. In Argentina, the ruling elite had vested interests in the agricultural sector and did not need to compromise with antagonistic interest groups. Even before the 1930s crisis, Australia was already experiencing a distributional conflict similar to the one described in our model, and it found institutional ways to deal with it. In practice, Australia had a democratic government, while Argentina had an autocratic government ruled by the oligarchic landlord class.
Australia's stronger institutions also translated into better policymaking. In 1921, the Australian government moved to protect the industries that had expanded during the war; however, recognizing that vested interest groups would attempt to influence the policymaking process, it established the Tariff Board, an advisory body composed of "disinterested experts" to provide technical advice to both the Parliament and the Minister for Trade and Customs. This development had two direct benefits that would facilitate the process of liberalization. First, as noted, it reduced the direct influence of interest groups. Second, it created a bureaucracy with technical expertise on the matter.
The Australian factor endowment also helped to reduce the intensity of the distributional conflict. While Argentine exports were mainly agricultural goods, an important component of the consumption bundle, a large share of Australian exports were mineral products that do not enter directly into the consumption bundle. Free trade policies were more harmful to Argentine workers.
Liberalization
By the late 1960s, there was consensus among Australian economists on the benefits of import liberalization. These views came to be adopted first by the members of the Tariff Board and then by politicians. However, public opinion continued to show support for protectionism. Interestingly, the first move toward liberalization was in 1973 under a government led by the Labor Party, whose constituents tended to be stronger supporters of protection. From then on, Australia embarked on a gradual but steady path toward free trade. This process was facilitated by favorable external and internal conditions that reduced the intensity of the distributional conflict and by properly functioning institutions that made intertemporal bargaining possible.
The rise of East Asia as a potential trading partner interested not only in Australian raw materials but also in Australian manufactures shifted the Labor Party's views on protectionism. Closer integration into the regional economy through trade liberalization would increase the demand for exports of manufactures that were more labor-intensive than traditional exports (see Díaz-Alejandro 1984; Gerchunoff and Fajgelbaum 2006).
Not only Labor Party leaders, but also the Australian Council of Trade Unions (labor) and the Business Council (mining and service industries) advocated free trade. Recognizing the effects of protection on export performance, both farming and mining groups joined the public debate. At a federal level, the exporting states also supported liberalization. The textiles, clothing and footwear, and automobile industries, which enjoyed ample protection, invested heavily in political activity aimed at maintaining protectionism. However, these industries were already declining by the mid-1970s and they were further weakened by successive tariff reductions from then on (see Garnaut 2002).
These external and internal developments changed the nature of the distributional conflict associated with trade policy. Only capitalists and workers employed in import-competing activities would oppose liberalization in the short run. However, as part of a gradual, steady and predictable process of liberalization, new capital investments were redirected toward activities that were not dependent on protection, while, at the same time, vested interests were not harmed. The role played by the institutions and the political leadership that took part in this task is remarkable. The political system was able to set long-term policy goals to guide economic activity without imposing large adjustment costs in terms of output or employment.
In contrast, during the early 1970s, Argentina was immersed in what was tantamount to a civil war in which leftist groups were trying to create a socialist country that would expropriate the holdings of the oligarchic landlords and transfer the land to poor rural workers. Even when the economy was opened to trade during the second part of the 1970s, this was not done by consensus. Instead, it was the result of a unilateral decision made by a military government aligned with landlords and the capitalists that could survive integration with the world economy and that were threatened by the fierce distributive conflict that arose during the last Peronist government. The second attempt to integrate the country with the world was made during the 1990s, after a devastating episode of hyperinflation, by a government that campaigned on a populist agenda. Both these attempts were abrupt and were conducted as shock policies by political groups that had political power but did not represent a consensus view on the part of the population. Thus, trade reform was abrupt and did not provide any way to smooth out losses. Even today, when serious attempts to restrict trade are being made by the current government, a large segment of the population sees the two episodes of trade liberalization as disastrous.
To sum up, the distributional conflict in Australia was mitigated both by a differential initial factor endowment that led to the appearance of different organizations and institutions in society and, later, by the rise of East Asia as a trading partner. Moreover, Australian institutions were well suited to pursue a gradual process of adjustment to minimize the losses of those who had sunk investments in protected industries, while Argentine institutions and organizations did not display those capabilities. In a context of policy path dependence, all these differences ended up making a substantial difference in the outcomes.
Up to the 1930s, Argentina was well integrated into the world economy and, though some protectionism naturally developed after the Great Depression of the 1930s, it was only after the Second World War that the country closed itself off from world markets. It then remained in a situation close to autarky until the mid-1970s. It was only after a long period of absolute economic decline and a devastating bout of hyperinflation that a comprehensive program of reform and integration into the world economy was adopted.
We use a model with two tradable goods and one non-tradable good. We assume that Argentina has a comparative advantage in the production of agricultural goods. Thus, it might or might not produce manufactured goods. It also produces services. We assume that the agricultural good is produced in the primary sector using land and capital, while the manufactured good is produced in the secondary sector using labor and capital. Services are produced using labor only. We also assume that capital moves between the primary and secondary sectors more slowly than labor moves between the secondary and tertiary sectors. This gives rise to three different time horizons: the short run (no factor reallocation), the medium run (only labor adjusts) and the long run (full reallocation).
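One parameterization consistent with this description, and with the linear homogeneity and unitary elasticity of substitution mentioned in the notes at the end of this section, is the following Cobb–Douglas sketch; the specific functional forms and exponents are ours and are meant only as an illustration:
\[
Y_a = A\,T^{\alpha}K_a^{1-\alpha}, \qquad Y_m = M\,K_m^{\beta}L_m^{1-\beta}, \qquad Y_s = L_s ,
\]
where \(T\) is the fixed stock of land, capital \(K = K_a + K_m\) is reallocated across the primary and secondary sectors only in the long run, and labor \(L = L_m + L_s\) moves between the secondary and tertiary sectors in the medium run. The exponent \(\beta\) is chosen so that the labor share in the secondary sector is \(1-\beta\), matching the wage-share expression quoted earlier; Cobb–Douglas preferences with expenditure shares \(\phi_a\), \(\phi_m\) and \(1-\phi_a-\phi_m\) over the agricultural good, the manufactured good and services would likewise be consistent with that expression.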
We show that import-substitution policies exhibit path dependence. Indeed, this is a very important insight in understanding the economic history of Argentina. We also use our model to characterize the demands for protectionist policies of the different groups in the economy. In the short run, landowners, capitalists who have invested in the primary sector and workers employed in the tertiary sector support free trade policies. On the other hand, capitalists and workers in the secondary sector support protectionist policies. In the medium run, workers behave as a group and will support protectionist policies if the industrial sector is sufficiently developed (i.e., the secondary sector employs enough labor and capital). In the long run, workers will support free trade if the terms of trade are favorable enough.
Using the insights derived from our model, we then argue that much of the distributional conflict that arose was among owners of different production inputs and that trade policies were widely used to shift income across groups. At the beginning of the century, factor allocation resembled what we call "specialization and trade." During the inter-war period, trade opportunities and the terms of trade worsened, which led to an incipient industrialization process. Argentina started the second half of the century with a very different economic configuration, as industrialization had come a long way in terms of what we refer to as diversification and trade. These new economic conditions also changed the political equilibrium. Urban workers employed in the manufacturing sector and industrialists were now major social actors who demanded that the industrialization process be deepened, which hurt trade and took the economy close to autarky. The years that followed the Second World War witnessed an extraordinary expansion of trade in which Argentina was not an active participant. We contend that one important reason behind this outcome was the set of protectionist policies that were enacted in the years following that war and that the main supporters of these policies were the new political forces that emerged from the industrialization process in the inter-war period.
The second half of the century was characterized by a strong distributional conflict centered on trade policy. Traditional sectors composed of owners of factors employed in the primary sector supported free trade policies, whereas the newer political forces supported protectionism and import substitution. Argentina embarked on an ambitious process of import substitution that aimed at achieving self-sufficiency, especially in activities deemed strategic, such as oil and steel. As domestically produced goods were substituted for labor-intensive imported manufactures, the industrial sector grew and drew inputs from other sectors. The substitution of capital-intensive activities was more problematic. Some of these activities were not profitable even though they had a captive internal market. With little regard for economic rationality, the government took an active role in developing these activities through public enterprises that became a chronic source of deficits.
Instead of delivering a steady path of inward-oriented growth, the import-substitution strategy resulted in bumpy cycles of economic expansion followed by sharp recession. Liberalization promised a return to export-led growth; however, it would cost agents with vested interests in protected activities dearly. The protectionist coalition of industrial capitalists and unionized workers had enough political power to keep liberalization off the policy agenda.
Accomplishing a gradual liberalization process that mitigated the losses of those with vested interests, and defining clear and sound long-term policy goals, required a set of political institutions capable of enforcing intertemporal agreements between political groups. Sadly, Argentina lacked such institutions. Instead, the dismantlement of the import-substitution strategy came only after the protectionist coalition had become sufficiently weakened. The steps taken toward liberalization were abrupt and were conducted as shock policies by political groups that held political power but did not represent a consensus view among the population. Moreover, the reform program did not provide any way to smooth out the losses. As a result, Argentina's integration into world markets was extremely costly in terms of inequality and poverty.
Argentina had to wait to reap the benefits of liberalization until the first decade of the twenty-first century, when favorable commodity prices in world markets fueled rapid economic growth. As the primary sector gained in productivity and received large capital inflows and as employment in the tertiary sector soared, the demand for protectionism was reduced. However, the distributional conflict centered on trade policy survived the turn of the century and remains important.
The parameters A and M in the production functions of the tradable goods can be interpreted as neutral technological shocks. However, if the production function were instead to include an additional imported input with a low elasticity of substitution, then an increase in the price of that input could be interpreted as a change in A and/or M.
Homogeneity of degree one allows us to ignore distributional issues in computing the steady state of the economy and studying its equilibrium properties. Unitary elasticity of substitution also simplifies the computation of the steady state.
We can reinterpret our model to accommodate an imported input. For a linearly homogeneous Cobb–Douglas production function on \(K,L\) and the imported input \(F\), we can write the value-added function \({\text{VA}} = Y - p_{\text{f}} F\). If \(F\) is chosen optimally for a given \(p_{\text{f}}\), \(K\), and \(L\), then the value-added function is also a linearly homogenous Cobb–Douglas on \(K\) and \(L\). Our production functions should be reinterpreted as value-added functions. An increase in the international price of the imported input can be reinterpreted as a negative productivity shock in the sector where the input is employed.
We will assume that capitalists are not farsighted. We are careful to draw the distinction between different time horizons in view of the fact that capital is not perfectly mobile across sectors. If we were to assume that capital is, in fact, not mobile at all and that capital reallocation occurs only through a process in which depreciated capital in one sector is not replaced while the other sector has a positive net rate of investment, then it would make perfect sense to assume that capitalists whose capital is already locked into one of the two sectors will only care about the short and medium terms.
There is a significant difference between the outcomes in the short and medium terms. In the medium run, workers are a homogeneous group and, when they change their preferences toward protectionism, they do so as a group. In the short run, only those employed in the secondary sector will support protectionist policies; therefore, anti-trade policies gain adherents gradually as \(\lambda\) increases.
It will also reduce \(\lambda\) if the EoS in consumption is less than 1 and less than the EoS in the production of manufactures.
After successive rounds of negotiations, substantial tariff reductions were put into practice, mainly for industrial products. Unfortunately for Argentina, distortions in the trade of agriculture products remained relatively high. In the USA, subsidies to American farmers date from the Great Depression, whereas, in Europe, protectionism in agriculture emerged in response to the food shortages that the continent suffered during the Second World War.
Of course, the size of the market played an important role in promoting industrialization. In other words, in a much poorer country the same shock might have promoted industrialization for export activities, but it would not necessarily have led to import substitution.
The Junta Reguladora de Granos, created in 1933, was transformed into the Instituto Argentino de Promoción del Intercambio (IAPI) in 1946. The JRG and IAPI worked in very different ways. The JRG operated in a period when external demand for agricultural products was weak. The goal of the JRG was to control supply to prevent domestic prices from falling. It benefited producers at the expense of consumers. The IAPI worked in the opposite way in a period when external demand was strong. It profited from buying from domestic producers below international prices. As a result, domestic prices were kept in check, benefiting consumers at the expense of producers.
This alliance was very effective at maintaining and obtaining new rents from the state (see Mallon and Sourrouille 1975).
Recall that in our model, the oil price hike can be interpreted as a negative productivity shock to the manufacturing sector.
In appendix B, we exploit this natural experiment to provide evidence that: (a) trade policies are still a key component of electoral competition; and (b) the coalitions vote as suggested by our model.
With capital mobility, wages are a U-shaped function of the terms of trade. Wages are high either under specialization and trade with favorable terms of trade, or under autarky or reversal of the terms of trade. The lowest wages are at the frontier between specialization and diversification.
Acuña C, Galiani S, Tommasi M (2007) Understanding reforms: the case of Argentina. In: Fanelli JJ (ed) Understanding reform in Latin America. Palgrave-Macmillan, Basingstoke
Adelman J (1994) Frontier development: land, labour, and capital on the wheatlands of Argentina and Canada, 1890–1914. Clarendon Press, Oxford
Anderson K (1998) Are resource-abundant economies disadvantaged? Aust J Agric Resour Econ 42:1–23
Anderson K (2002) International trade and industry policies. CIES discussion paper 0216
Anderson K, Garnaut R (1987) Australian protectionism: extent, causes and effects. Allen & Unwin, Sydney
Baldwin R, Martin P (1999) Two waves of globalization: superficial similarities, fundamental differences. NBER working paper 6904, National Bureau of Economic Research, Cambridge, MA
Cantón D (1968) Materiales para el estudio de la sociología política en la Argentina. Mimeo, Buenos Aires
Clemens M, Williamson J (2002) Closed Jaguar, open dragon: comparing tariffs in Latin America and Asia before World War II. National Bureau of Economic Research, working paper 9401
Corden WM (1996) Protection liberalisation in Australia and abroad. Aust Econ Rev 29(2):141–154
Cortés-Conde R (1979) El progreso Argentino: 1880-1914. Editorial Sudamericana, Buenos Aires
Cortés-Conde R (1998) Progreso y Declinacion de La Economia Argentina: Un Analisis Historico Institucional. Fondo de Cultura Economica, Buenos Aires
Díaz-Alejandro CF (1970) Essays on the economic history of the Argentine republic. Yale University Press, New Haven
Díaz-Alejandro CF (1984) No less than one hundred years of Argentine economic history plus some comparisons. In: Ranis G, West R, Leirsenson M, Taft Morris C (eds) Trade, comparative development perspectives. Westview Press, Boulder, CO
Galiani S, Gerchunoff P (2003) The labor market. In: Della Paolera G, Taylor A (eds) A new economic history of Argentina. Cambridge University Press, Cambridge
Galiani S, Heymann D, Dabus C, Thome F (2008) On the emergence of public education in land-rich economies. J Dev Econ 86:434–446
Galiani S, Schofield N, Torrens G (2009) Factor endowments, democracy and trade policy divergence. J Pub Econ Theory 16:119–156
Galiani S, Heymann D, Magud N (2010) On the distributive effects of terms of trade shocks: the role of non-tradable goods. IMF working papers, pp 1–38
Garnaut R (2002) Australia: a case study of unilateral trade liberalisation. In: Bagwati J (ed) Going alone: the case for relaxed reciprocity in freeing trade. The MIT Press, Cambridge, pp 139–166
Gerchunoff P (1989) Peronist economic policies, 1946-55. In: Di Tella G, Dornbusch R (eds) The political economy of Argentina, 1946–1983. St Antony's series. Oxford University Press, Oxford
Gerchunoff P, Fajgelbaum P (2006) Por que Argentina no fue Australia? Una Hipotesis sobre un Cambio de Rumbo. Editorial Siglo XXI, Argentina
Gerchunoff P, Llach L (2004) Entre la Equidad y el Crecimiento: Ascenso y Caída de la Economía Argentina, 1880-2002. Siglo XXI, Argentina
Gomez-Galvarriato A, Williamson J (2009) Was it prices, productivity or policy? Latin American industrialisation after 1870. J Lat Am Stud 41:663–694
Halperín Donghi T (1994) La Larga Agonía de la Argentina Peronista. Ariel, Argentina
Heymann D (1984) Precios relativos, riqueza y producción. Ens Econ 29:53–90
Hirst J (1979) La sociedad rural y la política en Australia, 1859–1930. In: Fogarty J, Gallo E, Dieguez H (eds) Argentina y Australia. Instituto Torcuato Di Tella, Buenos Aires
Llach JJ (2002) La Industria, 1945-1976. In: Nueva Historia de la Nación Argentina. Academia Nacional de Historia, Editorial Planeta, Argentina
Llach L (2006) Argentina y el mercado mundial de sus productos, 1920-1976. Estudios y Perspectivas Series, no. 35, Economic Commission for Latin America and the Caribbean (ECLAC)
Mallon RD, Sourrouille JV (1975) Economic policymaking in a conflict society: the Argentine case. Harvard University Press, Cambridge
North D (1990) Institutions, institutional change and economic performance. Cambridge University Press, New York
O'Donnell G (1977) Estado y Alianzas en la Argentina, 1956-1976. Desarro Econ 16(64):523–554
O'Rourke KH, Williamson JG (1999) Globalization and history: the evolution of a nineteenth-century Atlantic economy. The MIT Press, Cambridge
Rogowski R (1989) Commerce and coalitions: how trade affects domestic political alignments. Princeton University Press, Princeton
Spiller P, Tommasi M (2009) The institutional foundations of public policy in Argentina: a transactions cost approach. Cambridge University Press, Cambridge
Stolper WF, Samuelson PA (1941) Protection and real wages. Rev Econ Stud 9(1):58–73
Taylor AM (1994) Three phases of Argentine economic growth. NBER historical working paper, no. 60
Taylor AM (1997) Peopling the pampa: on the impact of mass migration to the River Plate, 1870–1914. Explor Econ Hist 34(1):100–132
Torre JC (2002) Introducción a los años Peronistas. In: Torre JC (ed) Nueva Historia Argentina. Editorial Sudamericana, Argentina
Villanueva J (1972) El origen de la industrialización Argentina. Desarrollo Económico 12:451–476
Waisman C (1987) Reversal of development in Argentina. Princeton University Press, Princeton
Williamson J (2002) Winners and losers over two centuries of globalization. NBER working paper 9161, National Bureau of Economic Research, Cambridge, MA
We are grateful for the comments provided by editors Rafael Di Tella and Edward Glaeser, the three anonymous referees, Hugo Hopenhayn, Douglass North, Jeffrey Williamson, and seminar participants at Harvard (March 2009) and LACEA (October 2009) in Buenos Aires. We have also benefited greatly from conversations with D. Heymann and would like to thank Ivan Torre for his excellent research assistance.
Department of Economics, University of Maryland, College Park, MD, USA
Sebastián Galiani
Stanford Graduate School of Business, Stanford, CA, USA
Paulo Somaini
NBER, Cambridge, MA, USA
Correspondence to Paulo Somaini.
In this appendix, we solve the long-run equilibrium of the model presented in Sect. 3. We also derive the effect of export taxes on real factor remuneration in the short, medium and long terms.
Let Υ denote the degree of comparative advantage of the secondary sector and π denote the international price of the agricultural good relative to the manufacturing good, i.e., the terms of trade:
$$\varUpsilon = \frac{M}{A}\frac{{L^{1 - \beta } K^{\beta - \alpha } }}{{T^{1 - \alpha } }},$$
$$\pi = \frac{{p_{\text{a}} }}{{p_{\text{m}} }}.$$
Moreover, let
$$\lambda = \frac{{L_{\text{m}} }}{L},$$
$$\kappa = \frac{{K_{\text{m}} }}{K}.$$
That is, λ is the share of workers employed in the manufacturing sector and κ is the share of units of capital employed in that sector. We seek to characterize the steady-state ratios κ and λ as functions of the technological and preference parameters, factor endowments and exogenous variables: terms of trade π and the ad valorem tax rate on exports τ.
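As a quick illustration of these definitions, the following sketch (not part of the original analysis; all numbers are illustrative assumptions, not a calibration) computes Υ and π from primitive endowments, productivities and international prices:

```python
# A tiny illustration (invented numbers, not a calibration) of the two summary statistics
# defined above: the comparative-advantage index Upsilon and the terms of trade pi.
A, M = 1.0, 1.2            # productivity in the primary and secondary sectors
L, K, T = 1.0, 1.0, 1.0    # endowments of labor, capital and land (normalized)
alpha, beta = 0.3, 0.5     # capital shares in the two technologies
p_a, p_m = 1.2, 1.0        # international prices of the two tradable goods

Upsilon = (M / A) * L**(1 - beta) * K**(beta - alpha) / T**(1 - alpha)
pi = p_a / p_m
print(f"Upsilon = {Upsilon:.2f}, pi = {pi:.2f}")
```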
Since land is used only in the primary sector, its outside opportunity cost is zero. Given our technological assumptions, the marginal product of the first infinitesimal unit of capital employed in the primary sector is infinite; therefore κ < 1, i.e., the primary sector always employs some capital.
The demand for capital in the primary sector solves the following first-order condition for profit optimization of the representative firm in the sector:
$$\alpha \left( {\frac{1}{1 - \kappa }} \right)^{1 - \alpha } p_{\text{a}}^{d} = \frac{{r_{\text{a}} K}}{{K^{\alpha } T^{1 - \alpha } A}},$$
where \(p_{\text{a}}^{d}\) is the domestic price of the agricultural good and r a is the return to capital in the primary sector. Similarly, the demand for land in the primary sector, given the land rental rate, q, is given by:
$$\left( {1 - \alpha } \right)\left( {1 - \kappa } \right)^{\alpha } p_{\text{a}}^{d} = \frac{qT}{{K^{\alpha } T^{1 - \alpha } A}}.$$
If some capital is also employed in the secondary sector, then the demand for capital in the secondary sector satisfies:
$$\beta \left( {\frac{\lambda }{\kappa }} \right)^{1 - \beta } \varUpsilon p_{\text{m}}^{d} = \frac{{r_{\text{m}} K}}{{K^{\alpha } T^{1 - \alpha } A}},$$
where \(p_{\text{m}}^{d}\) is the domestic price of the manufactured good and r m is the return to capital in the secondary sector. The demand for labor in the sector is given by:
$$(1 - \beta )\left( {\frac{\kappa }{\lambda }} \right)^{\beta } \varUpsilon p_{\text{m}}^{d} = \frac{Lw}{{K^{\alpha } T^{1 - \alpha } A}},$$
where w is the wage rate.
The Cobb–Douglas utility function that we use to represent the preferences of consumers implies that the share of each good in total expenditure is constant. Let ϕ a, ϕ m be the shares of the agricultural and manufactured goods, respectively. Naturally, 1 − ϕ a − ϕ m is the share of the service good. The aggregate demand for each good (c a, c m and c s) satisfies the following maximizing condition:
$$\frac{{c_{\text{m}} p_{\text{m}}^{d} }}{{\phi_{\text{m}} }} = \frac{{c_{\text{a}} p_{\text{a}}^{d} }}{{\phi_{\text{a}} }} = \frac{{\left( {1 - \lambda } \right)Lw}}{{1 - \phi_{\text{a}} - \phi_{\text{m}} }},$$
where we have already imposed the market equilibrium condition in the non-tradable sector:
$$c_{\text{s}} = \left( {1 - \lambda } \right)Lw.$$
In an open economy without international capital markets, trade is balanced in each period. Therefore,
$$\kappa^{\beta } \lambda^{1 - \beta } \varUpsilon + \pi \left( {1 - \kappa } \right)^{\alpha } = \frac{{c_{\text{m}} + \pi c_{\text{a}} }}{{K^{\alpha } T^{1 - \alpha } A}}$$
If the country is trading internationally, the domestic price of the agricultural good is \(p_{\text{a}}^{d} = \left( 1 - \tau \right)p_{\text{a}}\). By the Lerner symmetry theorem, we can set the import tax to zero without loss of generality; therefore, \(p_{\text{m}}^{d} = p_{\text{m}}\).
The following subsections solve the different types of steady-state equilibria that might exist. First, we study the autarky equilibrium. We derive the shares λ aut and κ aut and the autarky relative domestic price p aut. This price has to be such that π(1 − τ) ≤ p aut ≤ π: it is not profitable to export or import goods. Second, we study the equilibrium under specialization in the production of primary goods. We derive the input prices w and r and then obtain the marginal cost of producing the manufactured good. This marginal cost has to be higher than the international price of the manufactured good. Third, we study the equilibrium under diversification and trade. We derive the shares λ and κ and the exports of primary goods. All of these three variables have to be positive in equilibrium. Finally, we derive the equilibrium under reversal of the pattern of trade. We proceed in the same way as in the case of diversification and trade, but now we set τ = 0 and we require the exports of the manufactured good to be positive.
Autarky equilibrium
We now solve the model for autarky by imposing that the consumed quantities equal the produced quantities for each of the three goods:
$$\begin{aligned} \frac{{c_{\text{m}} }}{{K^{\alpha } T^{1 - \alpha } A}} \,= \kappa^{\beta } \lambda^{1 - \beta } \varUpsilon , \hfill \\ \frac{{c_{\text{a}} }}{{K^{\alpha } T^{1 - \alpha } A}} = \left( {1 - \kappa } \right)^{\alpha } . \hfill \\ \end{aligned}$$
Using 1, 2, 4, 5, 6 and 8, we derive the following values for λ aut, κ aut and the autarky relative domestic price p aut:
$$\begin{aligned} \kappa_{\text{aut}} = & \frac{{\phi_{\text{m}} \beta }}{{\phi_{\text{m}} \beta + \phi_{\text{a}} \alpha }}, \\ \lambda_{\text{aut}} = & \frac{{\phi_{\text{m}} (1 - \beta )}}{{\phi_{\text{m}} (1 - \beta ) + \left( {1 - \phi_{\text{a}} - \phi_{\text{m}} } \right)}}, \\ p_{\text{aut}} = & \frac{{\beta^{\beta } }}{{\alpha^{\alpha } }}\left( {\phi_{\text{m}} \beta + \phi_{\text{a}} \alpha } \right)^{\alpha - \beta } \phi_{\text{a}}^{1 - \alpha } \left( {\frac{(1 - \beta )}{{\left( {\left( {1 - \phi_{\text{a}} - \beta \phi_{\text{m}} } \right)} \right)}}} \right)^{1 - \beta } \varUpsilon . \\ \end{aligned}$$
For autarky to be a steady-state equilibrium, p aut has to satisfy
$$\pi \left( {1 - \tau } \right) \le p_{\text{aut}} \le \pi .$$
Otherwise, there are arbitrage opportunities for exporting and importing goods.
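The autarky shares and the autarky price have closed forms, so they are straightforward to compute numerically. The sketch below (illustrative parameter values, not a calibration from the paper) evaluates κ_aut, λ_aut and p_aut and checks the no-arbitrage condition for a given (π, τ):

```python
# A minimal numerical sketch of the autarky solution above. Parameter values are
# illustrative assumptions, not a calibration to Argentina.
alpha, beta = 0.3, 0.5          # capital shares in the primary and secondary sectors
phi_a, phi_m = 0.2, 0.3         # expenditure shares of agricultural and manufactured goods
Upsilon = 1.0                   # degree of comparative advantage of the secondary sector

kappa_aut = phi_m * beta / (phi_m * beta + phi_a * alpha)
lambda_aut = phi_m * (1 - beta) / (phi_m * (1 - beta) + (1 - phi_a - phi_m))
p_aut = (beta**beta / alpha**alpha) * (phi_m * beta + phi_a * alpha)**(alpha - beta) \
        * phi_a**(1 - alpha) \
        * ((1 - beta) / (1 - phi_a - beta * phi_m))**(1 - beta) * Upsilon

def autarky_is_equilibrium(pi, tau):
    """No-arbitrage condition: pi*(1 - tau) <= p_aut <= pi."""
    return pi * (1 - tau) <= p_aut <= pi

print(f"kappa_aut = {kappa_aut:.3f}, lambda_aut = {lambda_aut:.3f}, p_aut = {p_aut:.3f}")
print("autarky at (pi, tau) = (0.5, 0.3)?", autarky_is_equilibrium(0.5, 0.3))
```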
Equilibrium under specialization
A specialized economy imports the secondary good and produces and exports the agricultural good. The economy is specialized in the primary sector if there is no capital or labor employed in the secondary sector; therefore: κ = λ = 0. For this to be an equilibrium, the wages and capital rental rate paid in the other sectors of the economy must be greater than what can be profitably paid by the secondary sector.
$$mc_{\text{m}} = \left[ {\left[ {\frac{1 - \beta }{\beta }} \right]^{\beta } + \left[ {\frac{\beta }{1 - \beta }} \right]^{1 - \beta } } \right]r^{\beta } w^{1 - \beta } M^{ - 1} \ge p_{\text{m}}^{d} .$$
Using 1, 5, 7 and 10, setting λ = κ = 0, \(p_{\text{m}}^{d} = p_{\text{m}}\) and \(p_{\text{a}}^{d} = (1 - \tau )p_{\text{a}}\), we obtain that specialization is an equilibrium if
$$\varUpsilon \le \left[ {\left[ {\frac{1 - \beta }{\beta }} \right]^{\beta } + \left[ {\frac{\beta }{1 - \beta }} \right]^{1 - \beta } } \right]\alpha^{\beta } \left[ {\frac{{\left( {1 - \phi_{\text{a}} - \phi_{\text{m}} } \right)}}{{\left( {\phi_{\text{m}} \left( {1 - \tau } \right) + \phi_{\text{a}} } \right)}}} \right]^{1 - \beta } \left( {1 - \tau } \right)\pi .$$
Otherwise, there will be diversification. Naturally, ceteris paribus, for favorable enough terms of trade, the economy will specialize in the production of primary goods.
Diversification and trade
Using 1,2, 4,5, 7 and imposing \(p_{\text{m}}^{d} = p_{\text{m}}\) and \(p_{\text{a}}^{d} = (1 - \tau )p_{\text{a}}\), we solve for the endogenous variables κ and λ.
From the conditions 1 and 2, we obtain λ as an increasing function of κ:
$$\lambda = \left[ {\frac{\alpha }{\beta }\left( {\frac{1}{1 - \kappa }} \right)^{1 - \alpha } \left( {1 - \tau } \right)\frac{\pi }{\varUpsilon }} \right]^{{\frac{1}{1 - \beta }}} \kappa .$$
From 4, 5 and 7 we deduce:
$$\frac{\lambda }{1 - \lambda } + \left( \frac{\pi }{\varUpsilon } \right)^{\frac{1}{1 - \beta }} \frac{\left( 1 - \kappa \right)^{\frac{\alpha - \beta }{1 - \beta }} }{1 - \lambda }\left[ \frac{\alpha }{\beta }\left( 1 - \tau \right) \right]^{\frac{\beta }{1 - \beta }} = \frac{\phi_{\text{m}} + \frac{\phi_{\text{a}} }{1 - \tau }}{1 - \phi_{\text{a}} - \phi_{\text{m}} }\left(1 - \beta \right).$$
If β > α, then the left-hand side of the former expression is increasing in κ, whereas the right-hand side is constant. Thus, there is at most one value of κ that satisfies this expression; λ*and κ* denote the shares that satisfy Eq. 11.
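Because the condition above is monotone in κ when β > α, the equilibrium can be found with a one-dimensional root search. The following sketch (illustrative parameters, not from the paper) substitutes the relation λ(κ) derived above into the goods-market condition and solves for κ* and λ* with a bracketing method; the bracket assumes that the chosen (π, τ) lie in the diversification-and-trade region, which holds for these values:

```python
# A numerical sketch (illustrative parameters, not from the paper) of the diversification-
# and-trade equilibrium: substitute lambda(kappa) into the goods-market condition above
# and locate the unique root in kappa (beta > alpha).
from scipy.optimize import brentq

alpha, beta = 0.3, 0.5            # capital shares in the primary and secondary sectors
phi_a, phi_m = 0.2, 0.3           # expenditure shares
Upsilon, pi, tau = 2.0, 1.0, 0.1  # comparative advantage, terms of trade, export tax

def lam(kappa):
    return ((alpha / beta) * (1.0 / (1.0 - kappa))**(1 - alpha)
            * (1 - tau) * pi / Upsilon)**(1.0 / (1 - beta)) * kappa

def residual(kappa):
    l = lam(kappa)
    lhs = l / (1 - l) \
        + (pi / Upsilon)**(1.0 / (1 - beta)) * (1 - kappa)**((alpha - beta) / (1 - beta)) \
        * ((alpha / beta) * (1 - tau))**(beta / (1 - beta)) / (1 - l)
    rhs = (phi_m + phi_a / (1 - tau)) / (1 - phi_a - phi_m) * (1 - beta)
    return lhs - rhs

# Bracket: residual < 0 as kappa -> 0 and > 0 once lambda(kappa) approaches 1.
hi = 0.999
while lam(hi) >= 1.0:
    hi -= 0.01
kappa_star = brentq(residual, 1e-9, hi)
lambda_star = lam(kappa_star)
print(f"kappa* = {kappa_star:.3f}, lambda* = {lambda_star:.3f}")
```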
In the diversification and trade equilibrium, an improvement in the terms of trade or a reduction in the export tax will lead to lower values of λ* and κ*.
The solution is a steady-state equilibrium if the country exports the primary good and, at the same time, produces a positive amount of the manufactured good. The conditions for diversification were explained in Sect. 7.1.2.
Positive exports of the agricultural good require:
$$\frac{{c_{\text{a}} }}{{K^{\alpha } T^{1 - \alpha } A}} \le \left( {1 - \kappa } \right)^{\alpha } .$$
In terms of the exogenous variables, this condition becomes:
$$\frac{{\phi_{\text{a}} (1 - \beta )}}{{\left( {1 - \phi_{\text{a}} - \phi_{\text{m}} } \right)\left( {1 - \tau } \right)}}\frac{\varUpsilon }{\pi } \le \frac{{\left( {1 - \kappa^{ * } } \right)^{\alpha } }}{{\left( {1 - \lambda^{ * } } \right)}}\left( {\frac{{\lambda^{ * } }}{{\kappa^{ * } }}} \right)^{\beta } .$$
Reversal of the pattern of trade
Using the same approach as in Sect. 7.1.3 but setting τ = 0, we solve for the endogenous variables. In this case, the solution is a steady-state equilibrium if the exports of the manufacturing good are positive, i.e., if \(c_{\text{a}}\left(K^{\alpha} T^{1-\alpha} A\right)^{-1} > \left(1 - \kappa\right)^{\alpha}\). In terms of the exogenous variables, this condition becomes:
$$\frac{{\phi_{\text{a}} (1 - \beta )}}{{\left( {1 - \phi_{\text{a}} - \phi_{\text{m}} } \right)}}\frac{\varUpsilon }{\pi } > \frac{{\left( {1 - \kappa^{ * } } \right)^{\alpha } }}{{\left( {1 - \lambda^{ * } } \right)}}\left( {\frac{{\lambda^{ * } }}{{\kappa^{ * } }}} \right)^{\beta } .$$
Graphical representation
Given a set of parameters Υ, ϕ_a, ϕ_m, α and β with β > α, 0 < ϕ_a, 0 < ϕ_m and ϕ_m + ϕ_a < 1, we can map each pair (π, τ) to one of the steady states above. Fig. 1 in Sect. 3.1 shows the different regions in the (π, τ) space. The frontier between the reversal of trade and autarky regions is given by the autarky price equation:
$$p_{\text{aut}} = \frac{{\beta^{\beta } }}{{\alpha^{\alpha } }}\left( {\phi_{\text{m}} \beta + \phi_{\text{a}} \alpha } \right)^{\alpha - \beta } \phi_{\text{a}}^{1 - \alpha } \left( {\frac{(1 - \beta )}{{\left( {\left( {1 - \phi_{\text{a}} - \beta \phi_{\text{m}} } \right)} \right)}}} \right)^{1 - \beta } \varUpsilon .$$
The autarky region and the diversification and trade region are delimited by the level of τ that makes exports equal to zero:
$$\tau = 1 - \frac{{p_{\text{aut}} }}{\pi }.$$
The specialization and diversification regions are separated by the points at which the marginal firm is indifferent to producing the first unit of the manufactured good or not:
$$\pi = \frac{{\left[ {\frac{{\left( {\phi_{\text{m}} \left( {1 - \tau } \right) + \phi_{\text{a}} } \right)}}{{\left( {1 - \phi_{\text{a}} - \phi_{\text{m}} } \right)}}} \right]^{1 - \beta } \varUpsilon }}{{\left[ {\left[ {\frac{1 - \beta }{\beta }} \right]^{\beta } + \left[ {\frac{\beta }{1 - \beta }} \right]^{1 - \beta } } \right]\alpha^{\beta } }}\frac{1}{{\left( {1 - \tau } \right)}}.$$
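The three frontier conditions above are enough to classify any pair (π, τ) into one of the four regions of Fig. 1. The sketch below (illustrative parameters and grid points; the classification logic is one reading of the frontier conditions, not the authors' code) assigns a regime to a few sample pairs:

```python
# A sketch (illustrative parameters) of the regime map described above: each pair
# (pi, tau) is assigned to one of the four steady-state regions using the frontier
# conditions of the previous subsections.
alpha, beta = 0.3, 0.5
phi_a, phi_m = 0.2, 0.3
Upsilon = 2.0

p_aut = (beta**beta / alpha**alpha) * (phi_m * beta + phi_a * alpha)**(alpha - beta) \
        * phi_a**(1 - alpha) * ((1 - beta) / (1 - phi_a - beta * phi_m))**(1 - beta) * Upsilon

def specialization_bound(pi, tau):
    # right-hand side of the specialization condition on Upsilon
    bracket = ((1 - beta) / beta)**beta + (beta / (1 - beta))**(1 - beta)
    return bracket * alpha**beta \
        * ((1 - phi_a - phi_m) / (phi_m * (1 - tau) + phi_a))**(1 - beta) * (1 - tau) * pi

def regime(pi, tau):
    if pi < p_aut:
        return "reversal of the pattern of trade"
    if pi * (1 - tau) <= p_aut:
        return "autarky"
    if Upsilon <= specialization_bound(pi, tau):
        return "specialization and trade"
    return "diversification and trade"

for pi in (0.5, 1.0, 2.5):
    for tau in (0.0, 0.3):
        print(f"pi={pi:>4}, tau={tau:>4}: {regime(pi, tau)}")
```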
The political economy of protectionism
The tax rate τ affects the prices and resource allocation of the economy. As we show below, the real remuneration of some factors of production increases with τ, while the real remuneration of other factors decreases. Therefore, unless all economic agents are equally endowed, changes in the level of protectionism could have major distributional consequences. In this section, we derive the preferences of the different economic groups with regard to the policy variable τ under the main assumption that each economic agent has only one source of income. In our analysis, we consider three time horizons: the short, medium and long terms. In the short run, no reallocation of factors takes place. In the medium run, only labor is allowed to move between the secondary and the tertiary sector. In the long run, all mobile factors can be reallocated and the economy can fully adjust to its new equilibrium. Although we may assume that inputs are fixed within a sector, they are mobile across different firms within that sector. Thus, competition among different firms within a sector drives input prices to equalize the value of their marginal product.
While we do not set up a formal model of political competition that determines the evolution of the policy variable τ, we do stress the political tensions that this model generates. We use these results to articulate our discussion on the rise and fall of protectionism in Argentina and the underlying distributional conflict.
Under autarky, or when the patterns of trade are such that the country exports manufactured goods, the tax on exports of primary goods has no effect whatsoever. We might think that the government could also tax the exports of manufactured goods. However, we do not delve into those issues simply because we do not think that they will shed any light on the main topic of this paper. So, we assume that the economy is always in one of the two other possible scenarios in which τ matters: either close to a steady state in which the economy specializes in the production of primary goods, or close to a steady state in which there is diversification of production and the country exports primary goods.
The demand for protectionism
In this section, we derive the effects of protectionism and changes in the terms of trade on the real remunerations of the factors of production. We log-linearize the model to derive the effect of protectionism in the short and medium run. The log-linearization is around an initial allocation. This initial allocation might be a steady-state equilibrium, in which case it is determined by π and τ; however, the argument follows through for any initial allocation determined also by κ and λ.
The zero profit condition in the primary sector implies:
$$a_{\text{a}} = (1 - \alpha )t + \alpha k_{\text{a}} ,$$
where \(a_{\text{a}} = dp_{\text{a}}^{d} /p_{\text{a}}^{d}\) is the percentage variation in the domestic price of the agricultural good, t = dq/q denotes the percentage variation in the rent of the land and k a = dr a/r a is the percentage variation in the return to capital in the primary sector. Since, in the short and medium run, capital is not mobile between sectors, it will be useful to employ different notations for the capital invested in the primary and secondary sectors. Finally, α is the share of capital in the total cost of production in the primary sector. Homotheticity of the production function implies that α is a function only of input prices. Moreover, under the assumption of a Cobb–Douglas technology, α is invariant. Similarly, in the manufacturing sector, we have:
$$m_{\text{m}} = l_{\text{m}} (1 - \beta ) + k_{\text{m}} \beta ,$$
where \(m_{\text{m}} = dp_{\text{m}}^{d} /p_{\text{m}}^{d}\) is the percentage variation in the domestic price of the manufactured good, l m = dw m/w m denotes the percentage variation in wages and k m = dr m/r m is the percentage variation in the return to capital in the secondary sector. As before, β is the share of capital in the total cost of production. We continue to assume that β ≥ α; that is, we assume that capital is used more intensively in the secondary sector. Though this last assumption is not crucial, it will help us to solve some ambiguities later on. Finally, for the service sector, we have:
$$n = l_{\text{n}} ,$$
where n and l n = dw n/w n are the respective percentage variations in the prices of the service good and the wages paid in that sector.
Cobb–Douglas preferences ensure that the percentage increases in expenditure on the three goods are the same: \(a_{\text{a}} + c_{\text{a}} = m_{\text{m}} + c_{\text{m}} = n + c_{\text{n}}\), where c_i denotes the percentage variation in the consumption of good i. For any agent, the indirect utility function is given by:
$$\ln w - \sum\limits_{i = 1}^{3} \phi_{i} \ln p_{i}^{d} ,$$
where w denotes the income of the individual. Notice that we can construct an exact "price index" to account for the effect of price changes on total utility. We use this price index to deflate all the nominal variables of the economy:
$$p = \phi_{\text{a}} a_{\text{a}} + \phi_{\text{m}} m_{\text{m}} + \left( 1 - \phi_{\text{a}} - \phi_{\text{m}} \right) n.$$
In our model, the government changes domestic relative prices by taxing trade. The domestic price of the agricultural good is then given by \(p_{\text{a}}^{d} = p_{\text{a}} \left( 1 - \tau \right)\). Taking logs, denoting \(t_{\text{a}} = d\tau /(1 - \tau )\), and letting \(a_{i}\) denote the percentage variation in the international price of the agricultural good, we obtain:
$$a_{\text{a}} = a_{i} - t_{\text{a}} .$$
For the manufactured good, the domestic price satisfies \(m_{\text{m}} = m_{i}\), where \(m_{i}\) is the percentage variation in its international price. The economy's budget constraint is \(p_{\text{m}} Y_{\text{m}} + p_{\text{a}} Y_{\text{a}} = p_{\text{m}} C_{\text{m}} + p_{\text{a}} C_{\text{a}}\). Log-linearizing this equation around the initial values, we have:
$$\left( {m_{i} + y_{m} } \right)\left( {1 - \chi_{a} } \right) + \left( {a_{i} + y_{a} } \right)\chi_{a} = \left( {c_{m} + m_{i} } \right)\left( {1 - \gamma_{a} } \right) + \left( {a_{i} + c_{a} } \right)\gamma_{a} ,$$
where y i = dY i /Y i and γ a is the share of the agricultural good in total expenditure on tradable goods, evaluated at international prices. The parameter χ a is the share of the production of the agricultural good in the total value of the domestic production of tradable goods at international prices. If the country exports the primary good, then χ a > γ a.
The variable γ a can be re-written in terms of parameters of the model:
$$\begin{aligned} \gamma_{\text{a}} = & \frac{{p_{\text{a}} C_{\text{a}} }}{{p_{\text{m}} C_{\text{m}} + p_{\text{a}} C_{\text{a}} }} \\ = & \frac{{\phi_{\text{a}} }}{{\left( {1 - \tau } \right)\phi_{\text{m}} + \phi_{\text{a}} }}. \\ \end{aligned}$$
Similarly, for χ a,
$$\chi_{\text{a}} = \frac{1}{{1 + \frac{{\lambda^{1 - \beta } \kappa^{\beta } }}{{\pi \left( {1 - \kappa } \right)^{\alpha } }}\varUpsilon }}.$$
We now consider the adjustment of the economy to changes in international prices and taxes, assuming different speeds of adjustment for the mobile factors of production.
In the short run, all factors of production are reallocated only within the sector where they were previously employed. Given the Cobb–Douglas production function and the zero profit condition, we know that the flow of earnings accruing to landlords is equal to a fraction of the value of the total production of the primary sector. Given that land is not reallocated, the percentage increase in the rental rate for land is equal to:
$$t = a_{\text{a}} + y_{\text{a}} .$$
Since, in the short run, the allocation of capital in the primary sector does not change, the following capital rent equation holds:
$$k_{\text{a}} = a_{\text{a}} + y_{\text{a}} .$$
Similarly, in the manufacturing sector, the following capital rent and wage equations hold:
$$k_{\text{m}} = m_{\text{m}} + y_{\text{m}} ,$$
$$l_{\text{m}} = m_{\text{m}} + y_{\text{m}} .$$
Finally, total expenditure on services has to equal the total wages paid in the sector. Noting that the production of services has to equal consumption, we find that:
$$l_{\text{n}} = c_{\text{n}} + n_{\text{n}} .$$
Let us now consider the effects of an increase in the international price of the primary good. Given that there is no factor reallocation, the output of the three goods remains constant. Without government intervention, the domestic price of the primary good and the return to the factors employed in the primary sector increase in proportion to the increase in the terms of trade. Since the agents owning those resources are wealthier, they increase their demand for services, which drives up wages in the tertiary sector. Workers in the service sector enjoy an increase in their nominal wages that is proportional to the economy's degree of specialization: χ a. Finally, the factors employed in the manufacturing sector do not receive any increase in their remunerations. The consumer price index rises, since the prices of both the primary and the tertiary goods increase. Proposition 2 summarizes these results.
In the short run, an increase in the international price of the agricultural good (i.e., an improvement in the terms of trade) raises the real remuneration received by landowners, capitalists in the primary sector and service workers. However, it reduces the real remuneration of workers and capitalists in the manufacturing sector.
Notice that the real effects of an increase in the international price of the agricultural good are identical to those of a decrease in the international price of the manufactured good. Agents may demand policies that will protect them from changes in international prices. Proposition 3 deals with the effects of taxes on exports.
In the short run, protectionist policies reduce the real remuneration of landowners, capitalists in the primary sector and service workers. If ϕa > 0, protectionist policies will raise the real remuneration of workers and capitalists in the secondary sector.
Medium run
In the medium run, labor is allowed to move across industries, so wages equalize across sectors. Log-linearizing the market clearing condition for labor, we have:
$$\lambda \left( {m_{\text{m}} + y_{\text{m}} } \right) + (1 - \lambda )\left( {n_{\text{n}} + y_{\text{n}} } \right) = l.$$
This equation and the condition that l_m = l_n = l replace the two equations of wage determination obtained for the case of the short-run equilibrium. Now, the medium-run effects of an improvement in the terms of trade include an increase in the production of services and a decrease in the total production of manufactures. Since there is no factor adjustment in the primary sector, the remunerations of capital and land increase by the same proportion as the terms of trade. This generates an upward shift in the demand for services, which is met both by an increase in its equilibrium price and by a displacement of labor from the secondary to the tertiary sector. The manufacturing sector uses less labor, and the return to capital in this sector therefore falls. Overall, consumption of the primary good decreases, and consumption of the manufactured and service goods increases.
In the medium run, an improvement in the terms of trade increases the real remuneration received by landowners and capitalists in the primary sector. It harms capitalists in the manufacturing sector. The real wage increases if and only if:
$$\chi_{\text{a}} (1 - \lambda ) > \frac{{\phi_{\text{a}} }}{{\left( {\phi_{\text{m}} \beta + \alpha_{\text{a}} } \right)}}.$$
Higher demand for services increases wages in that sector and attracts workers from the manufacturing sector, raising wages across the economy. However, the equilibrium increase in wages may fall short of compensating for the negative welfare effect of the increase in the price of the agricultural good. The more specialized the economy is in the primary and tertiary sectors (i.e., the higher χ_a and the lower λ), the more likely it is that real wages will increase in the medium run. This is because, in such cases, the upward shift in the demand for labor in the service sector is stronger. Thus, notice that, if the economy is already industrialized, an increase in the terms of trade may harm workers even in the medium run.
In the medium run, protectionist policies reduce the real remuneration of landowners and capitalists in the primary sector. If ϕ a > 0, protectionist policies increase the real remuneration of capitalists in the manufacturing sector. If ϕ a > 0, workers' welfare increases if and only if:
$$(1 - \lambda )\left[ {\left( {1 - \beta } \right)\chi_{\text{a}} + \beta \frac{{\phi_{\text{a}} + \phi_{\text{m}} }}{{\phi_{\text{a}} + \left( {1 - \tau } \right)\phi_{\text{m}} }}} \right] \le 1.$$
Workers' welfare increases with protectionism if the economy is beyond a given level of industrialization. In this case, workers may ally with capitalists in the secondary sector to demand protectionist policies. If τ = 0, this condition is satisfied as soon as the economy starts producing in the secondary sector. A higher tax rate implies that the condition will be met only for higher λ and lower χ_a. In Fig. 3, we plot the pairs (π, τ) at which workers are indifferent between more and less protection, since a movement in either direction would improve their welfare in the medium run.
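The condition above is easy to evaluate numerically once κ, λ, χ_a and the policy variables are specified. The following sketch (with illustrative values; κ and λ are taken from the earlier steady-state sketch rather than from the paper) computes χ_a and checks whether workers gain from protection in the medium run:

```python
# A quick numerical check (illustrative values, not from the paper) of the workers'
# condition stated above: compute chi_a from (kappa, lambda, pi, Upsilon) and evaluate
# (1 - lambda) * [ (1-beta)*chi_a + beta*(phi_a+phi_m)/(phi_a+(1-tau)*phi_m) ] <= 1.
alpha, beta = 0.3, 0.5
phi_a, phi_m = 0.2, 0.3
Upsilon, pi, tau = 2.0, 1.0, 0.1
kappa, lam = 0.652, 0.208       # e.g. the steady-state shares from the earlier sketch

chi_a = 1.0 / (1.0 + lam**(1 - beta) * kappa**beta * Upsilon / (pi * (1 - kappa)**alpha))
lhs = (1 - lam) * ((1 - beta) * chi_a
                   + beta * (phi_a + phi_m) / (phi_a + (1 - tau) * phi_m))
print(f"chi_a = {chi_a:.3f}, condition value = {lhs:.3f}",
      "-> workers gain from protection" if lhs <= 1 else "-> workers lose from protection")
```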
Moreover, we expect that, the more industrialized the economy is, the larger the share of workers who will be employed in the secondary sector and, hence, by virtue of Proposition 2, the larger the share of workers who will also benefit from protectionist policies in the short run.
In the long run, the economy will tend toward a new steady state. Therefore, it is useful to analyze the effects of protectionism based on the results obtained in Sect. 7.1.
A full analysis of the long-run solution for this economy is fairly complicated. Nevertheless, the two propositions set out below suffice for our purposes in this paper. We focus only on the preferences for protectionism of landlords and workers, since we assume that capitalists are concerned only with policies in the short and medium run, when their capital is sunk in one particular activity. We assume that the economy is initially in the specialization and trade or in the diversification and trade regions (i.e., it exports the primary good). Otherwise, changes in the export tax rate would not have any effect.
In the long run, landlords benefit from an improvement in the terms of trade and from a reduction in export taxes.
If the economy is specialized, then, in the long run, workers benefit from an improvement in the terms of trade and from a reduction in export taxes. There is always a π * high enough so that workers are better off at τ = 0.
Constant-elasticity-of-substitution (CES) preferences and technology in Autarky
In this appendix, we derive a log-linearization around the autarky equilibrium for a CES economy. The results of this section are referred to in Sect. 3.4.
The production functions of the agricultural and manufactured goods are, respectively:
$$A\left( {\xi_{T} T^{{\rho_{1} }} + \xi_{{K,{\text{a}}}} K_{A}^{{\rho_{1} }} } \right)^{{1/\rho_{1} }} ,$$
$$M\left( {\xi_{L} L_{M}^{{\rho_{2} }} + \xi_{{K,{\text{m}}}} K_{M}^{{\rho_{2} }} } \right)^{{1/\rho_{2} }} ,$$
where A and M are productivity parameters, the \(\xi_{i}\)'s are share parameters and \((1 - \rho_{i})^{-1}\) for \(i \in \{1, 2\}\) is the elasticity of substitution. Notice that:
ρ_i = −∞: Leontief technology (perfect complements), elasticity of substitution 0.
ρ_i = 0: Cobb–Douglas technology, elasticity of substitution 1.
ρ_i = 1: perfect substitutes, elasticity of substitution ∞.
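The sketch below (purely illustrative; the function name and input values are assumptions, not from the paper) implements the CES aggregator and its Cobb–Douglas and Leontief limits, reproducing the mapping between ρ_i and the elasticity of substitution listed above:

```python
# A small sketch illustrating the list above: a two-input CES aggregator for a given
# substitution parameter rho, with its Cobb-Douglas (rho -> 0) and Leontief
# (rho -> -infinity) limits. Names and values are illustrative.
import numpy as np

def ces(x1, x2, share1, share2, rho):
    """CES aggregator; the elasticity of substitution is 1/(1 - rho)."""
    if rho == 0:                       # Cobb-Douglas limit (elasticity 1)
        a = share1 / (share1 + share2)
        return x1**a * x2**(1 - a)
    if np.isneginf(rho):               # Leontief limit (elasticity 0)
        return min(x1, x2)
    return (share1 * x1**rho + share2 * x2**rho)**(1.0 / rho)

for rho in (-np.inf, -5.0, 0.0, 0.5, 1.0):
    sigma = np.inf if rho == 1.0 else 1.0 / (1.0 - rho)
    print(f"rho = {rho:5.1f}: elasticity = {sigma:5.2f}, F(2, 8) = {ces(2.0, 8.0, 0.5, 0.5, rho):.3f}")
```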
The production function for services is still Y N = NL N, where N is a productivity parameter.
Consumer's preferences are represented by:
$$\left( {\phi_{1} c_{A}^{{\rho_{d} }} + \phi_{2} c_{M}^{{\rho_{d} }} + (1 - \phi_{1} - \phi_{2} )c_{\text{N}}^{{\rho_{d} }} } \right)^{{1/\rho_{d} }} .$$
We are interested in the effect of the exogenous variables (\(\hat{T},\,\hat{K},\,\hat{L},\,\hat{A},\,\hat{M},\,\hat{N}\), where \(\hat{T} = dT/T\)) on the capital and labor employment shares \(\hat{\kappa }\) and \(\hat{\lambda }\). The following table shows the sign of these effects as a function of the substitution parameters ρ_1, ρ_2 and ρ_d. For instance, the first row shows that the effect of an increase in the amount of land, \(\hat{T}\), on κ (i.e., \({\text{d}}\hat{\kappa }/{\text{d}}\hat{T}\)) has the same sign as ρ_1 − ρ_d, whereas the effect on λ (i.e., \({\text{d}}\hat{\lambda }/{\text{d}}\hat{T}\)) has the same sign as −(ρ_2 − ρ_d)(ρ_1 − ρ_d). The following rows show the sign of the effect for the other five exogenous variables.
Sign of the effect of each exogenous change on \(\hat{\kappa }\) and \(\hat{\lambda }\):

\(d\hat{T}\): \(d\hat{\kappa }\) has the sign of ρ_1 − ρ_d; \(d\hat{\lambda }\) has the sign of −(ρ_2 − ρ_d)(ρ_1 − ρ_d)
\(d\hat{K}\): \(d\hat{\kappa }\): see the footnote below; \(d\hat{\lambda }\) has the sign of ρ_d − ρ_2
\(d\hat{L}\): \(d\hat{\kappa }\) has the sign of ρ_d − ρ_2; \(d\hat{\lambda }\) has the sign of ρ_2 − ρ_d
\(d\hat{A}\): \(d\hat{\kappa }\) has the sign of −ρ_d; \(d\hat{\lambda }\) has the sign of (ρ_2 − ρ_d)ρ_d
\(d\hat{M}\): \(d\hat{\kappa }\) has the sign of ρ_d; \(d\hat{\lambda }\) has the sign of ρ_d
\(d\hat{N}\): \(d\hat{\kappa }\) has the sign of (ρ_2 − ρ_d)ρ_d; \(d\hat{\lambda }\) has the sign of −ρ_d

Footnote: the sign of the effect of the endowment of capital on the share of capital employed in the manufacturing sector is the same as that of a quadratic function of ρ_1, ρ_2 and ρ_d that depends on the parameters α, β and λ.
In Sect. 3.4.1, we analyze the effect of \(\hat{L}\) and \(\hat{A}\) (population growth and productivity growth in agriculture) on \(\hat{\kappa }\) and \(\hat{\lambda }\).
We notice that \({\text{d}}\hat{\lambda }/{\text{d}}\hat{L}\) has the same sign as ρ 2 − ρ d , i.e., population growth L will decrease λ if the elasticity of substitution in consumption is greater than in the production of manufactures (ρ d > ρ 2). We also state that the effect on κ will be the opposite: \({\text{d}}\hat{\kappa }/{\text{d}}\hat{L}\) has the same sign as ρ d − ρ 2.
Similarly, in the table we read that \({\text{d}}\hat{\lambda }/{\text{d}}\hat{A}\) has the same sign as (ρ 2 − ρ d )ρ d , which corresponds with what was stated in Sect. 3.4.1: Higher productivity in the agricultural sector will decrease λ if the elasticity of substitution in consumption is greater than 1 and than that in the production of manufactures (i.e., ρ d > 0, ρ d > ρ 2). Similarly, \({\text{d}}\hat{\kappa }/{\text{d}}\hat{A}\) will have the same sign as −ρ d : the share of capital, κ, will decrease if the elasticity of substitution in consumption is greater than 1.
In this appendix, we provide evidence supporting our argument that trade policies are still a key component of electoral competition and that the coalitions vote as suggested by our model. We look at the developments of 2008, when the government's attempt to increase export duties was met with a nationwide lockout by farming associations and mass demonstrations in urban centers. We also use the results of the 2007 presidential election and the 2009 legislative elections to compare how the incumbent party—Frente para la Victoria (FPV), a political coalition including the Justicialist Party—fared before and after it publicly confronted the pro-agriculture coalition.
Export duties were almost non-existent during the 1990s, but were raised after the devaluation in 2002 to capture windfall profits from exporting firms. Over time, they became a reliable source of revenue for the federal government and a handy mechanism for keeping domestic food prices in check. For example, the tax rate on oilseeds exports was raised from 0.5% in 2001 to 17.5% in 2002.
The FPV is an electoral alliance that was founded in 2003 within the Justicialist (Peronist) Party by Néstor Kirchner, who ran for President the same year. The party won the election with an unimpressive 22% of the vote. However, in the legislative election of 2005, the FPV secured a majority in both houses of Congress, and in the presidential election of 2007, it obtained 45% of the vote—22% more than its nearest rival. In 2007 the FPV candidate was Mrs. Cristina Fernández de Kirchner, the incumbent president's wife.
Up to 2008, the FPV government had increased export duties substantially. Export duties for oilseeds reached 32% during 2007. However, the government also kept the local currency undervalued, which benefitted exporting sectors.
In March 2008, the international price of oilseeds reached record levels. The government attempted to introduce a new sliding-scale taxation system for soybean and sunflower exports that would raise duties to 44% of the prices of that time. The announcement was met by a nationwide lockout by farming firms. Government officials and government-affiliated labor unionists denounced the lockout as being staged by big farming companies and having no popular support. However, the pro-agriculture movement drew support from a large share of the middle-class population that gathered in urban centers to oppose the new tax scheme. After 4 months of political struggles that eroded the government's approval ratings and fractured the cohesion among FPV members of Congress, the proposal was defeated in the Senate, despite the fact that the FPV had a majority in both houses of Congress. The legislative elections of 2009 mirrored the major setback suffered by the government the previous year. The FPV obtained 30% of the vote, 15% less than in the previous election, and lost its majority in both houses.
During the events of 2008, the FPV took a clear stance in the distributional conflict and appealed to the protectionist sentiment of its constituents. These appeals, which had been so effective during the second half of the twentieth century, resulted in a sharp reduction in approval ratings and votes.
Under the predictions of our model, agents with vested interests in the primary sector would be less likely to vote for the FPV after the party revealed its position concerning the distributional conflict. If agents voted according to their interests and trade policy was an important component of electoral competition, we should observe a sharper fall in FPV votes in districts where the majority of voters derive their income from the primary or the tertiary sector. We test that prediction by comparing the percentages of votes that the FPV received in 2007 and 2009 in different districts, or Partidos, of the Province of Buenos Aires.
For each of the 134 districts of Buenos Aires, we obtain a measure of the ratio of the population that should support free trade. Using 2001 census data, all individuals that derive their income from activities in the primary sector and all other individuals with some secondary schooling who are not employed in the manufacturing sector are classified as "free traders". All individuals who derive their income from the manufacturing sector and those individuals who do not have at least some secondary schooling and are not employed in the primary sector are classified as "protectionists".
In our model, we have abstracted from skill heterogeneity among workers. However, if skilled workers are employed more intensively in the tertiary sector, then we might expect them to support free trade. Similarly, if unskilled workers are employed intensively in the secondary sector, they should support protectionism (see Galiani et al. 2010). The inclusion of educational attainment in the classification captures such heterogeneity to some extent.
Suppose that, in district d, free traders and protectionists voted for FPV with probabilities π d,f and π d,p , respectively. Then, if the proportion of free traders in district d is f d , the total share of votes of FPV is: v d ≡ π d,p + (π d,f − π d,p )f d . This identity holds for any classification of free traders. Now, we model π d,f = π(β f , ɛ d ), i.e., the probability π d,f is equal to a monotonic function of a parameter β f and a disturbance ɛ d that is common to π d,f and π d,p . If we assume that π(β, ɛ) = β + ɛ and that E(ɛ|f) = 0, we can estimate β f and β p consistently by OLS, since v d = β p + (β f − β p )f d + ɛ d . The parameters β i can be interpreted as the expected probability that an agent of type i votes for the FPV, where the expectation is taken across districts. The estimation results are shown below:
Estimated expected probabilities of voting for the FPV (OLS specification); entries are coefficient (standard error) [95% confidence interval]:

Free traders: 2007: 0.205 (0.042) [0.122, 0.288]; 2009: −0.086 (0.041) [−0.167, −0.005]
Protectionists: 2007: 0.858 (0.058) [0.742, 0.974]; 2009: 0.774 (0.057) [0.660, 0.888]
Notice that both protectionists and free traders were less likely to vote for the FPV in 2009 than they were in 2007. However, the drop in the probability for free traders is more pronounced. To test the null hypothesis of an identical drop for both groups, we regress the difference in FPV votes between 2009 and 2007 on the share of free traders. Notice that
$$v_{d,09} - v_{d,07} = \left( {\beta_{p,09} - \beta_{p,07} } \right) + \left( {\beta_{f,09} - \beta_{f,07} - \beta_{p,09} + \beta_{p,07} } \right)f_{d} + \varepsilon_{d} .$$
We find some evidence against the hypothesis of an identical drop in probabilities: p value 0.067.
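A minimal sketch of this OLS step on synthetic data is given below (the district shares, "true" probabilities and noise are invented for illustration; this is not the authors' dataset or code). It recovers β_p from the intercept and β_f from the intercept plus the slope, and runs the difference regression used for the test above:

```python
# A minimal sketch (synthetic data, not the authors' dataset) of the OLS step described
# above: regress district-level FPV vote shares on the share of "free traders" to recover
# beta_p (intercept) and beta_f - beta_p (slope), and repeat for the 2009-2007 difference.
import numpy as np

rng = np.random.default_rng(1)
D = 134                                      # number of districts in Buenos Aires
f = rng.uniform(0.1, 0.9, D)                 # hypothetical shares of free traders
beta_f07, beta_p07 = 0.20, 0.85              # illustrative "true" 2007 probabilities
beta_f09, beta_p09 = -0.09, 0.77             # illustrative "true" 2009 probabilities
v07 = beta_p07 + (beta_f07 - beta_p07) * f + rng.normal(0, 0.05, D)
v09 = beta_p09 + (beta_f09 - beta_p09) * f + rng.normal(0, 0.05, D)

X = np.column_stack([np.ones(D), f])
def ols(y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef                               # [intercept, slope] = [beta_p, beta_f - beta_p]

b07, b09, bdiff = ols(v07), ols(v09), ols(v09 - v07)
print("2007: beta_p=%.3f beta_f=%.3f" % (b07[0], b07[0] + b07[1]))
print("2009: beta_p=%.3f beta_f=%.3f" % (b09[0], b09[0] + b09[1]))
print("differential change for free traders: %.3f" % bdiff[1])
```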
The negative coefficient for free traders in 2009 suggests that our linear specification of π(β, ɛ) may be incorrect. Therefore, we try a different specification: π(β, σ, ɛ) = Φ(β + σɛ), where Φ is the cumulative density function of a standard normal and σ is a parameter to be estimated. If we assume that ɛ is normally distributed, we can estimate β f ,β p and σ by maximum likelihood. Φ(β i ) can be interpreted as the median probability that an agent of type i will vote for the FPV, where the median is taken over the distribution of probabilities π d,i across districts. The estimation results are shown below:
Maximum-likelihood estimates; entries are coefficient β_i (standard error), with the implied median probability Φ(β_i):

Free traders: 2007: −1.030 (0.142), Φ = 0.152; 2009: −2.567 (0.336), Φ = 0.005
Protectionists: 2007: 1.437 (0.288), Φ = 0.925; 2009: 0.385 (0.051), Φ = 0.650
Sigma: 2007: 0.344 (0.209); 2009: 0.438 (0.099)
Now, we obtain that free traders voted for the FPV with positive probability. Moreover, it is still true that the probability of voting for the FPV drops more in the case of free traders.
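The likelihood for the specification π(β, σ, ɛ) = Φ(β + σɛ) can be written by inverting the vote-share identity for ɛ_d district by district and applying a change of variables. The sketch below (synthetic data and illustrative starting values; not the authors' code) estimates β_f, β_p and σ by maximum likelihood in this way:

```python
# A minimal sketch (synthetic data, not the authors' code or dataset) of the maximum-
# likelihood estimator for the specification pi(beta, sigma, eps) = Phi(beta + sigma*eps):
# for each candidate (beta_f, beta_p, sigma), invert the vote-share equation for eps_d and
# apply the change-of-variables formula for the density of v_d.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq, minimize

rng = np.random.default_rng(0)
D = 134
f = rng.uniform(0.1, 0.9, D)                        # hypothetical shares of free traders
bf_true, bp_true, sigma_true = -1.0, 1.4, 0.35      # illustrative "true" parameters
eps = rng.standard_normal(D)
v = (1 - f) * norm.cdf(bp_true + sigma_true * eps) + f * norm.cdf(bf_true + sigma_true * eps)

def neg_loglik(theta):
    bf, bp, log_sigma = theta
    sigma = np.exp(log_sigma)
    ll = 0.0
    for fd, vd in zip(f, v):
        g = lambda e: (1 - fd) * norm.cdf(bp + sigma * e) + fd * norm.cdf(bf + sigma * e) - vd
        try:
            e_d = brentq(g, -50.0, 50.0)             # v_d is strictly increasing in eps_d
        except ValueError:                           # bracket fails for absurd parameters
            return 1e10
        jac = sigma * ((1 - fd) * norm.pdf(bp + sigma * e_d) + fd * norm.pdf(bf + sigma * e_d))
        ll += norm.logpdf(e_d) - np.log(jac)         # density of v_d by change of variables
    return -ll

res = minimize(neg_loglik, x0=[-0.5, 0.5, np.log(0.3)], method="Nelder-Mead",
               options={"maxiter": 2000, "maxfev": 2000})
bf_hat, bp_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
print("beta_f=%.2f beta_p=%.2f sigma=%.2f" % (bf_hat, bp_hat, sigma_hat))
print("median probabilities Phi(beta): free traders %.3f, protectionists %.3f"
      % (norm.cdf(bf_hat), norm.cdf(bp_hat)))
```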
The estimated probabilities seem too extreme, i.e., our classification seems to imply a strong negative correlation between the proportion of "free traders" and FPV votes by district. It may be the case that, irrespective of their classification, individuals in more agricultural districts are less likely to vote for the FPV, independently of their source of income. In that case, f d and ɛ d are negatively correlated and our results would be unable to distinguish between individual and district-level political attitudes. However, even if that is the case, the fact that the aggregate source of income affects political attitudes at the district level is also consistent with the predictions of our model: service workers will support policies that increase the aggregate income of their district and boost the demand for their services.
One might suspect that these differences in political attitudes are driven exclusively by the heterogeneity in educational attainment across districts. However, if we classify individuals solely on the basis of their educational attainment, we obtain strikingly different results. The estimated probability for unskilled individuals (no secondary education) falls drastically, while the probability for skilled workers remains almost constant. Unskilled individuals employed in the primary sector were less likely to vote for the FPV in 2009, while skilled individuals employed in the secondary sector partially compensated for the loss of votes from skilled individuals employed in the tertiary sector.
Entries are coefficient (standard error) [95% confidence interval]:

Skilled: 2007: 0.318 (0.031) [0.257, 0.379]; 2009: 0.288 (0.037) [0.215, 0.360]
Unskilled: 2007: 0.662 (0.036) [0.590, 0.733]; 2009: 0.253 (0.043) [0.169, 0.337]
For comparison purposes, we present the maximum likelihood results for the specification: Φ(β + σɛ). Notice how similar the estimated probabilities are in the two specifications.
Entries are coefficient β_i (standard error), with the implied median probability Φ(β_i):

Skilled: 2007: −0.491 (0.087), Φ = 0.312; 2009: −0.569 (0.110), Φ = 0.285
Unskilled: 2007: 0.435 (0.099), Φ = 0.668; 2009: −0.698 (0.138), Φ = 0.243
This provides support for our claim that the source of income is a key determinant of individuals' political attitudes. In particular, individuals with vested interests in the primary sector and skilled individuals in the tertiary sector support free trade policies. Individuals whose source of income is linked to the manufacturing sector support protectionist policies. Moreover, this exercise also suggests that individuals took into account the ideological and political stance of the FPV with respect to protectionism. Those who opposed protectionism were less likely to vote for the FPV in 2009 than in 2007.
Galiani, S., Somaini, P. Path-dependent import-substitution policies: the case of Argentina in the twentieth century. Lat Am Econ Rev 27, 5 (2018). https://doi.org/10.1007/s40503-017-0047-4
Revised: 15 September 2017
Import substitution
Trade liberalization
Argentine Exceptionalism
April 2021, 41(4): 1681-1705. doi: 10.3934/dcds.2020337
Global stability in a multi-dimensional predator-prey system with prey-taxis
Dan Li ,
Department of Mathematics, South China University of Technology, Guangzhou 510641, China
* Corresponding author: [email protected]
Received March 2020; revised August 2020; published April 2021 (early access October 2020)
Fund Project: The first author is supported by NSF grant 12001201
This paper studies the predator-prey system with prey-taxis
$$\left\{ \begin{array}{ll} u_{t} = \Delta u-\chi\nabla\cdot(u\nabla v)+\gamma uv-\rho u, \\ v_{t} = \Delta v-\xi uv+\mu v(1-v), \end{array} \right.$$
in a bounded domain \( \Omega\subset\mathbb{R}^{n} \) \( (n = 2, 3) \) with Neumann boundary conditions, where the parameters \( \chi \), \( \gamma \), \( \rho \), \( \xi \) and \( \mu \) are positive. It is shown that the two-dimensional system possesses a unique globally bounded classical solution. Furthermore, we use some higher-order estimates to obtain classical solutions that are bounded uniformly in time for suitably small initial data. Finally, we establish that the solution stabilizes towards the prey-only steady state \( (0, 1) \) if \( \rho>\gamma \), and towards the co-existence steady state \( (\frac{\mu(\gamma-\rho)}{\xi\rho}, \frac{\rho}{\gamma}) \) if \( \gamma>\rho \) under some conditions, in the norm of \( L^{\infty}(\Omega) \) as \( t\rightarrow\infty \).
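Although the results above concern the two- and three-dimensional problem, the structure of the system is easy to illustrate numerically. The sketch below (not from the paper) integrates a one-dimensional analogue of the system with homogeneous Neumann boundary conditions using a crude explicit finite-difference scheme; the grid, time step and parameter values are illustrative assumptions, and the output can be compared with the co-existence state (μ(γ−ρ)/(ξρ), ρ/γ) = (0.25, 0.8) implied by the chosen parameters (γ > ρ):

```python
# A crude explicit finite-difference sketch of a 1D analogue of the system
#   u_t = u_xx - chi*(u*v_x)_x + gamma*u*v - rho*u,
#   v_t = v_xx - xi*u*v + mu*v*(1 - v),
# with homogeneous Neumann (zero-flux) boundary conditions. All numerical choices
# (grid, time step, parameters, initial data) are illustrative assumptions.
import numpy as np

chi, gamma, rho, xi, mu = 0.5, 1.0, 0.8, 1.0, 1.0
L, N, T, dt = 10.0, 200, 20.0, 5e-4
dx = L / (N - 1)
x = np.linspace(0.0, L, N)

u = 0.5 + 0.1 * np.cos(np.pi * x / L)      # predator density
v = 1.0 - 0.1 * np.cos(np.pi * x / L)      # prey density

def neumann_pad(w):
    # reflect across the boundary so that the normal derivative vanishes
    return np.concatenate(([w[1]], w, [w[-2]]))

def lap(w):
    wg = neumann_pad(w)
    return (wg[2:] - 2.0 * w + wg[:-2]) / dx**2

def grad(w):
    wg = neumann_pad(w)
    return (wg[2:] - wg[:-2]) / (2.0 * dx)

for _ in range(int(T / dt)):
    taxis = grad(u * grad(v))              # discretization of (u * v_x)_x
    u = u + dt * (lap(u) - chi * taxis + gamma * u * v - rho * u)
    v = v + dt * (lap(v) - xi * u * v + mu * v * (1.0 - v))

print("mean u:", u.mean(), "mean v:", v.mean())
```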
Keywords: Predator-prey system, Prey-taxis, Global stability, Boundedness.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.
Citation: Dan Li. Global stability in a multi-dimensional predator-prey system with prey-taxis. Discrete & Continuous Dynamical Systems, 2021, 41 (4) : 1681-1705. doi: 10.3934/dcds.2020337
Guoqiang Ren, Bin Liu. Global existence and convergence to steady states for a predator-prey model with both predator- and prey-taxis. Discrete & Continuous Dynamical Systems, 2022, 42 (2) : 759-779. doi: 10.3934/dcds.2021136
Qian Cao, Yongli Cai, Yong Luo. Nonconstant positive solutions to the ratio-dependent predator-prey system with prey-taxis in one dimension. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021095
Mostafa Bendahmane. Analysis of a reaction-diffusion system modeling predator-prey with prey-taxis. Networks & Heterogeneous Media, 2008, 3 (4) : 863-879. doi: 10.3934/nhm.2008.3.863
Jinfeng Wang, Sainan Wu, Junping Shi. Pattern formation in diffusive predator-prey systems with predator-taxis and prey-taxis. Discrete & Continuous Dynamical Systems - B, 2021, 26 (3) : 1273-1289. doi: 10.3934/dcdsb.2020162
Hengling Wang, Yuxiang Li. Boundedness in prey-taxis system with rotational flux terms. Communications on Pure & Applied Analysis, 2020, 19 (10) : 4839-4851. doi: 10.3934/cpaa.2020214
Zhong Li, Maoan Han, Fengde Chen. Global stability of a predator-prey system with stage structure and mutual interference. Discrete & Continuous Dynamical Systems - B, 2014, 19 (1) : 173-187. doi: 10.3934/dcdsb.2014.19.173
Evan C. Haskell, Jonathan Bell. Pattern formation in a predator-mediated coexistence model with prey-taxis. Discrete & Continuous Dynamical Systems - B, 2020, 25 (8) : 2895-2921. doi: 10.3934/dcdsb.2020045
Ke Wang, Qi Wang, Feng Yu. Stationary and time-periodic patterns of two-predator and one-prey systems with prey-taxis. Discrete & Continuous Dynamical Systems, 2017, 37 (1) : 505-543. doi: 10.3934/dcds.2017021
Yinshu Wu, Wenzhang Huang. Global stability of the predator-prey model with a sigmoid functional response. Discrete & Continuous Dynamical Systems - B, 2020, 25 (3) : 1159-1167. doi: 10.3934/dcdsb.2019214
Leonid Braverman, Elena Braverman. Stability analysis and bifurcations in a diffusive predator-prey system. Conference Publications, 2009, 2009 (Special) : 92-100. doi: 10.3934/proc.2009.2009.92
Yu Ma, Chunlai Mu, Shuyan Qiu. Boundedness and asymptotic stability in a two-species predator-prey chemotaxis model. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021218
Xiaoyuan Chang, Junjie Wei. Stability and Hopf bifurcation in a diffusive predator-prey system incorporating a prey refuge. Mathematical Biosciences & Engineering, 2013, 10 (4) : 979-996. doi: 10.3934/mbe.2013.10.979
Shanshan Chen, Jianshe Yu. Stability and bifurcation on predator-prey systems with nonlocal prey competition. Discrete & Continuous Dynamical Systems, 2018, 38 (1) : 43-62. doi: 10.3934/dcds.2018002
Jing-An Cui, Xinyu Song. Permanence of predator-prey system with stage structure. Discrete & Continuous Dynamical Systems - B, 2004, 4 (3) : 547-554. doi: 10.3934/dcdsb.2004.4.547
Dongmei Xiao, Kate Fang Zhang. Multiple bifurcations of a predator-prey system. Discrete & Continuous Dynamical Systems - B, 2007, 8 (2) : 417-433. doi: 10.3934/dcdsb.2007.8.417
Yun Kang, Sourav Kumar Sasmal, Amiya Ranjan Bhowmick, Joydev Chattopadhyay. Dynamics of a predator-prey system with prey subject to Allee effects and disease. Mathematical Biosciences & Engineering, 2014, 11 (4) : 877-918. doi: 10.3934/mbe.2014.11.877
Xinyu Song, Liming Cai, U. Neumann. Ratio-dependent predator-prey system with stage structure for prey. Discrete & Continuous Dynamical Systems - B, 2004, 4 (3) : 747-758. doi: 10.3934/dcdsb.2004.4.747
Kexin Wang. Influence of feedback controls on the global stability of a stochastic predator-prey model with Holling type Ⅱ response and infinite delays. Discrete & Continuous Dynamical Systems - B, 2020, 25 (5) : 1699-1714. doi: 10.3934/dcdsb.2019247
Hongmei Cheng, Rong Yuan. Existence and stability of traveling waves for Leslie-Gower predator-prey system with nonlocal diffusion. Discrete & Continuous Dynamical Systems, 2017, 37 (10) : 5433-5454. doi: 10.3934/dcds.2017236
S. Nakaoka, Y. Saito, Y. Takeuchi. Stability, delay, and chaotic behavior in a Lotka-Volterra predator-prey system. Mathematical Biosciences & Engineering, 2006, 3 (1) : 173-187. doi: 10.3934/mbe.2006.3.173
|
CommonCrawl
|
Tagged: Princeton.LA
Characteristic Polynomial, Eigenvalues, Diagonalization Problem (Princeton University Exam)
Let \[A=\begin{bmatrix}
0 & 0 & 1 \\
1 &0 &0 \\
0 & 1 & 0
\end{bmatrix}.\]
(a) Find the characteristic polynomial and all the eigenvalues (real and complex) of $A$. Is $A$ diagonalizable over the complex numbers?
(b) Calculate $A^{2009}$.
(Princeton University, Linear Algebra Exam)
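One way to organize part (b), sketched here as a hint rather than the exam's intended solution: $A$ cyclically permutes the standard basis vectors, $e_1 \mapsto e_2 \mapsto e_3 \mapsto e_1$, so $A^3 = I$ and
\[A^{2009} = A^{3\cdot 669 + 2} = \left(A^{3}\right)^{669} A^{2} = A^{2} = \begin{bmatrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0
\end{bmatrix}.\]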
A Square Root Matrix of a Symmetric Matrix
Answer the following two questions with justification.
(a) Does there exist a $2 \times 2$ matrix $A$ with $A^3=O$ but $A^2 \neq O$? Here $O$ denotes the $2 \times 2$ zero matrix.
(b) Does there exist a $3 \times 3$ real matrix $B$ such that $B^2=A$ where
\[A=\begin{bmatrix}
1 & -1 & 0 \\
-1 &2 &-1 \\
0 & -1 & 1
\end{bmatrix}\,\,\,\,?\]
(Princeton University Linear Algebra Exam)
Compute the Determinant of a Magic Square
If Two Vectors Satisfy $A\mathbf{x}=0$ then Find Another Solution
Does an Extra Vector Change the Span?
Find Bases for the Null Space, Range, and the Row Space of a $5\times 4$ Matrix
Subspace of Skew-Symmetric Matrices and Its Dimension
|
CommonCrawl
|
Maxim O. Lavrentovich
I am an assistant professor at the University of Tennessee, working on theoretical problems in biophysics and soft condensed matter physics.
Pattern formation
Evolutionary dynamics and domain walls
Non-equilibrium statistical mechanics
Although much is known about systems at equilibrium, their non-equilibrium counterparts remain poorly understood. For example, even the humble one-dimensional Ising model exhibits a rich range of behaviors when it is driven far from equilibrium. These far-from-equilibrium states are difficult to describe; there is no existing conceptual framework of the same power and breadth as the one developed for equilibrium systems by Boltzmann, Gibbs, and others.
I am interested in one of the most basic ways of driving a system: setting different pieces of the system at different temperatures. I'm mostly interested in Ising-like models whose states are characterized by a spin configuration \(\{ \sigma_i \}\), where \(\sigma_i = \pm 1\) at some lattice sites \(i\). Perhaps the most basic starting point for studying such systems is trying to solve for the probability \(P(\{ \sigma_i \},t)\) of observing a particular spin configuration \(\{ \sigma_i \}\) at time \(t\), and finding steady-state solutions \(P^*( \{ \sigma_i \})\) as \(t \rightarrow \infty\). The probability obeys a conservation law called the master equation:
$$
\partial_t P(\{ \sigma_i \},t) = \sum_{\{ \sigma_i'\}} \left[\omega( \{ \sigma_i' \} \rightarrow \{ \sigma_i \})P(\{ \sigma_i' \},t)- \omega(\{ \sigma_i \} \rightarrow \{ \sigma_i' \})P(\{ \sigma_i \},t) \right],
$$ where the \(\omega\)'s are probability rates of moving from one spin configuration to another one. Note that the sum here is over all possible spin configurations \(\{ \sigma_i' \}\). In general, it is very difficult to solve the master equation. However, in certain cases, a judicious choice of \(\omega\) allows us to make progress.
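As a minimal illustration of the structure of this equation (a generic two-state example, not any specific model from the papers below), consider a single spin \(\sigma = \pm 1\) with flip rates \(\omega(+\rightarrow -)\) and \(\omega(-\rightarrow +)\). The master equation then reduces to
$$
\partial_t P(+,t) = \omega(-\rightarrow +)\,P(-,t) - \omega(+\rightarrow -)\,P(+,t),
$$ and the steady state is fixed by balancing the two terms, \(\omega(-\rightarrow +)\,P^*(-) = \omega(+\rightarrow -)\,P^*(+)\), together with the normalization \(P^*(+)+P^*(-)=1\).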
We considered a one-dimensional Ising chain driven by Glauber dynamics [1]. In this case, the \(\omega\)'s are non-zero just for configurations \(\{ \sigma_i \}\) and \(\{ \sigma_i' \}\) that differ by a single spin flip. If we identify the spin flip as \(\sigma_x\), then we may replace the transition \(\{ \sigma_i\} \rightarrow \{ \sigma_i' \}\) by just the value of the spin \(\sigma_x\) at site \(x\) in the initial configuration. Our rates are:
$$
\omega(\sigma_x) = \frac{1}{2 \Delta t} \left[ 1 - \frac{\gamma(x)}{2} \, \sigma_x (\sigma_{x-1}+\sigma_{x+1}) \right],
$$ where \(\gamma(x) = \tanh (2 \beta_x J)\), \(J\) is the Ising coupling strength and \(\beta_x=(k_B T_x)^{-1}\) is the inverse temperature of the spin at site \(x\). We were able to solve for various quantities, such as the energy flux \(F(x)\) through the system, for a system in which \(T_x = \infty\) for all \(x \leq 0\) and \(T_x = T_c\) for all \(x > 0\) [2,3]. Such analytic results pave the way for a more general understanding of such systems.
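A short simulation sketch of this kind of two-temperature Glauber chain is given below; the chain length, inverse temperatures and number of updates are illustrative values of my own choosing rather than the parameters used in the papers, and the flip probability per attempted update simply follows the rate expression above.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 200            # chain length (illustrative)
J = 1.0            # Ising coupling strength
steps = 200_000    # number of attempted single-spin updates

# Site-dependent inverse temperatures: a "hot" half (beta = 0, i.e. T = infinity)
# and a colder half (beta = 2, an illustrative choice, not the papers' exact setup).
beta = np.zeros(L)
beta[L // 2:] = 2.0
gamma = np.tanh(2.0 * beta * J)    # gamma(x) = tanh(2 beta_x J)

sigma = rng.choice([-1, 1], size=L)

for _ in range(steps):
    x = rng.integers(1, L - 1)     # skip the chain ends for simplicity
    # Glauber rate: flip sigma_x with probability proportional to
    # 1 - (gamma(x)/2) * sigma_x * (sigma_{x-1} + sigma_{x+1})
    p_flip = 0.5 * (1.0 - 0.5 * gamma[x] * sigma[x] * (sigma[x - 1] + sigma[x + 1]))
    if rng.random() < p_flip:
        sigma[x] *= -1

# Nearest-neighbour bond correlations on either side of the temperature junction
bonds = sigma[:-1] * sigma[1:]
print("mean bond correlation, hot half :", bonds[: L // 2 - 1].mean())
print("mean bond correlation, cold half:", bonds[L // 2:].mean())
```

The hot half should show essentially uncorrelated neighbouring spins while the colder half develops positive bond correlations; measuring the energy flux through the junction would additionally require tracking the energy change of each accepted flip, which is omitted here.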
Our group has recently started working on a driven system in two and three dimensions consisting of a binary mixture of particles. These particles have simple excluded volume interactions and, in addition, particles of opposite types cannot occupy nearest neighbor locations. When the particles are subjected to a drive, amazing striped patterns emerge [4].
[1] R. J. Glauber, Time-dependent statistics of the Ising model, Journal of Mathematical Physics 4, 294 (1963)
[2] M. O. Lavrentovich and R. K. P. Zia, Energy flux near the junction of two Ising chains at different temperatures, EPL 91(5), 50003 (2010)
[3] M. O. Lavrentovich, Steady-state properties of coupled hot and cold Ising chains, Journal of Physics A: Mathematical and Theoretical 45, 085002 (2012)
[4] R. Dickman and R. K. P. Zia, Driven Widom-Rowlinson lattice gas, Physical Review E 97, 062126 (2018)
|
CommonCrawl
|
Risk Analysis and Romance
February 1, 2021
Happily ever after for Courtney Milan's math-major heroine Maria Camilla Lopez involves a master's degree focused on risk analysis. Let's explore real-world research in risk and management, from food bank strategies to the moons of Jupiter.
Ursula Whitcher
AMS | Mathematical Reviews, Ann Arbor, Michigan
February brings Valentine's Day, and with it an opportunity to play one of my favorite games: what research would this fictional character be working on? This time, our protagonist is Maria Camilla Lopez, the heroine of Courtney Milan's novel Hold Me.
Courtney Milan is a genuine polymath. She has bachelor's degrees in mathematics and chemistry, and a master's degree in physical chemistry. She then switched gears to earn a law degree, clerked for Supreme Court Justices Sandra Day O'Connor and Anthony Kennedy, and worked as a law professor before leaving academia for a full-time writing career.
Courtney Milan (Photo by Jovanka Novakovic)
The fictional Maria Lopez is just finishing her own bachelor's degree in math. Maria is a nontraditional student. She took time off between high school and college to work, saving money for hormones and gender affirmation surgery. To keep herself intellectually engaged, Maria started an anonymous blog about hypothetical disasters. She funnels her real anxiety and wide-ranging curiosity into mathematical models of subjects such as international cyberattacks and zombie plagues. As the book begins, she's running a Monte Carlo simulation of grocery supply chain failures during an apocalyptic pandemic. (Hold Me was published in 2016, but Maria's puzzles have all-too-enduring relevance!)
Maria is becoming closer and closer friends with one of the regular commenters on her blog, a man who goes by the handle ActualPhysicist. They share science jokes and pictures of their day. Maria even shares a photo of the gorgeous, bright red, hand-decorated high heels she's wearing as a sort of armor, to meet with an acquaintance who has been dismissive and rude. The only problem is, her acquaintance, Jay Thalang, is ActualPhysicist.
Hold Me is a romantic comedy, so eventually Maria and Jay work things out. This entails Jay admitting what a jerk he has been. His rudeness stemmed from a combination of stress, sexism, youthful trauma, and a form of loneliness many mathematicians will relate to—the loneliness of having your closest friends scattered all over the world.
The cover of Hold Me
I want to focus on the resolution of another of Maria's problems, the question of what to do after graduation. She applies to entry-level positions in actuarial science, but she wants to do something weirder and riskier: use her expertise in imaginary disasters to advise companies on preventing real ones. To do so, she needs credentials. She seeks them in a very specific place: Stanford's Management Science and Engineering (MS&E) department. Novels are full of fictional departments at fictional universities, but Management Science and Engineering is a real program. It focuses on mathematically informed approaches to solving business and policy problems, drawing on disciplines such as operations research, statistics, and computer science.
What kinds of projects would Maria find intriguing? Let's explore some of the real research at MS&E that could engage someone with a strong mathematical background and experience modeling a wide range of scenarios.
Elisabeth Paté-Cornell and the Europa Clipper
The Engineering Risk Research Group headed by Professor Elisabeth Paté-Cornell, the founding chair of MS&E, provides an obvious source of projects for Maria. Paté-Cornell, whose father was an officer in the French Marine Corps, was born in Dakar, Senegal in 1948. Growing up, she was interested in both mathematics and literature, but decided that a more technical career would offer her more job opportunities while still allowing her to indulge her literary interests. Thus, she majored in mathematics and physics at Aix-Marseille University, where she earned bachelor's degrees in mathematics and physics in 1968. Though her undergraduate program was highly theoretical, Paté-Cornell knew she wanted to attack more applied problems. She did a master's degree in the exciting new field of computer science at the Institute Polytechnique in Grenoble. Based on advice from one of her professors there, she came to Stanford for a second master's degree in operations research. She combined all of this experience for her PhD from Stanford's Engineering-Economic Systems department, where she worked on risk analysis and models of earthquakes. Over the course of her career, Paté-Cornell has pursued research on a huge variety of topics, including space shuttle heat shielding and the risk of nuclear war. She has analyzed lessons from disasters such as the failure of the Fukushima Daiichi nuclear plant and the Deepwater Horizon oil spill, studied hospital trauma centers, and considered terrorism risks.
Elisabeth Paté-Cornell (Professional photo used under CC-BY-SA 4.0)
Recently, Paté-Cornell mentored Stanford mechanical engineering PhD student Yiqing Ding and four MS&E master's students, Sean Duggan, Matthew Ferranti (now an economics PhD student at Harvard), Michael Jagadpramana, and Rushal Rege, in a study of radiation risk in outer space. Their subject was NASA's Europa Clipper spacecraft, which is due to launch toward Jupiter's moon Europa in 2024. Europa is covered in smooth water ice, streaked with lines or cracks. Scientists hypothesize that a moon-wide liquid ocean layer lies between the ice and Europa's rocky core. Learning about Europa's structure is made more difficult because the moon orbits within a belt of radiation trapped by Jupiter's magnetic field. Radiation is a danger to both spacecraft and the scientific instruments they carry. To manage the radiation risk, instead of orbiting Europa itself, the Clipper will enter an elliptical orbit around Jupiter. Each time the Clipper flies by Europa it will pass by at a different angle, slowly building a detailed picture of the moon's surface.
Schematic illustration shows Europa Clipper flybys
Ding, Paté-Cornell, and their group write that past quantitative analyses of radiation risk in space exploration have focused on possible radiation exposure to individual astronauts. The radiation risk to the Europa Clipper is different because of its cumulative exposure over multiple flybys, and because the difficulties of exploration near Jupiter limit our information about how intense those exposures might be. Furthermore, different parts of the spacecraft and its payload may have different radiation tolerances.
Instead of assuming a constant radiation dose on each flyby, the MS&E group built a probabilistic model that allowed for radiation to be higher or lower, according to a log-normal distribution. In other words, the logarithm of the radiation dose was normally distributed; in one example they considered, the most likely radiation dose in a single twelve-hour flyby was just under 2000 rad, but the potential dose was far higher. (For comparison, doctors treating cancer might target a tumor with 2000 rad over the course of five days.) After constructing their model, the group ran simulations, modeling approximately 1000 missions with about 70 flybys in each mission. The extra flybys allowed them to see how long it might take for multiple instruments to fail. Their model showed that multiple instruments were likely to fail in quick succession as radiation accumulated.
Log normal distribution curves for different parameters
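A rough Monte Carlo sketch of this kind of calculation is shown below. The modal dose per flyby follows the number quoted above, but the log-space spread and the instrument tolerance are placeholder assumptions of mine, so the output illustrates the mechanics of the simulation rather than the study's actual results.

```python
import numpy as np

rng = np.random.default_rng(42)

n_missions = 1000          # roughly the number of simulated missions
n_flybys = 70              # roughly the number of flybys per mission
mode_rad = 2000.0          # modal dose per flyby (rad), per the example quoted above
sigma = 0.8                # assumed log-space spread (not given in the article)
tolerance_rad = 150_000.0  # hypothetical cumulative tolerance of one instrument

# For a log-normal distribution, mode = exp(mu - sigma^2), so pick mu to match the mode.
mu = np.log(mode_rad) + sigma ** 2

doses = rng.lognormal(mean=mu, sigma=sigma, size=(n_missions, n_flybys))
cumulative = doses.cumsum(axis=1)

# First flyby (1-indexed) on which the cumulative dose exceeds the tolerance, per mission.
exceeds = cumulative > tolerance_rad
first_failure = np.where(exceeds.any(axis=1), exceeds.argmax(axis=1) + 1, -1)

failed = first_failure > 0
print(f"fraction of missions exceeding the tolerance: {failed.mean():.2f}")
if failed.any():
    print(f"median flyby of first exceedance: {np.median(first_failure[failed]):.0f}")
```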
Of course, radiation is only one form of risk to the Europa Clipper mission. Paté-Cornell has written elsewhere about the importance of incorporating multiple types of error, including human error, in complete risk analyses. Systematic attempts to measure risk encourage us to contemplate dangers we might otherwise ignore. In an essay entitled "Improving Risk Management: From Lame Excuses to Principled Practice," Paté-Cornell and Louis Anthony Cox Jr. (University of Colorado) write:
Deliberate exercises in applying "prospective hindsight" (i.e., assuming that a failure will occur in the future, and envisioning scenarios of how it could happen) and probabilistic analysis using systems analysis, event trees, fault trees, and simulation can be used to overcome common psychological biases that profoundly limit our foresight. These include anchoring and availability biases, confirmation bias, group-think, status quo bias, or endowment effects.
Volunteers and Food Systems
Maria hopes that graduate school will help her connect with businesses and community groups that are trying to make better choices. She would find new opportunities to do so by collaborating with Professor Irene Lo, who researches ways to use operations research for social good. Lo majored in math at Princeton University and received a PhD from Columbia's Industrial Engineering and Operations Research department in 2018. She has studied school choice algorithms and problems in graph theory.
Irene Lo (Professional photo used by permission)
Recently, Lo put her expertise in matching to the test in a collaboration with Food Rescue U.S. (FRUS), a nonprofit that connects businesses that have extra food with food banks that need it. Coordinating food pickup is a hard problem. Food Rescue U.S. uses an app to connect volunteers who want to help with donor businesses that have food ready to share. Lo, Yale School of Management professor Vahideh Manshadi, and the PhD students Scott Rodilitz (Yale) and Ali Shameli (Stanford) teamed up with Food Rescue U.S. to look for ways to maximize volunteer engagement. Volunteers are more likely to keep contributing to an organization when it's easy for them to find ways to participate. One strategy Food Rescue U.S. uses to keep volunteers involved is "adoption": a volunteer can promise to visit a particular site at the same time every week. Adoption makes food delivery more predictable for both volunteers and businesses. But if too many sites are adopted, volunteers logging into the app for the first time won't have anything to do. This conundrum illustrates an economic concept called market thickness: buyers and sellers (or, here, volunteers and donors) can only accomplish their goals when sufficient numbers of people participate in the process.
Lo, Manshadi, Rodilitz, and Shameli built a mathematical model to study matching between volunteers and donor sites. Choose a scaling parameter $n$ that controls the overall size of the market, and suppose there are $na$ donor sites and $nb$ volunteers. Suppose the probability that a volunteer likes an available donor site is $c/n$, where $c$ is another fixed parameter (so matching is easier when $c$ is large, and tougher when $c$ is small). When the first volunteer arrives, the probability that none of the sites works for them is $(1-\frac{c}{n})^{na}$. Thus, the probability that the volunteer can find a good match is $1-(1-\frac{c}{n})^{na}$. If they are successful, the number of available sites drops by 1. Write $M$ for the total number of matches after all volunteers have arrived. Lo, Manshadi, Rodilitz, and Shameli showed that as the scaling factor $n$ grows large, $M/n$ converges (almost surely) to $a + b - \frac{1}{c} \log(e^{ca}+e^{cb}-1)$.
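A quick simulation can be used to check this limit; the sketch below follows the sequential matching process exactly as described in the paragraph above, with the particular values of $a$, $b$ and $c$ being arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_matches(n, a, b, c):
    """Sequential matching: n*b volunteers arrive one at a time; each likes each of the
    remaining donor sites independently with probability c/n, and matches to one of
    them (removing that site) whenever at least one site is liked."""
    available = int(round(n * a))
    matches = 0
    for _ in range(int(round(n * b))):
        if available > 0 and rng.random() < 1.0 - (1.0 - c / n) ** available:
            matches += 1
            available -= 1
    return matches

a, b, c, n = 1.0, 1.5, 2.0, 2000
estimate = np.mean([simulate_matches(n, a, b, c) / n for _ in range(20)])
limit = a + b - np.log(np.exp(c * a) + np.exp(c * b) - 1.0) / c

print(f"simulated M/n  ~ {estimate:.4f}")
print(f"predicted limit = {limit:.4f}")
```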
Using this mathematical model, Lo and her collaborators then considered multiple rounds of volunteer and donor matching, and explored how removing some donor sites due to adoption would change the overall matching process. They identified two simple and appealing optimal strategies, depending on market characteristics: either all of the donor sites should be adopted, or none of them should be removed from the pool. More complicated efforts at optimization did not increase the number of overall matches. They point out that this theoretical prediction matches real-world observations about the differences between volunteer pools in different places:
Our interviews with site directors reveal that there are inherent differences between the volunteer pools in different locations. For example, some FRUS sites are in college towns, and thus, the volunteer base consists of many engaged students who are more likely to be attentive to last-minute needs. In other sites, however, a majority of volunteers are professionals who may not be as flexible in their level of engagement.
Nonprofits could use this insight to find new, subtle ways to encourage their volunteers to keep coming back. For example, instead of showing the same "adopt" button to everyone logging into the app, Food Rescue U.S. could encourage adoption in big cities and discourage it in college towns. Our heroine Maria Lopez, who knows a lot about building online communities, might have other ideas to test!
Yiqing Ding, Sean Duggan, Matthew Ferranti, Michael Jagadpramana, Rushal Rege, Yuriy Zhovtobryukh, and M.‐Elisabeth Paté‐Cornell, Probabilistic Assessment of the Failure Risk of the Europa Clipper Spacecraft due to Radiations, Risk Analysis Volume 40, Issue 4, April 2020, pp. 842-85. https://doi.org/10.1111/risa.13439.
Irene Lo, Vahideh Manshadi, Scott Rodilitz, and Ali Shameli, Commitment on Volunteer Crowdsourcing Platforms: Implications for Growth and Engagement (arXiv:2005.10731 [cs.GT])
Courtney Milan, Hold Me (links and excerpt)
Oral-History: Elisabeth Paté-Cornell, Engineering and Technology History Wiki.
Alvin E. Roth, The Art of Designing Markets, Harvard Business Review.
|
CommonCrawl
|
Quality over quantity: on workflow and model space exploration of 3D inversion of MT data
K. Robertson (ORCID: 0000-0001-7096-4153), S. Thiel (ORCID: 0000-0002-8678-412X) & N. Meqbel (ORCID: 0000-0003-2459-4838)
Earth, Planets and Space volume 72, Article number: 2 (2020)
3D inversions of magnetotelluric data are now almost standard, with computational power now allowing an inversion to be performed in a matter of days (or hours) rather than weeks. However, when compared to 2D inversions, these are still very computationally demanding. As a result, 3D inversions are generally not subjected to as rigorous testing as a 1D or 2D inversion would be, which has implications when these models are used for geological interpretation. In this study, we explore the parameter space for inversion of continent-scale datasets. The generalisations made regarding the effects of each parameter should also be scalable to smaller surveys and will enable MT practitioners to optimise their results. We have performed testing on a subset of the South Australian component of the eventual Australia-wide AusLAMP (Australian Lithospheric Architecture Magnetotelluric Project). The subset was inverted with different parameters, model setup and data subsets. Specifically, results from testing of the model covariance, the resistivity of the prior model, the inclusion of 'known' information into the prior model, the model cell size, the data components inverted for and the damping parameter \(\lambda \) were all investigated. In our testing of the 3D inversion software, ModEM3DMT, we found that the resistivity of the starting/prior model had significant effect on the final model. Careful selection of initial \(\lambda \) value can aid in reducing computational time whilst having a negligible effect on the resultant model, whilst large covariance values and model cell sizes enhanced conductive features at depth.
The electrical resistivity structure of Earth is 3D. In a sedimentary basin, it often approximates to 1D. If geological structures have consistent strike direction across a region such as a long fault plane, maybe it approximates to 2D resistivity structure. More often than not, however, the geological structures are complex, and even if careful consideration of electromagnetic survey layouts is taken, e.g. measurements taken perpendicular to the strike direction, the data will inevitably be 3D in places. With the advance in 3D inversion codes and more readily available high-performance computing facilities, magnetotelluric (MT) surveys are now commonly collected in arrays rather than transects and thus need to be inverted using a 3D code. Additionally, where transects are collected, it is now becoming more commonplace to invert using a 3D inversion code to allow for three-dimensionality of data (e.g. Robertson et al. (2016); Meqbel et al. (2016)).
Ideally, ensembles of models from stochastic inversion methods will provide a variety of solutions that fall within the acceptable array of model parameters (Muñoz and Rath 2006). This is available in 1D (e.g. Cerv et al. 2007), or in 2D (e.g. Chen et al. 2012), but these types of probabilistic methods are difficult to realise in three dimensions due to the very expensive computational nature of this process when performing hundreds or even thousands of inversions. For now, deterministic methods are standard in 3D, and thus, it is vital that we have the highest level of assurance that the model we present is robust, or at the very least we know which features the model is sensitive to (i.e. required by the data).
This becomes critical when the models are used for quantitative geological interpretations such as calculating melt or fluid percentage, or determining the cause of a conductor (sulphides, graphite, hydrogen, etc.). To ensure interpretations are accurate, the range of resistivity values that could apply to a given feature should be considered in the interpretation, and an exploration of the model space is needed to determine these ranges. Various model resolution and sensitivity testing has occurred in 3D magnetotelluric inversions (see Miensopust 2017, for a comprehensive summary of these). When a preferred model is chosen, specific features are generally tested in a number of ways such as removing the feature in question and letting the inversion run for a few more iterations to see whether it returns or locking cells to certain values and seeing whether there is an effect on the model fit, etc. (e.g. Kelbert et al. 2012; Yang et al. 2015). Before one settles on a preferred model, there are a lot of parameters that can be tested, e.g. varying the model smoothing parameters, the starting resistivity, the inclusion of a priori information, the components inverted (e.g. full impedance tensor or off-diagonals only, the tipper, the phase tensor, etc.), the cell size, the method of error calculation and the error floors or the interstation sampling rate (e.g. inverting every second site). Testing all of these parameters is time-consuming and impractical in cases where significant computational time is not available, or model results are required quickly.
A thorough overview of how data errors, data components inverted and grid rotation affect 3D inversion has been conducted by Tietze and Ritter (2013), and a recent review of 3D modelling in practice by Miensopust (2017) exists, along with an investigation into some ModEM3DMT parameters (Slezak et al. 2019); however, as of yet, an in-depth overview of the many other modelling parameters that affect the resultant model is not available, and some of these will be investigated throughout this manuscript, specific to the 3D inversion code, ModEM3DMT.
We present recommendations for modelling MT arrays using the inversion code ModEM3DMT (Egbert and Kelbert 2012; Kelbert et al. 2014), tested by performing many 3D inversions on AusLAMP data in northeast South Australia (Fig. 1). AusLAMP is the Australian Lithospheric Architecture Magnetotelluric Project, which aims to provide a 3D image of the electrical resistivity distribution of the crust and mantle beneath the Australian continent by acquiring long-period MT data at approximately 2800 sites across Australia at half-degree intervals (approximately 55 km).
Left: AusLAMP long-period MT site deployed locations (black) across Australia. Sites used in this study are red. Right: AusLAMP long-period MT site deployed locations (black) and intended locations (white) over topography in South Australia. The red sites were used in the 3D modelling presented in this study. The survey area is outlined in blue. The grey lines show the locations of the cross sections used for visualising modelling results in later figures
We aim to explore the model space, showing how varying selected modelling parameters and starting models affects the final model. In doing so, we intend to provide direction on which choices significantly affect the result, giving the reader an indication of which inversion configurations to run to obtain the best model for their data.
The MT dataset used in these tests comprises 123 long-period MT sites, across a \(\sim\) 55 km spaced array covering an area of 350 (east–west) × 700 (north–south) km. Various tests were performed, including varying the initial damping value \(\lambda \) (a trade-off between data misfit and model regularisation), incorporating a highly conductive mantle beneath the 410 km and 660 km seismic discontinuities, altering the resistivity of the half-space (which includes a conductive ocean), and varying the model covariance, the model horizontal cell size and lastly the data components (impedance tensor and tipper) included in the inversion.
The long-period MT data used for this investigation were mostly collected from November 2016 to July 2017 in the northeast AusLAMP acquisition project, but also utilised two rows of AusLAMP data south of the northeast project region, which have previously been modelled (Robertson et al. 2016). The total number of sites was 123 (including 120 AusLAMP sites and three legacy sites), and the data were processed using BIRRP (Chave and Thomson 2004), remote referenced using simultaneously recording sites (Gamble et al. 1979). The sites recorded five components (Ex, Ey, Bx, By and Bz) using non-polarisable Pb–PbCl electrodes and a fluxgate magnetometer with a sampling rate of 10 Hz. Each site was left out for about 3 weeks and provided frequency-domain data over a period range of approximately 5–20,000 s, with 10–16,000 s (23 periods) inverted for the impedance tensor and 10–9000 s (21 periods) used for the tipper (as the planar wave approximation fundamental to the magnetotelluric theory starts to break down at longer periods). All inversions presented were performed using the parallelised nonlinear conjugate gradient (NLCG) algorithm of ModEM3DMT (Egbert and Kelbert 2012; Kelbert et al. 2014), which aims to minimise the penalty function:
$$\begin{aligned} \Psi(m,d) = (d-f(m))^{\text{T}}\, C_{\text{d}}^{-1}\, (d-f(m)) + \lambda\, (m-m_{\text{prior}})^{\text{T}}\, C_{\text{m}}^{-1}\, (m-m_{\text{prior}}) \end{aligned}$$
where d is the observed data, m the conductivity model, f(m) the forward response of m, C\(_{{d}}\) the data covariance, m\(_{\text{prior}}\) the prior model, C\(_{{m}}\) the model covariance and \({{\lambda }}\) the damping parameter, a trade-off parameter between data misfit and model structure. The code penalises smoothed deviations from a prior model, with a spatial covariance where small-scale features are more heavily penalised. The inversions were run on Raijin, a high-performance computational facility of the National Computational Infrastructure (NCI), in parallel across 48 cores. (Optimal number of cores is equal to twice the number of periods inverted + 1.) The model parameters that are generally unchanged (unless specified) whilst other parameters were investigated are as follows. Where the X direction is N–S, and the Y direction is E–W, the number of cells is 54 (X) × 104 (Y), plus 15 padding cells (with a size increase factor of 1.3) in each direction, totalling 96 × 132 cells. The horizontal cell size is 7500 × 7500 m, resulting in a model size of 2899 (X) × 3275 (Y) km, with only 350 (X) × 700 (Y) km of that as the survey area (the area which includes MT sites). The vertical (Z) thickness begins at 25 m with 75 layers increasing by a factor of 1.135 to a total depth of 2467 km. The survey area is mostly covered by deep sedimentary basins, usually associated with low static shift in the Australian environment. Most sites are unaffected by static shift, and it is intended that the very thin near-surface layers will account for those sites that do exhibit static shift effects caused by near-surface inhomogeneities of the data. The ocean is incorporated (bathymetry from https://maps.ngdc.noaa.gov/) with a resistivity of 0.3 \(\Omega \)m along with underlying sediments with resistivity of 0.3 \(\Omega \)m linearly increasing to the background resistivity for 2 km beneath the base of the ocean. The resistivity of the sediment layer is not fixed. From previous experience of modelling AusLAMP data, we chose to start exploring the model domain by using a smoothing parameter (covariance value) of 0.4 in X, Y and Z directions. The covariance is applied on a cell basis rather than a distance basis, where larger values favour larger scale variations and/or smaller contrasts. Error floors for all testing are 3% of \(\sqrt{|{Zxy} * {Zyx}|}\) for the off-diagonal components, Zxy and Zyx, 7% of \(\sqrt{|{Zxy} * {Zyx}|}\) for Zxx and Zyy and 0.01 for the respective tipper components, Tzx and Tzy. The starting \(\lambda \) is 10.
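As a small consistency check on the vertical mesh described above (a sketch I have added, not part of the published workflow), the layer thicknesses and cumulative depths can be reproduced directly from the quoted first-layer thickness and growth factor:

```python
import numpy as np

# Vertical discretisation quoted above: first layer 25 m, 75 layers, each layer
# 1.135 times thicker than the one above it.
dz = 25.0 * 1.135 ** np.arange(75)   # layer thicknesses (m)
depth_km = dz.cumsum() / 1e3         # depth to the bottom of each layer (km)

print(f"total model depth ~ {depth_km[-1]:.0f} km")   # ~2468 km, matching the ~2467 km quoted
print("first layer reaching below 42 km:", int(np.searchsorted(depth_km, 42.0)) + 1)
```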
For each test, the preferred parameter was chosen based on a number of criteria; firstly, whether the overall RMS is within 10% of the lowest overall RMS from the inversions included in that test. Inversions could subsequently be excluded based on visual appearance, such as a very speckled model or a distinct lack of model heterogeneity with depth. Lastly, a parameter is introduced as a measure of the variance of the RMS both between the data subsets used in the inversion (RMS of Z and T, RMS\(_{\text{all}}\); RMS of Z only, RMS\(_{Z}\); RMS of T only, RMS\(_{T}\)) and the variance of the RMS with period, represented by the RMS per decade (RMS divided into period range of 10–100, 100–1000 and 1000–10,000 s for each component (Z + T, Z, T)), with a desire to minimise the variance of these values so that the most equal weighting of the data components, and the data across the entire period range, can be found. The RMS variance parameter RMS\(_{\text{var}}\) is defined as follows:
$$\begin{aligned} {\text{RMS}}_{\text{var}} = 0.25\left({\text{RMS}}_{\text{OAvar}} + {\text{RMS}}_{10\text{-}100,\text{var}} + {\text{RMS}}_{100\text{-}1000,\text{var}} + {\text{RMS}}_{1000\text{-}10{,}000,\text{var}}\right) \end{aligned}$$
$$\begin{aligned} {\text{RMS}}_{\text{OAvar}} = 100\,\frac{\left|{\text{RMS}}_{Z} - {\text{RMS}}_{T}\right|}{{\text{RMS}}_{\text{all}}} \end{aligned}$$
and RMS\(_{x\text{-}y,{\text{var}}}\) is calculated in the same way as RMS\(_{\text{OAvar}}\), except over a limited bandwidth where x and y are the minimum and maximum period of the decade. Whilst these criteria were used to select our preferred parameter, we did not always use the preferred parameter in further testing, usually in order to minimise the computational time [reported herein in KSU, where 1 KSU is one thousand service units (or 1000 h) on the supercomputer Raijin] required for subsequent tests.
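For concreteness, the selection statistic can be written as a small helper function; the sketch below is my own restatement of the definitions above (not code from the ModEM distribution), and the example numbers passed in are invented purely to show the calling convention.

```python
def rms_var(rms_all, rms_z, rms_t, rms_all_dec, rms_z_dec, rms_t_dec):
    """RMS_var as defined above. The *_dec arguments are dicts keyed by period
    decade ('10-100', '100-1000', '1000-10000') holding the RMS of (Z+T), Z and T."""
    def imbalance(rz, rt, rall):
        # percentage imbalance between impedance (Z) and tipper (T) misfits
        return 100.0 * abs(rz - rt) / rall

    oa = imbalance(rms_z, rms_t, rms_all)
    decades = ("10-100", "100-1000", "1000-10000")
    per_decade = [imbalance(rms_z_dec[d], rms_t_dec[d], rms_all_dec[d]) for d in decades]
    return 0.25 * (oa + sum(per_decade))


# Invented example values for a single inversion run:
print(rms_var(
    rms_all=1.98, rms_z=2.05, rms_t=1.71,
    rms_all_dec={"10-100": 2.10, "100-1000": 1.95, "1000-10000": 1.88},
    rms_z_dec={"10-100": 2.20, "100-1000": 2.00, "1000-10000": 1.95},
    rms_t_dec={"10-100": 1.80, "100-1000": 1.75, "1000-10000": 1.60},
))
```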
Initial damping parameter—\(\lambda \)
A user-defined variable in the inversion process is the initial damping parameter, \(\lambda \). As \(\lambda \) decreases throughout the inversion, the model progressively fits the data better with smaller scale structures and larger conductivity contrasts. The user can define an initial \(\lambda \) value along with a \(\lambda \) exit value (default 1.0e−8) that, when reached, causes the inversion to exit. This is usually the cause of an inversion exit unless the target RMS is reached (default 1.05). When the RMS decrease per iteration is less than a certain value (default 1.0e−3), then \(\lambda \) is decreased by a user input divisor (default 10), and this process is repeated until the \(\lambda \) exit value is reached. This process ensures that the orthogonality of the search direction vector is maintained and allows the algorithm to escape from a local minimum (Meqbel et al. 2016). Initial damping parameters of 1, 10, 100 and 1000 were tested (with the \(\lambda \) exit value and divisor kept as default), with the results outlined in Figs. 2 and 3. The starting resistivity for these tests was 100 \(\Omega \)m. We have given the RMS as the resultant overall RMS and as the RMS per decade. The RMS per decade is the RMS for a period range of 10–100 s, 100–1000 s, and 1000–10,000 s, to give an indication of how the models are fitting the shortest, intermediate and longest periods. Generally speaking, 10–100 s incorporates periods where each station cannot fully sense the adjacent station (resulting in the 'speckled' appearance of the shallow domain in some models); 100–1000 s is roughly crustal depths; and 1000–10,000 s approximates mantle depths and controls the deep features in the model. The RMS and RMS per decade for these models are similar (Fig. 2), with a \(\lambda \) of 10 slightly higher (overall RMS 14% larger than the minimum RMS). Visually, the inversions look similar. The ideal starting \(\lambda \) was deemed to be 1, obtaining the lowest RMS of 1.74 after 169 iterations and having the lowest RMS\(_{\text{var}}\), indicating the best distribution of model fit across the period range and individual data components. Additionally, the inversion converged in 49 fewer iterations than that of \(\lambda \) 1000. However, a \(\lambda \) of 10 took only 152 iterations with an RMS of 1.98 and was therefore used for further testing, significantly reducing the computational time of subsequent tests.
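The damping schedule described above can be summarised in pseudocode form; the sketch below is a schematic restatement of that schedule (it is not the ModEM3DMT source, and the `step` and `rms_of` callables stand in for one NLCG model update and a misfit evaluation).

```python
def nlcg_with_lambda_cooling(step, rms_of, m0,
                             lam=10.0, lam_exit=1e-8, lam_divisor=10.0,
                             rms_target=1.05, min_rms_decrease=1e-3):
    """Schematic lambda schedule: run NLCG updates, and whenever the RMS decrease per
    iteration stalls below `min_rms_decrease`, divide lambda by `lam_divisor`; stop
    once the target RMS is reached or lambda falls below `lam_exit`."""
    m, rms = m0, rms_of(m0)
    while lam >= lam_exit and rms > rms_target:
        m_new = step(m, lam)
        rms_new = rms_of(m_new)
        if rms - rms_new < min_rms_decrease:
            lam /= lam_divisor   # cool the trade-off parameter and keep iterating
        m, rms = m_new, rms_new
    return m, rms
```

In such a scheme the initial value mainly determines how many cooling steps are needed before small-scale structure is admitted, which is consistent with the similar final models but differing iteration counts reported above.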
Summary of overall RMS, and RMS per period decade, for models testing the initial \(\lambda \) for 1000, 100, 10 and 1. The number of KSU used for each inversion and the RMS\(_{\text{var}}\) are also displayed with values on the right vertical axis
Three east–west cross sections (top row), a north–south cross section (bottom left) and a depth slice at 42 km (bottom right) through the models where all model setup and parameters were identical except \(\lambda \) = 1000, 100, 10 and 1 from top to bottom. The locations that the cross sections are extracted from are shown as grey dashed lines in the bottom right panel
Known resistivities
We test the hypothesis that the incorporation of known resistivities into the prior model returns a better model than using a half-space and/or may reduce the number of iterations required. By known, we mean information that is reasonably and consistently inferred from other geophysical datasets. To do this, we used a half-space (no ocean) with resistivity of 100 \(\Omega \)m and covariance of 0.4 to compare to models with one individual known feature added at a time. The known features are as follows: bathymetry (with ocean resistivity of 0.3 \(\Omega \)m), 10 \(\Omega \)m resistivity beneath the 410 km seismic discontinuity and 1 \(\Omega \)m resistivity beneath the 660 km seismic discontinuity. The inclusion of these features is described in more detail in the following sections, and the inversions are summarised in Table 1 and plotted in Figs. 4 and 5.
Table 1 Information that is included in the prior model for four different inversions
The averaged resistivity values taken from the inversions with different resistivities for the various prior models for depths of 10 to 750 km (left) and 10 to 140 km (right)
Comparison of models with prior information included in the starting model. Top panel: three east–west cross sections taken through the four models outlined in Table 1. Locations of these cross sections are shown as grey dashed lines in the bottom panel on the depth slices. Bottom panel, left to right: NS slices through the four models; EW slice 3 down to a depth of 800 km to show the deep mantle conductivity; depth slice at 42 km with cross-section locations shown as grey dashed lines; depth slice at 172 km with cross-section locations shown as grey dashed lines
It is commonplace in 3D magnetotelluric inversion to include bathymetry in the prior model. Sea water has a very low resistivity of about 0.3 \(\Omega \)m and thus generally is in stark contrast with the much more resistive lithosphere. The existence of sea water around the survey area also severely affects observed MT responses, known as the geomagnetic coast effect (Parkinson and Jones 1979). The sea has a substantial influence on observed MT data particularly when the separation distance from the coast is smaller than the skin depth of the frequency of interest. The skin depth of a typical long-period MT survey (such as this study) can readily reach up to a few hundred kilometres, so ocean effects can be noticeable over quite some distance. However, the closest point of the survey area to the ocean is about 200 km and the inclusion of bathymetry had little effect on the resultant inversion with the RMS remaining similar (9% increase from 1.82 to 1.98), and the inversions visually are almost identical. The inversion converged with 10 less iterations than the half-space starting model inversion.
Mantle discontinuities 410 and 660 km
Seismic models around the world reveal an abrupt increase in seismic velocity at a depth of 410 km—a result of the transition from olivine to wadsleyite (Shearer and Flanagan 1999). This transition additionally shows a decrease in resistivity to 10 \(\Omega \)m (Huang et al. 2005; Yoshino et al. 2008). At 660 km another seismic discontinuity exists, expected as a result of a phase transition from ringwoodite to bridgmanite and periclase (Ito and Takahashi 1989; Ishii et al. 2018). Both of these transitions are associated with decreases in resistivity. To test whether it is useful to incorporate this information into the prior model, we have set the resistivity beneath 410 km to 10 \(\Omega \)m and then with 1 \(\Omega \)m beneath 660 km depth (Constable 2015; Xu et al. 2000). These resistivities were not fixed in the model.
Summary of prior models
As additional prior information is incorporated in this study, there is a small improvement in the overall model fit (with the exception of the bathymetry-only model), as represented by the RMS. The cross sections and depth slices taken through the models (Figs. 4 and 5) are reasonably similar. Visually, there is little difference between the half-space model and the model with bathymetry and ocean-bottom sediments, although the overall RMS increased from 1.82 to 1.98, while taking 10 fewer iterations. The impact of including this information would be more significant if the ocean were closer to the survey area, and/or if the survey had a larger aperture. The inversion including a conductive mantle beneath 410 km depth converged after 182 iterations, 30 more iterations than when only bathymetry information is included. The RMS decreased with this extra information, but only minutely, from 1.98 to 1.81. Once the even more conductive mantle beneath 660 km was added, the RMS decreased slightly again, from 1.81 to 1.71, taking 29 more iterations than the bathymetry-only inversion. These two models that incorporated enhanced conductivities beneath 410 km are less resistive at depth, particularly in the southwest of the model region, which can be seen in NS slice 1 and in the averaged resistivity–depth curves (Figs. 4 and 5). The variation in RMS between all of these models is marginal, and all but one of the models (the one with bathymetry only) have an overall RMS within 10% of the model with the lowest overall RMS. Using the selection criteria outlined earlier, the model with no prior information is preferred, having an RMS within 10% of the smallest overall RMS of those tested and the smallest RMS\(_{\text{var}}\). However, given that we expect the inclusion of the ocean to be much more important in a survey region closer to the coast, the second-ranked model, which includes all of the prior information that we tested (bathymetry, and prior resistivity reduced to 10 \(\Omega \)m beneath 410 km and 1 \(\Omega \)m beneath 660 km), may be preferable. The model that we used for further testing was neither of these; we instead used the bathymetry-only model because it required much less computational time.
Starting resistivity in prior model
ModEM3DMT allows the user to input a starting model and a prior model. The prior model is a compulsory input, and by default if no starting model is given, then the model starts from the prior model. In regions of the model poorly constrained by the data (e.g. in areas outside of the survey area or at depths exceeding MT signal penetration), the resistivities usually revert to the resistivity of the prior model. No independent starting model was included in any of the tests; thus, the start model is identical to the prior model. A variety of resistivities were tested for the prior/starting model (half-space plus ocean where ocean is locked at resistivity of 0.3 \(\Omega \)m, with bathymetry taken from https://maps.ngdc.noaa.gov/). The land part of the model was varied to resistivity values that were evenly spaced on a log scale, 10, 31.6, 100, 316 and 1000 \(\Omega \)m (1, 1.5, 2, 2.5 and 3 on log scale). In addition, the average apparent resistivity of all data points across all sites and all periods was calculated to be 69 \(\Omega \)m, and this was used also as a prior model. Depth slices (Fig. 9) and cross sections (Fig. 10) show key features of the models.
It is difficult to determine the best resistivity for the prior model, as most of the inversions fit the data similarly well (for example, the difference in overall RMS between the 10 and 69 \(\Omega \)m prior models is only 0.18, a 10% decrease from 1.84 to 1.66), with the exception of the 1000 \(\Omega \)m prior model and, to a lesser extent, the 316 \(\Omega \)m and 100 \(\Omega \)m models, which show a poorer model fit both by overall RMS and RMS per decade (Figs. 6 and 7). The misfit reduction should also be considered in this case ((starting RMS - final RMS)/starting RMS). The largest percentage decrease in misfit occurs for the models with the highest starting resistivity (Fig. 6). The largest range in RMS between models occurs in the 10–100 s range, with RMS varying between 2.45 for 1000 \(\Omega \)m and 1.85 for 10 \(\Omega \)m. This is likely a result of the lack of sensitivity at shallow depths in between 55 km spaced stations. The skin depth for periods longer than 100 s is roughly similar to or larger than the interstation spacing, and this reduces the dependency on the starting half-space. In areas of known sedimentation, a low starting resistivity near the surface, where there is little to no sensitivity between stations, may serve to reduce the spottiness near the surface, with the resistivity staying close to the starting resistivity, and close to the resistivity values achieved beneath the sites where there is sensitivity. Sediments within our survey region are known to have a resistivity of about 6 \(\Omega \)m in the top 1.5 km, and 25 \(\Omega \)m in the 2 km beneath, from broadband MT surveys and well resistivity logs (Didana et al. 2017). This explains the best RMS in the 10–100 s bandwidth for a 10 \(\Omega \)m starting resistivity, which sits between 6 and 25 \(\Omega \)m.
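The skin-depth argument above can be made concrete with the standard approximation \(\delta \approx 0.503\sqrt{\rho T}\) km; the short sketch below simply tabulates this for a few representative resistivities and periods (the specific values are illustrative).

```python
import numpy as np

def skin_depth_km(rho_ohm_m, period_s):
    """Standard MT skin-depth approximation, delta ~ 0.503*sqrt(rho*T) in km."""
    return 0.503 * np.sqrt(rho_ohm_m * period_s)

for rho in (10, 31, 100):
    depths = {T: round(float(skin_depth_km(rho, T)), 1) for T in (10, 100, 1000)}
    print(f"rho = {rho:>3} ohm-m, skin depth (km) by period (s):", depths)
```

For a 100 \(\Omega \)m half-space at 100 s this gives roughly 50 km, comparable to the 55 km station spacing, consistent with the reduced dependence on the starting half-space at longer periods.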
The RMS per decade (10–100, 100–1000 and 1000–10,000 s) for seven different inversions with differing starting half-spaces, 3, 10, 31, 69, 100, 316 and 1000 \(\Omega \)m
The RMS, KSU (divided by ten for plotting purposes) and the RMS variance (left vertical axis) and the starting RMS (right vertical axis) for models with starting resistivity for seven different inversions with differing starting half-spaces, 3, 10, 31, 69, 100, 316 and 1000 \(\Omega \)m
The influence of the prior model on the final converged model was found to be significant, with a general trend being that the more conductive the prior/starting model, the more conductive the final model. For each converged model, the resistivities were averaged for each depth (excluding model padding cells) and are visualised in Fig. 8. By a depth of 400 km, the average resistivity is very similar to the prior model resistivity, which indicates a lack of sensitivity at these depths. Similarly, at shallow depths the effect of skin depth and station spacing can be seen. The variation in averaged resistivity for the different models reflects 'residual' or leftover starting resistivity between stations that cannot be constrained by the coarse station spacing (55 km) and by the short data periods (shortest period \(\sim \) 10 s).
The averaged resistivity values taken from the inversions with different resistivities for the prior model ranging from 3 to 1000 \(\Omega \)m, for depths of 10 to 140 km (left) and 10 to 450 km (right)
The averaged resistivity of the most conductive model (3 \(\Omega \)m prior) and the most resistive model (1000 \(\Omega \)m prior) is most similar in lower crustal to shallow upper mantle depths (Moho depth \(\sim \) 36 km in most of the model region; Kennett et al. 2011) where sensitivity peaks. The smallest spread occurs around 40 km depth where the resistivity varies by 0.733log \(\Omega \)m between these models (from 3.19log \(\Omega \)m for the 1000 \(\Omega \)m prior to 2.46log \(\Omega \)m for the 3 \(\Omega \)m prior). However, if we restrict the analysis by eliminating the 316 and 1000 \(\Omega \)m (due to a significantly poorer fit of these models), the range decreases to just 0.06log \(\Omega \)m (from 2.52log \(\Omega \)m for the 100 \(\Omega \)m prior to 2.46log \(\Omega \)m for the 3 \(\Omega \)m prior). At 10 km depth, the range (again excluding 316 and 1000 \(\Omega \)m) is 0.12log \(\Omega \)m (from 2.45log \(\Omega \)m for the 100 \(\Omega \)m prior to 2.33log \(\Omega \)m for the 3 \(\Omega \)m prior), and by 100 km, the range is 0.92log \(\Omega \)m (from 2.59log \(\Omega \)m for the 100 \(\Omega \)m prior to 1.67log \(\Omega \)m for the 3 \(\Omega \)m prior). These results highlight the importance of choosing a reasonable prior model as the absolute values of the resistivity of the converged model is very dependent on the starting model. Whilst these results do not give a definitive answer of which of these models is best, we have confidence that 316 and 1000 \(\Omega \)m are too resistive as indicated by the substantially higher initial and final RMS values (Figs. 6 and 7). Averaging the apparent resistivity for every site and every period as we did with the 69 \(\Omega \)m model similar to the method of Meqbel et al. (2014) seems like a suitable approach for a ball-park resistivity for a prior model half-space with the RMS of 1.84 for this model 10% higher than the best achieved RMS of 1.66. In regions where sedimentation is known to occur, a lower starting resistivity (using the averaged resistivity across only the shortest periods) may serve to minimise the 'speckling' in shallow inversion slices; however, we note that in our tests these low starting resistivities introduce large heterogeneities in deep model slices which should be treated with caution (e.g. 172 km depth slice for 3 and 10 \(\Omega \)m models in Fig. 9).
Column 1: depth slice through inversions with 3, 10, 31, 69, 100, 316 and 1000 \(\Omega \)m prior resistivity at a depth of 42 km. Column 2: as column 1 but for a depth of 172 km. The location of the sections from Fig. 10 is shown as grey dashed lines in the right-hand column
Columns 1–3: east–west cross sections through inversions with 3, 10, 31, 69, 100, 316 and 1000 \(\Omega \)m prior resistivity, shown to a depth of 300 km. The location of the sections is shown as grey dashed lines in Fig. 9. Column 4: north–south transect. Conductors referred to in the text are labelled
To gain further insight from this test, the best four models (each with a final RMS within 15% of the lowest RMS achieved for inversions within this test; the 3, 10, 31 and 69 \(\Omega \)m priors) were used to calculate the average and the standard deviation of the models. The average model itself is of limited use given that it depends on the starting resistivities that were run (e.g. if we had only run the 10, 31 and 100 \(\Omega \)m models, the average resistivity model would be a lot more conductive than when we include the higher resistivity prior models of 316 and 1000 \(\Omega \)m). However, the standard deviation provides useful information on the uncertainty of features, and a snapshot of the standard deviation at depths of 42 and 172 km is shown in Fig. 11.
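A minimal sketch of this model-ensemble statistic, assuming (our assumption, for illustration) that the converged models are stored as equally shaped arrays of log10 resistivity:

import numpy as np

def model_stats(models_log10):
    """Cell-wise mean and standard deviation of log10 resistivity across a
    list of converged models with identical grids."""
    stack = np.stack(models_log10, axis=0)    # shape (n_models, nz, ny, nx)
    return stack.mean(axis=0), stack.std(axis=0)

# Interpretation: a standard deviation of 0.2 log10(ohm-m) about a mean of
# 2.0 corresponds roughly to the 10**1.8 to 10**2.2 (about 63-158 ohm-m) band.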
Average and standard deviation of ModEM inversions with prior resistivities of 10, 31.6, 69, 100, 316 and 1000 \(\Omega \)m. Black triangles are locations of MT sites. A standard deviation of 0.2 at a certain point in the model means that 68% of the model values lie within 0.2 log units of resistivity of the average value of that cell (averaged across all starting models). For example, if the average value at a point = 100 \(\Omega \)m (or 2log \(\Omega \)m) and St. Dev = 0.2log \(\Omega \)m, then 68% of values lie within ± 0.2log \(\Omega \)m, i.e. between 1.8 and 2.2log \(\Omega \)m or between 65 and 160 \(\Omega \)m. The top row shows the average resistivity values (with 0.5 \(\Omega \)m contours), and the bottom row shows the standard deviation (with 0.2 contours)
Some of the main features of the model are labelled on the 3 \(\Omega \)m prior model in the depth slices of Fig. 9. C1a is evident in all of the models (Figs. 9 and 10), whereas C3 is only distinct in the 3 and 10 \(\Omega \)m prior models. For models with 3 and 10 \(\Omega \)m prior resistivities, C1a is further north than in the other models, and conductor C1b appears to the southwest. For higher starting resistivity models, the C1a feature is further south and C1b is no longer required in the model. The southwest corner of the model is associated with the highest uncertainties at 42 km depth (Fig. 11); the location of C1a and/or C1b is uncertain. Conductor C2 is west–northwest oriented and occurs on the edge of a transition from the resistive southwest to the more conductive northeast of the model for inversions with priors of 31 \(\Omega \)m or less. This feature is not present in all models and as such the standard deviation here is very high in the 172 km depth slice. C3 is very conductive and situated in the mantle in the southeast of the model region. It only appears in the models with priors of 3 and 10 \(\Omega \)m. This region also has very high uncertainties (Fig. 11).
For the remaining investigations, we choose to use the 31 \(\Omega \)m model; it has the lowest RMS\(_{\text{var}}\) of those models that have an overall RMS within 10% of the minimum overall RMS of all tests. Visually, this inversion was also preferred because the extremely conductive C2 and C3 features in the mantle of the 3 and 10 \(\Omega \)m prior models are less pronounced, which is appropriate given the high uncertainty of these features (Fig. 11). The 31 \(\Omega \)m prior also took the fewest iterations.
Covariance
The ModEM3DMT code penalises smoothed deviations from a prior model. The covariance controls the behaviour of the model norm: a higher covariance results in a smoother overall model in which small-scale, rough features are more heavily penalised. The ModEM inversion routine has the ability to define how many times the covariance matrix is applied across the cells of the horizontal layers at the same model perturbation. For our tests, we use a value of 2, i.e. the covariance matrix is applied twice (with the exception of one test), which results in generally smoother gradients compared to applying it once. Many covariance values were tested within the possible range between 0 and 1 (the default covariance is 0.3), where larger values result in smoother models and smaller values in rougher models. Choosing the best covariance is a trade-off between fitting the data well and creating a geologically plausible model without the resistivity being too 'patchy' or too smooth. Inversions with a standard uniform covariance (with the exception of ocean cells, which are fixed) of 0.1, 0.2, 0.3, 0.4, 0.5 and 0.75 were tested (Table 2, Figs. 12 and 13).
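The qualitative effect of applying a model smoother once versus twice can be illustrated with a toy operator; the sketch below is not ModEM's actual covariance implementation — the nearest-neighbour averaging kernel and the use of the covariance value as a mixing weight are our simplifications.

import numpy as np

def smooth_once(layer, alpha=0.4):
    """One pass of a simple nearest-neighbour smoother over a 2D model layer.
    Here alpha stands in for the covariance value: larger alpha mixes in
    more of the neighbouring cells."""
    p = np.pad(layer, 1, mode="edge")
    neigh = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
    return (1.0 - alpha) * layer + alpha * neigh

def apply_covariance(layer, alpha=0.4, times=2):
    """Apply the smoother 'times' times, mimicking the option of applying
    the covariance matrix more than once per model perturbation."""
    for _ in range(times):
        layer = smooth_once(layer, alpha)
    return layer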
Table 2 Results of covariance testing
Each row contains two electrical resistivity depth slices, at 42 and 172 km, for one covariance test. The eight rows correspond to the eight covariance tests: homogeneous covariances across the model of 0.1, 0.2, 0.2 with the smoothing applied once, 0.3, 0.4, 0.5 and 0.75, and a covariance of 0.4 down to 10 km then 0.2 at greater depths. The grey dashed lines show the locations of the cross sections in Fig. 13. The location of the MT sites inverted is shown by black dots
East–west cross section through model to a depth of 300 km with covariance of 0.1, 0.2, 0.2 with the smoothing applied once, 0.3, 0.4, 0.5, 0.75 and 0.4 to 10 km then 0.2 beneath. The locations of these transects are shown in Fig. 12
The results of these tests are outlined in Table 2. The best covariance value for a dataset is largely dependent on the model cell sizes, where this choice is dictated by the MT site spacing and the complexity of the data (closer site spacing and more complex data require smaller cell sizes). For the AusLAMP dataset with 55 km site spacing, large covariances (> 0.5) cause a substantially higher RMS. Whilst the model with a covariance of 0.2 has the lowest overall RMS and RMS per decade (the averaged RMS of all sites in the inversion, using only data in the period range of the specified decade; Table 2), the model has a very speckled appearance until a depth of about 35 km. We tested a covariance of 0.4 for the top 10 km and 0.2 beneath, to see whether this would encourage a closer fit of the data beneath 10 km and a smoother, less speckled model in the top 10 km. Whilst there was not much improvement, this method may be useful; more testing would need to be done to find the optimum depth at which to change covariance, or a gradual decrease in covariance with depth could be trialled to reduce the distinct change that occurs at the transition from 0.4 to 0.2. Our approach is opposite to that preferred by Yang et al. (2015) with the EarthScope USArray data (site spacing 70 km). They found that a decreased covariance at depths shallower than 2 km gave them optimal results, with a rougher shallow structure and a slightly smoother and simpler deep structure. The models with covariances of 0.5 or greater start to lose definition in the resistivity features and increase in RMS as covariance increases (Fig. 14). The models with covariance values of 0.2, 0.3 and 0.4 have better resolved (less smooth) features. Whilst 0.2 has the lowest RMS, that model has a speckled appearance throughout the top \(\sim\) 40 km; the next lowest RMS, for 0.3, has a speckled appearance down to \(\sim \) 25 km. Thus, 0.4 was chosen as the best covariance for this dataset due to its visual appearance and second lowest RMS\(_{\text{var}}\), although the overall RMS (1.75) is 14% larger than the minimum RMS of the 0.2 model (1.53).
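The RMS per decade referred to above can be computed from error-normalised residuals; a minimal sketch, assuming flat arrays of periods, residuals and error bars (one entry per site/period/component), is:

import numpy as np

def rms_per_decade(periods_s, residuals, errors):
    """RMS of error-normalised residuals grouped by period decade
    (e.g. 10-100 s, 100-1000 s, 1000-10,000 s)."""
    norm = residuals / errors
    decade = np.floor(np.log10(periods_s)).astype(int)
    return {int(d): float(np.sqrt(np.mean(norm[decade == d] ** 2)))
            for d in np.unique(decade)}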
Plot of covariance vs. RMS for the overall RMS, and for the period ranges of 10–100, 100–1000 and 1000–10,000 s
All models in this study (except the one half-space inversion in Table 1) were run with the ocean included and locked to a resistivity of 0.3 \(\Omega \)m throughout the inversion. However, for comparison, each of the covariances tested in Table 2 was also run with the ocean cells given a starting/prior resistivity of 0.3 \(\Omega \)m but left unlocked. Columns 7 and 8 in Table 2 show the overall RMS and the number of iterations taken to converge when the ocean cells are not locked. In most instances, the overall RMS decreases, but the number of iterations to converge increases.
Lateral cell size
A choice of model cell size in the horizontal direction is a trade-off between computational time and having enough cells between each MT station to be able to incorporate interstation features. Whilst there is no hard and fast rule, a reasonable cell size seems to be about a fifth to a sixth of the site spacing (Miensopust 2017, and references therein). The USArray has a site spacing of 70 km, with various modellers using cell sizes of 15 km (Bedrosian and Feucht 2014, cell size 21% of site spacing), 10 km (Bedrosian 2016, 14%) and 12.5 km (Meqbel et al. 2014, 18%, who also tested 25 km; 35%). AusLAMP has a smaller site spacing of approximately 55 km, and Robertson et al. (2016) and Thiel et al. (2018) use a cell size of 10 km (18%). It is important to note the implicit relationship of the covariance with the number of cells (rather than physical cell size) in its implementation in ModEM3DMT. For example, if the model covariance is applied to a 'delta function model' with a 1 \(\Omega \)m cell in a 100 \(\Omega \)m background, the resulting smoothed model will show this anomaly smoothed out across multiple cells. If the same covariance value is applied but the cell size is doubled, the 'smoothing distance' is also doubled.
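The delta-function example can be made concrete with a one-dimensional toy model; the smoother, its weight and the 1e-3 threshold below are illustrative assumptions only, chosen to show that a per-cell covariance implies a physical smoothing length proportional to the cell size.

import numpy as np

def spread_km(cell_km, alpha=0.4, passes=2, n=41):
    """Physical width over which a single anomalous cell is smeared by a
    fixed per-cell smoother; the same kernel on 10 km cells spreads the
    anomaly over twice the distance it does on 5 km cells."""
    m = np.zeros(n)
    m[n // 2] = 1.0                                  # delta-function anomaly
    for _ in range(passes):
        p = np.pad(m, 1, mode="edge")
        m = (1 - alpha) * m + alpha * 0.5 * (p[:-2] + p[2:])
    return float(np.sum(m > 1e-3)) * cell_km         # cells visibly affected x cell size

print(spread_km(5.0), spread_km(10.0))               # the second is twice the first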
Here, we test lateral cell sizes of 5, 7.5 and 10 km (9.1, 13.6 and 18.2% of the site spacing, respectively). We first compare varying the cell size while keeping the preferred covariance of 0.4 that was used for the 7.5 km cell size throughout all previous tests. It was found that the smaller the cell size, the better the RMS (1.64 for 5 km vs. 1.99 for 10 km; Table 3). However, the time taken for the models to converge does not relate directly to the cell size: whilst the 5 km cell size used 10 KSU and the 10 km used 4.9 KSU, the 7.5 km cell size model unexpectedly used only 5.3 KSU.
Table 3 Results of lateral cell size tests
To test cell sizes further, the covariance needed to be adjusted to suit the different cell sizes, owing to the application of the covariance in terms of cells and not distance. For the smaller cell size of 5 km, a larger covariance of 0.5 was tested to reduce the speckled appearance of the model. This increased the RMS as expected, from 1.64 to 2.03, but produced a smoother model, and it also decreased the time taken, from 10 to 8.4 KSU. Deeper in the model (see the 172 km depth slice; Fig. 15), the conductivity is more enhanced (we saw earlier in the covariance testing that a larger covariance introduced more conductive features at depth; Fig. 12). Conversely, we tested the 10 km cell size model with a smaller covariance of 0.3, which resulted in a large decrease in RMS (1.54 for covariance 0.3, down from 1.99 for covariance 0.4) and a small increase in the time taken (5.3 KSU, up from 4.9 KSU). The lower covariance improved the model, helping to extract more resolution from the larger cell size. With larger cell sizes, the deeper conductivity structures became more conductive. This model achieved the lowest overall RMS, but in comparison with the smaller cell size models there appears to be less resolution at shallow depths. At mantle depths (see, for example, the 172 km depth slice, Fig. 15), conductive features similar to C2 and C3 observed in Fig. 9 are apparent. The resistivity structure appears to vary laterally more rapidly than the expected sensitivity of the MT data at those depths, which may be a manifestation of anisotropy; similar features have been observed in other global examples, such as beneath the Arabian Shield (Bedrosian et al. 2019). Further testing, such as synthetic modelling studies and investigation into the possibility of anisotropy, is required to determine whether these features are indeed required by the data and/or anisotropic. Meqbel et al. (2014), when testing 12.5 versus 25 km cell sizes for modelling the USArray MT data, found a significant increase in RMS for the larger cell size, and whilst the large-scale structures are relatively unchanged, conductors tend to be narrower and sometimes higher in amplitude for the smaller cell size. They note the biggest changes occur near the surface; however, deeper structures are also affected. Our results also show an increase in overall RMS with cell size that can be accounted for by decreasing the covariance for larger cell sizes. The computational time for an inversion is heavily dependent on the number of cells in the model; thus, it is favourable to have a larger cell size if the resultant model is comparable in resolution.
Modelling of data using different lateral cell sizes of 5, 7.5 and 10 km, respectively. These were first run with the same covariance (0.4), and then with a larger covariance (0.5) for the 5 km cell size to reduce speckling and a smaller covariance (0.3) for the 10 km cell size to obtain a more detailed model
Components to invert
Within ModEM3DMT, there are various options for which data components to invert. Users of 3D codes commonly invert the full impedance tensor in conjunction with the tipper (e.g. Heise et al. 2013; Thiel and Heinson 2013; Robertson et al. 2016, 2017). In a 2D inversion, only the off-diagonal components, Zxy and Zyx, are inverted, and the same can be done in a 3D inversion (e.g. Zhdanov 2010; Lindsey and Newman 2015). There are arguments for and against including the diagonal components: for example, the diagonal components of the impedance tensor (which can be as little as one-tenth the magnitude of the off-diagonal components) can degrade the inversion due to their lower signal-to-noise ratios (Newman et al. 2008), whereas others found that inverting the full impedance tensor provided a more detailed resistivity structure and revealed features that were otherwise missed by 2D or 3D inversions of the off-diagonal components only (Tietze and Ritter 2013; Patro and Egbert 2011). If the tipper is inverted on its own, the horizontal position and relative resistivity contrasts can be resolved, but the depth of features and absolute resistivity values cannot (e.g. Siripunvaraporn and Egbert 2009). To investigate the effects of inverting the different components, four inversions were run as outlined in Table 4, using a starting resistivity of 31 \(\Omega \)m and error floors of 7% of \(\sqrt{|Zxy * Zyx|}\) for Zxx and Zyy, 3% of \(\sqrt{|Zxy * Zyx|}\) for Zxy and Zyx, and 0.01 for Tzx and Tzy. The results summarised in Table 4 show that when only data subsets are inverted, the resultant model does not fit the components that were not inverted, highlighting the importance of joint inversion of the various data components (full Z and T). The results show that C1a is present to some degree in each of the data components (as visible in the 42 km depth slice of Fig. 16). The most conductive parts of C2 come from the tipper and the diagonal components of the impedance tensor, although all components show enhanced conductivity in the northeast of the model. An inversion of the tipper does not constrain depths or absolute resistivities well, and as such the 10 km depth slice is very conductive and appears more like the conductance (conductivity × thickness) of the top 10 km than the conductivity at 10 km (see bottom row in Fig. 16). These results highlight the importance of a full Z and T inversion and the respective contributions of the components.
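A minimal sketch of how the stated error floors could be imposed on one site and period is given below; the data layout (a 2×2 impedance array and a length-2 tipper error array) is our assumption and not the format used by ModEM3DMT.

import numpy as np

def apply_error_floors(Z, Z_err, T_err,
                       floor_diag=0.07, floor_offdiag=0.03, floor_tipper=0.01):
    """Impose the error floors described above for a single site/period.
    Z is a 2x2 complex array [[Zxx, Zxy], [Zyx, Zyy]], Z_err a 2x2 array of
    impedance errors, T_err a length-2 array of tipper errors. Floors for
    the impedance components are fractions of sqrt(|Zxy * Zyx|)."""
    scale = np.sqrt(np.abs(Z[0, 1] * Z[1, 0]))
    floors = np.array([[floor_diag, floor_offdiag],
                       [floor_offdiag, floor_diag]]) * scale
    return np.maximum(Z_err, floors), np.maximum(T_err, floor_tipper)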
Table 4 Investigation of how inverting different components of the impedance tensor and tipper affects the final model
Depth slices at 10, 31 and 185 km for inversions of different components of the impedance tensor and tipper. The rows, from top to bottom, are: full impedance tensor and tipper (Zxx, Zxy, Zyy, Zyx), tipper only (Tzx, Tzy), full impedance tensor only (Zxy, Zyx, Zxx, Zyy), off-diagonals of the impedance tensor (Zxy, Zyx), tipper (Tzx, Tzy) and conductance of the full impedance and tipper inversion from 0 to 10 km
Whilst vast improvements have been made in recent years, 3D inversion of MT datasets is still very computationally expensive and time-consuming. As a result, it can be difficult to explore the impact that varying modelling parameters can have on the final model. The purpose of our investigation using the ModEM3DMT inversion code was to determine the types of changes that can have a significant impact on the resultant inversion and which tests are required to help ensure robustness of the final model features, and to guide the reader to a more informed choice of model parameters. The AusLAMP dataset provides an ideal base for this testing. Whilst it is not our main intention to choose the best parameters for this dataset, we do conclude each section with our choice of preferred parameter/s, based on a combination of overall RMS, visual appearance and minimising the variance of the RMS across the period range and the different data components (Z and T), represented by a parameter we introduce, RMS\(_{\text{var}}\).
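The RMS\(_{\text{var}}\) measure is not defined in closed form here; one plausible reading — the variance of the RMS values computed separately per period decade and per data component — is sketched below, and the actual definition used by the authors may differ.

import numpy as np

def rms_var(rms_by_group):
    """Variance of RMS values computed per (component, period-decade) group,
    e.g. {('Z', '10-100 s'): 1.6, ('T', '10-100 s'): 1.9, ...}."""
    vals = np.array(list(rms_by_group.values()), dtype=float)
    return float(np.var(vals))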
A reduction in the damping parameter, \(\lambda \), from 1000 to 1 made little difference to the resultant model and the overall RMS. The greatest overall RMS of 1.98 was for \(\lambda \) = 10, while \(\lambda \) = 1 had the smallest overall RMS of 1.74 but took 17 more iterations. Our preferred \(\lambda \) starting value is \(\lambda \) = 1; however, we performed subsequent testing using \(\lambda \) = 10 to save computational time. Given the savings in computational time that the initial \(\lambda \) value can provide, we suggest testing values early in the inversion process.
We investigated the effects of including prior knowledge in the starting/prior model. The inclusion of bathymetry and underlying ocean sediments required 10 fewer iterations than a half-space but gave a slightly higher RMS (1.98 instead of 1.82). The addition of a conductive mantle of 10 \(\Omega \)m beneath 410 km decreased the RMS slightly (to 1.81), but took an extra 30 iterations compared to the model that only included bathymetry. When the prior model also has the resistivity beneath 660 km reduced to 1 \(\Omega \)m, the final RMS is the lowest, 1.71, taking one fewer iteration to converge than the model without the further reduction in resistivity beneath 660 km. Our two preferred models were at both ends of the spectrum: a half-space, and the model that includes bathymetry, a 10 \(\Omega \)m resistivity beneath 410 km depth and a 1 \(\Omega \)m resistivity beneath 660 km (i.e. all prior information included). However, our model area is several hundred kilometres from the ocean; a survey area closer to the ocean would probably not find a half-space without ocean information included to be the preferred model.
The dependence of the final inverted model on the prior/starting model was further tested by trialling seven different half-space prior models between 3 and 1000 \(\Omega \)m, including one inversion with a prior/starting resistivity of 69 \(\Omega \)m, which was the averaged apparent resistivity across all sites and all data periods. It was found that the inverted models generally showed the same features but at different absolute resistivities, thus highlighting the importance of using a suitable starting model. The robustness of features was also assessed by deriving the standard deviation across the converged models of these seven half-space priors. We found that using the averaged apparent resistivity across all data points for the prior/starting resistivity was a suitable starting point, and resistivity values close to this could be tested. Our preferred model used a prior resistivity of 31 \(\Omega \)m, which had the second best overall RMS and took the fewest iterations.
Similarly, a good choice of the covariance smoothing parameter is also important and is dependent on the site spacing, cell size and complexity of the resistivity structure. For the site layout (55 km spaced array) and cell size used here (7.5 km), a covariance of 0.4 was most suitable, as this reduced the spotting in shallower depth slices that would arise with smaller covariances and prevented the dramatic over-smoothing of higher covariances. Given a smaller cell size, a higher covariance is required, as can be seen in the depth slices of the 5 km cell size model (Fig. 15), where the resistivity is spotty. This is due to the current formulation of the covariance in terms of cells and not distance. When the covariance for the 5 km cell size model is increased from 0.4 to 0.5, the model looks smoother and more geologically plausible and is a better choice. Similarly, when the cell size is increased to 10 km, a covariance of 0.3 was preferred.
In the last section, we inverted data subsets individually (full Z and T, Z only, T only, and Zxy and Zyx only) and found that it is best to include all of the information (i.e. a full Z and T inversion), as inversion of subsets fails to explain the data components not included in the inversion.
With the advance and parallelisation of 3D inversion software and more readily available high-performance computing facilities, it is now possible to invert large datasets with hundreds of sites spanning several thousand kilometres. However, the entire AusLAMP array (and the USArray) will include thousands of MT sites, and a task for the future is to consider how these will be inverted to obtain a suitable model of the resistivity of the Australian (or US) lithosphere within a feasible amount of time and within the limits of computing facilities. For example, the desired output of AusLAMP is to image the resistivity structure of the entire lithosphere beneath Australia with approximately 3200 sites over an area of 7.7 million km\(^2\), but for now outputs are centred on the modelling of smaller regions. Whether it be the smaller regions we currently manage or whole-continent inversions, and whether AusLAMP or another large-scale array, this information should serve to focus the inversions and the parameter testing required, and to reduce the time and computational demand.
Availability of data
The magnetotelluric data used in this manuscript can be downloaded in EDI format from the South Australian Resources Information Gateway (SARIG) https://map.sarig.sa.gov.au/Shortcut/MTInterpreted.
Bedrosian PA (2016) Making it and breaking it in the midwest: continental assembly and rifting from modeling of earthscope magnetotelluric data. Precambr Res 278:337–361
Bedrosian PA, Feucht DW (2014) Structure and tectonics of the northwestern United States from Earthscope USArray magnetotelluric data. Earth Planet Sci Lett 402:275–289
Bedrosian PA, Peacock JR, Dhary M, Sharif A, Feucht DW, Zahran H (2019) Crustal magmatism and anisotropy beneath the Arabian shield-a cautionary tale. J Geophys Res Solid Earth 124(10):10153–10179
Cerv V, Menvielle M, Pek J (2007) Stochastic interpretation of magnetotelluric data, comparison of methods. Ann Geophys. https://doi.org/10.4401/ag-3084
Chave A, Thomson D (2004) Bounded influence magnetotelluric response function estimation. Geophys J Int 157(3):988–1006
Chen J, Hoversten GM, Key K, Nordquist G, Cumming W (2012) Stochastic inversion of magnetotelluric data using a sharp boundary parameterization and application to a geothermal site. Geophysics 77(4):E265–E279
Constable S (2015) Geomagnetic induction studies, vol 5, 2nd edn. Elsevier, Oxford, pp 219–254
Didana YL, Heinson G, Thiel S, Krieger L (2017) Magnetotelluric monitoring of permeability enhancement at enhanced geothermal system project. Geothermics 66:23–38
Egbert GD, Kelbert A (2012) Computational recipes for electromagnetic inverse problems. Geophys J Int 189:251–267
Gamble TD, Goubau WM, Clarke J (1979) Error analysis for remote reference magnetotellurics. Geophysics 44(5):959–968
Heise W, Caldwell TG, Bertrand EA, Hill GJ, Bennie SL, Ogawa Y (2013) Changes in electrical resistivity track changes in tectonic plate coupling. Geophys Res Lett 40(19):5029–5033
Huang X, Xu Y, Karato S-I (2005) Water content in the transition zone from electrical conductivity of wadsleyite and ringwoodite. Nature 434:746
Ishii T, Huang R, Fei H, Koemets I, Liu Z, Maeda F, Yuan L, Wang L, Druzhbin D, Yamamoto T, Bhat S, Farla R, Kawazoe T, Tsujino N, Kulik E, Higo Y, Tange Y, Katsura T (2018) Complete agreement of the post-spinel transition with the 660-km seismic discontinuity. Sci Rep 8(1):6358
Ito E, Takahashi E (1989) Postspinel transformations in the system Mg2SiO4-Fe2SiO4 and some geophysical implications. J Geophys Res Solid Earth 94(B8):10637–10646
Kelbert A, Egbert GD, deGroot Hedlin C (2012) Crust and upper mantle electrical conductivity beneath the yellowstone hotspot track. Geology 40(5):447
Kelbert A, Meqbel N, Egbert G, Tandon K (2014) ModEM: a modular system for inversion of electromagnetic geophysical data. Comput Geosci 66:40–53
Kennett BLN, Salmon M, Saygin E, Group AW (2011) AusMoho: the variation of Moho depth in Australia. Geophys J Int 187(2):946–958
Kirkby A, Zhang F, Peacock J, Hassan R, Duan J (2019) The MTPy software package for magnetotelluric data analysis and visualisation. J Open Source Softw 4:1358
Krieger L, Peacock J (2014) MTpy: a Python toolbox for magnetotellurics. Comput Geosci 72:167–175
Lindsey NJ, Newman GA (2015) Improved workflow for 3D inverse modeling of magnetotelluric data: examples from five geothermal systems. Geothermics 53:527–532
Meqbel NM, Egbert GD, Wannamaker PE, Kelbert A, Schultz A (2014) Deep electrical resistivity structure of the Northwestern US derived from 3-D inversion of usarray magnetotelluric data. Earth Planet Sci Lett 402:290–304 (Special issue on USArray science)
Meqbel N, Weckmann U, Muoz G, Ritter O (2016) Crustal metamorphic fluid flux beneath the Dead Sea Basin: constraints from 2-D and 3-D magnetotelluric modelling. Geophys J Int 207(3):1609–1629
Miensopust MP (2017) Application of 3-D electromagnetic inversion in practice: challenges, pitfalls and solution approaches. Surv Geophys 38(5):869–933
Muñoz G, Rath V (2006) Beyond smooth inversion: the use of nullspace projection for the exploration of non-uniqueness in MT. Geophys J Int 164(2):301–311
Newman GA, Gasperikova E, Hoversten GM, Wannamaker PE (2008) Three-dimensional magnetotelluric characterization of the Coso geothermal field. Geothermics 37(4):369–399
Parkinson WD, Jones FW (1979) The geomagnetic coast effect. Rev Geophys 17(8):1999–2015
Patro PK, Egbert GD (2011) Application of 3D inversion to magnetotelluric profile data from the Deccan Volcanic Province of Western India. Phys Earth Planet Inter 187(1):33–46
Robertson K, Heinson G, Thiel S (2016) Lithospheric reworking at the Proterozoic–Phanerozoic transition of Australia imaged using AusLAMP magnetotelluric data. Earth Planet Sci Lett 452:27–35
Robertson KE, Heinson GS, Taylor DH, Thiel S (2017) The lithospheric transition between the Delamerian and Lachlan orogens in Western Victoria: new insights from 3D magnetotelluric imaging. Aust J Earth Sci 64(3):385–399
Shearer PM, Flanagan MP (1999) Seismic velocity and density jumps across the 410- and 660-kilometer discontinuities. Science 285(5433):1545–1548
Siripunvaraporn W, Egbert G (2009) WSINV3DMT: vertical magnetic field transfer function inversion and parallel implementation. Phys Earth Planet Inter 173(3):317–329
Slezak K, Jozwiak W, Nowozynski K, Orynski S, Brasse H (2019) 3-D studies of mt data in the Central Polish Basin: influence of inversion parameters, model space and transfer function selection. J Appl Geophys 161:26–36
Thiel S, Heinson G (2013) Electrical conductors in Archean mantle-result of plume interaction? Geophys Res Lett 40(12):2947–2952
Thiel S, Reid A, Heinson G, Robertson K (2018) Mapping and characterizing lithosphere discontinuities: examples of southern Australia using AusLAMP MT. In: Proceedings of the 24th electromagnetic induction workshop, At Helsingor, Denmark
Tietze K, Ritter O (2013) Three-dimensional magnetotelluric inversion in practice-the electrical conductivity structure of the San Andreas fault in Central California. Geophys J Int 195(1):130–147
Xu Y, Shankland T, Poe B (2000) Laboratory-based electrical conductivity in the Earth's mantle. J Geophys Res Solid Earth 105(B12):27865–27875
Yang B, Egbert GD, Kelbert A, Meqbel NM (2015) Three-dimensional electrical resistivity of the North-Central USA from EarthScope long period magnetotelluric data. Earth Planet Sci Lett 422:87–93
Yoshino T, Nishi M, Matsuzaki T, Yamazaki D, Katsura T (2008) Electrical conductivity of majorite garnet and its implications for electrical structure in the mantle transition zone. Phys Earth Planet Inter 170(3–4):193–200
Zhdanov MS (2010) Electromagnetic geophysics: notes from the past and the road ahead. Geophysics 75(5):75A49–75A66
All inversions were performed using Raijin from the National Computational Infrastructure in Canberra, Australia provided by the Australian Government under the National Computational Merit Allocation Scheme. Data were collected by Philippa Mawby, Geoffrey Axford and Bruce Goleby. The MT instruments used were from the AuScope instrumentation pool. Naser Meqbel developed the software 3Dgrid that was used for generating modelling inputs and viewing some outputs. Most plots were produced using the open-source MT software MTpy (Krieger and Peacock 2014; Kirkby et al. 2019). K. Robertson and S. Thiel publish with the permission of the Director of the Geological Survey of South Australia.
Funding for data acquisition was from Geological Survey of South Australia's PACE Copper Initiative.
N. Meqbel
Present address: National Observatory of Brazil, Rio de Janeiro, Brazil
Department for Energy and Mining, Geological Survey of South Australia, Adelaide, Australia
K. Robertson & S. Thiel
School of Physical Sciences, University of Adelaide, Adelaide, Australia
The Helmholtz Centre Potsdam-GFZ German Research Centre for Geosciences, Potsdam, Germany
K. Robertson
S. Thiel
KR was involved in data acquisition, processing, modelling and writing the text. ST was involved in data processing and generating ideas for modelling and editing the manuscript. NM provided technical advice regarding the ModEM3DMT inversion code and developed the software 3Dgrid that was used for generating modelling inputs and viewing outputs and edited the manuscript. All authors read and approved the final manuscript.
Correspondence to K. Robertson.
Robertson, K., Thiel, S. & Meqbel, N. Quality over quantity: on workflow and model space exploration of 3D inversion of MT data. Earth Planets Space 72, 2 (2020). https://doi.org/10.1186/s40623-019-1125-4
Magnetotellurics
3D inversion
AusLAMP
ModEM3DMT
1. Geomagnetism
Studies on Electromagnetic Induction in the Earth: Recent advances and Future Directions
OSTI.GOV Journal Article: Cluster mass calibration at high redshift: HST weak lensing analysis of 13 distant galaxy clusters from the South Pole Telescope Sunyaev–Zel'dovich Survey
We present an HST/ACS weak gravitational lensing analysis of 13 massive high-redshift (z_median=0.88) galaxy clusters discovered in the South Pole Telescope (SPT) Sunyaev-Zel'dovich Survey. This study is part of a larger campaign that aims to robustly calibrate mass-observable scaling relations over a wide range in redshift to enable improved cosmological constraints from the SPT cluster sample. We introduce new strategies to ensure that systematics in the lensing analysis do not degrade constraints on cluster scaling relations significantly. First, we efficiently remove cluster members from the source sample by selecting very blue galaxies in V-I colour. Our estimate of the source redshift distribution is based on CANDELS data, where we carefully mimic the source selection criteria of the cluster fields. We apply a statistical correction for systematic photometric redshift errors as derived from Hubble Ultra Deep Field data and verified through spatial cross-correlations. We account for the impact of lensing magnification on the source redshift distribution, finding that this is particularly relevant for shallower surveys. Finally, we account for biases in the mass modelling caused by miscentring and uncertainties in the concentration-mass relation using simulations. In combination with temperature estimates from Chandra we constrain the normalisation of the mass-temperature scaling relation ln(E(z) M_500c/10^14 M_sun)=A+1.5 ln(kT/7.2keV) to A=1.81^{+0.24}_{-0.14}(stat.) +/- 0.09(sys.), consistent with self-similar redshift evolution when compared to lower redshift samples. Additionally, the lensing data constrain the average concentration of the clusters to c_200c=5.6^{+3.7}_{-1.8}.
Schrabback, T. [1]; Applegate, D. [2]; Dietrich, J. P. [3]; Hoekstra, H. [4];
Bocquet, S. [5]; Gonzalez, A. H. [6]; von der Linden, A. [7]; McDonald, M. [8]; Morrison, C. B. [9]; Raihan, S. F. [10]; Allen, S. W. [11]; Bayliss, M. [12]; Benson, B. A. [13]; Bleem, L. E. [14]; Chiu, I. [15]; Desai, S. [16]; Foley, R. J. [17]; de Haan, T. [18]; High, F. W. [19]; Hilbert, S. [3]; Mantz, A. B. [20]; Massey, R. [21];
Mohr, J. [22]; Reichardt, C. L. [23]; Saro, A. [3]; Simon, P. [10]; Stern, C. [3]; Stubbs, C. W. [24]; Zenteno, A. [25]
Argelander-Institut für Astronomie, Universität Bonn, Auf dem Hügel 71, D-53121 Bonn, Germany; Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, 382 Via Pueblo Mall, Stanford, CA 94305-4060, USA; Department of Physics, Stanford University, 382 Via Pueblo Mall, Stanford, CA 94305-4060, USA
Argelander-Institut für Astronomie, Universität Bonn, Auf dem Hügel 71, D-53121 Bonn, Germany; Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA
Faculty of Physics, Ludwig-Maximilians University, Scheinerstr 1, D-81679 München, Germany; Excellence Cluster Universe, Boltzmannstr 2, D-85748 Garching, Germany
Leiden Observatory, Leiden University, Niels Bohrweg 2, NL-2300 CA Leiden, the Netherlands
Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA; Faculty of Physics, Ludwig-Maximilians University, Scheinerstr 1, D-81679 München, Germany; Excellence Cluster Universe, Boltzmannstr 2, D-85748 Garching, Germany; Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439, USA
Department of Astronomy, University of Florida, Gainesville, FL 3261, USA
Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, 382 Via Pueblo Mall, Stanford, CA 94305-4060, USA; Department of Physics, Stanford University, 382 Via Pueblo Mall, Stanford, CA 94305-4060, USA; Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen, Denmark; Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA
MIT Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
Argelander-Institut für Astronomie, Universität Bonn, Auf dem Hügel 71, D-53121 Bonn, Germany; Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195, USA
Argelander-Institut für Astronomie, Universität Bonn, Auf dem Hügel 71, D-53121 Bonn, Germany
Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, 382 Via Pueblo Mall, Stanford, CA 94305-4060, USA; Department of Physics, Stanford University, 382 Via Pueblo Mall, Stanford, CA 94305-4060, USA; SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025, USA
Department of Physics, Harvard University, 17 Oxford Street, Cambridge, MA 02138, USA; Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA; Department of Physics & Astronomy, Colby College, 5800 Mayflower Hill, Waterville, ME 04901, USA
Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA; Fermi National Accelerator Laboratory, Batavia, IL 60510-0500, USA; Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA
Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA; Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439, USA; Department of Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA
Faculty of Physics, Ludwig-Maximilians University, Scheinerstr 1, D-81679 München, Germany; Excellence Cluster Universe, Boltzmannstr 2, D-85748 Garching, Germany; Academia Sinica Institute of Astronomy and Astrophysics (ASIAA), 11F of AS/NTU Astronomy-Mathematics Building, No. 1, Section 4, Roosevelt Rd, Taipei 10617, Taiwan
Faculty of Physics, Ludwig-Maximilians University, Scheinerstr 1, D-81679 München, Germany; Excellence Cluster Universe, Boltzmannstr 2, D-85748 Garching, Germany; Department of Physics, IIT Hyderabad, Kandi, Telangana 502285, India
Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064, USA
Department of Physics, McGill University, 3600 Rue University, Montreal, Quebec H3A 2T8, Canada; Department of Physics, University of California, Berkeley, CA 94720, USA
Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA; Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA
Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, 382 Via Pueblo Mall, Stanford, CA 94305-4060, USA; Department of Physics, Stanford University, 382 Via Pueblo Mall, Stanford, CA 94305-4060, USA
Institute for Computational Cosmology, Durham University, South Road, Durham DH1 3LE, UK
Faculty of Physics, Ludwig-Maximilians University, Scheinerstr 1, D-81679 München, Germany; Excellence Cluster Universe, Boltzmannstr 2, D-85748 Garching, Germany; Max Planck Institute for Extraterrestrial Physics, Giessenbachstrasse 1, D-85748 Garching, Germany
School of Physics, University of Melbourne, Parkville, VIC 3010, Australia
Department of Physics, Harvard University, 17 Oxford Street, Cambridge, MA 02138, USA; Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA
Cerro Tololo Inter-American Observatory, Casilla 603, La Serena, Chile
Argonne National Lab. (ANL), Argonne, IL (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States); Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25); National Aeronautic and Space Administration (NASA); National Science Foundation (NSF); German Research Foundation (DFG); Australian Research Council (ARC)
Alternate Identifier(s):
OSTI ID: 1346377
FERMILAB-PUB-16-646-AE; arXiv:1611.03866
Journal ID: ISSN 0035-8711; 1497686
Grant/Contract Number:
AC02-07CH11359; AC02-76SF00515; AC02-06CH11357; NAS 5-26555; AST-0444059-001; NSF PHY-1125897; 279396
Journal Article: Accepted Manuscript
Additional Journal Information:
Journal Volume: 474; Journal Issue: 2; Journal ID: ISSN 0035-8711
Royal Astronomical Society
79 ASTRONOMY AND ASTROPHYSICS; gravitational lensing: weak; cosmology: observations; galaxies: clusters: general
Schrabback, T., Applegate, D., Dietrich, J. P., Hoekstra, H., Bocquet, S., Gonzalez, A. H., von der Linden, A., McDonald, M., Morrison, C. B., Raihan, S. F., Allen, S. W., Bayliss, M., Benson, B. A., Bleem, L. E., Chiu, I., Desai, S., Foley, R. J., de Haan, T., High, F. W., Hilbert, S., Mantz, A. B., Massey, R., Mohr, J., Reichardt, C. L., Saro, A., Simon, P., Stern, C., Stubbs, C. W., and Zenteno, A. Cluster mass calibration at high redshift: HST weak lensing analysis of 13 distant galaxy clusters from the South Pole Telescope Sunyaev–Zel'dovich Survey. United States: N. p., 2017. Web. doi:10.1093/mnras/stx2666.
Cited by: 10 works
SPT-3G: A Multichroic Receiver for the South Pole Telescope
Journal Article Anderson, A. J. ; Ade, P. A. R. ; Ahmed, Z. ; ... - Journal of Low Temperature Physics
A new receiver for the South Pole Telescope, SPT-3G, was deployed in early 2017 to map the cosmic microwave background at 95, 150, and 220 GHz with ~ 16,000 detectors, 10 times more than its predecessor SPTpol. The increase in detector count is made possible by lenslet-coupled trichroic polarization-sensitive pixels fabricated at Argonne National Laboratory, new 68 × frequency-domain multiplexing readout electronics, and a higher-throughput optical design. The enhanced sensitivity of SPT-3G will enable a wide range of results including constraints on primordial B-mode polarization, measurements of gravitational lensing of the CMB, and a galaxy cluster survey. Here we present an overview of the instrument and its science objectives, highlighting its measured performance and plans for the upcoming 2018 observing season.
DOI: 10.1007/s10909-018-2007-z
High–frequency cluster radio galaxies: Luminosity functions and implications for SZE–selected cluster samples
Journal Article Gupta, Nikhel ; Saro, A. ; Mohr, J. J. ; ... - Monthly Notices of the Royal Astronomical Society
We study the overdensity of point sources in the direction of X-ray-selected galaxy clusters from the meta-catalogue of X-ray-detected clusters of galaxies (MCXC; < z > = 0.14) at South Pole Telescope (SPT) and Sydney University Molonglo Sky Survey (SUMSS) frequencies. Flux densities at 95, 150 and 220 GHz are extracted from the 2500 deg^2 SPT-SZ survey maps at the locations of SUMSS sources, producing a multifrequency catalogue of radio galaxies. In the direction of massive galaxy clusters, the radio galaxy flux densities at 95 and 150 GHz are biased low by the cluster Sunyaev–Zel'dovich Effect (SZE) signal, which is negative at these frequencies. We employ a cluster SZE model to remove the expected flux bias and then study these corrected source catalogues. We find that the high-frequency radio galaxies are centrally concentrated within the clusters and that their luminosity functions (LFs) exhibit amplitudes that are characteristically an order of magnitude lower than the cluster LF at 843 MHz. We use the 150 GHz LF to estimate the impact of cluster radio galaxies on an SPT-SZ like survey. The radio galaxy flux typically produces a small bias on the SZE signal and has negligible impact on the observed scatter in the SZE mass–observable relation. If we assume there is no redshift evolution in the radio galaxy LF then 1.8 ± 0.7 per cent of the clusters with detection significance ξ ≥ 4.5 would be lost from the sample. Allowing for redshift evolution of the form (1 + z)^2.5 increases the incompleteness to 5.6 ± 1.0 per cent. Improved constraints on the evolution of the cluster radio galaxy LF require a larger cluster sample extending to higher redshift.
DOI: 10.1093/mnras/stx095
Fabrication of Detector Arrays for the SPT-3G Receiver
Journal Article Posada, C. M. ; Ade, P. A. R. ; Ahmed, Z. ; ... - Journal of Low Temperature Physics
The South Pole Telescope third-generation (SPT-3G) receiver was installed during the austral summer of 2016–2017. It is designed to measure the cosmic microwave background across three frequency bands centered at 95, 150, and 220 GHz. The SPT-3G receiver has ten focal plane modules, each with 269 pixels. Each pixel features a broadband sinuous antenna coupled to a niobium microstrip transmission line. In-line filters define the desired band-passes before the signal is coupled to six bolometers with Ti/Au/Ti/Au transition edge sensors (three bands × two polarizations). In total, the SPT-3G receiver is composed of 16,000 detectors, which are read out using a 68 × frequency-domain multiplexing scheme. In this paper, we present the process employed in fabricating the detector arrays.
Mass Calibration of Optically Selected DES Clusters Using a Measurement of CMB-cluster Lensing with SPTpol Data
Journal Article Raghunathan, S. ; Patil, S. ; Baxter, E. ; ... - The Astrophysical Journal (Online)
We use cosmic microwave background (CMB) temperature maps from the 500 deg$^{2}$ SPTpol survey to measure the stacked lensing convergence of galaxy clusters from the Dark Energy Survey (DES) Year-3 redMaPPer (RM) cluster catalog. The lensing signal is extracted through a modified quadratic estimator designed to be unbiased by the thermal Sunyaev-Zel'dovich (tSZ) effect. The modified estimator uses a tSZ-free map, constructed from the SPTpol 95 and 150 GHz datasets, to estimate the background CMB gradient. For lensing reconstruction, we employ two versions of the RM catalog: a flux-limited sample containing 4003 clusters and a volume-limited sample with 1741 clusters. We detect lensing at a significance of 8.7$$\sigma$$ (6.7$$\sigma$$) with the flux(volume)-limited sample. By modeling the reconstructed convergence using the Navarro-Frenk-White profile, we find the average lensing masses to be $$M_{200m}$$ = ($$1.62^{+0.32}_{-0.25}$$ [stat.] $$\pm$$ 0.04 [sys.]) and ($$1.28^{+0.14}_{-0.18}$$ [stat.] $$\pm$$ 0.03 [sys.]) $$\times\ 10^{14}\ M_{\odot}$$ for the volume- and flux-limited samples respectively. The systematic error budget is much smaller than the statistical uncertainty and is dominated by the uncertainties in the RM cluster centroids. We use the volume-limited sample to calibrate the normalization of the mass-richness scaling relation, and find a result consistent with the galaxy weak-lensing measurements from DES.
DOI: 10.3847/1538-4357/ab01ca
Page: Minor and Major energy losses
The loss of energy due to friction in a pipe is known as a major loss.
"The loss of energy due to the changes of velocity of the fluid in the magnitude is called a minor loss of energy."
Minor losses of energy include the following cases:-
1) Loss due to sudden enlargement:-
$h_e=\dfrac{(V_1-V_2)^2}{2g}$
$h_e$ = loss of head due to sudden enlargement
2) Loss due to sudden contraction:-
$h_c=0.5\dfrac{V_2^2}{2g}$ (when the coefficient of contraction $C_c$ is not given; in general, $h_c=\dfrac {V_2^2}{2g}\left[\dfrac 1{C_c}-1\right]^2$, as used in Q3 below)
Numericals:
Q1) Find the loss of head when a pipe of diameter 200 mm is suddenly enlarged to a diameter of 400 mm. The rate of flow of water through the pipe is 250 lit/s.
Solution: Given:
Diameter of smaller pipe
$D_1=200mm=\dfrac {200}{1000}=0.20m$
Area, $A_1=\dfrac \pi 4\times D_1^2$
$A_1=\dfrac \pi4\times (0.20)^2=0.03141m^2$
Diameter of larger pipe, $D_2=400mm=\dfrac {400}{1000}=0.40m$
Area, $A_2=\dfrac \pi4\times (0.40)^2=0.12564m^2$
Discharge, $Q=250 lit/sec=\dfrac {250}{1000}=0.25m^3/s$
Velocity, $V_1=\dfrac Q{A_1}=\dfrac {0.25}{0.03141}=7.96m/s$
Velocity, $V_2=\dfrac Q{A_2}=\dfrac {0.25}{0.12564}=1.996m/s$
Now to find the loss due to enlargement
$h_e=\dfrac{(V_1-V_2)^2}{2\times g}$
$h_e=\dfrac{(7.96-1.99)^2}{2\times 9.81}$
$h_e=1.816m$ of water
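A quick Python check of the arithmetic in Q1 (added for convenience; variable names are ours):

import math

g = 9.81
Q = 0.25                          # discharge, m^3/s
A1 = math.pi / 4 * 0.20**2        # 0.03141 m^2
A2 = math.pi / 4 * 0.40**2        # 0.12566 m^2
V1, V2 = Q / A1, Q / A2           # 7.96 m/s and 1.99 m/s
h_e = (V1 - V2)**2 / (2 * g)      # head loss due to sudden enlargement
print(round(h_e, 3))              # about 1.816 m of water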
Q2) The rate of flow of water through a horizontal pipe is $0.25m^3/s$. The diameter of the pipe which is 200 mm is suddenly enlarged to 400 mm. The pressure intensity in the smaller pipe is $11.772N/cm^2$.
Find:-
1) loss of head due to sudden enlargement
2) pressure intensity in large pipe
3) power lost due to enlargement
Discharge, $Q=0.25m^3/s$
Pressure in smaller pipe,
$P_1=11.772N/cm^2=11.772\times 10^4N/m^2$
(i) Loss of head due to sudden enlargement: with $V_1=\dfrac Q{A_1}=7.96m/s$ and $V_2=\dfrac Q{A_2}=1.99m/s$ (as in Q1), $h_e=\dfrac{(V_1-V_2)^2}{2g}=\dfrac{(7.96-1.99)^2}{2\times 9.81}=1.816m$ of water
(ii) Let pressure intensity in large pipe = $P_2$
Apply Bernoulli's equation,
$\dfrac {P_1}{\rho g}+\dfrac {V_1^2}{2\times g}+z_1=\dfrac {P_2}{\rho\times g}+\dfrac {V_2^2}{2\times g}+z_2+h_e$
Since $z_1=z_2$
$\dfrac {P_1}{\rho g}+\dfrac {V_1^2}{2\times g}=\dfrac {P_2}{\rho\times g}+\dfrac {V_2^2}{2\times g}+h_e....................(1)$
Rewrite as
$\dfrac {P_2}{\rho g}=\dfrac {P_1}{\rho g}+\dfrac {V_1^2}{2g}-\dfrac {V_2^2}{2g}-h_e........................(2)$
Putting values in equation (2), we get
$\dfrac {P_2}{\rho g}=\dfrac {11.772\times 10^4}{1000\times 9.81}+\dfrac {7.96^2}{2\times 9.81}-\dfrac {1.99^2}{2\times 9.81}-1.816$
$\dfrac {P_2}{\rho g}=12.0+3.229-0.2018-1.8160$
$\dfrac {P_2}{\rho g}=15.229-2.0178=13.21m$ of water
$\therefore P_2=13.21\times \rho g$
$P_2=13.21\times 1000\times 9.81$
$P_2=129590N/m^2=12.96N/cm^2$
(iii) Power lost due to sudden enlargement
$P=\dfrac {\rho.g.Q.h_e}{1000}$
$P=\dfrac {1000\times 9.81\times 0.25\times 1.816}{1000}$
$P=4.453kW$
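A quick Python check of Q2 (added for convenience; variable names are ours):

import math

g, rho = 9.81, 1000.0
Q, D1, D2 = 0.25, 0.20, 0.40
P1 = 11.772e4                                            # N/m^2
V1 = Q / (math.pi / 4 * D1**2)
V2 = Q / (math.pi / 4 * D2**2)
h_e = (V1 - V2)**2 / (2 * g)
P2 = P1 + rho * g * ((V1**2 - V2**2) / (2 * g) - h_e)    # Bernoulli with z1 = z2
power_lost = rho * g * Q * h_e / 1000.0                  # kW
print(round(P2 / 1e4, 2), round(power_lost, 2))          # ~12.96 N/cm^2 and ~4.45 kW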
Q3) A horizontal pipe of diameter 500 mm is suddenly contracted to a diameter of 250 mm. The pressure intensities in large and smaller pipe are given as $13.734N/cm^2$ and $11.772N/cm^2$. Find loss of head due to contraction if $C_c=0.62$. Also determine the rate of flow of water.
Solution: Given:-
Diameter of large pipe, $D_1=500mm=0.50m$
$A_1=\dfrac \pi4\times (0.50)^2=0.1963m^2$
Diameter of small pipe, $D_2=250mm=0.25m$
$A_2=\dfrac \pi4\times (0.25)^2=0.04908m^2$
Pressure in large pipe, $P_1=13.734N/cm^2=13.734\times 10^4N/m^2$
Pressure in smaller pipe, $P_2=11.772N/cm^2=11.772\times 10^4N/m^2$
To find the head loss due to contraction,
$h_c=\dfrac {V_2^2}{2g}\left[\dfrac 1{C_c}-1.0\right]^2$
$=\dfrac {V_2^2}{2g}\left[\dfrac 1{0.62}-1.0\right]^2$
$=0.375\dfrac {V_2^2}{2g}$
From continuity equation,
$A_1V_1=A_2V_2$
$\therefore, V_1=\dfrac {A_2V_2}{A_1}=\dfrac {\dfrac \pi 4\times (D_2)^2\times V_2}{\dfrac \pi 4(D_1)^2}$
$V_1= \left[\dfrac {D_2}{D_1}\right]^2\times V_2$
$V_1=\left(\dfrac {0.25}{0.50}\right)^2\times V_2$
$V_1=\dfrac {V_2}4$
Apply Bernoulli's equation ($z_1=z_2$),
$\dfrac {P_2}{\rho g}=\dfrac {P_1}{\rho g}+\dfrac {V_1^2}{2g}-\dfrac {V_2^2}{2g}-h_c$
But $h_c=0.375\dfrac {V_2^2}{2g}$ and $V_1=\dfrac {V_2}4$
Putting values in equation, we get
$\dfrac {13.734\times10^4}{9.81\times 1000}+\dfrac {\left(\dfrac{V_2} 4\right)^2}{2\times 9.81}=\dfrac {11.772\times 10^4}{1000\times 9.81}+\dfrac {V_2^2}{2g}+0.375\dfrac {V_2^2}{2g}$
$14.0+\dfrac {V_2^2}{16\times 2\times 9.81}=12.0+1.375\dfrac {V_2^2}{2\times 9.81}$
$\therefore, 14-12=1.375\dfrac {V_2^2}{2\times 9.81}-\dfrac 1{16}\dfrac {V_2^2}{2\times 9.81}$
$2=1.3125\dfrac {V_2^2}{2\times 9.81}$
$\therefore V_2=\sqrt{\dfrac{2\times2\times9.81}{1.3125}}=5.467m/s$
$\therefore h_c=0.375\times\dfrac {V_2^2}{2g}$
$h_c=\dfrac {0.375\times (5.467)^2}{2\times 9.81}=0.571m$ of water
Rate of flow, $Q=A_2\times V_2=0.04908\times 5.467=0.268m^3/s$
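A quick Python check of Q3, including the rate of flow asked for in the question (variable names are ours):

import math

g, rho = 9.81, 1000.0
D2, Cc = 0.25, 0.62
P1, P2 = 13.734e4, 11.772e4                       # N/m^2
k = (1.0 / Cc - 1.0)**2                           # ~0.375
# Bernoulli with V1 = V2/4:  (P1 - P2)/(rho*g) = (1 + k - 1/16) * V2^2 / (2g)
V2 = math.sqrt((P1 - P2) / (rho * g) * 2 * g / (1 + k - 1.0 / 16))
h_c = k * V2**2 / (2 * g)
Q = math.pi / 4 * D2**2 * V2
print(round(V2, 2), round(h_c, 3), round(Q, 3))   # ~5.47 m/s, ~0.57 m, ~0.27 m^3/s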
Loss of head at the entrance of pipe:-
This loss is similar to the loss due to sudden contraction. It occurs when a liquid enters a pipe which is connected to a large tank.
This loss is also dependent on the type of entrance.
$h_i=0.5\dfrac {V^2}{2g}$
Loss of head at the exit of pipe:-
This loss is mainly due to the velocity of liquid at the outlet of the pipe.
$h_o=\dfrac {V^2}{2g}$
Loss of head due to an obstruction in a pipe:-
This loss occurs when there is a reduction in the cross-section area of the pipe due to some obstruction. The area suddenly increases or enlarges after the obstruction.
If $A$ is the area of the pipe, $a$ the maximum area of the obstruction and $V_c$ the velocity at the vena-contracta formed just downstream of it, continuity gives $V_c=\dfrac{A\times V}{C_c(A-a)}$, so that the loss of head
$=\dfrac {(V_c-V)^2}{2g}=\dfrac {\left(\dfrac {A\times V}{C_c(A-a)}-V\right)^2}{2g}$
$\therefore$ Loss of head due to obstruction
$=\dfrac {V^2}{2g}\left(\dfrac A{C_c(A-a)}-1\right)^2$
Loss of head due to bend in a pipe:-
This loss occurs when there is a bend in a pipe, which causes a change in velocity, leading to separation of the flow from the boundaries and the formation of eddies.
Thus the energy is lost.
$h_b=\dfrac {KV^2}{2g}$
Loss of head in various pipe fittings:-
$=\dfrac {KV^2}{2g}$
Numericals
Q4) A horizontal pipeline 40m long is connected to a water tank at one end and discharges freely into the atmosphere at the other end. For the first 25 m of its length from the tank, the pipe is 150 mm diameter and its diameter is suddenly enlarged to 300 mm. The height of the water level in the tank is 8 m above the centre of the pipe. Considering all losses of head which occurs, determine the rate of flow. Take f=0.01 for both sections of the pipes.
Total length of pipe, L=40m
Length of first pipe, $L_1=25m$
Diameter of first pipe, $d_1=150mm=0.15m$
Length of second pipe, $L_2=40-25=15m$
Diameter of second pipe, $d_2=300mm=0.30m$
Height of water, $H=8m$
Coefficient of friction, f=0.01
Applying Bernoulli's Theorem between the free surface of water in the tank and the outlet of the pipe (where the pressure is atmospheric, so the gauge pressure term is zero),
$0+0+8=\dfrac {P_2}{\rho g}+\dfrac {V_2^2}{2g}+0+\text{all losses}$
$8=0+\dfrac {V_2^2}{2g}+h_i+hf_1+h_e+hf_2................(A)$
loss at entrance, $h_i=0.5\dfrac {V_1^2}{2g}$
Head lost due to friction in pipe 1, $hf_1=\dfrac {4\times f\times L_1\times V_1^2}{d_1\times 2g}$
Loss due to sudden enlargement, $h_e=\dfrac {(V_1-V_2)^2}{2g}$
But continuity equation,
$\therefore, V_1=\dfrac {A_2V_2}{A_1}$
$=\dfrac {\dfrac \pi 4 (d_2)^2\times V_2}{\dfrac \pi 4(d_1)^2}$
$=\left(\dfrac {d_2}{d_1}\right)^2\times V_2$
$=\left(\dfrac {0.3}{0.15}\right)^2\times V_2$
$V_1=4V_2......................(1)$
Substituting the values of $V_1$ in different losses, we get
$h_i=0.5\dfrac {V_1^2}{2g}=0.5\dfrac {(4V_2)^2}{2g}=\dfrac {8V_2^2}{2\times9.81}$
$hf_1=\dfrac {4\times f\times L_1\times V_1^2}{d_1\times 2g}=\dfrac {4\times 0.01\times 25\times (4V_2)^2}{0.15\times 2\times 9.81}=106.67\dfrac {V^2_2}{2\times 9.81}$
$h_e=\dfrac {(V_1-V_2)^2}{2g}=\dfrac {(4V_2-V_2)^2}{2g}=\dfrac {9V_2^2}{2\times 9.81}$
$hf_2=\dfrac {4\times f\times L_2\times V_2^2}{d_2\times 2g}=\dfrac {4\times 0.01\times15\times V_2^2}{0.3\times 2\times 9.81}=2.0\dfrac {V^2_2}{2\times 9.81}$
Put the values in equation (A), we get
$8=0+\dfrac {V_2^2}{2g}+\dfrac {8V_2^2}{2\times9.81}+106.67\dfrac {V^2_2}{2\times 9.81}+\dfrac {9V_2^2}{2\times 9.81}+2.0\dfrac {V^2_2}{2\times 9.81}$
$8=\dfrac {V_2^2}{2\times9.81}[1+8+106.67+9+2]$
$\therefore 8.0=126.67\dfrac {V_2^2}{2\times9.81}$
$\therefore V_2=\sqrt{\dfrac {8.0\times 2\times 9.81}{126.67}}$
$\therefore V_2=\sqrt{1.2391}$
$V_2=1.113m/s$
Therefore the rate of flow,
$Q=A_2\times V_2$
$Q=\dfrac \pi4\times (0.3)^2\times 1.113$
$Q=0.07867m^3/s$
or, $Q=78.67$ litres/sec
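For Q4, all of the losses above are multiples of $V_2^2/2g$, so the whole calculation can be checked with a few lines of Python (a sketch using only the data given in the problem; names are illustrative):

```python
import math

H, f, g = 8.0, 0.01, 9.81     # head of water, friction coefficient, gravity
L1, d1 = 25.0, 0.15           # first pipe
L2, d2 = 15.0, 0.30           # second pipe

r = (d2 / d1)**2              # V1 = r * V2 = 4 V2 from continuity
# Coefficients multiplying V2^2/(2g): exit velocity head, entrance loss,
# friction in pipe 1, sudden enlargement, friction in pipe 2
k = (1.0
     + 0.5 * r**2
     + 4 * f * L1 / d1 * r**2
     + (r - 1)**2
     + 4 * f * L2 / d2)
V2 = math.sqrt(H * 2 * g / k)
Q = math.pi / 4 * d2**2 * V2

print(f"k = {k:.2f}, V2 = {V2:.3f} m/s, Q = {Q*1000:.1f} litres/s")
# Expected: k ~ 126.67, V2 ~ 1.11 m/s, Q ~ 78.7 litres/s
```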
Convex set
''in a Euclidean or in another vector space''
A set that contains, together with any two of its points, all points of the segment connecting them. The intersection of any family of convex sets is itself a convex set.
The smallest dimension of a plane (i.e. affine subspace) containing a given convex set is called the dimension of that set. The closure of a convex set (i.e. the result of adding to the convex set all its boundary points) yields a convex set of the same dimension. The principal subject of the theory of convex sets is the study of convex bodies, which are finite (i.e. bounded) convex sets of dimension $n$. If boundedness is not stipulated, one speaks of infinite convex bodies, and if the dimension $n$ is not stipulated, one speaks of degenerate convex bodies or of convex bodies of lower dimension.
A convex body is homeomorphic to a closed ball. An infinite convex body not containing straight lines is homeomorphic to a half-space, while those containing a straight line are cylinders with a convex (possibly, infinite) cross-section.
Through each point of the boundary of a convex set there passes at least one hyperplane such that the convex set lies in one of the two closed half-spaces defined by this hyperplane. Such hyperplanes and such half-spaces are called supporting for this set at the given point of the boundary. A closed convex set is the intersection of its supporting half-spaces. The intersection of a finite number of closed half-spaces is a convex polyhedron. The faces of a convex body are its intersections with the supporting hyperplanes. A face is a convex body of lower dimension. The convex body is considered to be its own $n$-dimensional face. As distinct from a polyhedron, a face of a face need not be a face of the initial convex body.
With each boundary point $x$ of a convex body is connected: an open tangent cone, filled by the rays issuing from $x$ and passing through interior points of the convex body; the closed tangent cone, which is its closure; and the surface tangent cone which is its boundary. The two first-mentioned cones are convex.
The points of the boundary of a convex body are classified by the minimal dimension of the faces to which they belong, and also by the dimension of the set of supporting hyperplanes at the point. The points of zero-dimensional faces are called exposed points. Extremal points of a convex body are points which are not interior to any segment belonging to that convex body. The problem of the possible abundance of points and of the set of directions of faces of various types is being studied. For instance, the points with a non-unique supporting hyperplane have zero $(n-1)$-dimensional area on the boundary; the directions of the segments lying on the boundary have measure zero among all directions in space.
Each point not belonging to a convex body is strictly separated from it by a hyperplane such that this point and the convex body are in distinct open half-spaces. Two non-intersecting convex sets are separated by a hyperplane, leaving them in different closed half-spaces. This separation property is retained in the case of convex sets in infinite-dimensional vector spaces.
A convex body $F$ has associated with it its support function $H\colon E^n \rightarrow E^1$, defined by the equation $H(u)=\sup\{ux\colon x\in F\}$, where $ux$ is the scalar product. The function $H(u)$ is positively homogeneous of the first degree: $H(\alpha u)=\alpha H(u)$ for $\alpha\geq 0$, and is convex:
\begin{equation} H(u+v)\leq H(u) + H(v). \end{equation}
All functions with these two properties are support functions for some unique convex body. Specifying the support function is one of the principal methods of specifying a convex body.
If the coordinate origin is located inside a convex body, one introduces a distance function $D\colon E^n \rightarrow E^1$, which, for $u\neq 0$, is defined by the equation
\begin{equation} D(u)=\inf\left\{\alpha\colon\frac{u}{\alpha}\in F\right\}, \end{equation}
under the assumption that $D(0)=0$. This is also a positively homogeneous convex function of the first degree, defining $F$. Two convex bodies are called polar (or dual) with respect to each other if the support function of one is the distance function of the other. The existence of dual convex bodies is connected with the self-duality of $E^n$.
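As a concrete illustration (standard examples not contained in the original article), the support and distance functions of the Euclidean unit ball and of the cube can be written down explicitly:

\begin{equation}
\begin{aligned}
F=\{x\in E^n\colon |x|\leq 1\}&: & H(u)&=|u|, & D(x)&=|x|;\\
F=[-1,1]^n&: & H(u)&=\sum_i |u_i|, & D(x)&=\max_i |x_i|.
\end{aligned}
\end{equation}

Hence the unit ball is polar to itself, while the cube $[-1,1]^n$ and the cross-polytope $\{x\colon \sum_i|x_i|\leq 1\}$ form a polar pair, since the support function of each coincides with the distance function of the other.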
If a convex body $F$ is symmetric with respect to the coordinate origin, the function $\rho(u,v)=D(u-v)$ is a metric. This is the metric of the Minkowski space (of a finite-dimensional Banach space), $F$ playing the role of the unit ball. In a similar manner, the unit ball in an infinite-dimensional Banach space is a convex set. The properties of the space are connected with the geometry of this ball, in particular with the presence of points of different types on its boundary [[#References|[3]]].
A convex body may be given as the [[Convex hull|convex hull]] of the points on its boundary or of some of these points.
There are a number of criteria permitting one to conclude whether or not a set (or any one set from some family) is convex. For instance, if a $C^2$-smooth closed surface in $E^3$ has non-negative Gaussian curvature at all of its points, this surface is the boundary of a convex body; if the intersection of a compact set $F$ in $E^3$ with any plane which leaves $F$ in one half-space is simply connected, $F$ is convex [[#References|[4]]].
There are many ways of introducing a metric on a set of convex bodies including degenerate convex bodies but not the empty convex body. The Hausdorff metric is the one most commonly used (cf. [[Convex sets, metric space of]]). In this metric each convex body can be approximated by convex polyhedra, and also by convex bodies defined by $P(x_1,\ldots,x_n)\leq0$, where $P$ is a polynomial in the coordinates, and which have positive principal curvatures at all points on the boundary.
A convex body always has a finite volume (in the sense of Jordan), which is identical with its $n$-dimensional Lebesgue measure. The boundary of a convex body has finite $(n-1)$-dimensional area, and the various ways of introducing an area in such a case are equivalent. The volume and the area of the boundary depend continuously (in the Hausdorff metric) on the convex body.
[[Mixed-volume theory]] is connected with the study of the dependence of the volume of a linear combination $\sum\lambda_i F_i$ of convex bodies on the coefficients $\lambda_i$. Mixed volumes include not only the volume and the area of the boundary, but also many other functionals connected with convex bodies [[#References|[5]]], such as $k$-dimensional volumes of projections in different directions on $k$-dimensional planes and their average values. The principal results of this theory are various inequalities between mixed volumes, including the classical isoperimetric inequality (cf. [[Isoperimetric inequality, classical]]).
Convex bodies are related to several simple figures. Thus, each convex body has a unique largest (with respect to volume) inscribed and a smallest circumscribed ellipsoid [[#References|[6]]]. Criteria have been found to characterize the balls, ellipsoids and centrally symmetric bodies among other convex bodies [[#References|[1]]], [[#References|[2]]]. Theorems on families of convex sets form a special subject of the theory of convex sets [[#References|[6]]].
The importance of the theory of convex sets lies in the illustrative nature of its methods and results and in the fact that they are general and independent of analytic requirements of smoothness (non-smooth convex bodies often represent solutions of extremal problems).
====References====
[1] T. Bonnesen, W. Fenchel, "Theorie der konvexen Körper", Springer (1934)
[2] F. Valentine, "Convex sets", McGraw-Hill (1964)
[3] M.M. Day, "Normed linear spaces", Springer (1958)
[4] Yu.D. Burago, V.A. Zalgaller, "Sufficient conditions of convexity", J. Soviet Math., 16:3 (1978) pp. 395–434; Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov., 45 (1974) pp. 3–53
[5] H. Hadwiger, "Vorlesungen über Inhalt, Oberfläche und Isoperimetrie", Springer (1957)
[6] L. Danzer, B. Grünbaum, V.L. Klee, "Helly's theorem and its relatives", Proc. Symp. Pure Math., 7, Amer. Math. Soc. (1963) pp. 101–180
====Comments====
The polar set of a convex set $X$ in $E^n$ is defined directly by $X^*=\left\{ u\in E^n\colon ux \leq 1 \text{ for all } x\in X \right\}$. The support function of $X$ is then also defined by $H(u)=\inf\left\{\rho > 0\colon u\in \rho X^* \right\}$, and similarly the distance function is given by $D(x)=\sup\left\{ux\colon u\in X^*\right\}$. Given the distance function $D(x)$, the corresponding closed convex set is defined by $X=\left\{x\in E^n\colon D(x)\leq 1\right\}$.
[a1] H.G. Eggleston, "Convexity", Cambridge Univ. Press (1969)
Convex set. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Convex_set&oldid=38654
This article was adapted from an original article by Yu.D. Burago, V.A. Zalgaller (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
Mathematics of waves
Graphs of trigonometric functions in this section use units of radians. If you're not familiar with or comfortable with radians, it's a simple conversion: π radians = 180˚. Look here for a refresher on angle measurement.
Waves are modeled by trigonometric functions
Waves are periodic disturbances in some medium, like water waves in water, vibrations of a string or wire (e.g. guitar), sound waves in air, or electromagnetic waves in the electromagnetic field. Those kinds of waves are covered in other sections. This section is about the mathematics we use to model wave behavior.
Waves are well-modeled using the fundamental trigonometric functions, sine and cosine.
In this section, we'll review the basic anatomy and transformations of a sine wave. It's the same for the cosine function; recall that the functions are the same except for a shift of π/2 along the time axis.
Anatomy of a wave
The graph below shows two cycles of a sine function. Time is along the horizontal axis, and f(t) = sin(t) is plotted along the vertical axis. We need to become familiar with several terms that describe waves.
Peaks, troughs & amplitude
Waves have peaks or crests, the high points, and troughs, the low points. The amplitude of a wave is the measure of the height of a peak (or the depth of a trough) from the center line, or the line of zero displacement. The points at which the amplitude of a wave is zero are called nodes. If you've studied p-orbitals and d-orbitals in chemistry, you've learned about nodes of 3-dimensional waves.
Wavelength (λ)
Wavelength is the length, in units of length (e.g. meters), of one complete cycle or period of a wave. It's convenient to measure wavelength from node-to-node, peak-to-peak or trough-to-trough, but it can be anywhere, as long as the chunk of time represents one complete cycle of the wave. Wavelength is usually given the Greek symbol lambda, λ.
In later sections on this page, we'll develop other concepts such as frequency and phase of waves, then talk about the mathematical model of a wave.
The Greek alphabet
alpha Α α
beta Β β
gamma Γ γ
delta Δ δ
epsilon Ε ε
zeta Ζ ζ
eta Η η
theta Θ θ
iota Ι ι
kappa Κ κ
lambda Λ λ
mu Μ μ
nu Ν ν
xi Ξ ξ
omicron Ο ο
pi Π π
rho Ρ ρ
sigma Σ σ
tau Τ τ
upsilon Υ υ
phi Φ φ
chi Χ χ
psi Ψ ψ
omega Ω ω
Glossary of wave terms
Wave A wave is a periodic disturbance in some medium like air, water, solid materials or the electromagnetic field. Waves have mathematically-predictable shapes. The wave is not the medium itself, but rather something that is happening to the medium. For example, a water wave isn't the water itself, but a motion of the water.
Medium The medium is the "substance" through which the wave moves. Water waves move through water; sound waves can move through air, liquids or solids. Electromagnetic waves (light) can move through vacuum (absence of matter), which we call the electromagnetic field (once called the "ether").
Wavelength The wavelength of a wave, denoted by the Greek letter lambda ( λ ), is the measure of the length of one period or cycle of the repeated form. It is measured from one place on the wave to an identical place in the next cycle. We often cite wavelength as measured from "peak-to-peak" of the wave crests (tops).
Period The period of a wave is the time it takes for a complete wave to pass a point, or the time between wave crests.
Frequency The frequency of a wave, denoted by the Greek letter nu ( ν ), is the reciprocal of the period. Its units are "per second," or 1/s, which is called "Hertz" and has the symbol Hz. 1 Hz is also 1 cycle per second or cps.
Speed The speed of a wave is also called its speed of propagation. It's how fast the wave moves (or propagates) through some medium. The speed of sound in air is about 340 m/s. The speed of light in vacuum is 2.99792458 × 10^8 m/s (exactly), and it's a little slower in any other medium.
Amplitude The amplitude of a wave is its height, measured from the average (line of zero-displacement) to the top of a peak. It corresponds to the vertical scaling parameter, $A,$ from your knowledge of function transformations: $g(x) = \color{#E90F89}{A}f(x - h) + k.$
Node Nodes are points where a wave crosses the line of zero-displacement. The nodes of a water wave occur where the height of the wave is exactly equal to the level of the undisturbed water. Looking only at a node, an observer would not notice the wave.
Peak/trough Peaks are high points of waves and troughs (trofs) are the low points. We often measure wavelength from one peak to the next.
Phase The phase of a wave is a relative concept. We can only speak of the phase of a wave relative to another. Phase is the right-left shifting of a wave along its axis of travel. It corresponds to the $h$ parameter in our list of function transformations: $g(x) = Af(x - \color{#E90F89}{h}) + k.$ Phase is often unimportant, but in some applications, knowing relative phase is crucial.
Wavelength and frequency
Imagine for a moment that your eyes are at the surface of the water of a smooth pool when a pebble is dropped in nearby. Waves will pass (say left to right) in time, and they'll pass at a speed that is limited by the medium — water in this case. From our experience, water waves move at about 1-2 meters per second.
Each medium has its own characteristic speed for waves passing through it. Sound waves travel at about 340 m·s^-1 in dry air, but they actually travel much faster through most solid materials, like steel or aluminum.
Light waves don't actually need a medium in which to travel. They are disturbances in the electromagnetic field, but more on that later. Light waves have the highest speed, 2.99792458 × 10^8 m·s^-1 (exactly).
The number of waves that pass by your fixed point of view in some unit of time (usually a second) is called the frequency of the wave. Think of it as the measure of how frequently you'll see a wave crest. The units of frequency are "reciprocal seconds" (s^-1), and the reciprocal second is often called Hertz (Hz). A 240 Hz wave has a frequency of 240 per second (240 s^-1 = 240 Hz), and 240 waves pass a fixed point each second. (That would be some water wave.)
The symbol for frequency is the Greek lower-case letter nu, ν. It's basically a lower-case "v" that looks like the wind is blowing it over from the right.
The unit of frequency is the reciprocal second,
Unit of ν is 1/s
and the unit of wavelength is a unit of length. We'll use the meter. So when we multiply frequency by wavelength, the units are the unit of speed:
$$\lambda \cdot \nu = m \left( \frac{1}{s} \right) = \frac{m}{s} = ms^{-1}$$
Thus, the product of wavelength and frequency is the speed of the wave.
$$\lambda \cdot \nu = speed$$
Here are some speeds of sound in various materials. Notice that the speed of sound in solid materials can be much larger than the speed of sound in air.
The unit of wavelength is the meter, and the unit of frequency is Hertz (Hz). $1 \; Hz = \frac{1}{s} = 1 \, s^{-1}.$
The wavelength ( λ ) of a wave multiplied by its frequency ( ν ) is its speed. In a given medium, an increase in wavelength means a decrease in frequency, and a decrease in wavelength means an increase in frequency.
The wavelength of the visible red beam of a helium-neon (HeNe) laser is 632.8 nm. Calculate the frequency of this light.
The wavelength-frequency equation for this kind of wave (electromagnetic radiation or light) is
$$\lambda \cdot \nu = 2.99 \times 10^8 \frac{m}{s}$$
where the speed of light is about 3 × 10^8 m·s^-1. Rearranging to solve for the frequency and plugging in the given wavelength (632.8 nm is red light), we get
$$\nu = \frac{2.99 \times 10^8 \, \frac{m}{s}}{632.8 \times 10^{-9} \, m}$$
The frequency is
$$\nu = 4.72 \times 10^{14} \; \text{Hz}$$
We should always try to simplify such numbers with our metric prefixes: 10^3 = kilo (k); 10^6 = Mega (M); 10^9 = Giga (G); and 10^12 = Tera (T). So this frequency is best presented in units of terahertz (THz):
$$\nu = 472 \: \text{THz}$$
The lower and upper ranges of human hearing are 20 Hz and 20,000 Hz (20 kHz). Calculate the wavelengths of these two sounds in dry air, in which the speed of sound is 340 m·s^-1.
The wavelength-frequency equation is
$$\lambda \cdot \nu = 340 \, \frac{m}{s}$$
where the speed of sound (about 760 mi./h) is given. Rearranging to solve for the wavelength gives us:
$$\lambda = \frac{340 \, \frac{m}{s}}{\nu}$$
Now we can find the wavelength of the lower frequency (low-pitch sounds).
$$\lambda_1 = \frac{340 \, \frac{m}{s}}{\frac{20}{s}} = 17 \; m$$
and the higher frequency (high pitched sounds):
$$\lambda_2 = \frac{340 \frac{m}{s}}{\frac{20,000}{s}} = 0.017 \; m = 1.7 \; cm$$
So the waves most of us can hear have wavelengths between 1.7 cm and 17 m.
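Both worked examples are instances of the single relation λ·ν = v; here is a minimal Python check using exactly the numbers quoted above (names are illustrative):

```python
c = 2.99e8          # m/s, speed of light as used in the example
v_sound = 340.0     # m/s, speed of sound in dry air

# HeNe laser: wavelength -> frequency
lam_hene = 632.8e-9                 # m
nu_hene = c / lam_hene              # ~4.72e14 Hz = 472 THz

# Limits of human hearing: frequency -> wavelength
lam_low = v_sound / 20.0            # ~17 m
lam_high = v_sound / 20_000.0       # ~0.017 m = 1.7 cm

print(f"{nu_hene:.3e} Hz, {lam_low:.1f} m, {lam_high*100:.1f} cm")
```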
The sine function can be transformed in a number of different ways. The simplest is multiplication of the function by a constant, A, like this:
$$f(t) = A\cdot sin(t)$$
Here A corresponds to the amplitude of the sine wave. So when using a sine function to model a wave, we can adjust the A parameter to get the amplitude right.
The graph shows a sine function (it's sin(2t) — more on that below) with an amplitude of 1 and the same function multiplied by 2 (magenta).
Move the slider on the plot of $f(t) = A \, sin(t)$ below to see how changing the parameter A affects the graph.
How do we increase or decrease the frequency of a sine wave to match the wave we're trying to model?
It turns out that changing the frequency is just stretching or compressing the function horizontally (along the time axis). It looks like this:
$$f(t) = sin(\omega t)$$
where ω is the Greek lower-case letter "omega," and is often called the frequency factor when the sine function is associated with waves.
The frequency factor is the number of full cycles of the sine wave that fit between 0 and 2π.
The graph shows two full cycles of sin(t) between 0 and 4π, and four full cycles of sin(2t) (magenta) between 0 and 4π.
Move the slider below to change the frequency of this sine wave. Notice that as ω gets larger the number of cycles per unit time increases.
The phase of a wave is unimportant in some applications, but crucial in others. You can think of a phase difference as the difference in relative "starting points" of two or more waves. Here's a picture. The magenta wave is shifted to the right by π/2 compared to the gray wave.
A phase shift of a sine wave is accomplished mathematically by employing the usual horizontal translation transformation:
$$f(t) = sin(t + \phi)$$
The Greek letter phi, φ, is often used to denote phase.
Lasers produce what is called coherent light. This means that all waves have the same phase, or are "in phase." In laser light all peaks and troughs line up and the range of emitted wavelengths is very tight. Compare that to a white light bulb, which produces light across the visible spectrum, and generally of all phases — non-coherent light.
Phase differences lead to all sorts of interesting and useful effects because they can produce constructive and destructive interference. More on that later.
Move the slider on the plot below to see how adjusting φ shifts the wave along the time (horizontal) axis.
The last transformation of the sine function is the vertical translation or vertical shift. In wave mathematics it's sometimes called the vertical offset or DC offset (DC for "direct current").
The transformation is achieved by simply adding or subtracting a constant from the function,
$$f(t) = sin(t) + k$$
where k is the offset. In the graph, a normal sine function is elevated by 2 units along the vertical axis by adding 2 to f(t).
Move the slider on the plot of f(t) = sin(t) below to see how the vertical offset parameter (k) works.
A parameter is an adjustable constant in the definition of a function that is different from the independent variable(s). Parameters are not independent variables. For example, in the quadratic function
f(x) = Ax^2 + Bx + C
A, B and C are parameters which change the shape of the graph of the function. x is the independent variable. A, B and C are fixed for any particular version of f(x), but x can range from -∞ to +∞.
Putting it all together – wave transformations
We can put all four wave transformations together into one equation. It's the wave equation form you should know:
$$f(t) = A \, sin(\omega t + \phi) + k$$
where A is the amplitude, ω the frequency factor, φ the phase and k the vertical offset.
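A minimal sketch of that combined form in Python, with illustrative parameter values that are not taken from the page:

```python
import math

def wave(t, A=2.0, omega=2.0, phi=math.pi / 2, k=1.0):
    """General sinusoidal wave: amplitude A, frequency factor omega,
    phase phi and vertical (DC) offset k."""
    return A * math.sin(omega * t + phi) + k

# Sample a few points: values oscillate between k - A = -1 and k + A = 3
for i in range(5):
    t = i * math.pi / 4
    print(f"t = {t:.2f}  f(t) = {wave(t):+.3f}")
```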
The periodic nature of waves allows them to add in interesting ways. Waves can interfere with each other, producing a wave of larger or smaller amplitude than its "parents." When waves add to increase the amplitude, we call it constructive interference. When they add to produce a lower amplitude than the absolute sum, it's destructive interference. Perfect destructive interference between two waves can completely cancel both.
In this animation, two waves are added, point-by-point, along the t axis.
Move the slider on the plot of
$$f(t) = sin(t) + sin(t - \phi)$$
below to see how changing the phase of one wave gradually diminishes the sum. When the waves are out of phase, that is, their phases differ by π = 180˚, they cancel each other.
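The gradual cancellation follows from the standard sum-to-product identity (not stated on the page):
$$sin(t) + sin(t - \phi) = 2 \, cos\left(\frac{\phi}{2}\right) \, sin\left(t - \frac{\phi}{2}\right)$$
so the sum is itself a sine wave with amplitude 2·cos(φ/2): the amplitude is 2 when φ = 0 (constructive interference) and 0 when φ = π (complete destructive interference).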
Video example: three examples of how to use the wavelength-frequency-speed relation for electromagnetic waves, λ·ν = c (4:28).
Sample records for bifunctional asymmetric catalysis
Multicatalyst system in asymmetric catalysis
Zhou, Jian
This book introduces multi-catalyst systems by describing their mechanism and advantages in asymmetric catalysis. It helps organic chemists perform more efficient catalysis with step-by-step methods; overviews new concepts and progress for greener and more economic catalytic reactions; covers topics of interest in asymmetric catalysis including bifunctional catalysis, cooperative catalysis, multimetallic catalysis, and novel tandem reactions; and has applications for pharmaceuticals, agrochemicals, materials, and flavour and fragrance.
Heterobimetallic transition metal/rare earth metal bifunctional catalysis: a Cu/Sm/Schiff base complex for syn-selective catalytic asymmetric nitro-Mannich reaction.
Handa, Shinya; Gnanadesikan, Vijay; Matsunaga, Shigeki; Shibasaki, Masakatsu
The full details of a catalytic asymmetric syn-selective nitro-Mannich reaction promoted by heterobimetallic Cu/Sm/dinucleating Schiff base complexes are described, demonstrating the effectiveness of the heterobimetallic transition metal/rare earth metal bifunctional catalysis. The first-generation system prepared from Cu(OAc)(2)/Sm(O-iPr)(3)/Schiff base 1a = 1:1:1 with an achiral phenol additive was partially successful for achieving the syn-selective catalytic asymmetric nitro-Mannich reaction. The substrate scope and limitations of the first-generation system remained problematic. After mechanistic studies on the catalyst prepared from Sm(O-iPr)(3), we reoptimized the catalyst preparation method, and a catalyst derived from Sm(5)O(O-iPr)(13) showed broader substrate generality as well as higher reactivity and stereoselectivity compared to Sm(O-iPr)(3). The optimal system with Sm(5)O(O-iPr)(13) was applicable to various aromatic, heteroaromatic, and isomerizable aliphatic N-Boc imines, giving products in 66-99% ee and syn/anti = >20:1-13:1. Catalytic asymmetric synthesis of nemonapride is also demonstrated using the catalyst derived from Sm(5)O(O-iPr)(13).
Asymmetric cation-binding catalysis
Oliveira, Maria Teresa; Lee, Jiwoong
The employment of metal salts is quite limited in asymmetric catalysis, although it would provide an additional arsenal of safe and inexpensive reagents to create molecular functions with high optical purity. Cation chelation by polyethers increases the salts' solubility in conventional organic...... solvents, thus increasing their applicability in synthesis. The expansion of this concept to chiral polyethers led to the emergence of asymmetric cation-binding catalysis, where chiral counter anions are generated from metal salts, particularly using BINOL-based polyethers. Alkali metal salts, namely KF...... highly enantioselective silylation reactions in polyether-generated chiral environments, and leading to a record-high turnover in asymmetric organocatalysis. This can lead to further applications by the asymmetric use of other inorganic salts in various organic transformations....
Impact of Secondary Interactions in Asymmetric Catalysis
Frölander, Anders
This thesis deals with secondary interactions in asymmetric catalysis and their impact on the outcome of catalytic reactions. The first part revolves around the metal-catalyzed asymmetric allylic alkylation reaction and how interactions within the catalyst affect the stereochemistry. An OH–Pd hydrogen bond in Pd(0)–π-olefin complexes of hydroxy-containing oxazoline ligands was identified by density functional theory computations and helped to rationalize the contrasting results obtained emplo...
Asymmetric Aminalization via Cation-Binding Catalysis
Park, Sang Yeon; Liu, Yidong; Oh, Joong Suk
Asymmetric cation-binding catalysis, in principle, can generate "chiral" anionic nucleophiles, where the counter cations are coordinated within chiral environments. Nitrogen-nucleophiles are intrinsically basic; therefore, their use as nucleophiles is often challenging and limits the scope of the
Bifunctional organocatalysts for the asymmetric synthesis of axially chiral benzamides
Ryota Miyaji
Bifunctional organocatalysts bearing amino and urea functional groups in a chiral molecular skeleton were applied to the enantioselective synthesis of axially chiral benzamides via aromatic electrophilic bromination. The results demonstrate the versatility of bifunctional organocatalysts for the enantioselective construction of axially chiral compounds. Moderate to good enantioselectivities were afforded with a range of benzamide substrates. Mechanistic investigations were also carried out.
DNA-based asymmetric organometallic catalysis in water
Oelerich, Jens; Roelfes, Gerard
Here, the first examples of DNA-based organometallic catalysis in water that give rise to high enantioselectivities are described. Copper complexes of strongly intercalating ligands were found to enable the asymmetric intramolecular cyclopropanation of alpha-diazo-beta-keto sulfones in water. Up to
Asymmetric organocatalytic Michael addition of Meldrum's acid to nitroalkenes: probing the mechanism of bifunctional thiourea organocatalysts
Kataja, Antti O.; Koskinen, Ari M.P.
The asymmetric Michael addition of Meldrum's acid to nitroalkenes was studied using a novel type of Cinchona alkaloid-based bifunctional thiourea organocatalyst. The functionality of the thiourea catalysts was also probed by preparing and testing thiourea-N-methylated analogues of the well-known bis-(3,5-trifluoromethyl)phenyl-substituted catalyst. Peer reviewed
Asymmetric Aldol Additions: A Guided-Inquiry Laboratory Activity on Catalysis
King, Jorge H. Torres; Wang, Hong; Yezierski, Ellen J.
Despite the importance of asymmetric catalysis in both the pharmaceutical and commodity chemicals industries, asymmetric catalysis is under-represented in undergraduate chemistry laboratory curricula. A novel guided-inquiry experiment based on the asymmetric aldol addition was developed. Students conduct lab work to compare the effectiveness of…
Tandem rhodium catalysis: exploiting sulfoxides for asymmetric transition-metal catalysis.
Kou, K G M; Dong, V M
Sulfoxides are uncommon substrates for transition-metal catalysis due to their propensity to inhibit catalyst turnover. In a collaborative effort with Ken Houk, we developed the first dynamic kinetic resolution (DKR) of allylic sulfoxides using asymmetric rhodium-catalyzed hydrogenation. A detailed mechanistic analysis of this transformation using both experimental and theoretical methods revealed rhodium to be a tandem catalyst that promoted both hydrogenation of the alkene and racemization of the allylic sulfoxide. Using a combination of deuterium labelling and DFT studies, a novel mode of allylic sulfoxide racemization via a Rh(III)-π-allyl intermediate was identified.
Novel phosphonium salts and bifunctional organocatalysts in asymmetric synthesis
Moore, Graham
This thesis details the syntheses of catalysts and their applications in asymmetric reactions. Initially, the project focused on phase transfer catalysts; quaternary phosphonium salts derived from diethyl tartrate or from commercially available phosphorus compounds and their use primarily in the alkylation of N,N-diphenyl methylene glycine tert-butyl ester. Although some of the salts showed the ability to catalyse the alkylation reaction, all products obtained were racemic. The project then f...
Chiral 2-Aminobenzimidazole as Bifunctional Catalyst in the Asymmetric Electrophilic Amination of Unprotected 3-Substituted Oxindoles
Llorenç Benavent
The use of readily available chiral trans-cyclohexanediamine-benzimidazole derivatives as bifunctional organocatalysts in the asymmetric electrophilic amination of unprotected 3-substituted oxindoles is presented. Different organocatalysts were evaluated; the most successful one contained a dimethylamino moiety (5). With this catalyst under optimized conditions, different oxindoles containing a wide variety of substituents at the 3-position were aminated in good yields and with good to excellent enantioselectivities using di-tert-butylazodicarboxylate as the aminating agent. The procedure proved to be also efficient for the amination of 3-substituted benzofuranones, although with moderate results. A bifunctional role of the catalyst, acting as Brønsted base and hydrogen bond donor, is proposed according to the experimental results observed.
Asymmetric catalysis in the cyclopropanation of olefins; Catalise assimetrica na ciclopropanacao de olefinas
Leao, Raquel A.C.; Ferreira, Vitor F.; Pinheiro, Sergio [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil). Dept. de Quimica Organica]. E-mail: [email protected]
The main methodologies in the asymmetric cyclopropanation of alkenes with emphasis on asymmetric catalysis are covered. Examples are the Simmons-Smith reaction, the use of diazoalkanes and reactions carried out by decomposition of alpha-diazoesters in the presence of transition metals. (author)
Chiral ferrocenes in asymmetric catalysis: synthesis and applications
National Research Council Canada - National Science Library
Dai, Li-Xin; Hou, Xue-Long
.... It provides a thorough overview of the synthesis and characterization of different types of chiral ferrocene ligands, their application to various catalytic asymmetric reactions, and versatile chiral...
Novel Cinchona derived organocatalysts: new asymmetric transformations and catalysis
Breman, A.C.
Cinchona alkaloids have a long history as being a powerful medicine against malaria. Since a relative short period (about 50 years) chemist have also used these alkaloids as chiral catalyst for a wide variety of asymmetric transformations. Especially since the beginning of this century, when strong
Recent Progress in Asymmetric Catalysis and Chromatographic Separation by Chiral Metal–Organic Frameworks
Suchandra Bhattacharjee
Metal–organic frameworks (MOFs), as a new class of porous solid materials, have emerged and their study has established itself very quickly into a productive research field. This short review recaps the recent advancement of chiral MOFs. Here, we present simple, well-ordered instances to classify the mode of synthesis of chiral MOFs, and later demonstrate the potential applications of chiral MOFs in heterogeneous asymmetric catalysis and enantioselective separation. The asymmetric catalysis sections are subdivided based on the types of reactions that have been successfully carried out recently by chiral MOFs. In the part on enantioselective separation, we present the potentiality of chiral MOFs as a stationary phase for high-performance liquid chromatography (HPLC) and high-resolution gas chromatography (GC) by considering fruitful examples from current research work. We anticipate that this review will provide interest to researchers to design new homochiral MOFs with even greater complexity and effort to execute their potential functions in several fields, such as asymmetric catalysis, enantiomer separation, and chiral recognition.
"Click" chemistry mildly stabilizes bifunctional gold nanoparticles for sensing and catalysis.
Li, Na; Zhao, Pengxiang; Liu, Na; Echeverria, María; Moya, Sergio; Salmon, Lionel; Ruiz, Jaime; Astruc, Didier
A large family of bifunctional 1,2,3-triazole derivatives that contain both a polyethylene glycol (PEG) chain and another functional fragment (e.g., a polymer, dendron, alcohol, carboxylic acid, allyl, fluorescence dye, redox-robust metal complex, or a β-cyclodextrin unit) has been synthesized by facile "click" chemistry and mildly coordinated to nanogold particles, thus providing stable water-soluble gold nanoparticles (AuNPs) in the size range 3.0-11.2 nm with various properties and applications. In particular, the sensing properties of these AuNPs are illustrated through the detection of an analogue of a warfare agent (i.e., sulfur mustard) by means of a fluorescence "turn-on" assay, and the catalytic activity of the smallest triazole-AuNPs (core of 3.0 nm) is excellent for the reduction of 4-nitrophenol in water. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Asymmetric catalysis in Brazil: development and potential for advancement of Brazilian chemical industry
Braga, Antonio Luiz; Luedtke, Diogo Seibert; Schneider, Paulo Henrique; Andrade, Leandro Helgueira; Paixao, Marcio Weber
The preparation of enantiomerically pure or enriched substances is of fundamental importance to pharmaceutical, food, agrochemical, and cosmetics industries and involves a growing market of hundreds of billions of dollars. However, most chemical processes used for their production are not environmentally friendly because in most cases, stoichiometric amounts of chiral inductors are used and substantial waste is produced. In this context, asymmetric catalysis has emerged as an efficient tool for the synthesis of enantiomerically enriched compounds using chiral catalysts. More specifically, considering the current scenario in the Brazilian chemical industry, especially that of pharmaceuticals, the immediate prospect for the use of synthetic routes developed in Brazil in an enantioselective fashion or even the discovery of new drugs is practically null. Currently, the industrial production of drugs in Brazil is primarily focused on the production of generic drugs and is basically supported by imports of intermediates from China and India. In order to change this panorama and move forward toward the gradual incorporation of genuinely Brazilian synthetic routes, strong incentive policies, especially those related to continuous funding, will be needed. These incentives could be a breakthrough once we establish several research groups working in the area of organic synthesis and on the development and application of chiral organocatalysts and ligands in asymmetric catalysis, thus contributing to boost the development of the Brazilian chemical industry. Considering these circumstances, Brazil can benefit from this opportunity because we have a wide biodiversity and a large pool of natural resources that can be used as starting materials for the production of new chiral catalysts and are creating competence in asymmetric catalysis and related areas. This may decisively contribute to the growth of chemistry in our country. (author)
Asymmetric Catalysis with Organic Azides and Diazo Compounds Initiated by Photoinduced Electron Transfer.
Huang, Xiaoqiang; Webster, Richard D; Harms, Klaus; Meggers, Eric
Electron-acceptor-substituted aryl azides and α-diazo carboxylic esters are used as substrates for visible-light-activated asymmetric α-amination and α-alkylation, respectively, of 2-acyl imidazoles catalyzed by a chiral-at-metal rhodium-based Lewis acid in combination with a photoredox sensitizer. This novel proton- and redox-neutral method provides yields of up to 99% and excellent enantioselectivities of up to >99% ee with broad functional group compatibility. Mechanistic investigations suggest that an intermediate rhodium enolate complex acts as a reductive quencher to initiate a radical process with the aryl azides and α-diazo carboxylic esters serving as precursors for nitrogen and carbon-centered radicals, respectively. This is the first report on using aryl azides and α-diazo carboxylic esters as substrates for asymmetric catalysis under photoredox conditions. These reagents have the advantage that molecular nitrogen is the leaving group and sole byproduct in this reaction.
Multiple Hydrogen-Bond Activation in Asymmetric Brønsted Acid Catalysis
KAUST Repository
Liao, Hsuan-Hung; Hsiao, Chien-Chi; Atodiresei, Iuliana; Rueping, Magnus
An efficient protocol for the asymmetric synthesis of chiral tetrahydroquinolines bearing multiple stereogenic centers by means of asymmetric Brønsted acid catalysis was developed. A chiral 1,1′-spirobiindane-7,7′-diol (SPINOL)-based N-triflylphosphoramide (NTPA) proved to be an effective Brønsted acid catalyst for the in situ generation of aza-ortho-quinone methides (aza-o-QMs) and their subsequent cycloaddition reaction with unactivated alkenes to provide the products with excellent diastereo- and enantioselectivities. In addition, DFT calculations provided insight into the activation mode and the nature of the interactions between the N-triflylphosphoramide catalyst and the generated aza-o-QMs.
Catalysis engineering of bifunctional solids for the one-step synthesis of liquid fuels from syngas : A review
Sartipi, S.; Makkee, M.; Kapteijn, F.; Gascon, J.
The combination of acidic zeolites and Fischer–Tropsch synthesis (FTS) catalysts for one-step production of liquid fuels from syngas is critically reviewed. Bifunctional systems are classified by the proximity between FTS and acid functionalities on three levels: reactor, catalyst particle, and active phase. A thorough analysis of the published literature on this topic reveals that efficiency in the production of liquid fuels correlates well with the proximity of FTS and acid sites. Moreover,...
Bifunctional bamboo-like CoSe2 arrays for high-performance asymmetric supercapacitor and electrocatalytic oxygen evolution
Chen, Tian; Li, Songzhan; Gui, Pengbin; Wen, Jian; Fu, Xuemei; Fang, Guojia
Bifunctional bamboo-like CoSe2 arrays are synthesized by thermal annealing of Co(CO3)0.5OH grown on carbon cloth in a Se atmosphere. The CoSe2 arrays obtained have excellent electrical conductivity and large electrochemically active surface areas, and can directly serve as a binder-free electrode for supercapacitors and the oxygen evolution reaction (OER). When tested as a supercapacitor electrode, the CoSe2 delivers a higher specific capacitance (544.6 F g‑1 at a current density of 1 mA cm‑2) compared with CoO (308.2 F g‑1) or Co3O4 (201.4 F g‑1). In addition, the CoSe2 electrode possesses excellent cycling stability. An asymmetric supercapacitor (ASC) is also assembled based on bamboo-like CoSe2 as the positive electrode and activated carbon as the negative electrode in a 3.0 M KOH aqueous electrolyte. Owing to the unique structure and good electrochemical performance of bamboo-like CoSe2, the as-assembled ASC can achieve a maximum operating voltage window of 1.7 V, a high energy density of 20.2 Wh kg‑1 at a power density of 144.1 W kg‑1, and outstanding cyclic stability. As a catalyst for the OER, the CoSe2 exhibits a low potential of 1.55 V (versus RHE) at a current density of 10 mA cm‑2, a small Tafel slope of 62.5 mV dec‑1 and likewise outstanding stability.
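As a quick consistency check on the figures quoted above, the reported energy and power densities imply a discharge time of roughly eight minutes. A minimal sketch of that arithmetic, assuming both values refer to the same constant-power discharge of the assembled device:

```python
# Back-of-the-envelope check of the reported ASC figures.
# Assumes the quoted energy density (Wh/kg) and power density (W/kg)
# refer to the same constant-power discharge; illustrative only.

energy_density_wh_per_kg = 20.2   # reported maximum energy density
power_density_w_per_kg = 144.1    # reported power density at that energy

discharge_time_h = energy_density_wh_per_kg / power_density_w_per_kg
print(f"Implied discharge time: {discharge_time_h * 60:.1f} min")  # ~8.4 min
```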
Asymmetric catalysis in the cyclopropanation of olefins [Catálise assimétrica na ciclopropanação de olefinas]
Raquel A. C. Leão
The main methodologies in the asymmetric cyclopropanation of alkenes, with emphasis on asymmetric catalysis, are covered. Examples are the Simmons-Smith reaction, the use of diazoalkanes, and reactions carried out by decomposition of alpha-diazoesters in the presence of transition metals.
Origin of Stereodivergence in Cooperative Asymmetric Catalysis with Simultaneous Involvement of Two Chiral Catalysts.
Bhaskararao, Bangaru; Sunoj, Raghavan B
Accomplishing high diastereo- and enantioselectivities simultaneously is a persistent challenge in asymmetric catalysis. The use of two chiral catalysts under one-pot conditions might offer new avenues to this end. Chirality transfer from a catalyst to the product becomes increasingly complex due to potential chiral match-mismatch issues. The origin of the high enantio- and diastereoselectivities in the reaction between a racemic aldehyde and an allyl alcohol, catalyzed by axially chiral iridium phosphoramidites PR/S-Ir and a cinchona amine, is established through transition-state modeling. The multipoint contact analysis of the stereocontrolling transition state revealed how stereodivergence could be achieved by inverting the configuration of the chiral catalysts that are involved in the activation of the reacting partners. While the enantiocontrol is identified as being decided in the generation of the PR/S-Ir-π-allyl intermediate from the allyl alcohol, the diastereocontrol arises from the differential stabilization of the C-C bond formation transition states. The analysis of the weak interactions in the transition states responsible for chiral induction revealed that the geometric disposition of the quinoline ring at the C8 chiral carbon of the cinchona-enamine plays an anchoring role. The quinoline ring participates in a π-stacking interaction with the phenyl ring of the Ir-π-allyl moiety in the case of the PR with (8R,9R)-cinchona catalyst combination, whereas a series of C-H···π interactions is identified as vital to the relative stabilization of the stereocontrolling transition states when PR is used with (8S,9S)-cinchona.
The Development of Multidimensional Analysis Tools for Asymmetric Catalysis and Beyond.
Sigman, Matthew S; Harper, Kaid C; Bess, Elizabeth N; Milo, Anat
In most modern organic chemistry reports, including many of ours, reaction optimization schemes are typically presented to showcase how reaction conditions have been tailored to augment the reaction's yield and selectivity. In asymmetric catalysis, this often involves evaluation of catalyst, solvent, reagent, and, sometimes, substrate features. Such an article will then detail the process's scope, which mainly focuses on its successes and briefly outlines the "limitations". These limitations or poorer-performing substrates are occasionally the result of obvious, significant changes to structure (e.g., a Lewis basic group binds to a catalyst), but frequently, a satisfying explanation for inferior performance is not clear. This is one of several reasons such results are not often reported. These apparent outliers are also commonplace in the evaluation of catalyst structure, although most of this information is placed in the Supporting Information. These practices are unfortunate because results that appear at first glance to be peculiar or poor are considerably more interesting than ones that follow obvious or intuitive trends. In other words, all of the data from an optimization campaign contain relevant information about the reaction under study, and the "outliers" may be the most revealing. Realizing the power of outliers as an entry point to entirely new reaction development is not unusual. Nevertheless, the concept that no data should be wasted when considering the underlying phenomena controlling the observations of a given reaction is at the heart of the strategy we describe in this Account. The idea that one can concurrently optimize a reaction to expose the structural features that control its outcomes would represent a transformative addition to the arsenal of catalyst development and, ultimately, de novo design. Herein we outline the development of a recently initiated program in our lab that unites optimization with mechanistic interrogation by
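The multidimensional analysis described above is typically realized as a multivariate linear regression of a free-energy selectivity term against steric and electronic descriptors. The sketch below is only an illustration of that idea, not the authors' actual workflow; the descriptor names and all numerical values are hypothetical placeholders, and ee is converted to a free-energy difference via the standard relation ΔΔG‡ = RT ln[(1 + ee)/(1 − ee)].

```python
# Minimal sketch of a multivariate correlation of enantioselectivity with
# catalyst/substrate descriptors. All descriptor names and numbers below
# are hypothetical placeholders, not data from the Account.
import numpy as np

R = 1.987e-3   # gas constant, kcal mol^-1 K^-1
T = 298.15     # temperature, K

# Hypothetical training set: each row = one catalyst/substrate combination,
# columns = two placeholder descriptors (e.g. a steric and a vibrational term).
descriptors = np.array([
    [1.52, 0.10],
    [1.70, 0.35],
    [1.95, 0.55],
    [2.10, 0.80],
])
ee = np.array([0.55, 0.72, 0.86, 0.93])  # hypothetical enantiomeric excesses

# Convert ee to a free-energy quantity: ddG = RT ln[(1 + ee)/(1 - ee)]
ddG = R * T * np.log((1 + ee) / (1 - ee))

# Ordinary least-squares fit: ddG ~ intercept + coefficients . descriptors
X = np.column_stack([np.ones(len(ddG)), descriptors])
coef, *_ = np.linalg.lstsq(X, ddG, rcond=None)
print("intercept and descriptor coefficients:", coef)
```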
Recent Advances in Dynamic Kinetic Resolution by Chiral Bifunctional (Thio)urea- and Squaramide-Based Organocatalysts.
Li, Pan; Hu, Xinquan; Dong, Xiu-Qin; Zhang, Xumu
The organocatalysis-based dynamic kinetic resolution (DKR) process has proved to be a powerful strategy for the construction of chiral compounds. In this feature review, we summarized recent progress on the DKR process, which was promoted by chiral bifunctional (thio)urea and squaramide catalysis via hydrogen-bonding interactions between substrates and catalysts. A wide range of asymmetric reactions involving DKR, such as asymmetric alcoholysis of azlactones, asymmetric Michael-Michael cascade reaction, and enantioselective selenocyclization, are reviewed and demonstrate the efficiency of this strategy. The (thio)urea and squaramide catalysts with dual activation would be efficient for more unmet challenges in dynamic kinetic resolution.
New chiral ligands in asymmetric catalysis. Application in stabilization of metal nanoparticles
Axet Martí, M. Rosa
Thesis by M. Rosa Axet. This thesis deals with the development and application of diphosphite ligands derived from carbohydrates in rhodium-catalysed asymmetric hydroformylation and hydrogenation reactions. The use of various carbohydrate-derived ligands as stabilisers of metal nanoparticles is also studied. The synthesis and the characterisation of the series of diphosphite ligands are described in Chapter 2. The results of the asymmetric hydroformylation of styrene and related vinyl arenes ar...
Chiral phosphites as ligands in asymmetric metal complex catalysis and synthesis of coordination compounds
Gavrilov, Konstantin N; Bondarev, Oleg G; Polosukhin, Aleksei I
The data published during the last five years on the application of chiral derivatives of phosphorous acid in coordination chemistry and enantioselective catalysis are summarised and discussed. The effect of the nature of these ligands on the structure of metal complexes and on the efficiency of catalytic organic syntheses is shown. Hydroformylation, hydrogenation, allylic substitution and conjugate addition catalysed by transition metal complexes with optically active phosphites and hydrophosphoranes are considered. The prospects for the development of this field of research are demonstrated.
Asymmetric Organocatalysis and Photoredox Catalysis for the α-Functionalization of Tetrahydroisoquinolines
Hou, Hong; Zhu, Shaoqun; Atodiresei, Iuliana; Rueping, Magnus
The asymmetric α-alkylation of tetrahydroisoquinolines with cyclic ketones has been accomplished in the presence of a combined catalytic system consisting of a visible-light photoredox catalyst and a chiral primary amine organocatalyst. The desired products were obtained in good yields, high enantioselectivity, and good to excellent diastereoselectivity. (PC: photoredox cycle, EN: enamine cycle).
Synthesis of Main-Chain Chiral Quaternary Ammonium Polymers for Asymmetric Catalysis Using Quaternization Polymerization
Md. Masud Parvez
Main-chain chiral quaternary ammonium polymers were successfully synthesized by the quaternization polymerization of a cinchonidine dimer with dihalides. The polymerization occurred smoothly under optimized conditions to give a novel type of main-chain chiral quaternary ammonium polymer. The catalytic activity of the polymeric chiral organocatalysts was investigated in the asymmetric benzylation of N-(diphenylmethylidene)glycine tert-butyl ester.
Asymmetric Catalytic Aza-Diels-Alder/Ring-Closing Cascade Reaction Forming Bicyclic Azaheterocycles by Trienamine Catalysis.
Li, Yang; Barløse, Casper; Jørgensen, Julie; Carlsen, Bjørn Dreiø; Jørgensen, Karl Anker
An asymmetric catalytic aza-Diels-Alder/ring-closing cascade reaction between acylhydrazones and in situ formed trienamines is presented. The reaction proceeds through a formal aza-Diels-Alder cycloaddition, followed by a ring-closing reaction forming the hemiaminal ring leading to chiral bicyclic azaheterocycles in moderate to good yield (up to 71 %), good enantio- (up to 92 % ee) and diastereoselectivity (up to >20:1 d.r.). Furthermore, transformations are presented to show the potential application of the formed product. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NeoPHOX – a structurally tunable ligand system for asymmetric catalysis
Jaroslav Padevět
A synthesis of new NeoPHOX ligands derived from serine or threonine has been developed. The central intermediate is a NeoPHOX derivative bearing a methoxycarbonyl group at the stereogenic center next to the oxazoline N atom. The addition of methylmagnesium chloride leads to a tertiary alcohol, which can be acylated or silylated to produce NeoPHOX ligands with different steric demand. The new NeoPHOX ligands were tested in the iridium-catalyzed asymmetric hydrogenation and the palladium-catalyzed allylic substitution. In both reactions high enantioselectivities were achieved, comparable to those obtained with the best NeoPHOX ligand reported to date, which is derived from expensive tert-leucine.
Exploiting nanospace for asymmetric catalysis: confinement of immobilized, single-site chiral catalysts enhances enantioselectivity.
Thomas, John Meurig; Raja, Robert
In the mid-1990s, it became possible to prepare high-area silicas having pore diameters controllably adjustable in the range ca. 20-200 Å. Moreover, the inner walls of these nanoporous solids could be functionalized to yield single-site, chiral, catalytically active organometallic centers, the precise structures of which could be determined using in situ X-ray absorption and FTIR and multinuclear magic angle spinning (MAS) NMR spectroscopy. This approach opened up the prospect of performing heterogeneous enantioselective conversions in a novel manner, under the spatial restrictions imposed by the nanocavities within which the reactions occur. In particular, it suggested an alternative method for preparing pharmaceutically and agrochemically useful asymmetric products by capitalizing on the notion, initially tentatively perceived, that spatial confinement of prochiral reactants (and transition states formed at the chiral active center) would provide an altogether new method of boosting the enantioselectivity of the anchored chiral catalyst. Initially, we anchored chiral single-site heterogeneous catalysts to nanopores covalently via a ligand attached to Pd(II) or Rh(I) centers. Later, we employed a more convenient and cheaper electrostatic method, relying in part on strong hydrogen bonding. This Account provides many examples of these processes, encompassing hydrogenations, oxidations, and aminations. Of particular note is the facile synthesis from methyl benzoylformate of methyl mandelate, which is a precursor in the synthesis of pemoline, a stimulant of the central nervous system; our procedure offers several viable methods for reducing ketocarboxylic acids. In addition to relying on earlier (synchrotron-based) in situ techniques for characterizing catalysts, we have constructed experimental procedures involving robotically controlled catalytic reactors that allow the kinetics of conversion and enantioselectivity to be monitored continually, and we have access to
Asymmetric synthesis including enzymatic catalysis of 11C and 13N labelled amino acids
Langstrom, B.; Antonio, G.; Bjurling, P.; Fasth, K.J.; Westerberg, G.; Watanabe, Y.
Use of asymmetric synthesis in the production of ¹¹C- and ¹³N-labelled amino acids has been shown to be a useful approach for preparing amino acids routinely for PET studies. Such PET studies are focused on problems related to amino acid transport, protein synthesis rate, or the turnover of neurotransmitters derived from amino acids. The paper discusses synthetic strategies and techniques involving the production of precursors, labelled intermediates and the main reaction sequences. In syntheses using short-lived β⁺-emitters like ¹¹C and ¹³N, with half-lives of 20.3 and 10.0 min respectively, many special aspects have to be considered. The use of enzymes as catalysts has been shown to be a useful tool in such preparations. The design of the labelled amino acids, especially the stereochemistry and the position of the label, is addressed, since these points are important both for the application of the labelled amino acids and for the synthesis itself. In this presentation of the synthesis of labelled amino acids these various aspects are discussed
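Because the half-lives quoted above are only 20.3 min (¹¹C) and 10.0 min (¹³N), a substantial fraction of the activity is lost during the synthesis itself. A minimal sketch of that decay arithmetic; the 40 min synthesis time is an illustrative assumption, not a value from the abstract:

```python
# Fraction of radioactivity remaining after a given synthesis time,
# using the half-lives quoted in the abstract (11C: 20.3 min, 13N: 10.0 min).
# The 40 min synthesis time is an illustrative assumption, not from the source.
import math

half_lives_min = {"C-11": 20.3, "N-13": 10.0}
synthesis_time_min = 40.0

for isotope, t_half in half_lives_min.items():
    remaining = math.exp(-math.log(2) * synthesis_time_min / t_half)
    print(f"{isotope}: {remaining * 100:.1f}% of initial activity remains")
# -> roughly 25% for C-11 and 6% for N-13 after 40 min
```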
Bifunctional nanocrystalline MgO for chiral epoxy ketones via Claisen-Schmidt condensation-asymmetric epoxidation reactions.
Choudary, Boyapati M; Kantam, Mannepalli L; Ranganath, Kalluri V S; Mahendar, Koosam; Sreedhar, Bojja
Design and development of a truly nanobifunctional heterogeneous catalyst for the Claisen-Schmidt condensation (CSC) of benzaldehydes with acetophenones to yield chalcones quantitatively followed by asymmetric epoxidation (AE) to afford chiral epoxy ketones with moderate to good yields and impressive ee's is described. The nanomagnesium oxide (aerogel prepared) NAP-MgO was found to be superior over the NA-MgO and CM-MgO in terms of activity and enantioselectivity as applicable in these reactions. An elegant strategy for heterogenization of homogeneous catalysts is presented here to evolve single-site chiral catalysts for AE by a successful transfer of molecular chemistry to surface metal-organic chemistry with the retention of activity, selectivity/enantioselectivity. Brønsted hydroxyls are established as sole contributors for the epoxidation reaction, while they add on to the CSC, which is largely driven by Lewis basic O2-sites. Strong hydrogen-bond interactions between the surface -OH on MgO and -OH groups of diethyl tartrate are found inducing enantioselectivity in the AE reaction. Thus, the nanocrystalline NAP-MgO with its defined shape, size, and accessible OH groups allows the chemisorption of TBHP, DET, and olefin on its surface to accomplish single-site chiral catalysts to provide optimum ee's in AE reactions.
Identifying active surface phases for metal oxide electrocatalysts: a study of manganese oxide bi-functional catalysts for oxygen reduction and water oxidation catalysis
Su, Hai-Yan; Gorlin, Yelena; Man, Isabela Costinela
Progress in the field of electrocatalysis is often hampered by the difficulty in identifying the active site on an electrode surface. Herein we combine theoretical analysis and electrochemical methods to identify the active surfaces in a manganese oxide bi-functional catalyst for the oxygen reduction reaction (ORR) and the oxygen evolution reaction (OER). First, we electrochemically characterize the nanostructured α-Mn2O3 and find that it undergoes oxidation in two potential regions: initially, between 0.5 V and 0.8 V, a potential region relevant to the ORR and, subsequently, between 0.8 V...
Asymmetric Radical Cyclopropanation of Alkenes with In Situ-Generated Donor-Substituted Diazo Reagents via Co(II)-Based Metalloradical Catalysis.
Wang, Yong; Wen, Xin; Cui, Xin; Wojtas, Lukasz; Zhang, X Peter
Donor-substituted diazo reagents, generated in situ from sulfonyl hydrazones in the presence of base, can serve as suitable radical precursors for Co(II)-based metalloradical catalysis (MRC). The cobalt(II) complex of the D2-symmetric chiral porphyrin [Co(3,5-DitBu-Xu(2′-Naph)Phyrin)] is an efficient metalloradical catalyst that is capable of activating different N-arylsulfonyl hydrazones for asymmetric radical cyclopropanation of a broad range of alkenes, affording the corresponding cyclopropanes in high yields with effective control of both diastereo- and enantioselectivity. This Co(II)-based metalloradical system represents the first catalytic protocol that can effectively utilize donor-type diazo reagents for asymmetric olefin cyclopropanation.
Stereodirection of an α-ketoester at sub-molecular sites on chirally modified Pt(111): Heterogeneous asymmetric catalysis
Demers-Carpentier, V.; Rasmussen, A.M.H.; Goubert, G.
Chirally modified Pt catalysts are used in the heterogeneous asymmetric hydrogenation of α-ketoesters. Stereoinduction is believed to occur through the formation of chemisorbed modifier–substrate complexes. In this study, the formation of diastereomeric complexes by coadsorbed methyl 3,3,3-triflu...
Solid acid catalysis from fundamentals to applications
Hattori, Hideshi
Contents: Introduction; Types of solid acid catalysts; Advantages of solid acid catalysts; Historical overviews of solid acid catalysts; Future outlook; Solid acid catalysis; Definition of acid and base (Brønsted acid and Lewis acid); Acid sites on surfaces; Acid strength; Role of acid sites in catalysis; Bifunctional catalysis; Pore size effect on catalysis (shape selectivity); Characterization of solid acid catalysts; Indicator method; Temperature programmed desorption (TPD) of ammonia; Calorimetry of adsorption of basic molecules; Infrare...
Asymmetric catalysis in organic synthesis
Reilly, S.D.; Click, D.R.; Grumbine, S.K.; Scott, B.L.; Watkins, J.G.
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The goal of the project was to prepare new catalyst systems, which would perform chemical reactions in an enantioselective manner so as to produce only one of the possible optical isomers of the product molecule. The authors have investigated the use of lanthanide metals bearing both diolate and Schiff-base ligands as catalysts for the enantioselective reduction of prochiral ketones to secondary alcohols. The ligands were prepared from cheap, readily available starting materials, and their synthesis was performed in a ''modular'' manner such that tailoring of specific groups within the ligand could be carried out without repeating the entire synthetic procedure. In addition, they have developed a new ligand system for Group IV and lanthanide-based olefin polymerization catalysts. The ligand system is easily prepared from readily available starting materials and offers the opportunity to rapidly prepare a wide range of closely related ligands that differ only in their substitution patterns at an aromatic ring. When attached to a metal center, the ligand system has the potential to carry out polymerization reactions in a stereocontrolled manner.
3D hollow sphere Co3O4/MnO2-CNTs: Its high-performance bi-functional cathode catalysis and application in rechargeable zinc-air battery
Xuemei Li
There has been a continuous need for highly active, exceptionally durable and low-cost electrocatalysts for rechargeable zinc-air batteries. Among many low-cost metal-based candidates, transition metal oxides combined with CNTs have gained increasing attention. In this paper, 3-D hollow-sphere MnO2 nanotube-supported Co3O4 nanoparticles and their carbon nanotube hybrid material (Co3O4/MnO2-CNTs) have been synthesized via a simple co-precipitation method combined with post-heat treatment. The morphology and composition of the catalysts are thoroughly analyzed through SEM, TEM, TEM-mapping, XRD, EDX and XPS. In comparison with commercial 20% Pt/C, Co3O4/MnO2, bare MnO2 nanotubes and CNTs, the hybrid Co3O4/MnO2-CNTs-350 exhibits excellent bi-functional catalytic activity toward the oxygen reduction reaction and the oxygen evolution reaction under alkaline conditions (0.1 M KOH). Therefore, high cell performances are achieved, which result in an appropriate open circuit voltage (∼1.47 V), a high discharge peak power density (340 mW cm−2) and a large specific capacity (775 mAh g−1 at 10 mA cm−2) for the primary Zn-air battery, and a small charge–discharge voltage gap and a long cycle life (504 cycles at 10 mA cm−2 with 10 min per cycle) for the rechargeable Zn-air battery. In particular, the simple synthesis method is suitable for large-scale production of this bifunctional material due to its green, cost-effective and readily available process. Keywords: Bi-functional catalyst, Oxygen reduction reaction, Oxygen evolution reaction, Activity and stability, Rechargeable zinc-air battery
Cooperative catalysis designing efficient catalysts for synthesis
Peters, René
Written by experts in the field, this is a much-needed overview of the rapidly emerging field of cooperative catalysis. The authors focus on the design and development of novel high-performance catalysts for applications in organic synthesis (particularly asymmetric synthesis), covering a broad range of topics, from the latest progress in Lewis acid / Br?nsted base catalysis to e.g. metal-assisted organocatalysis, cooperative metal/enzyme catalysis, and cooperative catalysis in polymerization reactions and on solid surfaces. The chapters are classified according to the type of cooperating acti
Efficient hydrodeoxygenation of biomass-derived ketones over bifunctional Pt-polyoxometalate catalyst.
Alotaibi, Mshari A; Kozhevnikova, Elena F; Kozhevnikov, Ivan V
Acidic heteropoly salt Cs2.5H0.5PW12O40 doped with Pt nanoparticles is a highly active and selective catalyst for the one-step hydrogenation of methyl isobutyl and diisobutyl ketones to the corresponding alkanes in the gas phase at 100 °C with 97-99% yield via metal-acid bifunctional catalysis.
Sustainable green catalysis by supported metal nanoparticles.
Fukuoka, Atsushi; Dhepe, Paresh L
The recent progress of sustainable green catalysis by supported metal nanoparticles is described. The template synthesis of metal nanoparticles in ordered porous materials is studied for the rational design of heterogeneous catalysts capable of high activity and selectivity. The application of these materials in green catalytic processes results in a unique activity and selectivity arising from the concerted effect of metal nanoparticles and supports. The high catalytic performance of Pt nanoparticles in mesoporous silica is reported. Supported metal catalysts have also been applied to biomass conversion by heterogeneous catalysis. Additionally, the degradation of cellulose by supported metal catalysts, in which bifunctional catalysis of acid and metal plays the key role for the hydrolysis and reduction of cellulose, is also reported. Copyright 2009 The Japan Chemical Journal Forum and Wiley Periodicals, Inc.
Environmental catalysis
Montes Consuelo; Villa, Aida Luz
The term environmental catalysis has lately been used to refer to a variety of applications of catalysis, which have been grouped into the following categories: a) control of emissions (stack gases and vehicle exhaust, volatile organic compounds (VOC), odours, chlorofluorocarbons); b) conversion of solid or liquid wastes; c) selective production of alternative products that replace polluting compounds; d) replacement of environmentally hazardous catalysts; and e) development of catalysts for obtaining valuable chemical products without the formation of polluting by-products. The Environmental Catalysis group has been working on the first category, particularly on the search for catalysts active in reducing the emissions coming from combustion systems: carbon monoxide, hydrocarbons, nitrogen oxides (NOx), N2O and sulfur oxides (SOx). Our fundamental premise is that molecular sieves are potential catalysts for the development of an environmentally clean technology. These materials comprise a class of inorganic compounds with unique properties that are intimately related to their structure. The framework of molecular sieves consists of tetrahedrally coordinated atoms (Al, Si, P, etc.) linked to each other by oxygen atoms. As a result, they form three-dimensional structures not only with channels and cavities but also with openings bounded by rings consisting of a certain number of tetrahedral atoms.
Catalysis studies
Taylor, T.N.; Ellis, W.P.
The New Research Initiatives Program (NRIP) project on catalysis in Los Alamos Scientific Laboratory (LASL) Group CMB-8 has made significant progress towards performing the first basic in situ experimental studies of heterogeneous catalysis on solid compound surfaces in a LEED-Auger system. To further understand the surface crystallography of a possible catalyst compound, LEED-Auger measurements were made on UO2 (approximately 100) vicinal surfaces. These (approximately 100) vicinal surfaces were shown to decompose irreversibly into lower index facets, including prominent (100) facets, at temperatures below those needed for creation of lowest index faceting on (approximately 111) vicinal surfaces. LEED examination of fully faceted surfaces from both types of UO2 vicinal cuts did not show evidence of cyclopropane or propene chemisorption. The existing LEED-Auger system was modified to allow catalytic reactions at approximately less than 10−3 torr. A sample holder, specifically designed for catalysis measurements in the modified system, was tested while examining single crystals of CoO and Cr2O3. Extensive LEED-Auger measurements were made on CoO in vacuo and in the presence of light hydrocarbons and alcohols plus H2O, NO, and NH3. No chemisorptive behavior was observed except with H2O in the presence of the electron beam. Although only examined briefly, the Cr2O3 was remarkable for the sharp LEED features obtained prior to any surface treatment in the vacuum system
Dark catalysis
Agrawal, Prateek; Cyr-Racine, Francis-Yan; Randall, Lisa; Scholtz, Jakub, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Department of Physics, Harvard University, 17 Oxford St., Cambridge, MA 02138 (United States)
Recently it was shown that dark matter with mass of order the weak scale can be charged under a new long-range force, decoupled from the Standard Model, with only weak constraints from early Universe cosmology. Here we consider the implications of an additional charged particle C that is light enough to lead to significant dissipative dynamics on galactic time scales. We highlight several novel features of this model, which can be relevant even when the C particle constitutes only a small fraction of the number density (and energy density). We assume a small asymmetric abundance of the C particle whose charge is compensated by a heavy X particle, so that the relic abundance of dark matter consists mostly of symmetric X and X-bar, with a small asymmetric component made up of X and C. As the universe cools, it undergoes asymmetric recombination, efficiently binding the free C's into (XC) dark atoms. Even with a tiny asymmetric component, the presence of C particles catalyzes tight coupling between the heavy dark matter X and the dark photon plasma that can lead to a significant suppression of the matter power spectrum on small scales and lead to some of the strongest bounds on such dark matter theories. We find a viable parameter space where structure formation constraints are satisfied and significant dissipative dynamics can occur in galactic haloes, but show that a large region is excluded. Our model shows that subdominant components in the dark sector can dramatically affect structure formation.
Selective Homogeneous Catalysis in Asymmetric Synthesis
Fristrup, Peter
... of twelve "substrate-probes", which were designed and synthesized specifically for this purpose. Both the stoichiometric reaction with OsO4 in toluene and the more environmentally benign catalytic reaction in a two-phase system were studied. The obtained experimental results were in good agreement...
DIFLUORPHOS and SYNPHOS in asymmetric catalysis: Synthetic ...
Indian Academy of Sciences (India)
Janssen, F.J.J.G.; Santen, R.A. van (eds.)
Catalysts play key roles in the production of clean fuels, the conversion of waste and green raw materials into energy, clean combustion engines including control of NOx and soot production and reduction of greenhouse gases, production of clean water and polymers, as well as the reduction of polymers to monomers. This book contains 15 chapters by experts in the field, on the theme of catalysts used to create a sustainable society. Chapters include: catalysts for renewable energy and chemicals, fuel cells, catalytic processes for high-quality transportation fuels; oxidative coupling of methane, methane utilisation via synthesis gas generation, catalytic combustion, catalytic removal of nitrate from water, the contribution of catalysis towards the reduction of atmospheric air pollution (CO2, CFCs, N2O, ozone), emission control from mobile sources and from stationary sources, and deactivation, regeneration and recycling of hydroprocessing catalysts.
Bifunctional redox flow battery
Wen, Y.H.; Cheng, J.; Xun, Y.; Ma, P.H.; Yang, Y.S.
A new bifunctional redox flow battery (BRFB) system, V(III)/V(II)-L-cystine(O2), was systematically investigated by using different separators. It is shown that during charge, water transfer is significantly restricted with increasing concentration of HBr when the Nafion 115 cation exchange membrane is employed. The same result can be obtained when the gas diffusion layer (GDL) hot-pressed separator is used. The organic electro-synthesis is directly correlated with the crossover of vanadium. When employing the anion exchange membrane, the electro-synthesis efficiency is over 96% due to a minimal crossover of vanadium. When the GDL hot-pressed separator is applied, the crossover of vanadium and water transfer are noticeably prevented and an electro-synthesis efficiency of over 99% is obtained. Impurities such as vanadium ions and bromine can be eliminated through purification of the organic electro-synthesized products. The purified product is identified as L-cysteic acid by IR spectroscopy. The BRFB shows a favorable discharge performance at a current density of 20 mA cm−2. The best discharge performance is achieved by using the GDL hot-pressed separator. A coulombic efficiency of 87% and an energy efficiency of about 58% can be obtained. The major energy losses are mainly associated with the cross-contamination of the anodic and cathodic active electrolytes
Bifunctional Phosphorus Dendrimers and Their Properties.
Caminade, Anne-Marie; Majoral, Jean-Pierre
Dendrimers are hyperbranched and monodisperse macromolecules, generally considered as a special class of polymers, but synthesized step-by-step. Most dendrimers have a uniform structure, with a single type of terminal function. However, it is often desirable to have at least two different functional groups. This review will discuss the case of bifunctional phosphorus-containing dendrimers, and the consequences for their properties. Besides the terminal functions, dendritic structures may have also a function at the core, or linked off-center to the core, or at the core of dendrons (dendritic wedges). Association of two dendrons having different terminal functions leads to Janus dendrimers (two faces). The internal structure can also possess functional groups on one layer, or linked to one layer, or on several layers. Finally, there are several ways to have two types of terminal functions, besides the case of Janus dendrimers: either each terminal function bears two functions sequentially, or two different functions are linked to each terminal branching point. Examples of each type of structure will be given in this review, as well as practical uses of such sophisticated structures in the fields of fluorescence, catalysis, nanomaterials and biology.
Cyclodextrins in Asymmetric and Stereospecific Synthesis
Fliur Macaev
Since their discovery, cyclodextrins have widely been used as green and easily available alternatives to promoters or catalysts of different chemical reactions in water. This review covers the research and application of cyclodextrins and their derivatives in asymmetric and stereospecific syntheses, with their division into three main groups: (1) cyclodextrins promoting asymmetric and stereospecific catalysis in water; (2) cyclodextrins' complexes with transition metals as asymmetric and stereospecific catalysts; and (3) cyclodextrins' non-metallic derivatives as asymmetric and stereospecific catalysts. The scope of this review is to systematize existing information on the contribution of cyclodextrins to asymmetric and stereospecific synthesis and, thus, to facilitate further development in this direction.
Synthesis and application of aryl-ferrocenyl(pseudo-biarylic) complexes. Part 5. Design and synthesis of a new type of ferrocene-based planar chiral DMAP analogues. A new catalyst system for asymmetric nucleophilic catalysis
Seitzberg, J.G; Dissing, C; Søtofte, Inger
A new first-generation catalyst system for nucleophilic catalysis has been developed. It is based on a planar chiral ferrocene skeleton with either the potent nucleophile 4-(dimethylamino)pyridine (DMAP) or the related 4-nitropyridine N-oxide attached in either the 2- or the 3-position. The synth...
Advances in catalysis
Gates, Bruce C
Advances in Catalysis fills the gap between the journal papers and the textbooks across the diverse areas of catalysis research. For more than 60 years Advances in Catalysis has been dedicated to recording progress in the field of catalysis and providing the scientific community with comprehensive and authoritative reviews. This series is invaluable to chemical engineers, physical chemists, biochemists, researchers and industrial chemists working in the fields of catalysis and materials chemistry. * In-depth, critical, state-of-the-art reviews * Comprehensive, covers of all as
Recyclable enantioselective catalysts based on copper(II) complexes of 2-(pyridine-2-yl)imidazolidine-4-thione: their application in asymmetric Henry reactions
Nováková, G.; Drabina, P.; Frumarová, Božena; Sedlák, M.
Roč. 358, č. 15 (2016), s. 2541-2552 ISSN 1615-4150 Institutional support: RVO:61389013 Keywords: asymmetric catalysis * enantioselectivity * heterogeneous catalysis Subject RIV: CC - Organic Chemistry Impact factor: 5.646, year: 2016
Monopole catalysis: an overview
Dawson, S.
A summary of the talks presented in the topological workshop on monopole catalysis at this conference is given. We place special emphasis on the conservation laws which determine the allowed monopole-fermion interactions and on catalysis as a probe of the structure of a grand unified theory. 11 references
Catalysis of Supramolecular Hydrogelation
Trausel, F.; Versluis, F.; Maity, C.; Poolman, J.M.; Lovrak, M.; van Esch, J.H.; Eelkema, R.
Conspectus: One often thinks of catalysts as chemical tools to accelerate a reaction or to have a reaction run under more benign conditions. As such, catalysis has a role to play in the chemical industry and in lab-scale synthesis that is not to be underestimated. Still, the role of catalysis in
Horizons in catalysis
Idol, J D
A discussion covers a brief historical review of industrial catalysis; a survey of major present-day catalytic processes in the petroleum and petrochemical industries; the outlook for the industrial catalyst applications in coal liquefaction, conversion of coal liquids, shale oil, and other synthetic crude sources for transportation fuels, and synthesis gas-based processes; some important directions for future developments, including phase transfer catalysis, photocatalysis, and advanced techniques for catalyst studies; and the need for closer industry-university and industry-government cooperation in the field of catalysis.
Surface and nanomolecular catalysis
Richards, Ryan
Using new instrumentation and experimental techniques that allow scientists to observe chemical reactions and molecular properties at the nanoscale, the authors of Surface and Nanomolecular Catalysis reveal new insights into the surface chemistry of catalysts and the reaction mechanisms that actually occur at a molecular level during catalysis. While each chapter contains the necessary background and explanations to stand alone, the diverse collection of chapters shows how developments from various fields each contributed to our current understanding of nanomolecular catalysis as a whole. The
Molecular water oxidation catalysis
Llobet, Antoni
Photocatalytic water splitting is a promising strategy for capturing energy from the sun by coupling light harvesting and the oxidation of water, in order to create clean hydrogen fuel. Thus a deep knowledge of the water oxidation catalysis field is essential to be able to come up with useful energy conversion devices based on sunlight and water splitting. Molecular Water Oxidation Catalysis: A Key Topic for New Sustainable Energy Conversion Schemes presents a comprehensive and state-of-the-art overview of water oxidation catalysis in homogeneous phase, describing in detail the most importan
Catalysis seen in action.
Tromp, Moniek
Synchrotron radiation techniques are widely applied in materials research and heterogeneous catalysis. In homogeneous catalysis, its use so far is rather limited despite its high potential. Here, insights in the strengths and limitations of X-ray spectroscopy technique in the field of homogeneous catalysis are given, including new technique developments. A relevant homogeneous catalyst, used in the industrially important selective oligomerization of ethene, is taken as a worked-out example. Emphasis is placed on time-resolved operando X-ray absorption spectroscopy with outlooks to novel high energy resolution and emission techniques. All experiments described have been or can be done at the Diamond Light Source Ltd (Didcot, UK). © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Concepts in catalysis
Boudart, M.
This paper reports on concepts in catalysis which are very important in heterogeneous catalysis, even today, when, in spite of surface science, the complexity of events at a real catalytic surface still evades the understanding necessary for design. In this paper the authors attempt to give an update on evolving concepts in heterogeneous catalysis. The topics include: counting active centers on metal surfaces; the notion of turnover frequency for a catalytic cycle; the concept of structure (in)sensitive reactions; the ensemble (geometric) vs. the ligand (electronic) effect following Sachtler's school; the idea of a rate-determining step and of a most abundant reactive intermediate; the effect of surface non-uniformity on catalytic kinetics; and what makes catalytic cycles turn over
Catalysis induced by radiations
Jimenez B, J.; Gonzalez J, J. C.
In Mexico, a large quantity of wastes considered hazardous is generated, owing to their corrosivity, reactivity, environmental toxicity, flammability and biological-infectious potential. It is important to note that toxic compounds cannot be discharged into sewerage systems, much less into receiving bodies of water. The usual treatments for hazardous wastes are incineration and landfilling. Incineration is an efficient way of treating wastes, but it can be a source of dioxins and benzofurans, with phenol and chlorophenol being precursors of these compounds. At present, the radiolytic degradation of organic compounds has been broadly studied, especially that of 4-chlorophenol, as has the photocatalysis of organic compounds. However, the combination of both processes, called radiocatalysis, is barely reported. In this work the results of the experiments carried out to degrade 4-chlorophenol by means of radiocatalysis are reported. (Author)
Catalysis for alternative energy generation
Summarizes recent problems in using catalysts in alternative energy generation and proposes novel solutions; reconsiders the role of catalysis in alternative energy generation. Contributors include catalysis and alternative energy experts from across the globe.
Editorial: Nanoscience makes catalysis greener
Polshettiwar, Vivek; Basset, Jean-Marie; Astruc, Didier
Green chemistry by nanocatalysis: Catalysis is a strategic field of science because it involves new ways of meeting energy and sustainability challenges. The concept of green chemistry, which makes the science of catalysis even more creative, has
Isotopes in heterogeneous catalysis
Hargreaves, Justin SJ
The purpose of this book is to review the current, state-of-the-art application of isotopic methods to the field of heterogeneous catalysis. Isotopic studies are arguably the ultimate technique in in situ methods for heterogeneous catalysis. In this review volume, chapters have been contributed by experts in the field and the coverage includes both the application of specific isotopes - Deuterium, Tritium, Carbon-14, Sulfur-35 and Oxygen-18 - as well as isotopic techniques - determination of surface mobility, steady state transient isotope kinetic analysis, and positron emission profiling.
Pollution Control by Catalysis
Eriksen, Kim Michael; Fehrmann, Rasmus
The report summarises the results of two years of collaboration supported by INTAS between the Department of Chemistry, DTU, DK, IUSTI, Universite de Provence, FR, ICE/HT University of Patras, GR, and the Boreskov Institute of Catalysis, RU. The project has been concerned with mechanistic studies of deNOx and...
Preface: Catalysis Today
Li, Yongdan
This special issue of Catalysis Today with the theme "Sustainable Energy" results from the great success of the session "Catalytic Technologies Accelerating the Establishment of Sustainable and Clean Energy", one of the two sessions of the 1st International Symposium on Catalytic Science and Techn...
Ascorbic acid as a bifunctional hydrogen bond donor for the synthesis of cyclic carbonates from CO2 under ambient conditions
Arayachukiat, Sunatda
Readily available ascorbic acid was discovered as an environmentally benign hydrogen bond donor (HBD) for the synthesis of cyclic organic carbonates from CO2 and epoxides in the presence of nucleophilic co-catalysts. The ascorbic acid/TBAI (TBAI: tetrabutylammonium iodide) binary system could be applied for the cycloaddition of CO2 to various epoxides under ambient or mild conditions. DFT calculations and catalysis experiments revealed an intriguing bifunctional mechanism in the CO2 insertion step involving different hydroxyl moieties (enediol, ethyldiol) of the ascorbic acid scaffold.
Arayachukiat, Sunatda; Kongtes, Chutima; Barthel, Alexander; Vummaleti, Sai V. C.; Poater, Albert; Wannakao, Sippakorn; Cavallo, Luigi; D'Elia, Valerio
Eley, D.D.; Pine, H.; Weisz, P.B.
This book reports on the current state of knowledge concerning the structure and catalysis of metals and metal oxide particles, old and new. It addresses the basic and broad problems of what the catalytically relevant surface structures of metals are, where we stand in techniques capable of attacking this problem, and what the current state of knowledge is. The focus is on the long-standing, important, and central problem of general investigative methodology and strategy: the pressure gap is created by the fact that the best techniques of surface analysis require high-vacuum conditions, while useful catalysis is confined to conditions of near-ambient or higher pressures. The authors review the basic question of the influence of particle size on the catalytic behavior of metal particles, which involves questions of the basic sciences as much as practical considerations of catalyst design and use. They discuss preparatory techniques, analytical technology, and methods of characterization of these materials
Solid Base Catalysis
Ono, Yoshio
The importance of solid base catalysts has come to be recognized for their environmentally benign qualities, and much significant progress has been made over the past two decades in catalytic materials and solid base-catalyzed reactions. The book is focused on the solid base. Because of the advantages over liquid bases, the use of solid base catalysts in organic synthesis is expanding. Solid bases are easier to dispose than liquid bases, separation and recovery of products, catalysts and solvents are less difficult, and they are non-corrosive. Furthermore, base-catalyzed reactions can be performed without using solvents and even in the gas phase, opening up more possibilities for discovering novel reaction systems. Using numerous examples, the present volume describes the remarkable role solid base catalysis can play, given the ever increasing worldwide importance of "green" chemistry. The reader will obtain an overall view of solid base catalysis and gain insight into the versatility of the reactions to whic...
Nanocarbon/oxide composite catalysts for bifunctional oxygen reduction and evolution in reversible alkaline fuel cells: A mini review
Chen, Mengjie; Wang, Lei; Yang, Haipeng; Zhao, Shuai; Xu, Hui; Wu, Gang
A reversible fuel cell (RFC), which integrates a fuel cell with an electrolyzer, is similar to a rechargeable battery. This technology relies on high-performance bifunctional catalysts for the oxygen reduction reaction (ORR) in the fuel cell mode and the oxygen evolution reaction (OER) in the electrolyzer mode. Current catalysts are platinum group metals (PGM) such as Pt and Ir, which are expensive and scarce. Therefore, it is highly desirable to develop PGM-free catalysts for the large-scale application of RFCs. In this mini review, we discuss the most promising nanocarbon/oxide composite catalysts for ORR/OER bifunctional catalysis in alkaline media, based mainly on our recent progress. Starting with the effectiveness of selected oxides and nanocarbons in terms of their activity and stability, we outline synthetic methods and the resulting structures and morphologies of catalysts to provide a correlation between synthesis, structure, and property. Special emphasis is put on understanding the possible synergistic effect between oxide and nanocarbon for enhanced performance. Finally, a few nanocomposite catalysts are discussed as typical examples to elucidate the rules of designing highly active and durable bifunctional catalysts for RFC applications.
Crystallization and preliminary X-ray analysis of a bifunctional catalase-phenol oxidase from Scytalidium thermophilum
Sutay Kocabas, Didem; Pearson, Arwen R.; Phillips, Simon E. V.; Bakir, Ufuk; Ogel, Zumrut B.; McPherson, Michael J.; Trinh, Chi H.
The bifunctional enzyme catalase-phenol oxidase from S. thermophilum was crystallized by the hanging-drop vapour-diffusion method in space group P2₁ and diffraction data were collected to 2.8 Å resolution. Catalase-phenol oxidase from Scytalidium thermophilum is a bifunctional enzyme: its major activity is the catalase-mediated decomposition of hydrogen peroxide, but it also catalyzes phenol oxidation. To understand the structural basis of this dual functionality, the enzyme, which has been shown to be a tetramer in solution, has been purified by anion-exchange and gel-filtration chromatography and has been crystallized using the hanging-drop vapour-diffusion technique. Streak-seeding was used to obtain larger crystals suitable for X-ray analysis. Diffraction data were collected to 2.8 Å resolution at the Daresbury Synchrotron Radiation Source. The crystals belonged to space group P2₁ and contained one tetramer per asymmetric unit
Purification, crystallization and preliminary X-ray crystallographic analysis of rice bifunctional α-amylase/subtilisin inhibitor from Oryza sativa
Lin, Yi-Hung; Peng, Wen-Yan; Huang, Yen-Chieh; Guan, Hong-Hsiang; Hsieh, Ying-Cheng; Liu, Ming-Yih; Chang, Tschining; Chen, Chun-Jung
The crystallization of rice α-amylase/subtilisin bifunctional inhibitor is reported. Rice bifunctional α-amylase/subtilisin inhibitor (RASI) can inhibit both α-amylase from larvae of the red flour beetle (Tribolium castaneum) and subtilisin from Bacillus subtilis. The synthesis of RASI is up-regulated during the late milky stage in developing seeds. The 8.9 kDa molecular-weight RASI from rice has been crystallized using the hanging-drop vapour-diffusion method. According to 1.81 Å resolution X-ray diffraction data from rice RASI crystals, the crystal belongs to space group P2₁2₁2, with unit-cell parameters a = 79.99, b = 62.95, c = 66.70 Å. Preliminary analysis indicates two RASI molecules in an asymmetric unit with a solvent content of 44%
Fat & fabulous: bifunctional lipids in the spotlight.
Haberkant, Per; Holthuis, Joost C M
Understanding biological processes at the mechanistic level requires a systematic charting of the physical and functional links between all cellular components. While protein-protein and protein-nucleic acid networks have been subject to many global surveys, other critical cellular components such as membrane lipids have rarely been studied in large-scale interaction screens. Here, we review the development of photoactivatable and clickable lipid analogues, so-called bifunctional lipids, as novel chemical tools that enable a global profiling of lipid-protein interactions in biological membranes. Recent studies indicate that bifunctional lipids hold great promise in systematic efforts to dissect the elaborate crosstalk between proteins and lipids in live cells and organisms. This article is part of a Special Issue entitled Tools to study lipid functions. Copyright © 2014 Elsevier B.V. All rights reserved.
Identifying and annotating human bifunctional RNAs reveals their versatile functions.
Chen, Geng; Yang, Juan; Chen, Jiwei; Song, Yunjie; Cao, Ruifang; Shi, Tieliu; Shi, Leming
Bifunctional RNAs that possess both protein-coding and noncoding functional properties have been less explored and remain poorly understood. Here we systematically explored the characteristics and functions of such human bifunctional RNAs by integrating tandem mass spectrometry and RNA-seq data. We first constructed a pipeline to identify and annotate bifunctional RNAs, leading to the characterization of 132 high-confidence bifunctional RNAs. Our analyses indicate that bifunctional RNAs may be involved in human embryonic development and can be functional in diverse tissues. Moreover, bifunctional RNAs could interact with multiple miRNAs and RNA-binding proteins to exert their corresponding roles. Bifunctional RNAs may also function as competing endogenous RNAs to regulate the expression of many genes by competing for common targeting miRNAs. Finally, somatic mutations of diverse carcinomas may have harmful effects on the corresponding bifunctional RNAs. Collectively, our study not only provides the pipeline for identifying and annotating bifunctional RNAs but also reveals their important gene-regulatory functions.
Metallic nanosystems in catalysis
Bukhtiyarov, Valerii I; Slin'ko, Mikhail G
The reactivities of metallic nanosystems in catalytic processes are considered. The activities of nanoparticles in catalysis are due to their unique microstructures, electronic properties and high specific surfaces of the active centres. The problems of increasing the selectivities of catalytic processes are discussed using several nanosystems as examples. The mutual effects of components of bimetallic nanoparticles are discussed. The prospects for theoretical and experimental investigations into catalytic nanosystems and the construction of industrial catalysts based on them are evaluated. The bibliography includes 207 references.
Asymmetric synthesis II: more methods and applications
Christmann, Mathias
After the overwhelming success of 'Asymmetric Synthesis - The Essentials', narrating the colorful history of asymmetric synthesis, this is the second edition, with the latest subjects and authors. While the aim of the first edition was mainly to honor the achievements of the pioneers in asymmetric syntheses, the aim of this new edition is to bring the current developments, especially from younger colleagues, to the attention of students. The format of the book remains unchanged, i.e. short conceptual overviews by young leaders in their field, including a short biography of the authors. The growing multidisciplinary research within chemistry is reflected in the selection of topics, including metal catalysis, organocatalysis, physical organic chemistry, analytical chemistry, and their applications in total synthesis. The prospective reader of this book is a graduate or undergraduate student of advanced organic chemistry as well as the industrial chemist who wants to get a brief update on the current developments in th...
CATALYSIS OF CHEMICAL PROCESSES: PARTICULAR ...
IICBA01
secondary/high schools and universities, the inhibition of the chemical reactions is frequently ... As a result, the lesson catalysis is frequently included in chemistry education curricula at ... Misinterpretations in teaching and perception of catalysis ... profile is shown as a dependence of energy on reaction progress, without ...
Spectroscopy in catalysis : an introduction
Niemantsverdriet, J.W.
Spectroscopy in Catalysis is an introduction to the most important analytical techniques that are nowadays used in catalysis and in catalytic surface chemistry. The aim of the book is to give the reader a feeling for the type of information that characterization techniques provide about questions
The structure of Haemophilus influenzae prephenate dehydrogenase suggests unique features of bifunctional TyrA enzymes
Chiu, Hsiu-Ju; Abdubek, Polat; Astakhova, Tamara; Axelrod, Herbert L.; Carlton, Dennis; Clayton, Thomas; Das, Debanu; Deller, Marc C.; Duan, Lian; Feuerhelm, Julie; Grant, Joanna C.; Grzechnik, Anna; Han, Gye Won; Jaroszewski, Lukasz; Jin, Kevin K.; Klock, Heath E.; Knuth, Mark W.; Kozbial, Piotr; Krishna, S. Sri; Kumar, Abhinav; Marciano, David; McMullan, Daniel; Miller, Mitchell D.; Morse, Andrew T.; Nigoghossian, Edward; Okach, Linda; Reyes, Ron; Tien, Henry J.; Trame, Christine B.; Bedem, Henry van den; Weekes, Dana; Xu, Qingping; Hodgson, Keith O.; Wooley, John; Elsliger, Marc-André; Deacon, Ashley M.; Godzik, Adam; Lesley, Scott A.; Wilson, Ian A.
The crystal structure of the prephenate dehydrogenase component of the bifunctional H. influenzae TyrA reveals unique structural differences between bifunctional and monofunctional TyrA enzymes. Chorismate mutase/prephenate dehydrogenase from Haemophilus influenzae Rd KW20 is a bifunctional enzyme that catalyzes the rearrangement of chorismate to prephenate and the NAD(P)⁺-dependent oxidative decarboxylation of prephenate to 4-hydroxyphenylpyruvate in tyrosine biosynthesis. The crystal structure of the prephenate dehydrogenase component (HinfPDH) of the TyrA protein from H. influenzae Rd KW20 in complex with the inhibitor tyrosine and cofactor NAD⁺ has been determined to 2.0 Å resolution. HinfPDH is a dimeric enzyme, with each monomer consisting of an N-terminal α/β dinucleotide-binding domain and a C-terminal α-helical dimerization domain. The structure reveals key active-site residues at the domain interface, including His200, Arg297 and Ser179 that are involved in catalysis and/or ligand binding and are highly conserved in TyrA proteins from all three kingdoms of life. Tyrosine is bound directly at the catalytic site, suggesting that it is a competitive inhibitor of HinfPDH. Comparisons with its structural homologues reveal important differences around the active site, including the absence of an α–β motif in HinfPDH that is present in other TyrA proteins, such as Synechocystis sp. arogenate dehydrogenase. Residues from this motif are involved in discrimination between NADP⁺ and NAD⁺. The loop between β5 and β6 in the N-terminal domain is much shorter in HinfPDH and an extra helix is present at the C-terminus. Furthermore, HinfPDH adopts a more closed conformation compared with TyrA proteins that do not have tyrosine bound. This conformational change brings the substrate, cofactor and active-site residues into close proximity for catalysis. An ionic network consisting of Arg297 (a key residue for tyrosine binding), a water molecule, Asp206 (from
Build/Couple/Pair and Multifunctional Catalysis Strategies for the Synthesis of Heterocycles from Simple Starting Materials
Ascic, Erhad
Multifunctional Catalysis: Synthesis of Heterocycles from Simple Starting Materials. A multifunctional catalysis approach, involving a ruthenium-catalyzed tandem ring-closing metathesis/isomerization/N-acyliminium cyclization sequence, is described. Double bonds created during ring-closing metathesis isomerize......, a series of interesting indolizidinones are formed in good yields with excellent diastereoselectivities, including a formal total synthesis of the antiparasitic natural product harmicine and the first total synthesis of mescalotam. Furthermore, preliminary asymmetric variants of the tandem process have...
Heterogeneous Catalysis of Polyoxometalate Based Organic–Inorganic Hybrids
Yuanhang Ren
Organic–inorganic hybrid polyoxometalate (POM) compounds are a subset of materials with unique structures and physical/chemical properties. The combination of metal-organic coordination complexes with classical POMs not only provides a powerful way to gain multifarious new compounds but also affords a new method to modify and functionalize POMs. In parallel with the many reports on the synthesis and structure of new hybrid POM compounds, the application of these compounds for heterogeneous catalysis has also attracted considerable attention. The hybrid POM compounds show noteworthy catalytic performance in acid, oxidation, and even in asymmetric catalytic reactions. This review summarizes the design and synthesis of organic–inorganic hybrid POM compounds and particularly highlights their recent progress in heterogeneous catalysis.
Practical Engineering Aspects of Catalysis in Microreactors
Křišťál, Jiří; Stavárek, Petr; Vajglová, Zuzana; Vondráčková, Magdalena; Pavlorková, Jana; Jiřičný, Vladimír
Vol. 41, No. 12 (2015), pp. 9357-9371. ISSN 0922-6168. [Pannonian Symposium on Catalysis /12./, Castle Trest, 16.09.2014-20.09.2014] Institutional support: RVO:67985858. Keywords: heterogeneous catalysis * homogeneous catalysis * photocatalysis. Subject RIV: CI - Industrial Chemistry, Chemical Engineering. Impact factor: 1.833, year: 2015
Magnetic catalysis and inverse magnetic catalysis in QCD
Mueller, N.
We investigate the effects of strong magnetic fields on the QCD phase structure at vanishing density by solving the gluon and quark gap equations. The chiral crossover temperature as well as the chiral condensate is computed. For asymptotically large magnetic fields we find magnetic catalysis, while we find inverse magnetic catalysis for intermediate magnetic fields. Moreover, for large magnetic fields the chiral phase transition for massless quarks turns into a crossover. The underlying mechanisms are then investigated analytically within a few simplifications of the full numerical analysis. We find that a combination of gluon screening effects and the weakening of the strong coupling is responsible for the phenomenon of inverse catalysis seen in lattice studies. In turn, the magnetic catalysis at large magnetic field is already indicated by simple arguments based on dimensionality. (author)
Asymmetric collider
Bharadwaj, V.; Colestock, P.; Goderre, G.; Johnson, D.; Martin, P.; Holt, J.; Kaplan, D.
The study of CP violation in beauty decay is one of the key challenges facing high energy physics. Much work has not yet yielded a definitive answer as to how this study might best be performed. However, one clear conclusion is that new accelerator facilities are needed. Proposals include experiments at asymmetric electron-positron colliders and in fixed-target and collider modes at LHC and SSC. Fixed-target and collider experiments at existing accelerators, while they might succeed in a first observation of the effect, will not be adequate to study it thoroughly. Giomataris has emphasized the potential of a new approach to the study of beauty CP violation: the asymmetric proton collider. Such a collider might be realized by the construction of a small storage ring intersecting an existing or soon-to-exist large synchrotron, or by arranging collisions between a large synchrotron and its injector. An experiment at such a collider can combine the advantages of fixed-target-like spectrometer geometry, facilitating triggering, particle identification and the instrumentation of a large acceptance, while the increased √s can provide a factor > 100 increase in beauty-production cross section compared to Tevatron or HERA fixed-target. Beams crossing at a non-zero angle can provide a small interaction region, permitting a first-level decay-vertex trigger to be implemented. To achieve large √s with a large Lorentz boost and high luminosity, the most favorable venue is the high-energy booster (HEB) at the SSC Laboratory, though the CERN SPS and Fermilab Tevatron are also worth considering
Astaxanthin diferulate as a bifunctional antioxidant
Papa, T.B.R.; Pinho, V.D.; Nascimento, E.P. do
Astaxanthin when esterified with ferulic acid is a better singlet oxygen quencher, with k₂ = (1.58 ± 0.1) × 10¹⁰ L mol⁻¹ s⁻¹ in ethanol at 25°C, compared with astaxanthin with k₂ = (1.12 ± 0.01) × 10⁹ L mol⁻¹ s⁻¹. The ferulate moiety in the astaxanthin diester is a better radical....... The mutual enhancement of antioxidant activity for the newly synthesized astaxanthin diferulate becoming a bifunctional antioxidant is rationalized according to a two-dimensional classification plot for electron donation and electron acceptance capability....
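Taking the two quenching rate constants quoted in this record at face value, the enhancement from esterification is roughly an order of magnitude: $k_2(\text{diferulate})/k_2(\text{astaxanthin}) = (1.58 \times 10^{10})/(1.12 \times 10^{9}) \approx 14$ under the stated conditions (ethanol, 25°C). This is simple arithmetic on the reported values, not an additional result from the paper.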
Spectroscopy in Catalysis describes the most important modern analytical techniques used to investigate catalytic surfaces. These include electron spectroscopy (XPS, UPS, AES, EELS), ion spectroscopy (SIMS, SNMS, RBS, LEIS), vibrational spectroscopy (infrared, Raman, EELS), temperature-programmed
Polshettiwar, Vivek
Green chemistry by nanocatalysis: Catalysis is a strategic field of science because it involves new ways of meeting energy and sustainability challenges. The concept of green chemistry, which makes the science of catalysis even more creative, has become an integral part of sustainability. This special issue is at the interface of green chemistry and nanocatalysis, and features excellent background articles as well as the latest research results. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Molecular ingredients of heterogeneous catalysis
Somorjai, G.A.
The purpose of this paper is to present a review and status report to those in theoretical chemistry of the rapidly developing surface science of heterogeneous catalysis. The art of catalysis is developing into a science. This profound change provides one with opportunities not only to understand the molecular ingredients of important catalytic systems but also to develop new and improved catalysts. The participation of theorists in finding answers to important questions is sorely needed for the sound development of the field. It is the author's hope that some of the outstanding problems of heterogeneous catalysis that are identified in this paper will be investigated. For this purpose the paper is divided into several sections. The brief introduction to the methodology and recent results of the surface science of heterogeneous catalysis is followed by a review of the concepts of heterogeneous catalysis. Then, the experimental results that identified the three molecular ingredients of catalysis (structure, carbonaceous deposits and the oxidation state of surface atoms) are described. Each section is closed with a summary and a list of problems that require theoretical and experimental scrutiny. Finally, attempts to build new catalyst systems and the theoretical and experimental problems that appeared in the course of this research are described.
Bifunctional electrodes for unitised regenerative fuel cells
Altmann, Sebastian; Kaz, Till; Friedrich, Kaspar Andreas
Research highlights: → Different oxygen electrode configurations for the operation in a unitised reversible fuel cell were tested. → Polarisation curves and EIS measurements were recorded. → The mixture of catalysts performs best for the present stage of electrode development. → Potential improvements for the different compositions are discussed. - Abstract: The effects of different configurations and compositions of platinum and iridium oxide electrodes for the oxygen reaction of unitised regenerative fuel cells (URFC) are reported. Bifunctional oxygen electrodes are important for URFC development because favourable properties for the fuel cell and the electrolysis modes must be combined into a single electrode. The bifunctional electrodes were studied under different combinations of catalyst mixtures, multilayer arrangements and segmented configurations with single catalyst areas. Distinct electrochemical behaviour was observed for both modes and can be explained on the basis of impedance spectroscopy. The mixture of both catalysts performs best for the present stage of electrode development. Also, the multilayer electrodes yielded good results with the potential for optimisation. The influence of ionic and electronic resistances on the relative performance is demonstrated. However, penalties due to cross currents in the heterogeneous electrodes were identified and explained by comparing the performance curves with electrodes composed of a single catalyst. Potential improvements for the different compositions are discussed.
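A convenient way to read such paired fuel-cell/electrolysis performance curves is the round-trip voltage efficiency at a matched current density, i.e. the cell voltage delivered in fuel-cell mode divided by the voltage required in electrolysis mode. The short Python sketch below illustrates the bookkeeping with placeholder polarisation data (the numbers are assumptions for illustration, not values from this study):

import numpy as np

# Hypothetical polarisation data for one bifunctional oxygen electrode (placeholder values).
j = np.array([0.1, 0.2, 0.4, 0.8])          # current density, A cm^-2
v_fc = np.array([0.78, 0.74, 0.68, 0.58])   # cell voltage in fuel-cell mode, V
v_el = np.array([1.55, 1.62, 1.72, 1.88])   # cell voltage in electrolysis mode, V

# Round-trip voltage efficiency: energy recovered per unit charge in fuel-cell mode
# divided by energy spent per unit charge in electrolysis mode, at the same current density.
eta_rt = v_fc / v_el
for ji, ei in zip(j, eta_rt):
    print(f"j = {ji:.1f} A/cm^2 -> round-trip voltage efficiency ~ {ei:.0%}")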
Enhanced Micellar Catalysis LDRD.
Betty, Rita G.; Tucker, Mark D; Taggart, Gretchen; Kinnan, Mark K.; Glen, Crystal Chanea; Rivera, Danielle; Sanchez, Andres; Alam, Todd Michael
The primary goals of the Enhanced Micellar Catalysis project were to gain an understanding of the micellar environment of DF-200, or similar liquid CBW surfactant-based decontaminants, as well as characterize the aerosolized DF-200 droplet distribution and droplet chemistry under baseline ITW rotary atomization conditions. Micellar characterization of limited surfactant solutions was performed externally through the collection and measurement of Small Angle X-Ray Scattering (SAXS) images and Cryo-Transmission Electron Microscopy (cryo-TEM) images. Micellar characterization was performed externally at the University of Minnesota's Characterization Facility Center, and at the Argonne National Laboratory Advanced Photon Source facility. A micellar diffusion study was conducted internally at Sandia to measure diffusion constants of surfactants over a concentration range, to estimate the effective micelle diameter, to determine the impact of individual components on the micellar environment in solution, and the impact of combined components on surfactant phase behavior. Aerosolized DF-200 sprays were characterized for particle size and distribution and limited chemical composition. Evaporation rates of aerosolized DF-200 sprays were estimated under a set of baseline ITW nozzle test system parameters.
Operando research in heterogeneous catalysis
Groot, Irene
This book is devoted to the emerging field of techniques for visualizing atomic-scale properties of active catalysts under actual working conditions, i.e. high gas pressures and high temperatures. It explains how to understand these observations in terms of the surface structures and dynamics and their detailed interplay with the gas phase. This provides an important new link between fundamental surface physics and chemistry, and applied catalysis. The book explains the motivation and the necessity of operando studies, and positions these with respect to the more traditional low-pressure investigations on the one hand and the reality of industrial catalysis on the other. The last decade has witnessed a rapid development of new experimental and theoretical tools for operando studies of heterogeneous catalysis. The book has a strong emphasis on the new techniques and illustrates how the challenges introduced by the harsh, operando conditions are faced for each of these new tools. Therefore, one can also read th...
Green chemistry by nano-catalysis
Polshettiwar, Vivek; Varma, Rajender S.
the homogeneous catalysts. This review focuses on the use of nano-catalysis for green chemistry development including the strategy of using microwave heating with nano-catalysis in benign aqueous reaction media which offers an extraordinary synergistic effect
Positron studies in catalysis research
During the past eight months, the authors have made progress in several areas relevant to the eventual use of positron techniques in catalysis research. They have come closer to the completion of their positron microscope, and at the same time have performed several studies in their non-microscopic positron spectrometer which should ultimately be applicable to catalysis. The current status of the efforts in each of these areas is summarized in the following sections: Construction of the positron microscope (optical element construction, data collection software, and electronic sub-assemblies); Doppler broadening spectroscopy of metal silicide; Positron lifetime spectroscopy of glassy polymers; and Positron lifetime measurements of pore-sizes in zeolites
Catalysis and sustainable (green) chemistry
Centi, Gabriele; Perathoner, Siglinda [Dipartimento di Chimica Industriale ed Ingegneria dei Materiali, University of Messina, Salita Sperone 31, 98166 Messina (Italy)
Catalysis is a key technology to achieve the objectives of sustainable (green) chemistry. After introducing the concepts of sustainable (green) chemistry and a brief assessment of new sustainable chemical technologies, the relationship between catalysis and sustainable (green) chemistry is discussed and illustrated via an analysis of some selected and relevant examples. Emphasis is also given to the concept of catalytic technologies for scaling-down chemical processes, in order to develop sustainable production processes which reduce the impact on the environment to an acceptable level that allows self-depuration processes of the living environment.
Catalysis. Innovative applications in petrochemistry and refining. Preprints
Ernst, S.; Balfanz, U.; Jess, A.; Lercher, J.A.; Lichtscheidl, J.; Marchionna, M.; Nees, F.; Santacesaria, E. (eds.)
Within the DGMK conference from 4 to 6 October 2011 in Dresden (Federal Republic of Germany) the following lectures were held: (1) Developing linear-alpha-olefins technology - From laboratory to a commercial plant (A. Meiswinkel); (2) New developments in oxidation catalysis (F. Rosowski); (3) Study of the performance of vanadium based catalysts prepared by grafting in the oxidative dehydrogenation of propane (E. Santacesaria); (4) Hydrocracking for oriented conversion of heavy oils: recent trends for catalyst development (F. Bertoncini); (5) Acidic ionic liquids for n-alkane isomerization in a liquid-liquid or slurry-phase reaction mode (C. Meyer); (6) Dual catalyst system for the hydrocracking of heavy oils and residues (G. Bellussi); (7) Understanding hydrodenitrogenation on novel unsupported sulphide Mo-W-Ni catalysts (J. Hein); (8) Hydrocracking of ethyl laurate on bifunctional micro-/mesoporous composite materials (M. Adam); (9) Catalytic dehydration of ethanol to ethylene (Ying Zhu); (10) The Evonik-Uhde HPPO process for propylene oxide production (B. Jaeger); (11) A green two-step process for adipic acid production from cyclohexene: A study on parameters affecting selectivity (F. Cavani); (12) DISY: The direct synthesis of hydrogen peroxide, a bridge for innovative applications (R. Buzzoni); (13) Solid catalyst with ionic liquid layer (SCILL) - A concept to improve the selectivity of selective hydrogenations (A. Jess); (14) Co-Zn-Al based hydrotalcites as catalysts for the Fischer-Tropsch process (C.L. Bianchi); (15) Honeycomb supports with high thermal conductivity for the Fischer-Tropsch synthesis (C.G. Visconti); (16) How to make Fischer-Tropsch catalyst scale-up fully reliable (L. Fischer); (17) New developments in FCC catalysis (C.P. Kelkar); (18) The potential of medium-pore zeolites for improved propene yields from catalytic cracking (F. Bager).
EMSL and Institute for Integrated Catalysis (IIC) Catalysis Workshop
Campbell, Charles T.; Datye, Abhaya K.; Henkelman, Graeme A.; Lobo, Raul F.; Schneider, William F.; Spicer, Leonard D.; Tysoe, Wilfred T.; Vohs, John M.; Baer, Donald R.; Hoyt, David W.; Thevuthasan, Suntharampillai; Mueller, Karl T.; Wang, Chong M.; Washton, Nancy M.; Lyubinetsky, Igor; Teller, Raymond G.; Andersen, Amity; Govind, Niranjan; Kowalski, Karol; Kabius, Bernd C.; Wang, Hongfei; Campbell, Allison A.; Shelton, William A.; Bylaska, Eric J.; Peden, Charles HF; Wang, Yong; King, David L.; Henderson, Michael A.; Rousseau, Roger J.; Szanyi, Janos; Dohnalek, Zdenek; Mei, Donghai; Garrett, Bruce C.; Ray, Douglas; Futrell, Jean H.; Laskin, Julia; DuBois, Daniel L.; Kuprat, Laura R.; Plata, Charity
Within the context of significantly accelerating scientific progress in research areas that address important societal problems, a workshop was held in November 2010 at EMSL to identify specific and topically important areas of research and capability needs in catalysis-related science.
A combined continuous microflow photochemistry and asymmetric organocatalysis approach for the enantioselective synthesis of tetrahydroquinolines
Erli Sugiono
A continuous-flow asymmetric organocatalytic photocyclization–transfer hydrogenation cascade reaction has been developed. The new protocol allows the synthesis of tetrahydroquinolines from readily available 2-aminochalcones using a combination of photochemistry and asymmetric Brønsted acid catalysis. The photocyclization and subsequent reduction were performed with a catalytic amount of a chiral BINOL-derived phosphoric acid diester and a Hantzsch dihydropyridine as the hydrogen source, providing the desired products in good yields and with excellent enantioselectivities.
Asymmetric Ashes
that oscillate in certain directions. Reflection or scattering of light favours certain orientations of the electric and magnetic fields over others. This is why polarising sunglasses can filter out the glint of sunlight reflected off a pond. When light scatters through the expanding debris of a supernova, it retains information about the orientation of the scattering layers. If the supernova is spherically symmetric, all orientations will be present equally and will average out, so there will be no net polarisation. If, however, the gas shell is not round, a slight net polarisation will be imprinted on the light. This is what broad-band polarimetry can accomplish. If additional spectral information is available ('spectro-polarimetry'), one can determine whether the asymmetry is in the continuum light or in some spectral lines. In the case of the Type Ia supernovae, the astronomers found that the continuum polarisation is very small so that the overall shape of the explosion is crudely spherical. But the much larger polarization in strongly blue-shifted spectral lines evidences the presence, in the outer regions, of fast moving clumps with peculiar chemical composition. "Our study reveals that explosions of Type Ia supernovae are really three-dimensional phenomena," says Dietrich Baade. "The outer regions of the blast cloud is asymmetric, with different materials found in 'clumps', while the inner regions are smooth." "This study was possible because polarimetry could unfold its full strength thanks to the light-collecting power of the Very Large Telescope and the very precise calibration of the FORS instrument," he adds. The research team first spotted this asymmetry in 2003, as part of the same observational campaign (ESO PR 23/03 and ESO PR Photo 26/05). The new, more extensive results show that the degree of polarisation and, hence, the asphericity, correlates with the intrinsic brightness of the explosion. The brighter the supernova, the smoother, or less clumpy
Evaluation of commercial and sulfated ZrO₂ aiming at application in catalysis
Silva, F.N.; Dantas, J.; Costa, A.C.F.M.; Pallone, E.M.J.A.; Dutra, R.C.L.
This study evaluates the performance of commercial and sulfated ZrO₂ for future application in catalysis. Commercial ZrO₂ was provided by the company Saint-Gobain Zirpro. The sulfation was carried out with a SO₄²⁻ ion content of 30% relative to the mass of ZrO₂. The samples were characterized by XRD, FTIR, EDX and GD. The results revealed the formation of a monoclinic phase for the commercial sample, and a monoclinic major phase with tetragonal traces for the sulfated sample. The commercial ZrO₂ showed a narrow, bimodal and asymmetric agglomerate distribution, while the sulfated sample showed a narrow, tetramodal and asymmetric agglomerate distribution. The presence of traces of the tetragonal phase in the XRD of SO₄²⁻/ZrO₂, and the presence of SO₃ in the EDX, were good indicators for future use in catalysis for ester production. (author)
Cyclopalladated complexes in enantioselective catalysis
Dunina, Valeria V; Gorunova, Olga N; Zykov, P A; Kochetkov, Konstantin A
The results of the use of optically active palladacycles in enantioselective catalysis of [3,3]-sigmatropic rearrangements, aldol condensation, the Michael reaction and cross-coupling are analyzed. Reactions with allylic substrates or reagents and some other transformations are considered.
Catalysis in Molten Ionic Media
Boghosian, Soghomon; Fehrmann, Rasmus
This chapter deals with catalysis in molten salts and ionic liquids, which are introduced and reviewed briefly, while an in-depth review of the oxidation catalyst used for the manufacturing of sulfuric acid and cleaning of flue gas from electrical power plants is the main topic of the chapter...
Molecular catalysis science: Perspective on unifying the fields of catalysis.
Ye, Rong; Hurlburt, Tyler J; Sabyrov, Kairat; Alayoglu, Selim; Somorjai, Gabor A
Colloidal chemistry is used to control the size, shape, morphology, and composition of metal nanoparticles. Model catalysts as such are applied to catalytic transformations in the three types of catalysts: heterogeneous, homogeneous, and enzymatic. Real-time dynamics of oxidation state, coordination, and bonding of nanoparticle catalysts are put under the microscope using surface techniques such as sum-frequency generation vibrational spectroscopy and ambient pressure X-ray photoelectron spectroscopy under catalytically relevant conditions. It was demonstrated that catalytic behavior and trends are strongly tied to oxidation state, the coordination number and crystallographic orientation of metal sites, and bonding and orientation of surface adsorbates. It was also found that catalytic performance can be tuned by carefully designing and fabricating catalysts from the bottom up. Homogeneous and heterogeneous catalysts, and likely enzymes, behave similarly at the molecular level. Unifying the fields of catalysis is the key to achieving the goal of 100% selectivity in catalysis.
New and future developments in catalysis catalysis by nanoparticles
Suib, Steven L
New and Future Developments in Catalysis is a package of seven books that compile the latest ideas concerning alternate and renewable energy sources and the role that catalysis plays in converting new renewable feedstock into biofuels and biochemicals. Both homogeneous and heterogeneous catalysts and catalytic processes will be discussed in a unified and comprehensive approach. There will be extensive cross-referencing within all volumes. The use of catalysts in the nanoscale offers various advantages (increased efficiency and less byproducts), and these are discussed in this volume along with the various catalytic processes using nanoparticles. However, this is not without any risks and the safety aspects and effects on humans and the environment are still unknown. The present data as well as future needs are all part of this volume along with the economics involved. Offers in-depth coverage of all catalytic topics of current interest and outlines future challenges and research areas A clear and visual descr...
Chemical catalysis in biodiesel production (I): enzymatic catalysis processes
Jachmarian, I.; Dobroyan, M.; Veira, J.; Vieitez, I.; Mottini, M.; Segura, N.; Grompone, M.
There are some well-known advantages related to the substitution of chemical catalysis by enzymatic catalysis processes. Some commercial immobilized lipases are useful for catalyzing the biodiesel reaction, which permits the achievement of high conversions and the recovery of high-purity products, such as a high-quality glycerine. The main disadvantage of this alternative method is related to the eventual inactivation of the enzyme (by both the effect of the alcohol and the adsorption of glycerol on the catalyst surface), which, added to the high cost of the catalyst, produces an unfavourable economic balance for the entire process. In this work the efficiency of two commercial immobilized lipases (Lipozyme TL IM and Novozyme 435, Novozymes, Denmark) in the catalysis of the continuous transesterification of sunflower oil with different alcohols was studied. The intersolubility of the different mixtures involving reactants (S: oil/alkyl esters/alcohol) and products (P: mixtures with a glycerol content higher than 1%), while for ethanol homogeneous mixtures were obtained at 12% of glycerol (44.44 12). Using an ethanolic substrate at the proportion S=19:75:6 and Lipozyme TL IM, it was possible to achieve a 98% conversion to the corresponding biodiesel. When Novozyme 435 catalyzed the process it was possible to increase the oil concentration in the substrate according to the proportion S=35:30:35, and a 78% conversion was obtained. The productivity shown by the first enzyme was 70 mg biodiesel g enzyme⁻¹ h⁻¹, while with the second one the productivity increased to 230. Results suggested that convenient adjustment of the substrate composition, with the addition of biodiesel to the reactants, offers an efficient method for maximizing enzyme productivity, hence improving the profitability of the enzymatically catalyzed process. (author)
Catalysis in the Primordial World
Catalysis provides orderly prebiotic synthesis and eventually its evolution into autocatalytic (self-reproduction) systems. Research on homogeneous catalysis is concerned mostly with random peptide synthesis and the chances of producing catalytic peptide oligomers. Synthesis of ribose via the formose reaction was found to be catalysed by B(OH)₄⁻, presumably released by weathering of borate minerals. Oxide and clay mineral surfaces provide catalytic sites for the synthesis of oligopeptides and oligonucleotides. The chemoautotrophic or iron-sulphur-world theory assumes that the first (pioneer) organisms developed by catalytic processes on (Fe/Ni)S particles formed near hydrothermal vents. The review provides an overview of possible catalytic reactions in the prebiotic environment, discussing their selectivity (regioselectivity, stereoselectivity) as well as the geological availability of catalytic minerals and the geochemical conditions enabling catalytic reactions on the early Earth.
New Tools for CO2 Fixation by Homogeneous Catalysis - Final Technical Report
Jessop, Phillip G.
The overall goal is the development of new or more efficient methods for the conversion of CO₂ into useful organic products, via the design or discovery of new catalysts, ligands, solvents, and methods. Specific objectives for this funded period: (1) To develop a high-throughput screening technique and use it to develop an efficient catalyst/reagent/solvent system for the synthesis of ureas or carboxylic acids. (2) To use in-situ spectroscopic and kinetic methods to study the mechanism of the synthesis of ureas or carboxylic acids. (3) To develop bifunctional ligands capable of secondary interactions with CO₂, to detect the interactions, and to demonstrate applications to catalysis.
Active sites engineering of metal-organic frameworks for heterogeneous catalysis
Li, Xinle [Iowa State Univ., Ames, IA (United States)
In conclusion, we have for the first time developed a novel solid base catalyst, N-doped MOF-253-derived porous carbon (Cz-MOF-253). Cz-MOF-253 is highly porous and exhibits high efficiency in the Knoevenagel condensation reaction. Furthermore, Cz-MOF-253 is robust and can be reused up to five times. In comparison, the analogous nitrogen-free catalyst Cz-DUT-5 and other nitrogen-containing MOF-derived carbons showed inferior performance. Moreover, the high basicity and porous nature enable the design of bifunctional catalysts and facilitate tandem condensation-hydrogenation reactions. This work is the first demonstration of MOF-derived carbons as solid base catalysts and of their potential application in tandem catalysis. Future work on exploring new catalytic reactions based on such porous Lewis basic MOF-derived carbons is currently underway.
Carbon in bifunctional air electrodes in alkaline solution
Tryk, D.; Aldred, W.; Yeager, E.
Bifunctional O₂ electrodes can be used both to reduce and to generate O₂ in rechargeable metal-air batteries and fuel cells. The factors controlling the O₂ reduction and generation reactions in gas-diffusional bifunctional O₂ electrodes are discussed. The resistance of such electrodes, as established from voltammetry curves, has been found to increase markedly during anodic polarization and to be dependent upon the electrode fabrication technique. Carbon blacks with more graphitic structure than Shawinigan black have been found to be more resistant to electro-oxidation. The further extension of cycle life of bifunctional electrodes using carbon is critically dependent on finding more oxidation-resistant carbons that at the same time have other surface properties meeting the requirements for catalyzed gas-diffusion electrodes
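One common way to quantify the electrode resistance mentioned above is the slope of the ohmic region of the current-voltage (voltammetry) curve. A minimal sketch with synthetic data (not the authors' measurements) follows:

import numpy as np

# Synthetic ohmic-region data: V = V0 + I*R with a little noise (assumed values).
rng = np.random.default_rng(0)
current = np.linspace(0.05, 0.50, 10)                 # A
true_r, v0 = 0.8, 0.45                                # ohm and V, chosen for illustration
voltage = v0 + true_r * current + rng.normal(0, 0.005, current.size)

# Least-squares slope of V versus I gives the effective series resistance of the electrode.
slope, intercept = np.polyfit(current, voltage, 1)
print(f"fitted resistance ~ {slope:.2f} ohm (intercept {intercept:.2f} V)")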
Main regularities of radiolytic transformations of bifunctional organic compounds
Petryaev, E.P.; Shadyro, O.I.
General regularities of the radiolysis of bifunctional organic compounds (α-diols, ethers of α-diols, amino alcohols, hydroxy aldehydes and hydroxy acids) in aqueous solutions are traced from the early stages of the process to the formation of final products. It is pointed out that the most characteristic course of radiation-chemical transformation of bifunctional compounds in aqueous solutions is the fragmentation process, with monomolecular decomposition of the primary radicals of the initial substances and simultaneous scission of two bonds vicinal to the radical centre via a five-membered cyclic transient state. The data obtained are of importance for molecular radiobiology
Cosmic strings and baryon decay catalysis
Gregory, R.; Perkins, W.B.; Davis, A.C.; Brandenberger, R.H. (Fermi National Accelerator Lab., Batavia, IL (USA); Cambridge Univ. (UK); Brown Univ., Providence, RI (USA). Dept. of Physics)
Cosmic strings, like monopoles, can catalyze proton decay. For integer charged fermions, the cross section for catalysis is not amplified, unlike in the case of monopoles. We review the catalysis processes both in the free quark and skyrmion pictures and discuss the implications for baryogenesis. We present a computation of the cross section for monopole catalyzed skyrmion decay using classical physics. We also discuss some effects which can screen catalysis processes. 32 refs., 1 fig.
Magnetic monopole catalysis of proton decay
Marciano, W.J.; Salvino, D.
Catalysis of proton decay by GUT magnetic monopoles (the Rubakov-Callan effect) is discussed. Combining a short-distance cross section calculation by Bernreuther and Craigie with the long-distance velocity-dependent distortion factors of Arafune and Fukugita, catalysis rate predictions which can be compared with experiment are obtained. At present, hydrogen-rich detectors such as water (H₂O) and methane (CH₄) appear to be particularly well suited for observing catalysis by very slow monopoles. 17 refs., 1 fig
Catalysis by Design Using Surface Organometallic Nitrogen-Containing Fragments
Hamzaoui, Bilel
The aim of this thesis is to explore the chemistry of well-defined silica-supported group 4 and group 5 complexes that contain one or more multiply-bonded nitrogen atoms. Such species have been recognized as crucial intermediates in many catalytic reactions (e.g. hydroaminoalkylation, olefin hydrogenation, imine metathesis…). The first chapter provides a bibliographic overview of the preparation and the reactivity of group 4 and 5 complexes towards hydroaminoalkylation and imine metathesis catalysis. The second chapter deals with the isolation and the characterization of a series of well-defined group 4 η²-imine surface species. 2D solid-state NMR (¹H–¹³C HETCOR, Multiple Quantum) experiments have consistently revealed a unique structural rearrangement, viz. azametallacycle formation occurring on the immobilized metal-amido ligands. Hydrogenolysis of the sole Zr-C bond in such species selectively gives a silica-supported zirconium monohydride that can perform the catalytic hydrogenation of olefins. The third chapter examines mechanistic studies of the intermolecular hydroaminoalkylation using SOMC to identify the key metallacyclic surface intermediates (silica-supported three-membered and five-membered). The catalyst was regenerated by protonolysis and afforded pure amine. Catalytic testing of a selection of amine compounds with variable electronic properties was carried out. The fourth chapter deals with the generation and the characterization of well-defined silica-supported zirconium-imido complexes. The resulting species effectively catalyze imine/imine cross-metathesis and are thus considered the first heterogeneous catalysts active for the imine metathesis reaction. The fifth chapter studies the reaction of SBA-15 pretreated at 1100 °C with dry aniline and derivatives, leading to the opening of strained siloxane bridges into acid-base paired functionalities (formation of N-phenylsilanamine-silanol pairs). This approach was successfully applied to the design of a series of
Hydrogen Production by Homogeneous Catalysis: Alcohol Acceptorless Dehydrogenation
Nielsen, Martin
in hydrogen production from biomass using homogeneous catalysis. Homogeneous catalysis has the advantage of generally performing transformations under much milder conditions than traditional heterogeneous catalysis, and hence it constitutes a promising tool for future applications for a sustainable energy sector...
The synthesis of new oxazoline-containing bifunctional catalysts and their application in the addition of diethylzinc to aldehydes.
Coeffard, Vincent; Müller-Bunz, Helge; Guiry, Patrick J
The straightforward preparation of new modular oxazoline-containing bifunctional catalysts is reported employing a microwave-assisted Buchwald-Hartwig aryl amination as the key step. Covalent attachment of 2-(o-aminophenyl)oxazolines and pyridine derivatives generated in good-to-high yields a series of ligands in two or three steps in which each part was altered independently to tune the activity and the selectivity of the corresponding catalysts. These catalysts prepared in situ were subsequently applied in the asymmetric addition of diethylzinc to various aldehydes, producing the corresponding alcohols with enantioselectivities of up to 68%. A transition state model, based on relevant X-ray crystal structures, has also been proposed to explain the observed stereoselectivities.
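For orientation, the best enantioselectivity reported here, 68% ee, corresponds to an enantiomer ratio of $\tfrac{1+0.68}{2} : \tfrac{1-0.68}{2} = 84{:}16$, i.e. roughly a 5:1 preference for the major alcohol enantiomer. This is the standard ee-to-er conversion, not a figure taken from the paper.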
Crystallization of bi-functional ligand protein complexes.
Antoni, Claudia; Vera, Laura; Devel, Laurent; Catalani, Maria Pia; Czarny, Bertrand; Cassar-Lajeunesse, Evelyn; Nuti, Elisa; Rossello, Armando; Dive, Vincent; Stura, Enrico Adriano
Homodimerization is important in signal transduction and can play a crucial role in many other biological systems. To obtain structural information for the design of molecules able to control these signalling pathways, the proteins involved will have to be crystallized in complex with ligands that induce dimerization. Bi-functional drugs have been generated by linking two ligands together chemically, and the relative crystallizability of complexes with mono-functional and bi-functional ligands has been evaluated. There are problems associated with crystallization with such ligands, but overall, the advantages appear to be greater than the drawbacks. The study involves two matrix metalloproteinases, MMP-12 and MMP-9. Using flexible and rigid linkers we show that it is possible to control the crystal packing and that by changing the ligand-enzyme stoichiometric ratio, one can toggle between having one bi-functional ligand binding to two enzymes and having the same ligand bound to each enzyme. The nature of the linker and its point of attachment on the ligand can be varied to aid crystallization, and such variations can also provide valuable structural information about the interactions made by the linker with the protein. We report here the crystallization and structure determination of seven ligand-dimerized complexes. These results suggest that the use of bi-functional drugs can be extended beyond the realm of protein dimerization to include all drug design projects. Copyright © 2013 Elsevier Inc. All rights reserved.
Environmentally Benign Bifunctional Solid Acid and Base Catalysts
Elmekawy, A.; Shiju, N.R.; Rothenberg, G.; Brown, D.R.
Solid bifunctional acid-base catalysts were prepared in two ways on an amorphous silica support: (1) by grafting mercaptopropyl units (followed by oxidation to propylsulfonic acid) and aminopropyl groups to the silica surface (NH₂-SiO₂-SO₃H), and (2) by grafting only aminopropyl groups and then
Bifunctional xylanases and their potential use in biotechnology
Digital Repository Service at National Institute of Oceanography (India)
Khandeparker, R.; Numan, M.Th.
Single flexible nanofiber to simultaneously realize electricity-magnetism bifunctionality
Yang, Ming; Sheng, Shujuan; Ma, Qianli; Lv, Nan; Yu, Wensheng; Wang, Jinxian; Dong, Xiangting; Liu, Guixia
In order to develop new types of multifunctional composite nanofibers, PANI/Fe₃O₄/PVP flexible bifunctional composite nanofibers with simultaneous electrical conduction and magnetism have been successfully fabricated via a facile electrospinning technology. Polyvinyl pyrrolidone (PVP) is used as a matrix to construct composite nanofibers containing different amounts of polyaniline (PANI) and Fe₃O₄ nanoparticles (NPs). The bifunctional composite nanofibers simultaneously possess excellent electrical conductivity and magnetic properties. The electrical conductivity reaches up to the order of 10⁻³ S·cm⁻¹. The electrical conductivity and saturation magnetization of the composite nanofibers can be respectively tuned by adding various amounts of PANI and Fe₃O₄ NPs. The obtained electricity-magnetism bifunctional composite nanofibers are expected to possess many potential applications in areas such as electromagnetic interference shielding, special coatings, microwave absorption, molecular electronics and future nanomechanics. More importantly, the design concept and construction technique are of universal significance for fabricating other bifunctional one-dimensional nanostructures. (author)
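To connect the quoted conductivity to a measurable quantity, the standard relation sigma = L/(R*A) can be inverted for the two-point resistance a single fiber would present. The geometry below is purely hypothetical; only the order of magnitude of sigma comes from the record:

import math

# Hypothetical single-fiber geometry (assumed, not reported in this record).
length_cm = 0.1                          # electrode separation along the fiber, cm (1 mm)
diameter_cm = 500e-7                     # 500 nm fiber diameter expressed in cm
area_cm2 = math.pi * (diameter_cm / 2.0) ** 2

sigma = 1e-3                             # S/cm, order of magnitude quoted above
resistance_ohm = length_cm / (sigma * area_cm2)   # R = L / (sigma * A)
print(f"expected two-point resistance ~ {resistance_ohm:.1e} ohm")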
Fundamental concepts in heterogeneous catalysis
Norskov, Jens K; Abild-Pedersen, Frank; Bligaard, Thomas
Based on a graduate course and suitable as a primer for any newcomer to the field, this book is a detailed introduction to the experimental and computational methods that are used to study how solid surfaces act as catalysts. Features include: the first comprehensive description of the modern theory of heterogeneous catalysis; a basis for understanding and designing experiments in the field; allows the reader to understand catalyst design principles; an introduction to important elements of energy transformation technology; test-driven at Stanford University over several semesters.
DOE Laboratory Catalysis Research Symposium - Abstracts
Dunham, T.
The conference consisted of two sessions with the following subtopics: (1) Heterogeneous Session: Novel Catalytic Materials; Photocatalysis; Novel Processing Conditions; Metals and Sulfides; Nuclear Magnetic Resonance; Metal Oxides and Partial Oxidation; Electrocatalysis; and Automotive Catalysis. (2) Homogeneous Catalysis: H-Transfer and Alkane Functionalization; Biocatalysis; Oxidation and Photocatalysis; and Novel Medical, Methods, and Catalyzed Reactions.
Computational Design of Clusters for Catalysis
Jimenez-Izal, Elisa; Alexandrova, Anastassia N.
When small clusters are studied in chemical physics or physical chemistry, one perhaps thinks of the fundamental aspects of cluster electronic structure, or precision spectroscopy in ultracold molecular beams. However, small clusters are also of interest in catalysis, where the cold ground state or an isolated cluster may not even be the right starting point. Instead, the big question is: What happens to cluster-based catalysts under real conditions of catalysis, such as high temperature and coverage with reagents? Myriads of metastable cluster states become accessible, the entire system is dynamic, and catalysis may be driven by rare sites present only under those conditions. Activity, selectivity, and stability are highly dependent on size, composition, shape, support, and environment. To probe and master cluster catalysis, sophisticated tools are being developed for precision synthesis, operando measurements, and multiscale modeling. This review intends to tell the messy story of clusters in catalysis.
"Nanocrystal bilayer for tandem catalysis"
Yamada, Yusuke; Tsung, Chia Kuang; Huang, Wenyu; Huo, Ziyang; Habas, Susan E.; Soejima, Tetsuro; Aliaga, Cesar E.; Somorjai, Gabor A.; Yang, Peidong
Supported catalysts are widely used in industry and can be optimized by tuning the composition and interface of the metal nanoparticles and oxide supports. Rational design of metal-metal oxide interfaces in nanostructured catalysts is critical to achieve better reaction activities and selectivities. We introduce here a new class of nanocrystal tandem catalysts that have multiple metal-metal oxide interfaces for the catalysis of sequential reactions. We utilized a nanocrystal bilayer structure formed by assembling platinum and cerium oxide nanocube monolayers of less than 10 nm on a silica substrate. The two distinct metal-metal oxide interfaces, CeO2-Pt and Pt-SiO2, can be used to catalyse two distinct sequential reactions. The CeO2-Pt interface catalysed methanol decomposition to produce CO and H2, which were subsequently used for ethylene hydroformylation catalysed by the nearby Pt-SiO2 interface. Consequently, propanal was produced selectively from methanol and ethylene on the nanocrystal bilayer tandem catalyst. This new concept of nanocrystal tandem catalysis represents a powerful approach towards designing high-performance, multifunctional nanostructured catalysts
Curvature bound from gravitational catalysis
Gies, Holger; Martini, Riccardo
We determine bounds on the curvature of local patches of spacetime from the requirement of intact long-range chiral symmetry. The bounds arise from a scale-dependent analysis of gravitational catalysis and its influence on the effective potential for the chiral order parameter, as induced by fermionic fluctuations on a curved spacetime with local hyperbolic properties. The bound is expressed in terms of the local curvature scalar measured in units of a gauge-invariant coarse-graining scale. We argue that any effective field theory of quantum gravity obeying this curvature bound is safe from chiral symmetry breaking through gravitational catalysis and thus compatible with the simultaneous existence of chiral fermions in the low-energy spectrum. With increasing number of dimensions, the curvature bound in terms of the hyperbolic scale parameter becomes stronger. Applying the curvature bound to the asymptotic safety scenario for quantum gravity in four spacetime dimensions translates into bounds on the matter content of particle physics models.
Molecular complexity from polyunsaturated substrates: the gold catalysis approach.
Fensterbank, Louis; Malacria, Max
Over the last two decades, electrophilic catalysis relying on platinum(II), gold(I), and gold(III) salts has emerged as a remarkable synthetic methodology. Chemists have discovered a large variety of organic transformations that convert a great assortment of highly functionalized precursors into valuable final products. In many cases, these methodologies offer unique features, allowing access to unprecedented molecular architectures. Due to the mild reaction conditions and high function compatibility, scientists have successfully developed applications in total synthesis of natural products, as well as in asymmetric catalysis. In addition, all these developments have been accompanied by the invention of well-tailored catalysts, so that a palette of different electrophilic agents is now commercially available or readily synthesized at the bench. In some respects, researchers' interests in developing homogeneous gold catalysis can be compared with the Californian gold rush of the 19th century. It has attracted into its fervor thousands of scientists, providing a huge number of versatile and important reports. More notably, it is clear that the contribution to the art of organic synthesis is very valuable, though the quest is not over yet. Because they rely on the intervention of previously unknown types of intermediates, new retrosynthetic disconnections are now possible. In this Account, we discuss our efforts on the use of readily available polyunsaturated precursors, such as enynes, dienynes, allenynes, and allenenes to give access to highly original polycyclic structures in a single operation. These transformations transit via previously undescribed intermediates A, B, D, F, and H that will be encountered later on. All these intermediates have been determined by both ourselves and others by DFT calculations and in some cases have been confirmed on the basis of experimental data. In addition, dual gold activation can be at work in some of these transformations
Principles of asymmetric synthesis
Gawley, Robert E; Aube, Jeffrey
The world is chiral. Most of the molecules in it are chiral, and asymmetric synthesis is an important means by which enantiopure chiral molecules may be obtained for study and sale. Using examples from the literature of asymmetric synthesis, this book presents a detailed analysis of the factors that govern stereoselectivity in organic reactions. After an explanation of the basic physical-organic principles governing stereoselective reactions, the authors provide a detailed, annotated glossary of stereochemical terms. A chapter on "Practical Aspects of Asymmetric Synthesis" provides a critical overview of the most common methods for the preparation of enantiomerically pure compounds, techniques for analysis of stereoisomers using chromatographic, spectroscopic, and chiroptical methods. The authors then present an overview of the most important methods in contemporary asymmetric synthesis organized by reaction type. Thus, there are four chapters on carbon-carbon bond forming reactions, one chapter on reductions...
Surface science and heterogeneous catalysis
The catalytic reactions studied include hydrocarbon conversion over platinum, the transition metal-catalyzed hydrogenation of carbon monoxide, and the photocatalyzed dissociation of water over oxide surfaces. The method of combined surface science and catalytic studies is similar to those used in synthetic organic chemistry. The single-crystal models for the working catalyst are compared with real catalysts by comparing the rates of cyclopropane ring opening on platinum and the hydrogenation of carbon monoxide on rhodium single-crystal surfaces with those on practical commercial catalyst systems. Excellent agreement was obtained for these reactions. This document reviews what was learned about heterogeneous catalysis from these surface science approaches over the past 15 years and presents models of the active catalyst surface.
Tandem catalysis: a new approach to polymers.
Robert, Carine; Thomas, Christophe M
The creation of polymers by tandem catalysis represents an exciting frontier in materials science. Tandem catalysis is one of the strategies used by Nature for building macromolecules. Living organisms generally synthesize macromolecules by in vivo enzyme-catalyzed chain growth polymerization reactions using activated monomers that have been formed within cells during complex metabolic processes. However, these biological processes rely on highly complex biocatalysts, thus limiting their industrial applications. In order to obtain polymers by tandem catalysis, homogeneous and enzyme catalysts have played a leading role in the last two decades. In the following feature article, we will describe selected published efforts to achieve these research goals.
Supported Ionic Liquid Phase (SILP) catalysis
Riisager, Anders; Fehrmann, Rasmus; Haumann, Marco
Applications of ionic liquids to replace conventional solvents in homogeneous transition-metal catalysis have increased significantly during the last decade. Biphasic ionic liquid/organic liquid systems offer advantages with regard to product separation, catalyst stability, and recycling...... but utilise in the case of fast chemical reactions only a small amount of expensive ionic liquid and catalyst. The novel Supported Ionic Liquid Phase (SILP) catalysis concept overcomes these drawbacks and allows the use of fixed-bed reactors for continuous reactions. In this Microreview the SILP catalysis...
The aminoindanol core as a key scaffold in bifunctional organocatalysts
Isaac G. Sonsona
The 1,2-aminoindanol scaffold has been found to be very efficient, enhancing the enantioselectivity when present in organocatalysts. This may be explained by its ability to induce a bifunctional activation of the substrates involved in the reaction. Thus, it is easy to find hydrogen-bonding organocatalysts (thioureas, squaramides, quinolinium thioamides, etc.) in the literature containing this favored structural core. They have been successfully employed in reactions such as Friedel–Crafts alkylation, Michael addition, Diels–Alder and aza-Henry reactions. However, the 1,2-aminoindanol core incorporated into proline derivatives has been scarcely explored. Herein, the most representative and illustrative examples are compiled; this review will be mainly focused on the cases where the aminoindanol moiety confers bifunctionality to the organocatalysts.
Bifunctional chelates of Rh-105 and Au-199 as potential radiotherapeutic agents
Troutner, D.E.; Schlemper, E.O.
Since last year we have: continued the synthesis of pentadentate bifunctional chelating agents based on diethylene triamine; studied the chelation of Rh-105, Au-198 (as a model for Au-199) and Tc-99m with these agents, as well as the chelation of Pd-109, Cu-67, In-111, and Co-57 with some of them; synthesized a new class of potential bifunctional chelating agents based on phenylene diamine; investigated the behavior of Au-198 as a model for Au-199; begun synthesis of bifunctional chelating agents based on terpyridyl and similar ligands; and continued attempts to produce tetradentate bifunctional chelates based on diaminopropane. Each of these will be addressed in this report.
[Bifunctional chelates of Rh-105, Au-199, and other metallic radionuclides as potential radiotherapeutic agents]
Progress during this period is reported under the following headings: diethylenetriamine-based and related bifunctional chelating agents and their complexation with Rh-105, Au-198, Pd-109, Cu-67, In-111, and Co-57; studies of Pd-109, Rh-105 and Tc-99m with bifunctional chelates based on phenylenediamine; establishment of an appropriate protein assay method for conjugated proteins; studies of new bifunctional bi-, tri- and tetradentate amine oxime ligands with Rh-105; IgG and antibody B72.3 conjugation studies by HPLC techniques with bifunctional metal chelates; and progress on ligand systems for Au(III).
(Bifunctional chelates of Rh-105, Au-199, and other metallic radionuclides as potential radiotherapeutic agents)
Nanoscale intimacy in bifunctional catalysts for selective conversion of hydrocarbons
Zecevic, Jovana; Vanbutsele, Gina; de Jong, Krijn P.; Martens, Johan A.
The ability to control nanoscale features precisely is increasingly being exploited to develop and improve monofunctional catalysts. Striking effects might also be expected in the case of bifunctional catalysts, which are important in the hydrocracking of fossil and renewable hydrocarbon sources to provide high-quality diesel fuel. Such bifunctional hydrocracking catalysts contain metal sites and acid sites, and for more than 50 years the so-called intimacy criterion has dictated the maximum distance between the two types of site, beyond which catalytic activity decreases. A lack of synthesis and material-characterization methods with nanometre precision has long prevented in-depth exploration of the intimacy criterion, which has often been interpreted simply as 'the closer the better' for positioning metal and acid sites. Here we show for a bifunctional catalyst—comprising an intimate mixture of zeolite Y and alumina binder, and with platinum metal controllably deposited on either the zeolite or the binder—that closest proximity between metal and zeolite acid sites can be detrimental. Specifically, the selectivity when cracking large hydrocarbon feedstock molecules for high-quality diesel production is optimized with the catalyst that contains platinum on the binder, that is, with a nanoscale rather than closest intimacy of the metal and acid sites. Thus, cracking of the large and complex hydrocarbon molecules that are typically derived from alternative sources, such as gas-to-liquid technology, vegetable oil or algal oil, should benefit especially from bifunctional catalysts that avoid locating platinum on the zeolite (the traditionally assumed optimal location). More generally, we anticipate that the ability demonstrated here to spatially organize different active sites at the nanoscale will benefit the further development and optimization of the emerging generation of multifunctional catalysts.
Relation between Hydrogen Evolution and Hydrodesulfurization Catalysis
Šaric, Manuel; Moses, Poul Georg; Rossmeisl, Jan
A relation between hydrogen evolution and hydrodesulfurization catalysis was found by density functional theory calculations. The hydrogen evolution reaction and the hydrogenation reaction in hydrodesulfurization share hydrogen as a surface intermediate and, thus, have a common elementary step...
Special section on Nano-Catalysis
CSIR Research Space (South Africa)
Makgwane, PR
to achieve sustainable and green catalytic processes. The special issue contains 40 peer-reviewed scientific papers, including four comprehensive review article contributions from invited experts in the respective catalysis fields....
Current trends of surface science and catalysis
Park, Jeong Young
Including detail on applying surface science in renewable energy conversion, this book covers the latest results on model catalysts including single crystals, bridging "materials and pressure gaps", and hot electron flows in heterogeneous catalysis.
Faraday Discussions meeting Catalysis for Fuels.
Fischer, Nico; Kondrat, Simon A; Shozi, Mzamo
'Welcome to Africa' was the motto when, after more than 100 years, the flagship conference series of the Royal Society of Chemistry, the Faraday Discussions, was hosted for the first time on the African continent. Under the fitting topic 'Catalysis for Fuels', over 120 delegates followed the invitation of the conference chair Prof. Graham Hutchings FRS (Cardiff Catalysis Institute), his organizing committee and the co-organizing DST-NRF Centre of Excellence in Catalysis c*change. In the presentations of 21 invited speakers and 59 posters, cutting-edge research in the field of catalysis for fuels, designing new catalysts for synthetic fuels, hydrocarbon conversion in the production of synthetic fuels and novel photocatalysis was presented over the two-day meeting. The scene was set by the opening lecture of Prof. Enrique Iglesias (UC Berkeley) and wrapped up with the concluding remarks by Philip Gibson (SASOL).
Advancing Sustainable Catalysis with Magnetite Surface ...
This article surveys the recent developments in the synthesis, surface modification, and synthetic applications of magnetite nanoparticles. The emergence of iron(II,III) oxide (triiron tetraoxide or magnetite; Fe3O4, or FeO·Fe2O3) nanoparticles as a sustainable support in heterogeneous catalysis is highlighted, as is the use of this oxide of earth-abundant iron for various applications in catalysis and environmental remediation.
Aminomethylation of enals through carbene and acid cooperative catalysis: concise access to β(2)-amino acids.
Xu, Jianfeng; Chen, Xingkuan; Wang, Ming; Zheng, Pengcheng; Song, Bao-An; Chi, Yonggui Robin
A convergent, organocatalytic asymmetric aminomethylation of α,β-unsaturated aldehydes by N-heterocyclic carbene (NHC) and (in situ generated) Brønsted acid cooperative catalysis is disclosed. The catalytically generated conjugated acid from the base plays dual roles in promoting the formation of azolium enolate intermediate, formaldehyde-derived iminium ion (as an electrophilic reactant), and methanol (as a nucleophilic reactant). This redox-neutral strategy is suitable for the scalable synthesis of enantiomerically enriched β(2) -amino acids bearing various substituents. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
New developments in oxidation catalysis
Rosowski, F. [BASF SE, Ludwigshafen (Germany)]
The impact of heterogeneous catalysis on the economy can be illustrated by the global revenue of the chemical industry in 2006, which amounted to 2200 billion Euros, with about two thirds of all chemical products being produced using heterogeneous catalysis. [1] The range of products is enormous and they contribute greatly to the quality of our lives. The advancement of basic and intermediate chemical products depends crucially on either the further development of existing catalyst systems or the development of new catalysts, and is key to success for the chemical industry. Within the context of oxidation catalysis, the following driving forces are guiding research activities: There is a continuous desire to increase the selectivity of a given process in response to both economic and ecological needs, taking advantage of higher efficiencies in terms of cost savings and a better utilization of raw materials. A second motivation focuses on a raw material change to abundant and competitive feedstocks, requiring new developments in both catalyst design and process technology. A more recent motivation refers to the use of metal oxide redox systems, which are key to the development of novel technologies allowing for the separation of carbon dioxide, the use of carbon dioxide as a feedstock molecule, and the storage of renewable energy in chemical form. To date, general ab initio approaches to the design of novel catalytic materials are known only for a few chemical reactions, whereas most industrial catalytic processes have been developed by empirical methods. [2] The development of catalytic materials is based either on the targeted synthesis of catalytic lead structures or on high-throughput methods that allow for the screening of a large range of parameters. [3-5] The successful development of catalysts together with reactor technology has led to significant savings in both raw materials and emissions. The
Quantifying social asymmetric structures.
Solanas, Antonio; Salafranca, Lluís; Riba, Carles; Sierra, Vicenta; Leiva, David
Many social phenomena involve a set of dyadic relations among agents whose actions may be dependent. Although individualistic approaches have frequently been applied to analyze social processes, these are not generally concerned with dyadic relations, nor do they deal with dependency. This article describes a mathematical procedure for analyzing dyadic interactions in a social system. The proposed method consists mainly of decomposing asymmetric data into their symmetric and skew-symmetric parts. A quantification of skew symmetry for a social system can be obtained by dividing the norm of the skew-symmetric matrix by the norm of the asymmetric matrix. This calculation makes available to researchers a quantity related to the amount of dyadic reciprocity. With regard to agents, the procedure enables researchers to identify those whose behavior is asymmetric with respect to all agents. It is also possible to derive symmetric measurements among agents and to use multivariate statistical techniques.
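The decomposition described above is easy to reproduce numerically. The following is a minimal sketch (the 3×3 sociomatrix and variable names are invented for illustration, and the use of squared Frobenius norms is an assumption made so that the symmetric and skew-symmetric contributions sum to one):

import numpy as np

# Hypothetical sociomatrix: X[i, j] counts interactions directed from
# agent i to agent j (values are made up for illustration).
X = np.array([[0.0, 5.0, 2.0],
              [1.0, 0.0, 4.0],
              [2.0, 3.0, 0.0]])

S = (X + X.T) / 2.0   # symmetric part: reciprocal component of the dyads
K = (X - X.T) / 2.0   # skew-symmetric part: non-reciprocal component

# Global skew-symmetry index: 0 for fully reciprocal relations, values
# approaching 1 for strongly one-directional relations.
phi = np.linalg.norm(K, "fro") ** 2 / np.linalg.norm(X, "fro") ** 2

# Per-agent contribution to the overall asymmetry.
agent_asymmetry = (K ** 2).sum(axis=1)

print(phi, agent_asymmetry)

Because the Frobenius norms satisfy ||X||^2 = ||S||^2 + ||K||^2, the index is bounded between 0 and 1, which makes it directly interpretable as the share of the total dyadic variation that is non-reciprocal.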
Asymmetrical field emitter
Fleming, J.G.; Smith, B.K.
A method is disclosed for providing a field emitter with an asymmetrical emitter structure having a very sharp tip in close proximity to its gate. One preferred embodiment of the present invention includes an asymmetrical emitter and a gate. The emitter having a tip and a side is coupled to a substrate. The gate is connected to a step in the substrate. The step has a top surface and a side wall that is substantially parallel to the side of the emitter. The tip of the emitter is in close proximity to the gate. The emitter is at an emitter potential, and the gate is at a gate potential such that with the two potentials at appropriate values, electrons are emitted from the emitter. In one embodiment, the gate is separated from the emitter by an oxide layer, and the emitter is etched anisotropically to form its tip and its asymmetrical structure. 17 figs.
ISOTOPE METHODS IN HOMOGENEOUS CATALYSIS.
BULLOCK,R.M.; BENDER,B.R.
The use of isotope labels has had a fundamentally important role in the determination of mechanisms of homogeneously catalyzed reactions. Mechanistic data is valuable since it can assist in the design and rational improvement of homogeneous catalysts. There are several ways to use isotopes in mechanistic chemistry. Isotopes can be introduced into controlled experiments and followed where they go or don't go; in this way, Libby, Calvin, Taube and others used isotopes to elucidate mechanistic pathways for very different, yet important chemistries. Another important isotope method is the study of kinetic isotope effects (KIEs) and equilibrium isotope effects (EIEs). Here the mere observation of where a label winds up is no longer enough - what matters is how much slower (or faster) a labeled molecule reacts than the unlabeled material. The most careful studies essentially involve the measurement of isotope fractionation between a reference ground state and the transition state. Thus kinetic isotope effects provide unique data unavailable from other methods, since information about the transition state of a reaction is obtained. Because getting an experimental glimpse of transition states is really tantamount to understanding catalysis, kinetic isotope effects are very powerful.
Asymmetric ion trap
Barlow, Stephan E.; Alexander, Michael L.; Follansbee, James C.
An ion trap having two end cap electrodes disposed asymmetrically about a center of a ring electrode. The inner surface of the end cap electrodes are conformed to an asymmetric pair of equipotential lines of the harmonic formed by the application of voltages to the electrodes. The asymmetry of the end cap electrodes allows ejection of charged species through the closer of the two electrodes which in turn allows for simultaneously detecting anions and cations expelled from the ion trap through the use of two detectors charged with opposite polarity.
Specific acid catalysis and Lewis acid catalysis of Diels–Alder reactions in aqueous media
Mubofu, Egid B.; Engberts, Jan B.F.N.
A comparative study of specific acid catalysis and Lewis acid catalysis of Diels–Alder reactions between dienophiles (1, 4 and 6) and cyclopentadiene (2) in water and mixed aqueous media is reported. The reactions were performed in water with copper(II) nitrate as the Lewis acid catalyst whereas
Specific acid catalysis and Lewis acid catalysis of Diels-Alder reactions in aqueous media
Mubofu, E.B.; Engberts, J.B.F.N.
A comparative study of specific acid catalysis and Lewis acid catalysis of Diels-Alder reactions between dienophiles (1, 4 and 6) and cyclopentadiene (2) in water and mixed aqueous media is reported. The reactions were performed in water with copper(II) nitrate as the Lewis acid catalyst whereas
How Is Nature Asymmetric?
Resonance – Journal of Science Education, Volume 7, Issue 6. How Is Nature Asymmetric? - Discrete Symmetries in Particle Physics and their Violation. Indian Institute of Technology, Chennai; Aligarh Muslim University; University of Rajasthan, Jaipur; Indian Institute of Science, Bangalore 560012, India.
Exploring asymmetric catalytic transformations
Guduguntla, Sureshbabu
In Chapter 2, we report a highly enantioselective synthesis of β-alkyl-substituted alcohols through a one-pot Cu-catalyzed asymmetric allylic alkylation with organolithium reagents followed by reductive ozonolysis. The synthesis of γ-alkyl-substituted alcohols was also achieved through Cu-catalyzed
Final Technical Report: Metal—Organic Surface Catalyst for Low-temperature Methane Oxidation: Bi-functional Union of Metal—Organic Complex and Chemically Complementary Surface
Tait, Steven L. [Indiana Univ., Bloomington, IN (United States)]
Stabilization and chemical control of transition metal centers is a critical problem in the advancement of heterogeneous catalysts to next-generation catalysts that exhibit high levels of selectivity, while maintaining strong activity and facile catalyst recycling. Supported metal nanoparticle catalysts typically suffer from having a wide range of metal sites with different coordination numbers and varying chemistry. This project is exploring new possibilities in catalysis by combining features of homogeneous catalysts with those of heterogeneous catalysts to develop new, bi-functional systems. The systems are more complex than traditional heterogeneous catalysts in that they utilize sequential active sites to accomplish the desired overall reaction. The interaction of metal—organic catalysts with surface supports and their interactions with reactants to enable the catalysis of critical reactions at lower temperatures are at the focus of this study. Our work targets key fundamental chemistry problems. How do the metal—organic complexes interact with the surface? Can those metal center sites be tuned for selectivity and activity as they are in the homogeneous system by ligand design? What steps are necessary to enable a cooperative chemistry to occur and open opportunities for bi-functional catalyst systems? Study of these systems will develop the concept of bringing together the advantages of heterogeneous catalysis with those of homogeneous catalysis, and take this a step further by pursuing the objective of a bi-functional system. The use of metal-organic complexes in surface catalysts is therefore of interest to create well-defined and highly regular single-site centers. While these are not likely to be stable in the high temperature environments (> 300 °C) typical of industrial heterogeneous catalysts, they could be applied in moderate temperature reactions (100-300 °C), made feasible by lowering reaction temperatures by better catalyst control. They also
Nanocatalysis: Academic Discipline and Industrial Realities
Olveira, S.; Forster, S.P.; Seeger, S.
Nanotechnology plays a central role in both academic research and industrial applications. Nano-enabled products are not only found in consumer markets, but also, importantly, in business-to-business markets (B2B). One of the oldest application areas of nanotechnology is nanocatalysis, an excellent example of such a B2B market. Several existing reviews illustrate the scientific developments in the field of nanocatalysis. The goal of the present review is to provide an up-to-date picture of academic research and to extend this picture with an industrial and economic perspective. We therefore conducted an extensive search of several scientific databases and further analyzed more than 1,500 nanocatalysis-related patents and numerous market studies. We found that scientists today are able to prepare nanocatalysts with superior characteristics regarding activity, selectivity, durability, and recoverability, which will contribute to solving current environmental, social, and industrial problems. In industry, the potential of nanocatalysis is recognized, clearly reflected by the increasing number of nanocatalysis-related patents and products on the market. The current nanocatalysis research in academic and industrial laboratories will therefore enable a wealth of future applications in industry.
Achieving bifunctional cloak via combination of passive and active schemes
Lan, Chuwen; Bi, Ke; Gao, Zehua; Li, Bo; Zhou, Ji
In this study, a simple and delicate approach to realizing manipulation of multi-physics field simultaneously through combination of passive and active schemes is proposed. In the design, one physical field is manipulated with passive scheme while the other with active scheme. As a proof of this concept, a bifunctional device is designed and fabricated to behave as electric and thermal invisibility cloak simultaneously. It is found that the experimental results are consistent with the simulated ones well, confirming the feasibility of our method. Furthermore, the proposed method could also be extended to other multi-physics fields, which might lead to potential applications in thermal, electric, and acoustic areas.
Nano-materials are important in many diverse areas, from basic research to various applications in electronics, biochemical sensors, catalysis and energy. They have emerged as sustainable alternatives to conventional materials, as robust high surface area heterogeneous catalysts and catalyst supports. The nano-sized particles increase the exposed surface area of the active component of the catalyst, thereby enhancing the contact between reactants and catalyst dramatically and mimicking the homogeneous catalysts. This review focuses on the use of nano-catalysis for green chemistry development including the strategy of using microwave heating with nano-catalysis in benign aqueous reaction media which offers an extraordinary synergistic effect with greater potential than these three components in isolation. To illustrate the proof-of-concept of this "green and sustainable" approach, representative examples are discussed in this article. © 2010 The Royal Society of Chemistry.
Quantum catalysis : the modelling of catalytic transition states
Hall, M.B.; Margl, P.; Naray-Szabo, G.; Schramm, Vern; Truhlar, D.G.; Santen, van R.A.; Warshel, A.; Whitten, J.L.; Truhlar, D.G.; Morokuma, K.
A review with 101 refs.; we present an introduction to the computational modeling of transition states for catalytic reactions. We consider both homogeneous catalysis and heterogeneous catalysis, including organometallic catalysts, enzymes, zeolites and metal oxides, and metal surfaces. We summarize
Competing role of catalysis-coagulation and catalysis-fragmentation in kinetic aggregation behaviours
Li Xiao-Dong; Lin Zhen-Quan; Song Mei-Xia; Ke Jian-Hong
We propose a kinetic aggregation model where species A aggregates evolve by the catalysis-coagulation and the catalysis-fragmentation, while the catalyst aggregates of the same species B or C perform self-coagulation processes. By means of the generalized Smoluchowski rate equation based on the mean-field assumption, we study the kinetic behaviours of the system with the catalysis-coagulation rate kernel K(i,j;l) ∝ l^ν and the catalysis-fragmentation rate kernel F(i,j;l) ∝ l^μ, where l is the size of the catalyst aggregate, and ν and μ are two parameters reflecting the dependence of the catalysis reaction on the size of the catalyst aggregate. The relation between the values of the parameters ν and μ reflects the competing roles between the two catalysis processes in the kinetic evolution of species A. It is found that the competing roles of the catalysis-coagulation and catalysis-fragmentation in the kinetic aggregation behaviours are not determined simply by the relation between the two parameters ν and μ, but also depend on the values of these two parameters. When ν > μ and ν ≥ 0, the kinetic evolution of species A is dominated by the catalysis-coagulation and its aggregate size distribution a_k(t) obeys the conventional or generalized scaling law; when ν ..., a_k(t) approaches the scale-free form; and in other cases, a balance is established between the two competing processes at large times and a_k(t) obeys a modified scaling law. (cross-disciplinary physics and related areas of science and technology)
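For orientation, a generic mean-field rate equation of the kind referred to above can be sketched as follows; this is only an illustrative form (the exact kernels, the fragmentation term, and the treatment of the catalyst populations in the paper may differ):
\[
\frac{\mathrm{d}a_k}{\mathrm{d}t}
= \frac{1}{2}\sum_{i+j=k}\,\sum_{l\ge 1} K(i,j;l)\,a_i a_j b_l
\;-\; a_k\sum_{j\ge 1}\,\sum_{l\ge 1} K(k,j;l)\,a_j b_l
\;+\; \bigl(\text{catalysis-fragmentation terms with kernel } F(i,j;l)\propto l^{\mu}\bigr),
\qquad K(i,j;l)\propto l^{\nu},
\]
where $a_k(t)$ and $b_l(t)$ denote the concentrations of species-A aggregates of size $k$ and catalyst aggregates of size $l$, respectively; the first term creates size-$k$ aggregates by catalyst-assisted mergers of smaller ones, and the second removes them when they merge further.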
Next-Generation Catalysis for Renewables: Combining Enzymatic with Inorganic Heterogeneous Catalysis for Bulk Chemical Production
Vennestrøm, Peter Nicolai Ravnborg; Christensen, C.H.; Pedersen, S.
chemical platform under different conditions than those conventionally employed. Indeed, new process and catalyst concepts need to be established. Both enzymatic catalysis (biocatalysis) and heterogeneous inorganic catalysis are likely to play a major role and, potentially, be combined. One type...... of combination involves one-pot cascade catalysis with active sites from bio- and inorganic catalysts. In this article the emphasis is placed specifically on oxidase systems involving the coproduction of hydrogen peroxide, which can be used to create new in situ collaborative oxidation reactions for bulk...
A molecular view of heterogeneous catalysis
Christensen, Claus H.; Nørskov, Jens Kehlet
The establishment of a molecular view of heterogeneous catalysis has been hampered for a number of reasons. There are, however, recent developments, which show that we are now on the way towards reaching a molecular-scale picture of the way solids work as catalysts. By a combination of new...... by enabling a rational design of new catalysts. We illustrate this important development in heterogeneous catalysis by highlighting recent examples of catalyst systems for which it has been possible to achieve such a detailed understanding. In particular, we emphasize examples where this progress has made...
Keynotes in energy-related catalysis
Kaliaguine, S
Catalysis by solid acids, which includes (modified) zeolites, is of special relevance to energy applications. Acid catalysis is highly important in modern petroleum refining operations - large-scale processes such as fluid catalytic cracking, catalytic reforming, alkylation and olefin oligomerization rely on the transformation of hydrocarbons by acid catalysts. (Modified) zeolites are therefore essential for the improvement of existing processes and for technical innovations in the conversion of crude. There can be little doubt that zeolite-based catalysts will play a major role in the futu
Heterogeneous catalysis at nanoscale for energy applications
Tao, Franklin (Feng); Kamat, Prashant V
This book presents both the fundamentals concepts and latest achievements of a field that is growing in importance since it represents a possible solution for global energy problems. It focuses on an atomic-level understanding of heterogeneous catalysis involved in important energy conversion processes. It presents a concise picture for the entire area of heterogeneous catalysis with vision at the atomic- and nano- scales, from synthesis, ex-situ and in-situ characterization, catalytic activity and selectivity, to mechanistic understanding based on experimental exploration and theoretical si
Catalysis by nonmetals: rules for catalyst selection
Krylov, Oleg V
Catalysis by Non-metals: Rules of Catalyst Selection presents the development of scientific principles for the selection of catalysts. It discusses the investigation of the mechanism of chemisorption and catalysis. It addresses a series of properties of solids with catalytic activity. Some of the topics covered in the book are the properties of a solid and catalytic activity in oxidation-reduction reactions; the difference of electronegativities and the effective charges of atoms; the role of d-electrons in the catalytic properties of a solid; the color of solids; and proton-acid and proton-ba
Perspectives in the development of hybrid bifunctional antitumour agents.
Musso, Loana; Dallavalle, Sabrina; Zunino, Franco
In spite of the development of a large number of novel target-specific antitumour agents, single-agent therapy is in general not able to provide effective, durable control of the malignant process. The limited efficacy of the available agents (both conventional cytotoxic and novel target-specific) reflects not only the expression of defence mechanisms, but also the complexity of tumour cell alterations and the redundancy of survival pathways, thus resulting in tumour cell ability to survive under stress conditions. A well-established strategy to improve the efficacy of antitumour therapy is the rational design of drug combinations aimed at achieving synergistic effects and overcoming drug resistance. An alternative strategy could be the use of agents designed to inhibit multiple cellular targets relevant to tumour growth/survival simultaneously. Among these novel agents are hybrid bifunctional drugs, i.e. compounds resulting from the conjugation of different drugs or containing the pharmacophores of different drugs. This strategy has been pursued using various conventional or target-specific agents (with DNA-damaging agents and histone deacetylase inhibitors as the most exploited compounds). A critical overview of the most representative compounds is provided, with emphasis on the HDAC inhibitor-based hybrid agents. In spite of some promising results, the actual pharmacological advantages of the hybrid agents remain to be defined. This commentary summarizes the recent advances in this field and highlights the pharmacological basis for a rational design of hybrid bifunctional agents. Copyright © 2015. Published by Elsevier Inc.
Multipartite asymmetric quantum cloning
Iblisdir, S.; Gisin, N.; Acin, A.; Cerf, N.J.; Filip, R.; Fiurasek, J.
We investigate the optimal distribution of quantum information over multipartite systems in asymmetric settings. We introduce cloning transformations that take N identical replicas of a pure state in any dimension as input and yield a collection of clones with nonidentical fidelities. As an example, if the clones are partitioned into a set of M_A clones with fidelity F_A and another set of M_B clones with fidelity F_B, the trade-off between these fidelities is analyzed, and particular cases of optimal N→M_A+M_B cloning machines are exhibited. We also present an optimal 1→1+1+1 cloning machine, which is an example of a tripartite fully asymmetric cloner. Finally, it is shown how these cloning machines can be optically realized.
Asymmetric information and economics
Frieden, B. Roy; Hawkins, Raymond J.
We present an expression of the economic concept of asymmetric information with which it is possible to derive the dynamical laws of an economy. To illustrate the utility of this approach we show how the assumption of optimal information flow leads to a general class of investment strategies including the well-known Q theory of Tobin. Novel consequences of this formalism include a natural definition of market efficiency and an uncertainty principle relating capital stock and investment flow.
Asymmetric Evolutionary Games
McAvoy, Alex; Hauert, Christoph
Evolutionary game theory is a powerful framework for studying evolution in populations of interacting individuals. A common assumption in evolutionary game theory is that interactions are symmetric, which means that the players are distinguished by only their strategies. In nature, however, the microscopic interactions between players are nearly always asymmetric due to environmental effects, differing baseline characteristics, and other possible sources of heterogeneity. To model these phenomena, we introduce into evolutionary game theory two broad classes of asymmetric interactions: ecological and genotypic. Ecological asymmetry results from variation in the environments of the players, while genotypic asymmetry is a consequence of the players having differing baseline genotypes. We develop a theory of these forms of asymmetry for games in structured populations and use the classical social dilemmas, the Prisoner's Dilemma and the Snowdrift Game, for illustrations. Interestingly, asymmetric games reveal essential differences between models of genetic evolution based on reproduction and models of cultural evolution based on imitation that are not apparent in symmetric games. PMID:26308326
leading to opening strained siloxane bridges into acid-base paired functionalities (formation of N-phenylsilanamine-silanol pairs). This approach was successfully applied to the design of a series of aniline derivatives bifunctional SBA15. The efficiency
µ-reactors for Heterogeneous Catalysis
Jensen, Robert
is described in detail. Since heating and temperature measurement is an extremely important point in heterogeneous catalysis an entire chapter is dedicated to this subject. Three different types of heaters have been implemented and tested both for repeatability and homogeneity of the heating as well...
Heterogeneous catalysis in highly sensitive microreactors
Olsen, Jakob Lind
This thesis present a highly sensitive silicon microreactor and examples of its use in studying catalysis. The experimental setup built for gas handling and temperature control for the microreactor is described. The implementation of LabVIEW interfacing for all the experimental parts makes...
Supported ionic liquid-phase (SILP) catalysis
Riisager, Anders; Fehrmann, Rasmus; Wasserscheid, P.
The concept of supported ionic liquid-phase (SILP) catalysis has been demonstrated for gas- and liquid-phase continuous fixed-bed reactions using rhodium phosphine catalyzed hydroformylation of propene and 1-octene as examples. The nature of the support had important influence on both the catalytic...
Rate tracer studies of heterogeneous catalysis
Happel, J; Kiang, S
An analysis is presented of the extent to which parameters involved in transient tracing of isotopic species in heterogeneous catalysis can be determined by experiments in which tracer concentrations are measured as a function of time. Different treatments for open and closed systems with the over-all reaction at equilibrium or irreversible were developed.
Transition metal catalysis in confined spaces
Leenders, S.H.A.M.
Chemical reactions are required for the conversion of feedstocks to valuable materials, such as different types of plastics, pharmaceutical ingredients and advanced materials. In order to facilitate the conversion of these feedstocks to a wide array of products, catalysis plays a prominent role.
Constructing Asymmetric Polyion Complex Vesicles via Template Assembling Strategy: Formulation Control and Tunable Permeability
Junbo Li
A strategy for constructing polyion complex vesicles (PICsomes) with asymmetric structure is described. Poly(methacrylic acid)-block-poly(N-isopropylacrylamide)-modified gold nanoparticles (PMAA-b-PNIPAm@Au NPs) were prepared and then assembled with poly(ethylene glycol)-block-poly[1-methyl-3-(2-methacryloyloxypropyl)imidazolium bromide] (PEG-b-PMMPImB) via polyion complexation of PMAA and PMMPImB. After removing the Au NP template, asymmetric PICsomes composed of a PNIPAm inner shell, a PIC wall, and a PEG outer corona were obtained. These PICsomes have low protein absorption and thermally tunable permeability, provided by the PEG outer corona and the PNIPAm inner shell, respectively. Moreover, PICsome size can be tailored by using templates of predetermined sizes. This novel strategy for constructing asymmetric PICsomes with well-defined properties and controllable size is valuable for applications such as drug delivery, catalysis and monitoring of chemical reactions, and biomimetics.
Insights into reaction mechanisms in heterogeneous catalysis revealed by in situ NMR spectroscopy.
Blasco, Teresa
This tutorial review intends to show the possibilities of in situ solid state NMR spectroscopy in the elucidation of reaction mechanisms and the nature of the active sites in heterogeneous catalysis. After a brief overview of the more usual experimental devices used for in situ solid state NMR spectroscopy measurements, some examples of applications taken from the recent literature will be presented. It will be shown that in situ NMR spectroscopy allows: (i) the identification of stable intermediates and transient species using indirect methods, (ii) to prove shape selectivity in zeolites, (iii) the study of reaction kinetics, and (iv) the determination of the nature and the role played by the active sites in a catalytic reaction. The approaches and methodology used to get this information will be illustrated here summarizing the most relevant contributions on the investigation of the mechanisms of a series of reactions of industrial interest: aromatization of alkanes on bifunctional catalysts, carbonylation reaction of methanol with carbon monoxide, ethylbenzene disproportionation, and the Beckmann rearrangement reaction. Special attention is paid to the research carried out on the role played by carbenium ions and alkoxy as intermediate species in the transformation of hydrocarbon molecules on solid acid catalysts.
Alkene Metathesis Catalysis: A Key for Transformations of Unsaturated Plant Oils and Renewable Derivatives
Dixneuf Pierre H.
This account presents the importance of ruthenium-catalysed alkene cross-metathesis for the catalytic transformations of biomass derivatives into useful intermediates, especially those developed by the authors in the Rennes (France) catalysis team in cooperation with the chemical industry. The cross-metathesis of a variety of functional alkenes arising from plant oils with acrylonitrile and fumaronitrile, followed by catalytic tandem hydrogenation, will be shown to afford linear amino acid derivatives, the precursors of polyamides. The exploration of cross-metathesis of bio-sourced unsaturated nitriles with acrylates, with further catalytic hydrogenation, has led to an excellent route to α,ω-amino acid derivatives. That of fatty aldehydes has led to bifunctional long-chain aldehydes and saturated diols. Two routes of access to functional dienes, by ruthenium-catalyzed ene-yne cross-metathesis of plant oil alkene derivatives with alkynes and by cross-metathesis of bio-sourced alkenes with allylic chloride followed by catalytic dehydrohalogenation, are reported. Ricinoleate derivatives offer a direct access to chiral dihydropyrans and tetrahydropyrans via ring-closing metathesis. Cross-metathesis giving value to terpenes and eugenol for the straightforward synthesis of artificial terpenes and functional eugenol derivatives without C=C bond isomerization is described.
Biomedical Applications of Gold Nanoparticles Functionalized Using Hetero-Bifunctional Poly(ethylene glycol) Spacer
Fu, Wei; Shenoy, Dinesh; Li, Jane; Crasto, Curtis; Jones, Graham; Dimarzio, Charles; Sridhar, Srinivas; Amiji, Mansoor
To increase the targeting potential, circulation time, and the flexibility of surface-attached biomedically-relevant ligands on gold nanoparticles, hetero-bifunctional poly(ethylene glycol) (PEG, MW 1,500...
Bifunctional chelating agent for the design and development of site specific radiopharmaceuticals and biomolecule conjugation strategy
Katti, Kattesh V.; Prabhu, Kandikere R.; Gali, Hariprasad; Pillarsetty, Nagavara Kishore; Volkert, Wynn A.
There is provided a method of labeling a biomolecule with a transition metal or radiometal in a site-specific manner to produce a diagnostic or therapeutic pharmaceutical compound by synthesizing a P2N2-bifunctional chelating agent intermediate, complexing the intermediate with a radiometal or a transition metal, and covalently linking the resulting metal-complexed bifunctional chelating agent with a biomolecule in a site-specific manner. Also provided is a method of synthesizing the -PR2-containing biomolecules by synthesizing a P2N2-bifunctional chelating agent intermediate, complexing the intermediate with a radiometal or a transition metal, and covalently linking the resulting radiometal-complexed bifunctional chelating agent with a biomolecule in a site-specific manner. There is provided a therapeutic or diagnostic agent comprising a -PR2-containing biomolecule.
Design and Testing of Bi-Functional, P-Loop-Targeted MDM2 Inhibitors
Prives, Carol L; Stockwell, Brent R
Our proposal is to design and evaluate a novel class of bifunctional MDM2 inhibitors, based on the discovery that nucleotides can bind to the P-loop of MDM2 and cause its relocalization to the nucleolus...
Prives, Carol L
This proposal is to design and evaluate a novel class of bifunctional MDM2 inhibitors, based on the discovery that nucleotides can bind to the P-loop of MDM2 and cause its relocalization to the nucleolus...
Synergistic Interaction within Bifunctional Ruthenium Nanoparticle/SILP Catalysts for the Selective Hydrodeoxygenation of Phenols.
Luska, Kylie L; Migowski, Pedro; El Sayed, Sami; Leitner, Walter
Ruthenium nanoparticles immobilized on acid-functionalized supported ionic liquid phases (Ru NPs@SILPs) act as efficient bifunctional catalysts in the hydrodeoxygenation of phenolic substrates under batch and continuous flow conditions. A synergistic interaction between the metal sites and acid groups within the bifunctional catalyst leads to enhanced catalytic activities for the overall transformation as compared to the individual steps catalyzed by the separate catalytic functionalities. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Asymmetric quantum cloning machines
Cerf, N.J.
A family of asymmetric cloning machines for quantum bits and N-dimensional quantum states is introduced. These machines produce two approximate copies of a single quantum state that emerge from two distinct channels. In particular, an asymmetric Pauli cloning machine is defined that makes two imperfect copies of a quantum bit, while the overall input-to-output operation for each copy is a Pauli channel. A no-cloning inequality is derived, characterizing the impossibility of copying imposed by quantum mechanics. If p and p' are the probabilities of the depolarizing channels associated with the two outputs, the domain in (√p, √p')-space located inside a particular ellipse representing close-to-perfect cloning is forbidden. This ellipse tends to a circle when copying an N-dimensional state with N→∞, which has a simple semi-classical interpretation. The symmetric Pauli cloning machines are then used to provide an upper bound on the quantum capacity of the Pauli channel of probabilities p_x, p_y and p_z. The capacity is proven to be vanishing if (√p_x, √p_y, √p_z) lies outside an ellipsoid whose pole coincides with the depolarizing channel that underlies the universal cloning machine. Finally, the tradeoff between the quality of the two copies is shown to result from a complementarity akin to the Heisenberg uncertainty principle. (author)
Enantioselective syntheses of aeruginosin 298-A and its analogues using a catalytic asymmetric phase-transfer reaction and epoxidation.
Ohshima, Takashi; Gnanadesikan, Vijay; Shibuguchi, Tomoyuki; Fukuta, Yuhei; Nemoto, Tetsuhiro; Shibasaki, Masakatsu
We developed a versatile synthetic process for aeruginosin 298-A as well as several attractive analogues, in which all stereocenters were controlled by a catalytic asymmetric phase-transfer reaction and epoxidation. Furthermore, drastic counteranion effects in phase-transfer catalysis were observed for the first time, making it possible to three-dimensionally fine-tune the catalyst (ketal part, aromatic part, and counteranion).
Synthesis of deuterium-labeled analogs of the lipid hydroperoxide-derived bifunctional electrophile 4-oxo-2(E)-nonenal
Arora, Jasbir S.; Oe, Tomoyuki; Blair, Ian A.
Lipid hydroperoxides undergo homolytic decomposition into the bifunctional 4-hydroxy-2(E)-nonenal and 4-oxo-2(E)-nonenal (ONE). These bifunctional electrophiles are highly reactive and can readily modify intracellular molecules including glutathione (GSH), deoxyribonucleic acid (DNA) and proteins. Lipid hydroperoxide-derived bifunctional electrophiles are thought to contribute to the pathogenesis of a number of diseases. ONE is an α,β-unsaturated aldehyde that can react in multiple ways and w...
Primary amine/CSA ion pair: A powerful catalytic system for the asymmetric enamine catalysis
Liu, Chen; Zhu, Qiang; Huang, Kuo-Wei; Lu, Yixin
A novel ion pair catalyst containing a chiral counteranion can be readily derived by simply mixing cinchona alkaloid-derived diamine with chiral camphorsulfonic acid (CSA). A mixture of 9-amino(9-deoxy)epi-quinine 8 and (-)-CSA was found to be the best catalyst with matching chirality, enabling the direct amination of α-branched aldehydes to proceed in quantitative yields and with nearly perfect enantioselectivities. A 0.5 mol % catalyst loading was sufficient to catalyze the reaction, and a gram scale enantioselective synthesis of biologically important α-methyl phenylglycine has been successfully demonstrated. © 2011 American Chemical Society.
DNA-based asymmetric catalysis : Sequence-dependent rate acceleration and enantioselectivity
Boersma, Arnold J.; Klijn, Jaap E.; Feringa, Ben L.; Roelfes, Gerard
This study shows that the role of DNA in the DNA-based enantioselective Diels-Alder reaction of azachalcone with cyclopentadiene is not limited to that of a chiral scaffold. DNA in combination with the copper complex of 4,4'-dimethyl-2,2'-bipyridine (Cu-L1) gives rise to a rate acceleration of up to
Liu, Chen
as a Novel Chiral Ligand for Catalysis of the Asymmetric Diels-Al
NJD
Approaches to single-nanoparticle catalysis.
Sambur, Justin B; Chen, Peng
Nanoparticles are among the most important industrial catalysts, with applications ranging from chemical manufacturing to energy conversion and storage. Heterogeneity is a general feature among these nanoparticles, with their individual differences in size, shape, and surface sites leading to variable, particle-specific catalytic activity. Assessing the activity of individual nanoparticles, preferably with subparticle resolution, is thus desired and vital to the development of efficient catalysts. It is challenging to measure the activity of single-nanoparticle catalysts, however. Several experimental approaches have been developed to monitor catalysis on single nanoparticles, including electrochemical methods, single-molecule fluorescence microscopy, surface plasmon resonance spectroscopy, X-ray microscopy, and surface-enhanced Raman spectroscopy. This review focuses on these experimental approaches, the associated methods and strategies, and selected applications in studying single-nanoparticle catalysis with chemical selectivity, sensitivity, or subparticle spatial resolution.
ELECTROCHEMICAL PROMOTED CATALYSIS: TOWARDS PRACTICAL UTILIZATION
DIMITRIOS TSIPLAKIDES
Electrochemical promotion (EP) of catalysis has already been recognized as "a valuable development in catalytic research" (J. Pritchard, 1990) and as "one of the most remarkable advances in electrochemistry since 1950" (J. O'M. Bockris, 1996). Laboratory studies have clearly elucidated the phenomenology of electrochemical promotion and have proven that EP is a general phenomenon at the interface of catalysis and electrochemistry. The major progress toward practical utilization of EP is surveyed in this paper. The focus is given to the electropromotion of an industrial ammonia synthesis catalyst, bipolar EP, and the development of a novel monolithic electropromoted reactor (MEPR) in conjunction with the electropromotion of thin sputtered metal films. Future perspectives of electrochemical promotion applications in the field of hydrogen technologies are discussed.
Symmetry and asymmetry in mandelate racemase catalysis
Whitman, C.P.; Hegeman, G.D.; Cleland, W.W.; Kenyon, G.L.
Kinetic properties of mandelate racemase catalysis (Vmax, Km, deuterium isotope effects, and pH profiles) were all measured in both directions by the circular dichroic assay of Sharp. These results, along with those of studying interactions of mandelate racemase with resolved, enantiomeric competitive inhibitors [(R)- and (S)-alpha-phenylglycerates], indicate a high degree of symmetry in both binding and catalysis. Racemization of either enantiomer of mandelate in D2O did not show an overshoot region of molecular ellipticity in circular dichroic measurements upon approach to equilibrium. Both the absence of such an overshoot region and the high degree of kinetic symmetry are consistent with a one-base acceptor mechanism for mandelate racemase. On the other hand, results of irreversible inhibition with partially resolved, enantiomeric affinity labels [(R)- and (S)-alpha-phenylglycidates] reveal a "functional asymmetry" at the active site. Mechanistic proposals, consistent with these results, are presented.
Cinchona alkaloids in asymmetric organocatalysis
Marcelli, T.; Hiemstra, H.
This article reviews the applications of cinchona alkaloids as asymmetric catalysts. In the last few years, characterized by the resurgence of interest in asymmetric organocatalysis, cinchona derivatives have been shown to catalyze an outstanding array of chemical reactions, often with remarkable
Alternative Asymmetric Stochastic Volatility Models
M. Asai (Manabu); M.J. McAleer (Michael)
The stochastic volatility model usually incorporates asymmetric effects by introducing the negative correlation between the innovations in returns and volatility. In this paper, we propose a new asymmetric stochastic volatility model, based on the leverage and size effects. The model is
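For context, the standard leverage-type asymmetric stochastic volatility specification (a textbook form given here only for orientation, not the new model proposed in the paper) couples the return and log-volatility innovations through a negative correlation:
\[
y_t = \varepsilon_t\,\mathrm{e}^{h_t/2}, \qquad
h_{t+1} = \mu + \phi\,(h_t - \mu) + \eta_t, \qquad
\begin{pmatrix}\varepsilon_t\\ \eta_t\end{pmatrix}
\sim \mathcal{N}\!\left(\mathbf{0},\,
\begin{pmatrix}1 & \rho\,\sigma_\eta\\ \rho\,\sigma_\eta & \sigma_\eta^2\end{pmatrix}\right),
\qquad \rho < 0,
\]
where $y_t$ is the demeaned return and $h_t$ the log-volatility; the negative correlation $\rho$ generates the leverage effect (negative return shocks tend to raise future volatility), while size effects are typically introduced by letting the volatility response depend on the magnitude of $y_t$ as well.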
USD Catalysis Group for Alternative Energy
Hoefelmeyer, James D.; Koodali, Ranjit; Sereda, Grigoriy; Engebretson, Dan; Fong, Hao; Puszynski, Jan; Shende, Rajesh; Ahrenkiel, Phil
The South Dakota Catalysis Group (SDCG) is a collaborative project with mission to develop advanced catalysts for energy conversion with two primary goals: (1) develop photocatalytic systems in which polyfunctionalized TiO2 are the basis for hydrogen/oxygen synthesis from water and sunlight (solar fuels group), (2) develop new materials for hydrogen utilization in fuel cells (fuel cell group). In tandem, these technologies complete a closed chemical cycle with zero emissions.
Confined catalysis under two-dimensional materials
Li, Haobo; Xiao, Jianping; Fu, Qiang; Bao, Xinhe
Small spaces in nanoreactors may have big implications in chemistry, because the chemical nature of molecules and reactions within the nanospaces can be changed significantly due to the nanoconfinement effect. Two-dimensional (2D) nanoreactor formed under 2D materials can provide a well-defined model system to explore the confined catalysis. We demonstrate a general tendency for weakened surface adsorption under the confinement of graphene overlayer, illustrating the feasible modulation of su...
Hybrid nuclear reactors and muon catalysis
Petrov, Yu.
Three methods are described for the conversion of the isotope 238U to 239Pu: neutron capture in fast breeder reactors, breeding in the blanket of hybrid thermonuclear reactors using neutrons generated by fusion, and electronuclear breeding in which the target is bombarded with 1 GeV protons. Their possible use in power production is discussed. Another prospective energy source is the use of muon catalysis in the fusion of deuterium and tritium nuclei. (J.P.)
Nanoscale Advances in Catalysis and Energy Applications
Li, Yimin; Somorjai, Gabor A.
In this perspective, we present an overview of nanoscience applications in catalysis, energy conversion, and energy conservation technologies. We discuss how novel physical and chemical properties of nanomaterials can be applied and engineered to meet the advanced material requirements in the new generation of chemical and energy conversion devices. We highlight some of the latest advances in these nanotechnologies and provide an outlook at the major challenges for further developments.
Predictive Modeling in Actinide Chemistry and Catalysis
Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]
These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.
Catalysis in micellar and macromolecular systems
Fendler, Janos
Catalysis in Micellar and Macromolecular Systems provides a comprehensive monograph on the catalyses elicited by aqueous and nonaqueous micelles, synthetic and naturally occurring polymers, and phase-transfer catalysts. It delineates the principles involved in designing appropriate catalytic systems throughout. Additionally, an attempt has been made to tabulate the available data exhaustively. The book discusses the preparation and purification of surfactants; the physical and chemical properties of surfactants and micelles; solubilization in aqueous micellar systems; and the principles of
Mechanism of dTTP Inhibition of the Bifunctional dCTP Deaminase:dUTPase Encoded by Mycobacterium tuberculosis
Helt, Signe Smedegaard; Thymark, Majbritt; Harris, Pernille
Recombinant deoxycytidine triphosphate (dCTP) deaminase from Mycobacterium tuberculosis was produced in Escherichia coli and purified. The enzyme proved to be a bifunctional dCTP deaminase:deoxyuridine triphosphatase. As such, the M. tuberculosis enzyme is the second bifunctional enzyme...
Asymmetric Realized Volatility Risk
David E. Allen
In this paper, we document that realized variation measures constructed from high-frequency returns reveal a large degree of volatility risk in stock and index returns, where we characterize volatility risk by the extent to which forecasting errors in realized volatility are substantive. Even though returns standardized by ex post quadratic variation measures are nearly Gaussian, this unpredictability brings considerably more uncertainty to the empirically relevant ex ante distribution of returns. Explicitly modeling this volatility risk is fundamental. We propose a dually asymmetric realized volatility model, which incorporates the fact that realized volatility series are systematically more volatile in high volatility periods. Returns in this framework display time varying volatility, skewness and kurtosis. We provide a detailed account of the empirical advantages of the model using data on the S&P 500 index and eight other indexes and stocks.
Asymmetric Higgsino dark matter.
Blum, Kfir; Efrati, Aielet; Grossman, Yuval; Nir, Yosef; Riotto, Antonio
In the supersymmetric framework, prior to the electroweak phase transition, the existence of a baryon asymmetry implies the existence of a Higgsino asymmetry. We investigate whether the Higgsino could be a viable asymmetric dark matter candidate. We find that this is indeed possible. Thus, supersymmetry can provide the observed dark matter abundance and, furthermore, relate it with the baryon asymmetry, in which case the puzzle of why the baryonic and dark matter mass densities are similar would be explained. To accomplish this task, two conditions are required. First, the gauginos, squarks, and sleptons must all be very heavy, such that the only electroweak-scale superpartners are the Higgsinos. With this spectrum, supersymmetry does not solve the fine-tuning problem. Second, the temperature of the electroweak phase transition must be low, in the (1-10) GeV range. This condition requires an extension of the minimal supersymmetric standard model.
Asymmetric Organocatalytic Cycloadditions
Mose, Rasmus
Since the onset of the new millennium the field of organocatalysis has undergone a great expansion led by investigations in the field of aminocatalysis. This thesis will address some recent developments in aminocatalyzed cycloadditions and provide a theoretical background hereto. Cycloadditions … has gained broad recognition as it has found several applications in academia and industry. The [4+2] cycloaddition has also been performed in an enantioselective aminocatalytic fashion which allows the generation of optically active products. In this thesis it is demonstrated how trienamines can undergo cascade reactions with different electron-deficient dienophiles in Diels-Alder/nucleophilic ring-closing reactions. This methodology opens up for the direct asymmetric formation of hydroisochromenes and hydroisoquinolines, which may possess interesting biological activities. It is also…
Neutrons for Catalysis: A Workshop on Neutron Scattering Techniques for Studies in Catalysis
Overbury, Steven H.; Coates, Leighton; Herwig, Kenneth W.; Kidder, Michelle
This report summarizes the Workshop on Neutron Scattering Techniques for Studies in Catalysis, held at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) on September 16 and 17, 2010. The goal of the Workshop was to bring experts in heterogeneous catalysis and biocatalysis together with neutron scattering experimenters to identify ways to attack new problems, especially Grand Challenge problems in catalysis, using neutron scattering. The Workshop locale was motivated by the neutron capabilities at ORNL, including the High Flux Isotope Reactor (HFIR) and the new and developing instrumentation at the SNS. Approximately 90 researchers met for 1 1/2 days with oral presentations and breakout sessions. Oral presentations were divided into five topical sessions aimed at a discussion of Grand Challenge problems in catalysis, dynamics studies, structure characterization, biocatalysis, and computational methods. Eleven internationally known invited experts spoke in these sessions. The Workshop was intended both to educate catalyst experts about the methods and possibilities of neutron methods and to educate the neutron community about the methods and scientific challenges in catalysis. Above all, it was intended to inspire new research ideas among the attendees. All attendees were asked to participate in one or more of three breakout sessions to share ideas and propose new experiments that could be performed using the ORNL neutron facilities. The Workshop was expected to lead to proposals for beam time at either the HFIR or the SNS; therefore, it was expected that each breakout session would identify a few experiments or proof-of-principle experiments and a leader who would pursue a proposal after the Workshop. Also, a refereed review article will be submitted to a prominent journal to present research and ideas illustrating the benefits and possibilities of neutron methods for catalysis research.
Controllable Catalysis with Nanoparticles: Bimetallic Alloy Systems and Surface Adsorbates
Chen, Tianyou; Rodionov, Valentin
Transition metal nanoparticles are privileged materials in catalysis due to their high specific surface areas and abundance of active catalytic sites. While many of these catalysts are quite useful, we are only beginning to understand the underlying catalytic mechanisms. Opening the "black box" of nanoparticle catalysis is essential to achieve the ultimate goal of catalysis by design. In this Perspective we highlight recent work addressing the topic of controlled catalysis with bimetallic alloy and "designer" adsorbate-stabilized metal nanoparticles.
Bifunctional avidin with covalently modifiable ligand binding site.
Jenni Leppiniemi
The extensive use of avidin and streptavidin in life sciences originates from the extraordinary tight biotin-binding affinity of these tetrameric proteins. Numerous studies have been performed to modify the biotin-binding affinity of (strept)avidin to improve the existing applications. Even so, (strept)avidin greatly favours its natural ligand, biotin. Here we engineered the biotin-binding pocket of avidin with a single point mutation S16C and thus introduced a chemically active thiol group, which could be covalently coupled with thiol-reactive molecules. This approach was applied to the previously reported bivalent dual chain avidin by modifying one binding site while preserving the other one intact. Maleimide was then coupled to the modified binding site resulting in a decrease in biotin affinity. Furthermore, we showed that this thiol could be covalently coupled to other maleimide derivatives, for instance fluorescent labels, allowing intratetrameric FRET. The bifunctional avidins described here provide improved and novel tools for applications such as the biofunctionalization of surfaces.
Bioinspired Bifunctional Membrane for Efficient Clean Water Generation.
Liu, Yang; Lou, Jinwei; Ni, Mengtian; Song, Chengyi; Wu, Jianbo; Dasgupta, Neil P; Tao, Peng; Shang, Wen; Deng, Tao
Solving the problems of water pollution and water shortage is an urgent need for the sustainable development of modern society. Different approaches, including distillation, filtration, and photocatalytic degradation, have been developed for the purification of contaminated water and the generation of clean water. In this study, we explored a new approach that uses solar light for both water purification and clean water generation. A bifunctional membrane consisting of a top layer of TiO2 nanoparticles (NPs), a middle layer of Au NPs, and a bottom layer of anodized aluminum oxide (AAO) was designed and fabricated through multiple filtration processes. Such a design enables both TiO2 NP-based photocatalytic function and Au NP-based solar-driven plasmonic evaporation. With the integration of these two functions into a single membrane, both the purification of contaminated water through photocatalytic degradation and the generation of clean water through evaporation were demonstrated using simulated solar illumination. Such a demonstration should also help open up a new strategy for maximizing solar energy conversion and utilization.
Plutonium and americium extraction studies with bifunctional organophosphorus extractants
Navratil, J.D.
Neutral bifunctional organophosphorus extractants, such as octylphenyl-N,N-diisobutylcarbamoylmethylphosphine oxide (CMPO) and dihexyl-N,N-diethylcarbamoylmethylphosphonate (CMP), are under study at the Rocky Flats Plant (RFP) to remove plutonium and americium from the 7M nitric acid waste. These compounds extract trivalent actinides from strong nitric acid, a property which distinguishes them from monofunctional organophosphorus reagents. Furthermore, the reagents extract hydrolytic plutonium(IV) polymer which is present in the acid waste stream. The compounds extract trivalent actinides with a 3:1 stoichiometry, whereas tetra- and hexavalent actinides extract with a stoichiometry of 2:1. Preliminary studies indicate that the extracted plutonium polymer complex contains one to two molecules of CMP per plutonium ion and the plutonium(IV) maintains a polymeric structure. Recent studies by Horwitz and co-workers conclude that the CMPO and CMP reagents behave as monodentate ligands. At RFP, three techniques are being tested for using CMP and CMPO to remove plutonium and americium from nitric acid waste streams. The different techniques are liquid-liquid extraction, extraction chromatography, and solid-supported liquid membranes. Recent tests of the last two techniques will be briefly described. In all the experiments, CMP was an 84% pure material from Bray Oil Co. and CMPO was 98% pure from M and T Chemicals.
A conserved regulatory mechanism in bifunctional biotin protein ligases.
Wang, Jingheng; Beckett, Dorothy
Class II bifunctional biotin protein ligases (BirA), which catalyze post-translational biotinylation and repress transcription initiation, are broadly distributed in eubacteria and archaea. However, it is unclear if these proteins all share the same molecular mechanism of transcription regulation. In Escherichia coli the corepressor biotinoyl-5'-AMP (bio-5'-AMP), which is also the intermediate in biotin transfer, promotes operator binding and resulting transcription repression by enhancing BirA dimerization. Like E. coli BirA (EcBirA), Staphylococcus aureus and Bacillus subtilis BirA (SaBirA and BsBirA) repress transcription in vivo in a biotin-dependent manner. In this work, sedimentation equilibrium measurements were performed to investigate the molecular basis of this biotin-responsive transcription regulation. The results reveal that, as observed for EcBirA, the SaBirA and BsBirA dimerization reactions are significantly enhanced by bio-5'-AMP binding. Thus, the molecular mechanism of the Biotin Regulatory System is conserved in the biotin repressors from these three organisms. © 2017 The Protein Society.
Single flexible nanofiber to achieve simultaneous photoluminescence-electrical conductivity bifunctionality.
Sheng, Shujuan; Ma, Qianli; Dong, Xiangting; Lv, Nan; Wang, Jinxian; Yu, Wensheng; Liu, Guixia
In order to develop new-type multifunctional composite nanofibers, Eu(BA){sub 3}phen/PANI/PVP bifunctional composite nanofibers with simultaneous photoluminescence and electrical conductivity have been successfully fabricated via electrospinning technology. Polyvinyl pyrrolidone (PVP) is used as a matrix to construct composite nanofibers containing different amounts of Eu(BA){sub 3}phen and polyaniline (PANI). X-ray diffractometry (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), vibrating sample magnetometry (VSM), fluorescence spectroscopy and a Hall effect measurement system are used to characterize the morphology and properties of the composite nanofibers. The results indicate that the bifunctional composite nanofibers simultaneously possess excellent photoluminescence and electrical conductivity. Fluorescence emission peaks of Eu{sup 3+} ions are observed in the Eu(BA){sub 3}phen/PANI/PVP photoluminescence-electrical conductivity bifunctional composite nanofibers. The electrical conductivity reaches up to the order of 10{sup -3} S/cm. The luminescent intensity and electrical conductivity of the composite nanofibers can be tuned by adjusting the amounts of Eu(BA){sub 3}phen and PANI. The obtained photoluminescence-electrical conductivity bifunctional composite nanofibers are expected to possess many potential applications in areas such as microwave absorption, molecular electronics, biomedicine and future nanomechanics. More importantly, the design concept and construction technique are of universal significance to the fabrication of other bifunctional one-dimensional nanomaterials. Copyright © 2014 John Wiley & Sons, Ltd.
Force on an Asymmetric Capacitor
Bahder, Thomas
… At present, the physical basis for the Biefeld-Brown effect is not understood. The order of magnitude of the net force on the asymmetric capacitor is estimated assuming two different mechanisms of charge conduction between its electrodes…
Surface Science Foundations of Catalysis and Nanoscience
Kolasinski, Kurt K
Surface science has evolved from being a sub-field of chemistry or physics, and has now established itself as an interdisciplinary topic. Knowledge has developed sufficiently that we can now understand catalysis from a surface science perspective. Nowhere is the underpinning nature of surface science better illustrated than with nanoscience. Now in its third edition, this successful textbook aims to provide students with an understanding of chemical transformations and the formation of structures at surfaces. The chapters build from simple to more advanced principles, with each featuring exercises…
Catalysis-enhanced strengthening of porous materials
Sokolova, L.N.; Shchukin, E.D.; Burenkova, L.N.; Romanovskij, B.V.
The change in strength of compressed tablets of a catalyst based on ZrO{sub 2} (84 mass %) and Y{sub 2}O{sub 3} (16 mass %) after conducting the endothermal dehydration of methanol and ethanol at 700-800 deg C is studied. It is shown that the key factor determining the 65-88% strengthening effect is not at all the exothermal nature of the reaction, which could lead to local heating of the catalyst surface. In reality, a significant increase in the concentration of surface defects, compared with the equilibrium concentration at the given temperature, is achieved on account of the coupling of the processes of catalysis and surface-defect formation [ru]
Development of Ar-BINMOL-Derived Atropisomeric Ligands with Matched Axial and sp(3) Central Chirality for Catalytic Asymmetric Transformations.
Xu, Zheng; Xu, Li-Wen
Recently, academic chemists have renewed their interest in the development of 1,1'-binaphthalene-2,2'-diol (BINOL)-derived chiral ligands. Six years ago, a working hypothesis, that the chirality matching of hybrid chirality on a ligand could probably lead to high levels of stereoselective induction, prompted us to use the axial chirality of BINOL derivatives to generate new stereogenic centers within the same molecule with high stereoselectivity, obtaining as a result sterically favorable ligands for applications in asymmetric catalysis. This Personal Account describes our laboratory's efforts toward the development of a novel class of BINOL-derived atropisomers bearing both axial and sp(3) central chirality, the so-called Ar-BINMOLs, for asymmetric synthesis. Furthermore, on the basis of the successful application of Ar-BINMOLs and their derivatives in asymmetric catalysis, the search for highly efficient and enantioselective processes also compelled us to give special attention to the BINOL-derived multifunctional ligands with multiple stereogenic centers for use in catalytic asymmetric reactions. Copyright © 2015 The Chemical Society of Japan and Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Catalysis in electrochemistry: from fundamentals to strategies for fuel cell development
Santos, Elizabeth; Schmickler, Wolfgang
"Catalysis in Electrochemistry: From Fundamentals to Strategies for Fuel Cell Development is a modern, comprehensive reference work on catalysis in electrochemistry, including principles, methods, strategies, and applications...
The nature of the active site in heterogeneous metal catalysis
Nørskov, Jens Kehlet; Bligaard, Thomas; Larsen, Britt Hvolbæk
This tutorial review, of relevance for the surface science and heterogeneous catalysis communities, provides a molecular-level discussion of the nature of the active sites in metal catalysis. Fundamental concepts such as "Brønsted-Evans-Polanyi relations" and "volcano curves" are introduced...
Asymmetric Gepner models (revisited)
Gato-Rivera, B. [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands)] [Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); Schellekens, A.N., E-mail: [email protected] [NIKHEF Theory Group, Kruislaan 409, 1098 SJ Amsterdam (Netherlands)] [Instituto de Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain)] [IMAPP, Radboud Universiteit, Nijmegen (Netherlands)
We reconsider a class of heterotic string theories studied in 1989, based on tensor products of N=2 minimal models with asymmetric simple current invariants. We extend this analysis from (2,2) and (1,2) spectra to (0,2) spectra with SO(10) broken to the Standard Model. In the latter case the spectrum must contain fractionally charged particles. We find that in nearly all cases at least some of them are massless. However, we identify a large subclass where the fractional charges are at worst half-integer, and often vector-like. The number of families is very often reduced in comparison to the 1989 results, but there are no new tensor combinations yielding three families. All tensor combinations turn out to fall into two classes: those where the number of families is always divisible by three, and those where it is never divisible by three. We find an empirical rule to determine the class, which appears to extend beyond minimal N=2 tensor products. We observe that distributions of physical quantities such as the number of families, singlets and mirrors have an interesting tendency towards smaller values as the gauge groups approaches the Standard Model. We compare our results with an analogous class of free fermionic models. This displays similar features, but with less resolution. Finally we present a complete scan of the three family models based on the triply-exceptional combination (1,16{sup *},16{sup *},16{sup *}) identified originally by Gepner. We find 1220 distinct three family spectra in this case, forming 610 mirror pairs. About half of them have the gauge group SU(3)xSU(2){sub L}xSU(2){sub R}xU(1){sup 5}, the theoretical minimum, and many others are trinification models.
UV Catalysis, Cyanotype Photography, and Sunscreens
Lawrence, Glen D.; Fishelson, Stuart
This laboratory experiment is intended for a chemistry course for non-science majors. The experiment utilizes one of the earliest photographic processes, the cyanotype process, to demonstrate UV catalysis of chemical reactions. In addition to making photographic prints from negatives, the process can be used to test the effectiveness of sunscreens and the relative efficacy of the SPF (sun protection factor) rating of sunscreens. This is an inexpensive process, requiring solutions of ammonium ferric citrate and potassium ferricyanide, with options to use hydrogen peroxide and ammonium hydroxide solutions. Students can prepare their own UV-sensitized paper with the indicated chemicals and watch the photographic image appear as it is exposed to sunlight or fluorescent UV lamps in a light box designed for use in this experiment. The laboratory experiment should stimulate discussion of UV catalysis, photographic processes and photochemistry, sunscreens, and UV damage to biological organisms. The chemicals used are relatively nontoxic, and the procedure is simple enough to be used by groups of diverse ages and abilities.
Synthesis, characterization and catalytic activity of acid-base bifunctional materials through protection of amino groups
Shao, Yanqiu [College of Chemistry, Jilin University, Changchun 130023 (China); College of Chemistry, Mudanjiang Normal University, Mudanjiang 157012 (China); Liu, Heng; Yu, Xiaofang [College of Chemistry, Jilin University, Changchun 130023 (China); Guan, Jingqi, E-mail: [email protected] [College of Chemistry, Jilin University, Changchun 130023 (China); Kan, Qiubin, E-mail: [email protected] [College of Chemistry, Jilin University, Changchun 130023 (China)
Graphical abstract: Acid-base bifunctional mesoporous material SO{sub 3}H-SBA-15-NH{sub 2} was successfully synthesized under low acidic medium through protection of amino groups. Highlights: • The acid-base bifunctional material SO{sub 3}H-SBA-15-NH{sub 2} was successfully synthesized through protection of amino groups. • The obtained bifunctional material was tested for aldol condensation. • The SO{sub 3}H-SBA-15-NH{sub 2} catalyst containing amine and sulfonic acid groups exhibited excellent acid-basic properties. -- Abstract: Acid-base bifunctional mesoporous material SO{sub 3}H-SBA-15-NH{sub 2} was successfully synthesized under low acidic medium through protection of amino groups. X-ray diffraction (XRD), N{sub 2} adsorption-desorption, transmission electron micrographs (TEM), back titration, {sup 13}C magic-angle spinning (MAS) NMR and {sup 29}Si magic-angle spinning (MAS) NMR were employed to characterize the synthesized materials. The obtained bifunctional material was tested for the aldol condensation reaction between acetone and 4-nitrobenzaldehyde. Compared with the monofunctional catalysts SO{sub 3}H-SBA-15 and SBA-15-NH{sub 2}, the bifunctional sample SO{sub 3}H-SBA-15-NH{sub 2} containing amine and sulfonic acid groups exhibited excellent acid-basic properties, which give it high activity for the aldol condensation.
Consequences of acid strength for isomerization and elimination catalysis on solid acids.
Macht, Josef; Carr, Robert T; Iglesia, Enrique
We address here the manner in which acid catalysis senses the strength of solid acids. Acid strengths for Keggin polyoxometalate (POM) clusters and zeolites, chosen because of their accurately known structures, are described rigorously by their deprotonation energies (DPE). Mechanistic interpretations of the measured dynamics of alkane isomerization and alkanol dehydration are used to obtain rate and equilibrium constants and energies for intermediates and transition states and to relate them to acid strength. n-Hexane isomerization rates were limited by isomerization of alkoxide intermediates on bifunctional metal-acid mixtures designed to maintain alkane-alkene equilibrium. Isomerization rate constants were normalized by the number of accessible protons, measured by titration with 2,6-di-tert-butylpyridine during catalysis. Equilibrium constants for alkoxides formed by protonation of n-hexene increased slightly with deprotonation energies (DPE), while isomerization rate constants decreased and activation barriers increased with increasing DPE, as also shown for alkanol dehydration reactions. These trends are consistent with thermochemical analyses of the transition states involved in isomerization and elimination steps. For all reactions, barriers increased by less than the concomitant increase in DPE upon changes in composition, because electrostatic stabilization of ion-pairs at the relevant transition states becomes more effective for weaker acids, as a result of their higher charge density at the anionic conjugate base. Alkoxide isomerization barriers were more sensitive to DPE than for elimination from H-bonded alkanols, the step that limits 2-butanol and 1-butanol dehydration rates; the latter two reactions showed similar DPE sensitivities, despite significant differences in their rates and activation barriers, indicating that slower reactions are not necessarily more sensitive to acid strength, but instead reflect the involvement of more unstable organic
Reaction Current Phenomenon in Bifunctional Catalytic Metal-Semiconductor Nanostructures
Hashemian, Mohammad Amin
Energy transfer processes accompany every elementary step of catalytic chemical processes on material surfaces, including molecular adsorption and dissociation on atoms, interactions between intermediates, and desorption of reaction products from the catalyst surface. Therefore, detailed understanding of these processes on the molecular level is of great fundamental and practical interest in energy-related applications of nanomaterials. Two main mechanisms of energy transfer from adsorbed particles to a surface are known: (i) adiabatic, via excitation of quantized lattice vibrations (phonons), and (ii) non-adiabatic, via electronic excitations (electron/hole pairs). Electronic excitations play a key role in nanocatalysis, and it was recently shown that they can be efficiently detected and studied using Schottky-type catalytic nanostructures in the form of measurable electrical currents (chemicurrents) in an external electrical circuit. These nanostructures typically contain an electrically continuous nanocathode layer made of a catalytic metal deposited on a semiconductor substrate. The goal of this research is to study direct observations of hot electron currents (chemicurrents) in catalytic Schottky structures, using a continuous mesh-like Pt nanofilm grown onto a mesoporous TiO2 substrate. Such devices showed qualitatively different and more diverse signal properties compared to the earlier devices using smooth substrates, which could only be explained on the basis of bifunctionality. In particular, it was necessary to suggest that different stages of the reaction occur on both phases of the catalytic structure. Analysis of the signal behavior also led to the discovery of a formerly unknown (very slow) mode of the oxyhydrogen reaction on the Pt/TiO2(por) system occurring at room temperature. This slow mode produced surprisingly large stationary chemicurrents in the range of 10-50 microA/cm{sup 2}. Results of the chemicurrent measurements for the bifunctional…
Monodisperse Magneto-Fluorescent Bifunctional Nanoprobes for Bioapplications
Zhang, Hongwang; Huang, Heng; Pralle, Arnd; Zeng, Hao
We present work on the synthesis of dye-doped monodisperse Fe/SiO2 core/shell nanoparticles as bifunctional probes for bioapplications. Magnetic nanoparticles (NPs) have been widely studied as nano-probes for bio-imaging and sensing as well as for cancer therapy. Among all the NPs, Fe NPs have been the focus because they have very high magnetization. However, Fe NPs are usually not stable in ambient conditions due to fast surface oxidation of the NPs. On the other hand, dye molecules have long been used as probes for bio-imaging, but they are sensitive to environmental conditions. Both require passivation so that they can be stable for applications. In this work, monodisperse Fe NPs with sizes ranging from 13-20 nm have been synthesized through chemical thermal decomposition in solution. Silica shells were then coated on the Fe NPs by a two-phase oil-in-water method. Dye molecules were first bonded to a silica precursor and then encapsulated into the silica shell during the coating process. The silica shells protect both the Fe NPs and the dye molecules, which makes them robust probes. The dye-doped Fe/SiO2 core/shell NPs remain both highly magnetic and highly fluorescent. The stable dye-doped Fe/SiO2 NPs have been used as a dual-functional probe for both magnetic heating and local nanoscale temperature sensing, and their performance will be reported. Research supported by NSF DMR 0547036, DMR1104994.
New and future developments in catalysis catalysis for remediation and environmental concerns
New and Future Developments in Catalysis is a package of seven books that compile the latest ideas concerning alternate and renewable energy sources and the role that catalysis plays in converting new renewable feedstock into biofuels and biochemicals. Both homogeneous and heterogeneous catalysts and catalytic processes will be discussed in a unified and comprehensive approach. There will be extensive cross-referencing within all volumes. The various sources of environmental pollution are the theme of this volume. The volume lists all current environmentally friendly catalytic chemical processes used for environmental remediation and critically compares their economic viability. • Offers in-depth coverage of all catalytic topics of current interest and outlines future challenges and research areas • A clear and visual description of all parameters and conditions, enabling the reader to draw conclusions for a particular case • Outlines the catalytic processes applicable to energy generation and design of green proce...
Does asymmetric correlation affect portfolio optimization?
Fryd, Lukas
The classical portfolio optimization problem does not assume asymmetric behavior in the relationships among asset returns. The existence of an asymmetric response of correlation to bad news could be important information in portfolio optimization. The paper applies the Dynamic conditional correlation model (DCC) and its asymmetric version (ADCC) to capture asymmetric behavior of conditional correlation. We analyse asymmetric correlation among the S&P index, a bond index and the spot gold price before the mortgage crisis in 2008. We evaluate the forecasting ability of the models during and after the mortgage crisis and demonstrate the impact of asymmetric correlation on the reduction of portfolio variance.
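As a point of reference for the DCC/ADCC comparison above, the sketch below implements a scalar ADCC-type correlation recursion (in the spirit of Cappiello, Engle and Sheppard); the exact specification and estimation procedure used in the paper may differ, and the parameters a, b, g are placeholders rather than estimates. Setting g = 0 recovers the symmetric DCC recursion, so the size of g is one way to quantify how much more strongly correlations react to joint bad news.

import numpy as np

def adcc_correlations(eps, a, b, g):
    # eps: (T, N) array of standardized residuals (returns / univariate GARCH vols).
    # Returns R, an array of shape (T, N, N) of conditional correlation matrices.
    T, N = eps.shape
    n = np.where(eps < 0.0, eps, 0.0)      # negative ("bad news") part of the residuals
    Qbar = eps.T @ eps / T                 # unconditional second moment of eps
    Nbar = n.T @ n / T                     # unconditional second moment of the negative part
    Q = Qbar.copy()
    R = np.empty((T, N, N))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)          # rescale pseudo-correlation Q to a correlation matrix
        Q = ((1.0 - a - b) * Qbar - g * Nbar
             + a * np.outer(eps[t], eps[t])
             + b * Q
             + g * np.outer(n[t], n[t]))   # asymmetric term reacts only to joint negative shocks
    return R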
Microbial electro-catalysis in fuel cell
Dumas, Claire
Microbial fuel cells (MFC) are devices that ensure the direct conversion of organic matter into electricity using bacterial bio-films as the catalysts of the electrochemical reactions. This study aims at improving the comprehension of the electron transfer mechanisms between the adhered bacteria and the electrodes; optimization of the MFC power output could then proceed, for example, by exploring and characterizing various electrode materials. The electrolysis experiments carried out with Geobacter sulfurreducens deal with the microbial catalysis of acetate oxidation on the one hand, and with the catalysis of fumarate reduction on the other. On the anodic side, differences in current densities appeared on graphite, DSA and stainless steel (8 A/m{sup 2}, 5 A/m{sup 2} and 0.7 A/m{sup 2}, respectively). These variations were explained more by differences in material roughness than by the nature of the materials. An impedance spectroscopy study shows that the electro-active bio-film developed on stainless steel does not seem to modify the evolution of the stainless-steel oxide layer; only the imposed potential remains determining. On the cathodic side, stainless steel sustained current densities more than twenty times higher than those obtained with graphite electrodes. The study of the adhesion of G. sulfurreducens to various materials in a flow cell suggests that the bio-films resist hydrodynamic stresses and are not detached below a threshold shear stress. The installation of two MFC prototypes, one in a sea station and the other directly in Genoa harbour (Italy), confirmed some results obtained in the laboratory and was promising for MFC scale-up. (author) [fr]
Prebiotic RNA Synthesis by Montmorillonite Catalysis
Sohan Jheeta
This review summarizes our recent findings on the role of mineral salts in prebiotic RNA synthesis, which is catalyzed by montmorillonite clay minerals. The clay minerals not only catalyze the synthesis of RNA but also facilitate homochiral selection. Preliminary data of these findings have been presented at the "Horizontal Gene Transfer and the Last Universal Common Ancestor (LUCA)" conference at the Open University, Milton Keynes, UK, 5–6 September 2013. The objective of this meeting was to recognize the significance of RNA in LUCA. We believe that the prebiotic RNA synthesis from its monomers must have been a simple process. As a first step, it may have required activation of the 5'-end of the mononucleotide with a leaving group, e.g., imidazole in our model reaction (Figure 1). Wide ranges of activating groups are produced from HCN under plausible prebiotic Earth conditions. The final step is clay mineral catalysis in the presence of mineral salts to facilitate selective production of functional RNA. Both the clay minerals and mineral salts would have been abundant on early Earth. We have demonstrated that while montmorillonite (pH 7) produced only dimers from its monomers in water, addition of sodium chloride (1 M) enhanced the chain length multifold, as detected by HPLC. The effect of monovalent cations on RNA synthesis was of the following order: Li+ > Na+ > K+. A similar effect was observed with the anions, enhancing catalysis in the following order: Cl− > Br− > I−. The montmorillonite-catalyzed RNA synthesis was not affected by hydrophobic or hydrophilic interactions. We thus show that prebiotic synthesis of RNA from its monomers was a simple process requiring only clay minerals and a small amount of salt.
Asymmetric Synthesis via Chiral Aziridines
Tanner, David Ackland; Harden, Adrian; Wyatt, Paul
A series of chiral bis(aziridines) has been synthesised and evaluated as chelating ligands for a variety of asymmetric transformations mediated by metals [Os (dihydroxylation), Pd (allylic alkylation), Cu (cyclopropanation and aziridination), Li (1,2-addition of organolithiums to imines)]…
Ideal 3D asymmetric concentrator
Garcia-Botella, Angel [Departamento Fisica Aplicada a los Recursos Naturales, Universidad Politecnica de Madrid, E.T.S.I. de Montes, Ciudad Universitaria s/n, 28040 Madrid (Spain); Fernandez-Balbuena, Antonio Alvarez; Vazquez, Daniel; Bernabeu, Eusebio [Departamento de Optica, Universidad Complutense de Madrid, Fac. CC. Fisicas, Ciudad Universitaria s/n, 28040 Madrid (Spain)
Nonimaging optics is a field devoted to the design of optical components for applications such as solar concentration or illumination. In this field, many different techniques have been used for producing reflective and refractive optical devices, including reverse engineering techniques. In this paper we apply photometric field theory and the elliptic ray bundles method to study 3D asymmetric concentrators - without rotational or translational symmetry - which can be useful components for nontracking solar applications. We study the one-sheet hyperbolic concentrator and we demonstrate its behaviour as an ideal 3D asymmetric concentrator. (author)
Synthesis, characterization and use of ATRP bifunctional initiator with trichloromethyl end-groups
Toman, Luděk; Janata, Miroslav; Spěváček, Jiří; Masař, Bohumil; Vlček, Petr; Látalová, Petra
Roč. 43, č. 2 (2002), s. 18-19 ISSN 0032-3934 R&D Projects: GA ČR GA203/01/0513 Institutional research plan: CEZ:AV0Z4050913 Keywords: bifunctional initiator * ATRP polymerization * trichloromethyl end-groups Subject RIV: CD - Macromolecular Chemistry
Bi-functional glycosyltransferases catalyze both extension and termination of pectic galactan oligosaccharides
Laursen, Tomas; Stonebloom, Solomon H; Pidatala, Venkataramana R
… Transfer of Arap to galactan prevents further addition of galactose residues, resulting in a lower degree of polymerization. We show that this dual activity occurs both in vitro and in vivo. The herein described bi-functionality of AtGALS1 may suggest that plants can produce the incredible structural…
High surface area carbon for bifunctional air electrodes applied in zinc-air batteries
Arai, H [on leave from NTT Laboratories (Japan)]; Mueller, S; Haas, O [Paul Scherrer Inst. (PSI), Villigen (Switzerland)]
Bifunctional air electrodes with high surface area carbon substrates showed low reduction overpotential, thus are promising for enhancing the energy efficiency and power capability of zinc-air batteries. The improved performance is attributed to lower overpotential due to diffusion of the reaction intermediate, namely the peroxide ion. (author) 1 fig., 2 refs.
Direct catalytic transformation of carbohydrates into 5-ethoxymethylfurfural with acid–base bifunctional hybrid nanospheres
Li, Hu; Khokarale, Santosh Govind; Kotni, Ramakrishna
… carbohydrates. A high EMF yield of 76.6%, 58.5%, 42.4%, and 36.5% could be achieved when fructose, inulin, sorbose, and sucrose were used as starting materials, respectively. Although the acid–base bifunctional nanocatalysts were inert for synthesis of EMF from glucose-based carbohydrates, ethyl…
D-bifunctional protein deficiency associated with drug resistant infantile spasms
Buoni, Sabrina; Zannolli, Raffaella; Waterham, Hans; Wanders, Ronald; Fois, Alberto
Peroxisomal disorders appear with a frequency of about 1:5000 in newborns. Peroxisomal D-bifunctional protein (D-BP), encoded by the HSD17B4 gene (gene ID: 3294; locus tag: HGNC:5213, chromosome 5q2; official symbol: HSD17B4; name: hydroxysteroid (17-beta) dehydrogenase; gene type: protein coding)
Hydrodeoxygenation and coupling of aqueous phenolics over bifunctional zeolite-supported metal catalysts.
Hong, Do-Young; Miller, Stephen J; Agrawal, Pradeep K; Jones, Christopher W
Pt supported on HY zeolite is successfully used as a bifunctional catalyst for phenol hydrodeoxygenation in a fixed-bed configuration at elevated hydrogen pressures, leading to hydrogenation-hydrogenolysis ring-coupling reactions producing hydrocarbons, some with enhanced molecular weight.
Liquid phase in situ hydrodeoxygenation of biomass-derived phenolic compounds to hydrocarbons over bifunctional catalysts
Junfeng Feng; Chung-yun Hse; Zhongzhi Yang; Kui Wang; Jianchun Jiang; Junming Xu
The objective of this study was to find an effective method for converting renewable biomass-derived phenolic compounds into hydrocarbon bio-fuels via in situ catalytic hydrodeoxygenation. The in situ hydrodeoxygenation of biomass-derived phenolic compounds was carried out in methanol-water solvent over bifunctional catalysts of Raney Ni and HZSM-5 or H-Beta. In the in...
Bifunctional Interface of Au and Cu for Improved CO2 Electroreduction.
Back, Seoin; Kim, Jun-Hyuk; Kim, Yong-Tae; Jung, Yousung
Gold is known currently as the most active single-element electrocatalyst for CO2 electroreduction reaction to CO. In this work, we combine Au with a second metal element, Cu, to reduce the amount of precious metal content by increasing the surface-to-mass ratio and to achieve comparable activity to Au-based catalysts. In particular, we demonstrate that the introduction of a Au-Cu bifunctional "interface" is more beneficial than a simple and conventional homogeneous alloying of Au and Cu in stabilizing the key intermediate species, *COOH. The main advantages of the proposed metal-metal bifunctional interfacial catalyst over the bimetallic alloys include that (1) utilization of active materials is improved, and (2) intrinsic properties of metals are less affected in bifunctional catalysts than in alloys, which can then facilitate a rational bifunctional design. These results demonstrate for the first time the importance of metal-metal interfaces and morphology, rather than the simple mixing of the two metals homogeneously, for enhanced catalytic synergies.
Bifunctional catalysts for the direct production of liquid fuels from syngas
Sartipi, S.
Design and development of catalyst formulations that maximize the direct production of liquid fuels by combining Fischer-Tropsch synthesis (FTS), hydrocarbon cracking, and isomerization into one single catalyst particle (bifunctional FTS catalyst) have been investigated in this thesis. To achieve
Nanosheet Supported Single-Metal Atom Bifunctional Catalyst for Overall Water Splitting.
Ling, Chongyi; Shi, Li; Ouyang, Yixin; Zeng, Xiao Cheng; Wang, Jinlan
Nanosheet supported single-atom catalysts (SACs) can make full use of metal atoms and yet entail high selectivity and activity, and bifunctional catalysts can enable higher performance while lowering the cost compared with two separate unifunctional catalysts. Supported single-atom bifunctional catalysts are therefore of great economic interest and scientific importance. Here, on the basis of first-principles computations, we report a design of the first single-atom bifunctional electrocatalyst, namely, an isolated nickel atom supported on a β{sub 12} boron monolayer (Ni{sub 1}/β{sub 12}-BM), to achieve overall water splitting. This nanosheet supported SAC exhibits remarkable electrocatalytic performance, with the computed overpotential for the oxygen/hydrogen evolution reaction being just 0.40/0.06 V. Ab initio molecular dynamics simulation shows that the SAC can survive elevated temperatures up to 800 K, while a high energy barrier of 1.68 eV prevents isolated Ni atoms from clustering. A viable experimental route for the synthesis of the Ni{sub 1}/β{sub 12}-BM SAC is demonstrated from computer simulation. The desired nanosheet supported single-atom bifunctional catalysts not only show great potential for achieving overall water splitting but also offer cost-effective opportunities for advancing clean energy technology.
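Overpotentials of the kind quoted above are typically obtained with the computational hydrogen electrode bookkeeping used in this field: the OER limiting potential is the largest free-energy change among the four proton-coupled electron-transfer steps (divided by e), the overpotential is its excess over the 1.23 V equilibrium potential, and the HER descriptor is the magnitude of the hydrogen adsorption free energy per electron. The short sketch below reproduces that arithmetic with hypothetical step energies chosen only so the output matches the quoted 0.40 V and 0.06 V; the values are not taken from the paper.

# Minimal sketch of computational-hydrogen-electrode overpotential bookkeeping.
E_EQ = 1.23  # standard O2/H2O equilibrium potential, in volts

def oer_overpotential(dG_steps_eV):
    # dG_steps_eV: free-energy changes (eV) of the four OER steps at zero potential.
    # Each step transfers one electron, so the largest step (in eV) equals the
    # limiting potential (in V); the overpotential is its excess over 1.23 V.
    return max(dG_steps_eV) - E_EQ

def her_overpotential(dG_H_eV):
    # Common HER descriptor: magnitude of the hydrogen adsorption free energy per electron.
    return abs(dG_H_eV)

# Hypothetical step energies (eV), chosen to sum to 4 x 1.23 eV for thermodynamic consistency.
print(round(oer_overpotential([1.20, 1.63, 1.05, 1.04]), 2))  # 0.4
print(round(her_overpotential(-0.06), 2))                     # 0.06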
Boosting Bifunctional Oxygen Electrocatalysis with 3D Graphene Aerogel-Supported Ni/MnO Particles.
Fu, Gengtao; Yan, Xiaoxiao; Chen, Yifan; Xu, Lin; Sun, Dongmei; Lee, Jong-Min; Tang, Yawen
Electrocatalysts for oxygen-reduction and oxygen-evolution reactions (ORR and OER) are crucial for metal-air batteries, where more costly Pt- and Ir/Ru-based materials are the benchmark catalysts for ORR and OER, respectively. Herein, for the first time Ni is combined with MnO species, and a 3D porous graphene aerogel-supported Ni/MnO (Ni-MnO/rGO aerogel) bifunctional catalyst is prepared via a facile and scalable hydrogel route. The synthetic strategy depends on the formation of a graphene oxide (GO) crosslinked poly(vinyl alcohol) hydrogel that allows for the efficient capture of highly active Ni/MnO particles after pyrolysis. Remarkably, the resulting Ni-MnO/rGO aerogels exhibit superior bifunctional catalytic performance for both ORR and OER in an alkaline electrolyte, which can compete with the previously reported bifunctional electrocatalysts. The MnO mainly contributes to the high activity for the ORR, while metallic Ni is responsible for the excellent OER activity. Moreover, such bifunctional catalyst can endow the homemade Zn-air battery with better power density, specific capacity, and cycling stability than mixed Pt/C + RuO 2 catalysts, demonstrating its potential feasibility in practical application of rechargeable metal-air batteries. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Structure and potential applications of amido lanthanide complexes chelated by a bifunctional β-diketiminate ligand
Olejník, R.; Padělková, Z.; Fridrichová, A.; Horáček, Michal; Merna, J.; Růžička, A.
Roč. 759, JUN 2014 (2014), s. 1-10 ISSN 0022-328X R&D Projects: GA ČR GAP106/10/0924 Institutional support: RVO:61388955 Keywords: bifunctional β-diketiminates * lanthanides * hydroamination Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 2.173, year: 2014
Li, Hu; Govind, Khokarale Santosh; Kotni, Ramakrishna; Shunmugavel, Saravanamurugan; Riisager, Anders; Yang, Song
Graphical abstract: Catalytic conversion of carbohydrates into HMF and EMF in ethanol/DMSO with acid–base bifunctional hybrid nanospheres prepared from self-assembly of corresponding basic amino acids and HPA. - Highlights: • Acid–base bifunctional nanospheres were efficient for production of EMF from sugars. • Synthesis of EMF in a high yield of 76.6% was realized from fructose. • Fructose based biopolymers could also be converted into EMF with good yields. • Ethyl glucopyranoside was produced in good yields from glucose in ethanol. - Abstract: A series of acid–base bifunctional hybrid nanospheres prepared from the self-assembly of basic amino acids and phosphotungstic acid (HPA) with different molar ratios were employed as efficient and recyclable catalysts for synthesis of liquid biofuel 5-ethoxymethylfurfural (EMF) from various carbohydrates. A high EMF yield of 76.6%, 58.5%, 42.4%, and 36.5% could be achieved, when fructose, inulin, sorbose, and sucrose were used as starting materials, respectively. Although, the acid–base bifunctional nanocatalysts were inert for synthesis of EMF from glucose based carbohydrates, ethyl glucopyranoside in good yields could be obtained from glucose in ethanol. Moreover, the nanocatalyst functionalized with acid and basic sites was able to be reused several times with no significant loss in catalytic activity
Comparison of bifunctional chelates for {sup 64}Cu antibody imaging
Ferreira, Cara L.; Crisp, Sarah; Bensimon, Corinne [MDS Nordion, Vancouver, BC (Canada); Yapp, Donald T.T.; Ng, Sylvia S.W. [British Columbia Cancer Agency Research Centre, Vancouver, BC (Canada); University of British Columba, The Faculty of Pharmaceutical Sciences, Vancouver, BC (Canada); Sutherland, Brent W. [British Columbia Cancer Agency Research Centre, Vancouver, BC (Canada); Gleave, Martin [Prostate Centre at Vancouver General Hospital, Vancouver, BC (Canada); Jurek, Paul; Kiefer, Garry E. [Macrocyclics Inc., Dallas, TX (United States)
Improved bifunctional chelates (BFCs) are needed to facilitate efficient {sup 64}Cu radiolabeling of monoclonal antibodies (mAbs) under mild conditions and to yield stable, target-specific agents. The utility of two novel BFCs, 1-Oxa-4,7,10-triazacyclododecane-5-S-(4-isothiocyanatobenzyl)-4,7,10-triacetic acid (p-SCN-Bn-Oxo-DO3A) and 3,6,9,15-tetraazabicyclo[9.3.1]pentadeca-1(15),11,13-triene-4-S-(4-isothiocyanatobenzyl)-3,6,9-triacetic acid (p-SCN-Bn-PCTA), for mAb imaging with {sup 64}Cu were compared to the commonly used S-2-(4-isothiocyanatobenzyl)-1,4,7,10-tetraazacyclododecane-tetraacetic acid (p-SCN-Bn-DOTA). The BFCs were conjugated to trastuzumab, which targets the HER2/neu receptor. {sup 64}Cu radiolabeling of the conjugates was optimized. Receptor binding was analyzed using flow cytometry and radioassays. Finally, PET imaging and biodistribution studies were done in mice bearing either HER2/neu-positive or HER2/neu-negative tumors. {sup 64}Cu-Oxo-DO3A- and PCTA-trastuzumab were prepared at room temperature in >95% radiochemical yield (RCY) in <30 min, compared to only 88% RCY after 2 h for the preparation of {sup 64}Cu-DOTA-trastuzumab under the same conditions. Cell studies confirmed that the immunoreactivity of the mAb was retained for each of the bioconjugates. In vivo studies showed that {sup 64}Cu-Oxo-DO3A- and PCTA-trastuzumab had higher uptake than the {sup 64}Cu-DOTA-trastuzumab at 24 h in HER2/neu-positive tumors, resulting in higher tumor to background ratios and better tumor images. By 40 h all three of the {sup 64}Cu-BFC-trastuzumab conjugates allowed for clear visualization of the HER2/neu-positive tumors but not the negative control tumor. The antibody conjugates of PCTA and Oxo-DO3A were shown to have superior {sup 64}Cu radiolabeling efficiency and stability compared to the analogous DOTA conjugate. In addition, {sup 64}Cu-PCTA and Oxo-DO3A antibody conjugates may facilitate earlier imaging with greater target to background ratios than
Asymmetric Penning trap coherent states
Contreras-Astorga, Alonso; Fernandez, David J.
By using a matrix technique, which allows to identify directly the ladder operators, the coherent states of the asymmetric Penning trap are derived as eigenstates of the appropriate annihilation operators. They are compared with those obtained through the displacement operator method.
JET and COMPASS asymmetrical disruptions
Gerasimov, S.N.; Abreu, P.; Baruzzo, M.; Drozdov, V.; Dvornova, A.; Havlíček, Josef; Hender, T.C.; Hronová-Bilyková, Olena; Kruezi, U.; Li, X.; Markovič, Tomáš; Pánek, Radomír; Rubinacci, G.; Tsalas, M.; Ventre, S.; Villone, F.; Zakharov, L.E.
Roč. 55, č. 11 (2015), s. 113006 ISSN 0029-5515 R&D Projects: GA MŠk(CZ) LM2011021 Institutional support: RVO:61389021 Keywords: tokamak * asymmetrical disruption * JET * COMPASS Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 4.040, year: 2015
Density functional theory studies of transition metal nanoparticles in catalysis
Greeley, Jeffrey Philip; Rankin, Rees; Zeng, Zhenhua
Periodic Density Functional Theory calculations are capable of providing powerful insights into the structural, energetic, and electronic phenomena that underlie heterogeneous catalysis on transition metal nanoparticles. Such calculations are now routinely applied to single crystal metal surfaces … and to subnanometer metal clusters. Descriptions of catalysis on truly nanosized structures, however, are generally not as well developed. In this talk, I will illustrate different approaches to analyzing nanocatalytic phenomena with DFT calculations. I will describe case studies from heterogeneous catalysis … and electrocatalysis, in which single crystal models are combined with Wulff construction-based ideas to produce descriptions of average nanocatalyst behavior. Then, I will proceed to describe explicitly DFT-based descriptions of catalysis on truly nanosized particles…
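To make the Wulff-construction-based averaging mentioned above concrete, the sketch below combines facet-resolved quantities into a single particle-averaged value by weighting each facet with its area fraction in the equilibrium (Wulff) particle shape. The facet labels, area fractions, and adsorption energies are placeholder values for illustration, not results from this work; in practice the area fractions would come from a Wulff shape built from DFT surface energies.

facets = {
    # facet: (area fraction in the Wulff shape, adsorption energy in eV) -- placeholder values
    "(111)": (0.65, -1.45),
    "(100)": (0.25, -1.60),
    "(110)": (0.10, -1.70),
}

def particle_average(facet_data):
    # Area-fraction-weighted average of a facet-resolved property,
    # i.e. a crude "average nanocatalyst" descriptor.
    total = sum(frac for frac, _ in facet_data.values())
    return sum(frac * value for frac, value in facet_data.values()) / total

print(f"Wulff-averaged adsorption energy: {particle_average(facets):.2f} eV")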
Bridging heterogeneous and homogeneous catalysis concepts, strategies, and applications
Li, Can
This unique handbook fills the gap in the market for an up-to-date work that links both homogeneous catalysis applied to organic reactions and catalytic reactions on surfaces of heterogeneous catalysts.
Nanostructured Membranes for Green Synthesis of Nanoparticles and Enzyme Catalysis
Macroporous membranes functionalized with ionizable macromolecules provide promising applications in toxic metal capture at high capacity, nanoparticle synthesis, and catalysis. Our low-pressure membrane approach is marked by reaction and separation selectivity and their tunabili...
Nanostructured Membranes for Enzyme Catalysis and Green Synthesis of Nanoparticles
Macroporous membranes functionalized with ionizable macromolecules provide promising applications in toxic metal capture at high capacity, nanoparticle synthesis, and catalysis. Our low-pressure membrane approach is marked by reaction and separation selectivity and their tunabil...
Catalysis by metallic nanoparticles in solution: Thermosensitive microgels as nanoreactors
Roa, Rafael; Angioletti-Uberti, Stefano; Lu, Yan; Dzubiella, Joachim; Piazza, Francesco; Ballauff, Matthias
Metallic nanoparticles have been used as catalysts for various reactions, and the huge literature on the subject is hard to overlook. In many applications, the nanoparticles must be affixed to a colloidal carrier for easy handling during catalysis. These "passive carriers" (e.g., dendrimers) serve for a controlled synthesis of the nanoparticles and prevent coagulation during catalysis. Recently, hybrids from nanoparticles and polymers have been developed that allow us to change the catalytic ...
3. International conference on catalysis in membrane reactors
The 3rd International Conference on Catalysis in Membrane Reactors, Copenhagen, Denmark, is a continuation of the previous conferences held in Villeurbanne (1994) and Moscow (1996) and will deal with the rapid developments taking place within membranes, with emphasis on membrane catalysis. The approximately 80 contributions, in the form of plenary lectures and posters, discuss hydrogen production, methane reforming into syngas, selectivity and specificity of various membranes, etc. The conference is organised by the Danish Catalytic Society under the Danish Society for Chemical Engineering. (EG)
Dedicated Beamline Facilities for Catalytic Research. Synchrotron Catalysis Consortium (SCC)
Chen, Jingguang [Columbia Univ., New York, NY]; Frenkel, Anatoly [Yeshiva Univ., New York, NY (United States)]; Rodriguez, Jose [Brookhaven National Lab. (BNL), Upton, NY (United States)]; Adzic, Radoslav [Brookhaven National Lab. (BNL), Upton, NY (United States)]; Bare, Simon R. [UOP LLC, Des Plaines, IL (United States)]; Hulbert, Steve L. [Brookhaven National Lab. (BNL), Upton, NY (United States)]; Karim, Ayman [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]; Mullins, David R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]; Overbury, Steve [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]
Synchrotron spectroscopies offer unique advantages over conventional techniques, including higher detection sensitivity and molecular specificity, faster detection rate, and more in-depth information regarding the structural, electronic and catalytic properties under in-situ reaction conditions. Despite these advantages, synchrotron techniques are often underutilized or unexplored by the catalysis community due to various perceived and real barriers, which will be addressed in the current proposal. Since its establishment in 2005, the Synchrotron Catalysis Consortium (SCC) has coordinated significant efforts to promote the utilization of cutting-edge catalytic research under in-situ conditions. The purpose of the current renewal proposal is aimed to provide assistance, and to develop new sciences/techniques, for the catalysis community through the following concerted efforts: Coordinating the implementation of a suite of beamlines for catalysis studies at the new NSLS-II synchrotron source; Providing assistance and coordination for catalysis users at an SSRL catalysis beamline during the initial period of NSLS to NSLS II transition; Designing in-situ reactors for a variety of catalytic and electrocatalytic studies; Assisting experimental set-up and data analysis by a dedicated research scientist; Offering training courses and help sessions by the PIs and co-PIs.
Overexpression, purification and crystallization of the two C-terminal domains of the bifunctional cellulase ctCel9D-Cel44A from Clostridium thermocellum
Najmudin, Shabir; Guerreiro, Catarina I. P. D.; Ferreira, Luís M. A.; Romão, Maria J. C.; Fontes, Carlos M. G. A.; Prates, José A. M.
The two C-terminal domains of the cellulase ctCel9D-Cel44A from C. thermocellum cellulosome have been crystallized in tetragonal space group P4{sub 3}2{sub 1}2 and X-ray diffraction data have been collected to 2.1 and 2.8 Å from native and seleno-L-methionine-derivative crystals, respectively. Clostridium thermocellum produces a highly organized multi-enzyme complex of cellulases and hemicellulases for the hydrolysis of plant cell-wall polysaccharides, which is termed the cellulosome. The bifunctional multi-modular cellulase ctCel9D-Cel44A is one of the largest components of the C. thermocellum cellulosome. The enzyme contains two internal catalytic domains belonging to glycoside hydrolase families 9 and 44. The C-terminus of this cellulase, comprising a polycystic kidney-disease module (PKD) and a carbohydrate-binding module (CBM44), has been crystallized. The crystals belong to the tetragonal space group P4{sub 3}2{sub 1}2, containing a single molecule in the asymmetric unit. Native and seleno-L-methionine-derivative crystals diffracted to 2.1 and 2.8 Å, respectively.
Loop residues and catalysis in OMP synthase
Wang, Gary P.; Hansen, Michael Riis; Grubmeyer, Charles
… binding of OMP or PRPP in binary complexes was affected little by loop mutation, suggesting that the energetics of ground-state binding have little contribution from the catalytic loop, or that a favorable binding energy is offset by costs of loop reorganization. Pre-steady-state kinetics for mutants … values for all four substrate molecules. The 20% (i.e., 1.20) intrinsic [1'-{sup 3}H]OMP kinetic isotope effect (KIE) for WT is masked because of high forward and reverse commitment factors. K103A failed to express intrinsic KIEs fully (1.095 ± 0.013). In contrast, H105A, which has a smaller catalytic lesion … (preceding paper in this issue, DOI 10.1021/bi300083p)]. The full expression of KIEs by H105A and E107A may result from a less secure closure of the catalytic loop. The lower level of expression of the KIE by K103A suggests that in these mutant proteins the major barrier to catalysis is successful closure…
Catalysis in high-temperature fuel cells.
Föger, K; Ahmed, K
Catalysis plays a critical role in solid oxide fuel cell systems. The electrochemical reactions within the cell--oxygen dissociation on the cathode and electrochemical fuel combustion on the anode--are catalytic reactions. The fuels used in high-temperature fuel cells, for example, natural gas, propane, or liquid hydrocarbons, need to be preprocessed to a form suitable for conversion on the anode: sulfur removal and pre-reforming. The unconverted fuel (economic fuel utilization around 85%) is commonly combusted using a catalytic burner. Ceramic Fuel Cells Ltd. has developed anodes that, in addition to having electrochemical activity, are also reactive for internal steam reforming of methane. This can simplify fuel preprocessing, but its main advantage is thermal management of the fuel cell stack by endothermic heat removal. Using this approach, the objective of fuel preprocessing is to produce a methane-rich fuel stream but with all higher hydrocarbons removed. Sulfur removal can be achieved by absorption or hydro-desulfurization (HDS). Depending on the system configuration, hydrogen is also required for start-up and shutdown. Reactor operating parameters are strongly tied to fuel cell operational regimes, thus often limiting optimization of the catalytic reactors. In this paper we discuss operation of an autothermal reforming reactor for hydrogen generation for HDS and start-up/shutdown, and development of a pre-reformer for converting propane to a methane-rich fuel stream.
Ferroelectric based catalysis: Switchable surface chemistry
Kakekhani, Arvin; Ismail-Beigi, Sohrab
We describe a new class of catalysts that uses an epitaxial monolayer of a transition metal oxide on a ferroelectric substrate. The ferroelectric polarization switches the surface chemistry between strongly adsorptive and strongly desorptive regimes, circumventing difficulties encountered on non-switchable catalytic surfaces where the Sabatier principle dictates a moderate surface-molecule interaction strength. This method is general and can, in principle, be applied to many reactions, and for each case the choice of the transition oxide monolayer can be optimized. Here, as a specific example, we show how simultaneous NOx direct decomposition (into N2 and O2) and CO oxidation can be achieved efficiently on CrO2 terminated PbTiO3, while circumventing oxygen (and sulfur) poisoning issues. One should note that NOx direct decomposition has been an open challenge in automotive emission control industry. Our method can expand the range of catalytically active elements to those which are not conventionally considered for catalysis and which are more economical, e.g., Cr (for NOx direct decomposition and CO oxidation) instead of canonical precious metal catalysts. Primary support from Toyota Motor Engineering and Manufacturing, North America, Inc.
Electron Jet of Asymmetric Reconnection
Khotyaintsev, Yu. V.; Graham, D. B.; Norgren, C.; Eriksson, E.; Li, W.; Johlander, A.; Vaivads, A.; Andre, M.; Pritchett, P. L.; Retino, A.;
We present Magnetospheric Multiscale observations of an electron-scale current sheet and electron outflow jet for asymmetric reconnection with guide field at the subsolar magnetopause. The electron jet observed within the reconnection region has an electron Mach number of 0.35 and is associated with electron agyrotropy. The jet is unstable to an electrostatic instability which generates intense waves with parallel electric field (E∥) amplitudes reaching up to 300 mV/m and potentials up to 20% of the electron thermal energy. We see evidence of interaction between the waves and the electron beam, leading to quick thermalization of the beam and stabilization of the instability. The wave phase speed is comparable to the ion thermal speed, suggesting that the instability is of Buneman type, and therefore introduces electron-ion drag and leads to braking of the electron flow. Our observations demonstrate that electrostatic turbulence plays an important role in the electron-scale physics of asymmetric reconnection.
Stable walking with asymmetric legs
Merker, Andreas; Rummel, Juergen; Seyfarth, Andre
Asymmetric leg function is often an undesired side-effect in artificial legged systems and may reflect functional deficits or variations in the mechanical construction. It can also be found in legged locomotion in humans and animals such as after an accident or in specific gait patterns. So far, it is not clear to what extent differences in the leg function of contralateral limbs can be tolerated during walking or running. Here, we address this issue using a bipedal spring-mass model for simulating walking with compliant legs. With the help of the model, we show that considerable differences between contralateral legs can be tolerated and may even provide advantages to the robustness of the system dynamics. A better understanding of the mechanisms and potential benefits of asymmetric leg operation may help to guide the development of artificial limbs or the design of novel therapeutic concepts and rehabilitation strategies.
Variable angle asymmetric cut monochromator
Smither, R.K.; Fernandez, P.B.
A variable incident angle, asymmetric cut, double crystal monochromator was tested for use on beamlines at the Advanced Photon Source (APS). For both undulator and wiggler beams the monochromator can expand the area of the footprint of the beam on the surface of the crystals to 50 times the area of the incident beam; this will reduce the slope errors by a factor of 2500. The asymmetric cut allows one to increase the acceptance angle for incident radiation and obtain a better match to the opening angle of the incident beam. This can increase the intensity of the diffracted beam by a factor of 2 to 5 and can make the beam more monochromatic, as well. The monochromator consists of two matched, asymmetric cut (18 degrees), silicon crystals mounted so that they can be rotated about three independent axes. Rotation around the first axis controls the Bragg angle. The second rotation axis is perpendicular to the diffraction planes and controls the increase of the area of the footprint of the beam on the crystal surface. Rotation around the third axis controls the angle between the surface of the crystal and the wider, horizontal axis of the beam and can make the footprint a rectangle with a minimum length for this area. The asymmetric cut is 18 degrees for the matched pair of crystals, which allows one to expand the footprint area by a factor of 50 for Bragg angles up to 19.15 degrees (6 keV for Si[111] planes). This monochromator, with proper cooling, will be useful for analyzing the high intensity x-ray beams produced by both undulators and wigglers at the APS.
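The quoted factor-of-50 footprint expansion and the roughly 2500-fold slope-error reduction are consistent with simple grazing-incidence geometry. A minimal sketch, assuming the footprint length scales as 1/sin(θ_Bragg − α) relative to the incident beam width (α being the asymmetric cut angle); this is an illustration, not the authors' calculation:

```python
import math

def footprint_expansion(bragg_deg, asym_cut_deg):
    """Footprint length on the crystal divided by the incident beam width,
    assuming footprint ~ 1 / sin(theta_Bragg - alpha_cut)."""
    return 1.0 / math.sin(math.radians(bragg_deg - asym_cut_deg))

factor = footprint_expansion(19.15, 18.0)   # Si(111) near 6 keV, 18 degree asymmetric cut
print(round(factor, 1))                     # ~49.8, i.e. the quoted expansion of ~50
print(round(factor ** 2))                   # ~2480, of the order of the quoted 2500-fold slope-error reduction
```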
Asymmetric information and bank runs
Gu, Chao
It is known that sunspots can trigger panic-based bank runs and that the optimal banking contract can tolerate panic-based runs. The existing literature assumes that these sunspots are based on a publicly observed extrinsic randomizing device. In this paper, I extend the analysis of panic-based runs to include an asymmetric-information, extrinsic randomizing device. Depositors observe different, but correlated, signals on the stability of the bank. I find that if the signals that depositors o...
Asymmetric information and macroeconomic dynamics
Hawkins, Raymond J.; Aoki, Masanao; Roy Frieden, B.
We show how macroeconomic dynamics can be derived from asymmetric information. As an illustration of the utility of this approach we derive the equilibrium density, non-equilibrium densities and the equation of motion for the response to a demand shock for productivity in a simple economy. Novel consequences of this approach include a natural incorporation of time dependence into macroeconomics and a common information-theoretic basis for economics and other fields seeking to link micro-dynamics and macro-observables.
Asymmetric Synthesis of Apratoxin E.
Mao, Zhuo-Ya; Si, Chang-Mei; Liu, Yi-Wen; Dong, Han-Qing; Wei, Bang-Guo; Lin, Guo-Qiang
An efficient method for asymmetric synthesis of apratoxin E 2 is described in this report. The chiral lactone 8, recycled from the degradation of saponin glycosides, was utilized to prepare the non-peptide fragment 6. In addition to this "from nature to nature" strategy, olefin cross-metathesis (CM) was applied as an alternative approach for the formation of the double bond. Moreover, pentafluorophenyl diphenylphosphinate was found to be an efficient condensation reagent for the macrocyclization.
Comprehensive asymmetric dark matter model
Lonsdale, Stephen J.; Volkas, Raymond R.
Asymmetric dark matter (ADM) is motivated by the similar cosmological mass densities measured for ordinary and dark matter. We present a comprehensive theory for ADM that addresses the mass density similarity, going beyond the usual ADM explanations of similar number densities. It features an explicit matter-antimatter asymmetry generation mechanism, has one fully worked out thermal history and suggestions for other possibilities, and meets all phenomenological, cosmological and astrophysical...
Tethering metal ions to photocatalyst particulate surfaces by bifunctional molecular linkers for efficient hydrogen evolution
Yu, Weili; Isimjan, Tayirjan T.; Del Gobbo, Silvano; Anjum, Dalaver Hussain; Abdel-Azeim, Safwat; Cavallo, Luigi; Garcia Esparza, Angel T.; Domen, Kazunari; Xu, Wei; Takanabe, Kazuhiro
A simple and versatile method for the preparation of photocatalyst particulates modified with effective cocatalysts is presented; the method involves the sequential soaking of photocatalyst particulates in solutions containing bifunctional organic linkers and metal ions. The modification of the particulate surfaces is a universal and reproducible method because the molecular linkers utilize strong covalent bonds, which in turn result in modified monolayer with a small but controlled quantity of metals. The photocatalysis results indicated that the CdS with likely photochemically reduced Pd and Ni, which were initially immobilized via ethanedithiol (EDT) as a linker, were highly efficient for photocatalytic hydrogen evolution from Na2S-Na2SO3-containing aqueous solutions. The method developed in this study opens a new synthesis route for the preparation of effective photocatalysts with various combinations of bifunctional linkers, metals, and photocatalyst particulate materials. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sorption of Pu(IV) from nitric acid by bifunctional anion-exchange resins
Bartsch, R.A.; Zhang, Z.Y.; Elshani, S.; Zhao, W.; Jarvinen, G.D.; Barr, M.E.; Marsh, S.F.; Chamberlin, R.M.
Anion exchange is attractive for separating plutonium because the Pu(IV) nitrate complex is very strongly sorbed and few other metal ions form competing anionic nitrate complexes. The major disadvantage of this process has been the unusually slow rate at which the Pu(IV) nitrate complex is sorbed by the resin. The paper summarizes the concept of bifunctional anion-exchange resins, proposed mechanism for Pu(IV) sorption, synthesis of the alkylating agent, calculation of K d values from Pu(IV) sorption results, and conclusions from the study of Pu(IV) sorption from 7M nitric acid by macroporous anion-exchange resins including level of crosslinking, level of alkylation, length of spacer, and bifunctional vs. monofunctional anion-exchange resins
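The Kd values mentioned above are conventionally obtained from batch-contact sorption data. A minimal sketch of the standard definition, Kd = [(C0 − Cf)/Cf]·(V/m), with purely hypothetical numbers rather than data from the study:

```python
def batch_kd(c_initial, c_final, volume_ml, resin_mass_g):
    """Batch distribution coefficient Kd (mL/g):
    Kd = [(C0 - Cf) / Cf] * (V / m)."""
    return (c_initial - c_final) / c_final * volume_ml / resin_mass_g

# Hypothetical numbers for illustration only (not data from the study):
print(batch_kd(c_initial=1.0e-4, c_final=2.0e-6, volume_ml=10.0, resin_mass_g=0.05))  # -> ~9800 mL/g
```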
Bifunctional fluorescent probes for detection of amyloid aggregates and reactive oxygen species
Needham, Lisa-Maria; Weber, Judith; Fyfe, James W. B.; Kabia, Omaru M.; Do, Dung T.; Klimont, Ewa; Zhang, Yu; Rodrigues, Margarida; Dobson, Christopher M.; Ghandi, Sonia; Bohndiek, Sarah E.; Snaddon, Thomas N.; Lee, Steven F.
Protein aggregation into amyloid deposits and oxidative stress are key features of many neurodegenerative disorders including Parkinson's and Alzheimer's disease. We report here the creation of four highly sensitive bifunctional fluorescent probes, capable of H2O2 and/or amyloid aggregate detection. These bifunctional sensors use a benzothiazole core for amyloid localization and boronic ester oxidation to specifically detect H2O2. We characterized the optical properties of these probes using both bulk fluorescence measurements and single-aggregate fluorescence imaging, and quantify changes in their fluorescence properties upon addition of amyloid aggregates of α-synuclein and pathophysiological H2O2 concentrations. Our results indicate these new probes will be useful to detect and monitor neurodegenerative disease.
Loop Replacement Enhances the Ancestral Antibacterial Function of a Bifunctional Scorpion Toxin
Shangfei Zhang
On the basis of the evolutionary relationship between scorpion toxins targeting K+ channels (KTxs) and antibacterial defensins (Zhu S., Peigneur S., Gao B., Umetsu Y., Ohki S., Tytgat J. Experimental conversion of a defensin into a neurotoxin: Implications for origin of toxic function. Mol. Biol. Evol. 2014, 31, 546–559), we performed protein engineering experiments to modify a bifunctional KTx (i.e., one with weak inhibitory activities on both K+ channels and bacteria) by substituting its carboxyl loop with the structurally equivalent loop of contemporary defensins. As expected, the engineered peptide (named MeuTXKα3-KFGGI) remarkably improved the antibacterial activity, particularly on some Gram-positive bacteria, including several antibiotic-resistant opportunistic pathogens. Compared with the unmodified toxin, its antibacterial spectrum was also broadened. Our work provides a new method to enhance the antibacterial activity of bifunctional scorpion venom peptides, which might be useful in engineering other proteins with an ancestral activity.
Center for Catalysis at Iowa State University
Kraus, George A.
The overall objective of this proposal is to enable Iowa State University to establish a Center that enjoys world-class stature and eventually enhances the economy through the transfer of innovation from the laboratory to the marketplace. The funds have been used to support experimental proposals from interdisciplinary research teams in areas related to catalysis and green chemistry. Specific focus areas included: • Catalytic conversion of renewable natural resources to industrial materials • Development of new catalysts for the oxidation or reduction of commodity chemicals • Use of enzymes and microorganisms in biocatalysis • Development of new, environmentally friendly reactions of industrial importance These focus areas intersect with barriers from the MYTP draft document. Specifically, section 2.4.3.1 Processing and Conversion has a list of bulleted items under Improved Chemical Conversions that includes new hydrogenation catalysts, milder oxidation catalysts, new catalysts for dehydration and selective bond cleavage catalysts. Specifically, the four sections are: 1. Catalyst development (7.4.12.A) 2. Conversion of glycerol (7.4.12.B) 3. Conversion of biodiesel (7.4.12.C) 4. Glucose from starch (7.4.12.D) All funded projects are part of a soybean or corn biorefinery. Two funded projects that have made significant progress toward goals of the MYTP draft document are: Catalysts to convert feedstocks with high fatty acid content to biodiesel (Kraus, Lin, Verkade) and Conversion of Glycerol into 1,3-Propanediol (Lin, Kraus). Currently, biodiesel is prepared using homogeneous base catalysis. However, as producers look for feedstocks other than soybean oil, such as waste restaurant oils and rendered animal fats, they have observed a large amount of free fatty acids contained in the feedstocks. Free fatty acids cannot be converted into biodiesel using homogeneous base-mediated processes. The CCAT catalyst system offers an integrated and cooperative catalytic
67Ga(NODASA): a new potential bifunctional radioligand for coupling to peptides
Andre, J.P.; Maecke, H.R.; Zehnder, M.; Macko, L.; Kaspar, A.
A new bifunctional chelator NODASA (1,4,7-triazacyclononane-1-succinic acid-4,7-diacetic acid) has been synthesised and its Ga(III) complex was crystallographically characterized by X-ray diffraction. The complex showed to be stable in serum and in acidic conditions and its stability constant was determined using a competition method with an auxiliary ligand. The conjugation of Ga(NODASA) to a model aminoacidamide proved the feasibility of a prelabelling approach. (author)
Synthesis of acid-base bifunctional mesoporous materials by oxidation and thermolysis
Yu, Xiaofang [College of Chemistry, Jilin University, Jiefang Road 2519, Changchun 130023 (China); Zou, Yongcun [State Key Laboratory of Inoranic Synthesis and Preparative Chemistryg, College of Chemistry, Jilin University, Changchun 130012 (China); Wu, Shujie; Liu, Heng [College of Chemistry, Jilin University, Jiefang Road 2519, Changchun 130023 (China); Guan, Jingqi, E-mail: [email protected] [College of Chemistry, Jilin University, Jiefang Road 2519, Changchun 130023 (China); Kan, Qiubin, E-mail: [email protected] [College of Chemistry, Jilin University, Jiefang Road 2519, Changchun 130023 (China)
Graphical abstract: A novel and efficient method has been developed for the synthesis of acid-base bifunctional catalyst. The obtained sample of SO{sub 3}H-MCM-41-NH{sub 2} containing amine and sulfonic acids exhibits excellent catalytic activity in aldol condensation reaction. Research highlights: {yields} Synthesize acid-base bifunctional mesoporous materials SO{sub 3}H-MCM-41-NH{sub 2}. {yields} Oxidation and then thermolysis to generate acidic site and basic site. {yields} Exhibit good catalytic performance in aldol condensation reaction between acetone and various aldehydes. -- Abstract: A novel and efficient method has been developed for the synthesis of acid-base bifunctional catalyst SO{sub 3}H-MCM-41-NH{sub 2}. This method was achieved by co-condensation of tetraethylorthosilicate (TEOS), 3-mercaptopropyltrimethoxysilane (MPTMS) and (3-triethoxysilylpropyl) carbamicacid-1-methylcyclohexylester (3TAME) in the presence of cetyltrimethylammonium bromide (CTAB), followed by oxidation and then thermolysis to generate acidic site and basic site. X-ray diffraction (XRD) and transmission electron micrographs (TEM) show that the resultant materials keep mesoporous structure. Thermogravimetric analysis (TGA), X-ray photoelectron spectra (XPS), back titration, solid-state {sup 13}C CP/MAS NMR and solid-state {sup 29}Si MAS NMR confirm that the organosiloxanes were condensed as a part of the silica framework. The bifunctional sample (SO{sub 3}H-MCM-41-NH{sub 2}) containing amine and sulfonic acids exhibits excellent acid-basic properties, which make it possess high activity in aldol condensation reaction between acetone and various aldehydes.
Radiation Induced Crosslinking of Polyethylene in the Presence of Bifunctional Vinyl Monomers
Joshi, M. S.; Singer, Klaus Albert Julius; Silverman, J.
Several reports have been published showing that the radiation induced grafting of bifunctional vinyl monomers to low density polyethylene results in a product with an unusually high density of crosslinks. The same grafting reactions are shown to reduce the incipient gel dose by more than a factor...... of fifty. This paper is concerned with the apparent crosslinking produced by the radiation grafting of two monomers to polyethylene: acrylic acid and acrylonitrile....
Generating carbyne equivalents with photoredox catalysis
Wang, Zhaofeng; Herraiz, Ana G.; Del Hoyo, Ana M.; Suero, Marcos G.
Carbon has the unique ability to bind four atoms and form stable tetravalent structures that are prevalent in nature. The lack of one or two valences leads to a set of species—carbocations, carbanions, radicals and carbenes—that is fundamental to our understanding of chemical reactivity. In contrast, the carbyne—a monovalent carbon with three non-bonded electrons—is a relatively unexplored reactive intermediate; the design of reactions involving a carbyne is limited by challenges associated with controlling its extreme reactivity and the lack of efficient sources. Given the innate ability of carbynes to form three new covalent bonds sequentially, we anticipated that a catalytic method of generating carbynes or related stabilized species would allow what we term an 'assembly point' disconnection approach for the construction of chiral centres. Here we describe a catalytic strategy that generates diazomethyl radicals as direct equivalents of carbyne species using visible-light photoredox catalysis. The ability of these carbyne equivalents to induce site-selective carbon-hydrogen bond cleavage in aromatic rings enables a useful diazomethylation reaction, which underpins sequencing control for the late-stage assembly-point functionalization of medically relevant agents. Our strategy provides an efficient route to libraries of potentially bioactive molecules through the installation of tailored chiral centres at carbon-hydrogen bonds, while complementing current translational late-stage functionalization processes. Furthermore, we exploit the dual radical and carbene character of the generated carbyne equivalent in the direct transformation of abundant chemical feedstocks into valuable chiral molecules.
Catalysis-by-design impacts assessment
Fassbender, L L; Young, J K [Pacific Northwest Lab., Richland, WA (USA); Sen, R K [Sen (R.K.) and Associates, Washington, DC (USA)
Catalyst researchers have always recognized the need to develop a detailed understanding of the mechanisms of catalytic processes, and have hoped that it would lead to developing a theoretical predictive base to guide the search for new catalysts. This understanding allows one to develop a set of hierarchical models, from fundamental atomic-level ab-initio models to detailed engineering simulations of reactor systems, to direct the search for optimized, efficient catalyst systems. During the last two decades, the explosion of advanced surface analysis techniques has helped considerably to develop the building blocks for understanding various catalytic reactions. An effort to couple these theoretical and experimental advances to develop a set of hierarchical models to predict the nature of catalytic materials is a program entitled "Catalysis-by-Design" (CBD). In assessing the potential impacts of CBD on US industry, the key point to remember is that the value of the program lies in developing a novel methodology to search for new catalyst systems. Industrial researchers can then use this methodology to develop proprietary catalysts. Most companies involved in catalyst R&D have two types of ongoing projects. The first type, what we call "market-driven R&D," are projects that support and improve upon a company's existing product lines. Projects of the second type, "technology-driven R&D," are longer term, involve the development of totally new catalysts, and are initiated through scientists' research ideas. The CBD approach will impact both types of projects. However, this analysis indicates that the near-term impacts will be on "market-driven" projects. The conclusions and recommendations presented in this report were obtained by the authors through personal interviews with individuals involved in a variety of industrial catalyst development programs and through the three CBD workshops held in the summer of 1989. 34 refs., 7 figs., 7 tabs.
Biodiesel forming reactions using heterogeneous catalysis
Liu, Yijun
Biodiesel synthesis from biomass provides a means for utilizing effectively renewable resources, a way to convert waste vegetable oils and animal fats to a useful product, a way to recycle carbon dioxide for a combustion fuel, and production of a fuel that is biodegradable, non-toxic, and has a lower emission profile than petroleum-diesel. Free fatty acid (FFA) esterification and triglyceride (TG) transesterification with low molecular weight alcohols constitute the synthetic routes to prepare biodiesel from lipid feedstocks. This project was aimed at developing a better understanding of important fundamental issues involved in heterogeneous catalyzed biodiesel forming reactions using mainly model compounds, representing part of on-going efforts to build up a rational base for assay, design, and performance optimization of solid acids/bases in biodiesel synthesis. As FFA esterification proceeds, water is continuously formed as a byproduct and affects reaction rates in a negative manner. Using sulfuric acid (as a catalyst) and acetic acid (as a model compound for FFA), the impact of increasing concentrations of water on acid catalysis was investigated. The order of the water effect on reaction rate was determined to be -0.83. Sulfuric acid lost up to 90% activity as the amount of water present increased. The nature of the negative effect of water on esterification was found to go beyond the scope of reverse hydrolysis and was associated with the diminished acid strength of sulfuric acid as a result of the preferential solvation by water molecules of its catalytic protons. The results indicate that as esterification progresses and byproduct water is produced, deactivation of a Bronsted acid catalyst like H2SO4 occurs. Using a solid composite acid (SAC-13) as an example of heterogeneous catalysts and sulfuric acid as a homogeneous reference, similar reaction inhibition by water was demonstrated for homogeneous and heterogeneous catalysis. This similarity together with
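The reported order of -0.83 in water corresponds to the slope of a log-log plot of rate versus water concentration. A minimal sketch with synthetic data (not the study's measurements) showing how such an order would be extracted:

```python
import numpy as np

# Synthetic illustration (not the study's data): rate = k * [H2O]**(-0.83)
water = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # water concentration, arbitrary units
rate = 0.12 * water ** (-0.83)                   # assumed power-law inhibition

# The apparent order in water is the slope of ln(rate) vs ln([H2O]).
order, ln_k = np.polyfit(np.log(water), np.log(rate), 1)
print(round(order, 2))                           # -0.83 recovered from the log-log fit
```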
Kinetic evolutionary behavior of catalysis-select migration
Wu Yuan-Gang; Lin Zhen-Quan; Ke Jian-Hong
We propose a catalysis-select migration driven evolution model of two-species (A- and B-species) aggregates, where one unit of species A migrates to species B under the catalysts of species C, while under the catalysts of species D the reaction becomes one unit of species B migrating to species A. Meanwhile the catalyst aggregates of species C perform self-coagulation, as do the species D aggregates. We study this catalysis-select migration driven kinetic aggregation phenomenon using the generalized Smoluchowski rate equation approach with the C-species catalysis-select migration rate kernel K(k; i, j) = K k i j and the D-species catalysis-select migration rate kernel J(k; i, j) = J k i j. The kinetic evolution behaviour is found to be dominated by the competition between the catalysis-select immigration and emigration, in which the competition is between JD₀ and KC₀ (D₀ and C₀ are the initial numbers of the monomers of species D and C, respectively). When JD₀ − KC₀ > 0, the aggregate size distribution of species A satisfies the conventional scaling form and that of species B satisfies a modified scaling form. And in the case of JD₀ − KC₀ < 0, ... the JD₀ − KC₀ > 0 case. (interdisciplinary physics and related areas of science and technology)
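For readability, the model ingredients quoted above can be summarized in the following hedged sketch (notation reconstructed from the abstract, not the paper's full rate equations):

```latex
% Hedged summary of the quoted kernels; i, j, k are aggregate sizes, K and J are rate constants.
\begin{align*}
  K(k;i,j) &= K\,k\,i\,j   && \text{(C-catalyzed migration of one A unit to B)}\\
  J(k;i,j) &= J\,k\,i\,j   && \text{(D-catalyzed migration of one B unit to A)}\\
  \Delta   &= J D_0 - K C_0 && \text{(competition parameter; its sign selects the scaling regime)}
\end{align*}
```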
Collagen/chitosan based two-compartment and bi-functional dermal scaffolds for skin regeneration
Wang, Feng [Department of Plastic Surgery and Burns, Shenzhen Second People' s Hospital, Shenzhen 518035 (China); Wang, Mingbo [Key Laboratory of Biomedical Materials and Implants, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057 (China); She, Zhending [Key Laboratory of Biomedical Materials and Implants, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057 (China); Shenzhen Lando Biomaterials Co., Ltd., Shenzhen 518057 (China); Fan, Kunwu; Xu, Cheng [Department of Plastic Surgery and Burns, Shenzhen Second People' s Hospital, Shenzhen 518035 (China); Chu, Bin; Chen, Changsheng [Key Laboratory of Biomedical Materials and Implants, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057 (China); Shi, Shengjun, E-mail: [email protected] [The Burns Department of Zhujiang Hospital, Southern Medical University, Guangzhou 510280 (China); Tan, Rongwei, E-mail: [email protected] [Key Laboratory of Biomedical Materials and Implants, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057 (China); Shenzhen Lando Biomaterials Co., Ltd., Shenzhen 518057 (China)
Inspired from the sophisticated bilayer structures of natural dermis, here, we reported collagen/chitosan based two-compartment and bi-functional dermal scaffolds. Two functions refer to mediating rapid angiogenesis based on recombinant human vascular endothelial growth factor (rhVEGF) and antibacterial from gentamicin, which were encapsulated in PLGA microspheres. The gentamicin and rhVEGF encapsulated PLGA microspheres were further combined with collagen/chitosan mixtures in low (lower layer) and high (upper layer) concentrations, and molded to generate the two-compartment and bi-functional scaffolds. Based on morphology and pore structure analyses, it was found that the scaffold has a distinct double layered porous and connective structure with PLGA microspheres encapsulated. Statistical analysis indicated that the pores in the upper layer and in the lower layer have great variations in diameter, indicative of a two-compartment structure. The release profiles of gentamicin and rhVEGF exceeded 28 and 49 days, respectively. In vitro culture of mouse fibroblasts showed that the scaffold can facilitate cell adhesion and proliferation. Moreover, the scaffold can obviously inhibit proliferation of Staphylococcus aureus and Serratia marcescens, exhibiting its unique antibacterial effect. The two-compartment and bi-functional dermal scaffolds can be a promising candidate for skin regeneration. - Highlights: • The dermal scaffold is inspired from the bilayer structures of natural dermis. • The dermal scaffold has two-compartment structures. • The dermal scaffold containing VEGF and gentamicin encapsulated PLGA microspheres • The dermal scaffold can facilitate cell adhesion and proliferation.
Bifunctional bridging linker-assisted synthesis and characterization of TiO{sub 2}/Au nanocomposites
Žunić, Vojka, E-mail: [email protected], E-mail: [email protected]; Kurtjak, Mario; Suvorov, Danilo [Jožef Stefan Institute, Advanced Materials Department (Slovenia)
Using a simple organic bifunctional bridging linker, titanium dioxide (TiO{sub 2}) nanoparticles were coupled with the Au nanoparticles to form TiO{sub 2}/Au nanocomposites with a variety of Au loadings. This organic bifunctional linker, meso-2,3-dimercaptosuccinic acid, contains two types of functional groups: (i) the carboxyl group, which enables binding to the TiO{sub 2}, and (ii) the thiol group, which enables binding to the Au. In addition, the organic bifunctional linker acts as a stabilizing agent to prevent the agglomeration and growth of the Au particles, resulting in the formation of highly dispersed Au nanoparticles. To form the TiO{sub 2}/Au nanocomposites in a simple way, we deliberately applied a synthetic method that simultaneously ensures: (i) the capping of the Au nanoparticles and (ii) the binding of different amounts of Au to the TiO{sub 2}. The TiO{sub 2}/Au nanocomposites formed with this method show enhanced UV and Vis photocatalytic activities when compared to the pure TiO{sub 2} nanopowders.Graphical Abstract.
Synthesis method of asymmetric gold particles.
Jun, Bong-Hyun; Murata, Michael; Hahm, Eunil; Lee, Luke P
Asymmetric particles can exhibit unique properties. However, reported synthesis methods for asymmetric particles hinder their application because these methods have a limited scale and lack the ability to afford particles of varied shapes. Herein, we report a novel synthetic method which has the potential to produce large quantities of asymmetric particles. Asymmetric rose-shaped gold particles were fabricated as a proof of concept experiment. First, silica nanoparticles (NPs) were bound to a hydrophobic micro-sized polymer containing 2-chlorotritylchloride linkers (2-CTC resin). Then, half-planar gold particles with rose-shaped and polyhedral structures were prepared on the silica particles on the 2-CTC resin. Particle size was controlled by the concentration of the gold source. The asymmetric particles were easily cleaved from the resin without aggregation. We confirmed that gold was grown on the silica NPs. This facile method for synthesizing asymmetric particles has great potential for materials science.
Evaluation of commercial and sulfated ZrO{sub 2} aiming at application in catalysis; Avaliacao de ZrO{sub 2} comercial e sulfatada visando aplicacao em catalise
Silva, F.N.; Dantas, J.; Costa, A.C.F.M., E-mail: [email protected] [Universidade Federal de Campina Grande (UFCG), PB (Brazil). Pos-Graduacao em Engenharia de Materiais; Pallone, E.M.J.A. [Universidade de Sao Paulo (USP), Pirassununga, SP (Brazil). Departamento de Ciencias Basicas; Dutra, R.C.L. [Instituto de Aeronautica e Espaco (AQI/IAE), Sao Jose dos Campos, SP (Brazil). Divisao de Quimica
This study evaluates the performance of commercial and sulfated ZrO{sub 2} for future application in catalysis. Commercial ZrO{sub 2} was provided by the company Saint-Gobain Zirpro. The sulfation occurred with an SO{sub 4}{sup -2} ion content of 30% compared to the mass of ZrO{sub 2}. The samples were characterized by XRD, FTIR, EDX and GD. The results revealed the formation of a monoclinic phase for the commercial sample, and a monoclinic major phase with tetragonal traces for the sulfated sample. The commercial ZrO{sub 2} showed a narrow, bimodal and asymmetric agglomerate distribution, while the sulfated sample showed a narrow, tetramodal and asymmetric agglomerate distribution. The presence of traces of the tetragonal phase in the SO{sub 4}{sup -2}/ZrO{sub 2} XRD, and the presence of SO{sub 3} in the EDX were good indicators for future use in catalysis to produce esters. (author)
Francois Garin: Pioneer work in catalysis through synchrotron radiation
Bazin, Dominique
Starting from the late seventies, the progressively increased availability of beamlines dedicated to X-ray absorption spectroscopy allowed the execution of experiments in chemistry. In this manuscript, I describe the contribution of Francois Garin at the frontier of heterogeneous catalysis and synchrotron radiation. Working at LURE as a scientific in charge of a beamline dedicated to X-ray absorption spectroscopy during almost twenty years and thus, having the opportunity to discuss with research groups working in heterogeneous catalysis in Europe as well as in the United States, it was quite easy to show that his work is clearly at the origin of current research in heterogeneous catalysis, not only in France, but in different synchrotron radiation centres. (authors)
2008 Gordon Research Conference on Catalysis [Conference summary report
Soled, Stuart L.; Gray, Nancy Ryan
The GRC on Catalysis is one of the most prestigious catalysis conferences as it brings together leading researchers from around the world to discuss their latest, most exciting work in catalysis. The 2008 conference will continue this tradition. The conference will cover a variety of themes including new catalytic materials, theoretical and experimental approaches to improve understanding of kinetics and transport phenomena, and state of the art nanoscale characterization probes to monitor active sites. The conference promotes interactions among established researchers and young scientists. It provides a venue for students to meet, talk to and learn from some of the world leading researchers in the area. It also gives them a platform for displaying their own work during the poster sessions. The informal nature of the meeting, excellent quality of the presentations and posters, and ability to meet many outstanding colleagues makes this an excellent conference.
LG tools for asymmetric wargaming
Stilman, Boris; Yakhnis, Alex; Yakhnis, Vladimir
Asymmetric operations represent conflict where one of the sides would apply military power to influence the political and civil environment, to facilitate diplomacy, and to interrupt specified illegal activities. This is a special type of conflict where the participants do not initiate full-scale war. Instead, the sides may be engaged in a limited open conflict or one or several sides may covertly engage another side using unconventional or less conventional methods of engagement. They may include peace operations, combating terrorism, counterdrug operations, arms control, support of insurgencies or counterinsurgencies, show of force. An asymmetric conflict can be represented as several concurrent interlinked games of various kinds: military, transportation, economic, political, etc. Thus, various actions of peace violators, terrorists, drug traffickers, etc., can be expressed via moves in different interlinked games. LG tools allow us to fully capture the specificity of asymmetric conflicts employing the major LG concept of hypergame. Hypergame allows modeling concurrent interlinked processes taking place in geographically remote locations at different levels of resolution and time scale. For example, it allows us to model an antiterrorist operation taking place simultaneously in a number of countries around the globe and involving a wide range of entities from individuals to combat units to governments. Additionally, LG allows us to model all sides of the conflict at their level of sophistication. Intelligent stakeholders are represented by means of LG generated intelligent strategies. To generate those strategies, in addition to its own mathematical intelligence, the LG algorithm may incorporate the intelligence of the top-level experts in the respective problem domains. LG models the individual differences between intelligent stakeholders. The LG tools make it possible to incorporate most of the known traits of a stakeholder, i.e., real personalities involved in
Incompressibility of asymmetric nuclear matter
Chen, Liewen; Cai, Baojun; Shen, Chun; Ko, Cheming; Xu, Jun; Li, Baoan
Using an isospin- and momentum-dependent modified Gogny (MDI) interaction, the Skyrme-Hartree-Fock (SHF) approach, and a phenomenological modified Skyrme-like (MSL) model, we have studied the incompressibility K_sat(δ) of isospin asymmetric nuclear matter at its saturation density. Our results show that in the expansion of K_sat(δ) in powers of the isospin asymmetry δ, i.e., K_sat(δ) = K_0 + K_sat,2 δ^2 + K_sat,4 δ^4 + O(δ^6), the magnitude of the 4th-order parameter K_sat,4 is generally small. The 2nd-order parameter K_sat,2 thus essentially characterizes the isospin dependence of the incompressibility of asymmetric nuclear matter at saturation density. Furthermore, K_sat,2 can be expressed as K_sat,2 = K_sym − 6L − (J_0/K_0)L in terms of the slope parameter L and the curvature parameter K_sym of the symmetry energy and the third-order derivative parameter J_0 of the energy of symmetric nuclear matter at saturation density, and we find the higher-order J_0 contribution to K_sat,2 generally cannot be neglected. Also, we have found a linear correlation between K_sym and L as well as between J_0/K_0 and K_0. Using these correlations together with the empirical constraints on K_0 and L, the nuclear symmetry energy E_sym(ρ_0) at normal nuclear density, and the nucleon effective mass, we have obtained an estimated value of K_sat,2 = -370 ± 120 MeV for the 2nd-order parameter in the isospin asymmetry expansion of the incompressibility of asymmetric nuclear matter at its saturation density. (author)
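The quoted relation K_sat,2 = K_sym − 6L − (J_0/K_0)L can be evaluated directly once values for the symmetry-energy parameters are assumed. A minimal sketch with illustrative round-number inputs, not the constrained values used in the paper:

```python
def k_sat2(k_sym, L, j0, k0):
    """Second-order isospin coefficient of the incompressibility at saturation,
    using the relation quoted above: K_sat,2 = K_sym - 6*L - (J_0/K_0)*L (all in MeV)."""
    return k_sym - 6.0 * L - (j0 / k0) * L

# Illustrative, assumed inputs (MeV):
print(round(k_sat2(k_sym=-100.0, L=60.0, j0=-700.0, k0=230.0)))   # about -277, inside the quoted -370 +/- 120 band
```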
Asymmetric effects in customer satisfaction
Füller, Johann; Matzler, Kurt; Faullant, Rita
The results of this study on customer satisfaction in snowboard areas show that the relationship between an attribute and overall satisfaction can indeed be asymmetric. A 30-item self-administered survey was completed by snowboarders (n=2526) in 51 areas in Austria, Germany, Switzerland and Italy....... Results show that waiting time is a dissatisfier; it has a significant impact on overall customer satisfaction in the low satisfaction condition and becomes insignificant in the high satisfaction situation. Restaurants and bars are hybrids, i.e. importance does not depend on performance. Slopes, fun...
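Asymmetric attribute effects of this kind are commonly tested with a penalty-reward style regression, in which each attribute is recoded into separate low-performance and high-performance dummies. A minimal sketch on synthetic data (the variable names and numbers are illustrative assumptions, not the survey's):

```python
import numpy as np

# Synthetic penalty-reward illustration (assumed method and numbers, not the survey data).
rng = np.random.default_rng(0)
n = 500
low_wait = rng.integers(0, 2, n)                       # 1 = long waiting time experienced
high_wait = rng.integers(0, 2, n) * (1 - low_wait)     # 1 = very short waiting time
overall = 5.0 - 1.2 * low_wait + 0.1 * high_wait + rng.normal(0.0, 0.5, n)

X = np.column_stack([np.ones(n), low_wait, high_wait])
beta, *_ = np.linalg.lstsq(X, overall, rcond=None)
print(beta)   # penalty (~-1.2) much larger than reward (~+0.1): waiting time acts as a dissatisfier
```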
Asymmetric Formal Synthesis of Azadirachtin.
Mori, Naoki; Kitahara, Takeshi; Mori, Kenji; Watanabe, Hidenori
An asymmetric formal synthesis of azadirachtin, a potent insect antifeedant, was accomplished in 30 steps to Ley's synthetic intermediate (longest linear sequence). The synthesis features: 1) rapid access to the optically active right-hand segment starting from the known 5-hydroxymethyl-2-cyclopentenone scaffold; 2) construction of the B and E rings by a key intramolecular tandem radical cyclization; 3) formation of the hemiacetal moiety in the C ring through the α-oxidation of the six-membered lactone followed by methanolysis. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Spontaneous baryogenesis from asymmetric inflaton
Takahashi, Fuminobu
We propose a variant scenario of spontaneous baryogenesis from asymmetric inflaton based on current-current interactions between the inflaton and matter fields with a non-zero B-L charge. When the inflaton starts to oscillate around the minimum after inflation, it may lead to excitation of a CP-odd component, which induces an effective chemical potential for the B-L number through the current-current interactions. We study concrete inflation models and show that the spontaneous baryogenesis scenario can be naturally implemented in the chaotic inflation in supergravity.
Isoporphyrin Intermediate in Heme Oxygenase Catalysis
Evans, John P.; Niemevz, Fernando; Buldain, Graciela; de Montellano, Paul Ortiz
Human heme oxygenase-1 (hHO-1) catalyzes the O2- and NADPH-dependent oxidation of heme to biliverdin, CO, and free iron. The first step involves regiospecific insertion of an oxygen atom at the α-meso carbon by a ferric hydroperoxide and is predicted to proceed via an isoporphyrin π-cation intermediate. Here we report spectroscopic detection of a transient intermediate during oxidation by hHO-1 of α-meso-phenylheme-IX, α-meso-(p-methylphenyl)-mesoheme-III, and α-meso-(p-trifluoromethylphenyl)-mesoheme-III. In agreement with previous experiments (Wang, J., Niemevz, F., Lad, L., Huang, L., Alvarez, D. E., Buldain, G., Poulos, T. L., and Ortiz de Montellano, P. R. (2004) J. Biol. Chem. 279, 42593–42604), only the α-biliverdin isomer is produced with concomitant formation of the corresponding benzoic acid. The transient intermediate observed in the NADPH-P450 reductase-catalyzed reaction accumulated when the reaction was supported by H2O2 and exhibited the absorption maxima at 435 and 930 nm characteristic of an isoporphyrin. Product analysis by reversed phase high performance liquid chromatography and liquid chromatography electrospray ionization mass spectrometry of the product generated with H2O2 identified it as an isoporphyrin that, on quenching, decayed to benzoylbiliverdin. In the presence of H2(18)O2, one labeled oxygen atom was incorporated into these products. The hHO-1-isoporphyrin complexes were found to have half-lives of 1.7 and 2.4 h for the p-trifluoromethyl- and p-methyl-substituted phenylhemes, respectively. The addition of NADPH-P450 reductase to the H2O2-generated hHO-1-isoporphyrin complex produced α-biliverdin, confirming its role as a reaction intermediate. Identification of an isoporphyrin intermediate in the catalytic sequence of hHO-1, the first such intermediate observed in hemoprotein catalysis, completes our understanding of the critical first step of heme oxidation. PMID:18487208
The Development of Visible-Light Photoredox Catalysis in Flow.
Garlets, Zachary J; Nguyen, John D; Stephenson, Corey R J
Visible-light photoredox catalysis has recently emerged as a viable alternative for radical reactions otherwise carried out with tin and boron reagents. It has been recognized that by merging photoredox catalysis with flow chemistry, slow reaction times, lower yields, and safety concerns may be obviated. While flow reactors have been successfully applied to reactions carried out with UV light, only recent developments have demonstrated the same potential of flow reactors for the improvement of visible-light-mediated reactions. This review examines the initial and continuing development of visible-light-mediated photoredox flow chemistry by exemplifying the benefits of flow chemistry compared with conventional batch techniques.
New and future developments in catalysis activation of carbon dioxide
New and Future Developments in Catalysis is a package of books that compile the latest ideas concerning alternate and renewable energy sources and the role that catalysis plays in converting new renewable feedstock into biofuels and biochemicals. Both homogeneous and heterogeneous catalysts and catalytic processes will be discussed in a unified and comprehensive approach. There will be extensive cross-referencing within all volumes. This volume presents a complete picture of all carbon dioxide (CO2) sources, outlines the environmental concerns regarding CO2, and critica
KCC1: First Nanoparticle developed by KAUST Catalysis Center
Basset, Jean-Marie
KCC1 is the first Nanoparticle developed by KAUST Catalysis Center. Director of KAUST Catalysis Center, Dr. Jean-Marie Basset, Senior Research Scientist at KCC, Dr. Vivek Polshettiwar, and Dr. Dongkyu Cha of the Advanced Nanofabrication Imaging & Characterization Core Laboratory discuss the details of this recent discovery. This video was produced by KAUST Visualization Laboratory and KAUST Technology Transfer and Innovation - Terence McElwee, Director, Technology Transfer and Innovation - [email protected] This technology is part of KAUST's technology commercialization program that seeks to stimulate development and commercial use of KAUST-developed technologies. For more information email us at [email protected].
Seventh BES (Basic Energy Sciences) catalysis and surface chemistry research conference
Research programs on catalysis and surface chemistry are presented. A total of fifty-seven topics are included. Areas of research include heterogeneous catalysis; catalysis in hydrogenation, desulfurization, gasification, and redox reactions; studies of surface properties and surface active sites; catalyst supports; chemical activation, deactivation; selectivity, chemical preparation; molecular structure studies; sorption and dissociation. Individual projects are processed separately for the data bases. (CBS)
Field factors for asymmetric collimators
Turner, J.R.; Butler, A.P.H.
In recent years manufacturers have been supplying linear accelerators with either a single pair or a dual pair of collimators. The use of a model to relate off-axis field factors to on-axis field factors obviates the need for repeat measurements whenever the asymmetric collimators are employed. We have investigated the variation of collimator scatter, Sc, with the distance x of the central ray from the central axis for a variety of non-square field sizes. Collimator scatter was measured by in-air measurements with a build-up cap. The Primary-Off-Centre-Ratio (POCR) was measured in-air by scanning orthogonally across the beam with an ionization chamber. The result of the investigation is the useful prediction of off-axis field factors for a range of rectangular asymmetric fields using the simple product of the on-axis field factor and the POCR in air. The effect of asymmetry on the quality of the beam and hence the percent depth dose will be discussed. (author)
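The prediction model described above is a simple product. A minimal sketch, assuming the off-axis field factor for an asymmetric rectangular field is the on-axis field factor of the same field multiplied by the in-air POCR at the displaced central ray, with hypothetical numbers:

```python
def off_axis_field_factor(on_axis_field_factor, pocr):
    """Assumed model: off-axis field factor for an asymmetric rectangular field
    = on-axis field factor of the same field size * in-air POCR at the displaced central ray."""
    return on_axis_field_factor * pocr

# Hypothetical numbers for illustration only (not measured values from the paper):
print(off_axis_field_factor(on_axis_field_factor=0.982, pocr=1.016))   # -> ~0.998
```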
Ocular Toxicity Profile of ST-162 and ST-168 as Novel Bifunctional MEK/PI3K Inhibitors.
Smith, Andrew; Pawar, Mercy; Van Dort, Marcian E; Galbán, Stefanie; Welton, Amanda R; Thurber, Greg M; Ross, Brian D; Besirli, Cagri G
ST-162 and ST-168 are small-molecule bifunctional inhibitors of MEK and PI3K signaling pathways that are being developed as novel antitumor agents. Previous small-molecule and biologic MEK inhibitors demonstrated ocular toxicity events that were dose limiting in clinical studies. We evaluated in vitro and in vivo ocular toxicity profiles of ST-162 and ST-168. Photoreceptor cell line 661W and adult retinal pigment epithelium cell line ARPE-19 were treated with increasing concentrations of bifunctional inhibitors. Western blots, cell viability, and caspase activity assays were performed to evaluate MEK and PI3K inhibition and dose-dependent in vitro toxicity, and compared with monotherapy. In vivo toxicity profile was assessed by intravitreal injection of ST-162 and ST-168 in Dutch-Belted rabbits, followed by ocular examination and histological analysis of enucleated eyes. Retinal cell lines treated with ST-162 or ST-168 exhibited dose-dependent inhibition of MEK and PI3K signaling. Compared with inhibition by monotherapies and their combinations, bifunctional inhibitors demonstrated reduced cell death and caspase activity. In vivo, both bifunctional inhibitors exhibited a more favorable toxicity profile when compared with MEK inhibitor PD0325901. Novel MEK and PI3K bifunctional inhibitors ST-162 and ST-168 demonstrate favorable in vitro and in vivo ocular toxicity profiles, supporting their further development as potential therapeutic agents targeting multiple aggressive tumors.
An Artificial Biomimetic Catalysis Converting CO2 to Green Fuels
Li, Caihong; Wang, Zhiming
Researchers are devoted to designing catalytic systems with higher activity, selectivity, and stability, ideally based on cheap and earth-abundant elements, to reduce CO2 to value-added hydrocarbon fuels under mild conditions driven by visible light. This work may offer profound inspiration for that effort. A bi-functional molecular iron catalyst was designed that could not only catalyze the two-electron reduction of CO2 to CO but also further convert CO to CH4 with a high selectivity of 82%, stably over several days.
Asymmetric Frontal Brain Activity and Parental Rejection
Huffmeijer, R.; Alink, L.R.A.; Tops, M.; Bakermans-Kranenburg, M.J.; van IJzendoorn, M.H.
Asymmetric frontal brain activity has been widely implicated in reactions to emotional stimuli and is thought to reflect individual differences in approach-withdrawal motivation. Here, we investigate whether asymmetric frontal activity, as a measure of approach-withdrawal motivation, also predicts
Worst Asymmetrical Short-Circuit Current
Arana Aristi, Iván; Holmstrøm, O; Grastrup, L
In a typical power plant, the production scenario and the short-circuit time were found for the worst asymmetrical short-circuit current. Then, a sensitivity analysis on the missing generator values was realized in order to minimize the uncertainty of the results. Afterward the worst asymmetrical...
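The abstract does not state its calculation method, but a common first estimate of the worst-case peak asymmetrical short-circuit current is the IEC 60909-style expression i_p = κ·√2·I_k'' with κ = 1.02 + 0.98·exp(−3R/X). A minimal sketch under that assumption, with hypothetical numbers:

```python
import math

def peak_asymmetrical_current(ik_sym_rms, r_over_x):
    """IEC 60909-style estimate (an assumption here, not the paper's method):
    i_p = kappa * sqrt(2) * I_k'' with kappa = 1.02 + 0.98 * exp(-3 R/X)."""
    kappa = 1.02 + 0.98 * math.exp(-3.0 * r_over_x)
    return kappa * math.sqrt(2.0) * ik_sym_rms

# Hypothetical fault: 40 kA initial symmetrical RMS current, R/X = 0.05
print(round(peak_asymmetrical_current(40e3, 0.05) / 1e3, 1), "kA peak")   # ~105 kA
```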
Mechanochemistry assisted asymmetric organocatalysis: A sustainable approach
Pankaj Chauhan
Ball-milling and pestle and mortar grinding have emerged as powerful methods for the development of environmentally benign chemical transformations. Recently, the use of these mechanochemical techniques in asymmetric organocatalysis has increased. This review highlights the progress in asymmetric organocatalytic reactions assisted by mechanochemical techniques.
Density functional theory in surface science and heterogeneous catalysis
Nørskov, Jens Kehlet; Scheffler, M.; Toulhoat, H.
Solid surfaces are used extensively as catalysts throughout the chemical industry, in the energy sector, and in environmental protection. Recently, density functional theory has started providing new insight into the atomic-scale mechanisms of heterogeneous catalysis, helping to interpret the large...
International symposium on 'applications of zeolites in heterogeneous catalysis'
The International Symposium on applications of zeolites in heterogeneous catalysis, organized by the Hungarian Chemical Society (Szeged, Hung. 9/11-14/78), included 48 papers, which were published in Acta Phys. Chem. (Szeged) 24.
Molecular catalysis and high-volume organic synthesis
Khidekel, M L; Vasserberg, V E
The field of catalysis is very wide. The properties of catalysts are briefly reviewed and compared with the properties of enzymes. Various uses of enzymes in industry (sugar from corn, cellulose breakdown, etc.) are pointed out. The types of homogeneous and heterogeneous catalysts for use in organic synthesis are discussed. 48 refs. (SJR)
Pincer-porphyrin hybrids : Synthesis, self-assembly, and catalysis
Suijkerbuijk, B.M.J.M.
Metal complexes play an important role in established research areas such as catalysis and materials chemistry as well as in emerging fields of chemical exploration such as bioinorganic chemistry. Changes in the metal center's ligand environment, i.e., the nature and number of the Lewis basic atoms
Sustainable Catalysis: Energy Efficient Reactions and Applications
This book chapter discusses various catalysts for environmental remediation. Detailed information on catalysis using ferrate and ferrite oxidation, TiO2 photocatalysis, and new catalysts (i.e., graphene, perovskites and graphitic carbon nitride) is provided for the degradation of...
Examining the role of glutamic acid 183 in chloroperoxidase catalysis
Yi, X.; Conesa, A.; Punt, P.J.; Hager, L.P.
Site-directed mutagenesis has been used to investigate the role of glutamic acid 183 in chloroperoxidase catalysis. Based on the x-ray crystallographic structure of chloroperoxidase, Glu-183 is postulated to function on distal side of the heme prosthetic group as an acid-base catalyst in
Alkylation of hydrothiophosphoryl compounds in conditions of interphase catalysis
Aladzheva, I.M.; Odinets, I.L.; Petrovskij, P.V.; Mastryukova, T.A.; Kabachkin, M.I.
Interphase (phase-transfer) catalysis was used to develop a general method for the synthesis of compounds bearing a thiophosphoryl group. The effects of the nature of the hydrothiophosphoryl compound, the alkylating agent, the two-phase system and the reaction conditions on the alkylation product yields were investigated in detail.
Towards a generic model of catalysis
We consider polarizabilities and hardness/softness parameters to see how local polarizations of the electron density may also be responsible for activation of a localised area of a large molecule. KEY WORDS: Electrostatic catalysis, Geometrical strain, Environment strain, Entasis, Polarizability, Hardness and softness.
bond activation and catalysis by Ru-pac complexes
and their reactivity towards the oxidation of a few organic compounds. Keywords: kinetics; catalysis; -O–O- bond activation; Ru-pac complex; oxidation. Ru-pac complexes exhibit catalytic properties, in homogeneous conditions in the presence of oxygen atom donors, that mimic the biological enzymatic oxi...
Nitrogen doped carbon nanotubes: synthesis, characterization and catalysis
van Dommele, S.
Nitrogen containing Carbon Nanotubes (NCNT) have altered physical- and chemical properties with respect to polarity, conductivity and reactivity as compared to conventional carbon nanotubes (CNT) and have potential for use in electronic applications or catalysis. In this thesis the incorporation of
Role of catalysis in sustainable production of synthetic elastomers
...productions, the impact of the synthetic elastomer business cannot be overlooked. The need of ... Keywords: elastomers; catalysis; tyres and automobiles; mechanism; manufacturing process. ... level fractional factorial design model was also developed to ... Polybutadiene can be manufactured by a number of processes ...
Two-dimensional zeolites in catalysis: current status and perspectives
Opanasenko, Maksym; Roth, Wieslaw Jerzy; Čejka, Jiří
Vol. 6, No. 8 (2016), pp. 2467-2484. ISSN 2044-4753. R&D Projects: GA ČR GP13-17593P; GA ČR(CZ) GAP106/12/0189. Institutional support: RVO:61388955. Keywords: mesoporous molecular sieves * catalysis * acylation reactions. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 5.773, year: 2016
Designing asymmetric multiferroics with strong magnetoelectric coupling
Lu, Xuezeng; Xiang, Hongjun; Rondinelli, James; Materials Theory; Design Group Team
Multiferroics offer exciting opportunities for electric-field control of magnetism. Single-phase multiferroics suitable for such applications at room temperature need much more study. Here, we propose the concept of an alternative type of multiferroics, namely, the ``asymmetric multiferroic.'' In asymmetric multiferroics, two locally stable ferroelectric states are not symmetrically equivalent, leading to different magnetic properties between these two states. Furthermore, we predict from first principles that a Fe-Cr-Mo superlattice with the LiNbO3-type structure is such an asymmetric multiferroic. The strong ferrimagnetism, high ferroelectric polarization, and significant dependence of the magnetic transition temperature on polarization make this asymmetric multiferroic an ideal candidate for realizing electric-field control of magnetism at room temperature. Our study suggests that the asymmetric multiferroic may provide an alternative playground for voltage control of magnetism and find its applications in spintronics and quantum computing.
A case of asymmetrical arthrogryposis
Hageman, G.; Vette, J.K.; Willemse, J.
Following the introduction of the concept that arthrogryposis is a symptom and not a clinical entity, a case of the very rare asymmetric form of neurogenic arthrogryposis is presented. The asymmetry of congenital contractures and weakness is associated with hemihypotrophy. The value of muscular CT-scanning prior to muscle biopsy is demonstrated. Muscular CT-scanning shows the extent of adipose tissue, which has replaced damaged muscles and thereby indicates the exact site for muscle biopsy. Since orthopaedic treatment in arthrogryposis can be unrewarding due to severe muscular degeneration, preoperative scanning may provide additional important information on muscular function and thus be of benefit for surgery. The advantage of muscular CT-scanning in other forms of arthrogryposis requires further determination. The differential diagnosis with Werdnig-Hoffmann disease is discussed. (author)
Asymmetric dark matter (ADM) is motivated by the similar cosmological mass densities measured for ordinary and dark matter. We present a comprehensive theory for ADM that addresses the mass density similarity, going beyond the usual ADM explanations of similar number densities. It features an explicit matter-antimatter asymmetry generation mechanism, has one fully worked out thermal history and suggestions for other possibilities, and meets all phenomenological, cosmological and astrophysical constraints. Importantly, it incorporates a deep reason for why the dark matter mass scale is related to the proton mass, a key consideration in ADM models. Our starting point is the idea of mirror matter, which offers an explanation for dark matter by duplicating the standard model with a dark sector related by a Z2 parity symmetry. However, the dark sector need not manifest as a symmetric copy of the standard model in the present day. By utilizing the mechanism of "asymmetric symmetry breaking" with two Higgs doublets in each sector, we develop a model of ADM where the mirror symmetry is spontaneously broken, leading to an electroweak scale in the dark sector that is significantly larger than that of the visible sector. The weak sensitivity of the ordinary and dark QCD confinement scales to their respective electroweak scales leads to the necessary connection between the dark matter and proton masses. The dark matter is composed of either dark neutrons or a mixture of dark neutrons and metastable dark hydrogen atoms. Lepton asymmetries are generated by the C P -violating decays of heavy Majorana neutrinos in both sectors. These are then converted by sphaleron processes to produce the observed ratio of visible to dark matter in the universe. The dynamics responsible for the kinetic decoupling of the two sectors emerges as an important issue that we only partially solve.
Hydrodesulfurization on Transition Metal Catalysts: Elementary Steps of C-S Bond Activation and Consequences of Bifunctional Synergies
Yik, Edwin Shyn-Lo
The presence of heteroatoms (e.g. S, N) in crude oil poses formidable challenges in petroleum refining processes as a result of their irreversible binding on catalytically active sites at industrially relevant conditions. With increasing pressures from legislation that continues to lower the permissible levels of sulfur content in fuels, hydrodesulfurization (HDS), the aptly named reaction for removing heteroatoms from organosulfur compounds, has become an essential feedstock pretreatment step to remove deleterious species from affecting downstream processing. Extensive research in the area has identified the paradigm catalysts for desulfurization; MoSx or WSx, promoted with Co or Ni metal; however, despite the vast library of both empirical and fundamental studies, a clear understanding of site requirements, the elementary steps of C-S hydrogenolysis, and the properties that govern HDS reactivity and selectivity have been elusive. While such a lack of rigorous assessments has not prevented technological advancements in the field of HDS catalysis, fundamental interpretations can inform rational catalyst and process design, particularly in light of new requirements for "deep" desulfurization and in the absence of significant hydrotreatment catalyst developments in recent decades. We report HDS rates of thiophene, which belongs to a class of compounds that are most resistant to sulfur removal (i.e. substituted alkyldibenzothiophenes), over a range of industrially relevant temperatures and pressures, measured at differential conditions and therefore revealing their true kinetic origins. These rates, normalized by the number of exposed metal atoms, on various SiO 2-supported, monometallic transition metals (Re, Ru, Pt), range several orders of magnitude. Under relevant HDS conditions, Pt and Ru catalysts form a layer of chemisorbed sulfur on surfaces of a metallic bulk, challenging reports that assume the latter exists as its pyrite sulfide phase during reaction. While
Benzimidazolyl methyliminodiacetic acids: new bifunctional chelators of technetium for hepatobiliary scintigraphy
Hunt, F.C.; Wilson, J.G.; Maddalena, D.J.
Dimethyl- and chloro-substituted benzimidazolyl methyliminodiacetic acids have been synthesized and evaluated as new bifunctional chelators of 99mTc. Stannous chelates of these compounds were prepared as freeze-dried kits and labeled with 99mTc. The radiopharmaceuticals thus prepared were rapidly excreted by the hepatobiliary system of rats and rabbits with little urinary excretion. The chloro- compound had higher biliary and lower urinary excretion than the dimethyl- compound; however, both technetium complexes provided good scintigraphic images of the hepatobiliary system in animals. The compounds behaved similarly to the 99mTc-lidocaine iminodiacetic acid [HIDA] complexes with respect to their biliary elimination.
Bifunctional groups grafted polyethersulfone magnetic beads for selective sequestration of plutonium
Paul, Sumana; Aggarwal, S.K.; Pandey, A.K.
The present study involves synthesis of polyethersulfone (PES) beads grafted with two different monomers viz. 2-hydroxyethylmethacrylate phosphoric acid ester (HEMP) and 2-acrylamido-2-methyl-1-propane sulphonic acid (AMPS) by photo-induced free radical polymerization method. The selection of bifunctional polymer was based on our previous studies, which indicated its efficacy for selective preconcentration of Pu from 3-4 mol L -1 HNO 3 . The HEMP-co-AMPS grafted PES beads were used for selective extraction of plutonium from dissolver solution
A Proton-Switchable Bifunctional Ruthenium Complex That Catalyzes Nitrile Hydroboration.
Geri, Jacob B; Szymczak, Nathaniel K
A new bifunctional pincer ligand framework bearing pendent proton-responsive hydroxyl groups was prepared and metalated with Ru(II) and subsequently isolated in four discrete protonation states. Stoichiometric reactions with H2 and HBPin showed facile E-H (E = H or BPin) activation across a Ru(II)-O bond, providing access to unusual Ru-H species with strong interactions with neighboring proton and boron atoms. These complexes were found to promote the catalytic hydroboration of ketones and nitriles under mild conditions, and the activity was highly dependent on the ligand's protonation state. Mechanistic experiments revealed a crucial role of the pendent hydroxyl groups for catalytic activity.
Tunable catalytic properties of bi-functional mixed oxides in ethanol conversion to high value compounds
Ramasamy, Karthikeyan K.; Gray, Michel J.; Job, Heather M.; Smith, Colin D.; Wang, Yong
A highly versatile ethanol conversion process to selectively generate high value compounds is presented here. By changing the reaction temperature, ethanol can be selectively converted to >C2 alcohols/oxygenates or phenolic compounds over a hydrotalcite derived bi-functional MgO–Al2O3 catalyst via a complex cascade mechanism. Reaction temperature plays a role in whether aldol condensation or acetone formation is the path taken in changing the product composition. This article contains the catalytic activity comparison between the mono-functional and physical mixture counterparts to the hydrotalcite derived mixed oxides and the detailed discussion on the reaction mechanisms.
Bifunctional electrode performance for zinc-air flow cells with pulse charging
Pichler, Birgit; Weinberger, Stephan; Reš�ec, Lucas; Grimmer, Ilena; Gebetsroither, Florian; Bitschnau, Brigitte; Hacker, Viktor
Highlights: •Manufacture of bi-catalyzed bifunctional air electrodes via scalable process. •Direct synthesis of NiCo 2 O 4 on carbon nanofibers or nickel powder support. •450 charge and discharge cycles over 1000 h at 50 mA cm −2 demonstrated. •Pulse charging with 150 mA cm −2 is successfully applied on air electrodes. •Charge and discharge ΔV of <0.8 V at 50 mA cm −2 when supplied with O 2. -- Abstract: Bifunctional air electrodes with tuned composition consisting of two precious metal-free oxide catalysts are manufactured for application in rechargeable zinc-air flow batteries and electrochemically tested via long-term pulse charge and discharge cycling experiments at 50 mA cm −2 (mean). NiCo 2 O 4 spinel, synthesized via direct impregnation on carbon nanofibers or nickel powder and characterized by energy dispersive X-ray spectroscopy and X-ray diffraction experiments, shows high activity toward oxygen evolution reaction with low charge potentials of < 2.0 V vs. Zn/Zn 2+ . La 0.6 Sr 0.4 Co 0.2 Fe 0.8 O 3 perovskite exhibits bifunctional activity and outperforms the NiCo 2 O 4 spinel in long-term stability tenfold. By combining the catalysts in one bi-catalyzed bifunctional air electrode, stable performances of more than 1000 h and 450 cycles are achieved when supplied with oxygen and over 650 h and 300 cycles when supplied with synthetic air. In addition, the pulse charging method, which is beneficial for compact zinc deposition, is successfully tested on air electrodes during long-term operation. The oxygen evolution potentials during pulse, i.e. at tripled charge current density of 150 mA cm −2 , are only 0.06–0.08 V higher compared to constant charging current densities. Scanning electron microscopy confirms that mechanical degradation caused by bubble formation during oxygen evolution results in slowly decreasing discharge potentials.
Oxidations of amines with molecular oxygen using bifunctional gold–titania catalysts
Klitgaard, Søren Kegnæs; Egeblad, Kresten; Mentzel, Uffe Vie
Over the past decades it has become clear that supported gold nanoparticles are surprisingly active and selective catalysts for several green oxidation reactions of oxygen-containing hydrocarbons using molecular oxygen as the stoichiometric oxidant. We here report that bifunctional gold–titania catalysts can be employed to facilitate the oxidation of amines into amides with high selectivity. Furthermore, we report that pure titania is in fact itself a catalyst for the oxidation of amines with molecular oxygen under very mild conditions. We demonstrate that these new methodologies open up for two...
Basic evaluation of 67Ga labeled digoxin derivative as a metal-labeled bifunctional radiopharmaceutical
Fujibayashi, Yasuhisa; Konishi, Junji; Takemura, Yasutaka; Taniuchi, Hideyuki; Iijima, Naoko; Yokoyama, Akira.
To develop metal-labeled digoxin radiopharmaceuticals with affinity with anti-digoxin antibody as well as Na + , K + -ATPase, a digoxin derivative conjugated with deferoxamine was synthesized. The derivative had a high binding affinity with 67 Ga at deferoxamine introduced to the terminal sugar ring of digoxin. The 67 Ga labeled digoxin derivative showed enough in vitro binding affinity and selectivity to anti-digoxin antibody as well as Na + , K + -ATPase. The 67 Ga labeled digoxin derivative is considered to be a potential metal-labeled bifunctional radiopharmaceutical for digoxin RIA as well as myocardial Na + , K + -ATPase imaging. (author)
Neurodegeneration in D-bifunctional protein deficiency: diagnostic clues and natural history using serial magnetic resonance imaging
Khan, Aneal [University of Calgary, Department of Medical Genetics and Pediatrics, Alberta Children's Hospital, Calgary, AB (Canada)]; Wei, Xing-Chang [University of Calgary, Department of Radiology, Alberta Children's Hospital, Calgary, AB (Canada)]; Snyder, Floyd F. [Alberta Children's Hospital, Biochemical Genetics Laboratory, Calgary, AB (Canada)]; Mah, Jean K. [University of Calgary, Division of Neurology, Department of Pediatrics, Calgary, AB (Canada)]; Waterham, Hans; Wanders, Ronald J.A. [University of Amsterdam, Academic Medical Center, Lab Genetic Metabolic Diseases, Amsterdam (Netherlands)]
We report serial neurodegenerative changes on neuroimaging in a rare peroxisomal disease called D-bifunctional protein deficiency. The pattern of posterior to anterior demyelination with white matter disease resembles X-linked adrenoleukodystrophy. We feel this case is important to (1) highlight that D-bifunctional protein deficiency should be considered in cases where the neuroimaging resembles X-linked adrenoleukodystrophy, (2) to show different stages of progression to help identify this disease using neuroimaging in children, and (3) to show that neuroimaging suggesting a leukodystrophy can warrant peroxisomal beta-oxidation studies in skin fibroblasts even when plasma very long chain fatty acids are normal. (orig.)
Synergistic extraction of Am(III) using HTTA and bi-functional (DHDECMP) and mono-functional (TBP) donors
Pai, S.A.; Lohithakshan, K.V.; Mithapara, P.D.; Aggarwal, S.K.
The equilibrium constant (log Ks) for the organic phase synergistic reaction of the Am(III)-HTTA system with the bi-functional neutral donor di-hexyl di-ethyl carbamoylmethyl phosphonate (DHDECMP) was found to be about two orders of magnitude higher than that of the mono-functional neutral donor (TBP) with comparable basicity values. This log Ks value, along with a large positive entropy change with DHDECMP compared to that with TBP, confirms that neutral donors like DHDECMP behave as bi-functional donors, in sharp contrast to their mono-functional behaviour with Pu(VI). (author)
Palladium catalysed asymmetric alkylation of benzophenone Schiff ...
...some time in the application of phase transfer catalysis to the preparation of ... to free them from moisture contamination. Ionic liq... ...mined by chiral HPLC analysis (chiracel OD, hexane:2-propanol) ... chromatography on alumina 90 active basic (0.063–0.200 mm) ... ...ing solubility of ionic liquids in water at 20 °C, for the...
A review of new developments in the Friedel–Crafts alkylation – From green chemistry to asymmetric catalysis
The development of efficient Friedel–Crafts alkylations of arenes and heteroarenes using only catalytic amounts of a Lewis acid has gained much attention over the last decade. The new catalytic approaches described in this review are favoured over classical Friedel–Crafts conditions as benzyl-, propargyl- and allyl alcohols, or styrenes, can be used instead of toxic benzyl halides. Additionally, only low catalyst loadings are needed to provide a wide range of products. Following a short introduction about the origin and classical definition of the Friedel–Crafts reaction, the review will describe the different environmentally benign substrates which can be applied today as an approach towards greener processes. Additionally, the first diastereoselective and enantioselective Friedel–Crafts-type alkylations will be highlighted.
Chiral ligands derived from monoterpenes: application in the synthesis of optically pure secondary alcohols via asymmetric catalysis.
El Alami, Mohammed Samir Ibn; El Amrani, Mohamed Amin; Agbossou-Niedercorn, Francine; Suisse, Isabelle; Mortreux, André
The preparation of optically pure secondary alcohols in the presence of catalysts based on chiral ligands derived from monoterpenes, such as pinenes, limonenes and carenes, is reviewed. A wide variety of these ligands has been synthesized and used in several catalytic reactions, including hydrogen transfer, C-C bond formation via addition of organozinc compounds to aldehydes, hydrosilylation, and oxazaborolidine reduction, leading to high activities and enantioselectivities. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Asymmetric C-C Bond-Formation Reaction with Pd: How to Favor Heterogeneous or Homogeneous Catalysis?
Reimann, S.; Grunwaldt, Jan-Dierk; Mallat, T.
The enantioselective allylic alkylation of (E)-1,3-diphenylallyl acetate was studied to clarify the heterogeneous or homogeneous character of the Pd/Al2O3-(R)-BINAP catalyst system. A combined approach was applied: the catalytic tests were completed with in situ XANES measurements to follow...
Molecular-Level Design of Heterogeneous Chiral Catalysis
Zaera, Francisco
The following is a proposal to continue our multi-institutional research on heterogeneous chiral catalysis. Our team combines the use of surface-sensitive analytical techniques for the characterization of model systems with quantum and statistical mechanical calculations to interpret experimental data and guide the design of future research. Our investigation focuses on the interrelation among the three main mechanisms by which enantioselectivity can be bestowed to heterogeneous catalysts, namely: (1) by templating chirality via the adsorption of chiral supramolecular assemblies, (2) by using chiral modifiers capable of forming chiral complexes with the reactant and force enantioselective surface reactions, and (3) by forming naturally chiral surfaces using imprinting chiral agents. Individually, the members of our team are leaders in these various aspects of chiral catalysis, but the present program provides the vehicle to generate and exploit the synergies necessary to address the problem in a comprehensive manner. Our initial work has advanced the methodology needed for these studies, including an enantioselective titration procedure to identify surface chiral sites, infrared spectroscopy in situ at the interface between gases or liquids and solids to mimic realistic catalytic conditions, and DFT and Monte Carlo algorithms to simulate and understand chirality on surfaces. The next step, to be funded by the monies requested in this proposal, is to apply those methods to specific problems in chiral catalysis, including the identification of the requirements for the formation of supramolecular surface structures with enantioselective behavior, the search for better molecules to probe the chiral nature of the modified surfaces, the exploration of the transition from supramolecular to one-to-one chiral modification, the correlation of the adsorption characteristics of one-to-one chiral modifiers with their physical properties, in particular with their configuration
Chaos of several typical asymmetric systems
Feng Jingjing; Zhang Qichang; Wang Wei
The threshold for the onset of chaos in asymmetric nonlinear dynamic systems can be determined using an extended Padé method. In this paper, a double-well asymmetric potential system with damping under external periodic excitation is investigated, as well as an asymmetric triple-well potential system under external and parametric excitation. The integrals of Melnikov functions are established to demonstrate that the motion is chaotic. Threshold values are acquired when homoclinic and heteroclinic bifurcations occur. The results of analytical and numerical integration are compared to verify the effectiveness and feasibility of the analytical method.
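For orientation, the Melnikov criterion referred to here takes, in its generic textbook form (the specific asymmetric potentials and excitation terms of the cited work are not reproduced), the following shape: for a weakly perturbed planar system $\dot{x}=f(x)+\epsilon\, g(x,t)$ possessing a homoclinic or heteroclinic orbit $q_0(t)$ of the unperturbed problem, the Melnikov function is $M(t_0)=\int_{-\infty}^{\infty} f(q_0(t))\wedge g(q_0(t),t+t_0)\,dt$, and transverse intersection of the stable and unstable manifolds, the analytical indicator of chaotic motion, is signalled by simple zeros of $M(t_0)$. The threshold forcing amplitude quoted for a damped, periodically forced well system then follows from balancing the oscillatory (forcing) and constant (damping) contributions to $M$.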
Modelling asymmetric growth in crowded plant communities
Damgaard, Christian
A class of models that may be used to quantify the effect of size-asymmetric competition in crowded plant communities by estimating a community specific degree of size-asymmetric growth for each species in the community is suggested. The model consists of two parts: an individual size-asymmetric growth part, where growth is assumed to be proportional to a power function of the size of the individual, and a term that reduces the relative growth rate as a decreasing function of the individual plant size and the competitive interactions from other plants in the neighbourhood. A sketch of this model structure is given below.
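As an illustration of the model structure described in the preceding abstract, written under assumed functional forms and with hypothetical parameter names ($r$, $\theta$, $\alpha$, $c_{ij}$) rather than the exact formulation of the cited work, the growth of plant $i$ could be sketched as $dw_i/dt = r\,w_i^{\theta}\,\exp\!\big(-\alpha w_i - \sum_{j\neq i} c_{ij} w_j\big)$, where $w_i$ is plant size, the power $w_i^{\theta}$ captures the degree of size-asymmetric growth, and the exponential factor reduces the relative growth rate as a decreasing function of the plant's own size and of the competitive interactions from neighbouring plants.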
Bifunctional silica nanospheres with 3-aminopropyl and phenyl groups. Synthesis approach and prospects of their applications
Kotsyuda, Sofiya S.; Tomina, Veronika V.; Zub, Yuriy L.; Furtat, Iryna M.; Lebed, Anastasia P.; Vaclavikova, Miroslava; Melnyk, Inna V.
Spherical silica particles with bifunctional (≡Si(CH2)3NH2/≡SiC6H5) surface layers were synthesized by the Stöber method using ternary alkoxysilane systems. The influence of the synthesis conditions, such as temperature and stirring time, on the process of nanoparticle formation was studied. The presence of the introduced functional groups was confirmed by FTIR. The composition of the surface layers, examined by elemental analysis and acid-base titration, was shown to be independent of the synthesis temperature. However, the size of the obtained particles depends on the synthesis temperature and, according to photon cross-correlation spectroscopy, can be varied from 50 to 846 nm. Variations in the electric charge of the N-functional groups were observed in the obtained nanospheres and attributed to the different surface locations of these groups and their surroundings. The sorption of Cu(II) ions by the functionalized silicas depends on the concentration of amino groups, which correlates with the isoelectric point values (determined to vary from 8.26 to 9.21). The bifunctional nanoparticles adsorb 99.0 mg/g of methylene blue, compared with 48.0 mg/g by the silica sample with only amino groups. The nanospheres, both with and without adsorbed Cu2+, demonstrate reasonable antibacterial activity against S. aureus ATCC 25923, depending on the particle concentration in water suspension.
Synthesis of novel bifunctional chelators and their use in preparing monoclonal antibody conjugates for tumor targeting
Westerberg, D.A.; Carney, P.L.; Rogers, P.E.; Kline, S.J.; Johnson, D.K.
Bifunctional derivatives of the chelating agents ethylenediaminetetraacetic acid and diethylenetriaminepentaacetic acid, in which a p-isothiocyanatobenzyl moiety is attached at the methylene carbon atom of one carboxymethyl arm, were synthesized by reductive alkylation of the relevant polyamine with (p-nitrophenyl)pyruvic acid followed by carboxymethylation, reduction of the nitro group, and reaction with thiophosgene. The resulting isothiocyanate derivatives reacted with monoclonal antibody B72.3 to give antibody-chelator conjugates containing 3 mol of chelator per mole of immunoglobulin, without significant loss of immunological activity. Such conjugates, labeled with the radioisotopic metal indium-111, selectively bound a human colorectal carcinoma implanted in nude mice when given intravenously. Uptake into normal tissues was comparable to or lower than that reported for analogous conjugates with known bifunctional chelators. It is concluded that substitution with a protein reactive group at this position in polyaminopolycarboxylate chelators does not alter the chelating properties of these molecules to a sufficient extent to adversely affect biodistribution and thus provides a general method for the synthesis of such chelators.
Synthesis and characterization of new bifunctional nanocomposites possessing upconversion and oxygen-sensing properties
Liu Lina; Li Bin; Qin Ruifei; Zhao Haifeng; Ren Xinguang; Su Zhongmin
A new type of bifunctional nanocomposites for biomedical applications, upconversion NaYF4:Yb3+,Tm3+ nanoparticles coated with Ru(II) complex chemically doped SiO2, has been developed by combining the useful functions of upconversion and oxygen-sensing properties into one nanoparticle. NaYF4:Yb3+,Tm3+ nanoparticles were successfully coated with an Ru(II) complex doped SiO2 shell with a thickness of ∼30 nm, and the surface of the SiO2 was functionalized with amines. The obtained nanocomposites exhibited bright blue upconversion emission, and the luminescent emission intensity of the Ru(II) complex in the nanocomposites was sensitive to oxygen. Compared with the simple mixture of Ru(II) complex and SiO2, the core-shell nanocomposites showed better linearity between the emission intensity of the Ru(II) complex and oxygen concentrations. These bifunctional nanocomposites may find applications in biochemical and biomedical fields, such as biolabels and optical oxygen sensors, which can measure the oxygen concentrations in biological fluids.
Modeling of asymmetrical boost converters
Eliana Isabel Arango Zuluaga
The asymmetrical interleaved dual boost (AIDB) is a fifth-order DC/DC converter designed to interface photovoltaic (PV) panels. The AIDB produces small current harmonics to the PV panels, reducing the power losses caused by the converter operation. Moreover, the AIDB provides a large voltage conversion ratio, which is required to step up the PV voltage to the large dc-link voltage used in grid-connected inverters. To reject irradiance and load disturbances, the AIDB must be operated in a closed loop, and a dynamic model is required. Given that the AIDB converter operates in Discontinuous Conduction Mode (DCM), classical modeling approaches based on Continuous Conduction Mode (CCM) are not valid. Moreover, classical DCM modeling techniques are not suitable for the AIDB converter. Therefore, this paper develops a novel mathematical model for the AIDB converter, which is suitable for control purposes. The proposed model is based on the calculation of a diode current that is typically disregarded. Moreover, because the traditional correction to the second duty cycle reported in literature is not effective, a new equation is designed. The model accuracy is contrasted with circuital simulations in time and frequency domains, obtaining satisfactory results. Finally, the usefulness of the model in control applications is illustrated with an application example.
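For context on the "second duty cycle" mentioned above: in a basic boost converter operating in DCM, volt-second balance on the inductor gives the textbook relation $d_2 = d_1\,V_{in}/(V_o - V_{in})$, where $d_1$ is the switch on-time duty cycle, $d_2$ the diode conduction interval, $V_{in}$ the input (PV) voltage and $V_o$ the output voltage. This is only the conventional single-boost expression, given here for orientation; the AIDB-specific correction developed in the cited paper is not reproduced.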
Asymmetric Supercapacitor Electrodes and Devices.
Choudhary, Nitin; Li, Chao; Moore, Julian; Nagaiah, Narasimha; Zhai, Lei; Jung, Yeonwoong; Thomas, Jayan
The world is recently witnessing an explosive development of novel electronic and optoelectronic devices that demand more-reliable power sources that combine higher energy density and longer-term durability. Supercapacitors have become one of the most promising energy-storage systems, as they present multifold advantages of high power density, fast charging-discharging, and long cyclic stability. However, the intrinsically low energy density inherent to traditional supercapacitors severely limits their widespread applications, triggering researchers to explore new types of supercapacitors with improved performance. Asymmetric supercapacitors (ASCs) assembled using two dissimilar electrode materials offer a distinct advantage of wide operational voltage window, and thereby significantly enhance the energy density. Recent progress made in the field of ASCs is critically reviewed, with the main focus on an extensive survey of the materials developed for ASC electrodes, as well as covering the progress made in the fabrication of ASC devices over the last few decades. Current challenges and a future outlook of the field of ASCs are also discussed. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Bioinspired smart asymmetric nanochannel membranes.
Zhang, Zhen; Wen, Liping; Jiang, Lei
Bioinspired smart asymmetric nanochannel membranes (BSANM) have been explored extensively to achieve the delicate ionic transport functions comparable to those of living organisms. The abiotic system exhibits superior stability and robustness, allowing for promising applications in many fields. In view of the abundance of research concerning BSANM in the past decade, herein, we present a systematic overview of the development of the state-of-the-art BSANM system. The discussion is focused on the construction methodologies based on raw materials with diverse dimensions (i.e. 0D, 1D, 2D, and bulk). A generic strategy for the design and construction of the BSANM system is proposed first and put into context with recent developments from homogeneous to heterogeneous nanochannel membranes. Then, the basic properties of the BSANM are introduced including selectivity, gating, and rectification, which are associated with the particular chemical and physical structures. Moreover, we summarized the practical applications of BSANM in energy conversion, biochemical sensing and other areas. In the end, some personal opinions on the future development of the BSANM are briefly illustrated. This review covers most of the related literature reported since 2010 and is intended to build up a broad and deep knowledge base that can provide a solid information source for the scientific community.
Reflection asymmetric shapes in nuclei
Ahmad, I.; Carpenter, M.P.; Emling, H.
Experimental data show that there is no even-even nucleus with a reflection asymmetric shape in its ground state. Maximum octupole-octupole correlations occur in nuclei in the mass 224 (N∼134, Z∼88) region. Parity doublets, which are the characteristic signature of octupole deformation, have been observed in several odd mass Ra, Ac and Pa nuclei. Intertwined negative and positive parity levels have been observed in several even-even Ra and Th nuclei above spin ∼8ħ. In both cases, the opposite parity states are connected by fast E1 transitions. In some medium-mass nuclei intertwined negative and positive parity levels have also been observed above spin ∼7ħ. The nuclei which exhibit octupole deformation in this mass region are 144 Ba, 146 Ba and 146 Ce; 142 Ba, 148 Ce, 150 Ce and 142 Xe do not show these characteristics. No case of parity doublet has been observed in the mass 144 region. 32 refs., 16 figs., 1 tab
Twin Higgs Asymmetric Dark Matter.
García García, Isabel; Lasenby, Robert; March-Russell, John
We study asymmetric dark matter (ADM) in the context of the minimal (fraternal) twin Higgs solution to the little hierarchy problem, with a twin sector with gauged SU(3)'×SU(2)', a twin Higgs doublet, and only third-generation twin fermions. Naturalness requires the QCD' scale Λ'_QCD ≃ 0.5-20 GeV, and that t' is heavy. We focus on the light b' quark regime, m_b' ≲ Λ'_QCD, where QCD' is characterized by a single scale Λ'_QCD with no light pions. A twin baryon number asymmetry leads to a successful dark matter (DM) candidate: the spin-3/2 twin baryon, Δ' ∼ b'b'b', with a dynamically determined mass (∼5Λ'_QCD) in the preferred range for the DM-to-baryon ratio Ω_DM/Ω_baryon ≃ 5. Gauging the U(1)' group leads to twin atoms (Δ'-τ̄' bound states) that are successful ADM candidates in significant regions of parameter space, sometimes with observable changes to DM halo properties. Direct detection signatures satisfy current bounds, at times modified by dark form factors.
Lift production through asymmetric flapping
Jalikop, Shreyas; Sreenivas, K. R.
At present, there is a strong interest in developing Micro Air Vehicles (MAV) for applications like disaster management and aerial surveys. At these small length scales, the flight of insects and small birds suggests that unsteady aerodynamics of flapping wings can offer many advantages over fixed wing flight, such as hovering-flight, high maneuverability and high lift at large angles of attack. Various lift generating mechanisms such as delayed stall, wake capture and wing rotation contribute towards our understanding of insect flight. We address the effect of asymmetric flapping of wings on lift production. By visualising the flow around a pair of rectangular wings flapping in a water tank and numerically computing the flow using a discrete vortex method, we demonstrate that net lift can be produced by introducing an asymmetry in the upstroke-to-downstroke velocity profile of the flapping wings. The competition between generation of upstroke and downstroke tip vortices appears to hold the key to understanding this lift generation mechanism.
Iminodiacetic acid as bifunctional linker for dimerization of cyclic RGD peptides
Xu, Dong; Zhao, Zuo-Quan; Chen, Shu-Ting; Yang, Yong; Fang, Wei; Liu, Shuang
Introduction: In this study, I2P-RGD 2 was used as the example to illustrate a novel approach for dimerization of cyclic RGD peptides. The main objective of this study was to explore the impact of bifunctional linkers (glutamic acid vs. iminodiacetic acid) on tumor-targeting capability and excretion kinetics of the 99m Tc-labeled dimeric cyclic RGD peptides. Methods: HYNIC-I2P-RGD 2 was prepared by reacting I2P-RGD 2 with HYNIC-OSu in the presence of diisopropylethylamine, and was evaluated for its α v β 3 binding affinity against 125 I-echistatin bound to U87MG glioma cells. 99m Tc-I2P-RGD 2 was prepared with high specific activity (~185 GBq/μmol). The athymic nude mice bearing U87MG glioma xenografts were used to evaluate its biodistribution properties and image quality in comparison with those of 99m Tc-3P-RGD 2 . Results: The IC 50 value for HYNIC-I2P-RGD 2 was determined to be 39 ± 6 nM, which was very close to that (IC 50 = 33 ± 5 nM) of HYNIC-3P-RGD 2 . Replacing glutamic acid with iminodiacetic acid had little impact on α v β 3 binding affinity of cyclic RGD peptides. 99m Tc-I2P-RGD 2 and 99m Tc-3P-RGD 2 shared similar tumor uptake values over the 2 h period, and its α v β 3 -specificity was demonstrated by a blocking experiment. The uptake of 99m Tc-I2P-RGD 2 was significantly lower than 99m Tc-3P-RGD 2 in the liver and kidneys. The U87MG glioma tumors were visualized by SPECT with excellent contrast using both 99m Tc-I2P-RGD 2 and 99m Tc-3P-RGD 2 . Conclusion: Iminodiacetic acid is an excellent bifunctional linker for dimerization of cyclic RGD peptides. Bifunctional linkers have significant impact on the excretion kinetics of 99m Tc radiotracers. Because of its lower liver uptake and better tumor/liver ratios, 99m Tc-I2P-RGD 2 may have advantages over 99m Tc-3P-RGD 2 for diagnosis of tumors in chest region. -- Graphical abstract: This report presents novel approach for dimerization of cyclic RGD peptides using iminodiacetic acid as a
Value-added Chemicals from Biomass by Heterogeneous Catalysis
Voss, Bodil
...feedstock, having retained one C-C bond originating from the biomass precursor, the aspects of utilising heterogeneous catalysis for its conversion to value-added chemicals are investigated. Through a simple analysis of known, but not industrialised, catalytic routes, the direct conversion of ethanol... The results of the thesis, taking one example of biomass conversion, show that the utilisation of biomass in the production of chemicals by heterogeneous catalysis is promising from a technical point of view. But risks of market price excursions dominated by fossil based chemicals further set a criterion... ...been implemented. The subject of chemical production has received less attention. This thesis describes and evaluates the quest for an alternative conversion route, based on a biomass feedstock and employing a heterogeneous catalyst capable of converting the feedstock to a value-added chemical...
Magnetic Catalysis of Chiral Symmetry Breaking: A Holographic Prospective
Filev, V.; Rashkov, R.
We review a recent investigation of the effect of magnetic catalysis of mass generation in holographic Yang-Mills theories. We aim at a self-contained and pedagogical form of the review. We provide a brief field theory background and review the basics of holographic flavordynamics. The main part of the paper investigates the influence of an external magnetic field on holographic gauge theories dual to the D3/D5- and D3/D7-brane intersections. Among the observed phenomena are the spontaneous breaking of a global internal symmetry, Zeeman splitting of the energy levels, and the existence of pseudo-Goldstone modes. An analytic derivation of the Gell-Mann-Oakes-Renner relation for the D3/D7 setup is reviewed. In the D3/D5 case, the pseudo-Goldstone modes satisfy a nonrelativistic dispersion relation. The studies reviewed confirm the universal nature of the magnetic catalysis of mass generation.
Hydrogen Tunneling Links Protein Dynamics to Enzyme Catalysis
Klinman, Judith P.; Kohen, Amnon
The relationship between protein dynamics and function is a subject of considerable contemporary interest. Although protein motions are frequently observed during ligand binding and release steps, the contribution of protein motions to the catalysis of bond making/breaking processes is more difficult to probe and verify. Here, we show how the quantum mechanical hydrogen tunneling associated with enzymatic C–H bond cleavage provides a unique window into the necessity of protein dynamics for achieving optimal catalysis. Experimental findings support a hierarchy of thermodynamically equilibrated motions that control the H-donor and -acceptor distance and active-site electrostatics, creating an ensemble of conformations suitable for H-tunneling. A possible extension of this view to methyl transfer and other catalyzed reactions is also presented. The impact of understanding these dynamics on the conceptual framework for enzyme activity, inhibitor/drug design, and biomimetic catalyst design is likely to be substantial. PMID:23746260
Homogeneous Catalysis with Metal Complexes Fundamentals and Applications
Duca, Gheorghe
The book on homogeneous catalysis with metal complexes deals with the description of reductive-oxidative reactions of metal complexes in a liquid phase (in polar solvents, mainly in water, and less in nonpolar solvents). The exceptional importance of redox processes in chemical systems, in the reactions occurring in living organisms, in environmental processes in the atmosphere, water and soil, and in industrial technologies (especially in food-processing industries) is discussed. The detailed practical aspects of the established regularities are explained for solving specific practical tasks in various fields of industrial chemistry, biochemistry, medicine, analytical chemistry and ecological chemistry. The main scope of the book is the survey and systematization of the latest advances in homogeneous catalysis with metal complexes. It gives an overview of the research results and practical experience accumulated by the author during the last decade.
An efficient catalyst for asymmetric Reformatsky reaction
...rate enantioselectivity using N,N-dialkylnorephedrines as chiral ligands. ... temperatures also, there was no product conversion. ... Optimization of reaction conditions for the asymmetric Reformatsky reaction between benzaldehyde and α-...
Asymmetric cryptography based on wavefront sensing.
Peng, Xiang; Wei, Hengzheng; Zhang, Peng
A system of asymmetric cryptography based on wavefront sensing (ACWS) is proposed for the first time to our knowledge. One of the most significant features of asymmetric cryptography is that a trapdoor one-way function is required and constructed by analogy to wavefront sensing, in which the public key may be derived from optical parameters, such as the wavelength or the focal length, while the private key may be obtained from a kind of regular point array. The ciphertext is generated by the encoded wavefront and represented with an irregular array. In such an ACWS system, the encryption key is not identical to the decryption key, which is another important feature of an asymmetric cryptographic system. The processes of asymmetric encryption and decryption are formulated mathematically and demonstrated with a set of numerical experiments.
Asymmetrical Representation of Gender in Amharic
...in its grammar. Gender representation in this language is heavily asymmetrical. ... In dictionaries where Amharic appears either as the target or the source language, verbs are entered ...
Beam-beam issues in asymmetric colliders
Furman, M.A.
We discuss generic beam-beam issues for proposed asymmetric e+e− colliders. We illustrate the issues by choosing, as examples, the proposals by Cornell University (CESR-B), KEK, and SLAC/LBL/LLNL (PEP-II)
Congenital asymmetric crying face: a case report
Semra Kara
Congenital asymmetric crying face is an anomaly caused by unilateral absence or weakness of the depressor anguli oris muscle. The major finding of the disease is the absence or weakness of the outward and downward movement of the commissure during crying. The other expression muscles are normal and the face is symmetric at rest. The asymmetry in congenital asymmetric crying face is most evident during infancy but decreases with age. Congenital asymmetric crying face can be associated with cervicofacial, musculoskeletal, respiratory, genitourinary and central nervous system anomalies. It is diagnosed by physical examination. This paper presents a six-day-old infant with congenital asymmetric crying face and discusses the case in terms of diagnosis and disease features.
Asymmetric total synthesis of cladosporin and isocladosporin.
Zheng, Huaiji; Zhao, Changgui; Fang, Bowen; Jing, Peng; Yang, Juan; Xie, Xingang; She, Xuegong
The first asymmetric total syntheses of cladosporin and isocladosporin were accomplished in 8 steps with 8% overall yield and 10 steps with 26% overall yield, respectively. The relative configuration of isocladosporin was determined via this total synthesis.
Magnetically Modified Asymmetric Supercapacitors, Phase I
National Aeronautics and Space Administration — This Small Business Innovation Research Phase I project is for the development of an asymmetric supercapacitor that will have improved energy density and cycle life....
Bionic catalysis of porphyrin for electrochemical detection of nucleic acids
Li Jie; Lei Jianping; Wang Quanbo; Wang Peng; Ju Huangxian
Highlights: ► This is the first application of bionic catalysis of porphyrin as a detection probe in bioanalysis. ► A porphyrin–DNA–gold nanoparticle probe is synthesized. ► The binding model between FeTMPyP and DNA is verified. ► The detection probe shows excellent electrocatalytic behavior toward the reduction of O2. ► The biosensor exhibited good performance with a wide linear range and high specificity. - Abstract: A novel electrochemical strategy was designed for the detection of DNA based on the bionic catalysis of porphyrin. The detection probe was prepared via the assembly of thiolated double-stranded DNA (dsDNA) with gold nanoparticles (AuNPs), and then interacted with cationic iron (III) meso-tetrakis(N-methylpyridinium-4-yl) porphyrin (FeTMPyP) via groove binding along the dsDNA surface. The resulting nanocomplex was characterized with transmission electron microscopy, UV–vis absorption and circular dichroism spectroscopy. The FeTMPyP–DNA–AuNPs probe on the gold electrode demonstrated excellent electrocatalytic behavior toward the reduction of O2 due to the large loading of FeTMPyP and good conductivity. Based on bionic catalysis of porphyrin for the reduction of O2, the resulting biosensor exhibited a good performance for the detection of DNA with a wide linear range from 1 × 10−12 to 1 × 10−8 mol L−1 and a detection limit of 2.5 × 10−13 mol L−1 at a signal/noise ratio of 3. More importantly, the biosensor presented excellent ability to discriminate the perfectly complementary target and the mismatched strand. This strategy could be conveniently extended for detection of other biomolecules. To the best of our knowledge, this is the first application of bionic catalysis of porphyrin as a detection probe and opens new opportunities for sensitive detection of biorecognition events.
Heterogeneous Catalysis: Understanding for Designing, and Designing for Applications
Corma Canós, Avelino
Despite the introduction of high-throughput and combinatorial methods that certainly can be useful in the process of catalyst optimization, it is recognized that the generation of fundamental knowledge at the molecular level is key for the development of new concepts and for reaching the final objective of solid catalysts by design …
Cooperative catalysis by silica-supported organic functional groups
Margelefsky, Eric L.; Zeidan, Ryan K.; Davis, Mark E.
Hybrid inorganic–organic materials comprising organic functional groups tethered from silica surfaces are versatile, heterogeneous catalysts. Recent advances have led to the preparation of silica materials containing multiple, different functional groups that can show cooperative catalysis; that is, these functional groups can act together to provide catalytic activity and selectivity superior to what can be obtained from either monofunctional materials or homogeneous catalysts. This tutorial...
Chemistry of Fluorinated Carbon Acids: Synthesis, Physicochemical Properties, and Catalysis.
Yanai, Hikaru
The bis[(trifluoromethyl)sulfonyl]methyl (Tf2CH; Tf=SO2CF3) group is known to be one of the strongest carbon acid functionalities. The acidity of such carbon acids in the gas phase is stronger than that of sulfuric acid. Our recent investigations have demonstrated that this type of carbon acid works as a novel acid catalyst. In this paper, recent achievements in carbon acid chemistry by our research group, including synthesis, physicochemical properties, and catalysis, are summarized.
Complexation and biodistribution study of 111In complexes of bifunctional phosphinic acid analogues of H4DOTA
Forsterová, Michaela; Zimová, Jana; Petrík, M.; Lázníček, M.; Lázníčková, A.; Hermann, P.; Melichar, František
Vol. 2, No. 337 (2007), p. 34. ISSN 1619-7070. R&D Projects: GA AV ČR 1QS100480501. Institutional research plan: CEZ:AV0Z10480505. Keywords: bifunctional H4DOTA ligands * phosphinic acid analogues * complexation of 111In. Subject RIV: FR - Pharmacology; Medical Chemistry
Engineered Asymmetric Composite Membranes with Rectifying Properties.
Wen, Liping; Xiao, Kai; Sainath, Annadanam V Sesha; Komura, Motonori; Kong, Xiang-Yu; Xie, Ganhua; Zhang, Zhen; Tian, Ye; Iyoda, Tomokazu; Jiang, Lei
Asymmetric composite membranes with rectifying properties are developed by grafting pH-stimulus-responsive materials onto the top layer of the composite structure, which is prepared by two novel block copolymers using a phase-separation technique. This engineered asymmetric composite membrane shows potential applications in sensors, filtration, and nanofluidic devices. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Asymmetric synthesis with microbes; Biseibutsu wo katsuyoshita kogaku kassei kagobutsu no koritsutekina gosei
Kondo, S. [Ritsumeikan Univ., Tokyo (Japan). Faculty of Science and Engineering]
The use of microbial enzymes has been widely extended as an effective means for asymmetric synthesis. However, the asymmetric selectivity often decreases due to competitive catalysis among several enzymes in a microbe. The author has studied the development of methods for controlling the stereoselectivity using subtle differences in enzyme characteristics. When the Michaelis constant (Km) differs between two enzymes, the enzyme with the lower Km becomes active as the substrate concentration decreases, expressing its stereoselectivity. Reduction of α-ketoesters in water by bread yeast (Saccharomyces cerevisiae) yields products of S-configuration, whereas those of R-configuration are obtained in an organic solvent in the presence of a small amount of water. This is because the reaction field of the yeast is in water and the R-configuration enzyme of lower Km acts on the substrate, whose concentration in water has decreased due to two-phase partitioning between the organic solvent and water. Further, the use of differences in the inhibitor-induced decrease of enzyme activity for the stereoselective synthesis of α-hydroxyketones (I) from α-diketones, and the use of differences in thermal endurance to improve the formation ratio among I, are also introduced. 6 refs., 3 figs., 2 tabs.
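The selectivity argument in this abstract follows directly from the Michaelis-Menten rate law, $v = V_{max}[S]/(K_m + [S])$: at low substrate concentration, $[S] \ll K_m$, the rate reduces to $v \approx (V_{max}/K_m)[S]$, so, assuming broadly comparable $V_{max}$ values (an assumption made here only for illustration), the enzyme with the smaller $K_m$ increasingly dominates as $[S]$ falls and imposes its stereoselectivity. This is consistent with the reported switch from S- to R-configured products when the effective aqueous substrate concentration is reduced by partitioning into an organic solvent.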
Rhodium-Catalyzed Asymmetric N-H Functionalization of Quinazolinones with Allenes and Allylic Carbonates: The First Enantioselective Formal Total Synthesis of (-)-Chaetominine.
Zhou, Yirong; Breit, Bernhard
An unprecedented asymmetric N-H functionalization of quinazolinones with allenes and allylic carbonates was successfully achieved by rhodium catalysis with the assistance of chiral bidentate diphosphine ligands. The high efficiency and practicality of this method was demonstrated by a low catalyst loading of 1 mol % as well as excellent chemo-, regio-, and enantioselectivities with broad functional group compatibility. Furthermore, this newly developed strategy was applied as key step in the first enantioselective formal total synthesis of (-)-chaetominine. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Understanding plasma catalysis through modelling and simulation—a review
Neyts, E C; Bogaerts, A
Plasma catalysis holds great promise for environmental applications, provided that the process viability can be maximized in terms of energy efficiency and product selectivity. This requires a fundamental understanding of the various processes taking place and especially the mutual interactions between plasma and catalyst. In this review, we therefore first examine the various effects of the plasma on the catalyst and of the catalyst on the plasma that have been described in the literature. Most of these studies are purely experimental. The urgently needed fundamental understanding of the mechanisms underpinning plasma catalysis, however, may also be obtained through modelling and simulation. Therefore, we also provide here an overview of the modelling efforts that have been developed already, on both the atomistic and the macroscale, and we identify the data that can be obtained with these models to illustrate how modelling and simulation may contribute to this field. Last but not least, we also identify future modelling opportunities to obtain a more complete understanding of the various underlying plasma catalytic effects, which is needed to provide a comprehensive picture of plasma catalysis. (paper)
Crown ethers and phase transfer catalysis in polymer science
Carraher, Charles
Phase transfer catalysis or interfacial catalysis is a synthetic technique involving transport of an organic or inorganic salt from a solid or aqueous phase into an organic liquid where reaction with an organic-soluble substrate takes place. Over the past 15 years there has been an enormous amount of effort invested in the development of this technique in organic synthesis. Several books and numerous review articles have appeared summarizing applications in which low molecular weight catalysts are employed. These generally include either crown ethers or onium salts of various kinds. While the term phase transfer catalysis is relatively new, the concept of using a phase transfer agent (PTA) is much older. Both Schnell and Morgan employed such catalysts in the synthesis of polymeric species in the early 1950's. Present developments are really extensions of these early applications. It has only been within the last several years that phase transfer processes have been employed in polymer synthesis...
Inclined asymmetric librations in exterior resonances
Voyatzis, G.; Tsiganis, K.; Antoniadou, K. I.
Librational motion in Celestial Mechanics is generally associated with the existence of stable resonant configurations and signified by the existence of stable periodic solutions and oscillation of critical (resonant) angles. When such an oscillation takes place around a value different than 0 or π , the libration is called asymmetric. In the context of the planar circular restricted three-body problem, asymmetric librations have been identified for the exterior mean motion resonances (MMRs) 1:2, 1:3, etc., as well as for co-orbital motion (1:1). In exterior MMRs the massless body is the outer one. In this paper, we study asymmetric librations in the three-dimensional space. We employ the computational approach of Markellos (Mon Not R Astron Soc 184:273-281, https://doi.org/10.1093/mnras/184.2.273, 1978) and compute families of asymmetric periodic orbits and their stability. Stable asymmetric periodic orbits are surrounded in phase space by domains of initial conditions which correspond to stable evolution and librating resonant angles. Our computations were focused on the spatial circular restricted three-body model of the Sun-Neptune-TNO system (TNO = trans-Neptunian object). We compare our results with numerical integrations of observed TNOs, which reveal that some of them perform 1:2 resonant, inclined asymmetric librations. For the stable 1:2 TNO librators, we find that their libration seems to be related to the vertically stable planar asymmetric orbits of our model, rather than the three-dimensional ones found in the present study.
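For readers outside celestial mechanics, the librating resonant (critical) angle referred to above is, for the exterior 1:2 MMR with Neptune, conventionally taken as $\sigma = 2\lambda - \lambda_N - \varpi$, where $\lambda$ and $\varpi$ are the TNO's mean longitude and longitude of perihelion and $\lambda_N$ is Neptune's mean longitude. Symmetric libration corresponds to $\sigma$ oscillating about $0$ or $\pi$, whereas asymmetric libration, the case studied here, has its libration centre at some other, eccentricity-dependent value.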
Growth and optical characterization of colloidal CdTe nanoparticles capped by a bifunctional molecule
Abd El-sadek, M.S., E-mail: [email protected] [Nanomaterial Laboratory, Physics Department, Faculty of Science, South Valley University, Qena-83523 (Egypt); Crystal Growth Centre, Anna University Chennai, Chennai-600025 (India); Moorthy Babu, S. [Crystal Growth Centre, Anna University Chennai, Chennai-600025 (India)
Thiol-capped CdTe nanoparticles were synthesized in aqueous solution by a wet chemical route. CdTe nanoparticles with the bifunctional molecule mercaptoacetic acid as a stabilizer were synthesized at pH ≈ 11.2, using potassium tellurite as the tellurium source. The effect of refluxing time on the preparation of these samples was followed using UV-vis absorption and photoluminescence analysis. With increasing refluxing time, the UV-vis absorption and photoluminescence results show that the band-edge emission is red-shifted. The synthesized thiol-capped CdTe nanoparticles were characterized with FT-IR, TEM and TG-DTA. The particle size was calculated by the effective mass approximation (EMA). The role of the precursors, their composition, pH and the reaction procedure on the development of the nanoparticles is analyzed.
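As a reading aid, the sketch below shows how a particle radius can be back-calculated from an absorption edge using the Brus effective-mass approximation mentioned above. The bulk band gap, effective masses, dielectric constant and the observed transition energy are illustrative, assumed values for CdTe, not numbers taken from this study.

```python
# Illustrative sketch (not from the paper): back-calculating a CdTe particle
# radius from an absorption edge with the Brus effective-mass approximation.
# All material parameters and the observed energy are assumed values.
import numpy as np

hbar = 1.0545718e-34    # J*s
m0   = 9.10938e-31      # electron rest mass, kg
e    = 1.602176e-19     # elementary charge, C
eps0 = 8.8541878e-12    # vacuum permittivity, F/m

Eg_bulk = 1.50 * e              # bulk CdTe band gap (J), assumed
me, mh  = 0.11 * m0, 0.35 * m0  # effective masses, assumed
eps_r   = 10.2                  # relative permittivity, assumed

def ema_gap(R):
    """Confined band gap (J) for particle radius R (m), Brus equation."""
    confinement = (hbar**2 * np.pi**2 / (2.0 * R**2)) * (1.0/me + 1.0/mh)
    coulomb = 1.8 * e**2 / (4.0 * np.pi * eps_r * eps0 * R)
    return Eg_bulk + confinement - coulomb

E_obs = 2.10 * e                          # hypothetical absorption edge (J)
radii = np.linspace(1e-9, 10e-9, 2000)    # candidate radii, m
R_est = radii[np.argmin(np.abs(ema_gap(radii) - E_obs))]
print(f"estimated radius ~ {R_est*1e9:.2f} nm")
```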
Boehmite-An Efficient and Recyclable Acid-Base Bifunctional Catalyst for Aldol Condensation Reaction.
Reshma, P C Rajan; Vikneshvaran, Sekar; Velmathi, Sivan
In this work boehmite was used as an acid-base bifunctional catalyst for aldol condensation reactions of aromatic aldehydes and ketones. The catalyst was prepared by a simple sol-gel method using Al(NO3)3·9H2O and NH4OH as precursors. The catalyst has been characterized by X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, scanning electron microscopy (SEM), UV-visible diffuse reflectance spectroscopy (DRS), and BET surface area analyses. Boehmite is successfully applied as a catalyst for the condensation reaction between 4-nitrobenzaldehyde and acetone as a model substrate, giving α,β-unsaturated ketones without any side product. The scope of the reaction is extended to various substituted aldehydes. A probable mechanism has been suggested to explain the cooperative behavior of the acidic and basic sites. The catalyst is environmentally friendly and easily recovered from the reaction mixture. The catalyst is also reusable for up to three catalytic cycles.
Bi-functional biobased packaging from cassava starch, glycerol, licuri nanocellulose and red propolis.
Samantha Serra Costa
The aim of this study was to characterize and determine the bi-functional efficacy of active packaging films produced with starch (4%) and glycerol (1.0%), reinforced with cellulose nanocrystals (0-1%) and activated with alcoholic extracts of red propolis (0.4 to 1.0%). The cellulose nanocrystals used in this study were extracted from licuri leaves. The films were characterized using moisture, water-activity analyses and water vapor-permeability tests and were tested regarding their total phenolic compounds and mechanical properties. The antimicrobial and antioxidant efficacy of the films was evaluated by monitoring the use of the active films for packaging cheese curds and butter, respectively. The cellulose nanocrystals increased the mechanical strength of the films and reduced the water permeability and water activity. The active film had an antimicrobial effect on coagulase-positive staphylococci in cheese curds and reduced the oxidation of butter during storage.
Improving battery safety by early detection of internal shorting with a bifunctional separator
Wu, Hui; Zhuo, Denys; Kong, Desheng; Cui, Yi
Lithium-based rechargeable batteries have been widely used in portable electronics and show great promise for emerging applications in transportation and wind-solar-grid energy storage, although their safety remains a practical concern. Failures in the form of fire and explosion can be initiated by internal short circuits associated with lithium dendrite formation during cycling. Here we report a new strategy for improving safety by designing a smart battery that allows internal battery health to be monitored in situ. Specifically, we achieve early detection of lithium dendrites inside batteries through a bifunctional separator, which offers a third sensing terminal in addition to the cathode and anode. The sensing terminal provides unique signals in the form of a pronounced voltage change, indicating imminent penetration of dendrites through the separator. This detection mechanism is highly sensitive, accurate and activated well in advance of shorting and can be applied to many types of batteries for improved safety.
Basic evaluation of ⁶⁷Ga labeled digoxin derivative as a metal-labeled bifunctional radiopharmaceutical
Fujibayashi, Yasuhisa; Konishi, Junji (Kyoto Univ. (Japan). Faculty of Medicine); Takemura, Yasutaka; Taniuchi, Hideyuki; Iijima, Naoko; Yokoyama, Akira
To develop metal-labeled digoxin radiopharmaceuticals with affinity with anti-digoxin antibody as well as Na⁺,K⁺-ATPase, a digoxin derivative conjugated with deferoxamine was synthesized. The derivative had a high binding affinity with ⁶⁷Ga at deferoxamine introduced to the terminal sugar ring of digoxin. The ⁶⁷Ga labeled digoxin derivative showed enough in vitro binding affinity and selectivity to anti-digoxin antibody as well as Na⁺,K⁺-ATPase. The ⁶⁷Ga labeled digoxin derivative is considered to be a potential metal-labeled bifunctional radiopharmaceutical for digoxin RIA as well as myocardial Na⁺,K⁺-ATPase imaging. (author).
First-Principles Study of Structure Property Relationships of Monolayer (Hydroxy)Oxide-Metal Bifunctional Electrocatalysts
Zeng, Zhenhua; Kubal, Joseph; Greeley, Jeffrey Philip
In the present study, on the basis of detailed density functional theory (DFT) calculations, and using Ni hydroxy(oxide) films on Pt(111) and Au(111) electrodes as model systems, we describe a detailed structural and electrocatalytic analysis of hydrogen evolution (HER) at three-phase boundaries under alkaline electrochemical conditions. We demonstrate that the structure and oxidation state of the films can be systematically tuned by changing the applied electrode potential and/or the nature of substrates. Structural features determined from the theoretical calculations provide a wealth ... step towards accurate identification and prediction of a variety of oxide/electrode interfacial structure-properties relationships, but also provides the foundation for rational design and control of 'targeted active phases' at catalytic interfaces. The successful design of bifunctional ...
Polarization holograms in a bifunctional amorphous polymer exhibiting equal values of photoinduced linear and circular birefringences.
Provenzano, Clementina; Pagliusi, Pasquale; Cipparrone, Gabriella; Royes, Jorge; Piñol, Milagros; Oriol, Luis
Light-controlled molecular alignment is a flexible and useful strategy introducing novelty in the fields of mechanics, self-organized structuring, mass transport, optics, and photonics and addressing the development of smart optical devices. Azobenzene-containing polymers are well-known photocontrollable materials with large and reversible photoinduced optical anisotropies. The vectorial holography applied to these materials enables peculiar optical devices whose properties strongly depend on the relative values of the photoinduced birefringences. Here is reported a polarization holographic recording based on the interference of two waves with orthogonal linear polarization on a bifunctional amorphous polymer that, exceptionally, exhibits equal values of linear and circular birefringence. The peculiar photoresponse of the material coupled with the holographic technique demonstrates an optical device capable of decomposing the light into a set of orthogonally polarized linear components. The holographic structures are theoretically described by the Jones matrices method and experimentally investigated.
Development of tartaric esters as bifunctional additives of methanol-gasoline.
Zhang, Jie; Yang, Changchun; Tang, Ying; Zhou, Rui; Wang, Xiaoli; Xu, Lianghong
Methanol has become an alternative fuel for gasoline, which is facing a rapidly rising world demand with a limited oil supply. Methanol-gasoline has been used in China, but phase stability and vapor lock still need to be resolved in methanol-gasoline applications. In this paper, a series of tartaric esters were synthesized and used as phase stabilizers and saturation vapor pressure depressors for methanol-gasoline. The results showed that the phase stabilities of tartaric esters for methanol-gasoline depend on the length of the alkoxy group. Several tartaric esters were found to be effective in various gasoline-methanol blends, and the tartaric esters display high capacity to depress the saturation vapor pressure of methanol-gasoline. According to the results, it can be concluded that the tartaric esters have great potential to be bifunctional gasoline-methanol additives.
Novel 3-nitrotriazole-based amides and carbinols as bifunctional anti-Chagasic agents
Papadopoulou, Maria V.; Bloomer, William D.; Lepesheva, Galina I.; Rosenzweig, Howard S.; Kaiser, Marcel; Aguilera-Venegas, Benjamín; Wilkinson, Shane R.; Chatelain, Eric; Ioset, Jean-Robert
3-Nitro-1H-1,2,4-triazole-based amides with a linear, rigid core and 3-nitrotriazole-based fluconazole analogs were synthesized as dual functioning antitrypanosomal agents. Such compounds are excellent substrates for type I nitroreductase (NTR) located in the mitochondrion of trypanosomatids and, at the same time, act as inhibitors of the sterol 14α-demethylase (T. cruzi CYP51) enzyme. Because combination treatments against parasites are often superior to monotherapy, we believe that this emerging class of bifunctional compounds may introduce a new generation of antitrypanosomal drugs. In the present work, the synthesis and in vitro and in vivo evaluation of such compounds is discussed. PMID:25580906
On the molecular basis of D-bifunctional protein deficiency type III.
Maija L Mehtälä
Molecular basis of D-bifunctional protein (D-BP) deficiency was studied with wild type and five disease-causing variants of the 3R-hydroxyacyl-CoA dehydrogenase fragment of the human MFE-2 (multifunctional enzyme type 2) protein. Complementation analysis in vivo in yeast and in vitro enzyme kinetic and stability determinants as well as in silico stability and structural fluctuation calculations were correlated with clinical data of known patients. Despite variations not affecting the catalytic residues, the enzyme kinetic performance (Km, Vmax and kcat) of the recombinant protein variants was compromised to a varying extent, and this can be judged as the direct molecular cause for D-BP deficiency. Protein stability plays an additional role in producing non-functionality of MFE-2 in case structural variations affect cofactor or substrate binding sites. Structure-function considerations of the variant proteins matched well with the available data of the patients.
Reversal modes in asymmetric Ni nanowires
Leighton, B.; Pereira, A. [Departamento de Fisica, Universidad de Santiago de Chile (USACH), Avda. Ecuador 3493, 917-0124 Santiago (Chile); Escrig, J., E-mail: [email protected] [Departamento de Fisica, Universidad de Santiago de Chile (USACH), Avda. Ecuador 3493, 917-0124 Santiago (Chile); Center for the Development of Nanoscience and Nanotechnology (CEDENNA), Avda. Ecuador 3493, 917-0124 Santiago (Chile)
We have investigated the evolution of the magnetization reversal mechanism in asymmetric Ni nanowires as a function of their geometry. Circular nanowires are found to reverse their magnetization by the propagation of a vortex domain wall, while in very asymmetric nanowires the reversal is driven by the propagation of a transverse domain wall. The effect of shape asymmetry of the wire on coercivity and remanence is also studied. Angular dependence of the remanence and coercivity is also addressed. Tailoring the magnetization reversal mechanism in asymmetric nanowires can be useful for magnetic logic and race-track memory, both of which are based on the displacement of magnetic domain walls. Finally, an alternative method to detect the presence of magnetic drops is proposed. - Highlights: ► Asymmetry strongly modifies the magnetic behavior of a wire. ► Very asymmetric nanowires reverse their magnetization by a transverse domain wall. ► An alternative method to detect the presence of magnetic drops is proposed. ► Tailoring the reversal mode in asymmetric nanowires can be useful for potential applications.
Defective DNA cross-link removal in Chinese hamster cell mutants hypersensitive to bifunctional alkylating agents
Hoy, C.A.; Thompson, L.H.; Mooney, C.L.; Salazar, E.P.
DNA repair-deficient mutants from five genetic complementation groups isolated previously from Chinese hamster cells were assayed for survival after exposure to the bifunctional alkylating agents mitomycin C or diepoxybutane. Groups 1, 3, and 5 exhibited 1.6- to 3-fold hypersensitivity compared to the wild-type cells, whereas Groups 2 and 4 exhibited extraordinary hypersensitivity. Mutants from Groups 1 and 2 were exposed to 22 other bifunctional alkylating agents in a rapid assay that compared cytotoxicity of the mutants to the wild-type parental strain, AA8. With all but two of the compounds, the Group 2 mutant (UV4) was 15- to 60-fold more sensitive than AA8 or the Group 1 mutant (UV5). UV4 showed only 6-fold hypersensitivity to quinacrine mustard. Alkaline elution measurements showed that this compound produced few DNA interstrand cross-links but numerous strand breaks. Therefore, the extreme hypersensitivity of mutants from Groups 2 and 4 appeared specific for compounds the main cytotoxic lesions of which were DNA cross-links. Mutant UV5 was only 1- to 4-fold hypersensitive to all the compounds. Although the initial number of cross-links was similar for the three cell lines, the efficiency of removal of cross-links was lowest in UV4 and intermediate in UV5. These results suggest that the different levels of sensitivity are specifically related to different efficiencies of DNA cross-link removal. The phenotype of hypersensitivity to both UV radiation and cross-link damage exhibited by the mutants in Groups 2 and 4 appears to differ from those of the known human DNA repair syndromes
Arabidopsis RIBA Proteins: Two out of Three Isoforms Have Lost Their Bifunctional Activity in Riboflavin Biosynthesis
Hiltunen, Hanna-Maija; Illarionov, Boris; Hedtke, Boris; Fischer, Markus; Grimm, Bernhard
Riboflavin serves as a precursor for flavocoenzymes (FMN and FAD) and is essential for all living organisms. The two committed enzymatic steps of riboflavin biosynthesis are performed in plants by bifunctional RIBA enzymes comprised of GTP cyclohydrolase II (GCHII) and 3,4-dihydroxy-2-butanone-4-phosphate synthase (DHBPS). Angiosperms share a small RIBA gene family consisting of three members. A reduction of AtRIBA1 expression in the Arabidopsis rfd1 mutant and in RIBA1 antisense lines is not complemented by the simultaneously expressed isoforms AtRIBA2 and AtRIBA3. The intensity of the bleaching leaf phenotype of RIBA1 deficient plants correlates with the inactivation of AtRIBA1 expression, while no significant effects on the mRNA abundance of AtRIBA2 and AtRIBA3 were observed. We examined reasons why both isoforms fail to sufficiently compensate for a lack of RIBA1 expression. All three RIBA isoforms are shown to be translocated into chloroplasts as GFP fusion proteins. Interestingly, both AtRIBA2 and AtRIBA3 have amino acid exchanges in conserved peptide domains that have been found to be essential for the two enzymatic functions. In vitro activity assays of GCHII and DHBPS with all of the three purified recombinant AtRIBA proteins and complementation of E. coli ribA and ribB mutants lacking DHBPS and GCHII expression, respectively, confirmed the loss of bifunctionality for AtRIBA2 and AtRIBA3. Phylogenetic analyses imply that the monofunctional, bipartite RIBA3 proteins, which have lost DHBPS activity, evolved early in tracheophyte evolution. PMID:23203051
Gently reduced graphene oxide incorporated into cobalt oxalate rods as bifunctional oxygen electrocatalyst
Phihusut, Doungkamon; Ocon, Joey D.; Jeong, Beomgyun; Kim, Jin Won; Lee, Jae Kwang; Lee, Jaeyoung
Water-oxygen electrochemistry is at the heart of key renewable energy technologies (fuel cells, electrolyzers, and metal-air batteries) due to the sluggish kinetics of oxygen reduction reaction (ORR) and oxygen evolution reaction (OER). Although much effort has been devoted to the development of improved bifunctional electrocatalysts, an inexpensive, highly active oxygen electrocatalyst, however, remains to be a challenge. In this paper, we present a facile and robust method to create gently reduced graphene oxide incorporated into cobalt oxalate microstructures (CoC2O4/gRGO) and demonstrate its excellent and stable electrocatalytic activity in both OER and ORR, arising from the inherent properties of the components and their physicochemical interaction. Our synthesis technique also explores a single pot method to partially reduce graphene oxide and form CoC2O4 structures while maintaining the solution processability of reduced graphene oxide. While the OER activity of CoC2O4/gRGO is exclusively due to CoC2O4, which transformed into OER-active Co species, the combination with gRGO significantly improves OER stability. On the other hand, CoC2O4/gRGO exhibits a synergistic effect towards ORR, via a quasi-four-electron pathway, leading to a slightly higher ORR limiting current than Pt/C. Remarkably, gRGO offers dual functionality, contributing to ORR activity via the N-functional groups and also enhancing OER stability through the gRGO coating around CoC2O4 structures. Our results suggest a new class of metal-carbon composite that has the potential to be an alternative bifunctional catalyst for regenerative fuel cells and metal-air batteries.
AmpH, a bifunctional DD-endopeptidase and DD-carboxypeptidase of Escherichia coli.
González-Leiza, Silvia M; de Pedro, Miguel A; Ayala, Juan A
In Escherichia coli, low-molecular-mass penicillin-binding proteins (LMM PBPs) are important for correct cell morphogenesis. These enzymes display DD-carboxypeptidase and/or DD-endopeptidase activities associated with maturation and remodeling of peptidoglycan (PG). AmpH has been classified as an AmpH-type class C LMM PBP, a group closely related to AmpC β-lactamases. AmpH has been associated with PG recycling, although its enzymatic activity remained uncharacterized until now. Construction and purification of His-tagged AmpH from E. coli permitted a detailed study of its enzymatic properties. The N-terminal export signal of AmpH is processed, but the protein remains membrane associated. The PBP nature of AmpH was demonstrated by its ability to bind the β-lactams Bocillin FL (a fluorescent penicillin) and cefmetazole. In vitro assays with AmpH and specific muropeptides demonstrated that AmpH is a bifunctional DD-endopeptidase and DD-carboxypeptidase. Indeed, the enzyme cleaved the cross-linked dimers tetrapentapeptide (D45) and tetratetrapeptide (D44) with efficiencies (kcat/Km) of 1,200 M⁻¹ s⁻¹ and 670 M⁻¹ s⁻¹, respectively, and removed the terminal D-alanine from muropeptides with a C-terminal D-Ala-D-Ala dipeptide. Both DD-peptidase activities were inhibited by 40 μM cefmetazole. AmpH also displayed a weak β-lactamase activity for nitrocefin of 1.4 × 10⁻³ nmol/μg protein/min, 1/1,000 the rate obtained for AmpC under the same conditions. AmpH was also active on purified sacculi, exhibiting the bifunctional character that was seen with pure muropeptides. The wide substrate spectrum of the DD-peptidase activities associated with AmpH supports a role for this protein in PG remodeling or recycling.
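For orientation, the specificity constants quoted above (kcat/Km of 1,200 and 670 M⁻¹ s⁻¹ for D45 and D44) can be turned into approximate initial cleavage rates in the low-substrate limit, where v ≈ (kcat/Km)[E][S]. The enzyme and substrate concentrations in this sketch are hypothetical and chosen only for illustration.

```python
# Sketch: initial rates from the specificity constants reported in the abstract,
# valid in the low-substrate regime where v0 ~ (kcat/Km)*[E]*[S].
# Enzyme and substrate concentrations are hypothetical.
kcat_over_Km = {"D45": 1200.0, "D44": 670.0}   # M^-1 s^-1 (from the abstract)
E_total = 1.0e-6    # M, assumed enzyme concentration
S0      = 5.0e-5    # M, assumed substrate concentration (<< Km)

for substrate, k2 in kcat_over_Km.items():
    v0 = k2 * E_total * S0    # M s^-1
    print(f"{substrate}: v0 ~ {v0:.2e} M/s")
```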
Preparation of Ga-67 labeled monoclonal antibodies using deferoxamine as a bifunctional chelating agent
Endo, K.; Furukawa, T.; Ohmomo, Y.
Ga-67 labeled monoclonal IgG or F(ab')₂ fragments against α-fetoprotein and the β-subunit of human choriogonadotropin (HCG) were prepared using deferoxamine (DFO) as a bifunctional chelating agent. DFO, a well-known iron chelating agent, was conjugated with monoclonal antibodies (Ab) by a glutaraldehyde two-step method, and the effect of conjugation on the Ab activities was examined by RIA and Scatchard plot analysis. In both monoclonal Ab preparations, the conjugation reaction was favored as the pH increased. However, Ab-binding activities decreased as the molecular ratio of DFO to Ab increased. Ab activities were preserved when the conjugate contained fewer than 2.1 DFO molecules per Ab. At a ratio of over 3.3 DFO molecules per Ab, the maximal binding capacity rather than the affinity constant decreased. Inter-molecular cross-linkage seemed to be responsible for the deactivation of binding activities. The obtained DFO-Ab conjugates were then easily labeled with high efficiency and reproducibility, and the ⁶⁷Ga DFO-Ab complexes were highly stable both in vitro and in vivo. Thus, biodistribution of ⁶⁷Ga labeled F(ab')₂ fragments of the monoclonal Ab to the HCG β-subunit was attempted in nude mice transplanted with HCG-producing human teratocarcinoma. Tumor could be visualized, in spite of relatively high background imaging of liver, kidney and spleen. The use of DFO as a bifunctional chelating agent provided good evidence for its applicability to labeling monoclonal Ab with almost full retention of Ab activities. Further, the availability of Ga-68 will make Ga-68 DFO-monoclonal Ab a very useful tool for positron tomography imaging of various tumors.
Catalysis as a foundational pillar of green chemistry
Anastas, Paul T. [White House Office of Science and Technology Policy, Department of Chemistry, University of Nottingham Nottingham, (United Kingdom); Kirchhoff, Mary M. [U.S. Environmental Protection Agency and Trinity College, Washington, DC (United States); Williamson, Tracy C. [U.S. Environmental Protection Agency, Washington, DC (United States)
Catalysis is one of the fundamental pillars of green chemistry, the design of chemical products and processes that reduce or eliminate the use and generation of hazardous substances. The design and application of new catalysts and catalytic systems are simultaneously achieving the dual goals of environmental protection and economic benefit. Green chemistry, the design of chemical products and processes that reduce or eliminate the use and generation of hazardous substances, is an overarching approach that is applicable to all aspects of chemistry. From feedstocks to solvents, to synthesis and processing, green chemistry actively seeks ways to produce materials in a way that is more benign to human health and the environment. The current emphasis on green chemistry reflects a shift away from the historic 'command-and-control' approach to environmental problems that mandated waste treatment and control and clean up through regulation, and toward preventing pollution at its source. Rather than accepting waste generation and disposal as unavoidable, green chemistry seeks new technologies that are cleaner and economically competitive. Utilizing green chemistry for pollution prevention demonstrates the power and beauty of chemistry: through careful design, society can enjoy the products on which we depend while benefiting the environment. The economic benefits of green chemistry are central drivers in its advancement. Industry is adopting green chemistry methodologies because they improve the corporate bottom line. A wide array of operating costs are decreased through the use of green chemistry. When less waste is generated, environmental compliance costs go down. Treatment and disposal become unnecessary when waste is eliminated. Decreased solvent usage and fewer processing steps lessen the material and energy costs of manufacturing and increase material efficiency. The environmental, human health, and the economic advantages realized through green chemistry
Radio catalysis application in degradation of complex organic samples
Moreno L, A.
The generation of wastewater is a consequence of human activities, with industries generating a large part of these discharges. These contaminated waters can be processed for their remediation; however, recalcitrant organic compounds are hardly removed by the conventional treatments applied, so new technologies, such as advanced oxidation technologies or processes, have been developed for their removal. The aim of this study is to apply ionizing radiation as a remediation method for wastewater. Radiolysis and radio catalysis experiments, both considered advanced oxidation technologies, were carried out by irradiating solutions of 4-chlorophenol and methylene blue at different concentrations with ⁶⁰Co gamma radiation; as process control, the undegraded compound was measured by UV-vis spectrophotometry at 507 and 664 nm for 4-chlorophenol and methylene blue, respectively. At doses greater than 2.5 kGy, degradation was near zero. Degradation experiments were also conducted by photocatalysis under irradiation with a 354 nm UV lamp; for 4-chlorophenol, the results showed that degradation is efficient (39%). Building on these results, the techniques were applied to degrade complex mixtures of organic compounds in wastewater samples from a sewage treatment plant, using the dissolved organic carbon obtained by spectrophotometric analysis at 254 nm as the process control; a maximum of 26% degradation was obtained by applying 80 kGy. In addition, a series of experiments in which the irradiation was fractionated in 20 kGy steps up to a cumulative dose of 80 kGy yielded a degradation 2.8 times greater than that obtained by radio catalysis with continuous irradiation. (Author)
Organocatalysis: Fundamentals and Comparisons to Metal and Enzyme Catalysis
Pierre Vogel
Catalysis fulfills the promise that high-yielding chemical transformations will require little energy and produce no toxic waste. This message is carried by the study of the evolution of molecular catalysis of some of the most important reactions in organic chemistry. After reviewing the conceptual underpinnings of catalysis, we discuss the applications of different catalysts according to the mechanism of the reactions that they catalyze, including acyl group transfers, nucleophilic additions and substitutions, and C–C bond forming reactions that employ umpolung by nucleophilic additions to C=O and C=C double bonds. We highlight the utility of a broad range of organocatalysts other than compounds based on proline, the cinchona alkaloids and binaphthyls, which have been abundantly reviewed elsewhere. The focus is on organocatalysts, although a few examples employing metal complexes and enzymes are also included due to their significance. Classical Brønsted acids have evolved into electrophilic hands, the fingers of which are hydrogen donors (like enzymes) or other electrophilic moieties. Classical Lewis base catalysts have evolved into tridimensional, chiral nucleophiles that are N- (e.g., tertiary amines), P- (e.g., tertiary phosphines) and C-nucleophiles (e.g., N-heterocyclic carbenes). Many efficient organocatalysts bear electrophilic and nucleophilic moieties that interact, simultaneously or not, with both the electrophilic and nucleophilic reactants. A detailed understanding of the reaction mechanisms permits the design of better catalysts. Their construction represents a molecular science in itself, suggesting that sooner or later chemists will not only imitate Nature but be able to catalyze a much wider range of reactions with high chemo-, regio-, stereo- and enantioselectivity. Man-made organocatalysts are much smaller, cheaper and more stable than enzymes.
Renewable resource management under asymmetric information
Jensen, Frank; Andersen, Peder; Nielsen, Max
Asymmetric information between fishermen and the regulator is important within fisheries. The regulator may have less information about stock sizes, prices, costs, effort, productivity and catches than fishermen. With asymmetric information, a strong analytical tool is principal-agent analysis. In this paper, we study asymmetric information about productivity within a principal-agent framework, and a tax on fishing effort is considered. It is shown that a second-best optimum can be achieved if the effort tax is designed such that low-productivity agents' rent is exhausted, while high-productivity agents receive an information rent. The information rent is equivalent to the total incentive cost. The incentive costs arise because we want to reveal the agent's type.
Plasma Chemistry and Catalysis in Gases and Liquids
Parvulescu, Vasile I; Lukes, Petr
Filling the gap for a book that not only covers gases but also plasma methods in liquids, this is all set to become the standard reference on the topic. It considers the central aspects in plasma chemistry and plasma catalysis by focusing on the green and environmental applications, while also taking into account their practical and economic viability. With the topics addressed by an international group of major experts, this is a must-have for researchers, PhD students and postdocs specializing in the field.
Charge Transfer and Catalysis at the Metal Support Interface
Baker, Lawrence Robert [Univ. of California, Berkeley, CA (United States)
Kinetic, electronic, and spectroscopic characterization of model Pt–support systems are used to demonstrate the relationship between charge transfer and catalytic activity and selectivity. The results show that charge flow controls the activity and selectivity of supported metal catalysts. This dissertation builds on extensive existing knowledge of metal–support interactions in heterogeneous catalysis. The results show the prominent role of charge transfer at catalytic interfaces to determine catalytic activity and selectivity. Further, this research demonstrates the possibility of selectively driving catalytic chemistry by controlling charge flow and presents solid-state devices and doped supports as novel methods for obtaining electronic control over catalytic reaction kinetics.
Coal-related research, organic chemistry, and catalysis
Coal chemistry research topics included: H exchange at 400 °C, breaking C-C bonds in coal, molecular weight estimation using small-angle neutron scattering, ¹³C NMR spectra of coals, and tunneling during H/D isotope effects. Studies of coal conversion chemistry included thermolysis of bibenzyl and 1-naphthol, heating of coals in phenol, advanced indirect liquefaction based on the Koelbel slurry Fischer-Tropsch reactor, and plasma oxidation of coal minerals. Reactions of PAHs in molten SbCl₃, a hydrocracking catalyst, were studied. Finally, heterogeneous catalysis (desulfurization etc.) was studied using Cu, Au, and Ni surfaces. 7 figures, 6 tables
Inorganic Chemistry in Hydrogen Storage and Biomass Catalysis
Thorn, David [Los Alamos National Laboratory
Making or breaking C-H, B-H, C-C bonds has been at the core of catalysis for many years. Making or breaking these bonds to store or recover energy presents us with fresh challenges, including how to catalyze these transformations in molecular systems that are 'tuned' to minimize energy loss and in molecular and material systems present in biomass. This talk will discuss some challenging transformations in chemical hydrogen storage, and some aspects of the inorganic chemistry we are studying in the development of catalysts for biomass utilization.
USD Catalysis Group for Alternative Energy - Final report
Hoefelmeyer, James
I. Project Summary: Catalytic processes are a major technological underpinning of modern society, and are essential to the energy sector in the processing of chemical fuels from natural resources, fine chemicals synthesis, and energy conversion. Advances in catalyst technology are enormously valuable since these lead to reduced chemical waste, reduced energy loss, and reduced costs. New energy technologies, which are critical to future economic growth, are also heavily reliant on catalysts, including fuel cells and photo-electrochemical cells. Currently, the state of South Dakota is underdeveloped in terms of research infrastructure related to catalysis. If South Dakota intends to participate in the significant economic growth opportunities that result from advances in catalyst technology, then this area of research needs to be made a high priority for investment. To this end, a focused research effort is proposed in which investigators from The University of South Dakota (USD) and The South Dakota School of Mines and Technology (SDSMT) will contribute to form the South Dakota Catalysis Group (SDCG). The multidisciplinary team of the SDCG includes: (USD) Dan Engebretson, James Hoefelmeyer, Ranjit Koodali, and Grigoriy Sereda; (SDSMT) Phil Scott Ahrenkiel, Hao Fong, Jan Puszynski, Rajesh Shende, and Jacek Swiatkiewicz. The group is well suited to engage in a collaborative project due to the resources available within the existing programs. Activities within the SDCG will be monitored through an external committee consisting of three distinguished professors in chemistry. The committee will provide expert advice and recommendations to the SDCG. Advisory meetings in which committee members interact with South Dakota investigators will be accompanied by individual oral and poster presentations in a materials and catalysis symposium. The symposium will attract prominent scientists and will enhance the visibility of research in the state of South Dakota. The SDCG requests
Asymmetric acoustic transmission in multiple frequency bands
Sun, Hong-xiang, E-mail: [email protected] [Research Center of Fluid Machinery Engineering and Technology, Jiangsu University, Zhenjiang 212013 (China); Laboratory of Modern Acoustics, Institute of Acoustics, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093 (China); State Key Laboratory of Acoustics, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190 (China); Yuan, Shou-qi, E-mail: [email protected] [Research Center of Fluid Machinery Engineering and Technology, Jiangsu University, Zhenjiang 212013 (China); Zhang, Shu-yi [Laboratory of Modern Acoustics, Institute of Acoustics, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093 (China)
We report both experimentally and numerically that the multi-band device of the asymmetric acoustic transmission is realized by placing two periodic gratings with different periods on both sides of two brass plates immersed in water. The asymmetric acoustic transmission can exist in four frequency bands below 1500 kHz, which arises from the interaction between various diffractions from the two gratings and Lamb modes in the brass plates immersed in water. The results indicate that the device has the advantages of multiple band, broader bandwidth, and simpler structure. Our finding should have great potential applications in ultrasonic devices.
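A minimal sketch of the grating picture invoked above: at normal incidence a grating of period d redirects an incident wave into diffraction orders m with sin θm = m·(c/f)/d, so gratings with different periods admit different sets of propagating orders at a given frequency, which is one way to see how the two sides can behave asymmetrically. The periods and frequency used here are hypothetical, not the parameters of the fabricated device.

```python
# Sketch: which diffraction orders propagate in water for a grating of period d
# at frequency f, using sin(theta_m) = m * (c/f) / d at normal incidence.
# Grating periods and frequency are hypothetical, not the device parameters.
c = 1480.0    # speed of sound in water, m/s
f = 800e3     # drive frequency, Hz (within the sub-1500 kHz range discussed)

def propagating_orders(period, max_order=5):
    """Orders m with a real diffraction angle (|sin| <= 1)."""
    lam = c / f
    return [m for m in range(-max_order, max_order + 1)
            if abs(m * lam / period) <= 1.0]

for d in (3e-3, 5e-3):    # hypothetical grating periods, m
    print(f"period {d*1e3:.0f} mm -> propagating orders {propagating_orders(d)}")
```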
On the origin of the cobalt particle size effects in Fischer−Tropsch catalysis
den Breejen, J.P.; Radstake, P.B.; Bezemer, G.L.; Bitter, J.H.; Froseth, V.; Holmen, A.; de Jong, K.P.
The effects of metal particle size in catalysis are of prime scientific and industrial importance and call for a better understanding. In this paper the origin of the cobalt particle size effects in Fischer−Tropsch (FT) catalysis was studied. Steady-State Isotopic Transient Kinetic Analysis (SSITKA)
[Prediction of common buffer catalysis in hydrolysis of fenchlorazole-ethyl].
Lin, Jing; Chen, Jing-wen; Zhang, Si-yu; Cai, Xi-yun; Qiao, Xian-liang
The purpose of this study was to elucidate the effects of temperatures, pH levels and buffer catalysis on the hydrolysis of FCE. The hydrolysis of FCE follows first-order kinetics at different pH levels and temperatures. FCE hydrolysis rates are greatly increased at elevated pH levels and temperatures. The maximum contribution of buffer catalysis to the hydrolysis of FCE was assessed based on application of the Bronsted equations for general acid-base catalysis. The results suggest that the buffer solutions play an obvious catalysis role in hydrolysis of FCE and the hydrolysis rates of FCE are quickened by the buffer solutions. Besides, the buffer catalysis capacity of different buffer solutions is diverse, and the buffer catalysis capacity at different pH levels with the same buffer solutions is different, too. The phosphate buffer at pH = 7 shows the maximal buffer catalysis capacity. The hydrolysis rate constants of FCE as a function of temperature and pH, which were remedied by the buffer catalysis factor, were mathematically combined to predict the hydrolytic dissipation of FCE. The equation suggests that the hydrolysis half-lives of FCE ranged from 7 d to 790 d. Hydrolysis metabolites of FCE were identified by liquid chromatography-mass spectrometry. In basic conditions (pH 8-10), fenchlorazole was formed via breakdown of the ester bond of the safener.
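Since the hydrolysis is first order, the reported half-lives translate directly into rate constants via t½ = ln 2 / k. The short sketch below works through that conversion for the two bounds quoted above; the 30-day evaluation point is arbitrary.

```python
# Sketch: first-order hydrolysis, C(t) = C0*exp(-k*t) with k = ln(2)/t_half,
# applied to the two half-life bounds quoted in the abstract (7 d and 790 d).
# The 30-day evaluation point is arbitrary.
import math

for t_half_days in (7.0, 790.0):
    k = math.log(2) / t_half_days          # rate constant, d^-1
    frac_left = math.exp(-k * 30.0)        # fraction remaining after 30 days
    print(f"t1/2 = {t_half_days:5.0f} d -> k = {k:.4f} d^-1, "
          f"{frac_left*100:.1f}% remaining after 30 d")
```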
Dynamics of tropomyosin in muscle fibers as monitored by saturation transfer EPR of bi-functional probe.
Roni F Rayes
The dynamics of four regions of tropomyosin was assessed using saturation transfer electron paramagnetic resonance in the muscle fiber. In order to fully immobilize the spin probe on the surface of tropomyosin, a bi-functional spin label was attached to i,i+4 positions via cysteine mutagenesis. The dynamics of bi-functionally labeled tropomyosin mutants decreased by three orders of magnitude when reconstituted into "ghost muscle fibers". The rates of motion varied along the length of tropomyosin, with the C-terminal position 268/272 being one order of magnitude slower than the N-terminal domain or the center of the molecule. Introduction of troponin decreases the dynamics of all four sites in the muscle fiber, but there was no significant effect upon addition of calcium or myosin subfragment-1.
Hydrophilic cobalt sulfide nanosheets as a bifunctional catalyst for oxygen and hydrogen evolution in electrolysis of alkaline aqueous solution.
Zhu, Mingchao; Zhang, Zhongyi; Zhang, Hu; Zhang, Hui; Zhang, Xiaodong; Zhang, Lixue; Wang, Shicai
Hydrophilic medium and precursors were used to synthesize a hydrophilic electro-catalyst for overall water splitting. The cobalt sulfide (Co3S4) catalyst exhibits a layered nanosheet structure with a hydrophilic surface, which can facilitate the diffusion of aqueous substrates into the electrode pores and towards the active sites. The Co3S4 catalyst shows excellent bifunctional catalytic activity for both the oxygen evolution reaction (OER) and hydrogen evolution reaction (HER) in alkaline solution. The assembled water electrolyzer based on Co3S4 exhibits better performance and stability than that of a Pt/C-RuO2 catalyst. Therefore, the hydrophilic Co3S4 is a highly promising bifunctional catalyst for the overall water splitting reaction. Copyright © 2017 Elsevier Inc. All rights reserved.
Functionalization of nanoparticle titanium dioxide with different bifunctional organic molecules and trimers of transition compounds for obtaining new materials
Rivera Martinez, Maria Cinthya
Functionalization of titanium dioxide in the nanoporous anatase phase is investigated for obtaining new nanomaterials. Functionalizations with bifunctional organic molecules were performed using two heating methods, conventional reflux heating and microwave irradiation, in order to study how the molecules anchor and how the wettability of the material changes. In addition, reactions with organic molecules such as those derived from nanoproxene were performed. Layer-by-layer growth was then carried out using these bifunctional molecules for the immobilization of cobalt trimers. The functionalized materials were characterized by infrared spectroscopy, X-ray diffraction, contact angle measurements, scanning electron microscopy, X-ray elemental analysis, inductively coupled plasma atomic emission spectroscopy, X-ray photoelectron spectroscopy and thermogravimetric analysis. This type of functionalization on nanoporous titanium dioxide could potentially improve the optical sensitivity and activity of this nanomaterial in the visible region. (author)
Rational design of micro-RNA-like bifunctional siRNAs targeting HIV and the HIV coreceptor CCR5.
Ehsani, Ali; Saetrom, Pål; Zhang, Jane; Alluin, Jessica; Li, Haitang; Snøve, Ola; Aagaard, Lars; Rossi, John J
Small-interfering RNAs (siRNAs) and micro-RNAs (miRNAs) are distinguished by their modes of action. SiRNAs serve as guides for sequence-specific cleavage of complementary mRNAs and the targets can be in coding or noncoding regions of the target transcripts. MiRNAs inhibit translation via partially complementary base-pairing to 3' untranslated regions (UTRs) and are generally ineffective when targeting coding regions of a transcript. In this study, we deliberately designed siRNAs that simultaneously direct cleavage and translational suppression of HIV RNAs, or cleavage of the mRNA encoding the HIV coreceptor CCR5 and suppression of translation of HIV. These bifunctional siRNAs trigger inhibition of HIV infection and replication in cell culture. The design principles have wide applications throughout the genome, as about 90% of genes harbor sites that make the design of bifunctional siRNAs possible.
Post-modified acid-base bifunctional MIL-101(Cr) for one-pot deacetalization-Knoevenagel reaction
Mu, Manman [Tianjin University, School of Science (China); Yan, Xilong; Li, Yang; Chen, Ligong, E-mail: [email protected] [Collaborative Innovation Center of Chemical Science and Engineering (Tianjin) (China)
A novel and convenient approach for the construction of a bifunctional MIL-101 material bearing sulfonic acid and amino groups was established via post-synthetic modification. This material possesses a high BET surface area (1446 m²/g) and a large pore volume (0.77 cm³/g). Significantly, this material could serve as a bifunctional heterogeneous catalyst and was initially employed for the one-pot deacetalization-Knoevenagel reaction, exhibiting excellent catalytic performance (yield 99.74%). More importantly, it can be easily recovered and reused at least three times. Finally, our proposed catalytic mechanism indicated that the amino and sulfonic acid groups play a synergistic role in this one-pot deacetalization-Knoevenagel reaction.
Self-organization of Au–CdSe hybrid nanoflowers at different length scales via bi-functional diamine linkers
AbouZeid, Khaled Mohamed [Virginia Commonwealth University, Department of Chemistry (United States); Mohamed, Mona Bakr [Cairo University, National Institute of Laser Enhanced Science (NILES) (Egypt); El-Shall, M. Samy, E-mail: [email protected] [Virginia Commonwealth University, Department of Chemistry (United States)
This work introduces a series of molecular bridging bi-functional linkers to produce laterally self-assembled nanostructures of the Au–CdSe nanoflowers on different length scales ranging from 10 nm to 100 microns. Assembly of Au nanocrystals within amorphous CdSe rods is found in the early stages of the growth of the Au–CdSe nanoflowers. The Au–CdSe nanoflowers are formed through a one-pot low temperature (150 °C) process where CdSe clusters are adsorbed on the surface of the Au cores, and they then start to form multiple arms and branches resulting in flower-shaped hybrid nanostructures. More complex assembly at a micron length scale can be achieved by means of bi-functional capping agents with appropriate alkyl chain lengths, such as 1,12-diaminododecane.
Fuel cells cathode with multiple catalysis and electrocapillary convection; Catodo de celula a combustivel com catalise multipla e conveccao eletrocapilar
Bambace, Luis Antonio Waack; Nishimori, Miriam; Ramos, Fernando Manuel [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil)], e-mail: [email protected]; Bastos Netto, Demetrio [Instituto Nacional de Pesquisas Espaciais (INPE), Cachoeira Paulista, SP (Brazil)
This paper discusses a mathematical model for the chemical reactions and liquid-phase flow processes occurring in a fuel cell cathode with non-homogeneous catalysis carried out by gold and Prussian Blue. The gold is applied inside the porous walls of micro-tubes, which may be obtained through several methods. The wall porosity, ranging from 7 to 30%, ensures gas exchange between the interior of a micro-tube and its exterior, where gas flow takes place. The Prussian Blue consists of a thin porous layer located between the selective membrane and the micro-tube system, with a void fraction in the 70 to 80% range. A porous, electricity-conducting carbide flux collector is placed between the tube system and the bipolar plates. The system return tubes have a diameter much larger than that of the micro-tubes. The electric potential differences generated by the ionic currents in the system, together with its asymmetrical shape, are used to generate electrocapillary flows, which arise from the variation of surface tension with local potential. The hydrogen peroxide concentration and its transport to the Prussian Blue layer, and the oxygen transport inside the reactive tubular system, are analyzed in this work. (author)
Catalysis of heat-to-work conversion in quantum machines
Ghosh, A.; Latune, C. L.; Davidovich, L.; Kurizki, G.
We propose a hitherto-unexplored concept in quantum thermodynamics: catalysis of heat-to-work conversion by quantum nonlinear pumping of the piston mode which extracts work from the machine. This concept is analogous to chemical reaction catalysis: Small energy investment by the catalyst (pump) may yield a large increase in heat-to-work conversion. Since it is powered by thermal baths, the catalyzed machine adheres to the Carnot bound, but may strongly enhance its efficiency and power compared with its noncatalyzed counterparts. This enhancement stems from the increased ability of the squeezed piston to store work. Remarkably, the fraction of piston energy that is convertible into work may then approach unity. The present machine and its counterparts powered by squeezed baths share a common feature: Neither is a genuine heat engine. However, a squeezed pump that catalyzes heat-to-work conversion by small investment of work is much more advantageous than a squeezed bath that simply transduces part of the work invested in its squeezing into work performed by the machine.
Structural basis for catalysis at the membrane-water interface.
Dufrisne, Meagan Belcher; Petrou, Vasileios I; Clarke, Oliver B; Mancia, Filippo
The membrane-water interface forms a uniquely heterogeneous and geometrically constrained environment for enzymatic catalysis. Integral membrane enzymes sample three environments - the uniformly hydrophobic interior of the membrane, the aqueous extramembrane region, and the fuzzy, amphipathic interfacial region formed by the tightly packed headgroups of the components of the lipid bilayer. Depending on the nature of the substrates and the location of the site of chemical modification, catalysis may occur in each of these environments. The availability of structural information for alpha-helical enzyme families from each of these classes, as well as several beta-barrel enzymes from the bacterial outer membrane, has allowed us to review here the different ways in which each enzyme fold has adapted to the nature of the substrates, products, and the unique environment of the membrane. Our focus here is on enzymes that process lipidic substrates. This article is part of a Special Issue entitled: Bacterial Lipids edited by Russell E. Bishop. Copyright © 2016 Elsevier B.V. All rights reserved.
Acid-base catalysis of N-[(morpholine)methylene]daunorubicin.
Krause, Anna; Jelińska, Anna; Cielecka-Piontek, Judyta; Klawitter, Maria; Zalewski, Przemysław; Oszczapowicz, Irena; Wąsowska, Małgorzata
The stability of N-[(morpholine)methylene]-daunorubicin hydrochloride (MMD) was investigated in the pH range 0.44-13.54, at 313, 308, 303 and 298 K. The degradation of MMD as a result of hydrolysis is a pseudo-first-order reaction described by the following equation: ln c = ln c0 − kobs·t. In the solutions of hydrochloric acid, sodium hydroxide, borate, acetate and phosphate buffers, kobs = kpH because general acid-base catalysis was not observed. Specific acid-base catalysis of MMD comprises the following reactions: hydrolysis of the protonated molecules of MMD catalyzed by hydrogen ions (k1) and spontaneous hydrolysis of MMD molecules other than the protonated ones (k2) under the influence of water. The total rate of the reaction is equal to the sum of the partial reactions: kpH = k1·aH+·f1 + k2·f2, where k1 is the second-order rate constant (L mol⁻¹ s⁻¹) of the specific hydrogen ion-catalyzed degradation of the protonated molecules of MMD, k2 is the pseudo-first-order rate constant (s⁻¹) of the water-catalyzed degradation of MMD molecules other than the protonated ones, and f1 and f2 are the corresponding fractions of the compound. MMD is the most stable at approximately pH 2.5.
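A small numerical sketch of the rate law above, kpH = k1·aH+·f1 + k2·f2, with the protonated fraction f1 taken from a Henderson-Hasselbalch form. The pKa and the two rate constants below are hypothetical placeholders; only the functional form follows the abstract.

```python
# Sketch of the quoted rate law k_pH = k1*a_H+*f1 + k2*f2, with the protonated
# fraction f1 from a Henderson-Hasselbalch form. pKa, k1 and k2 are hypothetical
# placeholders; only the functional form follows the abstract.
def k_pH(pH, pKa=8.0, k1=5.0e-2, k2=1.0e-6):
    aH = 10.0 ** (-pH)                      # hydrogen-ion activity
    f1 = 1.0 / (1.0 + 10.0 ** (pH - pKa))   # fraction of protonated MMD
    f2 = 1.0 - f1                           # fraction of the other forms
    return k1 * aH * f1 + k2 * f2           # observed pseudo-first-order constant

for pH in (1.0, 2.5, 7.0, 10.0):
    print(f"pH {pH:4.1f}: k_obs ~ {k_pH(pH):.3e} s^-1")
```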
Conformational Dynamics of Thermus aquaticus DNA Polymerase I during Catalysis
Suo, Zucai
Despite the fact that DNA polymerases have been investigated for many years and are commonly used as tools in a number of molecular biology assays, many details of the kinetic mechanism they use to catalyze DNA synthesis remain unclear. Structural and kinetic studies have characterized a rapid, pre-catalytic open-to-close conformational change of the Finger domain during nucleotide binding for many DNA polymerases including Thermus aquaticus DNA polymerase I (Taq Pol), a thermostable enzyme commonly used for DNA amplification in PCR. However, little has been done to characterize the motions of other structural domains of Taq Pol or any other DNA polymerase during catalysis. Here, we used stopped-flow Förster resonance energy transfer (FRET) to investigate the conformational dynamics of all five structural domains of the full-length Taq Pol relative to the DNA substrate during nucleotide binding and incorporation. Our study provides evidence for a rapid conformational change step induced by dNTP binding and a subsequent global conformational transition involving all domains of Taq Pol during catalysis. Additionally, our study shows that the rate of the global transition was greatly increased with the truncated form of Taq Pol lacking the N-terminal domain. Finally, we utilized a mutant of Taq Pol containing a de novo disulfide bond to demonstrate that limiting protein conformational flexibility greatly reduced the polymerization activity of Taq Pol. PMID:24931550
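As background to the stopped-flow FRET measurements described above, distance changes are usually read out through the standard efficiency relation E = 1/(1 + (r/R0)⁶), which is steepest near the Förster radius R0. The sketch below uses an assumed R0 and hypothetical donor-acceptor distances; it is not data from this study.

```python
# Background sketch (standard relation, not data from this study):
# FRET efficiency E = 1 / (1 + (r/R0)^6), steepest near the Foerster radius R0.
# R0 and the distances are hypothetical.
R0 = 5.0    # nm, assumed Foerster radius of the dye pair

def fret_efficiency(r_nm):
    return 1.0 / (1.0 + (r_nm / R0) ** 6)

for r in (4.0, 5.0, 6.0):    # hypothetical donor-acceptor distances, nm
    print(f"r = {r:.1f} nm -> E = {fret_efficiency(r):.2f}")
```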
Oxidase catalysis via aerobically generated hypervalent iodine intermediates
Maity, Asim; Hyun, Sung-Min; Powers, David C.
The development of sustainable oxidation chemistry demands strategies to harness O2 as a terminal oxidant. Oxidase catalysis, in which O2 serves as a chemical oxidant without necessitating incorporation of oxygen into reaction products, would allow diverse substrate functionalization chemistry to be coupled to O2 reduction. Direct O2 utilization suffers from intrinsic challenges imposed by the triplet ground state of O2 and the disparate electron inventories of four-electron O2 reduction and two-electron substrate oxidation. Here, we generate hypervalent iodine reagents—a broadly useful class of selective two-electron oxidants—from O2. This is achieved by intercepting reactive intermediates of aldehyde autoxidation to aerobically generate hypervalent iodine reagents for a broad array of substrate oxidation reactions. The use of aryl iodides as mediators of aerobic oxidation underpins an oxidase catalysis platform that couples substrate oxidation directly to O2 reduction. We anticipate that aerobically generated hypervalent iodine reagents will expand the scope of aerobic oxidation chemistry in chemical synthesis.
Characterization techniques for graphene-based materials in catalysis
Maocong Hu
Graphene-based materials have been studied in a wide range of applications including catalysis due to their outstanding electronic, thermal, and mechanical properties. The unprecedented features of graphene-based catalysts, which are believed to be responsible for their superior performance, have been characterized by many techniques. In this article, we comprehensively summarized the characterization methods covering bulk and surface structure analysis, chemisorption ability determination, and reaction mechanism investigation. We reviewed the advantages/disadvantages of different techniques including Raman spectroscopy, X-ray photoelectron spectroscopy (XPS), Fourier transform infrared spectroscopy (FTIR) and diffuse reflectance Fourier transform infrared spectroscopy (DRIFTS), X-ray diffraction (XRD), X-ray absorption near edge structure (XANES) and X-ray absorption fine structure (XAFS), atomic force microscopy (AFM), scanning electron microscopy (SEM), transmission electron microscopy (TEM), high-resolution transmission electron microscopy (HRTEM), ultraviolet-visible spectroscopy (UV-vis), X-ray fluorescence (XRF), inductively coupled plasma mass spectrometry (ICP), thermogravimetric analysis (TGA), Brunauer–Emmett–Teller (BET) analysis, and scanning tunneling microscopy (STM). The application of temperature-programmed reduction (TPR), CO chemisorption, and NH3/CO2 temperature-programmed desorption (TPD) was also briefly introduced. Finally, we discussed the challenges and provided possible suggestions on choosing characterization techniques. This review provides key information for the catalysis community to adopt suitable characterization techniques for their research.
Stabilizing ultrasmall Au clusters for enhanced photoredox catalysis.
Weng, Bo; Lu, Kang-Qiang; Tang, Zichao; Chen, Hao Ming; Xu, Yi-Jun
Recently, loading ligand-protected gold (Au) clusters as visible light photosensitizers onto various supports for photoredox catalysis has attracted considerable attention. However, the efficient control of long-term photostability of Au clusters on the metal-support interface remains challenging. Herein, we report a simple and efficient method for enhancing the photostability of glutathione-protected Au clusters (Au GSH clusters) loaded on the surface of SiO2 spheres by utilizing multifunctional branched poly-ethylenimine (BPEI) as a surface charge modifying, reducing and stabilizing agent. The sequential coating of thickness-controlled TiO2 shells can further significantly improve the photocatalytic efficiency, while such structurally designed core-shell SiO2-Au GSH clusters-BPEI@TiO2 composites maintain high photostability under long-time light illumination conditions. This joint strategy via interfacial modification and composition engineering provides a facile guideline for stabilizing ultrasmall Au clusters and the rational design of Au clusters-based composites with improved activity toward targeted applications in photoredox catalysis.
Construction of Bifunctional Co/H-ZSM-5 Catalysts for the Hydrodeoxygenation of Stearic Acid to Diesel-range Alkanes.
Wu, Guangjun; Zhang, Nan; Dai, Weili; Guan, Naijia; Li, Landong
Bifunctional Co/H-ZSM-5 zeolites were prepared by a surface organometallic chemistry grafting route, namely the stoichiometric reaction between cobaltocene and the Brønsted acid sites in zeolites, and applied to the model reaction of stearic acid catalytic hydrodeoxygenation. Cobalt species existed in the form of isolated Co2+ ions at exchange positions after grafting, transformed to CoO species on the surface of the zeolite and stabilized inside zeolite channels upon calcination in air, and were finally reduced by hydrogen to metallic cobalt species as homogeneous clusters of ca. 1.5 nm. During this process, the Brønsted acid sites of the H-ZSM-5 zeolite could be preserved, with the acid strength slightly reduced. The as-prepared bifunctional catalyst exhibited a ~16 times higher activity in stearic acid hydrodeoxygenation (2.11 g_SA g_cat⁻¹ h⁻¹) than the reference catalyst (0.13 g_SA g_cat⁻¹ h⁻¹) prepared by solid-state ion exchange, and a high C18/C17 ratio of ~24 was achieved as well. The remarkable hydrodeoxygenation performance of bifunctional Co/H-ZSM-5 could be explained by the effective synergy between the uniform metallic cobalt clusters and the Brønsted acid sites in the H-ZSM-5 zeolite. The simplified reaction network and kinetics of stearic acid hydrodeoxygenation catalyzed by the as-prepared bifunctional Co/H-ZSM-5 zeolites were also investigated. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ZIF-67 incorporated with carbon derived from pomelo peels: A highly efficient bifunctional catalyst for oxygen reduction/evolution reactions
Wang, Hao; Yin, Feng-Xiang; Chen, Biao-Hua; He, Xiao-Bo; Lv, Peng-Liang; Ye, Cai-Yun; Liu, Di-Jia
Developing carbon catalyst materials from natural, abundant and renewable resources as precursors plays an increasingly important role in clean energy generation and environmental protection. In this work, N-doped pomelo-peel-derived carbon (NPC) materials were prepared using a widely available food waste (pomelo peels) and melamine. The synthesized NPC exhibits well-defined porosity and a high doped-N content (e.g. 6.38 at% for NPC-2), and therefore affords excellent oxygen reduction reaction (ORR) catalytic activity in alkaline electrolytes. NPC was further integrated with ZIF-67 to form ZIF-67@NPC hybrids through solvothermal reactions. The hybrid catalysts show substantially enhanced ORR catalytic activities, comparable to that of commercial 20 wt% Pt/C. Furthermore, the catalysts also exhibit excellent oxygen evolution reaction (OER) catalytic activities. Among all prepared ZIF-67@NPC hybrids, the composition with a ZIF-67 to NPC ratio of 2:1 exhibits the best ORR and OER bifunctional catalytic performance and the smallest ΔE (E_OER at 10 mA cm-2 minus E_ORR at -1 mA cm-2) value of 0.79 V. The catalyst also demonstrates a desirable 4-electron transfer pathway and superior catalytic stability. The Co-N4 sites in ZIF-67, the electrochemically active surface area, and the strong interactions between ZIF-67 and NPC are identified as the main contributors to the bifunctional catalytic activities. These factors act synergistically, resulting in substantially enhanced bifunctional catalytic activities and stabilities; consequently, this hybrid catalyst is among the best reported bifunctional electrocatalysts and is promising for use in metal-air batteries and fuel cells.
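The ΔE figure of merit quoted above is simply the gap between the OER potential at +10 mA cm-2 and the ORR potential at -1 mA cm-2. As a minimal illustrative sketch (not the authors' procedure; the interpolation approach, variable names and example numbers are assumptions), such a value could be extracted from measured polarization curves as follows:

import numpy as np

def potential_at_current(potential_V, current_mA_cm2, target_mA_cm2):
    # Interpolate the electrode potential at a target current density
    # from a measured polarization curve (potential vs. current density).
    order = np.argsort(current_mA_cm2)
    return np.interp(target_mA_cm2, np.asarray(current_mA_cm2)[order], np.asarray(potential_V)[order])

def bifunctional_gap(oer_E, oer_j, orr_E, orr_j):
    # Delta E = E_OER at +10 mA cm^-2 minus E_ORR at -1 mA cm^-2 (both vs. the same reference).
    # A smaller gap indicates a better ORR/OER bifunctional catalyst.
    return potential_at_current(oer_E, oer_j, 10.0) - potential_at_current(orr_E, orr_j, -1.0)

# Invented example: OER reaching 10 mA cm^-2 at 1.62 V and ORR reaching -1 mA cm^-2
# at 0.83 V gives Delta E = 0.79 V, the kind of value reported above.
print(bifunctional_gap([1.50, 1.62, 1.70], [1.0, 10.0, 30.0],
                       [0.90, 0.83, 0.75], [-0.1, -1.0, -3.0]))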
L-Threonine-derived novel bifunctional phosphine-sulfonamide catalyst-promoted enantioselective aza-Morita-Baylis-Hillman reaction
Zhong, Fangrui
A series of novel bifunctional phosphine-sulfonamide organic catalysts were designed and readily prepared from natural amino acids, and they were utilized to promote enantioselective aza-Morita-Baylis-Hillman (MBH) reactions. L-Threonine-derived phosphine-sulfonamide 9b was found to be the most efficient catalyst, affording the desired aza-MBH adducts in high yields and with excellent enantioselectivities.
Standards vs. labels with imperfect competition and asymmetric information
Baltzer, Kenneth Thomas
I demonstrate that providing information about product quality is not necessarily the best way to address asymmetric information problems when markets are imperfectly competitive. In a vertical differentiation model I show that a Minimum Quality Standard, which retains asymmetric information...
Vortex Dynamics of Asymmetric Heave Plates
Rusch, Curtis; Maurer, Benjamin; Polagye, Brian
Heave plates can be used to provide reaction forces for wave energy converters, which harness the power in ocean surface waves to produce electricity. Heave plate inertia includes both the static mass of the heave plate, as well as the "added mass" of surrounding water accelerated with the object. Heave plate geometries may be symmetric or asymmetric, with interest in asymmetric designs driven by the resulting hydrodynamic asymmetry. Limited flow visualization has been previously conducted on symmetric heave plates, but flow visualization of asymmetric designs is needed to understand the origin of observed hydrodynamic asymmetries and their dependence on the Keulegan-Carpenter number. For example, it is hypothesized that the time-varying added mass of asymmetric heave plates is caused by vortex shedding, which is related to oscillation amplitude. Here, using direct flow visualization, we explore the relationship between vortex dynamics and time-varying added mass and drag. These results suggest potential pathways for more advanced heave plate designs that can exploit vortex formation and shedding to achieve more favorable hydrodynamic properties for wave energy converters.
Asymmetric hindwing foldings in rove beetles.
Saito, Kazuya; Yamamoto, Shuhei; Maruyama, Munetoshi; Okabe, Yoji
Foldable wings of insects are the ultimate deployable structures and have attracted the interest of aerospace engineering scientists as well as entomologists. Rove beetles are known to fold their wings in the most sophisticated ways that have right-left asymmetric patterns. However, the specific folding process and the reason for this asymmetry remain unclear. This study reveals how these asymmetric patterns emerge as a result of the folding process of rove beetles. A high-speed camera was used to reveal the details of the wing-folding movement. The results show that these characteristic asymmetrical patterns emerge as a result of simultaneous folding of overlapped wings. The revealed folding mechanisms can achieve not only highly compact wing storage but also immediate deployment. In addition, the right and left crease patterns are interchangeable, and thus each wing internalizes two crease patterns and can be folded in two different ways. This two-way folding gives freedom of choice for the folding direction to a rove beetle. The use of asymmetric patterns and the capability of two-way folding are unique features not found in artificial structures. These features have great potential to extend the design possibilities for all deployable structures, from space structures to articles of daily use.
Mixed gas plasticization phenomena in asymmetric membranes
Visser, Tymen
This thesis describes the thorough investigation of mixed gas transport behavior of asymmetric membranes in the separation of feed streams containing plasticizing gases in order to gain more insights into the complicated behavior of plasticization. To successfully employ gas separation membranes in
Asymmetric conditional volatility in international stock markets
Ferreira, Nuno B.; Menezes, Rui; Mendes, Diana A.
Recent studies show that a negative shock in stock prices will generate more volatility than a positive shock of similar magnitude. The aim of this paper is to appraise the hypothesis under which the conditional mean and the conditional variance of stock returns are asymmetric functions of past information. We compare the results for the Portuguese Stock Market Index PSI 20 with six other stock market indices, namely the S&P 500, FTSE 100, DAX 30, CAC 40, ASE 20, and IBEX 35. In order to assess asymmetric volatility we use autoregressive conditional heteroskedasticity specifications known as TARCH and EGARCH. We also test for asymmetry after controlling for the effect of macroeconomic factors on stock market returns using TAR and M-TAR specifications within a VAR framework. Our results show that the conditional variance is an asymmetric function of past innovations, rising proportionately more during market declines, a phenomenon known as the leverage effect. However, when we control for the effect of changes in macroeconomic variables, we find no significant evidence of asymmetric behaviour of the stock market returns. There are some signs that the Portuguese stock market tends to show somewhat less market efficiency than the other markets, since the effect of shocks appears to take a longer time to dissipate.
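As a hedged illustration of the asymmetry tested above, the sketch below simulates an EGARCH(1,1) recursion in which a negative sign coefficient (gamma) makes negative shocks raise the conditional variance more than positive shocks of the same size; the parameter values are invented for illustration and are not estimates from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative EGARCH(1,1):
# ln(sigma_t^2) = omega + beta*ln(sigma_{t-1}^2) + alpha*(|z_{t-1}| - E|z|) + gamma*z_{t-1}
omega, alpha, gamma, beta = -0.1, 0.1, -0.08, 0.95   # gamma < 0 encodes the leverage effect
e_abs_z = np.sqrt(2.0 / np.pi)                        # E|z| for a standard normal shock

n = 2000
z = rng.standard_normal(n)
log_var = np.empty(n)
log_var[0] = omega / (1.0 - beta)                     # start at the unconditional level
for t in range(1, n):
    log_var[t] = (omega + beta * log_var[t - 1]
                  + alpha * (abs(z[t - 1]) - e_abs_z) + gamma * z[t - 1])

# On average, the conditional variance following a negative shock exceeds that
# following a positive shock of similar magnitude -- the asymmetry in question.
var_after_neg = np.exp(log_var[1:])[z[:-1] < 0].mean()
var_after_pos = np.exp(log_var[1:])[z[:-1] > 0].mean()
print(var_after_neg > var_after_pos)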
MHD stability of vertically asymmetric tokamak equilibria
Dalhed, H.E.; Grimm, R.C.; Johnson, J.L.
The ideal MHD stability properties of a special class of vertically asymmetric tokamak equilibria are examined. The calculations confirm that no major new physical effects are introduced and the modifications can be understood by conventional arguments. The results indicate that significant departures from up-down symmetry can be tolerated before the reduction in β becomes important for reactor operation.
Catalytic asymmetric synthesis of the alkaloid (+)-myrtine
Pizzuti, Maria Gabriefla; Minnaard, Adriaan J.; Feringa, Ben L.
A new protocol for the asymmetric synthesis of trans-2,6-disubstituted-4-piperidones has been developed using a catalytic enantioselective conjugate addition reaction in combination with a diastereoselective lithiation-substitution sequence; an efficient synthesis of (+)-myrtine has been achieved.
Volume inequalities for asymmetric Wulff shapes
Schuster, Franz E.; Weberndorfer, Manuel
Sharp reverse affine isoperimetric inequalities for asymmetric Wulff shapes and their polars are established, along with the characterization of all extremals. These new inequalities have as special cases previously obtained simplex inequalities by Ball, Barthe and Lutwak, Yang, and Zhang. In particular, they provide the solution to a problem by Zhang.
Quantum optics of lossy asymmetric beam splitters
Uppu, Ravitej; Wolterink, Tom; Tentrup, Tristan Bernhard Horst; Pinkse, Pepijn Willemszoon Harry
We theoretically investigate quantum interference of two single photons at a lossy asymmetric beam splitter, the most general passive 2×2 optical circuit. The losses in the circuit result in a non-unitary scattering matrix with a non-trivial set of constraints on the elements of the scattering
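A hedged note on the constraints referred to above (a standard passivity argument, not necessarily the paper's parametrization): writing the 2×2 scattering matrix as $S = \begin{pmatrix} t & r' \\ r & t' \end{pmatrix}$, a passive but lossy circuit only has to satisfy sub-unitarity, $\mathbb{1} - S^{\dagger}S \ge 0$, so in particular $|t|^2 + |r|^2 \le 1$ and $|t'|^2 + |r'|^2 \le 1$, with the deficits giving the loss seen from each input port; the lossless (unitary) beam splitter is recovered when both inequalities become equalities.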
Motion in an Asymmetric Double Well
Brizard, Alain J.; Westland, Melissa C.
The problem of the motion of a particle in an asymmetric double well is solved explicitly in terms of the Weierstrass and Jacobi elliptic functions. While the solution of the orbital motion is expressed simply in terms of the Weierstrass elliptic function, the period of oscillation is more directly expressed in terms of periods of the Jacobi elliptic functions.
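A hedged sketch of why elliptic functions appear (generic quartic double well; the authors' specific potential and normalization may differ): for $V(x)=a x^{4}+b x^{3}+c x^{2}+d x$ with $b,d\neq 0$ breaking the symmetry, energy conservation $\tfrac{1}{2}\dot{x}^{2}+V(x)=E$ gives $$\dot{x}^{2}=2\,[E-V(x)]\equiv Q_{4}(x),$$ a quartic in $x$. The standard substitution $x=x_{0}+\alpha/\big(\wp(t)-\beta\big)$, with $x_{0}$ a turning point (a root of $Q_{4}$) and suitable constants $\alpha,\beta$, reduces this to the Weierstrass form $$\wp'(t)^{2}=4\wp(t)^{3}-g_{2}\,\wp(t)-g_{3},$$ so the orbit is expressed through $\wp$, while the oscillation period is a complete elliptic integral, conventionally written in terms of Jacobi elliptic functions.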
Asymmetric output profile of Xe Laser
Blok, F.J.; Rubin, P.L.; Verschuur, Jeroen W.J.; Witteman, W.J.
A new set of asymmetric modes was recently revealed in a Xe slab laser with pronounced lens effects originating from gas heating in the discharge. The appearance of these modes is a threshold effect. Their domain of existence in the Xe laser is discussed. It is shown that mode competition can result
Computing modal dispersion characteristics of radially Asymmetric ...
We developed a matrix theory that applies to fibers with non-circular or circular but concentric layers, and we compute the dispersion characteristics of a radially unconventional fiber known as the asymmetric Bragg fiber. An attempt has been made to determine how the modal characteristics change as the circular Bragg fiber is ...
Seasonally asymmetric enhancement of northern vegetation productivity
Park, T.; Myneni, R.
Multiple lines of evidence of widespread greening and increasing terrestrial carbon uptake have been documented. In particular, enhanced gross productivity of northern vegetation has played a critical role in the observed carbon uptake trend. However, seasonal photosynthetic activity and its contribution to the observed annual carbon uptake trend and interannual variability are not well understood. Here, we use multiple sources of data, including ground, atmospheric and satellite observations, together with multiple process-based global vegetation models, to understand how seasonal variation of land surface vegetation controls large-scale carbon exchange. Our analysis clearly shows a seasonally asymmetric enhancement of northern vegetation productivity over the growing season during the last decades. In particular, increasing gross productivity in late spring and early summer is the obvious and dominant driver explaining the observed trend and variability. We observe a more asymmetric productivity enhancement in warmer regions, and this spatially varying asymmetry in northern vegetation is likely explained by canopy development rate and by thermal and light availability. These results imply that continued warming may amplify the asymmetry of vegetation activity and cause these trends to become more pervasive, leading in turn to a warming-induced regime shift in northern lands.
Electrodeposited nano-scale islands of ruthenium oxide as a bifunctional electrocatalyst for simultaneous catalytic oxidation of hydrazine and hydroxylamine
Zare, Hamid R.; Hashemi, S. Hossein; Benvidi, Ali
For the first time, electrodeposited nano-scale islands of ruthenium oxide (ruthenium oxide nanoparticles), acting as an excellent bifunctional electrocatalyst, were successfully used for the electrocatalytic oxidation of hydrazine and hydroxylamine. The results show that, at the present bifunctional modified electrode, two different redox couples of ruthenium oxides serve as electrocatalysts for the simultaneous electrocatalytic oxidation of hydrazine and hydroxylamine. At the modified electrode surface, the differential pulse voltammetry (DPV) peaks for hydrazine and hydroxylamine oxidation were clearly separated from each other when the two species coexisted in solution. Thus, it was possible to simultaneously determine hydrazine and hydroxylamine in samples at a ruthenium oxide nanoparticle-modified glassy carbon electrode (RuON-GCE). Linear calibration curves were obtained for 2.0-268.3 μM and 268.3-417.3 μM of hydrazine and for 4.0-33.8 μM and 33.8-78.3 μM of hydroxylamine at the modified electrode surface using an amperometric method. The amperometric method also exhibited detection limits of 0.15 μM and 0.45 μM for hydrazine and hydroxylamine, respectively. RuON-GCE was satisfactorily used for the determination of spiked hydrazine in two water samples. Moreover, the studied bifunctional modified electrode exhibited high sensitivity, good repeatability, a wide linear range and long-term stability.
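As a hedged sketch of how such amperometric figures of merit are commonly obtained (the 3σ/slope criterion and the numbers below are illustrative assumptions, not the authors' raw data), a linear fit over a calibration range gives the sensitivity, and the detection limit follows from the blank noise divided by the slope:

import numpy as np

# Invented calibration points for the lower hydrazine range (concentration in uM,
# steady-state amperometric current in uA); illustrative only.
conc = np.array([2.0, 25.0, 75.0, 150.0, 268.3])
current = np.array([0.04, 0.51, 1.49, 3.02, 5.35])

slope, intercept = np.polyfit(conc, current, 1)   # sensitivity (uA per uM) and offset

sigma_blank = 0.001                               # assumed standard deviation of the blank current (uA)
lod = 3.0 * sigma_blank / slope                   # common 3*sigma/slope detection-limit criterion

print(f"sensitivity = {slope:.4f} uA/uM, detection limit = {lod:.2f} uM")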
Synthesis of deuterium-labeled analogs of the lipid hydroperoxide-derived bifunctional electrophile 4-oxo-2(E)-nonenal.
Arora, Jasbir S; Oe, Tomoyuki; Blair, Ian A
Lipid hydroperoxides undergo homolytic decomposition into the bifunctional electrophiles 4-hydroxy-2(E)-nonenal and 4-oxo-2(E)-nonenal (ONE). These bifunctional electrophiles are highly reactive and can readily modify intracellular molecules including glutathione (GSH), deoxyribonucleic acid (DNA) and proteins. Lipid hydroperoxide-derived bifunctional electrophiles are thought to contribute to the pathogenesis of a number of diseases. ONE is an α,β-unsaturated aldehyde that can react in multiple ways with glutathione, proteins and DNA. Heavy isotope-labeled analogs of ONE are not readily available for conducting mechanistic studies or for use as internal standards in mass spectrometry (MS)-based assays. An efficient one-step, cost-effective method has been developed for the preparation of C-9 deuterium-labeled ONE. In addition, a method for specific deuterium labeling of ONE at C-2, C-3 or both C-2 and C-3 has been developed. This latter method involved the selective reduction of an intermediate alkyne by either lithium aluminum hydride or lithium aluminum deuteride and quenching with water or deuterium oxide. The availability of these heavy isotope analogs will be useful for internal standards in quantitative MS studies and for mechanistic studies of the complex interactions between ONE and DNA bases as well as between ONE and proximal amino acid residues in peptides and proteins.
A fundamental trade-off in covalent switching and its circumvention by enzyme bifunctionality in glucose homeostasis.
Dasgupta, Tathagata; Croll, David H; Owen, Jeremy A; Vander Heiden, Matthew G; Locasale, Jason W; Alon, Uri; Cantley, Lewis C; Gunawardena, Jeremy
Covalent modification provides a mechanism for modulating molecular state and regulating physiology. A cycle of competing enzymes that add and remove a single modification can act as a molecular switch between "on" and "off" and has been widely studied as a core motif in systems biology. Here, we exploit the recently developed "linear framework" for time scale separation to determine the general principles of such switches. These methods are not limited to Michaelis-Menten assumptions, and our conclusions hold for enzymes whose mechanisms may be arbitrarily complicated. We show that switching efficiency improves with increasing irreversibility of the enzymes and that the on/off transition occurs when the ratio of enzyme levels reaches a value that depends only on the rate constants. Fluctuations in enzyme levels, which habitually occur due to cellular heterogeneity, can cause flipping back and forth between on and off, leading to incoherent mosaic behavior in tissues, that worsens as switching becomes sharper. This trade-off can be circumvented if enzyme levels are correlated. In particular, if the competing catalytic domains are on the same protein but do not influence each other, the resulting bifunctional enzyme can switch sharply while remaining coherent. In the mammalian liver, the switch between glycolysis and gluconeogenesis is regulated by the bifunctional 6-phosphofructo-2-kinase/fructose-2,6-bisphosphatase (PFK-2/FBPase-2). We suggest that bifunctionality of PFK-2/FBPase-2 complements the metabolic zonation of the liver by ensuring coherent switching in response to insulin and glucagon.
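The abstract stresses that its conclusions do not rely on Michaelis-Menten assumptions; purely as an illustration of the kind of switch being discussed, the sketch below evaluates the classical Goldbeter-Koshland steady state of a covalent modification cycle under Michaelis-Menten kinetics, showing the sharp on/off transition as the ratio of the two enzyme activities crosses one (parameter values are illustrative).

import numpy as np

def goldbeter_koshland(v1, v2, J1, J2):
    # Steady-state modified fraction of a covalent modification cycle under
    # Michaelis-Menten kinetics (Goldbeter-Koshland function).
    # v1, v2: maximal rates of the modifying / demodifying enzymes;
    # J1, J2: Michaelis constants scaled by total substrate.
    B = v2 - v1 + J1 * v2 + J2 * v1
    return 2.0 * v1 * J2 / (B + np.sqrt(B**2 - 4.0 * (v2 - v1) * v1 * J2))

# In the zero-order regime (small J1, J2) the modified fraction flips sharply
# from ~0 to ~1 as the enzyme-activity ratio v1/v2 crosses 1.
for r in np.linspace(0.5, 1.5, 11):
    print(f"v1/v2 = {r:4.2f}  modified fraction = {goldbeter_koshland(r, 1.0, 0.01, 0.01):.3f}")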
Charge Asymmetric Cosmic Rays as a probe of Flavor Violating Asymmetric Dark Matter
Masina, Isabella; Sannino, Francesco
The recently introduced cosmic sum rules combine the data from the PAMELA and Fermi-LAT cosmic ray experiments in a way that permits one to neatly investigate whether the experimentally observed lepton excesses violate charge symmetry. One can in a simple way determine universal properties of the unknown component of the cosmic rays. Here we attribute a potential charge asymmetry to the dark sector. In particular we provide models of asymmetric dark matter able to produce charge asymmetric cosmic rays. We consider spin zero, spin one and spin one-half decaying dark matter candidates. We show that lepton flavor violation and asymmetric dark matter are both required to have a charge asymmetry in the cosmic ray lepton excesses. Therefore, experimental evidence of charge asymmetry in the cosmic ray lepton excesses implies that dark matter is asymmetric.
The Mycobacterium tuberculosis Rv2540c DNA sequence encodes a bifunctional chorismate synthase
Santos Diógenes S
Background: The emergence of multi- and extensively-drug resistant Mycobacterium tuberculosis strains has created an urgent need for new agents to treat tuberculosis (TB). The enzymes of the shikimate pathway are attractive targets for the development of antitubercular agents because the pathway is essential for M. tuberculosis and is absent from humans. Chorismate synthase (CS) is the seventh enzyme of this route and catalyzes the NADH- and FMN-dependent synthesis of chorismate, a precursor of aromatic amino acids, naphthoquinones, menaquinones, and mycobactins. Although the M. tuberculosis Rv2540c (aroF) sequence has been annotated to encode a chorismate synthase, there has been no report on its correct assignment and functional characterization of its protein product. Results: In the present work, we describe DNA amplification of aroF-encoded CS from M. tuberculosis (MtCS), molecular cloning, protein expression, and purification to homogeneity. N-terminal amino acid sequencing, mass spectrometry and gel filtration chromatography were employed to determine the identity, subunit molecular weight and oligomeric state in solution of homogeneous recombinant MtCS. The bifunctionality of MtCS was determined by measurements of both chorismate synthase and NADH:FMN oxidoreductase activities. The flavin reductase activity was characterized, showing the existence of a complex between FMNox and MtCS. FMNox and NADH equilibrium binding was measured. Primary deuterium, solvent and multiple kinetic isotope effects are described and suggest distinct steps for hydride and proton transfers, with the former being more rate-limiting. Conclusion: This is the first report showing that a bacterial CS is bifunctional. Primary deuterium kinetic isotope effects show that the C4-proS hydrogen is transferred during the reduction of FMNox by NADH and that hydride transfer contributes significantly to the rate-limiting step of the FMN reduction reaction. Solvent kinetic isotope effects and
Combination of Asymmetric Supercapacitor Utilizing Activated Carbon and Nickel Oxide with Cobalt Polypyridyl-Based Dye-Sensitized Solar Cell
Bagheri, Narjes; Aghaei, Alireza; Ghotbi, Mohammad Yeganeh; Marzbanrad, Ehsan; Vlachopoulos, Nick; Häggman, Leif; Wang, Michael; Boschloo, Gerrit; Hagfeldt, Anders; Skunik-Nuckowska, Magdalena; Kulesza, Pawel J.
Highlights: • Dye solar cell and supercapacitor are integrated into a single device capable of generation and storage of energy. • The solar cell part of the device utilizes the Co-based electrolyte and a nickel/PEDOT counter electrode. • A cobalt-doped nickel oxide together with activated carbon is used in the capacitor part of the device. • The integrated photocapacitor is characterized by a capacitance of 32 F g-1 and a total efficiency of 0.6%. Abstract: A dye-sensitized solar cell (DSC) based on a metal-free organic sensitizer and the cobalt(II, III) polypyridyl electrolyte was integrated here with an asymmetric supercapacitor utilizing cobalt-doped nickel oxide and activated carbon as positive and negative electrodes, respectively. A low-cost nickel foil served as an intermediate (auxiliary) bifunctional electrode separating the two parts of the device and permitting DSC electrolyte regeneration on one side and charge storage within cobalt-doped nickel oxide on the other. The main purpose of the research was to develop an integrated photocapacitor system capable of both energy generation and its further storage. Under irradiation at the 100 mW cm-2 level, the solar cell generated an open-circuit voltage of 0.8 V and a short-circuit current of 8 mA cm-2, which corresponds to an energy conversion efficiency of 4.9%. It was further shown that upon integration with the asymmetric supercapacitor, the photogenerated energy was directly injected into the porous charge storage electrodes, resulting in a specific capacitance of 32 F g-1 and an energy density of 2.3 Wh kg-1. The coulombic and total (energy conversion and charge storage) efficiencies of the photocapacitor were equal to 54% and 0.6%, respectively.
Contrast and Synergy between Electrocatalysis and Heterogeneous Catalysis
Andrzej Wieckowski
The advances in spectroscopy and theory that have occurred over the past two decades begin to provide detailed in situ resolution of the molecular transformations that occur at both gas/metal as well as aqueous/metal interfaces. These advances begin to allow for a more direct comparison of heterogeneous catalysis and electrocatalysis. Such comparisons become important, as many of the current energy conversion strategies involve catalytic and electrocatalytic processes that occur at fluid/solid interfaces and display very similar characteristics. Herein, we compare and contrast a few different catalytic and electrocatalytic systems to elucidate the principles that cross-cut both areas and establish characteristic differences between the two with the hope of advancing both areas.
REALCAT: A New Platform to Bring Catalysis to the Lightspeed
Paul Sébastien
Catalysis, irrespective of its form, can be considered one of the most important pillars of today's chemical industry. The development of new catalysts with improved performance is therefore a highly strategic issue. However, the a priori theoretical design of the best catalyst for a desired reaction is not yet possible, and a time- and money-consuming experimental phase is still needed to develop a new catalyst for a given reaction. The REALCAT platform described in this paper consists of a complete, unique, integrated and top-level high-throughput technologies workflow that allows a significant acceleration of this kind of research. This is illustrated by some preliminary results on the optimization of the operating conditions of glycerol dehydration to acrolein over a heteropolyacid-based supported catalyst. It is shown that using the REALCAT high-throughput tools a more than 10-fold acceleration of the operating-conditions optimization process is obtained.
Advanced electron microscopy characterization of nanomaterials for catalysis
Dong Su
Transmission electron microscopy (TEM) has become one of the most powerful techniques in the fields of materials science, inorganic chemistry and nanotechnology. In terms of resolution, advanced TEM may reach a spatial resolution of 0.05 nm and an energy resolution of 7 meV. In addition, in situ TEM can help researchers image processes that happen within 1 ms. This paper reviews recent technical progress in applying advanced TEM characterization to nanomaterials for catalysis. The text is organized from the perspective of application: for example, size, composition, phase, strain, and morphology. Electron-beam-induced effects and in situ TEM are also introduced. I hope this review can help scientists in related fields take advantage of advanced TEM in their own research. Keywords: Advanced TEM, Nanomaterials, Catalysts, In situ
Gravitational catalysis of merons in Einstein-Yang-Mills theory
Canfora, Fabrizio; Oh, Seung Hun; Salgado-Rebolledo, Patricio
We construct regular configurations of the Einstein-Yang-Mills theory in various dimensions. The gauge field is of meron-type: it is proportional to a pure gauge (with a suitable parameter λ determined by the field equations). The corresponding smooth gauge transformation cannot be deformed continuously to the identity. In the three-dimensional case we consider the inclusion of a Chern-Simons term into the analysis, allowing λ to be different from its usual value of 1/2. In four dimensions, the gravitating meron is a smooth Euclidean wormhole interpolating between different vacua of the theory. In five and higher dimensions smooth meron-like configurations can also be constructed by considering warped products of the three-sphere and lower-dimensional Einstein manifolds. In all cases merons (which on flat spaces would be singular) become regular due to the coupling with general relativity. This effect is named "gravitational catalysis of merons".
Catalysis by Dust Grains in the Solar Nebula
Kress, Monika E.; Tielens, Alexander G. G. M.
In order to determine whether grain-catalyzed reactions played an important role in the chemistry of the solar nebula, we have applied our time-dependent model of methane formation via Fischer-Tropsch catalysis to pressures from 10^-5 to 1 bar and temperatures from 450 to 650 K. Under these physical conditions, the reaction 3H2 + CO → CH4 + H2O is readily catalyzed by an iron or nickel surface, whereas the same reaction is kinetically inhibited in the gas phase. Our model results indicate that under certain nebular conditions, conversion of CO to methane could be extremely efficient in the presence of iron-nickel dust grains over timescales very short compared to the lifetime of the solar nebula.
Enzymatic catalysis treatment method of meat industry wastewater using laccase.
Thirugnanasambandham, K; Sivakumar, V
Meat processing produces a large amount of wastewater that contains high levels of colour and chemical oxygen demand (COD), so it must be pretreated before discharge into the ecological system. In this paper, enzymatic catalysis (EC) was adopted to treat the meat wastewater. A Box-Behnken design (BBD), an experimental design for response surface methodology (RSM), was used to create the set of 29 experimental runs needed to optimize the operating conditions. Quadratic regression models with estimated coefficients were developed to describe the colour and COD removals. The experimental results show that EC could effectively reduce colour (95%) and COD (86%) at the optimum conditions of an enzyme dose of 110 U/L, an incubation time of 100 min, pH 7 and a temperature of 40 °C. RSM could be effectively adopted to optimize the multiple operating factors in the complex EC process.
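For reference, a four-factor Box-Behnken design consists of all ±1 combinations of the factors taken two at a time (the remaining factors held at their center level) plus replicated center points; with five center replicates this gives exactly the 29 runs mentioned above. A minimal sketch in coded units only (the mapping of -1/0/+1 to actual enzyme dose, time, pH and temperature levels is not taken from the paper):

from itertools import combinations, product
import numpy as np

def box_behnken(n_factors, n_center):
    # Box-Behnken design matrix in coded units (-1, 0, +1).
    runs = []
    for i, j in combinations(range(n_factors), 2):    # every pair of factors
        for a, b in product((-1, 1), repeat=2):       # the four corner combinations
            row = [0] * n_factors
            row[i], row[j] = a, b
            runs.append(row)
    runs.extend([[0] * n_factors] * n_center)         # center-point replicates
    return np.array(runs)

design = box_behnken(n_factors=4, n_center=5)
print(design.shape)   # (29, 4): 24 edge runs + 5 center points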
Catalysis and Downsizing in Mg-Based Hydrogen Storage Materials
Jianding Li
Magnesium (Mg)-based materials are promising candidates for hydrogen storage due to the low cost, high hydrogen storage capacity and abundant resources of magnesium for the realization of a hydrogen society. However, the sluggish kinetics and the strong stability of the metal-hydrogen bonding of Mg-based materials hinder their application, especially for onboard storage. Many researchers are devoted to overcoming these challenges by numerous methods. Here, this review summarizes advances in the development of Mg-based hydrogen storage materials related to downsizing and catalysis. In particular, the focus is on how downsizing and catalysts affect the hydrogen storage capacity, kinetics and thermodynamics of Mg-based hydrogen storage materials. Finally, the future development and applications of Mg-based hydrogen storage materials are discussed.
Catalysis by metal-organic frameworks: fundamentals and opportunities.
Ranocchiari, Marco; van Bokhoven, Jeroen Anton
Crystalline porous materials are extremely important for developing catalytic systems with high scientific and industrial impact. Metal-organic frameworks (MOFs) show unique potential that still has to be fully exploited. This perspective summarizes the properties of MOFs with the aim of understanding the possible approaches to catalysis with these materials. We categorize three classes of MOF catalysts: (1) those with active sites on the framework, (2) those with encapsulated active species, and (3) those with active sites attached through post-synthetic modification. We identify the tunable porosity, the ability to fine-tune the structure of the active site and its environment, the presence of multiple active sites, and the opportunity to synthesize structures in which key-lock bonding of substrates occurs as the characteristics that distinguish MOFs from other materials. We experience a unique opportunity to imagine and design heterogeneous catalysts, which might catalyze reactions previously thought impossible.
Quantifying the limits of transition state theory in enzymatic catalysis.
Zinovjev, Kirill; Tuñón, Iñaki
While being one of the most popular reaction rate theories, the applicability of transition state theory to the study of enzymatic reactions has often been challenged. The complex dynamic nature of the protein environment raised the question of the validity of the nonrecrossing hypothesis, a cornerstone of this theory. We present a computational strategy to quantify the error associated with transition state theory from the number of recrossings observed at the equicommittor, which is the best possible dividing surface. Application of a direct multidimensional transition state optimization to the hydride transfer step in human dihydrofolate reductase shows that both the participation of the protein degrees of freedom in the reaction coordinate and the error associated with the nonrecrossing hypothesis are small. Thus, the use of transition state theory, even with simplified reaction coordinates, provides a good theoretical framework for the study of enzymatic catalysis.
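A hedged sketch of the quantity being bounded (notation ours, not necessarily the authors'): the exact rate equals the transition state theory rate multiplied by a transmission coefficient, $$k_{\mathrm{exact}}=\kappa\,k_{\mathrm{TST}},\qquad 0<\kappa\le 1,$$ where $\kappa$ is the fraction of dividing-surface crossings that commit to products without recrossing. Because the equicommittor (the surface where the committor probability equals $1/2$) is the best possible dividing surface, the recrossings counted there bound the intrinsic error of the nonrecrossing hypothesis: few recrossings means $\kappa$ close to $1$ and only a small relative error in the computed rate.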
Cooperative catalysis with block copolymer micelles: A combinatorial approach
Bukhryakov, Konstantin V.; Desyatkin, Victor G.; O'Shea, John Paul; Almahdali, Sarah; Solovyeva, Vera; Rodionov, Valentin
A rapid approach to identifying complementary catalytic groups using combinations of functional polymers is presented. Amphiphilic polymers with "clickable" hydrophobic blocks were used to create a library of functional polymers, each bearing a single functionality. The polymers were combined in water, yielding mixed micelles. As the functional groups were colocalized in the hydrophobic microphase, they could act cooperatively, giving rise to new modes of catalysis. The multipolymer "clumps" were screened for catalytic activity, both in the presence and absence of metal ions. A number of catalyst candidates were identified across a wide range of model reaction types. One of the catalytic systems discovered was used to perform a number of preparative-scale syntheses. Our approach provides easy access to a range of enzyme-inspired cooperative catalysts.
Molecular surface science of heterogeneous catalysis. History and perspective
A personal account is given of how the author became involved with modern surface science and how it was employed for studies of the chemistry of surfaces and heterogeneous catalysis. New techniques were developed for studying the properties of the surface monolayers: Auger electron spectroscopy, LEED, XPS, molecular beam surface scattering, etc. An apparatus was developed and used to study hydrocarbon conversion reactions on Pt, CO hydrogenation on Rh and Fe, and NH3 synthesis on Fe. A model has been developed for the working Pt reforming catalyst. The three molecular ingredients that control catalytic properties are atomic surface structure, an active carbonaceous deposit, and the proper oxidation state of surface atoms. 40 references, 21 figures.
A solvable two-species catalysis-driven aggregation model
Ke Jian Hong
We study the kinetics of a two-species catalysis-driven aggregation system, in which an irreversible aggregation between any two clusters of one species occurs only with the catalytic action of the other species. By means of a generalized mean-field rate equation, we obtain the asymptotic solutions of the cluster mass distributions in a simple process with a constant rate kernel. For the case without any consumption of the catalyst, the cluster mass distribution of either species always approaches a conventional scaling law. However, the evolution behaviour of the system in the case with catalyst consumption is complicated and depends crucially on the relative values of the initial concentrations of the two species.
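As a hedged illustration of what such a generalized mean-field rate equation can look like (the precise kernel and notation in the paper may differ), with a constant rate kernel $I$, catalyst concentration $B(t)$ and cluster concentrations $a_k(t)$ of the aggregating species, a catalysis-driven Smoluchowski equation reads $$\frac{\mathrm{d}a_k}{\mathrm{d}t}=\frac{I}{2}\,B(t)\sum_{i+j=k}a_i a_j-I\,B(t)\,a_k\sum_{j\ge 1}a_j .$$ When the catalyst is not consumed, $B(t)=B_0$ is constant and the cluster mass distribution approaches the conventional scaling form $a_k(t)\simeq t^{-2}\Phi(k/t)$; catalyst consumption makes $B(t)$ decay in time and modifies the long-time kinetics, consistent with the behaviour described above.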
Mesostructure-Induced Selectivity in CO2 Reduction Catalysis.
Hall, Anthony Shoji; Yoon, Youngmin; Wuttig, Anna; Surendranath, Yogesh
Gold inverse opal (Au-IO) thin films are active for CO2 reduction to CO with high efficiency at modest overpotentials and high selectivity relative to hydrogen evolution. The specific activity for hydrogen evolution diminishes by 10-fold with increasing porous film thickness, while CO evolution activity is largely unchanged. We demonstrate that the origin of hydrogen suppression in Au-IO films stems from the generation of diffusional gradients within the pores of the mesostructured electrode rather than changes in surface faceting or Au grain size. For electrodes with optimal mesoporosity, 99% selectivity for CO evolution can be obtained at overpotentials as low as 0.4 V. These results establish electrode mesostructuring as a complementary method for tuning selectivity in CO2-to-fuels catalysis.
Atomically precise cluster catalysis towards quantum controlled catalysts
Watanabe, Yoshihide
Catalysis of atomically precise clusters supported on a substrate is reviewed in relation to the type of reactions. The catalytic activity of supported clusters has generally been discussed in terms of electronic structure. Several lines of evidence have indicated that the electronic structure of clusters and the geometry of clusters on a support, including the accompanying cluster-support interaction, are strongly correlated with catalytic activity. The electronic states of small clusters would be easily affected by cluster-support interactions. Several studies have suggested that it is possible to tune the electronic structure through atomic control of the cluster size. It is promising to tune not only the number of cluster atoms, but also the hybridization between the electronic states of the adsorbed reactant molecules and clusters in order to realize a quantum-controlled catalyst.
Direct conversion of CO2 into liquid fuels with high selectivity over a bifunctional catalyst
Gao, Peng; Li, Shenggang; Bu, Xianni; Dang, Shanshan; Liu, Ziyu; Wang, Hui; Zhong, Liangshu; Qiu, Minghuang; Yang, Chengguang; Cai, Jun; Wei, Wei; Sun, Yuhan
Although considerable progress has been made in carbon dioxide (CO2) hydrogenation to various C1 chemicals, it is still a great challenge to synthesize value-added products with two or more carbons, such as gasoline, directly from CO2 because of the extreme inertness of CO2 and a high C-C coupling barrier. Here we present a bifunctional catalyst composed of reducible indium oxides (In2O3) and zeolites that yields a high selectivity to gasoline-range hydrocarbons (78.6%) with a very low methane selectivity (1%). The oxygen vacancies on the In2O3 surfaces activate CO2 and hydrogen to form methanol, and C-C coupling subsequently occurs inside zeolite pores to produce gasoline-range hydrocarbons with a high octane number. The proximity of these two components plays a crucial role in suppressing the undesired reverse water gas shift reaction and giving a high selectivity for gasoline-range hydrocarbons. Moreover, the pellet catalyst exhibits a much better performance during an industry-relevant test, which suggests promising prospects for industrial applications.
Bifunctional Anti-Non-Amyloid Component α-Synuclein Nanobodies Are Protective In Situ.
David C Butler
Misfolding, abnormal accumulation, and secretion of α-Synuclein (α-Syn) are closely associated with synucleinopathies, including Parkinson's disease (PD). VH14 is a human single domain intrabody selected against the non-amyloid component (NAC) hydrophobic interaction region of α-Syn, which is critical for initial aggregation. Using neuronal cell lines, we show that, as a bifunctional nanobody fused to a proteasome targeting signal, VH14PEST can counteract heterologous proteostatic effects of mutant α-Syn on mutant huntingtin Exon1 and protect against α-Syn toxicity using propidium iodide or Annexin V readouts. We compared this anti-NAC candidate to NbSyn87, which binds to the C-terminus of α-Syn. NbSyn87PEST degrades α-Syn as well as or better than VH14PEST. However, while both candidates reduced toxicity, VH14PEST appears more effective in both proteostatic stress and toxicity assays. These results show that the approach of reducing intracellular monomeric targets with novel antibody engineering technology should allow in vivo modulation of proteostatic pathologies.
Colorimetric and luminescent bifunctional iridium(III) complexes for the sensitive recognition of cyanide ions
Chen, Xiudan; Wang, Huili; Li, Jing; Hu, Wenqin; Li, Mei-Jin
Two new cyclometalated iridium(III) complexes, [(ppy)2Irppz]Cl (1) and [(ppy)2Irbppz]Cl (2) (where ppy = 2-phenylpyridine, ppz = 4,7-phenanthrolino-5,6:5,6-pyrazine, bppz = 2,3-di-2-pyridylpyrazine), were designed and synthesized. The structure of [(ppy)2Irppz]Cl was determined by single-crystal X-ray diffraction. Their photophysical properties were also studied. These complexes can coordinate with Cu2+, whereupon the photoluminescence (PL) of the complex is quenched and the color changes from orange-red to green. The resulting M-Cu ensemble (M: complexes 1 and 2) could be further utilized for colorimetric and emission "turn-on" bifunctional detection of CN-; in particular, complex 1-Cu2+ showed a high sensitivity toward CN-, with a limit of detection of 97 nM. Importantly, this kind of iridium(III) complex shows a unique recognition of cyanide ions over other anions, which makes it an eligible sensing probe for cyanide ions.
Hypoxia targeted bifunctional suicide gene expression enhances radiotherapy in vitro and in vivo
Sun, Xiaorong; Xing, Ligang; Deng, Xuelong; Hsiao, Hung Tsung; Manami, Akiko; Koutcher, Jason A.; Clifton Ling, C.; Li, Gloria C.
Purpose: To investigate whether hypoxia-targeted bifunctional suicide gene expression - cytosine deaminase (CD) and uracil phosphoribosyltransferase (UPRT) - with 5-FC treatment can enhance radiotherapy. Materials and methods: Stable transfectants of R3327-AT cells were established which express a triple fusion gene: CD, UPRT and monomeric DsRed (mDsRed), controlled by a hypoxia-inducible promoter. Hypoxia-induced expression/function of CDUPRTmDsRed was verified by western blot, flow cytometry, fluorescent microscopy, and cytotoxicity assay of 5-FU and 5-FC. Tumor-bearing mice were treated with 5-FC and local radiation. Tumor volume was monitored and compared with those treated with 5-FC or radiation alone. In addition, the CDUPRTmDsRed distribution in hypoxic regions of tumor sections was visualized with fluorescent microscopy. Results: Hypoxic induction of CDUPRTmDsRed protein correlated with increased sensitivity to 5-FC and 5-FU. Significant radiosensitization effects were detected after 5-FC treatments under hypoxic conditions. In the tumor xenografts, the distribution of CDUPRTmDsRed expression visualized with fluorescence microscopy was co-localized with cells staining positive for the hypoxia marker pimonidazole. Furthermore, administration of 5-FC to mice in combination with local irradiation resulted in significant tumor regression in comparison with 5-FC or radiation treatments alone. Conclusions: Our data suggest that the hypoxia-inducible CDUPRT/5-FC gene therapy strategy has the ability to specifically target hypoxic cancer cells and significantly improve tumor control in combination with radiotherapy.
Bifunctional Rhodamine Probes of Myosin Regulatory Light Chain Orientation in Relaxed Skeletal Muscle Fibers
Brack, Andrew S.; Brandmeier, Birgit D.; Ferguson, Roisean E.; Criddle, Susan; Dale, Robert E.; Irving, Malcolm
The orientation of the regulatory light chain (RLC) region of the myosin heads in relaxed skinned fibers from rabbit psoas muscle was investigated by polarized fluorescence from bifunctional rhodamine (BR) probes cross-linking pairs of cysteine residues introduced into the RLC. Pure 1:1 BR-RLC complexes were exchanged into single muscle fibers in EDTA rigor solution for 30 min at 30°C; ∼60% of the native RLC was removed and stoichiometrically replaced by BR-RLC, and >85% of the BR-RLC was located in the sarcomeric A-bands. The second- and fourth-rank order parameters of the orientation distributions of BR dipoles linking RLC cysteine pairs 100-108, 100-113, 108-113, and 104-115 were calculated from polarized fluorescence intensities, and used to determine the smoothest RLC orientation distribution - the maximum entropy distribution - consistent with the polarized fluorescence data. Maximum entropy distributions in relaxed muscle were relatively broad. At the peak of the distribution, the "lever" axis, linking Cys707 and Lys843 of the myosin heavy chain, was at 70-80° to the fiber axis, and the "hook" helix (Pro830-Lys843) was almost coplanar with the fiber and lever axes. The temperature and ionic strength of the relaxing solution had small but reproducible effects on the orientation of the RLC region.
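A hedged sketch of the maximum entropy step (the standard axial form; the authors' full treatment, including azimuthal dependence, may differ): given only the measured second- and fourth-rank order parameters $\langle P_2\rangle$ and $\langle P_4\rangle$ of the probe dipoles, the smoothest compatible orientation distribution is $$f(\theta)\propto\exp\!\big[\lambda_2 P_2(\cos\theta)+\lambda_4 P_4(\cos\theta)\big],$$ with the Lagrange multipliers $\lambda_2,\lambda_4$ fixed by requiring $$\langle P_n\rangle=\frac{\int_0^{\pi}P_n(\cos\theta)\,f(\theta)\sin\theta\,\mathrm{d}\theta}{\int_0^{\pi}f(\theta)\sin\theta\,\mathrm{d}\theta},\qquad n=2,4 .$$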
Toward Protein Structure In Situ: Comparison of Two Bifunctional Rhodamine Adducts of Troponin C
Julien, Olivier; Sun, Yin-Biao; Knowles, Andrea C.; Brandmeier, Birgit D.; Dale, Robert E.; Trentham, David R.; Corrie, John E. T.; Sykes, Brian D.; Irving, Malcolm
As part of a program to develop methods for determining protein structure in situ, sTnC was labeled with a bifunctional rhodamine (BR or BSR), cross-linking residues 56 and 63 of its C-helix. NMR spectroscopy of the N-terminal domain of BSR-labeled sTnC in complex with Ca2+ and the troponin I switch peptide (residues 115-131) showed that BSR labeling does not significantly affect the secondary structure of the protein or its dynamics in solution. BR-labeling was previously shown to have no effect on the solution structure of this complex. Isometric force generation in isolated demembranated fibers from rabbit psoas muscle into which BR- or BSR-labeled sTnC had been exchanged showed reduced Ca2+-sensitivity, and this effect was larger with the BSR label. The orientation of rhodamine dipoles with respect to the fiber axis was determined by polarized fluorescence. The mean orientations of the BR and BSR dipoles were almost identical in relaxed muscle, suggesting that both probes accurately report the orientation of the C-helix to which they are attached. The BSR dipole had smaller orientational dispersion, consistent with less flexible linkers between the rhodamine dipole and cysteine-reactive groups.
A self-cleaning Li-S battery enabled by a bifunctional redox mediator
Ren, Y. X.; Zhao, T. S.; Liu, M.; Zeng, Y. K.; Jiang, H. R.
The polysulfide shuttle effect and lithium dendrite growth in lithium-sulfur (Li-S) batteries can repeatedly breach the anodic solid electrolyte interphase (SEI) over cycling. As a result, irreversible short-chain sulfide side products (Li2Sx, x = 1, 2) keep depositing on the Li anode, leading to the active material loss, increasing the Li+ transport resistance, and thereby reducing the cycle life. In this work, indium iodide (InI3) is investigated as a bifunctional electrolyte additive for Li-S batteries to protect the Li anode and decompose the side products spontaneously. On the one hand, Indium (In) is electrodeposited onto the Li anode prior to Li plating during the initial charging process, forming a chemically and mechanically stable SEI to prevent the Li anode from reacting with soluble polysulfide species to form Li2Sx (x = 1, 2) side products. On the other hand, by adequately overcharging the battery, the triiodide/iodide redox mediator is capable of chemically transforming side products deposited on the Li anode and separator into soluble polysulfides, which can be recycled by the cathode. It is shown that the battery with the InI3 additive exhibits a prolonged cycle life, and is capable of retrieving its capacity by a facile overcharging process.
Bifunctional separator as a polysulfide mediator for highly stable Li-S batteries
Abbas, Syed Ali; Ibrahem, Mohammed Aziz; Hu, Lung-hao; Lin, Chia-Nan; Fang, Jason; Boopathi, Karunakara Moorthy; Wang, Pen-Cheng; Li, Lain-Jong; Chu, Chih Wei
The shuttling process involving lithium polysulfides is one of the major factors responsible for the degradation in capacity of lithium–sulfur batteries (LSBs). Herein, we demonstrate a novel and simple strategy—using a bifunctional separator, prepared by spraying poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) (PEDOT:PSS) on pristine separator—to obtain long-cycle LSBs. The negatively charged SO3– groups present in PSS act as an electrostatic shield for soluble lithium polysulfides through mutual coulombic repulsion, whereas PEDOT provides chemical interactions with insoluble polysulfides (Li2S, Li2S2). The dual shielding effect can provide an efficient protection from the shuttling phenomenon by confining lithium polysulfides to the cathode side of the battery. Moreover, coating with PEDOT:PSS transforms the surface of the separator from hydrophobic to hydrophilic, thereby improving the electrochemical performance. We observed an ultralow decay of 0.0364% per cycle when we ran the battery for 1000 cycles at 0.25 C—far superior to that of the pristine separator and one of the lowest recorded values reported at a low current density. We examined the versatility of our separator by preparing a flexible battery that functioned well under various stress conditions; it displayed flawless performance. Accordingly, this economical and simple strategy appears to be an ideal platform for commercialization of LSBs.
The Design of New HIV-IN Tethered Bifunctional Inhibitors using Multiple Microdomain Targeted Docking.
Ciubotaru, Mihai; Musat, Mihaela Georgiana; Surleac, Marius; Ionita, Elena; Petrescu, Andrei Jose; Abele, Edgars; Abele, Ramona
Currently used antiretroviral HIV therapy drugs exclusively target critical groups in the enzymes essential for the viral life cycle. Increased mutagenesis of their genes changes these viral enzymes, which once mutated can evade therapeutic targeting, an effect which confers drug resistance. To circumvent this, our review addresses a strategy to design and derive HIV-integrase (HIV-IN) inhibitors which simultaneously target two IN functional domains, rendering the enzyme inactive even if it accumulates many mutations. First we review the enzymatic role of IN, which is to insert the copied viral DNA into a chromosome of the host T lymphocyte, highlighting its main functional and structural features to be subjected to inhibitory action. From a functional and structural perspective we present all classes of HIV-IN inhibitors with their most representative candidates. For each chosen compound we also explain its mechanism of IN inhibition. We use the recently resolved cryo-EM IN tetramer intasome-DNA complex [1], onto which we dock various reference IN inhibitory chemical scaffolds so as to target adjacent functional IN domains. Pairing compounds with complementary activity that dock in the vicinity of an IN structural microdomain, we design bifunctional new drugs which may not only be more resilient to IN mutations but also be more potent inhibitors than their original counterparts. At the end of our review we propose synthesis pathways to link such paired compounds with enhanced synergistic IN inhibitory effects.
A bifunctional electrolyte additive for separator wetting and dendrite suppression in lithium metal batteries
Zheng, Hao; Xie, Yong; Xiang, Hongfa; Shi, Pengcheng; Liang, Xin; Xu, Wu
Reformulation of electrolyte systems and improvement of separator wettability are vital to the electrochemical performance of rechargeable lithium (Li) metal batteries (LMBs), especially for suppressing Li dendrites. In this work we report a bifunctional electrolyte additive that improves separator wettability and suppresses Li dendrite growth in LMBs. A triblock polyether (Pluronic P123) was introduced as an additive into a commonly used carbonate-based electrolyte. It was found that addition of 0.2-1% (by weight) P123 to the electrolyte could effectively enhance the wettability of the polyethylene separator. More importantly, the adsorption of P123 on the Li metal surface can act as an artificial solid electrolyte interphase layer and contribute to suppressing the growth of Li dendrites. A smooth and dendrite-free morphology can be achieved in the electrolyte with 0.2% P123. The Li||Li symmetric cells with the 0.2% P123-containing electrolyte exhibit relatively stable cycling at high current densities of 1.0 and 3.0 mA cm-2.
Highly stable acyclic bifunctional chelator for 64Cu PET imaging
Abada, S.; Lecointre, A.; Christine, C.; Charbonniere, L.; Dechamps-Olivier, I.; Platas-Iglesias, C.; Elhabiri, M.
Ligand L1, based on a pyridine scaffold functionalized by two bis(methane phosphonate)aminomethyl groups, was shown to display a very high affinity towards Cu(II) (log K_CuL = 22.7) and selectivity over Ni(II), Co(II), Zn(II) and Ga(III) (Δlog K_ML > 4), as shown by the values of the stability constants obtained from potentiometric measurements. Insights into the coordination mode of the ligand around the Cu(II) cation were obtained by UV-Vis absorption and EPR spectroscopies as well as density functional theory (DFT) calculations (B3LYP model) performed in aqueous solution. The results point to a pentacoordination pattern of the metal ion in the fully deprotonated [CuL1]6- species. Considering the beneficial thermodynamic parameters of this ligand, kinetic experiments were run to follow the formation of the copper(II) complexes, indicating a very rapid formation of the complex, appropriate for 64Cu complexation. As L1 represents a particularly interesting target within the frame of 64Cu PET imaging, a synthetic protocol was developed to introduce a labeling function on the pyridyl moiety of L1, thereby affording L2, a potential bifunctional chelator (BFC) for PET imaging.
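For orientation (standard definitions, not text from the paper): the stability constant is $K_{\mathrm{ML}}=[\mathrm{ML}]/([\mathrm{M}][\mathrm{L}])$ for the fully deprotonated ligand, so $\log K_{\mathrm{CuL}}=22.7$ together with $\Delta\log K_{\mathrm{ML}}>4$ means $K_{\mathrm{CuL}}/K_{\mathrm{ML}}>10^{4}$, i.e. the ligand binds Cu(II) at least ten-thousand-fold more strongly than the competing Ni(II), Co(II), Zn(II) and Ga(III) ions under the same conditions.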
Solution Structure of a Novel C2-Symmetrical Bifunctional Bicyclic Inhibitor Based on SFTI-1
Jaulent, Agnes M.; Brauer, Arnd B. E.; Matthews, Stephen J.; Leatherbarrow, Robin J.
A novel bifunctional bicyclic inhibitor has been created that combines features both from the Bowman-Birk inhibitor (BBI) proteins, which have two distinct inhibitory sites, and from sunflower trypsin inhibitor-1 (SFTI-1), which has a compact bicyclic structure. The inhibitor was designed by fusing together a pair of reactive loops based on a sequence derived from SFTI-1 to create a backbone-cyclized disulfide-bridged 16-mer peptide. This peptide has two symmetrically spaced trypsin binding sites. Its synthesis and biological activity have been reported in a previous communication [Jaulent and Leatherbarrow, 2004, PEDS 17, 681]. In the present study we have examined the three-dimensional structure of the molecule. We find that the new inhibitor, which has a symmetrical 8-mer half-cystine CTKSIPP'I' motif repeated through a C2 symmetry axis, also shows a complete symmetry in its three-dimensional structure. Each of the two loops adopts the expected canonical conformation common to all BBIs as well as SFTI-1. We also find that the inhibitor displays a strong and unique structural identity, with a notable lack of minor conformational isomers that characterise most reactive site loop mimics examined to date as well as SFTI-1. This suggests that the presence of the additional cyclic loop acts to restrict conformational mobility and that the deliberate introduction of cyclic symmetry may offer a general route to locking the conformation of β-hairpin structures.
Flexible control of cellular encapsulation, permeability, and release in a droplet-templated bifunctional copolymer scaffold.
Chen, Qiushui; Chen, Dong; Wu, Jing; Lin, Jin-Ming
Designing cell-compatible, biodegradable, and stimuli-responsive hydrogels is very important for biomedical applications in cellular delivery and micro-scale tissue engineering. Here, we report achieving flexible control of cellular microencapsulation, permeability, and release by rationally designing a diblock copolymer, alginate-conjugated poly(N-isopropylacrylamide) (Alg-co-PNiPAM). We use the microfluidic technique to fabricate the bifunctional copolymers into thousands of monodisperse droplet-templated hydrogel microparticles for controlled encapsulation and triggered release of mammalian cells. In particular, the grafted PNiPAM groups in the synthetic cell-laden microgels produce numerous nano-aggregates in the hydrogel networks at elevated temperature, thereby enhancing the permeability of the microparticle scaffolds. Importantly, the hydrogel scaffolds are readily fabricated via on-chip quick gelation by triggered release of Ca2+ from the Ca-EDTA complex; it is also quite exciting that very mild release of the microencapsulated cells is achieved via controlled degradation of the hydrogel scaffolds through a simple strategy of competitive affinity of Ca2+ from the Ca-alginate complex. This finding suggests that we are able to control cellular encapsulation and release through ion-induced gelation and degradation of the hydrogel scaffolds. Subsequently, we demonstrate a high viability of the microencapsulated cells in the microgel scaffolds.
Designing calcium phosphate-based bifunctional nanocapsules with bone-targeting properties
Khung, Yit-Lung; Bastari, Kelsen; Cho, Xing Ling; Yee, Wu Aik; Loo, Say Chye Joachim, E-mail: [email protected] [Nanyang Technological University, School of Materials Science and Engineering (Singapore)
Using sodium dodecyl sulphate micelles as template, hollow-cored calcium phosphate nanocapsules were produced. The surfaces of the nanocapsules were subsequently silanised with a polyethylene glycol (PEG)-based silane bearing an N-hydroxysuccinimide ester end group, which permits further attachment of bisphosphonates (BP). These nanocapsules were characterised using Field Emission Scanning Electron Microscopy (FESEM), Transmission Electron Microscopy, Fourier Transform Infra-Red Spectroscopy, X-ray diffraction, X-ray photoelectron spectroscopy and Dynamic Light Scattering. To further validate the bone-targeting potential, dentine discs were incubated with these functionalised nanocapsules. FESEM analysis showed that these surface-modified nanocapsules bind strongly to dentine surfaces compared to non-functionalised nanocapsules. We envisage that the respective components would give this construct a bifunctional attribute, whereby (1) the shell of the calcium phosphate nanocapsule would serve as a biocompatible coating aiding in gradual osteoconduction, while (2) surface BP moieties, acting as targeting ligands, would provide the bone-targeting potential of these calcium phosphate nanocapsules.
Khung, Yit-Lung; Bastari, Kelsen; Cho, Xing Ling; Yee, Wu Aik; Loo, Say Chye Joachim
Catalysis Science Initiative: Catalyst Design by Discovery Informatics
Delgass, William Nicholas [Purdue Univ., West Lafayette, IN (United States). Chemical Engineering; Abu-Omar, Mahdi [Purdue Univ., West Lafayette, IN (United States) Department of Chemistry; Caruthers, James [Purdue Univ., West Lafayette, IN (United States). Chemical Engineering; Ribeiro, Fabio [Purdue Univ., West Lafayette, IN (United States). Chemical Engineering; Thomson, Kendall [Purdue Univ., West Lafayette, IN (United States). Chemical Engineering; Schneider, William [Univ. of Notre Dame, IN (United States)
Catalysts selectively enhance the rates of chemical reactions toward desired products. Such reactions provide great benefit to society in major commercial sectors such as energy production, protecting the environment, and polymer products, and thereby contribute heavily to the country's gross national product. Our premise is that the level of fundamental understanding of catalytic events at the atomic and molecular scale has reached the point that more predictive methods can be developed to shorten the cycle time to new processes. The field of catalysis can be divided into two regimes: heterogeneous and homogeneous. For the heterogeneous catalysis regime, we have used the water-gas shift (WGS) reaction (CO + H2O → CO2 + H2) over supported metals as a test bed. Detailed analysis and strong coupling of theory with experiment have led to the following conclusions:
• The sequence of elementary steps goes through a COOH intermediate.
• The CO binding energy is a strong function of coverage of CO adsorbed on the surface in many systems.
• In the case of Au catalysts, the CO adsorption is generally too weak on surfaces with close atomic packing, but the enhanced binding at corner atoms (which are missing bonding partners) of cubo-octahedral nanoparticles increases the energy to a near optimal value and produces very active catalysts.
• Reaction on the metal alone cannot account for the experimental results. The reaction is dual functional, with water activation occurring at the metal-support interface.
It is clear from our work that the theory component is essential, not only for prediction of new systems, but also for reconciling data and testing hypotheses regarding potential descriptors. Particularly important is the finding that the interface between nano-sized metal particles and the oxides that are used to support them represents a new state of matter in the sense that the interfacial bonding perturbs the chemical state of both the metal atoms and the support.
Solvent extraction of uranium(VI), plutonium(VI) and americium(III) with HTTA/HPMBP using mono- and bi-functional neutral donors. Synergism and thermodynamics
Synergistic extraction of hexavalent uranium and plutonium as well as trivalent americium was studied in HNO3 with thenoyltrifluoroacetone (HTTA)/1-phenyl-3-methyl-4-benzoylpyrazol-5-one (HPMBP) in combination with neutral donors, viz. DPSO, TBP, TOPO (mono-functional) and DBDECMP, DHDECMP, CMPO (bi-functional), with a wide basicity range using benzene as diluent. A linear correlation was observed when the equilibrium constant log Ks for the organic phase synergistic reaction of both U(VI) and Pu(VI) with either of the chelating agents HTTA or HPMBP was plotted vs. the basicity (log Kh) of the donor (both mono- and bi-functional), indicating that bi-functional donors also behave as mono-functional. This was supported by the thermodynamic data (ΔG°, ΔH°, ΔS°) obtained for these systems. The organic phase adduct formation reactions were identified for the above systems from the thermodynamic data. In the Am(III)/HTTA system, the log Ks values of bi-functional donors were found to be very high and deviate from the linear plot (log Ks vs. log Kh) obtained for mono-functional donors, indicating that they function as bi-functional for the Am(III)/HTTA system studied. This was supported by the large positive ΔS° values obtained for this system. (author)
Direct catalytic asymmetric aldol-Tishchenko reaction.
Gnanadesikan, Vijay; Horiuchi, Yoshihiro; Ohshima, Takashi; Shibasaki, Masakatsu
A direct catalytic asymmetric aldol reaction of propionate equivalent was achieved via the aldol-Tishchenko reaction. Coupling an irreversible Tishchenko reaction to a reversible aldol reaction overcame the retro-aldol reaction problem and thereby afforded the products in high enantio and diastereoselectivity using 10 mol % of the asymmetric catalyst. A variety of ketones and aldehydes, including propyl and butyl ketones, were coupled efficiently, yielding the corresponding aldol-Tishchenko products in up to 96% yield and 95% ee. Diastereoselectivity was generally below the detection limit of 1H NMR (>98:2). Preliminary studies performed to clarify the mechanism revealed that the aldol products were racemic with no diastereoselectivity. On the other hand, the Tishchenko products were obtained in a highly enantiocontrolled manner.
Brownian Motion of Asymmetric Boomerang Colloidal Particles
Chakrabarty, Ayan; Konya, Andrew; Wang, Feng; Selinger, Jonathan; Sun, Kai; Wei, Qi-Huo
We used video microscopy and single particle tracking to study the diffusion and local behaviors of asymmetric boomerang particles in a quasi-two dimensional geometry. The motion is biased towards the center of hydrodynamic stress (CoH), and the mean square displacements of the particles are linear at short and long times with different diffusion coefficients, while in the crossover regime the motion is sub-diffusive. Our model based on Langevin theory shows that these behaviors arise from the non-coincidence of the CoH with the center of the body. Since asymmetric boomerangs represent a class of rigid bodies of more general shape, our findings are generic and hold for any non-skewed particle in two dimensions. Both experimental and theoretical results will be discussed.
Dynamics of asymmetric kinetic Ising systems revisited
Huang, Haiping; Kabashima, Yoshiyuki
The dynamics of an asymmetric kinetic Ising model is studied. Two schemes for improving the existing mean-field description are proposed. In the first scheme, we derive the formulas for instantaneous magnetization, equal-time correlation, and time-delayed correlation, considering the correlation between different local fields. To derive the time-delayed correlation, we emphasize that the small-correlation assumption adopted in previous work (Mézard and Sakellariou, 2011 J. Stat. Mech. L07001) is in fact not required. To confirm the prediction efficiency of our method, we perform extensive simulations on single instances with either temporally constant external driving fields or sinusoidal external fields. In the second scheme, we develop an improved mean-field theory for instantaneous magnetization prediction utilizing the notion of the cavity system in conjunction with a perturbative expansion approach. Its efficiency is numerically confirmed by comparison with the existing mean-field theory when partially asymmetric couplings are present. (paper)
Bianisotropic metamaterials based on twisted asymmetric crosses
Reyes-Avendaño, J A; Sampedro, M P; Juárez-Ruiz, E; Pérez-Rodríguez, F
The effective bianisotropic response of 3D periodic metal-dielectric structures, composed of crosses with asymmetrically-cut wires, is investigated within a general homogenization theory using the Fourier formalism and the form-factor division approach. It is found that the frequency dependence of the effective permittivity for a system of periodically-repeated layers of metal crosses exhibits two strong resonances, whose separation is due to the cross asymmetry. Besides, bianisotropic metamaterials, having a base of four twisted asymmetric crosses, are proposed. The designed metamaterials possess negative refractive index at frequencies determined by the cross asymmetry, the gap between the arms of adjacent crosses lying on the same plane, and the type of Bravais lattice. (papers)
Improved DFIG Capability during Asymmetrical Grid Faults
Zhou, Dao; Blaabjerg, Frede
In the wind power application, different asymmetrical types of the grid fault can be categorized after the Y/d transformer, and the positive and negative components of a single-phase fault, phase-to-phase fault, and two-phase fault can be summarized. Due to the newly introduced negative and even...... the natural component of the Doubly-Fed Induction Generator (DFIG) stator flux during the fault period, their effects on the rotor voltage can be investigated. It is concluded that the phase-to-phase fault has the worst scenario due to its highest introduction of the negative stator flux. Afterwards......, the capability of a 2 MW DFIG to ride through asymmetrical grid faults can be estimated at the existing design of the power electronics converter. Finally, a control scheme aimed to improve the DFIG capability is proposed and the simulation results validate its feasibility....
Asymmetric volatility connectedness on the forex market
Baruník, Jozef; Kočenda, Evžen; Vácha, Lukáš
Vol. 77, No. 1 (2017), pp. 39-56. ISSN 0261-5606. R&D Projects: GA ČR (CZ) GA16-14179S. Institutional support: RVO:67985556. Keywords: volatility; connectedness; asymmetric effects. Subject RIV: AH - Economics. OECD field: Finance. Impact factor: 1.853, year: 2016. http://library.utia.cas.cz/separaty/2017/E/barunik-0478477.pdf
Magnetic properties of strongly asymmetric nuclear matter
Kutschera, M.; Wojcik, W.
We investigate stability of neutron matter containing a small proton admixture with respect to spin fluctuations. We establish conditions under which strongly asymmetric nuclear matter could acquire a permanent magnetization. It is shown that if the protons are localized, the system becomes unstable to spin fluctuations for arbitrarily weak proton-neutron spin interactions. For non-localized protons there exists a threshold value of the spin interaction above which the system can develop a spontaneous polarization. 12 refs., 2 figs. (author)
Isospin dependent properties of asymmetric nuclear matter
Chowdhury, P. Roy; Basu, D. N.; Samanta, C.
The density dependence of nuclear symmetry energy is determined from a systematic study of the isospin dependent bulk properties of asymmetric nuclear matter using the isoscalar and the isovector components of the density dependent M3Y interaction. The incompressibility $K_\infty$ for the symmetric nuclear matter, the isospin dependent part $K_{asy}$ of the isobaric incompressibility and the slope $L$ are all in excellent agreement with the constraints recently extracted from measured isotopic de...
Asymmetric flow events in a VVER-1000
Horak, W.C.; Kennett, R.J.; Shier, W.; Guppy, J.G.
This paper describes the simulation of asymmetric loss of flow events in Russian designed VVER-1000 reactors using the RETRAN-02 Mod4 computer code. VVER-1000 reactors have significant differences from United States pressurized water reactors including multi-level emergency response systems and plant operation at reduced power levels with one or more main circulation pumps inoperable. The results of these simulations are compared to similar analyses done by the designers for the Rovno plant
Two particle states in an asymmetric box
Li, Xin; Liu, Chuan
The exact two-particle energy eigenstates in an asymmetric rectangular box with periodic boundary conditions in all three directions are studied. Their relation with the elastic scattering phases of the two particles in the continuum is obtained. These results can be viewed as a generalization of the corresponding formulae in a cubic box obtained by Lüscher before. In particular, the s-wave scattering length is related to the energy shift in the finite box. Possible applications of these f...
Symmetric vs. asymmetric punishment regimes for bribery
Engel, Christoph; Goerg, Sebastian J.; Yu, Gaoneng
In major legal orders such as UK, the U.S., Germany, and France, bribers and recipients face equally severe criminal sanctions. In contrast, countries like China, Russia, and Japan treat the briber more mildly. Given these differences between symmetric and asymmetric punishment regimes for bribery, one may wonder which punishment strategy is more effective in curbing corruption. For this purpose, we designed and ran a lab experiment in Bonn (Germany) and Shanghai (China) with exactly the same...
Highly Enantioselective Construction of Tertiary Thioethers and Alcohols via Phosphine-Catalyzed Asymmetric γ-Addition reactions of 5H-Thiazol-4-ones and 5H-Oxazol-4-ones: Scope and Mechanistic Understandings
Wang, Tianli
Phosphine-catalyzed highly enantioselective γ-additions of 5H-thiazol-4-ones and 5H-oxazol-4-ones to allenoates have been developed for the first time. With the employment of amino-acid derived bifunctional phosphines, a wide range of substituted 5H-thiazol-4-one and 5H-oxazol-4-one derivatives bearing heteroatom (S or O)-containing tertiary chiral centers were constructed in high yields and excellent enantioselectivities. The reported method provides facile access to enantioenriched tertiary thioethers/alcohols. The mechanism of the γ-addition reaction was investigated by performing DFT calculations, and the hydrogen bonding interactions between the Brønsted acid moiety of the phosphine catalysts and the "C=O" unit of the donor molecules were shown to be crucial in asymmetric induction.
Wang, Tianli; Yu, Zhaoyuan; Hoon, Ding Long; Huang, Kuo-Wei; Lan, Yu; Lu, Yixin
Predicting tensorial electrophoretic effects in asymmetric colloids
Mowitz, Aaron J.; Witten, T. A.
We formulate a numerical method for predicting the tensorial linear response of a rigid, asymmetrically charged body to an applied electric field. This prediction requires calculating the response of the fluid to the Stokes drag forces on the moving body and on the countercharges near its surface. To determine the fluid's motion, we represent both the body and the countercharges using many point sources of drag known as Stokeslets. Finding the correct flow field amounts to finding the set of drag forces on the Stokeslets that is consistent with the relative velocities experienced by each Stokeslet. The method rigorously satisfies the condition that the object moves with no transfer of momentum to the fluid. We demonstrate that a sphere represented by 1999 well-separated Stokeslets on its surface produces flow and drag force like a solid sphere to 1% accuracy. We show that a uniformly charged sphere with 3998 body and countercharge Stokeslets obeys the Smoluchowski prediction [F. Morrison, J. Colloid Interface Sci. 34, 210 (1970), 10.1016/0021-9797(70)90171-2] for electrophoretic mobility when the countercharges lie close to the sphere. Spheres with dipolar and quadrupolar charge distributions rotate and translate as predicted analytically to 4% accuracy or better. We describe how the method can treat general asymmetric shapes and charge distributions. This method offers promise as a way to characterize and manipulate asymmetrically charged colloid-scale objects from biology (e.g., viruses) and technology (e.g., self-assembled clusters).
Asymmetric threat data mining and knowledge discovery
Gilmore, John F.; Pagels, Michael A.; Palk, Justin
Asymmetric threats differ from the conventional force-on-force military encounters that the Defense Department has historically been trained to engage. Terrorism by its nature is now an operational activity that is neither easily detected nor countered, as its very existence depends on small covert attacks exploiting the element of surprise. But terrorism does have defined forms, motivations, tactics and organizational structure. Exploiting a terrorism taxonomy provides the opportunity to discover and assess knowledge of terrorist operations. This paper describes the Asymmetric Threat Terrorist Assessment, Countering, and Knowledge (ATTACK) system. ATTACK has been developed to (a) data mine open source intelligence (OSINT) information from web-based newspaper sources, video news web casts, and actual terrorist web sites, (b) evaluate this information against a terrorism taxonomy, (c) exploit country/region specific social, economic, political, and religious knowledge, and (d) discover and predict potential terrorist activities and association links. Details of the asymmetric threat structure and the ATTACK system architecture are presented with results of an actual terrorist data mining and knowledge discovery test case shown.
Diagnostic implications of asymmetrical mammographic patterns
Asenjo, M.; Ania, B.J.
To analyze the effect of asymmetrical mammographic patterns on the diagnosis of breast cancer. In a series of 6,476 patients referred to a Breast Imaging Diagnosis Unit, we excluded males, women with previous breast surgery, and cases in which mammography was not performed, which left 5,203 women included. Each breast was classified according to one of four patterns of mammographic parenchymal density. Asymmetry was considered to exist when a patient's breasts had different patterns. Breast cancer was confirmed histologically in 282 (5.4%) women. The mammographic pattern was asymmetrical in 8% of the women with cancer and in 2% of the women without cancer (p<0.001). Fine-needle aspiration biopsy was performed in 78% and 96% (p=0.04), respectively, of the women with and without mammographic asymmetry who had neoplasms, and in 33% and 22% (p=0.02), respectively, of the women with and without mammographic asymmetry who did not have neoplasms. Asymmetrical mammographic pattern was four times more frequent in the women with breast cancer. This asymmetry decreased the frequency of needle biopsy in women with cancer, but increased the frequency of needle biopsy in women without cancer. (Author) 11 refs
Hadron scattering in an asymmetric box
Li Xin; Chen Ying; Meng Guozhan; Feng Xu; Gong Ming; He Song; Li Gang; Liu Chuan; Liu Yubin; Ma Jianping; Meng Xiangfei; Shen Yan; Zhang Jianbo
We propose to study hadron-hadron scattering using lattice QCD in an asymmetric box which allows one to access more non-degenerate low-momentum modes for a given volume. The conventional Lüscher's formula applicable in a symmetric box is modified accordingly. To illustrate the feasibility of this approach, pion-pion elastic scattering phase shifts in the I = 2, J = 0 channel are calculated within quenched approximation using improved gauge and Wilson fermion actions on anisotropic lattices in an asymmetric box. After the chiral and continuum extrapolation, we find that our quenched results for the scattering phase shifts in this channel are consistent with the experimental data when the three-momentum of the pion is below 300 MeV. Agreement is also found when compared with previous theoretical results from lattice and other means. Moreover, with the usage of asymmetric volume, we are able to compute the scattering phases in the low-momentum range (pion three-momentum less than about 350 MeV in the center of mass frame) for over a dozen values of the pion three-momenta, much more than using the conventional symmetric box with comparable volume.
Simulation of Phenix EOL Asymmetric Test
Ha, Kwi Seok; Lee, Kwi Lim; Choi, Chi Woong; Kang, Seok Hun; Chang, Won Pyo; Jeong, Hae Yong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
The asymmetric test of End-Of-Life (EOL) tests on the Phenix plant was used for the evaluation of the MARS-LMR in the Generation IV frame as a part of the code validation. The purpose of the test is to evaluate the ability of the system code to describe asymmetric situations and to identify important phenomena during asymmetrical transient such as a three dimensional effect, buoyancy influence, and thermal stratification in the hot and cold pools. 3-dimensional sodium coolant mixing in the pools has different characteristics from the one dimensional full instantaneous mixing. The velocities and temperatures at the core outlet level differ at each sub-assembly and the temperature in the center of the hot pool may be high because the driver fuels are located at the center region. The temperatures in the hot pool are not the same in the radial and axial locations due to the buoyancy effect. The temperatures in the cold pool also differ along with the elevations and azimuthal directions due to the outlet location of IHX and the thermal stratification
Flatfish: an asymmetric perspective on metamorphosis.
Schreiber, Alexander M
The most asymmetrically shaped and behaviorally lateralized of all the vertebrates, the flatfishes are an endless source of fascination to all fortunate enough to study them. Although all vertebrates undergo left-right asymmetric internal organ placement during embryogenesis, flatfish are unusual in that they experience an additional period of postembryonic asymmetric remodeling during metamorphosis, and thus deviate from a bilaterally symmetrical body plan more than other vertebrates. As with amphibian metamorphosis, all the developmental programs of flatfish metamorphosis are ultimately under the control of thyroid hormone. At least one gene pathway involved in embryonic organ lateralization (nodal-lefty-pitx2) is re-expressed in the larval stage during flatfish metamorphosis. Aspects of modern flatfish ontogeny, such as the gradual translocation of one eye to the opposite side of the head and the appearance of key neurocranial elements during metamorphosis, seem to elegantly recapitulate flatfish phylogeny. This chapter highlights the current state of knowledge of the developmental biology of flatfish metamorphosis with emphases on the genetic, morphological, behavioral, and evolutionary origins of flatfish asymmetry. Copyright © 2013 Elsevier Inc. All rights reserved.
Coupling chemical and biological catalysis: a flexible paradigm for producing biobased chemicals.
Schwartz, Thomas J; Shanks, Brent H; Dumesic, James A
Advances in metabolic engineering have allowed for the development of new biological catalysts capable of selectively de-functionalizing biomass to yield platform molecules that can be upgraded to biobased chemicals using high efficiency continuous processing allowed by heterogeneous chemical catalysis. Coupling these disciplines overcomes the difficulties of selectively activating COH bonds by heterogeneous chemical catalysis and producing petroleum analogues by biological catalysis. We show that carboxylic acids, pyrones, and alcohols are highly flexible platforms that can be used to produce biobased chemicals by this approach. More generally, we suggest that molecules with three distinct functionalities may represent a practical upper limit on the extent of functionality present in the platform molecules that serve as the bridge between biological and chemical catalysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
Core–shell nanoparticles: synthesis and applications in catalysis and electrocatalysis
Core–shell nanoparticles (CSNs) are a class of nanostructured materials that have recently received increased attention owing to their interesting properties and broad range of applications in catalysis, biology, materials chemistry and sensors. By rationally tuning the cores as ...
Ir/Sn dual-reagent catalysis towards highly selective alkylation of ...
Keywords: organometallic; bimetallic; catalysis; alkylation; benzyl alcohol; iridium; tin.
Support for U.S. Participants at the 16th International Congress on Catalysis
Wachs, Israel E. [Lehigh Univ., Bethlehem, PA (United States)
The enclosed report highlights the travel grant awarded to offset the cost of foreign travel of several faculty and students to attend the 16th International Congress on Catalysis (ICC) held in Beijing, China, July 3-8, 2016.
Space-time heterogeneity of hand, foot and mouth disease in children and its potential driving factors in Henan, China
Xiangxue Zhang, Chengdong Xu & Gexin Xiao
Hand, foot and mouth disease (HFMD) has become a substantial threat recently. However, few studies have quantified the spatiotemporal heterogeneity of HFMD or detected the spatiotemporal interactive effects of potential driving factors on this disease.
Using GeoDetector and Bayesian space-time hierarchy model, we characterized the epidemiology of HFMD in Henan, one of the largest population provinces in China, from 2012 to 2013, and quantified the impacts of potential driving factors.
Notably, 21.43% and 24.60% of counties were identified as hot and cold spots, respectively. Spatially, the hotspots were mainly clustered in regions where the economic level was high. Temporally, the highest incidence period of HFMD was discovered to be in late spring and early summer. The impact of meteorological and socio-economic factors on the disease is significant: this study found that a 1 °C rise in temperature was related to an increase of 4.09% in the HFMD incidence, a 1% increment in relative humidity was associated with a 1.77% increase of the disease, and a 1% increment in the ratio of urban to rural population was associated with a 0.16% increase of the disease.
Meteorological and socio-economic factors presented a significant association with HFMD incidence; high risk mainly appeared in large cities and their adjacent regions in the hot and humid season. These findings will be helpful for HFMD risk control and the implementation of disease-prevention policies.
Hand, foot and mouth disease (HFMD) is a worldwide infectious disease [1]. It is mainly caused by Coxsackie virus A16 (CV-A16) or enterovirus 71 (EV71) [2,3,4]. This disease is characterized by flu-like clinical symptoms including fever, mouth ulcers, poor appetite, vomiting, diarrhea, and rashes on the hands, feet, and buttocks [2, 4]. It is believed to be transmitted mainly through direct contact with contaminated discharges, contaminated objects, and fluid from blisters or stool from infected persons, with an average incubation period of three to 7 days [5, 6]. This disease continues to be a serious public health threat, especially to children, as there is no definitive treatment for HFMD, currently.
During the past decades, HFMD outbreak has occurred in numerous areas, especially in the Asia-Pacific region, such as Thailand [7], Taiwan [8], Singapore [9], Hong Kong [10], Vietnam [11], Malaysia [12], Japan [13] and parts of mainland China [14, 15]. In 2007 and early 2008, mainland China experienced several serious outbreaks of HFMD and established a national enhanced surveillance system to respond those outbreaks [15]. In May 2008, HFMD was defined as a Class C infectious disease that requires reporting of every case [16]. A considerable threat still exists, because HFMD especially affects areas of high economic level, possesses distinctive seasonality, and can result in death in severe cases.
Some studies have determined that HFMD risk has temporal variations. It is well accepted that meteorological factors play an important role in the transmission of HFMD. For example, in Finland and Japan, a single season peak of HFMD has been observed during the summer and early autumn months, respectively [13, 17]. Meanwhile, an annual peak in the warmer months (May to July) and a smaller winter peak (October to December) have been detected in subtropical and tropical regions, including Hong Kong, Malaysia, and parts of mainland China [6, 10, 12, 18, 19]. Furthermore, the annual peak of incidence seasonality has varied from April in the southern area to July in the northern area of China [15]. In recent years, there has been increased interest in exploring the impact of meteorological factors on HFMD, such as temperature [20,21,22,23], relative humidity [20, 23], precipitation [15, 20, 22], wind speed [15, 20], hours of sunlight [15], and air pressure [15, 21].
Meanwhile, the risk of HFMD also presents obvious spatial heterogeneity. Some studies indicated that it was closely correlated with socio-economic variables: demographics, local geographic environment, socio-economic status, health conditions, and infrastructure. For example, Yan et al. showed that HFMD incidence was higher in urban areas compared with rural areas and demonstrated that the distance to the nearest freeway and per capita GDP are risk factors associated with HFMD incidence [24]. Hu et al. indicated that the population density of children can explain 56% of the variance in the cumulative monthly HFMD incidences in 2912 counties in China [23]. Likewise, rural-to-urban migrant-worker parents were found to be a major risk factor associated with HFMD in children [25], which implies that socio-economic factors also play an essential role in the transmission and spread of HFMD.
To our knowledge, few studies have quantified spatiotemporal heterogeneity of HFMD and detected spatiotemporal interactive effect of potential driving factors on this disease in the study region. The aims of this study are to 1) reveal the county-level spatiotemporal heterogeneity of HFMD risk, 2) detect the hot/cold spots, and 3) quantify the relationships between meteorological, socio-economic factors and HFMD incidence.
Henan, as one of the provinces with the largest population and the greatest population mobility, is located in the latitude 31.23° to 36.22°N and longitude 110.21° to 116.39°E and has a population close to 95.32 million within an area of 167,000 km2 (Fig. 1). It includes millions of immigrants and migrants, mainly to other provinces in China. Henan has a warm and humid monsoon climate, with four distinctive seasons: a dry and windy spring, hot and humid summer, warm and sunny autumn, cold and dry winter. The average annual temperature and precipitation in the province are 15 °C and 672 mm, respectively.
Geographic location of the Henan province in China, and cumulative monthly incidence of HFMD in children from 2012 to 2013. (The administrative map in the figure was obtained from the Resource and Environment Data Cloud Platform (http://www.resdc.cn))
Data on HFMD cases from January 1, 2012 to December 31, 2013 were obtained from the Chinese Centre for Disease Control and Prevention for use in this study. Monthly meteorological data for the same period was obtained from the China Meteorological Data Sharing Service System and includes average temperature, relative humidity, wind speed, precipitation, hours of sunlight, and air pressure (Fig. 2). The county level socio-economic variables from 2012 to 2013 were acquired from the governmental economic statistical yearbooks of Henan province, including the ratio of urban to rural population, population density of children under five, per capita Gross Domestic Product (GDP), per capita income of farmers, high school enrollment rate, and industrial structures (Additional file 1: Table S1). The administrative map used in the study was obtained from the Resource and Environment Data Cloud Platform (http://www.resdc.cn).
Temporal evolution in potential meteorological factors from 2012 to 2013
GeoDetector
In this study, the GeoDetector q statistic [26,27,28] was used to quantify the spatial and temporal stratified heterogeneity of HFMD risk and to assess their interactive effect.
The GeoDetector q value can be expressed as:
$$ q = 1 - \frac{1}{N\sigma^{2}} \sum_{h=1}^{L} N_{h}\sigma_{h}^{2} $$
where q denotes the level of spatial, temporal, or spatiotemporal stratified heterogeneity of the target variable, e.g., HFMD risk. Its value ranges from 0 to 1: a value approaching 1 indicates that the distribution of the variable has strong heterogeneity, whereas a value approaching 0 indicates that the variable is randomly distributed. N is the number of counties. σ² and σh² are the variance over all the statistical units in the study area and the variance within stratum h (h = 1, 2, …, L), respectively.
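For illustration, the q statistic can be computed directly from county-level data. The following minimal Python sketch assumes hypothetical arrays values (the target variable, e.g., HFMD incidence per county) and strata (the stratum label of each county); neither the function name nor the example numbers come from the original study.

import numpy as np

def geodetector_q(values, strata):
    # q = 1 - sum_h(N_h * sigma_h^2) / (N * sigma^2), using population variances (ddof = 0)
    values = np.asarray(values, dtype=float)
    strata = np.asarray(strata)
    total = values.size * values.var()            # N * sigma^2
    within = sum(values[strata == h].size * values[strata == h].var()
                 for h in np.unique(strata))      # sum of N_h * sigma_h^2
    return 1.0 - within / total

# Toy example: three well-separated strata give a q close to 1.
print(geodetector_q([1.0, 1.2, 5.0, 5.1, 9.0, 9.3], [0, 0, 1, 1, 2, 2]))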
Bayesian space-time hierarchy model
Bayesian space-time hierarchy model (BSTHM) was used to analyze the temporal and spatial variations of disease risk. This model can explore the spatial-temporal heterogeneity of disease risk, quantify the impacts of potential driving factors, and highlight the changes in local or common trends.
The Poisson regression function with a log link was used to model the data. Supposing that, in area i (i = 1, 2, …, 126) and month t (t = 1, 2, …, 24), yit and nit represent the number of cases and the population at risk, respectively, the disease cases can be described as follows:
$$ y_{it} \sim \mathrm{Poisson}\left(n_{it}u_{it}\right) $$
$$ \log\left(u_{it}\right) = \alpha + s_{i} + \left(b_{0}t^{\ast} + v_{t}\right) + b_{1i}t^{\ast} + \sum_{n=1}^{N}\beta_{n}x_{nit} + \varepsilon_{it} $$
where uit denotes the potential risk of HFMD in region i and month t. The term α is the overall log disease risk over the study period in the study region. The spatial term si indicates the disease risk in county i. The overall time trend is expressed by b0t* + vt, which is composed of a linear trend b0t* with additional Gaussian noise vt. The time span relative to the midpoint tmid of the study period is represented by t* = t − tmid. The term b1it* allows each county to have its own trend: b0 represents the overall rate of change of disease risk, while b1i measures the departure from b0 for each county. For example, if b1i is greater than 0, the local variation is stronger than the overall trend; if b1i is less than 0, it is weaker. The regression coefficients of the risk factors are βn, and xnit is the n-th risk factor for area i and month t. The Gaussian noise random variable is represented by εit [29].
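To make the structure of this likelihood concrete, the following minimal Python sketch forward-simulates counts from the model. All inputs (numbers of counties and months, covariate values, and parameter values) are synthetic placeholders rather than the Henan data, and the sketch only illustrates the linear predictor and the Poisson sampling step, not the Bayesian estimation carried out in WinBUGS.

import numpy as np

rng = np.random.default_rng(0)
I, T, N = 126, 24, 3                                # counties, months, covariates
t_star = np.arange(1, T + 1) - (T + 1) / 2.0        # t* = t - t_mid

alpha = -7.0                                        # overall log risk
s = rng.normal(0.0, 0.3, I)                         # spatial effects s_i
b0, v = 0.01, rng.normal(0.0, 0.05, T)              # common trend b0*t* + v_t
b1 = rng.normal(0.0, 0.005, I)                      # county-specific departures b_1i
beta = np.array([0.04, 0.02, 0.01])                 # covariate coefficients beta_n
x = rng.normal(size=(N, I, T))                      # standardized covariates x_nit
eps = rng.normal(0.0, 0.05, (I, T))                 # Gaussian noise eps_it
n_pop = np.full((I, T), 5.0e4)                      # children at risk n_it

log_u = (alpha + s[:, None] + b0 * t_star + v
         + b1[:, None] * t_star
         + np.tensordot(beta, x, axes=1) + eps)     # log(u_it)
y = rng.poisson(n_pop * np.exp(log_u))              # y_it ~ Poisson(n_it * u_it)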
The Besag, York, and Mollie (BYM) spatial model was introduced to determine the prior distribution of the parameters si and b1i [30]. To enhance the random effect of spatial structure in BYM, we used the conditional autoregressive (CAR) prior with a spatial adjacency matrix W. The CAR prior on the spatial random effect implied that adjacent counties tend to have similar disease risks.
The temporal noise vt is modeled as vt ∼ N(0, σv²) and the Gaussian noise εit as εit ∼ N(0, σε²). As suggested by Gelman [31], the prior distribution of the standard deviations (e.g., σv, σε) of all the random variables in the model is a strictly positive half-Gaussian distribution N(0, 10) restricted to (0, +∞).
According to the posterior distribution of all parameters, the spatiotemporal heterogeneity and variation of HFMD risk were quantified. Then, the following criteria were used to classify the study area into hot, cold, and other spots [32]. If the posterior probability p(exp(si) > 1 | data) > 0.90, a county was defined as a hotspot. Conversely, a county was defined as a coldspot if the posterior probability p(exp(si) > 1 | data) < 0.10. The other areas were regarded as neither hot nor cold spots. Here, exp(si) represents the average disease risk (over time) in county i relative to α [33].
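This classification rule can be applied directly to MCMC output. The sketch below assumes a hypothetical array s_samples holding posterior draws of the spatial effects si with shape (number of draws, number of counties), and uses the equivalence exp(si) > 1 if and only if si > 0.

import numpy as np

def classify_spots(s_samples, hot=0.90, cold=0.10):
    # p_i = Pr(exp(s_i) > 1 | data) = Pr(s_i > 0 | data), estimated from the draws
    p = (s_samples > 0.0).mean(axis=0)
    labels = np.where(p > hot, "hot", np.where(p < cold, "cold", "neither"))
    return p, labels

# Toy example with simulated draws standing in for real WinBUGS output.
s_samples = np.random.default_rng(1).normal(0.0, 0.3, size=(5000, 126))
exceedance, spot = classify_spots(s_samples)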
All parameters were calculated by WinBUGS [34], statistical software that was designed specifically for Bayesian calculation. In addition, posterior distributions of all parameters in the model were obtained through Markov chain Monte Carlo (MCMC) simulations.
Spatial lag model
The spatial term si in the BSTHM was largely affected by long-term stable factors rather than meteorological factors, such as the local geographic environment, socio-economic conditions, topography, and medical equipment. In this study, the spatial lag model (SLM) was used to quantify the relationships between the spatial term si and socio-economic factors. It was modeled with the following formula:
$$ \mathbf{s} = \rho\mathbf{W}\mathbf{s} + \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varphi} $$
where ρ is the spatial autoregressive coefficient of the lagged spatial term. Its value ranges from 0 to 1; the closer it is to 1, the more similar the dependent variable is in adjacent areas. The spatial adjacency matrix W captures the spatial dependence of the response variable itself. The spatial regression coefficients of the explanatory variables are represented by β. The explanatory variables are represented by X, which includes all selected socio-economic factors. The error term is denoted by φ.
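As a reading aid, the SLM above has the reduced form s = (I − ρW)⁻¹(Xβ + φ). The Python sketch below evaluates this reduced form with a toy row-standardized adjacency matrix and synthetic covariates (not the Henan county data); it only illustrates how ρ and W propagate effects between neighbouring counties, not the maximum-likelihood fitting used in practice.

import numpy as np

rng = np.random.default_rng(2)
n, k = 126, 3                                        # counties, socio-economic covariates
A = (rng.random((n, n)) < 0.05).astype(float)        # toy adjacency structure
A = np.triu(A, 1); A = A + A.T                       # symmetric, zero diagonal
W = A / np.clip(A.sum(axis=1, keepdims=True), 1.0, None)   # row-standardized weights

rho = 0.4                                            # spatial autoregressive coefficient
beta = np.array([0.2, -0.1, 0.05])                   # illustrative covariate effects
X = rng.normal(size=(n, k))                          # socio-economic covariates
phi = rng.normal(0.0, 0.1, n)                        # error term
s = np.linalg.solve(np.eye(n) - rho * W, X @ beta + phi)   # spatial term s_i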
From January 2012 to December 2013, there were a total of approximately 120 thousand cases of HFMD in children in all 126 counties of Henan province. The annual incidences in 2012 and 2013 were 88.04/10^4 and 78.92/10^4, respectively.
Figure 3 shows the overall temporal trend of HFMD risk from 2012 to 2013, which indicates that the temporal relative risk differed significantly between months, as the GeoDetector q value was 0.35 (p < 0.01); that is, there was an obvious seasonal variation in the risk of HFMD. The period of highest risk appeared in late spring and early summer (April to June), with an average monthly incidence of 14.45/10^4, and the lowest disease risk occurred in the fall season (August to October), with an average monthly incidence of 3.29/10^4.
The posterior means of the temporal relative risks (exp(b0t* + vt)) of HFMD in children from 2012 to 2013
The relative risk (RR) of HFMD varied geographically, indicating that there was also obvious spatial heterogeneity, as the GeoDetector q value was 0.31 (p < 0.01). Figure 4 shows the spatial RR of HFMD by county from 2012 to 2013. The high risk mainly appeared in regions where the level of economic development and urbanization was high, including Zhengzhou, Jiyuan, Sanmenxia, Jiaozuo, Luoyang, Xuchang, and Hebi, corresponding to the areas where the per capita GDP was high [35].
The posterior means of the spatial relative risks (RRs) (exp(si)) of HFMD in children for each county, Henan province. (The administrative map in the figure was obtained from the Resource and Environment Data Cloud Platform (http://www.resdc.cn))
Additionally, the spatiotemporal interaction effect on HFMD relative risk was also calculated by GeoDetector; the q value was 0.67 (p < 0.01), which indicated significant spatiotemporal heterogeneity.
In the study region, among the 126 counties, 27 (21.43%) and 31 (24.60%) counties were identified as hot and cold spots, respectively. Another 68 (53.97%) counties were identified as neither hot nor cold spots. Figure 5 shows that hotspot areas were mainly distributed in economically developed areas.
Map of the hot spots and cold spots of HFMD in each county of Henan Province. (The administrative map in the figure was obtained from the Resource and Environment Data Cloud Platform (http://www.resdc.cn))
To quantify the relative importance of the stable component (si + b0t* + vt) compared to the terms allowing for space-time interaction (b1i + εit) in explaining the observed space-time variation [33], we computed the posterior median of the variance partition coefficient (VPC). It is the ratio of the empirical variance of (si + b0t* + vt) to the sum of the empirical variances of (si + b0t* + vt) and (b1i + εit), multiplied by 100%. The posterior median of the VPC obtained from the MCMC iterations was 95.57% (95% CI: 93.98 to 96.74%), indicating that the stable component explained the majority of the observed variability.
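The VPC defined above is straightforward to compute from MCMC output. The sketch below assumes hypothetical arrays stable_draws and interaction_draws holding posterior draws of the stable component (si + b0t* + vt) and of the space-time component (b1i + εit), each with shape (number of draws, counties, months); these arrays stand in for the actual WinBUGS output.

import numpy as np

def vpc(stable_draws, interaction_draws):
    # Empirical variance over the space-time cells, computed per MCMC draw.
    var_stable = stable_draws.reshape(stable_draws.shape[0], -1).var(axis=1)
    var_inter = interaction_draws.reshape(interaction_draws.shape[0], -1).var(axis=1)
    share = 100.0 * var_stable / (var_stable + var_inter)
    return np.median(share), np.percentile(share, [2.5, 97.5])

# Toy example with simulated draws (5000 draws, 126 counties, 24 months).
rng = np.random.default_rng(3)
med, ci = vpc(rng.normal(0.0, 0.5, (5000, 126, 24)), rng.normal(0.0, 0.1, (5000, 126, 24)))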
Risk factor detection
The HFMD risk showed an apparent correlation with seasonal changes (Fig. 3), indicating that meteorological factors played a dominant role in the temporal variation of HFMD, with average temperature exerting the strongest influence.
There was a positive association between average temperature and HFMD: a 1 °C rise in temperature was related to an increase of 4.09% (95% CI: 1.12 to 7.27) in the risk of HFMD (RR: 1.04; 95% CI: 1.01 to 1.08) (Table 1).
Table 1 The quantified posterior means and RR of all coefficients in BSTHM
There was a positive association between HFMD and relative humidity. A 1% increment in relative humidity was associated with a 1.77% rise (95% CI: 0.68 to 2.77) in the risk of HFMD (RR: 1.02; 95% CI: 1.01 to 1.03) (Table 1).
A positive association was also found between air pressure and HFMD. A 1 hPa increase was related to a 0.89% (95% CI: 0.36 to 1.36) rise in the HFMD risk, with a corresponding RR of 1.01 (95% CI: 1.00 to 1.014) (Table 1).
Meanwhile, precipitation presented a negative association with the HFMD risk. A 1 mm rise was linked to a 0.12% (95% CI: −0.23 to −0.01) decrease in the HFMD risk, with a corresponding RR of 0.999 (95% CI: 0.998 to 1.00) (Table 1). Additionally, the estimated coefficients for hours of sunlight and wind speed were not statistically significant (Table 1).
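For reference, and assuming (as the reported values suggest) that the percentage changes above were obtained from the log-link coefficients in Table 1, the conversion between a coefficient β, the relative risk per unit increase of a covariate, and the percentage change is:

$$ \mathrm{RR} = e^{\beta}, \qquad \Delta\% = \left(e^{\beta} - 1\right) \times 100\% $$

For example, the stated 4.09% increase per 1 °C corresponds to RR = 1.0409 ≈ 1.04, matching the value reported for average temperature.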
Furthermore, the risk of HFMD presented apparent spatial heterogeneity, and the study found that socio-economic factors also played a dominant role.
There was a positive relationship between the ratio of urban to rural population and the HFMD risk: a 1% increment in the ratio of urban to rural population was associated with a 0.16% increase in the risk of HFMD (p < 0.01) (Table 2).
Table 2 The estimated coefficients of socio-economic factors in SLM
The proportion of the tertiary industry also showed a positive association with HFMD risk: a 1% rise in the proportion of the tertiary industry may be related to an increase of 0.02% in the HFMD risk (p < 0.05) (Table 2).
In particular, per capita GDP presented the highest determinant power amongst these socio-economic factors. A 1000 yuan rise in per capita GDP was associated with a 1.10% increase in the HFMD risk (p < 0.01) (Table 2).
These results indicated a statistically significant regression relationship between the HFMD risk and the ratio of urban to rural population, the proportion of the tertiary industry, and per capita GDP, as the p values were all less than 0.05. The relationships with the other selected factors were not statistically significant (Table 2).
HFMD remains a serious threat to childhood health and has become one of the leading causes of childhood mortality in mainland China [15, 23, 36, 37]. In recent decades, Henan province, as one of the largest population provinces in China, has experienced several serious outbreaks of HFMD [38, 39]. The present study explored, from a spatiotemporal perspective, the epidemiological characteristics of the disease and quantified the impacts of meteorological factors and socio-economic variations on childhood HFMD incidence in Henan. The results revealed that the highest risk was mainly clustered in areas with high urbanization levels; meanwhile, meteorological factors were found to have significant effects on the transmission of HFMD.
The relative risk of HFMD was linked to an obvious seasonal variation, with the highest risk appearing in late spring and early summer (April to June), and the lowest risk in autumn (August to October). It is widely accepted that meteorological factors play a decisive role in the seasonal changes of HFMD incidence, which are regarded as crucial environmental factors that influence the spread and survival of viruses causing HFMD [36, 37, 40]. The association between meteorological factors and the seasonal evolution of HFMD incidence has captured particular interests from many researchers, and some studies have reported that temperature and relative humidity played an extraordinary important role in the seasonal variation of HFMD [40,41,42,43].
The study found that average temperature was strongly and positively associated with monthly HFMD incidence, which is consistent with previous studies at the monthly time scale. For example, a rise in average temperature may have led to an increase in the number of HFMD cases in Vietnam [44], and another study showed that an increase in average temperature was associated with a rise in the number of HFMD cases [45]. The potential mechanism could be that temperature affects the behavioral patterns of people, and warmer weather can lead to increased contact, especially among young children, accordingly facilitating the spread of HFMD infection [46].
Similarly, there was a positive relationship between relative humidity and the incidence of HFMD, which is consistent with other studies [23, 47]. That may be because during humid days the virus can more easily attach to particles in the air, facilitating the spread of the disease [48]. However, a previous study found that relative humidity was not related to the prevalence of this disease, which differs from the present research [46].
Another important meteorological factor influencing the transmission of HFMD is air pressure, which also presented a positive correlation with HFMD incidence, consistent with another study [49]. The potential mechanism may be that air pressure affects the immune system and increases the risk of disease.
Additionally, precipitation presented a negative correlation with HFMD incidence, which was also consistent with other studies. Some studies demonstrated that heavy downpours could break down the survival environment of the viruses [44, 50]. A further potential reason may be that precipitation reduces social contact, thus affecting the spread of the disease [51].
Furthermore, wind speed and hours of sunlight were found to have no statistically significant association with HFMD in this study. This is consistent with some previous studies, although others have drawn opposite conclusions. For example, Liao et al. found that wind speed and hours of sunlight had no significant association with HFMD incidence [6], whereas Xiao et al. demonstrated a weaker association between hours of sunlight and HFMD incidence [52], and Wang et al. reported that wind speed and hours of sunlight were positively associated with HFMD [53]. The potential reason may be that these meteorological factors have different relationships with HFMD in different regions.
These results indicate that meteorological factors play different roles in contributing to the transmission of enteric infectious diseases by affecting the ecological environment of pathogens, exposure probability, and host susceptibility, thus resulting in the occurrence of the disease.
In this study, in order to analyze the spatial heterogeneity of the influence of meteorological factors on HFMD, the relationships between HFMD and meteorological factors were further calculated in the three strata classified by the BSTHM: hot spots, cold spots, and neither hot nor cold spots. The results indicated distinctive local relationships in each stratum compared with those in the global model (Table 1, Additional file 1: Tables S2, S3 and S4). In the global model, average temperature and relative humidity were found to be key factors affecting HFMD risk; however, no statistically significant influence of average temperature on HFMD risk was found in the cold spots (Additional file 1: Table S3), and the effect of relative humidity was also not statistically significant in any stratum (Additional file 1: Tables S2, S3 and S4). The potential reasons for these differences between the global and local models may be that different HFMD transmission mechanisms exist in different regions, and that the small sample size in each stratum also affected the statistical significance of the estimated parameters.
In addition, the study indicated that the distribution of HFMD risk presented apparent spatial heterogeneity. High risk of HFMD (hot spots) was mainly concentrated in the areas where the level of economic development and urbanization was high, while low risk of HFMD (cold spots) was mainly distributed in undeveloped counties with a lower economic level and incomplete infrastructure [35], which is consistent with previous studies. For example, one previous study found that the proportion of tertiary industry was positively correlated with the incidence of HFMD [36]. Another found that incidence in economically developed areas, for example Beijing, Tianjin, Shanghai, and Zhejiang, was higher than in less developed areas [19]. Furthermore, a study found that population density and tertiary industry presented the most significant impact on this disease, explaining 42% of the HFMD transmission [54]. The potential mechanism may be that, due to the rapid economic development and urbanization in recent years, there is an increased floating population in the more developed regions compared with the cold spots, yet limited living and working space, which provides more opportunities for contact between people and thus accelerates the spread of the virus.
In this study, three models, GeoDetector, BSTHM, and SLM, were used, of which the BSTHM is a linear model used to detect the spatiotemporal heterogeneity of the HFMD risk. However, HFMD transmission in reality has a fundamentally non-linear nature, and a linear method is a first-order approximation of reality. This introduces some uncertainty into the results of the study. Fortunately, in a linear model, the physical meaning of the parameters is clear, and the calculation is easy to implement and repeat.
The present study describes the detailed spatiotemporal dynamics of HFMD and its relationships with meteorological and socio-economic factors from 2012 to 2013 in Henan province, China. The high risks were mainly concentrated in regions where the economic level was high. HFMD risk in Henan had an obvious seasonal characteristic, indicating that HFMD risk is mainly related to a hot and humid environment. These results provide a good illustration of the spatiotemporal distribution and the seasonal variation of HFMD risk among different geographic areas, and can serve as a reference and basis for the surveillance and control of this disease in practice.
BSTHM: Bayesian space-time hierarchy model
BYM: Besag, York, and Mollie
CAR: Conditional autoregressive
HFMD: Hand, foot and mouth disease
MCMC: Markov chain Monte Carlo
SLM: Spatial lag model
Lei XB, Cui S, Zhao ZD, Wang JW. Etiology, pathogenesis, antivirals and vaccines of hand, foot, and mouth disease. Natl Sci Rev. 2015;2:268–84.
Qiu J. Enterovirus 71 infection: a new threat to global public health? Lancet Neurol. 2008;7:868–9.
Solomon T, Lewthwaite P, Perera D, Cardoso MJ, McMinn P, Ooi MH. Virology, epidemiology, pathogenesis, and control of enterovirus 71. Lancet Infect Dis. 2010;10:778–90.
Li RC, Liu LD, Mo ZJ, Wang XY, Xia JL, Liang ZL, Zhang Y, Li YP, Mao QY, Wang JJ, et al. An inactivated enterovirus 71 vaccine in healthy children. N Engl J Med. 2014;370:829–37.
Zhu L, Wang XJ, Guo YM, Xu J, Xue FZ, Liu YX. Assessment of temperature effect on childhood hand, foot and mouth disease incidence (0–5 years) and associated effect modifiers: a 17 cities study in Shandong Province, China, 2007–2012. Sci Total Environ. 2016;551:452–9.
Liao JQ, Qin ZJ, Zuo ZL, Yu SC, Zhang JY. Spatial-temporal mapping of hand foot and mouth disease and the long-term effects associated with climate and socio-economic variables in Sichuan Province, China from 2009 to 2013. Sci Total Environ. 2016;563:152–9.
Chatproedprai S, Theanboonlers A, Korkong S, Thongmee C, Wananukul S, Poovorawan Y. Clinical and molecular characterization of hand-foot-and-mouth disease in Thailand, 2008–2009. Jpn J Infect Dis. 2010;63:229–33.
Chen KT, Chang HL, Wang ST, Cheng YT, Yang JY. Epidemiologic features of hand-foot-mouth disease and herpangina caused by enterovirus 71 in Taiwan, 1998–2005. Pediatrics. 2007;120:E244–52.
Ang LW, Koh BKW, Chan KP, Chua LT, James L, Goh KT. Epidemiology and control of hand, foot and mouth disease in Singapore, 2001–2007. Ann Acad Med Singap. 2009;38:106–12.
National Science Foundation of China (41601419, 41531179), Innovation Project of LREIS (O88RA205YA, O88RA200YA) and Special Scientific Research Fund of Public Welfare Profession of China (GYHY20140616). Additionally, the funding body in the study had no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Xiangxue Zhang and Chengdong Xu contributed equally to this work.
The School of Earth Science and Resources, Chang'an University, Xi'an, 710054, China
Xiangxue Zhang
State Key Laboratory of Resources and Environmental Information System, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, 11A, Datun Road, Chaoyang District, Beijing, 100101, China
Xiangxue Zhang & Chengdong Xu
China National Center for Food Safety Risk Assessment, Beijing, 100022, China
Gexin Xiao
Chengdong Xu
CDX conceived and designed the study. XXZ performed the experiments. XXZ and GXX analyzed the data. CDX and XXZ wrote the paper. All authors contributed to the final version of the manuscript. All authors read and approved the final manuscript.
Correspondence to Chengdong Xu.
Table S1 contains descriptive characteristics for the meteorological and socio-economic variables selected in this study. Tables S2–S4 contain the estimated posterior means and RR of the BSTHM coefficients in the three strata. (DOCX 23 kb)
Zhang, X., Xu, C. & Xiao, G. Space-time heterogeneity of hand, foot and mouth disease in children and its potential driving factors in Henan, China. BMC Infect Dis 18, 638 (2018). https://doi.org/10.1186/s12879-018-3546-2
Hand, foot and mouth disease
Meteorological factors
Socio-economic factors
Spatiotemporal risk
Russell's paradox, but with the set of all 2-element sets [closed]
How do you prove that the set of all 2-element sets does not exist, based on Russell's paradox? It seems pretty obvious to me, but I have no idea how to make a proper proof.
elementary-set-theory
Yashiru99
$\begingroup$ All two sets? If there were only two then life would be a lot simpler. $\endgroup$ – badjohn Oct 22 '19 at 16:21
$\begingroup$ The assertion stating this exists is not inconsistent as a single statement in the way that asserting the existence of the Russell set is. In the set theory NFU, for example, there is such a set. So the question needs to be sharpened by situating it within a particular theory. In ZFC, for example, its existence can be disproved because of the axiom of union. $\endgroup$ – Malice Vidrine Oct 22 '19 at 20:21
$\begingroup$ What do you mean by "basing on"? $\endgroup$ – Andrés E. Caicedo Oct 24 '19 at 1:39
It seems as though you want to prove the non-existence of the set of all two-element sets as a corollary of Russell's paradox. But there's an important difference between the class of all pairs and the Russell class. Notice the theory consisting of the single sentence $$\exists y\forall x(x\in y\leftrightarrow x\notin x)$$ is inconsistent. But there are consistent set theories in which $\{z:\exists xy(z=\{x,y\})\}$ is actually a set (like $\mathsf{NFU}$). So you can't disprove the existence of such a set except with respect to a particular theory.
If you're thinking about something like Zermelo set theory, or one of its extensions, then you likely already know the argument that there can be no universal set; the separation scheme would let us show that the Russell class is a set, resulting in the usual contradiction.
To disprove the existence of a set of all two-element sets in Zermelo, we note that in the presence of the other axioms (particularly whichever axioms ensure that there's at least one thing, and also something else), Pairing implies that every set is a member of some two-element set. So suppose we have our set of all pairs; the axiom of Union says that if we have this set, we can form the set of all elements that are a member of some two-element set. And that's the set of all sets, something we already know leads to contradiction. So we can have no such set.
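For readers who prefer symbols, the argument above can be condensed as the following sketch in ZF notation (granting, as noted, that there exist at least two distinct sets); here $P$ denotes the putative set of all two-element sets:
$$
\begin{align*}
\text{(Pairing)}\quad & \forall x\ \exists p \in P\ (x \in p) && \text{every set lies in some two-element set,}\\
\text{(Union)}\quad   & \forall x\ \bigl(x \in \textstyle\bigcup P\bigr),\ \text{so } \textstyle\bigcup P = V && \text{a universal set would exist,}\\
\text{(Separation)}\quad & R = \{\, x \in \textstyle\bigcup P : x \notin x \,\}\ \Rightarrow\ R \in R \leftrightarrow R \notin R && \text{Russell's contradiction.}
\end{align*}
$$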
Malice Vidrine
Flexitranstore
The International Symposium on High Voltage Engineering
ISH 2019: Flexitranstore, pp 61–72
Conflict of Interests Between SPC-Based BESS and UFLS Scheme Frequency Responses
Mojtaba Eliassi
Roozbeh Torkzadeh
Peyman Mazidi
Ricardo Pastor
Vasiliki Vita
Elias Zafiropoulos
Christos Dikeakos
Michalis Michael
Rogiros Tapakis
George Boultadakis
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 610)
Nowadays, interest in grid-supporting energy storage systems for frequency response improvement is spurred by the increasing penetration of renewable energy resources. The operational frequency constraints of the grid code should be fulfilled in the combined state feedback frequency control provided through the BESS frequency support and the UFLS relays. In this paper, the favoritism and unfairness of grid-interactive Battery Energy Storage System (BESS) frequency support are investigated in terms of the Rate of Change of Frequency (RoCoF), frequency nadir, time response, steady-state error and, specifically, the total load shed subject to power balance over the network. Categorizing the load-shedding stages into vital and non-vital can measure the appropriateness of the BESS response. Conflicts between the BESS control parameters, the performance measures and the UFLS actions are verified on the modified Cypriot transmission grid, and the simulation results show that a controller or modulation technique would be essential to coordinate the BESS and UFLS scheme frequency responses and handle the conflict between the controllers.
Battery Energy Storage System · Under Frequency Load Shedding · Dynamic frequency response · Performance measures
A rising interest in investigating the impact of high penetration of renewable generation on the frequency control of power systems, and the capability of delivering frequency support by full Converter Control-Based Generators (CCBGs), has emerged in recent years [1]. The decrease of system inertia and conventional spinning reserve, together with the increase of unpredictable uncertainties imposed by integrating renewables into the generation mix, deteriorates the frequency support. By implementing virtual synchronous machine behavior [2, 3] through the new grid-supporting converters of storages and renewables, the frequency support can be improved by means of fast synthetic inertia and timely generation control in severe conditions [4].
In severe frequency decline situations, Under-Frequency Load Shedding (UFLS) programs (generally including event-based and response-based UFLS) are designed and implemented by the Transmission System Operator (TSO), as the last automated measure preventing power system collapse, through dynamic off-line simulations in the operational planning time-scale. In response-based UFLS programs, the relay settings are usually pre-defined, fixed and non-robust, which may not be a comprehensive solution for the wide range of combinational/cascading events and the operational variability and uncertainty imposed by the high penetration of renewables [5]. A grid-interactive BESS can support the system operation during the inertial and primary response through the tuning of the inertia and droop parameters of the SPC (Synchronous Power Controller).
Several TSOs have announced new grid codes requiring Electronically Interfaced Resources (EIRs) such as energy storages, PVs, HVDC links and wind power plants to provide frequency response [6]. The mutual interaction of UFLS, as a traditional emergency control action, with BESS frequency support has not been sufficiently investigated under a high share of renewable energy sources. The influence of the fast-acting power controller of a BESS on the UFLS scheme, as well as the mal-operation and readjustment of UFLS settings under high penetration of wind generation, are presented in [7] and [8]. In [9], a systematic method is presented for controlling the output power of an energy storage system to prevent transient load shedding. Existing UFLS schemes suggested for conventional power systems have not yet been incorporated into the tuning of grid-supporting converters to provide a better inertial and primary frequency response.
An SPC-based Battery Energy Storage System (BESS) [10, 11, 12], acting as a linear state feedback controller, can be tuned to control the performance of the system frequency response, measured by conflicting multi-dimensional performance measures such as minimum RoCoF and frequency, steady-state error and settling time. Good performance of the UFLS scheme, as another state feedback controller acting in conjunction with the BESS frequency response, is of great importance, considering that the frequency support of the BESS may be harmful to some frequency performance measures or to the UFLS scheme performance. On the other hand, UFLS actions, as step changes in the RHS of the swing equation, degrade the BESS frequency response. Although the UFLS scheme has a similar operation horizon to the BESS frequency support, the two are adjusted in different operational planning horizons, which makes the conflicts more complex and unavoidable. In this paper, it is shown that the gains of the SPC and the UFLS relay parameters need to be adjusted in a coordinated manner to compromise between diverse technical and economic performance criteria of the frequency response.
The rest of the paper is organized as follows: Sects. 2 and 3 investigate the impact of the UFLS scheme and of the BESS inertia and droop characteristic on the frequency response, based on the unified discretized system frequency response formulation of the UFLS and BESS frequency responses. Simulation results and discussion are presented in Sect. 4. The conclusion is drawn in Sect. 5.
2 Impact of BESS Inertia and Droop Characteristic on Frequency Response
2.1 Unified Discretized System Frequency Response
Based on the swing equation, the accelerating and decelerating behaviour of the power system can be studied when a power disturbance occurs. If the sum of all torques acting on the rotor shaft of a synchronous machine on the right-hand side of the swing equation, namely
the mechanical torque (Tm) (power input),
the electrical torque (Te) (power output), and
the damping torque (Td) [Nm],
does not add up to zero, the excess torque accelerates or decelerates the rotor with moment of inertia J [kg m2], meaning that the rotor angle θ [rad] accelerates (non-zero second derivative). By multiplying the swing equation by the system frequency ωb, the torques turn into powers. For a single area made up of Ng synchronous machines, the generalized continuous form of the Center of Inertia (CoI) swing equation, presented in (1), is linearized as (2).
$$ \frac{2H}{{f_{0} }}\frac{df\left( t \right)}{dt} = \sum\nolimits_{i = 1}^{{N_{g} }} {\left( {P_{mi} - P_{ei} } \right)} $$
$$ \frac{d\Delta f\left( t \right)}{dt} = \frac{{f_{0} }}{2H}\Delta P^{im} \left( t \right) $$
$$ \Delta P^{im} \left( t \right) = \Delta P^{gov} \left( t \right) - \Delta P^{c} \left( t \right) + \Delta P^{sh} \left( t \right) + \Delta P^{wind} \left( t \right) + \Delta P^{PV} \left( t \right) + \Delta P^{ESS} \left( t \right) + \Delta P^{DG} \left( t \right) + \Delta P^{Tie} \left( t \right) - D\Delta f\left( t \right) $$
Each term of \( \Delta P^{im} \left( t \right) \) can be defined as a function of \( \frac{d\Delta f\left( t \right)}{dt} \) and/or \( \Delta f\left( t \right) \), or as a step input. The generation and load changes that may cause an input power imbalance are: generation outage \( (\Delta P^{c} ) \), governor action \( (\Delta P^{gov} ) \), wind, PV and ESS frequency response \( (\Delta P^{wind} , \Delta P^{PV} \left( t \right),\Delta P^{ESS} \left( t \right) ) \), load shedding \( (\Delta P^{sh} \left( t \right)) \) and load damping \( (D\Delta f\left( t \right)) \). Denoting each term of \( \Delta P^{im} \) by X, the system frequency response can be discretized over time with time step ∆t as \( \Delta X\left( {n\Delta t} \right) = \Delta X_{n} \).
$$ \Delta f_{n + 1} = \Delta f_{n} + \Delta t\frac{{f_{0} }}{2H}(\Delta P_{n}^{gov} - \Delta P^{c} + \Delta P_{n}^{sh} + \Delta P_{n}^{wind} + \Delta P_{n}^{PV} + \Delta P_{n}^{ESS} - D\Delta f_{n} ) $$
$$ \Delta P_{n + 1}^{gov} = \Delta P_{n}^{gov} + \frac{ - \Delta t}{T}\left( {\Delta P_{n}^{gov} + \frac{{\Delta f_{n} }}{R}} \right) $$
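To make the recursion concrete, a minimal forward-Euler sketch of Eqs. (4)–(5) is given below; all parameter values are illustrative placeholders of ours, not the Cypriot system data, and the UFLS and BESS terms are added later.

```python
import numpy as np

# Forward-Euler sketch of Eqs. (4)-(5): CoI frequency deviation and governor
# response after a step generation loss dP_c. All values are placeholders.
f0, H, D = 50.0, 4.0, 0.0        # nominal frequency [Hz], inertia constant [s], load damping
R, T = 2.5, 8.0                  # governor droop [Hz/pu] and time constant [s]
dP_c = 0.10                      # step generation deficit [pu]
dt, n_steps = 0.01, 3000

df = np.zeros(n_steps + 1)       # frequency deviation Delta f       [Hz]
dP_gov = np.zeros(n_steps + 1)   # governor power change Delta P^gov [pu]

for n in range(n_steps):
    dP_im = dP_gov[n] - dP_c - D * df[n]                           # net imbalance (no UFLS/BESS yet)
    df[n + 1] = df[n] + dt * f0 / (2.0 * H) * dP_im                # Eq. (4)
    dP_gov[n + 1] = dP_gov[n] - dt / T * (dP_gov[n] + df[n] / R)   # Eq. (5)

print(f"nadir ~ {df.min():.3f} Hz, steady-state deviation ~ {df[-1]:.3f} Hz")
```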
The main focus of the rest of the paper is on the BESS influence on the system frequency response and the UFLS scheme. Accordingly, terms of (3) other than \( \Delta P^{ESS} \left( t \right) \) and \( \Delta P^{sh} \left( t \right) \) are ignored in the following equations. In the simulation step, however, all of the resources contributing to the system frequency response are modelled and reflected.
2.2 System Acceleration Behaviour
The corresponding performance measures of the system frequency response are shown in Fig. 1.
Fig. 1. Frequency performance measures illustration.
For a step-wise decrease in \( \Delta P^{c} \) from 0 to \( - \Delta P \) at the disturbance time t0, system frequency starts to drop fast with a high RoCoF. Higher RoCoF may activate the RoCoF relays of DGs and loads. The inertial and primary frequency response of the generations and loads try to limit the frequency deviation by increasing the system kinetic energy. At minimum frequency (Nadir frequency) time tm, RoCoF reaches zero and the frequency deviation \( \Delta f \) starts to decrease. Subsequently, \( \Delta f \) oscillates and finally stabilizes at the new steady-state frequency \( \Delta f_{ss} \).
Consequently, as shown in Fig. 2, the frequency response can be divided into areas in which the power system is accelerating and others in which it is decelerating. In an accelerating area, the frequency deviation \( \Delta f \) with respect to the steady-state frequency \( \Delta f_{ss} \) is increasing; in other words, \( \Delta f - \Delta f_{ss} \) and the RoCoF have the same sign. This can be written as:
Fig. 2. Acceleration periods of the power system frequency response.
$$ {\text{Accelerating}}:\left( {\Delta f\left( t \right) - \Delta f_{ss} } \right).\frac{d\Delta f\left( t \right)}{dt} > 0 $$
$$ {\text{Decelerating}}:\left( {\Delta f\left( t \right) - \Delta f_{ss} } \right).\frac{d\Delta f\left( t \right)}{dt} < 0 $$
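In discrete form, the two criteria above reduce to a simple sign test on the product \( (\Delta f - \Delta f_{ss})\,d\Delta f/dt \); a small helper illustrating this classification is sketched below (the function name and return labels are ours).

```python
def phase(df, d_df_dt, df_ss):
    """Classify a point of the frequency trajectory using the sign test above."""
    s = (df - df_ss) * d_df_dt
    if s > 0:
        return "accelerating"
    if s < 0:
        return "decelerating"
    return "boundary"   # extremum of |df - df_ss|, e.g. the frequency nadir
```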
2.3 BESS as a State Feedback Controller
The frequency response of storage can be similar to conventional units when participating in inertia and primary response. When the system is faced with large amount of power deficiency, ESS can provide FFR (Fast Frequency Response) based on the following transfer function [13]:
$$ \Delta f\left( t \right) = \frac{{\Delta P^{ESS} \left( t \right)}}{{K_{e} }}\quad \;K_{e} = \frac{{ - \left( {2H_{e} s + \frac{1}{{R_{e} }}} \right)}}{{1 + T_{e} s}} $$
where \( H_{e} \) and \( R_{e} \) are the inertia constant and the primary frequency control constant, respectively, and \( T_{e} \) is the time constant of the BESS, which is neglected since it is small in comparison with the time constants of the other, conventional units. Therefore:
$$ - 2H_{e} \frac{d\Delta f\left( t \right)}{dt} - \frac{1}{{R_{e} }}\Delta f\left( t \right) = \Delta P^{ESS} \left( t \right) $$
$$ - 2H_{e} \frac{{\Delta f_{n + 1} - \Delta f_{n} }}{\Delta t} - \frac{1}{{R_{e} }}\Delta f_{n} = \Delta P^{ESS} \left( n \right) $$
The charging, discharging and power capacity limits of the BESS are assumed to be reflected in the limits of \( \Delta P^{ESS} \), \( H_{e} \) and \( R_{e} \). Integrating (8) into (3) results in (9).
$$ \Delta f_{n + 1} = \Delta f_{n} + \Delta t\frac{{f_{0} }}{{2\left( {H + H_{e} } \right)}}(\Delta P_{n}^{gov} - \Delta P^{c} + \Delta P_{n}^{sh} - \frac{1}{{R_{e} }}\Delta f_{n} - D\Delta f_{n} ) $$
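Continuing the earlier forward-Euler sketch, the BESS feedback of Eq. (9) only changes the update step, since \( H_e \) adds to the system inertia and \( 1/R_e \) acts on the frequency deviation; the gains below are placeholders of ours, not tuned SPC values.

```python
# BESS state feedback of Eq. (9); reuses f0, H, D, R, T from the earlier sketch.
H_e, R_e = 3.0, 1.0      # illustrative SPC inertia [s] and droop [Hz/pu] gains

def step_with_bess(df_n, dP_gov_n, dP_sh_n, dP_c, dt=0.01):
    """One forward-Euler step of Eq. (9); dP_sh_n is the load already shed [pu]."""
    dP_im = dP_gov_n - dP_c + dP_sh_n - df_n / R_e - D * df_n
    df_next = df_n + dt * f0 / (2.0 * (H + H_e)) * dP_im
    dP_gov_next = dP_gov_n - dt / T * (dP_gov_n + df_n / R)    # Eq. (5), unchanged
    return df_next, dP_gov_next
```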
In (9), the BESS inertia and droop characteristics act as a state feedback on the system frequency. In order to provide a better frequency response, the inertia and damping of the BESS need to be tuned, which is achieved by regulating the converters' SPC parameters (H and R) through a multi-objective optimal gain-tuning method. The SPC-based BESS as a state feedback on the system DAEs is shown in Fig. 3, in which x is [\( f,\dot{f} \)] and K = [H, R] are the controller gains.
Fig. 3. SPC-based BESS as a state feedback control.
2.4 Discussion on BESS Frequency Response
We may state that feedback can cause a system that is originally stable to become unstable; feedback is certainly a double-edged sword, and when it is improperly used it can be harmful [14]. Based on (5) and (6), during accelerating time intervals the system needs the inertia emulated by the BESS in order to arrest the rate of change of frequency. As soon as the system starts decelerating, the inertia should be reduced again to prevent an overshoot in the frequency response. The unused inertia forces the frequency to return to the steady-state value more slowly, which deteriorates the overall frequency response.
Additionally, damping should be increased during decelerating phases to help decrease the settling time of the frequency deviation. The additional inertia required during accelerating phases is proportional to the RoCoF and the required additional damping is proportional to the frequency deviation from the steady state. Without adaptive damping and inertia control, the BESS response is not canceled out when the system has sufficiently stabilized which may result in unstable oscillations.
Due to the conflicting influence of the battery inertia and droop responses in accelerating and decelerating periods of frequency response, conflict of interests appears between different performance measures of frequency response with respect to the provided inertia and droop responses of BESS.
3 Response-Driven UFLS as a Non-linear State Feedback Controller
Response-driven load shedding widely uses under-frequency-based load shedding solutions. The inherent closed-loop, feedback-based control scheme of decentralized response-driven UFLS systems makes them efficient in acting against disturbances.
As frequency is a very good indicator of the power mismatch, operation based on under-frequency relays offers high control precision and robustness with respect to uncertainties. However, it also involves several difficulties and challenges. First, the setting of the UFLS is highly complex and involves many parameters, including the number of load-shedding stages, the percentage of load allowed to be shed, the time delay of each stage, the real-time topology, etc. Together these variables make the UFLS setting a non-linear and multi-dimensional problem. Second, UFLS is triggered only after the frequency has already declined to certain low values, introducing a time delay which makes the solution reactive in nature, so that stabilizing the system requires more time.
UFLS relays are characterized by an incremental/decremental step behaviour in the swing equation. For simplicity, the related blocks can be represented as a sum of incremental (decremental) step functions. For instance, as presented in (10), for a fixed UFLS scheme [15], the function \( \Delta P^{sh} \left( t \right) \) in the time domain can be written as a sum of incremental step functions ∆Pk·u(t – tk). Therefore, for L load-shedding steps:
$$ \Delta P^{sh} \left( n \right) = \sum\nolimits_{k = 1}^{L} {\Delta P_{k} .u\left( {n - n_{k} } \right)} $$
$$ u\left( {n - n_{k} } \right) = 1\;{\text{if}}\;f_{0} + \Delta f_{{n - n_{k} + i}} < f_{th}^{k} \quad i = 0, \ldots ,n_{k} $$
Expressions (10) and (11) can be linearized using binary variables and the big-M technique [16]. As stated above, the relay logic dictates that the block of load \( \Delta P_{k} \) be shed when the corresponding timer exceeds \( n_{k} \). When a contingency occurs, for each load-shedding stage k with threshold \( f_{th}^{k} \), the relay disconnects a block of load after \( n_{k} \) time steps once the frequency trajectory [computed using (3)–(4)] violates the frequency set point for the predetermined time delay.
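The relay logic of Eqs. (10)–(11) can be sketched as below; the thresholds, block sizes and delays are illustrative values of ours, not the settings of Table 1.

```python
# Sketch of the fixed UFLS logic of Eqs. (10)-(11): stage k sheds dP_k once the
# frequency has stayed below its threshold f_th[k] for delay[k] consecutive steps.
f_th  = [49.0, 48.8, 48.6]     # stage thresholds [Hz]
dP_k  = [0.03, 0.03, 0.04]     # shed blocks [pu]
delay = [20, 20, 20]           # required consecutive steps (0.2 s at dt = 0.01 s)

timers = [0] * len(f_th)
tripped = [False] * len(f_th)

def ufls_step(f_now):
    """Update relay timers for the current frequency and return total shed power."""
    shed = 0.0
    for k in range(len(f_th)):
        if tripped[k]:
            shed += dP_k[k]
            continue
        timers[k] = timers[k] + 1 if f_now < f_th[k] else 0
        if timers[k] >= delay[k]:
            tripped[k] = True
            shed += dP_k[k]
    return shed
```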
3.1 Discussion on UFLS and BESS Mutual Effect
Similar to the SPC-based BESS frequency response, a response-driven UFLS scheme affects the frequency response as a non-linear state feedback controller, as presented in Fig. 4; f(x) is given in (10).
Fig. 4. SPC-based BESS and UFLS scheme as state feedback controls.
A UFLS action at \( t_{{f_{th} }} - t_{k} \) is a positive step change in the generation–load imbalance, which enters the system DAEs as a disturbance. The subsequent change in the RoCoF and frequency values alters the BESS inertial and primary response and reduces the BESS frequency support.
In some situations, however, the frequency support of EIRs may be harmful to frequency control, as they generate extra energy over a short period of time, which reduces the frequency decay and its derivative but cannot be maintained over time. Reducing the frequency derivative by emulating inertia via EIRs looks positive at first sight, since a reduced frequency derivative triggers fewer under-frequency relays; in some conditions, however, the shed load is then smaller than the amount that the event requires. As the EIRs cannot maintain the extra generation over time, the frequency may continue to decay until the shedding of the next load step. Other system and component protections may also be activated as a result of the lower steady-state frequency.
Based on (10) and (11), load-shedding steps are activated if the frequency falls temporarily or recurrently below a certain UFLS threshold. If the steady-state frequency settles above the same threshold, the load shedding may be considered non-vital. The BESS can provide fast injection of power, with limited energy capacity, to prevent temporary frequency dips and unnecessary load shedding. On the other hand, the BESS should withdraw its support once the frequency decline is arrested and no additional load shedding would be activated.
In order to avoid mal-operation of the UFLS scheme in the presence of BESS frequency support, the UFLS scheme should be incorporated into the controller tuning of the BESS as an economic objective to be optimized in conjunction with the other frequency performance measures. In contrast to BESS controller tuning, most power systems operate with predetermined load-shedding schemes and use fixed relay settings. Hence, resetting UFLS thresholds or changing load-shedding amounts are often not viable options.
Moreover, with the existing supervisory and control capabilities, TSOs are not inclined to re-design the UFLS scheme in the short term. Therefore, the frequency thresholds, time delays and shed-load percentages can be assumed fixed during the optimal gain-tuning of the BESS. Using synchrophasor measurements for the control of grid-interactive energy storage systems and UFLS relays [17] provides the possibility of coordinated online control of these resources of inertia and primary frequency response.
4 Studied System
4.1 System Modelling and Characteristics
There are 13 load-shedding stages (~68% of the total load) spread over all of Cyprus, which are presented in Table 1. This UFLS scheme is implemented in the system by defining and setting the under-frequency relays in DIgSILENT PowerFactory. A 50 MW, 100 MWh BESS equipped with an SPC is added to the network, with tunable inertia and droop parameters whose maximum gains are 15 and 60, respectively, as presented in [18]. The initial values of 0.5 and 0 are assumed for the SOC and power output states of the BESS, respectively. The SOC limits are set to 0.2 and 0.8.
Other RHS terms of the swing equation in (2) are included in the system modelling and control, such as turbines and governors, and the wind turbine standard controllers comply with the Grid Code requirements. The load damping effect is not considered in this study. The coincident disturbance of two generation units with 90 and 60 MW dispatched power is considered to tune the BESS parameters.
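For convenience, the study set-up stated above can be collected in a small configuration record (the values are taken from the text; the layout and field names are merely our own illustrative choice):

```python
# Parameters of the modified Cypriot test case as stated in the text;
# the dictionary structure itself is only a convenience for scripting studies.
study_case = {
    "ufls": {"stages": 13, "total_sheddable_load_pct": 68},
    "bess": {
        "power_MW": 50, "energy_MWh": 100,
        "max_inertia_gain": 15, "max_droop_gain": 60,
        "soc_init": 0.5, "soc_limits": (0.2, 0.8), "p_init": 0.0,
    },
    "disturbance_MW": (90, 60),   # coincident outage of two generation units
    "load_damping": None,          # load damping effect not considered
}
```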
4.2 Conflict of Interests
In order to investigate the mutual effect of the UFLS plan and the BESS frequency support, the system frequency response is presented for different scenarios in Fig. 5. The scenarios studied are: without UFLS relays and BESS (a), with UFLS relays and without BESS (b), and with BESS (H = 5, R = 20) and UFLS (c).
Fig. 5. Effect of the UFLS plan and BESS frequency support on the frequency response: red (b) and green (c).
A comparison of the transient responses in these scenarios is summarized as follows:
In (a), the system frequency decreases continuously until the system becomes unstable, due to the cascading outage of conventional and renewable resources caused by the activation of their under-frequency relays. The primary and inertial support of the conventional generation is not enough to arrest the frequency decline.
With the activation of UFLS in (b), the first three blocks of the UFLS plan are activated with a 0.2 s delay after the frequency reaches the thresholds (taction = [2.89, 3.15, 3.95]). Re-producing the frequency responses for three different combinations of activated shedding stages in Fig. 6 (red: stage one; green: stages one and two; blue: all stages) allows the necessity of each activated stage to be analysed with respect to the steady-state frequency. While the first two stages are categorized as vital, the third stage is non-vital, as the frequency settles above 48.8 Hz without its activation; as shown in (c), this non-vital stage is avoided by the BESS frequency support.
Fig. 6. Vitality analysis of the activated load-shedding stages.
With the addition of the BESS in (c), only the first two blocks of the UFLS plan are activated, with a 0.2 s delay after the frequency reaches the thresholds (taction = [3.24, 3.74]). Although the non-vital shedding stage is avoided by the BESS frequency support, and other performance measures such as the RoCoF and the minimum frequency are improved, the response time and the steady-state frequency are affected adversely.
As presented in Table 2, a conflict of interests appears due to the parallel activation of UFLS and BESS frequency support from the viewpoint of the different performance measures. The undesirable effects of the BESS are greyed out in the table.
Table 1. Load-shedding stages spread over all Cyprus (columns: frequency threshold, shed percentage, shed MW).
Table 2. Frequency performance measures with and without BESS (rows: fNadir, RoCoFmin, tNadir, Pshed).
First, it was shown that, by considering the BESS droop and inertia gains along with the multistage UFLS plan, the overall performance of the load-shedding plan and of the BESS frequency support are mutually influenced in both favourable and unfavourable directions. It was also revealed that the BESS droop and inertia gains affect the frequency performance measures as a double-edged sword with respect to the system acceleration behaviour. Accordingly, the conflict of interests arising from the interference of the UFLS and BESS state feedback controls (with similar operation but different planning horizons) should be handled through a gain-tuning approach. Enabling coordinated online control of the BESS parameters and the UFLS scheme using synchrophasor measurements provides a better inertial and primary frequency response, considering the demonstrated conflicts of interest.
This work was partially supported by the European Commission under project FLEXITRANSTORE—H2020-LCE-2016-2017-SGS-774407 and by the Spanish Ministry of Science under project ENE2017-88889-C2-1-R. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the host institutions or funders.
1. Dreidy, M., Mokhlis, H., Mekhilef, S.: Inertia response and frequency control techniques for renewable energy sources: a review. Renew. Sustain. Energy Rev. 69, 144–155 (2017)
2. Rodriguez, P., Citro, C., Candela, J.I., Rocabert, J., Luna, A.: Flexible grid connection and islanding of SPC-based PV power converters. IEEE Trans. Ind. Appl. 54(3), 2690–2702 (2018)
3. D'Arco, S., Suul, J.A., Fosso, O.B.: A virtual synchronous machine implementation for distributed control of power converters in SmartGrids. Electr. Power Syst. Res. 122, 180–197 (2015)
4. Morren, J., de Haan, S.W.H., Kling, W.L., Ferreira, J.A.: Wind turbines emulating inertia and supporting primary frequency control. IEEE Trans. Power Syst. 21(1), 433–434 (2006)
5. Tang, L., McCalley, J.: Two-stage load control for severe under-frequency conditions. IEEE Trans. Power Syst. 31(3), 1943–1953 (2016)
6. Greenwood, D.M., Lim, K.Y., Patsios, C., Lyons, P.F., Lim, Y.S., Taylor, P.C.: Frequency response services designed for energy storage. Appl. Energy 203, 115–127 (2017)
7. Gonzalez-Longatt, F., Rueda, J., Vázquez Martínez, E.: Effect of fast acting power controller of battery energy storage systems in the under-frequency load shedding scheme. In: International Conference on Innovative Smart Grid Technologies (ISGT Asia 2018) (2018)
8. Aparicio, N., Añó-Villalba, S., Belenguer, E., Blasco-Gimenez, R.: Automatic under-frequency load shedding mal-operation in power systems with high wind power penetration. Math. Comput. Simul. 146, 200–209 (2018)
9. Pulendran, S., Tate, J.E.: Energy storage system control for prevention of transient under-frequency load shedding, pp. 1–11 (2015)
10. Zhang, W., Cantarellas, A.M., Rocabert, J., Luna, A., Rodriguez, P.: Synchronous power controller with flexible droop characteristics for renewable power generation systems. IEEE Trans. Sustain. Energy 7(4), 1572–1582 (2016)
11. Marin, L., Tarras, A., Candela, I., Rodriguez, P.: Stability analysis of a grid-connected VSC controlled by SPC
12. Eliassi, M., Paulino, P.A.B., Torkzadeh, R., Rodriguez, P.: Event-based under-frequency inertia emulation scheme for severe conditions
13. Khazaei, J., Nguyen, D.H., Thao, N.G.M.: Primary and secondary voltage/frequency controller design for energy storage devices using consensus theory. In: 2017 6th International Conference on Renewable Energy Research and Applications (ICRERA 2017) (2017)
14. Kuo, F.G.B.: Automatic Control Systems, pp. 398–401 (2002)
15. Bevrani, H., Verbic, G.S., Koepper, H.D.: Robust Power System Frequency Control, vol. 48, no. 2 (2009)
16. Banijamali, S., Amraee, T.: Semi adaptive setting of under frequency load shedding relays considering credible generation outage scenarios. IEEE Trans. Power Deliv. (2018)
17. Torkzadeh, R., Eliassi, M., Mazidi, P., Rodriguez, P.: Synchrophasor measurements for control of grid interactive energy storage system
18. Rocabert, J., Luna, A., Blaabjerg, F.: Control of power converters in AC microgrids. IEEE Trans. Power Electron. 27(11), 4734–4749 (2012)
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
1. Research Institute of Science and Technology, Loyola University Andalusia, Seville, Spain
2. Research Center on Renewable Electrical Energy Systems, Technical University of Catalonia, Barcelona, Spain
3. Centro de Investigação em Energia, REN – State Grid, S.A., Sacavém, Portugal
4. Institute of Communications and Computer Systems, Athens, Greece
5. Independent Power Transmission Operator (IPTO TSO), Athens, Greece
6. Transmission System Operator, Cyprus (TSOC), Strovolos, Cyprus
7. European Dynamics Luxembourg SA, Luxembourg City, Luxembourg
Eliassi M. et al. (2020) Conflict of Interests Between SPC-Based BESS and UFLS Scheme Frequency Responses. In: Németh B., Ekonomou L. (eds) Flexitranstore. ISH 2019. Lecture Notes in Electrical Engineering, vol 610. Springer, Cham
Asian-Australasian Journal of Animal Sciences (아세아태평양축산학회지)
Asian Australasian Association of Animal Production Societies (아세아태평양축산학회)
Effects of Organic Acids on Growth Performance, Gastrointestinal pH, Intestinal Microbial Populations and Immune Responses of Weaned Pigs
Li, Zheji (National Key Laboratory of Animal Nutrition, China Agricultural University) ;
Yi, Ganfeng (Twenty-second floor, office Tower 1, Henderson Center, 18 Jianguomen Nei Avenue, DongCheng District) ;
Yin, Jingdong (National Key Laboratory of Animal Nutrition, China Agricultural University) ;
Sun, Peng (National Key Laboratory of Animal Nutrition, China Agricultural University) ;
Li, Defa (National Key Laboratory of Animal Nutrition, China Agricultural University) ;
Knight, Chris (NOVUS International Inc.)
https://doi.org/10.5713/ajas.2008.70089
Two experiments were conducted to compare the effects of feeding organic acids and antibiotic growth promoters in weaned pigs. In Exp. 1, 96 nursery pigs (Large White$\times$Landrace; initial weight $7.80{\pm}0.07kg$) were randomly allotted into one of four dietary treatments. Pigs in treatment 1 were fed a complex starter diet. Treatments 2 to 4 were the same as treatment 1 but supplemented with antibiotics (200 ppm chlortetracycline plus 60 ppm Lincospectin), 0.5% potassium diformate or 0.5% dry organic acid blend ACTIVATE Starter DA (ASD). During the 4-week post-weaning period, pigs fed ASD or antibiotics had better gain (p = 0.03) and feed efficiency (p = 0.04) than pigs fed the control diet. On d 14 post-weaning, pigs fed the control diet had the lowest fecal lactobacilli count among all dietary treatments (p = 0.02), whereas pigs fed ASD or antibiotics had a trend for lower fecal E. coli count compared to the control pigs (p = 0.08). Serum insulin-like growth factor-1 (IGF-1) of pigs fed ASD did not differ from pigs fed the control diet (p>0.05) at d 14 after weaning. In Exp. 2, 24 weaned pigs (Large White$\times$Long White; initial weight $5.94{\pm}0.33kg$) were allotted into four groups and housed individually. Pigs were fed a control diet or diets supplemented with antibiotics (100 ppm colistin sulfate, 50 ppm Kitasamycin plus 60 ppm Olaquindox), 0.5% or 1% ASD. All pigs were orally challenged with E. coli $K88^+$ on d 5. During d 5 to 14 after challenge, pigs fed antibiotics, 0.5% or 1% ASD had better gain (p = 0.01) and feed efficiency (p = 0.03) than pigs fed the control diet. On d 14, compared to the control pigs, pigs fed 0.5% ASD had higher lactobacilli in the duodenum and pigs fed 1% ASD and antibiotics had a trend for higher lactobacilli in the ileum (p = 0.08). Pigs fed antibiotics, 0.5% or 1% ASD diets tended to have decreased ileal E. coli count compared to those fed the control diet (p = 0.08). Serum interleukin-6 and cortisol and digesta pH values were not affected by treatment or time. These results indicate that feeding ASD can improve the growth performance of weaning pigs, mainly via modulating intestinal microflora populations without affecting gastrointestinal pH or immune indices.
Organic Acids;Antibiotics;Weaned Pigs;Growth Performance;Gastrointestinal pH;Microbial Population
Supported by: Nature Science Foundation of China
Reduced order model of flows by time-scaling interpolation of DNS data
Tapan K. Sengupta1
Lucas Lestandi2 (ORCID: 0000-0001-8457-1131)
S. I. Haider1
Atchyut Gullapalli1 &
Mejdi Azaïez3
This article has been updated
The Correction to this article has been published in Advanced Modeling and Simulation in Engineering Sciences 2018 5:27
A new reduced order model (ROM) is proposed here for reconstructing super-critical flow past a circular cylinder and in a lid driven cavity by time-scaling the vorticity data directly. The present approach is a significant improvement over the instability-mode based approach (developed from POD modes) implemented in Sengupta et al. [Phys Rev E 91(4):043303, 2015], where the governing Stuart–Landau–Eckhaus equations are solved. In the present method, we propose a novel ROM that uses the relation between the Strouhal number (St) and the Reynolds number (Re). We provide a step-by-step approach for this new ROM for any Re; it is a general procedure that requires very limited storage of vorticity data and is extremely fast. We emphasize the scientific aspects of developing the ROM by taking data from the close proximity of the target Re to produce DNS-quality reconstruction, while the applied aspect is also shown. The donor points need not all be immediate neighbors, in which case the reconstructed solution has a correspondingly relaxed accuracy; however, one should restrict the donors to a range over which the flow behavior remains coherent. The reported work is a proof of concept utilizing the external and internal flow examples, and it can be extended to other flows characterized by appropriate Re–St data.
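As a rough illustration of the kind of time-scaling reconstruction described in the abstract, the sketch below phase-aligns donor vorticity snapshots using a Re–St relation and blends them by inverse distance in Re; both the analytic St(Re) fit and the weighting are our own simplifying assumptions, not the procedure detailed later in the paper.

```python
import numpy as np

def strouhal(Re):
    """Placeholder Re-St relation; the paper uses tabulated Re-St data instead."""
    return 0.198 * (1.0 - 19.7 / Re)   # classical Roshko-type fit, illustrative only

def reconstruct(t, Re_target, donors):
    """Interpolate vorticity at (Re_target, t) from donor DNS data.

    donors: list of (Re_d, t_d, omega_d), with t_d a 1D time array and omega_d
    the vorticity snapshots sampled at t_d. Each donor is first expressed on a
    shedding-phase axis St*t (time-scaling), then linearly blended in Re.
    """
    St_t = strouhal(Re_target)
    fields, weights = [], []
    for Re_d, t_d, omega_d in donors:
        phase_d = strouhal(Re_d) * np.asarray(t_d)    # donor phase axis
        idx = np.argmin(np.abs(phase_d - St_t * t))   # snapshot at the matching phase
        fields.append(omega_d[idx])
        weights.append(1.0 / (abs(Re_d - Re_target) + 1e-12))
    w = np.array(weights) / np.sum(weights)
    return sum(wi * fi for wi, fi in zip(w, fields))
```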
High performance computing using DNS for complex flow problems provides insight into the physical mechanisms, at the prohibitive cost of data storage, as voluminous data are created to resolve small scales in both space and time. DNS of the Navier–Stokes equation (NSE) to understand a flow generates a huge amount of data, and the major challenges of such big data are processing, storage, transfer and analysis. The central motivation here is to replace time/memory-intensive DNS for the model problems of flow past a circular cylinder and the LDC. Similar attempts are recorded in [34, 37] and other references contained therein. Memory requirements of the instability-mode based ROM in [34] come down drastically, due to the requirement of storing only a few coefficients of the SLE equations and the initial conditions. Henceforth this reference will be called SHPG for brevity.
There are numerous efforts in developing ROMs, e.g. via Koopman modes, as in [12, 31]; dynamic mode decomposition in [32]; and POD-based analysis of Reynolds-averaged Navier–Stokes (RANS) equations in [22, 23, 40]. In [9], the authors reported a low-dimensional model for 3D flow past a square cylinder using solutions of the NSE obtained by a pseudo-spectral approach. However, even using thousands of snapshots, the reconstruction error was of the order of \(30 \%\), indicating an exponential divergence between any model prediction and the actual solution outside the snapshot range. In [24], the authors used a fourth order finite difference scheme for spatial discretization of the NSE in primitive variable formulation for time-accurate simulation, for POD analysis of the flow field. The time discretization used a second order accurate, three-time-level method, which invokes a numerical extraneous mode. It was noted that with only four POD modes, the model without the pressure term gives rise to important amplitude errors which cannot be compensated by an increase in the number of modes. In naive energy-based POD approaches, researchers calculate amplitude functions of the POD representation by solving ODEs derived from the NSE by simplifying the nonlinear and pressure terms. Iollo et al. [16] have shown that this approach is inherently unstable. Thereafter many stabilization techniques have been proposed [5, 8, 10, 15] in the finite element framework. A survey of projection-based ROMs for parametric problems is given by Benner et al. [7]. POD-Galerkin continues to be an active field of research for fluid dynamics problems; it has led to recent successful applications to finite volume discretizations [19, 41, 42] in velocity–pressure formulation. Generally, this approach falls within the reduced basis framework popularized in the early 2000s [20, 28], which is presented in detail in the books of Quarteroni et al. [29, 30]. The authors in [25] have also used an adaptive approach to construct ROMs with respect to changes in parameters, by first identifying the parameters for which the error is high; a surrogate model based on an error indicator was then constructed to achieve a desired error tolerance. Recent work led by Pitton and Rozza [26, 27] has focused on applying ROMs to detect bifurcations in the context of fluid dynamics. To do so, they developed accurate ROMs and evaluated the eigenvalues of the ROM-linearized steady Navier–Stokes operator to detect bifurcations. Yet, it was shown in [18] that the singular LDC flow requires extremely accurate numerical schemes due to its very high sensitivity to numerical conditions. Consequently, in this paper, we rely on the previously established bifurcation diagram (see Refs. [17, 37] for details) to bound the ROM domain.
Other approaches have been explored, in particular relying on interpolation instead of projection. Among them, the discrete empirical interpolation method (DEIM) [6, 11] has encountered widespread success, with applications ranging from nonlinear multiparameter interpolation to hyper-reduction techniques. A new family of interpolation methods for parametric PDE problems has been developed by Amsallem and Farhat in a series of papers [1,2,3]. The Grassmann interpolation method relies on a series of projections from the Grassmann manifold of solutions onto flat vector spaces, on which usual interpolation techniques can be used; the interpolated field is then projected back onto the manifold. This approach has proved very successful for aeroelastic flows [2] and has also been combined with other ROM tools such as DEIM in [4]. Yet, this approach relies on a complex mathematical framework and requires careful tuning to be accurate, for instance in choosing the origin point of the projection. These issues have been addressed in a recent thesis by Mosquera [21], which also proposes alternative algorithms. These technical difficulties motivate the introduction of simpler, physics-based interpolation methods such as the one proposed in this paper.
The flow governed by the unsteady NSE obeys a physical dispersion relation linking each length scale (wavenumber) with a corresponding time scale (circular frequency). Thus, the ranges of time and length scales are important, even though a single St and Re are often used to describe the flow field. A multitude of length and time scales is also noted in [18] via POD modes and multiple Hopf bifurcations for the flow in the LDC. The existence of such ranges assists in developing a ROM when the donor Re's are in the same range in which the target Re resides. If one takes one or two donor points far from the range of the target Re, the presented ROM will still provide a reconstructed solution with acceptable accuracy. These aspects of multiple Hopf bifurcations and the existence of ranges of Re are highlighted in the present research, apart from developing an efficient ROM for these model problems.
For a vortex dominated flow, the time scale is defined as \(\mathrm{St}\; (= fD/U_\infty )\), relating the dominant physical frequency (f) with the flow velocity (\(U_\infty \)) and the length scale (D). However, the flow does not display a single frequency, as one notices several peaks for both flows in Fig. 1. The time series of the vorticity data at the indicated locations are shown in the left hand side frames. While the flow past a circular cylinder displays a single dominant peak with side bands in the spectrum (shown in the right hand side frames), the flow inside the LDC clearly demonstrates multiple peaks. This property has been explored thoroughly for the LDC in [17] to explain the roles of multiple POD modes.
Specifically for flow past circular cylinders, an empirical relation of the form
$$\begin{aligned} \mathrm{St} = St^* + m /\sqrt{Re} \end{aligned}$$
has been provided in [13] from experimental data for the variation of St with Re in the wide range \(47< \mathrm{Re} < 2\times 10^5\), with the values of \(St^*\) and m being different for different ranges of Re. Instead of using such an algebraic additive relationship, here we propose a power law relation and test it in the range \(55 \le \mathrm{Re} \le 200\) for the purpose of demonstration. Consequently, a relationship between Re and St will be proposed, in order to perform interpolation on the vorticity time series.
The existence of a unique St for a fixed value of Re, as embodied in Eq. (1), implies that employing simple-minded interpolation strategies like Lagrange interpolation will display unphysical wave-packets in the reconstructed solution, as the time scales are a function of Re at the target. This is clearly demonstrated in Fig. 2. The proposed ROM tackles this issue with the time-scaling technique presented in this article.
The paper is organized in the following manner. In the next section, the governing equations employed for DNS and the associated auxiliary conditions are described. In the "Need for time scaling" section, the proposed time-scaling interpolation algorithm is presented. The time-scaled ROM of the vorticity field is applied to two complex flows in the "Time-scaled ROM applied to the flow past a cylinder" section. Summary and conclusions are provided in the last section.
Fig. 1 DNS time series and their associated FFTs are shown for (a) the flow inside the LDC and (b) the external flow past a cylinder, at the indicated points in the flow
Governing equations and numerical methods
DNS of the 2D flow is carried out by solving NSE in stream function-vorticity formulation given by,
$$\nabla^2 \psi = -\omega$$
$$\frac{\partial \omega}{\partial t} + (\vec{V} \cdot \nabla)\, \omega = \frac{1}{Re}\nabla^2 \omega$$
where \(\omega \) is the only non-zero, out-of-plane component of vorticity for the 2D problem considered. The velocity is related to the stream function as \(\vec {V} = \nabla \times \vec {\Psi }\), where \(\vec {\Psi } = [0\;0\;\psi ]^T\), with D and \(U_\infty \) used as the length and velocity scales for non-dimensionalization. Equations (2) and (3) are solved in orthogonal curvilinear coordinates \((\xi , \eta )\), and the governing equations in the transformed plane are
$$\frac{\partial}{\partial \xi}\left(\frac{h_2}{h_1}\frac{\partial \psi}{\partial \xi}\right) + \frac{\partial}{\partial \eta}\left(\frac{h_1}{h_2}\frac{\partial \psi}{\partial \eta}\right) = -h_1 h_2 \omega$$
$$h_1 h_2\frac{\partial \omega}{\partial t} + h_2 u \frac{\partial \omega}{\partial \xi} + h_1 v \frac{\partial \omega}{\partial \eta} = \frac{1}{Re}\left\{\frac{\partial}{\partial \xi}\left(\frac{h_2}{h_1}\frac{\partial \omega}{\partial \xi}\right) + \frac{\partial}{\partial \eta}\left(\frac{h_1}{h_2}\frac{\partial \omega}{\partial \eta}\right)\right\}$$
where \(h_1\) and \(h_2\) are the scale factors of the transformation, given by \( h_1^2 = x^2_\xi + y^2_\xi \) and \( h_2^2 = x^2_\eta + y^2_\eta \). The coordinate \(\xi \) is along the azimuthal direction for the flow past the cylinder and along the x-direction for the flow inside the LDC, while the coordinate \(\eta \) is in the wall-normal direction for the flow past the cylinder and along the y-direction for the flow inside the LDC. The no-slip boundary condition is applied on the wall for both flows via
$$\left(\frac{\partial \psi}{\partial \eta}\right)_{body} = 0 \quad \mathrm{and} \quad \psi = \text{constant}$$
For the flow inside the LDC, the corresponding conditions are given by the same equations, except that along the lid the right hand side of the first condition is \(U_{\infty }\). These conditions are used to solve Eq. (4) and to obtain the wall vorticity \(\omega _b\), which in turn provides the wall boundary condition for Eq. (5). At the outer boundary of the domain for the flow past the cylinder, a uniform flow boundary condition (Dirichlet) is provided at the inflow and a convective condition (Sommerfeld) is provided for the radial velocity at the outflow.
Fig. 2 Direct Lagrange interpolation of the DNS disturbance vorticity time series between different Re causes wave packets in the cylinder wake at the point (0.504, 0.0)
The convection terms of Eq. (5) are discretized using the high accuracy compact OUCS3 scheme for the flow past the cylinder and the combined compact difference (CCD) scheme for the flow inside the LDC, both of which provide near-spectral accuracy for the non-periodic evaluation of the convective acceleration terms, as explained in detail in [33]. A central differencing scheme is used to discretize the Laplacian operator of Eqs. (4) and (5) for the circular cylinder, and the CCD scheme is used for the flow inside the LDC. An optimized four-stage, third-order Runge–Kutta (OCRK3) dispersion relation preserving method [36] is used for time marching. Equation (4) is solved using the Bi-CGSTAB method given in [44].
Fig. 3 Variation of the equilibrium amplitude of disturbance vorticity with Re, indicating the segments of Re with respect to the bifurcation sequences for (a) the flow in the LDC and (b) the flow past a cylinder
These same methods have been used earlier for validating and computing the respective flows in [37] and SHPG for the flow over the cylinder, and in [35, 38, 39] for the flow inside the LDC. Here the simulations are performed on a fine grid, with \((1001 \times 401)\) points in the \(\xi \) and \(\eta \) directions for the flow past the circular cylinder, and \((257 \times 257)\) points for the LDC problem.
Need for time scaling
The proposed ROM aims at interpolating vorticity fields at a target Re (\(Re_t\)) from precomputed DNS at different donor Re's. If Lagrange interpolation is used directly, it will not work due to the variation of St with Re. Even data from close-by donor Reynolds numbers will, upon interpolation, produce wave-packets for the flow past a cylinder, as shown in Fig. 2. In this figure, results are shown for \(\hbox {Re} = 83\), as obtained by DNS of the NSE (shown by solid lines) and as obtained by Lagrange interpolation of the NSE solution donor data for \(\hbox {Re} = 78\), 80, 86 and 90.
We have also noted in SHPG that the flow past a circular cylinder suffers multiple Hopf bifurcations (shown experimentally in [14, 43]), as do the flow inside the LDC and the flow over the cylinder in [38]. Hence the accuracy of reconstruction naturally demands that the target and donor Re's be in the same segment of Fig. 3, so that the flow fields are dynamically similar. In Fig. 3, the equilibrium amplitude of disturbance vorticity is plotted as a function of Re for both flows. The equilibrium amplitude refers to the value of the disturbance quantity which settles down in a quasi-periodic manner, due to nonlinear saturation after the primary and secondary instabilities. The presence of multiple quadratic segments in Fig. 3 indicates multiple bifurcations originating at different Re's. Thus, it is imperative that one identifies the target Re in the same segment as the donor Re's for DNS-quality reconstruction, for the flow past a circular cylinder as in SHPG and for the flow inside the LDC in [17]. In each of these sectors of Re, the flow behaves similarly and the (St, Re)-relation is distinct. It is to be emphasized that the present sets of simulations are performed using highly accurate dispersion relation preserving numerical methods.
The physical frequency (f) varies slowly with Re, and superposition of the time series of donor data causes the beat phenomenon observed when waves of slightly different frequencies are superposed. Thus, knowledge of the variation of St with Re is imperative in order to scale out the f-dependence of the donor data before Lagrange interpolation, and this is one of the central aspects of the present work. After obtaining frequency-independent data at the target Re, one can put back the correct f-dependence via its variation with Re at the target Reynolds number.
In Fig. 3a, the range of Re from 8000 to 12,000 for the LDC is subdivided according to the bifurcation sequence uncovered in [18] using a \((257 \times 257)\) grid. For the purpose of interpolation, four ranges are defined, with the first one given by \({\mathtt {R}}_{\mathtt {I}}=[8020:8660]\); this corresponds to an externally excited range, which shows rapid variation of the amplitude, nearly culminating in a vertical fall at the onset of solution bifurcation. The CCD scheme used for the flow in the LDC has near-spectral accuracy, as explained in [35, 39], and the onset of unsteadiness is due to aliasing error predominant near the top right corner of the LDC, while truncation, round-off and dispersion errors are negligibly small. To avoid the issue of low numerical excitation in the present work, a pulsating vortex \(\omega _s\) is placed at \(x_0 = 0.015625\) and \(y_0 = 0.984375\), as given in the following,
$$\begin{aligned} \omega _s = A_{0} [1 + \cos (\pi (r- r_0)/0.0221)] \sin (2\pi f_0 t)\;\;\; \mathrm{for}\; (r -r_0) \le 0.0221 \end{aligned}$$
where, in the results presented here, we have taken \(f_0 = 0.41\) and a single amplitude \(A_0 = 1.0\).
For the next two ranges, no explicit excitation is needed (i.e., \(A_0 = 0\)) to achieve a stable limit cycle. \({\mathtt {R}}_{\mathtt {II}} = [8660:9350]\) and \({\mathtt {R}}_{\mathtt {III}} = [9450:10{,}600]\) are ranges for which the amplitude (\(A_e\)) follows a square root law; they are nevertheless distinct because of the peculiar behavior of the flow in the vicinity of Re \(= 9400\), which indicates the onset of the second Hopf bifurcation. Finally, \({\mathtt {R}}_{\mathtt {IV}} = [10{,}600:12{,}000]\) is difficult for interpolation, as one can see two branches in this range, one of which is unstable (U-branch) with respect to any minuscule vortical excitation, as opposed to the stable one (S-branch). The flow past the cylinder is also divided into ranges, as shown in Fig. 3b. The range of Re from 55 to 130 is subdivided according to the bifurcation sequences as: \(55 \le \mathrm{Re} \le 68\); \(68 \le \mathrm{Re} \le 78\); \(78 \le \mathrm{Re} \le 90\); \(90 \le \mathrm{Re} \le 100\) and \(100 \le \mathrm{Re} \le 130\). For example, to reconstruct the solution for Re = 83, we have used data in the range \(78 \le Re \le 90\) for the most accurate ROM.
Formulation and modeling of ROM
In Eq. (1), a relation between St and Re is shown for a wide range of the latter. In the proposed ROM, we do not need DNS data at the target Re to train the ROM, as was the case in SHPG. This is a significant improvement over the previous approach. One should scale out the dependence of the DNS data on f (or St) for any Re by the proposed power law scaling given below,
$$\begin{aligned} \frac{St(Re_s)}{St(Re_b)}=\left( \frac{Re_b}{Re_s}\right) ^n \end{aligned}$$
The exponent n depends upon the segment of Re shown in Fig. 3, with \(Re_b\) denoting a base Reynolds number in each segment. In this equation, any donor Re is indicated as \(Re_s\). Thus, in a cluster of four donor Re's, one is identified as \(Re_b\) and the other three are identified as \(Re_s\). From Eq. (6) one identifies n by the following,
$$\begin{aligned} n= \frac{ \mathrm {log} (St(Re_s)/St(Re_b))}{\mathrm {log}(Re_b/Re_s)} \end{aligned}$$
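As a minimal illustration of Eq. (7), the sketch below computes n from one donor and the base Reynolds number; the Strouhal values used here are purely hypothetical placeholders, not values from the paper.

```python
import numpy as np

def scaling_exponent(st_s, st_b, re_s, re_b):
    """Exponent n of Eq. (7), from one donor pair (Re_s, St_s) and the base pair (Re_b, St_b)."""
    return np.log(st_s / st_b) / np.log(re_b / re_s)

# Hypothetical Strouhal numbers for a base Re_b = 80 and a donor Re_s = 86 (illustration only)
n = scaling_exponent(st_s=0.155, st_b=0.152, re_s=86.0, re_b=80.0)
print(f"n = {n:.3f}")  # negative here, since St grows with Re while Re_b/Re_s < 1
```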
Table 1 Scaling constant and base \(Re_{b}\) for different ranges of \(Re_{s}\)
The scaling exponent n is a characteristic number of each segment and of \(Re_b\). In Table 1, we show five segments and the corresponding n, along with the \(Re_b\) used in each range. For the flow past a circular cylinder, the value of n is obtained with a tolerance of \(\pm 0.02\) for all Re's in the respective segment. As discussed in [18], f is almost constant on each segment for the LDC, so that we can set \(n=0\) individually in each segment. Having fixed n for any \(Re_s\) in the segment of choice, the time-scaling is performed by the following,
$$\begin{aligned} t_s= t_b\biggl (\frac{Re_b}{Re_s}\biggr )^n +t_0(Re_b,Re_s) \end{aligned}$$
To interpret Eq. (8), we plot the disturbance vorticity for the flow past a cylinder at a fixed location on the wake centre-line (\(x =0.504\), \(y= 0\)) in Fig. 4. The same format of time scaling should apply to many other flows, including the internal flow inside the LDC. It is noted that there exists a time-shift between the maxima of these two time series, shown as \(t_0\) in the figure. Let us denote the time for \(Re_b\) as \(t_b\); then, to apply the proposed time-scaling to the data for \(Re_s\), we change the physical time of \(Re_s\) by the expression given in Eq. (8), whose left hand side is the scaled time. The shift \(t_0\) is needed to collapse the two time series for \(Re_s\) and \(Re_b\), so that their maxima coincide. Thus, having fixed the base Reynolds number in each window of the bifurcation sequence, we can obtain the time-scaled abscissa for each \(Re_s\) in that range.
Fig. 4 Variation of the disturbance vorticity at the point (0.504, 0.0) with \(t_b\) and \(t_{s}\) for \(Re_b\) and \(Re_{s}\), respectively, for the pair \(Re_b = 80\) and \(Re_s = 86\) in the bifurcation sequence \(78 \le Re \le 90\)
The search for \(t_0\) is performed in such a way that the phases of both the \(Re_b\) and \(Re_s\) signals match accurately. One should note that the effects of \(t_0\) are significant, despite its very small value. There are many ways to compute \(t_0\), but it must be estimated with very high accuracy. A specific way is to view the time series in the spectral plane and to use the imaginary part of the FFT as the accuracy parameter, as described in the next subsection.
Computing the initial time-shift (\(t_0\))
The present method is both accurate and computationally cheap, since it relies on the fast Fourier transform (FFT) provided in the numpy library. An FFT is applied to the vorticity time series at one relevant spatial point. On one hand, for the LDC problem it has been shown in [18] that the point (0.95, 0.95) near the top right corner is relevant for monitoring the flow behavior. On the other hand, for the flow past a circular cylinder, the point (0.504, 0.0) in the cylinder wake is adequate. For each sampled frequency, a complex value (\(z(f)=A e^{i\theta }\)) is obtained, consisting of the modulus (A), which corresponds to the amplitude, and a phase (\(\theta \)). Consequently, we can recover the phase associated with the leading frequency (L) for both signals, \(\theta _b\) and \(\theta _s\). Finally, the time shift of signal s with respect to signal b is given by
$$\begin{aligned} t_0 = \frac{\theta _b^L - \theta _s^L}{2\pi f^L} \end{aligned}$$
Here, \(f^L\) is the leading frequency in the amplitude spectrum of both signals (since \(t_0\) is computed only after the frequency scaling has been performed), and \(\theta \) is the angle of the complex FFT value associated with the leading frequency for signal b or s. This method yields reliable and accurate values of \(t_0\), as the ROM accuracy will prove in the following sections.
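A minimal numpy sketch of this phase-based estimate is given below. It assumes both vorticity series are sampled at the same instants, have equal length, and have already been frequency-scaled so that they share the leading frequency; the function and variable names are ours, not from the authors' code.

```python
import numpy as np

def initial_time_shift(w_b, w_s, dt):
    """Estimate t_0 of signal s relative to base signal b from the phase
    of the leading FFT peak (both signals already share that frequency)."""
    W_b = np.fft.rfft(w_b - np.mean(w_b))
    W_s = np.fft.rfft(w_s - np.mean(w_s))
    freqs = np.fft.rfftfreq(len(w_b), d=dt)
    k = np.argmax(np.abs(W_b[1:])) + 1          # index of the leading (non-zero) frequency
    f_L = freqs[k]
    theta_b, theta_s = np.angle(W_b[k]), np.angle(W_s[k])
    return (theta_b - theta_s) / (2.0 * np.pi * f_L)
```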
Time-scaling ROM algorithm for discrete DNS data
In this subsection, a brief recap of the time shifting procedure for building the ROM is given for the simple case of discrete signals \(\omega _b(t_i)\) and \(\omega _s(t_i)\), with \(\{t_i\}_{i=1}^N\) indicating the time discretization. It can be applied directly to any space-time dependent field, with a reference signal chosen at a reference point. The ROM is then built as follows:
Perform the algorithm (Algorithm 1) on all signals, except the base donor signal, in order to scale their oscillations.
Perform Lagrange interpolation on the scaled donor signals at target \(Re_t\) for all discrete times \(t_i\).
$$\begin{aligned} {\bar{\omega }}^\star (t_i)=\sum _{s \in \mathrm {donors}} {\hat{\omega }}_s(t_i)l_s(Re_t) \end{aligned}$$
where \({\bar{\omega }}^\star \) is the target signal and \(l_s\) are the Lagrange interpolation polynomials.
Scale \({\bar{\omega }}^\star \) back to the physical time with \({t}^\star =\frac{{t}-t_0(Re_{t})}{(Re_b/Re_{t})^n}\).
The last step of the ROM is to scale \({\bar{\omega }}^{\star}(t)\) back to the physical time \(t^{\star }\). Indeed, the interpolation is performed at the grid points of t, which is actually the time-scaled representation of the target vorticity field; the scale-back operation then associates \({\bar{\omega }}^\star \) with the scaled-back time \({t}^{\star }\). One should note that the final time domain is cropped according to the information lost after each shift; despite this, the discrete time points match the original discretization. A minimal sketch of these steps is given below.
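The following Python sketch illustrates the procedure for a single spatial point. It assumes all donor series are sampled on the same physical time grid t, that the per-donor shifts t0 have already been obtained (e.g. with the FFT estimate above, with t0 = 0 for the base donor), and that the target shift t0(Re_t) is interpolated from the donor shifts; all names are ours and the resampling choice (linear interpolation with edge cropping) is an assumption, not the authors' implementation.

```python
import numpy as np

def lagrange_weights(re_donors, re_target):
    """Lagrange basis polynomials l_s evaluated at the target Reynolds number."""
    re_donors = np.asarray(re_donors, dtype=float)
    w = np.ones_like(re_donors)
    for s, re_s in enumerate(re_donors):
        for r, re_r in enumerate(re_donors):
            if r != s:
                w[s] *= (re_target - re_r) / (re_s - re_r)
    return w

def rom_reconstruct(t, omega_donors, re_donors, t0_donors, re_b, n, re_target, t0_target):
    """Time-scaling ROM sketch for one spatial point.

    omega_donors : list of vorticity time series, one per donor Re, sampled at times t
    t0_donors    : initial time shifts of the donors w.r.t. the base Re_b (0 for the base)
    t0_target    : shift at the target Re, assumed interpolated from t0_donors
    """
    # 1) Map each donor onto the common (base) scaled-time axis, Eq. (8), and resample on t
    scaled = []
    for w, re_s, t0 in zip(omega_donors, re_donors, t0_donors):
        t_s = t * (re_b / re_s) ** n + t0
        scaled.append(np.interp(t, t_s, w))      # edges are effectively cropped here
    # 2) Lagrange interpolation of the scaled signals at the target Re
    l = lagrange_weights(re_donors, re_target)
    omega_star = sum(li * wi for li, wi in zip(l, scaled))
    # 3) Scale the common grid back to the physical time of the target Re
    t_star = (t - t0_target) / (re_b / re_target) ** n
    return t_star, omega_star
```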
Time-shifting ROM applied to the LDC flow
As we have shown in [18], the main frequency of the LDC flow is nearly constant across large ranges of Re, as shown here in Fig. 5. Thus, the time-scaling procedure simplifies to a time-shifting procedure with \(n=0\), resulting in \({t}_s={t}-t_0\) for the donor and target points, which have the same frequency in Fig. 5.
Following Algorithm 1 given above, we have obtained the vorticity field for \(\hbox {Re} = 10{,}040\), using donor points at \(\hbox {Re} = 10{,}000\), 10,020, 10,060 and 10,080. From the reconstructed ROM data, we show the vorticity time series in Fig. 6 for four representative points near the four corners. Despite the change in the vorticity magnitude by two orders of magnitude, the accuracy of reconstruction is excellent and the time series match almost exactly.
Fig. 5 Frequency variation for \(\hbox {Re}=[8700,12{,}000]\) for the first three leading frequencies of the vorticity time series at the point (0.95, 0.95), obtained for the last 50 periods. The dotted lines indicate the presence of multiple dominating peaks in the spectrum
Fig. 6 Reconstructed vorticity field at four points in the cavity: (a) near the bottom left corner, \(x = 0.05\), \(y = 0.05\); (b) near the top left corner, \(x = 0.05\), \(y = 0.95\); (c) near the bottom right corner, \(x =0.95\), \(y=0.05\); and (d) near the top right corner, \(x =0.95\), \(y = 0.95\), for the target Reynolds number \(Re=10{,}040\) with donor points at Re = 10,000, 10,020, 10,060 and 10,080
In Fig. 7, the reconstructed vorticity contours inside the LDC are shown for Re \(=10{,}040\) at the indicated time \(t=1900.199\) by solid lines, with the same donor Re's used in the ROM following Algorithm 1. The corresponding solution obtained by DNS of the NSE for Re = 10,040 is shown in the same figure by dotted lines. It is readily observed that the exact and ROM solutions overlap each other over the full domain, with a relative RMS error of \(7.1\times 10^{-4}\).
Fig. 7 Disturbance vorticity contour plot for the reconstructed vorticity (solid lines) and the DNS vorticity (dotted lines) field at nondimensional time \(t=1900.199\) for the target Re = 10,040, with donor points at Re = 10,000, 10,020, 10,060 and 10,080
The above exercise shows the special case of a flow which is multi-periodic in time, yet whose predominant frequency remains constant over different ranges of Re, allowing one to use the special version of time scaling with power law exponent \(n =0\) in Eqs. (6) and (7). Thus, one needs simply to apply a time-shift and reconstruct by the methods described in the "Computing the initial time-shift (\(t_0\))" and "Time-scaling ROM algorithm for discrete DNS data" subsections.
Fig. 8 Reconstructed vorticity field at the same four points in the LDC as in Fig. 6, for a target \(\hbox {Re} = 9600\) with donor points at \(\hbox {Re} = 9350\), 9500, 9800 and 10,000
Next, the ROM is applied for \(\hbox {Re} = 9600\), with donor points at \(\hbox {Re} = 9350\), 9500, 9800 and 10,000. The choice of this second target Re for the LDC is made on purpose, as the bifurcation diagram in Fig. 3a shows discontinuities in the equilibrium amplitude at the bounds of \(R_{III}\), near \(\hbox {Re} = 9400\) and 10,600, close to which the chosen donors lie. The interpolated vorticity time series are compared with direct simulation results in Fig. 8, at the same sampling points used in Fig. 6. Once again the match between the interpolated results and the DNS data is excellent, with a very low RMS error of \(5.6 \times 10^{-4}\).
Fig. 9 Disturbance vorticity contour plot for the reconstructed vorticity (solid lines) and the DNS vorticity (dotted lines) field at nondimensional time \(t = 1900.199\) for the target \(\hbox {Re} = 9600\), with donor points at Re = 9350, 9500, 9800 and 10,000
In Fig. 9, the interpolated vorticity contours for \(\hbox {Re} = 9600\) are compared with those computed directly from the NSE, to show that the interpolation works globally in the flow field and not merely at the chosen sampling points. In this flow field the power law exponent is zero, and the strength of the interpolation lies in obtaining the initial time shift (\(t_0\)) via Algorithm 1, from the FFT of the donor point vorticity with respect to the chosen baseline Re.
Regarding the CPU time gain, the full order model (FOM) typically requires 24 h to reach a stable limit cycle and capture 100 cycles. The offline phase requires four samples per range, while the online phase is very quick. Indeed, the time-scaling itself is run on only one time series, at the point (0.95, 0.95), which is negligible compared to the interpolation step performed at each of the \(257 \times 257\) grid points. All considered, for \(t\in [1900,1940]\) with a sampling rate of 0.2, the online phase requires 10 s of CPU time. This means that the cost of running one query of the ROM is approximately 0.01% of the full order model:
$$\begin{aligned} \frac{\mathrm{online \;1\; query}}{\mathrm{FOM\; 1\; query}}= \frac{10\,s}{24\,h} \simeq 0.01\% \end{aligned}$$
Due to the very small number of donors required, the ROM offline phase is relatively cheap as compared with other ROMs, with a break-even value of about four FOM runs:
$$\begin{aligned} \frac{\mathrm{offline} + \mathrm{online\; time}}{\mathrm{FOM \; time}} = \frac{4\times 24\,h + 10\,s}{24\,h} \simeq 4 \end{aligned}$$
This last comment should be tempered by the need for a pre-established bifurcation diagram, which usually requires tens of FOM runs. If these runs are saved, they can be used directly as input to the ROM, thus removing the need for an actual offline phase. These CPU time gain estimates are also valid for the flow past a circular cylinder presented in the next section, as the orders of magnitude are the same.
In the following, we study the case of the flow past a circular cylinder to show the efficacy of the proposed time-scaling algorithm. For this flow one also notices the presence of multiple time scales, but with a predominant frequency characterized by St, which follows the power law given by Eq. (6) with a nonzero exponent n.
Time-scaled ROM applied to the flow past a cylinder
The time-scaling relation and the corresponding power law exponent of Eq. (7) are applicable here for the ROM, with \(\omega \) obtained by DNS. The time-scaled interpolations of the ROM for the disturbance vorticity for different combinations of donor points, as indicated in Table 2, are obtained, and the root mean square (RMS) errors with respect to the DNS data, summed over all the points in the domain, are compiled in the table. Case I in the table corresponds to donor points at \(\hbox {Re} = 78\), 80, 86 and 90, which is noted as the most accurate ROM reconstruction for \(\hbox {Re} =83\) based on the RMS error. When we choose the donors \(\hbox {Re} = 55\), 80, 86 and 130 for Case V in Table 2, the RMS error is again low, as compared to cases where only one donor point is taken from the segment containing the target Re. As noted before, for higher accuracy one must choose donor points from the same segment as the target Re, as shown quantitatively in Table 2.
Table 2 RMS error estimates of interpolation for \(\hbox {Re} = 83\)
We draw attention to the error estimates provided in Table 2 for different combinations of donor Re's. It is evident from the table that the best result is obtained when all four donor points are in the same segment as the target Re, as in Case I. In Cases II to IV, we have taken the lowest donor Re progressively farther to the left, and the RMS error increases as the smallest donor Re is lowered. In Case V, the extreme Re's are chosen as 55 and 130, and yet the RMS error is acceptable, as two of the donor Re's belong to the segment of the target Re. In contrast, for Case VI only a single donor Re belongs to the same segment, resulting in an RMS error almost ten times that of Case V. The worst case in Table 2 (Case VII) occurs when all the donor Re's are outside the target Re segment. This justifies the scientific basis of the adopted ROM, keeping in view the various ranges of Re punctuated by the Hopf bifurcations shown in Fig. 3b.
The role of \(t_0\) is also investigated here for \(\omega '\) (the disturbance vorticity field), and the variation of \(t_0\) with Re is shown in Fig. 10 in the subrange \(55 \le \mathrm{Re} \le 130\). Here, we obtain \(t_0\) for the data sets (\(\hbox {Re}= 55, 80, 86, 130\)) and (Re \(= 78, 80, 86, 90\)), as indicated separately in the figure. Each of the discrete data points is marked in the figure with its Re and the necessary time shift in brackets, with \(Re_b = 80\). It is noted that finding a single \(t_0\) for \(\omega '\) is far easier and less time consuming in the present version of the ROM than in any method using POD or instability modes, which would require finding a different \(t_0\) for each retained mode.
Fig. 10 Variation of \(t_0\) with \(Re_s\) for \(Re_b = 80\) for Case I (solid line) and Case V (dashed line) of Table 2. Shown in parametric form are the pairs of Reynolds number and corresponding optimal \(t_0\)
In this method, \(\omega '\) is reconstructed using the identical procedure of interpolation after time-scaling and initial time-shift, with Eq. (8) applied directly to \(\omega \) obtained by DNS. Thus, this procedure circumvents the need for the time-consuming method of snapshots to obtain POD modes, which is required for any POD-based ROM, e.g. POD-Galerkin or interpolated POD. Unlike the method of solving the SLE equations given in SHPG, the ROM proposed in this paper requires storage of at most four DNS data sets in each segment for the most accurate reconstruction. If one is willing to settle for lower accuracy, then the requirement can be reduced to performing DNS for only two Re's in each segment of Fig. 3. Hence this ROM is not memory intensive and it is faster.
Figure 11a, b show the comparison between DNS and the time-scaled interpolated \(\omega '\) at two different points for Re = 70, located along the wake centre-line at (0.504, 0.0) and at (1.014, 0.0), respectively. The excellent match with the DNS data, even in the transient state, proves the efficacy of the time-scaling interpolation technique applied to vorticity data. It is to be noted that despite the presence of a dominant St, the physical variables demonstrate multiple time scales, as discussed in the introduction and shown in Fig. 1.
Fig. 11 Reconstructed disturbance vorticity with time-scaling interpolation: (a) \(\hbox {Re} = 70\) using \(\hbox {Re} = 68\), 72, and 76 at (0.504, 0.0); (b) the same at (1.104, 0.0); (c) \(\hbox {Re} = 83\) using \(\hbox {Re} = 78\), 80, 86 and 90 at (0.504, 0.0); and (d) the same at (1.104, 0.0). Within each subfigure, the top frame shows the comparison at early times, while the bottom frame shows the comparison at later times
The cases for \(\hbox {Re}= 83\) are shown in Fig. 11c, d, which compare the disturbance vorticity at the same two locations with the DNS data. Once again, the reconstructed ROM solution is indistinguishable from the corresponding DNS data. Thus, it is evident that a spectrum with multiple peaks can be handled by the presented approach of time-scaling with an initial time-shift, utilizing the power law between Re and St.
Summary and conclusion
Here, we have proposed a time-scaled ROM for reconstructing the super-critical flow past a circular cylinder and the flow inside a LDC, using time-scaled Lagrange interpolation of vorticity data obtained by DNS at donor Re's located largely in the neighbourhood of the target Re. In performing the interpolation, a time-scaling is applied following Eq. (8), along with an initial time-shift, as a direct consequence of the (St, Re)-relations given in Eqs. (6) and (7).
The proposed method differs from the ROM based on instability modes in SHPG with respect to speed, accuracy and generality of application. ROM reconstruction at a target Re is of DNS quality if all the donor points belong to the same Re subrange, identified by the multiple Hopf bifurcations in Fig. 3a for the flow inside the LDC in the range \(8700 \le Re \le 12{,}000\), and in Fig. 3b and Table 1 for the flow past a circular cylinder in the range \(55 \le Re \le 130\).
The data requirement of the present ROM is at most four Re's located in the same subrange. If one performs the ROM with only three Re's, then the reconstructed data are of slightly lower, but still very acceptable, quality (not shown here). The present procedure provides a scientific and applied basis for the ROM, depending upon the number and location of the donor points relative to the target Re. The formulation of this procedure does not require the introduction of sophisticated mathematical tools, contrary to Grassmann manifold interpolation, but rather focuses on the physics to enable an accurate low order model.
In the instability-mode based ROM of SHPG, one stores only the coefficients of the SLE equations. However, one needs to obtain optimal initial conditions for the stiff SLE equations and is restricted to using the first five POD modes or three instability modes; due to the difficulty of finding optimal initial conditions for the SLE equations, only three instability modes were used in SHPG. In the present approach, one finds the initial time-shift (\(t_0\)) of the donor vorticity data with respect to a base Reynolds number. This time shift can be obtained by the FFT-based approach proposed here.
The present study opens the scope for data mining in computational fluid dynamics. DNS of the NSE produces massive amounts of data, which can be used economically to predict the flow behavior of dynamical systems dominated by single or multiple peaks in the spectrum. The proposed ROM can be used at any arbitrary Re on demand, with a limited number of DNS performed at neighbouring Re's. The novel procedure proposed here has been tested for the internal flow inside a LDC and the external flow over a circular cylinder, as proofs of concept.
Acknowledgements
The authors acknowledge the support provided to the second author from the Raman-Charpak Fellowship by CEFIPRA which made his visit to HPCL, IIT Kanpur possible. This work reports partly the results obtained during the visit.
Amsallem D, Cortial J, Carlberg K, Farhat C. A method for interpolating on manifolds structural dynamics reduced-order models. Int J Numer Methods Eng. 2009;80:1241–58.
Amsallem D, Farhat C. Interpolation method for adapting reduced-order models and application to aeroelasticity. AIAA J. 2008;46:1803–13.
Amsallem D, Farhat C. An online method for interpolating linear parametric reduced-order models. SIAM J Sci Comput. 2011;33:2169–98.
Amsallem D, Zahr MJ, Washabaugh K. Fast local reduced basis updates for the efficient reduction of nonlinear systems with hyper-reduction. Adv Comput Math. 2015;41:1187–230.
Baiges J, Codina R, Idelsohn S. Explicit reduced-order models for the stabilized finite element approximation of the incompressible Navier-Stokes equations. Int J Numer Methods Fluids. 2013;72:1219–43.
Barrault M, Maday Y, Nguyen NC, Patera AT. An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations. C R Math. 2004;339(9):667–72.
Benner P, Gugercin S, Willcox K. A survey of projection-based model reduction methods for parametric dynamical systems. SIAM Rev. 2015;57:483–531.
Bergmann M, Bruneau C-H, Iollo A. Enablers for robust POD models. J Comput Phys. 2009;228:516–38.
Buffoni M, Camarri S, Iollo A, Salvetti MV. Low-dimensional modelling of a confined three-dimensional wake flow. J Fluid Mech. 2006;569:141.
Caiazzo A, Iliescu T, John V, Schyschlowa S. A numerical investigation of velocity-pressure reduced order models for incompressible flows. J Comput Phys. 2014;259:598–616.
Chaturantabut S. Nonlinear model reduction via discrete empirical interpolation. Ph.D. Thesis Rice Univ., Houston, Texas. 2011.
Chen K, Tu JH, Rowley CW. Variants of dynamic mode decomposition: boundary condition, Koopman, and Fourier analyses. J Nonlinear Sci. 2012;22:887.
Fey U, König M, Eckelmann H. A new Strouhal-Reynolds-number relationship for the circular cylinder in the range \(47 < Re < 2\times 10^5\). Phys Fluids Lett. 1998;10:1547.
Homann F. Einfluss grosser Zähigkeit bei Strömung um Zylinder. Forsch. auf dem Gebiete des Ingenieurwesens. 1936;7:1–10.
Iliescu T, Wang Z. Variational multiscale proper orthogonal decomposition: Navier-Stokes equations. Numer Methods Partial Differ Eq. 2014;30:641–63.
Iollo A, Lanteri S, Désidéri J-A. Stability properties of POD-Galerkin approximations for the compressible Navier-Stokes equations. Theor Comput Fluid Dyn. 2000;13:377–96.
Lestandi L, Bhaumik S, Sengupta TK, Avatar GRKC, Azaiez M. POD applied to numerical study of unsteady flow inside lid-driven cavity. J Math Study. 2018;51:150–76.
Lestandi L, Bhaumik S, Avatar GRKC, Azaiez M, Sengupta TK. Multiple Hopf bifurcations and flow dynamics inside a 2D singular lid driven cavity. Comput Fluid. 2018;166:86–103.
Lorenzi S, Cammi A, Luzzi L, Rozza G. POD-Galerkin method for finite volume approximation of Navier-Stokes and RANS equations. Comput Methods Appl Mech Eng. 2016;311:151–79.
Machiels L, Maday Y, Patera AT, Rovas DV. Blackbox reduced-basis output bound methods for shape optimization. In: Proceedings of the 12th international conference on domain decomposition. 2000. p. 429–36.
Mosquera Meza R. Interpolation sur les variétés grassmaniennes et application à la réduction de modèles en mécanique. Thesis: Univ. La Rochelle; 2018.
Morzynski M, Afanasiev K, Thiele F. Solution of the eigenvalue problems resulting from global nonparallel flow stability analysis. Comput Methods Appl Mech Eng. 1999;169:161–76.
Noack BR, Afanasiev K, Morzynski M, Tadmor G, Thiele F. A hierarchy of low-dimensional models for the transient and post-transient cylinder wake. J Fluid Mech. 2003;497:335–63.
Noack BR, Papas P, Monkewitz PA. The need for a pressure-term representation in empirical Galerkin models of incompressible shear flows. J Fluid Mech. 2005;523:339–65.
Paul-Dubois-Taine A, Amsallem D. An adaptive and efficient greedy procedure for the optimal training of parametric reduced-order models. Int J Numer Methods Eng. 2015;102:1262.
Pitton G, Quaini A, Rozza G. Computational reduction strategies for the detection of steady bifurcations in incompressible fluid-dynamics: applications to Coanda effect in cardiology. J Comput Phys. 2017;344:534–57.
Pitton G, Rozza G. On the application of reduced basis methods to Bifurcation problems in incompressible fluid dynamics. J Sci Comput. 2017;73:157–77.
Prud'homme C, Rovas DV, Veroy K, Machiels L, Maday Y, Patera AT, Turinici G. Reliable real-time solution of parametrized partial differential equations: reduced-basis output bound methods. J Fluids Eng. 2002;124:70–80.
Quarteroni A, Manzoni A, Negri F. Reduced basis methods for partial differential equations. New York: Springer; 2016.
Quarteroni A, Rozza G. Reduced order methods for modeling and computational reduction. New York: Springer; 2013.
Rowley C, Mezić I, Bagheri S, Schlatter P, Henningson DS. Spectral analysis of nonlinear flows. J Fluid Mech. 2009;641:1.
Schmid PJ. Dynamic mode decomposition of numerical and experimental data. J Fluid Mech. 2010;656:5.
Sengupta TK. High accuracy computing methods: fluid flows and wave phenomena. Cambridge: Cambridge University Press; 2013.
Sengupta TK, Haider SI, Parvathi MK, Gumma P. Enstrophy-based proper orthogonal decomposition for reduced-order modeling of flow past a cylinder. Phys Rev E. 2015;91(4):043303.
Sengupta TK, Lakshmanan V, Vijay VVSN. A new combined stable and dispersion relation preserving compact scheme for non-periodic problems. J Comput Phys. 2009;228:3048–71.
Sengupta TK, Rajpoot MK, Bhumkar YG. Space-time discretizing optimal DRP schemes for flow and wave propagation problems. Comput Fluids. 2011;47(1):144–54.
Sengupta TK, Singh N, Suman VK. Dynamical system approach to instability of flow past a circular cylinder. J Fluid Mech. 2010;656:82–115.
Sengupta TK, Singh N, Vijay VVSN. Universal instability modes in internal and external flows. Comput Fluids. 2011;40:221–35.
Sengupta TK, Vijay VVSN, Bhaumik S. Further improvement and analysis of CCD scheme: dissipation discretization and de-aliasing properties. J Comput Phys. 2009;228:6150–68.
Siegel SG, Seidel J, Fagley C, Luchtenberg DM, Cohen K, McLaughlin T. Low dimensional modelling of a transient cylinder wake using double proper orthogonal decomposition. J Fluid Mech. 2008;28:1182.
Stabile G, Hijazi S, Mola A, Lorenzi S, Rozza G. POD-Galerkin reduced order methods for CFD using Finite Volume Discretisation: vortex shedding around a circular cylinder. Commun App Ind Math. 2017;8:210–36.
Stabile G, Rozza G. Finite volume POD-Galerkin stabilised reduced order methods for the parametrised incompressible Navier-Stokes equations. Comput Fluids. 2018;173:273–84.
Strykowski PJ. The control of absolutely and convectively unstable shear flows. Ph.D dissertation, Yale University. 1986.
Van der Vorst HA. Bi-CGSTAB: a fast and smoothly converging variant of Bi-CG for the solution of non-symmetric linear systems. SIAM J Sci Stat Comput. 1992;12:631–44.
Authors' contributions TKS provided the concept and was active in all work performed for this paper. LL and MA performed the time scaling numerical implementation and tests for LDC. SIH and AG did the same on the flow around a circular cylinder. All authors read and approved the final manuscript.
Tapan K. Sengupta, Lucas Lestandi, S. I. Haider, Atchyut Gullapalli, and Mejdi Azaïez contributed equally to this work.
Affiliations:
High Performance Computing Laboratory, Department of Aerospace Engineering, I. I. T. Kanpur, Kanpur, 208 016, India: Tapan K. Sengupta, S. I. Haider & Atchyut Gullapalli
I2M UMR 5295, University of Bordeaux, Bordeaux, France: Lucas Lestandi
Bordeaux Institut National Polytechnique, I2M UMR 5295, Bordeaux, France: Mejdi Azaïez
Correspondence to Lucas Lestandi.
Sengupta, T.K., Lestandi, L., Haider, S.I. et al. Reduced order model of flows by time-scaling interpolation of DNS data. Adv. Model. and Simul. in Eng. Sci. 5, 26 (2018) doi:10.1186/s40323-018-0119-2
Keywords: Time-scaling; Flow past a circular cylinder
8.4: Introduction to Logistic Regression
[ "article:topic", "logistic regression", "generalized linear model", "logit transformation", "natural splines", "authorname:openintro", "showtoc:no", "license:ccbysa" ]
Book: OpenIntro Statistics (Diez et al).
8: Multiple and Logistic Regression
Contributed by David M Diez, Christopher D Barr, & Mine Çetinkaya-Rundel (OpenIntro Statistics)
Email data
Modeling the probability of an event
Practical decisions in the email application
Diagnostics for the email classifier
Improving the set of variables for a spam filter
In this section we introduce logistic regression as a tool for building models when there is a categorical response variable with two levels. Logistic regression is a type of generalized linear model (GLM) for response variables where regular multiple regression does not work very well. In particular, the response variable in these settings often takes a form where residuals look completely different from the normal distribution.
GLMs can be thought of as a two-stage modeling approach. We first model the response variable using a probability distribution, such as the binomial or Poisson distribution. Second, we model the parameter of the distribution using a collection of predictors and a special form of multiple regression.
In Section 8.4 we will revisit the email data set from Chapter 1. These emails were collected from a single email account, and we will work on developing a basic spam filter using these data. The response variable, spam, has been encoded to take value 0 when a message is not spam and 1 when it is spam. Our task will be to build an appropriate model that classifies messages as spam or not spam using email characteristics coded as predictor variables. While this model will not be the same as those used in large-scale spam filters, it shares many of the same features.
Table \(\PageIndex{1}\): Descriptions for 11 variables in the email data set. Notice that all of the variables are indicator variables, which take the value 1 if the specified characteristic is present and 0 otherwise.
spam: Specifies whether the message was spam.
to_multiple: An indicator variable for whether more than one person was listed in the To field of the email.
cc: An indicator for whether someone was CCed on the email.
attach: An indicator for whether there was an attachment, such as a document or image.
dollar: An indicator for whether the word "dollar" or dollar symbol ($) appeared in the email.
winner: An indicator for whether the word "winner" appeared in the email message.
inherit: An indicator for whether the word "inherit" (or a variation, like "inheritance") appeared in the email.
password: An indicator for whether the word "password" was present in the email.
format: Indicates whether the email contained special formatting, such as bolding, tables, or links.
re_subj: Indicates whether "Re:" was included at the start of the email subject.
exclaim_subj: Indicates whether any exclamation point was included in the email subject.
The email data set was first presented in Chapter 1 with a relatively small number of variables. In fact, there are many more variables available that might be useful for classifying spam. Descriptions of these variables are presented in Table \(\PageIndex{1}\). The spam variable will be the outcome, and the other 10 variables will be the model predictors. While we have limited the predictors used in this section to be categorical variables (where many are represented as indicator variables), numerical predictors may also be used in logistic regression. See the footnote for an additional discussion on this topic.13
TIP: Notation for a logistic regression model
The outcome variable for a GLM is denoted by \(Y_i\), where the index i is used to represent observation i. In the email application, \(Y_i\) will be used to represent whether email i is spam (\(Y_i = 1\)) or not (\(Y_i = 0\)). The predictor variables are represented as follows: \(x_{1;i}\) is the value of variable 1 for observation i, \(x_{2;i}\) is the value of variable 2 for observation i, and so on.
Logistic regression is a generalized linear model where the outcome is a two-level categorical variable. The outcome, \(Y_i\), takes the value 1 (in our application, this represents a spam message) with probability \(p_i\) and the value 0 with probability \(1 - p_i\). It is the probability \(p_i\) that we model in relation to the predictor variables.
13. Recall from Chapter 7 that if outliers are present in predictor variables, the corresponding observations may be especially influential on the resulting model. This is the motivation for omitting the numerical variables, such as the number of characters and line breaks in emails, that we saw in Chapter 1. These variables exhibited extreme skew. We could resolve this issue by transforming these variables (e.g. using a log-transformation), but we will omit this further investigation for brevity.
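As a hedged illustration of the transformation suggested in the footnote, the snippet below log-transforms two skewed numerical variables; the column names and values are hypothetical stand-ins, not the actual data set.

```python
import numpy as np
import pandas as pd

# Hypothetical rows with the kind of skewed numerical variables mentioned in the footnote
email = pd.DataFrame({"num_char": [21.7, 7.0, 0.6], "line_breaks": [551, 183, 28]})

# A log transformation tames the extreme right skew before such variables enter a model
email["log_num_char"] = np.log(email["num_char"])
email["log_line_breaks"] = np.log(email["line_breaks"])
```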
Figure \(\PageIndex{1}\): Values of \(p_i\) against values of logit(\(p_i\)).
The logistic regression model relates the probability an email is spam (\(p_i\)) to the predictors \(x_{1;i}, x_{2;i},\dots, x_{k;i}\) through a framework much like that of multiple regression:
\[\text{transformation}(p_i) = \beta _0 + \beta _1x_{1;i} + \beta _2x_{2;i} + \dots + \beta _kx_{k;i} \label {8.19}\]
We want to choose a transformation in Equation \ref{8.19} that makes practical and mathematical sense. For example, we want a transformation that makes the range of possibilities on the left hand side of Equation \ref{8.19} equal to the range of possibilities for the right hand side; if there was no transformation for this equation, the left hand side could only take values between 0 and 1, but the right hand side could take values outside of this range. A common transformation for \(p_i\) is the logit transformation, which may be written as
\[ logit(p_i) = log_e (\dfrac {p_i}{1 - p_i})\]
The logit transformation is shown in Figure \(\PageIndex{1}\). Below, we rewrite Equation \ref{8.19} using the logit transformation of \(p_i\):
\[log_e (\dfrac {p_i}{1 - p_i}) = \beta _0 + \beta _1x_{1;i} + \beta _2x_{2;i} + \dots + \beta _kx_{k;i}\]
In our spam example, there are 10 predictor variables, so k = 10. This model isn't very intuitive, but it still has some resemblance to multiple regression, and we can fit this model using software. In fact, once we look at results from software, it will start to feel like we're back in multiple regression, even if the interpretation of the coefficients is more complex.
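As one possible way to fit such a model in software, here is a hedged Python sketch using statsmodels; the file name is a placeholder and the column names are assumed to match Table \(\PageIndex{1}\), so this is an illustration rather than the authors' own workflow.

```python
import pandas as pd
import statsmodels.api as sm

# Placeholder path; the data set would need to be available locally under this (assumed) name
email = pd.read_csv("email.csv")

predictors = ["to_multiple", "cc", "attach", "dollar", "winner",
              "inherit", "password", "format", "re_subj", "exclaim_subj"]
X = sm.add_constant(email[predictors])
y = email["spam"]

# Logistic regression as a GLM with a binomial family (logit link by default)
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.summary())
```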
Example \(\PageIndex{1}\)
Here we create a spam filter with a single predictor: to_multiple. This variable indicates whether more than one email address was listed in the To field of the email. The following logistic regression model was fit using statistical software:
\[ log (\dfrac {p_i}{1 - p_i}) = -2.12 - 1.81 \times \text { to_multiple}\]
If an email is randomly selected and it has just one address in the To field, what is the probability it is spam? What if more than one address is listed in the To field?
If there is only one email address in the To field, then to_multiple takes value 0 and the right side of the model equation equals -2.12. Solving for \(p_i\): \(\dfrac {e^{-2.12}}{1+e^{-2.12}} = 0.11\). Just as we labeled a fitted value of \(y_i\) with a "hat" in single-variable and multiple regression, we will do the same for this probability: \(\hat {p}_i = 0.11\).
If there is more than one address listed in the To field, then the right side of the model equation is \(-2.12 - 1.81 \times 1 = -3.93\), which corresponds to a probability \(\hat {p}_i = 0.02\). Notice that we could examine -2.12 and -3.93 in Figure \(\PageIndex{1}\) to estimate the probability before formally calculating the value.
To convert from values on the regression scale (e.g. -2.12 and -3.93 in Example \(\PageIndex{1}\)) to a probability, use the following formula, which is the result of solving for \(p_i\) in the regression model:
\[p_i = \dfrac {e^{\beta _0+ \beta _1x_{1;i}+ \dots+ \beta _kx_{k;i}}}{1 + e^{\beta _0+ \beta _1x_{1;i}+ \dots + \beta _kx_{k;i}}}\]
As with most applied data problems, we substitute the point estimates for the parameters (the \(\beta _i\)) so that we may make use of this formula. In Example \(\PageIndex{1}\), the probabilities were calculated as
\[ \dfrac {e^{-2.12}}{1 + e^{-2.12}} = 0.11, \qquad \dfrac {e^{-2.12-1.81}}{1 + e^{-2.12-1.81}} = 0.02\]
While the information about whether the email is addressed to multiple people is a helpful start in classifying email as spam or not, the probabilities of 11% and 2% are not dramatically different, and neither provides very strong evidence about which particular email messages are spam. To get more precise estimates, we'll need to include many more variables in the model.
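A small sketch of this back-transformation, reproducing the two probabilities from the single-predictor model, is given below (the helper name inv_logit is ours):

```python
import numpy as np

def inv_logit(eta):
    """Map a value on the regression (logit) scale back to a probability."""
    return np.exp(eta) / (1.0 + np.exp(eta))

# Single-predictor model: log(p/(1-p)) = -2.12 - 1.81 * to_multiple
print(round(inv_logit(-2.12), 2))          # 0.11, one address in the To field
print(round(inv_logit(-2.12 - 1.81), 2))   # 0.02, more than one address
```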
We used statistical software to fit the logistic regression model with all ten predictors described in Table \(\PageIndex{1}\). Like multiple regression, the result may be presented in a summary table, which is shown in Table \(\PageIndex{2}\). The structure of this table is almost identical to that of multiple regression; the only notable difference is that the p-values are calculated using the normal distribution rather than the t distribution.
Just like multiple regression, we could trim some variables from the model using the p-value. Using backwards elimination with a p-value cutoff of 0.05 (start with the full model and trim the predictors with p-values greater than 0.05), we ultimately eliminate the exclaim_subj, dollar, inherit, and cc predictors. The remainder of this section will rely on this smaller model, which is summarized in Table \(\PageIndex{3}\).
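One way such a backward elimination could be scripted is sketched below, continuing the hypothetical statsmodels fit from earlier; this is our own sketch of the generic procedure, not the authors' code.

```python
import statsmodels.api as sm

def backward_eliminate(y, X, cutoff=0.05):
    """Repeatedly refit the model, dropping the predictor with the largest
    p-value, until every remaining p-value is at or below the cutoff."""
    cols = list(X.columns)
    while cols:
        fit = sm.GLM(y, sm.add_constant(X[cols]), family=sm.families.Binomial()).fit()
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= cutoff:
            break
        cols.remove(worst)
    return fit, cols
```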
Exercise \(\PageIndex{1}\)
Examine the summary of the reduced model in Table \(\PageIndex{3}\), and in particular, examine the to_multiple row. Is the point estimate the same as we found before, -1.81, or is it different? Explain why this might be.
The new estimate is different: -2.87. This new value represents the estimated coefficient when we are also accounting for other variables in the logistic regression model.
Table \(\PageIndex{2}\): Summary table for the full logistic regression model for the spam filter example.
[Table entries not recovered. Columns: Std. Error, z value, Pr(>|z|); rows include (Intercept), to_multiple, re_subj and exclaim_subj.]
Table \(\PageIndex{3}\): Summary table for the logistic regression model for the spam filter, where variable selection has been performed.
Point estimates will generally change a little - and sometimes a lot - depending on which other variables are included in the model. This is usually due to collinearity in the predictor variables. We previously saw this in the Ebay auction example when we compared the coefficient of cond new in a single-variable model and the corresponding coefficient in the multiple regression model that used three additional variables (see Sections 8.1.1 and 8.1.2).
Spam filters are built to be automated, meaning a piece of software is written to collect information about emails as they arrive, and this information is put in the form of variables. These variables are then put into an algorithm that uses a statistical model, like the one we've fit, to classify the email. Suppose we write software for a spam filter using the reduced model shown in Table \(\PageIndex{3}\). If an incoming email has the word "winner" in it, will this raise or lower the model's calculated probability that the incoming email is spam?
The estimated coefficient of winner is positive (1.7370). A positive coefficient estimate in logistic regression, just like in multiple regression, corresponds to a positive association between the predictor and response variables when accounting for the other variables in the model. Since the response variable takes value 1 if an email is spam and 0 otherwise, the positive coefficient indicates that the presence of "winner" in an email raises the model probability that the message is spam.
Suppose the same email from Example \(\PageIndex{2}\) was in HTML format, meaning the format variable took value 1. Does this characteristic increase or decrease the probability that the email is spam according to the model?
Since HTML corresponds to a value of 1 in the format variable and the coefficient of this variable is negative (-1.5569), this would lower the probability estimate returned from the model.
Examples 8.22 and 8.23 highlight a key feature of logistic and multiple regression. In the spam filter example, some email characteristics will push an email's classification in the direction of spam while other characteristics will push it in the opposite direction. If we were to implement a spam filter using the model we have fit, then each future email we analyze would fall into one of three categories based on the email's characteristics:
The email characteristics generally indicate the email is not spam, and so the resulting probability that the email is spam is quite low, say, under 0.05.
The characteristics generally indicate the email is spam, and so the resulting probability that the email is spam is quite large, say, over 0.95.
The characteristics roughly balance each other out in terms of evidence for and against the message being classified as spam. Its probability falls in the remaining range, meaning the email cannot be adequately classified as spam or not spam.
If we were managing an email service, we would have to think about what should be done in each of these three instances. In an email application, there are usually just two possibilities: filter the email out from the regular inbox and put it in a "spambox", or let the email go to the regular inbox.
The first and second scenarios are intuitive. If the evidence strongly suggests a message is not spam, send it to the inbox. If the evidence strongly suggests the message is spam, send it to the spambox. How should we handle emails in the third category?
In this particular application, we should err on the side of sending more mail to the inbox rather than mistakenly putting good messages in the spambox. So, in summary: emails in the first and last categories go to the regular inbox, and those in the second scenario go to the spambox.
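In code, the routing rule just described is a simple pair of probability cutoffs; the 0.05 and 0.95 thresholds below are the illustrative values from this section, not universal constants.

```python
def route_email(p_spam, low=0.05, high=0.95):
    """Send a message to the spambox only when the fitted spam probability is very high."""
    if p_spam > high:
        return "spambox"
    # Both the clearly-not-spam and the ambiguous cases go to the inbox,
    # erring on the side of not hiding legitimate mail.
    return "inbox"

print(route_email(0.02))   # inbox
print(route_email(0.50))   # inbox (ambiguous case)
print(route_email(0.97))   # spambox
```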
Suppose we apply the logistic model we have built as a spam filter and that 100 messages are placed in the spambox over 3 months. If we used the guidelines above for putting messages into the spambox, about how many legitimate (non-spam) messages would you expect to find among the 100 messages?
First, note that we proposed a cutoff for the predicted probability of 0.95 for spam. In a worst case scenario, all the messages in the spambox had the minimum probability equal to about 0.95. Thus, we should expect to find about 5 or fewer legitimate messages among the 100 messages placed in the spambox.
Almost any classifier will have some error. In the spam filter guidelines above, we have decided that it is okay to allow up to 5% of the messages in the spambox to be real messages. If we wanted to make it a little harder to classify messages as spam, we could use a cutoff of 0.99. This would have two effects. Because it raises the standard for what can be classified as spam, it reduces the number of good emails that are classified as spam.
However, it will also fail to correctly classify an increased fraction of spam messages. No matter the complexity and the confidence we might have in our model, these practical considerations are absolutely crucial to making a helpful spam filter. Without them, we could actually do more harm than good by using our statistical model.
Logistic regression conditions
There are two key conditions for fitting a logistic regression model:
The model relating the parameter \(p_i\) to the predictors \(x_{1;i}, x_{2;i},\dots, x_{k;i}\) closely resembles the true relationship between the parameter and the predictors.
Each outcome \(Y_i\) is independent of the other outcomes.
The first condition of the logistic regression model is not easily checked without a fairly sizable amount of data. Luckily, we have 3,921 emails in our data set! Let's first visualize these data by plotting the true classification of the emails against the model's fitted probabilities, as shown in Figure \(\PageIndex{2}\). The vast majority of emails (spam or not) still have fitted probabilities below 0.5.
Figure \(\PageIndex{2}\): The predicted probability that each of the 3,921 emails is spam is classified by their grouping, spam or not. Noise (small, random vertical shifts) has been added to each point so that points with nearly identical values aren't plotted exactly on top of one another. This makes it possible to see more observations.
This may at first seem very discouraging: we have fit a logistic model to create a spam filter, but no emails have a fitted probability of being spam above 0.75. Don't despair; we will discuss ways to improve the model through the use of better variables in Section 8.4.5.
We'd like to assess the quality of our model. For example, we might ask: if we look at emails that we modeled as having a 10% chance of being spam, do we find about 10% of them actually are spam? To help us out, we'll borrow an advanced statistical method called natural splines that estimates the local probability over the region 0.00 to 0.75 (the largest predicted probability was 0.73, so we avoid extrapolating). All you need to know about natural splines to understand what we are doing is that they are used to fit flexible lines rather than straight lines.
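A simpler, binned version of this check (not the natural-spline fit used in the figure) conveys the same idea. Here p_hat and y stand for hypothetical arrays of fitted probabilities and observed 0/1 spam outcomes:

```python
import numpy as np

def binned_calibration(p_hat, y, n_bins=10, p_max=0.75):
    """Compare the average fitted probability with the observed spam rate in each bin."""
    edges = np.linspace(0.0, p_max, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (p_hat >= lo) & (p_hat < hi)
        if in_bin.sum() == 0:
            continue
        print(f"[{lo:.2f}, {hi:.2f}): predicted {p_hat[in_bin].mean():.2f}, "
              f"observed {y[in_bin].mean():.2f}, n = {in_bin.sum()}")
```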
Figure \(\PageIndex{3}\): The solid black line provides the empirical estimate of the probability for observations based on their predicted probabilities (confidence bounds are also shown for this line), which is fit using natural splines. A small amount of noise was added to the observations in the plot to allow more observations to be seen.
The curve fit using natural splines is shown in Figure \(\PageIndex{3}\) as a solid black line. If the logistic model fits well, the curve should closely follow the dashed \(y = x\) line. We have added shading to represent the confidence bound for the curved line to clarify what fluctuations might plausibly be due to chance. Even with this confidence bound, there are weaknesses in the first model assumption. The solid curve and its confidence bound dip below the dashed line from about 0.1 to 0.3, and then drift above the dashed line from about 0.35 to 0.55. These deviations indicate the model relating the parameter to the predictors does not closely resemble the true relationship.
We could evaluate the second logistic regression model assumption - independence of the outcomes - using the model residuals. The residuals for a logistic regression model are calculated the same way as with multiple regression: the observed outcome minus the expected outcome. For logistic regression, the expected value of the outcome is the fitted probability for the observation, and the residual may be written as
\[e_i = Y_i - \hat {p}_i\]
We could plot these residuals against a variety of variables or in their order of collection, as we did with the residuals in multiple regression. However, since we know the model will need to be revised to effectively classify spam and you have already seen similar residual plots in Section 8.3, we won't investigate the residuals here.
If we were building a spam filter for an email service that managed many accounts (e.g. Gmail or Hotmail), we would spend much more time thinking about additional variables that could be useful in classifying emails as spam or not. We also would use transformations or other techniques that would help us include strongly skewed numerical variables as predictors.
Take a few minutes to think about additional variables that might be useful in identifying spam. Below is a list of variables we think might be useful:
An indicator variable could be used to represent whether there was prior two-way correspondence with a message's sender. For instance, if you sent a message to [email protected] and then John sent you an email, this variable would take value 1 for the email that John sent. If you had never sent John an email, then the variable would be set to 0.
A second indicator variable could utilize an account's past spam flagging information. The variable could take value 1 if the sender of the message has previously sent messages flagged as spam.
A third indicator variable could flag emails that contain links included in previous spam messages. If such a link is found, then set the variable to 1 for the email; otherwise, set it to 0.
The variables described above take one of two approaches. Variable (1) is specially designed to capitalize on the fact that spam is rarely sent between individuals that have two-way communication. Variables (2) and (3) are specially designed to flag common spammers or spam messages. While we would have to verify using the data that each of the variables is effective, these seem like promising ideas.
Table \(\PageIndex{4}\) shows a contingency table for spam and also for the new variable described in (1) above. If we look at the 1,090 emails where there was correspondence with the sender in the preceding 30 days, not one of these messages was spam. This suggests variable (1) would be very effective at accurately classifying some messages as not spam. With this single variable, we would be able to send about 28% of messages through to the inbox with confidence that almost none are spam.
Table \(\PageIndex{4}\): A contingency table for spam and a new variable that represents whether there had been correspondence with the sender in the preceding 30 days. [Only the column headings (prior correspondence: no, yes, Total) and the "not spam" row label survived extraction; the cell counts are not recoverable here.]
The variables described in (2) and (3) would provide an excellent foundation for distinguishing messages coming from known spammers or messages that take a known form of spam. To utilize these variables, we would need to build databases: one holding email addresses of known spammers, and one holding URLs found in known spam messages. Our access to such information is limited, so we cannot implement these two variables in this textbook. However, if we were hired by an email service to build a spam filter, these would be important next steps.
In addition to finding more and better predictors, we would need to create a customized logistic regression model for each email account. This may sound like an intimidating task, but its complexity is not as daunting as it may at first seem. We'll save the details for a statistics course where computer programming plays a more central role. For what is the extremely challenging task of classifying spam messages, we have made a lot of progress. We have seen that simple email variables, such as the format, inclusion of certain words, and other circumstantial characteristics, provide helpful information for spam classification. Many challenges remain, from better understanding logistic regression to carrying out the necessary computer programming, but completing such a task is very nearly within your reach.
David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)
|
CommonCrawl
|
November 2018, 12(4): 641-657. doi: 10.3934/amc.2018038
$ {{\mathbb{Z}}_{2}}{{\mathbb{Z}}_{2}}{{\mathbb{Z}}_{4}}$-additive cyclic codes
Tingting Wu 1, Jian Gao 2, Yun Gao 3 and Fang-Wei Fu 3
College of Science, Civil Aviation University of China, Tianjin 300300, China
School of Mathematics and Statistics, Shandong University of Technology, Zibo, Shandong 255091, China
Chern Institute of Mathematics and LPMC, Nankai University, Tianjin 300071, China
* Corresponding author: Jian Gao
Received April 2016 Published September 2018
Fund Project: This research is supported by the National Natural Science Foundation of China (Grant Nos. 11701336, 11626144, 11671235, 61571243 and 61171082), the Scientific Research Foundation of Civil Aviation University of China (Grant No. 2017QD22X).
This paper is concerned with ${{\mathbb{Z}}_{2}}{{\mathbb{Z}}_{2}}{{\mathbb{Z}}_{4}}$-additive cyclic codes. These codes can be identified as submodules of the ring ${\mathbb{Z}}_{2}[x]/\langle x^r-1\rangle \times {\mathbb{Z}}_{2}[x]/\langle x^s-1\rangle \times {\mathbb{Z}}_{4}[x]/\langle x^t-1\rangle$. There are two major ingredients. First, we determine the generator polynomials and minimum generating sets of these codes. Furthermore, we investigate their dual codes. We determine the structure of the dual of separable ${{\mathbb{Z}}_{2}}{{\mathbb{Z}}_{2}}{{\mathbb{Z}}_{4}}$-additive cyclic codes completely. For the dual of non-separable ${{\mathbb{Z}}_{2}}{{\mathbb{Z}}_{2}}{{\mathbb{Z}}_{4}}$-additive cyclic codes, we give their structural properties in a few special cases.
Keywords: $ {{\mathbb{Z}}_{2}}{{\mathbb{Z}}_{2}}{{\mathbb{Z}}_{4}}$-cyclic codes, generator polynomials, minimum generating sets, dual code.
Mathematics Subject Classification: Primary: 94B05, 94B15; Secondary: 11T71.
Citation: Tingting Wu, Jian Gao, Yun Gao, Fang-Wei Fu. $ {{\mathbb{Z}}_{2}}{{\mathbb{Z}}_{2}}{{\mathbb{Z}}_{4}}$-additive cyclic codes. Advances in Mathematics of Communications, 2018, 12 (4) : 641-657. doi: 10.3934/amc.2018038
|
CommonCrawl
|
October 2015, 20(8): 2583-2609. doi: 10.3934/dcdsb.2015.20.2583
Fully discrete finite element method based on second-order Crank-Nicolson/Adams-Bashforth scheme for the equations of motion of Oldroyd fluids of order one
Yingwen Guo 1 and Yinnian He 2
School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China
Center for Computational Geosciences, School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049
Received October 2014 Revised April 2015 Published August 2015
In this paper, we study a fully discrete finite element method with second-order accuracy in time for the equations of motion arising in the Oldroyd model of viscoelastic fluids. This method is based on a finite element approximation for the space discretization and the Crank-Nicolson/Adams-Bashforth scheme for the time discretization. The integral term is discretized by the trapezoidal rule to match the second-order accuracy in time. This leads to a linear system with a constant matrix and thus greatly increases the computational efficiency. Using the nonnegativity of the quadrature rule and a variable-substitution technique for the trapezoidal approximation, we prove that this fully discrete finite element method is almost unconditionally stable and convergent. Furthermore, by the negative norm technique, we derive $H^1$- and $L^2$-optimal error estimates for the velocity and the pressure.
Keywords: mixed finite element, Adams-Bashforth scheme, Crank-Nicolson scheme, viscoelastic fluids, Oldroyd fluids of order one.
Mathematics Subject Classification: Primary: 35L70, 65N30, 76A1.
Citation: Yingwen Guo, Yinnian He. Fully discrete finite element method based on second-order Crank-Nicolson/Adams-Bashforth scheme for the equations of motion of Oldroyd fluids of order one. Discrete & Continuous Dynamical Systems - B, 2015, 20 (8) : 2583-2609. doi: 10.3934/dcdsb.2015.20.2583
|
CommonCrawl
|
A converse to a theorem of Adamyan, Arov and Krein
by J. Agler and N. J. Young
A well known theorem of Akhiezer, Adamyan, Arov and Krein gives a criterion (in terms of the signature of a certain Hermitian matrix) for interpolation by a meromorphic function in the unit disc with at most $m$ poles subject to an $L^\infty$-norm bound on the unit circle. One can view this theorem as an assertion about the Hardy space $H^2$ of analytic functions on the disc and its reproducing kernel. A similar assertion makes sense (though it is not usually true) for an arbitrary Hilbert space of functions. One can therefore ask for which spaces the assertion is true. We answer this question by showing that it holds precisely for a class of spaces closely related to $H^2$.
V. M. Adamjan, D. Z. Arov, and M. G. Kreĭn, Analytic properties of the Schmidt pairs of a Hankel operator and the generalized Schur-Takagi problem, Mat. Sb. (N.S.) 86(128) (1971), 34–75 (Russian). MR 0298453
J. Agler, Interpolation, preprint (1987).
Jim Agler, Nevanlinna-Pick interpolation on Sobolev space, Proc. Amer. Math. Soc. 108 (1990), no. 2, 341–351. MR 986645, DOI 10.1090/S0002-9939-1990-0986645-2
J. Agler and N. J. Young, Functions which are almost multipliers of Hilbert function spaces, to appear in Proc. London Math. Soc.
N. I. Akhiezer, On a minimum problem in the theory of functions and on the number of roots of an algebraic equation which lie inside the unit circle, Izv. Akad. Nauk SSSR 9(1931) 1169-1189.
P. Hebroni, Sur les inverses des éléments dérivables dans un anneau abstrait, C. R. Acad. Sci. Paris 209 (1939), 285–287 (French). MR 14
Joseph A. Ball and J. William Helton, A Beurling-Lax theorem for the Lie group $\textrm {U}(m,\,n)$ which contains most classical interpolation theory, J. Operator Theory 9 (1983), no. 1, 107–142. MR 695942
Tiberiu Constantinescu and Aurelian Gheondea, Minimal signature in lifting of operators. I, J. Operator Theory 22 (1989), no. 2, 345–367. MR 1043732
Mischa Cotlar and Cora Sadosky, Nehari and Nevanlinna-Pick problems and holomorphic extensions in the polydisk in terms of restricted BMO, J. Funct. Anal. 124 (1994), no. 1, 205–210. MR 1284610, DOI 10.1006/jfan.1994.1105
John C. Doyle, Bruce A. Francis, and Allen R. Tannenbaum, Feedback control theory, Macmillan Publishing Company, New York, 1992. MR 1200235
Ph. Delsarte, Y. Genin, and Y. Kamp, On the role of the Nevanlinna-Pick problem in circuit and system theory, Internat. J. Circuit Theory Appl. 9 (1981), no. 2, 177–187. MR 612269, DOI 10.1002/cta.4490090204
Ciprian Foias and Arthur E. Frazho, The commutant lifting approach to interpolation problems, Operator Theory: Advances and Applications, vol. 44, Birkhäuser Verlag, Basel, 1990. MR 1120546, DOI 10.1007/978-3-0348-7712-1
Keith Glover, All optimal Hankel-norm approximations of linear multivariable systems and their $L^{\infty }$-error bounds, Internat. J. Control 39 (1984), no. 6, 1115–1193. MR 748558, DOI 10.1080/00207178408933239
Israel Gohberg, Leiba Rodman, Tamir Shalom, and Hugo J. Woerdeman, Bounds for eigenvalues and singular values of matrix completions, Linear and Multilinear Algebra 33 (1993), no. 3-4, 233–249. MR 1334675, DOI 10.1080/03081089308818197
J. William Helton, Joseph A. Ball, Charles R. Johnson, and John N. Palmer, Operator theory, analytic functions, matrices, and electrical engineering, CBMS Regional Conference Series in Mathematics, vol. 68, Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 1987. MR 896034, DOI 10.1090/cbms/068
Irving Kaplansky, Linear algebra and geometry. A second course, Allyn and Bacon, Inc., Boston, Mass., 1969. MR 0249444
D. E. Marshall and C. Sundberg, Interpolating sequences for multipliers of the Dirichlet space, to appear.
G. Pick, Über die Beschränkungen analytischer Funktionen, welche durch vorgegebene Funktionswerte bewirkt werden, Math. Ann. 77 (1916), 7–23.
Peter Quiggin, For which reproducing kernel Hilbert spaces is Pick's theorem true?, Integral Equations Operator Theory 16 (1993), no. 2, 244–266. MR 1205001, DOI 10.1007/BF01358955
P. Quiggin, Generalisations of Pick's Theorem to Reproducing Kernel Hilbert Spaces, Ph.D. thesis, Lancaster University, 1994.
Donald Sarason, Generalized interpolation in $H^{\infty }$, Trans. Amer. Math. Soc. 127 (1967), 179–203. MR 208383, DOI 10.1090/S0002-9947-1967-0208383-8
J. Agler
Affiliation: Department of Mathematics, University of California at San Diego, La Jolla, California 92093
Email: [email protected]
N. J. Young
Affiliation: Department of Mathematics, University of Newcastle, Newcastle upon Tyne NE1 7RU, England
Email: [email protected]
Additional Notes: J. Agler's research was supported by an NSF grant in Modern Analysis.
MSC (1991): Primary 46E22; Secondary 47B38
DOI: https://doi.org/10.1090/S0894-0347-99-00291-X
|
CommonCrawl
|
Geometry of Differential Equations, Real and Complex
Session code: gde
Session type: Special Sessions
All abstracts
Ronaldo Alves Garcia (Federal University of Goiás)
Mikhail Malakhaltsev (University of los Andes)
Jesus Mucino Raymundo (National Autonomous University of Mexico)
Daniel Offin (Queens University)
Farid Tari (University of São Paulo)
Tuesday, Jul 25 [McGill U., Arts Building, Room W-120]
11:45 Kenneth Meyer (University of Cincinnati, USA), Asymptotic Stability Estimates near an Equilibrium Point
12:15 Alexander Cardona (University of los Andes, Colombia), Spectral invariants and global pseudo-differential calculus on homogeneous spaces
14:15 Johanna Garcia Saldaña (Catholic University of the Most Holy Conception, Chile), An approach to the period function through the harmonic balance
14:45 Jesus Mucino Raymundo (National Autonomous University of Mexico, Morelia, Mexico), Essential singularities of complex analytic vector fields on $\mathbb{C}$
15:45 Regilene Delazari dos Santos Oliveira (Universidade de São Paulo, Campus de S. Carlos), Singular levels and topological invariants of Morse Bott integrable systems on surfaces
16:15 Nabil Kahouadji (Northeastern Illinois University, USA), Isometric Immersions of Pseudo-Spherical Surfaces via Differential Equations
17:00 Ana Rechtman (National Autonomous University of Mexico, Mexico City, Mexico), The trunkenness of a flow
17:30 Mikhail Malakhaltsev (University of los Andes, Colombia), Binary differential equations and 3-webs with singularities
Wednesday, Jul 26 [McGill U., Arts Building, Room W-120]
11:15 Alessandro Portaluri (University of Torino, Italy), Index and stability of closed semi-Riemannian geodesics
11:45 Martha P. Dussan Angulo (Universidade de São Paulo, Brazil), Bjorling problem for timelike surfaces and solutions of homogeneous wave equation
13:45 Débora Lopes da Silva (Universidade Federal do Sergipe, Brazil), Codimension one partially umbilic singularities of hypersurfaces of $\mathbb{ R}^4$
14:15 Jean Carlos Cortissoz (University of los Andes, Colombia), On Bloch's Theorem
14:45 Adolfo Guillot Santiago (National Autonomous University of Mexico, Cuernavaca, Mexico), Algebraic differential equations with uniform solutions
15:15 Frederico Xavier (Texas Christian University, USA), On the inversion of real polynomial maps
16:15 Ronaldo Alves Garcia (Universidade Federal de Goiás, Brazil), Darboux curves on surfaces
16:45 Daniel Offin (Queens University, Canada), Multiple periodic solutions in classical Hamiltonian systems.
Kenneth Meyer
University of Cincinnati, USA
Asymptotic Stability Estimates near an Equilibrium Point
PDF abstract
We use the error bounds for adiabatic invariants found in the work of Chartier, Murua and Sanz-Serna to bound the solutions of a Hamiltonian system near an equilibrium over exponentially long times. Our estimates depend only on the linearized system and not on the higher order terms as in KAM theory, nor do we require any steepness or convexity conditions as in Nekhoroshev theory. We require that the equilibrium point where our estimate applies satisfy a type of formal stability called Lie stability.
Scheduled time: Tuesday, July 25 at 11:45
Location: McGill U., Arts Building, Room W-120
Alexander Cardona
University of los Andes, Colombia
Spectral invariants and global pseudo-differential calculus on homogeneous spaces
Global pseudo-differential calculus on compact Lie groups and homogeneous spaces gives, via representation theory, a semi-discrete description of the global analysis and spectral theory of a wide class of operators on these objects. Based on the theory developed by Ruzhansky and Turunen, during this talk we will consider spectral invariants of index type for global homogeneous pseudo-differential operators; some examples and potential applications will be addressed.
Johanna Garcia Saldaña
Catholic University of the Most Holy Conception, Chile
An approach to the period function through the harmonic balance
Each differential system with a period annulus P has an associated period function T. The geometry of T is determined by the number and properties of its critical periods, i.e. its critical points. The critical periods of T are a counterpart to the problem of limit cycles in polynomial systems, since they play a fundamental role in the geometry of the phase portrait of the differential system. In fact, several analytic techniques have been developed for studying these problems. In this talk we will show that the shape of T can be recovered by using a quantitative approach: the harmonic balance.
Jesus Mucino Raymundo
National Autonomous University of Mexico, Morelia, Mexico
Essential singularities of complex analytic vector fields on $\mathbb{C}$
Let $X$ be a complex analytic vector field on $(\mathbb{C}, 0)$. Its real trajectories are geodesics of a suitable singular flat metric. Analogously to the classical Picard Theorem, in the vicinity of an isolated essential singularity the local complexity of $X$ must be studied using certain "global" flow box maps. We describe the geometry encoded in the simplest cases of these kinds of singularities. Joint work with A. Alvarez-Parrilla.
Regilene Delazari dos Santos Oliveira
Universidade de São Paulo, Campus de S. Carlos
Singular levels and topological invariants of Morse Bott integrable systems on surfaces
In this talk we shall classify (up to homeomorphisms) closed curves and eights of saddle points on orientable closed surfaces. This classification is applied to Morse Bott foliations and Morse Bott integrable systems to define a complete invariant. We also present a realization theorem based on two transformations and one generator. These results are part of a joint work with José Martínez-Alfaro (UV, Spain) and Ingrid Sarmiento-Mesa (IBILCE-UNESP, Brazil).
Nabil Kahouadji
Northeastern Illinois University, USA
Isometric Immersions of Pseudo-Spherical Surfaces via Differential Equations
Pseudo-spherical surfaces are surfaces of constant negative Gaussian curvature. A way of realizing such a surface in 3d space as a surface of revolution is obtained by rotating the graph of a curve called tractrix around the z-axis (infinite funnel). There is a remarkable connection between the solutions of the sine-Gordon equation $u_{xt}=\sin u$ and pseudo-spherical surfaces, in the sense that every generic solution of this equation can be shown to give rise to a pseudo-spherical surface. Furthermore, the sine-Gordon equation has the property that the way in which the pseudo-spherical surfaces corresponding to its solutions are realized geometrically in 3d space is given in closed form through some remarkable explicit formulas. The sine-Gordon equation is but one member of a very large class of differential equations whose solutions likewise define pseudo-spherical surfaces. These were defined and classified by Chern, Tenenblat and others, and include almost all the known examples of "integrable" partial differential equations. This raises the question of whether the other equations enjoy the same remarkable property as the sine-Gordon equation when it comes to the realization of the corresponding surfaces in 3d space. We will see that the answer is no, and will provide a full classification of second-order hyperbolic and $k$th-order evolution equations. The classification results will show, among other things, that the sine-Gordon equation is quite unique in this regard amongst all integrable equations.
Ana Rechtman
National Autonomous University of Mexico, Mexico City, Mexico
The trunkenness of a flow
Motivated by an invariant for knots known as the trunk, we define the trunkenness of a vector field. This quantity is invariant under measure-preserving homeomorphisms and, unlike almost all known asymptotic invariants, is not proportional to helicity. I will explain the construction and the main results related to this invariant. This is joint work with Pierre Dehornoy.
Mikhail Malakhaltsev
Binary differential equations and 3-webs with singularities
A $3$-web with singularities is a geometric structure locally given by three one-dimensional distributions on an open dense subset $U$ of a two-dimensional manifold $M$. A point in $U$ is called \emph{regular} if values of the distributions are pairwise transversal at this point, all the other points of $M$ are called \emph{singular}. A binary differential equation of third degree determines a $3$-web with singularities (see, for example, T. Fukui and J. J. Nuño-Ballesteros, Isolated singularities of binary differential equations of degree $n$, Publicacions Matemátiques, vol. 56, 65--89, 2012). We describe singularities of this $3$-web, and show how to find topological and differential invariants of these singularities using methods developed in the paper F.A. Arias, J.R. Arteaga, and M. Malakhaltsev, 3-webs with singularities, Lobachevskii J. of Math, 37 (1), 1--20, 2016.
Alessandro Portaluri
University of Torino, Italy
Index and stability of closed semi-Riemannian geodesics
A celebrated result of Poincaré asserts that a closed minimizing geodesic on an orientable (Riemannian) surface is unstable when considered as an orbit of the geodesic flow. In this talk, starting from this classical result, we'll discuss some recent results on the strong and linear instability of closed geodesics of any causal character on higher-dimensional (possibly non-orientable) Lorentzian and more general semi-Riemannian manifolds. Dropping the non-positivity assumption of the metric tensor is a quite challenging task since the Morse index is truly infinite. This is joint work with X. Hu and R. Yang.
Scheduled time: Wednesday, July 26 at 11:15
Martha P. Dussan Angulo
Universidade de São Paulo, Brazil
Bjorling problem for timelike surfaces and solutions of homogeneous wave equation
In this talk we show how to construct split-holomorphic extensions of initial curves $\gamma(t)$ in the Lorentz space $\mathbb R^3_1$, using, in a natural way, the point of view of solutions of the homogeneous wave equation. After extending that curve to a subset of the split-complex plane, we solve explicitly the Bjorling problem for timelike surfaces in the Lorentzian spaces $\mathbb R^3_1$, $\mathbb R^4_1$ and $\mathbb R^4_2$. As consequences we construct new examples and give applications. In particular, we describe one-parameter families of timelike surfaces which are solutions of the timelike Bjorling problem. In addition, we also establish symmetry principles for the class of minimal timelike surfaces in those ambient spaces. These results are part of papers published by the author in the Journal of Mathematical Analysis and Applications, the Journal of Geometry and Physics and Annali di Matematica Pura ed Applicata. We recall that the classical Bjorling problem was proposed by Bjorling in 1844 and consists of the construction of a minimal surface in $\mathbb R^3$ containing a given strip in its interior. The solution was obtained by Schwarz in 1890 through an explicit formula in terms of the initial data. Since then, the Bjorling problem has been considered in other ambient spaces, including in higher codimension or with indefinite metrics.
Débora Lopes da Silva
Universidade Federal do Sergipe, Brazil
Codimension one partially umbilic singularities of hypersurfaces of $\mathbb{ R}^4$
This talk is about the mutually orthogonal one-dimensional singular foliations, in oriented three-dimensional manifolds $\mathbb{M}^3$, whose leaves are the integral curves of the principal curvature direction fields associated to immersions $\alpha:\mathbb{M}^3\rightarrow\mathbb{R}^4$. We focus on the behavior of these foliations around singularities defined by the points, called partially umbilic, where at least two principal curvatures coincide. We will describe the generic behavior of the foliations in the neighborhood of partially umbilic points of codimension one. These are the singularities which appear generically in one-parameter families of hypersurfaces. We express the codimension one condition by minimally weakening the genericity condition given by R. Garcia, D. Lopes and J. Sotomayor in \textit{Partially Umbilic Singularities of Hypersurfaces of $\mathbb{R}^4$. Bulletin des Sciences Mathematiques (Paris. 1885), v. 139, p. 431-472, (2015).}
Jean Carlos Cortissoz
On Bloch's Theorem
A classical theorem of André Bloch guarantees that there is a $B>0$ such that for any holomorphic function $f:D\longrightarrow \mathbb{C}$, where $D\subset\mathbb{C}$ is the unit disk, such that $\left|f'\left(0\right)\right|=1$, there is a subdomain $D'\subset D$, so that $f$ restricted to $D'$ is one to one and $f(D')$ contains a disk of radius $B$. Computing the optimal value of $B$ is an open problem. In this talk, we will discuss a new proof of Bloch's theorem, and a possible approach to improve on the known estimates on $B$. This is joint work with Julio Montero.
Adolfo Guillot Santiago
National Autonomous University of Mexico, Cuernavaca, Mexico
Algebraic differential equations with uniform solutions
In the complex domain, the solutions of an ordinary differential equation may present multivaluedness. We will talk about a recent result describing the ordinary differential equations (given by rational vector fields on complex algebraic surfaces) that have a solution that is not multivalued.
Frederico Xavier
Texas Christian University, USA
On the inversion of real polynomial maps
There is some evidence to support the conjecture that a polynomial local diffeomorphism of $\mathbb R^n$ into itself, $n\geq 3$, is injective if the pre-images of all $2$-planes in $\mathbb R^n$ are homeomorphic to connected subsets of $\mathbb R^2$. In this talk, we discuss this problem and offer proofs of some related global invertibility results. The arguments involve geometric constructions that use arguments from topology and complex analysis. Part of this work is joint with S. Nollet.
Ronaldo Alves Garcia
Universidade Federal de Goiás, Brazil
Darboux curves on surfaces
In 1872, Gaston Darboux defined a family of curves on surfaces in the 3-dimensional Euclidean space $\mathbb{R}^3$ which are preserved by the action of the Möbius group and share many properties with geodesics. In this talk the Darboux curves will be considered from a dynamical viewpoint and described globally in special canal surfaces, quadrics and some Darboux cyclides. It will be based mainly on the paper by R. Garcia, R. Langevin and P. Walczak, Darboux curves on surfaces II, Bull. Braz. Math. Soc. (N.S.) 47 (2016). Some open problems will be posed.
Daniel Offin
Queens University, Canada
Multiple periodic solutions in classical Hamiltonian systems.
The question of multiple periodic solutions on energy surfaces, has a long and distinguished history. We will consider the question of multiple periodic solutions on non compact energy surfaces, and find conditions which guarantee infinitely many. We use variational techniques including the mountain pass theorem, and a result of the author which guarantees that a minimizing periodic solution must be hyperbolic on its energy surface.
|
CommonCrawl
|
How RSA Works
Using RSA, an Asymmetric Cipher
Jump ahead to see how the RSA cipher works
Asymmetric ciphers like RSA, or Rivest-Shamir-Adleman, and ECC, or elliptic-curve cryptography, are intended for session key agreement and for digital signature creation and verification.
They are not suited for encrypting the data itself — email messages or files or database contents or data streams or archives — whatever the sensitive data content may be.
Instead, asymmetric ciphers work on keys and hash outputs. Their plaintext and ciphertext are typically 256 bits long, and no more than 512 bits long — 32 to 64 bytes, maximum.
A digital signature provides Proof of Content and Proof of Origin.
Proof of Content comes from the hash function. Even a single-bit change in the content should lead to approximately 50% of the hash output bits changing. The receiver knows the message content did not change.
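You can see this avalanche effect directly with a few lines of Python; the two messages below are made up, and the hash is SHA-256 from the standard library:

```python
import hashlib

m1 = b"Transfer $100 to Bob"
m2 = b"Transfer $900 to Bob"   # one character changed

h1 = int.from_bytes(hashlib.sha256(m1).digest(), "big")
h2 = int.from_bytes(hashlib.sha256(m2).digest(), "big")

# Count how many of the 256 output bits differ -- roughly half, on average.
print(bin(h1 ^ h2).count("1"))
```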
Proof of Origin comes from the asymmetric encryption and decryption. When the receiver decrypts the signature using the purported sender's public key, and finds a match to the message hash, this verifies that the signature must have been created with the sender's private key. The receiver knows who sent the message.
Either RSA or ECC could be used to encrypt and create the digital signature, and then to decrypt and verify it.
Let's see how RSA is used in digital signatures, and then we'll see how the cipher works.
Follow the steps in the below procedure.
How Digital Signatures Work
Alice has a message for Bob. Alice wants to make sure that Bob can verify that the message came from Alice and has not been modified.
Alice creates the message. If confidentiality is also needed, the message should be encrypted before continuing.
Alice calculates the hash of the message, typically using SHA-2-256.
Alice then encrypts the hash, using RSA or another asymmetric cipher. The encryption key is Alice's private key. No one else knows that key, so only Alice can do this operation in that specific way. The ciphertext output is the digital signature.
The message and digital signature are transmitted to Bob. They could be two components of an email message, two files combined into one archive, or two separate files to download from a server, as long as Bob ends up with both pieces. Metadata added to the signature explains which hash function and which asymmetric cipher were used, and the identity of the creator.
Bob verifies the digital signature. The first step is calculating the hash of the received message, which is suspected of being modified, or being sent by someone masquerading as Alice, or both.
Bob then decrypts the digital signature. The decryption key is Alice's public key. Bob needs to be certain that what he is using is really Alice's public key.
If Bob finds that the received message hash is identical to the signature decryption output, he concludes that the message was not changed (because of the hash being so sensitive to even a single-bit change), and it really came from Alice (because decrypting with what he knows to be Alice's public key implies encryption with Alice's private key, which no one else knows).
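The whole sign-and-verify loop can be sketched with "textbook" RSA and absurdly small numbers. This is only an illustration of the arithmetic: real signatures use a modulus thousands of bits long and a padding scheme such as PSS, and the tiny modulus here forces us to reduce the hash modulo n.

```python
import hashlib

# Toy keypair: n = 61 * 53, e = 17, d = 2753, with (e * d) % ((61-1)*(53-1)) == 1.
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)                 # Alice encrypts the hash with her private key

def verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest    # Bob decrypts with Alice's public key and compares

sig = sign(b"Hello Bob, this is Alice.")
print(verify(b"Hello Bob, this is Alice.", sig))    # True
print(verify(b"Hello Bob, this is Mallory.", sig))  # False
```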
Public-Key Infrastructure and Digital Certificates
The practical difficulty is that you must be absolutely certain that what you have really is the other party's public key.
Bob and Alice could meet face-to-face and exchange public keys. However, this doesn't scale beyond a few local acquaintances with plenty of free time.
Instead, everyone's public key should be in the form of a digital certificate. That's a data structure containing information about the owner or "subject" (name, address, email address, URL, etc), their public key, and the name of the certificate issuer. We call the issuer the Certificate Authority or CA. All of certificate data structure is wrapped in a digital signature created by the CA, so the content and the issuer identity can be verified.
Public CA
Everyone must completely trust the CA as an issuer of credentials — a trusted introducer, if you will. The CA says things like "Alice exists" and "The following is Alice's public key", and everyone believes it. This is a completely non-technical human trust issue, confidence that the CA never makes errors or tells lies. Sometimes this goes wrong.
Everyone then needs a copy of the CA's public key, which will be in the form of a digital certificate that they issued to themselves. This is a completely technical issue, confidence in the mathematics and logic meaning that the cryptography is strong enough.
CA/Browser
All this is built into web browsers and email tools. The makers of Chrome, Firefox, and other browsers have decided that everyone should trust a group of root-level CAs such as Comodo, DigiCert, VeriSign, and others, and so their certificates are included in the browsers. Similarly, the Mozilla organization's Thunderbird and other email tools contain certificates.
Any organization can create its own PKI or Public-Key Infrastructure. They would establish an in-house CA, and install its certificates in all of the organization's browsers and email tools.
Supporting Encryption with RSA
Another possible use for asymmetric ciphers would be to agree on a shared secret session key. That key would be used with AES or another symmetric cipher to encrypt some data.
Consider the above situation and diagram, where Alice is sending a message to Bob. If the message is sensitive and should be encrypted:
Alice obtains a copy of Bob's public key. If it's in the form of a certificate from a CA that Alice trusts, then Alice can verify that it really is Bob's.
Alice generates a random 256-bit key to be used only for this one message or transferred file.
Alice creates a short message containing that key, saying something like "Let's encrypt the data using the AES cipher in CBC mode using key 0x34a09cc1eff4832...", and then encrypts that message using Bob's public key. And really, only the session key itself needs to be encrypted. If we think we have to hide our choice of cipher and mode, then apparently we think we're using a weak choice, and Kerckhoffs's principle tells us that we should instead use a cipher without a known or suspected flaw. Claude Shannon reached a similar conclusion in the 1940s.
Alice encrypts the data itself according to the message.
Alice sends the asymmetric-encrypted key message combined with the symmetric-encrypted data to Bob. This is hybrid cryptography, combining symmetric and asymmetric. In the scenario of the above diagram, that hybrid-encrypted message would be accompanied by its digital signature.
Only Bob has a copy of Bob's private key. And so only Bob can decrypt the short key message. That explains how to decrypt the data, and so Bob decrypts and reads the file or message.
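Here is a minimal sketch of that hybrid scheme, again assuming the Python cryptography package. It uses AES-256-GCM for the bulk data rather than the CBC mode mentioned in the example message, simply because GCM also authenticates the ciphertext, and OAEP padding for the RSA key-wrapping step:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Bob's key pair; Alice would get the public half from Bob's certificate
bob_private = rsa.generate_private_key(public_exponent=65537, key_size=3072)
bob_public = bob_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Alice: random 256-bit session key, used only for this one message
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
data_ciphertext = AESGCM(session_key).encrypt(nonce, b"the sensitive report", None)

# Alice: wrap the session key with Bob's public key
wrapped_key = bob_public.encrypt(session_key, oaep)

# Bob: unwrap the session key with his private key, then decrypt the data
recovered_key = bob_private.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, data_ciphertext, None)
assert plaintext == b"the sensitive report"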
Ephemeral Keys and Perfect Forward Secrecy
The above explains how you could use RSA to agree on a shared secret session key. However...
Utah Data Center
Consider that your adversary might be recording every ciphertext message transmitted, in the hopes of eventually decrypting something.
If your long-term private key is ever exposed, then every session key created using the above procedure is exposed, and all the messages can be decrypted.
You instead should insist on only using ephemeral session keys. An ephemeral key is used only once, and if your long-term private key is exposed some day, it can't be used to figure out what the ephemeral session keys were.
I would have called it "Reverse Secrecy", because it protects secrets going back into the past. But they didn't ask me.
Using nothing but ephemeral keys provides Forward Secrecy, also called Perfect Forward Secrecy or just PFS. The concept is: "A breach today does not expose secrets protected in the past."
Diffie-Hellman (DHE) and Elliptic-Curve Diffie-Hellman Ephemeral (ECDHE) Key Agreement
The Diffie-Hellman algorithm lets two parties securely agree on an ephemeral shared secret key; used this way it is called DHE. Classic Diffie-Hellman raises large integers to large powers and then reduces them with a modulo operation. ECDHE or Elliptic-Curve Diffie-Hellman Ephemeral key agreement does the equivalent with elliptic-curve operations and can run much faster.
TLS 1.3 requires PFS and ephemeral keys agreed upon by either DHE or ECDHE.
However, there is still use for RSA! You don't want to exchange secrets with strangers. The key agreement stage needs authentication to prevent Man-in-the-Middle or MitM attacks. So, you combine DHE or ECDHE for the key negotiation with RSA or an elliptic-curve method for authentication.
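A minimal sketch of ephemeral elliptic-curve Diffie-Hellman, assuming the Python cryptography package and the X25519 curve; in a real protocol like TLS 1.3 the exchanged public keys would also be signed (RSA, ECDSA, or EdDSA) to block a man-in-the-middle:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates a fresh key pair for this session only, then discards it
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# They exchange only the public halves, yet compute the same shared secret
alice_shared = alice_private.exchange(bob_private.public_key())
bob_shared = bob_private.exchange(alice_private.public_key())
assert alice_shared == bob_shared

# Derive a 256-bit session key from the raw shared secret
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"handshake data").derive(alice_shared)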
So, let's see how RSA works.
Setting up RSA keys
Randomly select two distinct large prime numbers \( p \) and \( q \). They should be similar in magnitude but differ in length by a few digits, which makes some factoring methods even more difficult. For example, in base 10, a 616-digit prime \(p\) and a 618-digit prime \(q\) give a roughly 1233-digit \(n\), approximately \( 2 ^ {4096} \) and thus a 4096-bit modulus or key size. Yes, the numbers become very large.
Compute: \( n = p \cdot q \)
\( n \) will be used as the modulus for both the private and public keys.
Security comes from:
The difficulty of factoring the product of two large prime numbers.
The way the modulo operation discards information, so that modular exponentiation is easy to compute in one direction but hard to reverse without the private key.
Compute \( \lambda (n) \), Carmichael's totient function for \(n\). That's equal to the least common multiple of \( (p-1) \) and \( (q-1) \):
\( \lambda (n) = l.c.m.(p - 1, q - 1) \)
You can compute \( l.c.m.(a, b) \) by first computing the greatest common divisor:
\( l.c.m.(a, b) = \frac{ | a \cdot b | }{ g.c.d.(a, b) } \)
and so:
\( l.c.m.(p-1, q-1) = \frac{ | (p-1) \cdot (q-1) | }{ g.c.d.((p-1), (q-1)) } \)
Choose an integer \( e \) such that:
\( 1 < e < \lambda (n) \)
\( e \) and \( \lambda (n) \) are coprime, they share no factors other than 1. A popular choice is \( e = 2^{16} + 1 = 65537 \)
Some applications choose smaller \( e \) such as \( 3 \), \( 5 \), or \( 17 \), to make encryption and signature verification faster on small devices like smart cards. However, small values of \(e\) are less secure in some settings.
Compute \( d \) to satisfy the congruence relation:
\( ( d \cdot e )\mod{\lambda (n)} = 1 \)
or, put another way:
\( d \cdot e = 1 + k \cdot \lambda (n) \)
for some integer \( k \).
Also note that in the steps above for computing \( d \) (finding \( \lambda (n) \), choosing \( e \), and solving the congruence) you could use the Euler totient function:
\( \varphi (n) = ( p - 1 ) \cdot ( q - 1) \)
in place of:
\( \lambda (n) = l.c.m. ( p - 1, q - 1) \)
as it is a common multiple of \( (p-1) \) and \( (q-1) \), just not the least one. A \( d \) computed modulo \( \varphi (n) \) still satisfies the congruence modulo \( \lambda (n) \), so it works; it is simply larger than it needs to be.
The exponent \( e \) and modulus \( n \) make up the public key.
\( d \) is the private key.
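The whole key-setup procedure fits in a few lines of Python (standard library only; pow(e, -1, m) needs Python 3.8 or newer). The primes here are the toy values from the worked example further below:

from math import gcd

def lcm(a, b):
    return abs(a * b) // gcd(a, b)

p, q = 61, 53                  # toy primes; real ones have hundreds of digits
n = p * q                      # modulus, 3233
lam = lcm(p - 1, q - 1)        # Carmichael's lambda(n) = lcm(60, 52) = 780

e = 17                         # coprime to lam; 65537 is the usual real-world choice
assert 1 < e < lam and gcd(e, lam) == 1

d = pow(e, -1, lam)            # modular inverse: (d * e) mod lam == 1; here d = 413
assert (d * e) % lam == 1

print("public key (n, e):", (n, e))
print("private key d:", d)
# The worked example below computes d = 2753 from Euler's phi(n) = 3120 instead;
# 2753 is congruent to 413 modulo 780, so both exponents decrypt correctly.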
The first few chapters of Number Theory by George Andrews can give you plenty of background on the mathematics.
The first 29 pages, meaning the first two chapters, take you through the Fundamental Theorem of Arithmetic, proving that every integer greater than 1 has one unique prime factorization. Continuing on through page 114, through chapter 8, takes you through much more — Fermat's Little Theorem, Wilson's theorem, congruences, residue systems, the Chinese Remainder Theorem, Euler's totient function \(\varphi(n)\), the number \(d(n)\) of divisors of \(n\), the sum of those divisors \(\sigma(n)\), primitive roots, prime numbers, and \(\pi(x)\), the number of primes that do not exceed \(x\), proof that there are infinitely many prime numbers, and some unsolved problems about primes. Not that you really need to go through any of that to follow how RSA works.
For a trivial example with very small prime factors, let's use: $$ \begin{aligned} p &= 5 \\ q &= 11 \\ n &= p \cdot q = 55 \\ \varphi ( n ) &= (p - 1) \cdot (q - 1) \\ & = 4 \cdot 10 \\ & = 40 \\ \lambda ( n ) &= l.c.m. (p - 1, q - 1) \\ &= l.c.m. (4, 10) \\ &= 20 \end{aligned} $$
We need: $$ 1 < e < \lambda (n) $$ where \(e\) and \( \lambda (n) \) are coprime, and so: $$ e \in \{3, 7, 9, 11, 13, 17, 19\} $$
\( d \) is defined by: $$ ( d \cdot e )\mod{20} = 1 $$ So, possible key pairs include:

public \( (n, e) \)    private \( d \)
55, 3                  7
55, 11                 11
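A quick way to check that toy table, using nothing but the Python standard library (pow with a negative exponent needs Python 3.8+):

from math import gcd

lam = 20   # lambda(55) from above
pairs = [(e, pow(e, -1, lam)) for e in range(2, lam) if gcd(e, lam) == 1]
print(pairs)
# prints [(3, 7), (7, 3), (9, 9), (11, 11), (13, 17), (17, 13), (19, 19)]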
Using the keys for RSA encryption and decryption
Sender obtains receiver's public key \( (n,e) \).
Sender converts message M into a number \( m < n \) with an agreed-upon reversible padding scheme. Since the point is usually to agree on a 256-bit symmetric session key, or to encrypt a SHA-2-256 hash value to create a digital signature, the typical message is a 256-bit unsigned integer, a number in the range \( \{ 0, 1, 2, ..., 2^{256} - 1 \} \)
Sender computes ciphertext \( c \) using the receiver's public key \(e\) and the modulus \(n\) by:
\( c = ( m^e )\mod{n} \)
Sender transmits ciphertext to receiver.
Receiver calculates cleartext by using their private key \(d\) and the modulus \(n\):
\( m = ( c^d )\mod{n} \)
Receiver reverses padding function to recover message M from cleartext \( m \).
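Those steps reduce to two modular exponentiations. A bare-bones sketch in Python, deliberately showing textbook RSA with no padding scheme; real implementations always pad (for example with OAEP) before exponentiating:

def rsa_encrypt(message: bytes, e: int, n: int) -> int:
    m = int.from_bytes(message, "big")       # the message as a number m
    assert m < n, "message too long for this modulus"
    return pow(m, e, n)                      # c = m^e mod n

def rsa_decrypt(c: int, d: int, n: int, length: int) -> bytes:
    m = pow(c, d, n)                         # m = c^d mod n
    return m.to_bytes(length, "big")         # reverse the bytes-to-number step

# With the toy key below: rsa_decrypt(rsa_encrypt(b"A", 17, 3233), 2753, 3233, 1) == b"A"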
As another almost-as-trivial example:
Choose prime factors:
$$ \begin{aligned} p &= 61 \\ q &= 53 \end{aligned} $$
Calculate modulus \(n\):
$$ \begin{aligned} n &= p \cdot q \\ & = 61 \cdot 53 \\ & = 3233 \end{aligned} $$
Calculate \( \lambda (n) \):
$$ \begin{aligned} \lambda (n) &= l.c.m. (p - 1, q - 1)\\ &= l.c.m. ( 60, 52 )\\ &= \left( \frac{ | 60 \cdot 52 | }{ g.c.d. ( 60, 52 ) } \right)\\ &= \left( \frac{ 3120 }{ 4 } \right) \\ &= 780 \end{aligned} $$
Select \(e\) and calculate \(d\):
$$ \begin{aligned} e &= 17 \\ d &= 2753 \end{aligned} $$ Here \( d \) was computed using the Euler totient \( \varphi (n) = 60 \cdot 52 = 3120 \) rather than \( \lambda (n) \), as allowed above: \( 17 \cdot 2753 = 46801 = 15 \cdot 3120 + 1 \). Working modulo \( \lambda (n) = 780 \) instead gives the smaller equivalent exponent \( d = 413 \), since \( 2753 \equiv 413 \pmod{780} \); either value decrypts correctly.
For message \( m = 123 \), sender calculates and sends:
$$ \begin{aligned} c & = (123^{17})\mod{3233} \\ & = 337587917446653715596592958817679803\mod{3233} \\ & = 855 \end{aligned} $$
Receiver then calculates:
$$ \begin{aligned} m &= (855^{2753})\mod{3233} \\ & = 123 \end{aligned} $$
The number \(855^{2753}\) is enormous, approximately \( 5 \times 10^{8071} \), but you can do these calculations with the bc command. Leave off the common -l option so it will be set up for large integer arithmetic rather than floating-point:
$ bc
(123^17) % 3233
(855^2753) % 3233
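Python's built-in pow() gives the same results, and its three-argument form does the modular exponentiation directly, without ever materializing the 8000-plus-digit intermediate value:

$ python3
>>> pow(123, 17, 3233)      # c = m^e mod n
855
>>> pow(855, 2753, 3233)    # m = c^d mod n
123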
Post-Quantum Cryptography
Asymmetric ciphers rely on "trap-door problems", math problems that are enormously more difficult to solve in one direction than the other. RSA uses the factoring problem, while elliptic-curve cryptography relies on the difficulty of the elliptic-curve discrete logarithm problem.
If an attacker could factor an RSA modulus \(n\) into its prime factors \( p \cdot q \), they could use the public key \(e\) to derive the private key \(d\). We are gambling on two assumptions to keep us safe.
First, that factoring is and will remain too difficult — factoring \( n = p \cdot q \) would always take too long. Either the attacker would get frustrated and give up, or if they did keep at it until they succeeded, it would have taken so long that we no longer care about those secrets. Factoring and prime numbers have fascinated mathematicians since Ancient Greece, so we aren't too worried about a sudden mathematical discovery.
Second, that no mathematical shortcut will be discovered: no one will figure out an alternative method to find the private key \(d\) without having to factor \( n = p \cdot q \). RSA is intentionally simple enough that everyone feels reasonably comfortable about this assumption. But you can't prove the non-existence of this type of attack.
There is, however, a growing third worry. Algorithms have been devised to solve some enormously difficult problems when run on a general-purpose quantum computer. One example is Shor's algorithm. It could solve the factoring problem in polynomial time, an enormous speed-up. More recent work has shown that it could also be used to solve the discrete logarithm problem used by ECC. We don't yet have a general-purpose quantum computer, one that has enough stable qubits to solve non-trivial problems and run Shor's algorithm. But advances continue.
In 2015 the NSA announced that ECC should not be counted on as a long-term fallback for RSA in the face of quantum computing cryptanalysis, to the point that government agencies and contractors who had not yet migrated from RSA to ECC shouldn't bother. They later modified the page, and later still took it down; see the archived update.
We need post-quantum or quantum-safe or quantum-resistant asymmetric ciphers. Several families of post-quantum cipher algorithms are being explored: lattice-based cryptography, code-based cryptography, multivariate polynomial cryptography, and others. See the Post-Quantum Crypto conference series for details.
U.S. NIST started a project to standardize Post-Quantum Cryptography algorithms in 2016. They hope to have draft standards out in 2022 to 2024.
The Open Quantum Safe project is an open-source project supporting the development and prototyping of quantum-safe or quantum-resistant cryptographic algorithms. They have the open-source liboqs C library ready for download and use, along with prototype integrations into protocols and applications including OpenSSL. See the 2016/2017 paper for a detailed description of the project.
Post-quantum asymmetric cryptography would be used only to encrypt a randomly generated session key that will be used with a symmetric cipher. Symmetric ciphers are thought to be relatively safe against quantum computing attacks. Grover's algorithm on a quantum computer would speed up the search for a symmetric key, but only to the point that keys would need to be doubled in length to preserve the same resistance to attack.
The only alternative to a quantum-safe or quantum-resistant asymmetric cipher requires physics.
Quantum Key Distribution or QKD is a physics-based method to securely communicate a randomly selected symmetric key to the other party. For a discussion of its security, see "The security of practical quantum key distribution", Reviews of Modern Physics, 81, pg 1301, 29 Sep 2009 (also here).
QKD uses single-photon signaling between the legitimate users at either end of a fibre link. An intruder could tap the fibre and monitor the key exchange, but that would corrupt the signal and warn both ends that the key was compromised and must not be used.
Many people think it sounds like science fiction or at least impractical, but it's been in use since 2004. In that year it was used to protect a transaction between the Vienna City Hall and the headquarters of Bank Austria Creditanstalt.
That seems to me to have been a proof of concept organized by the city to support development in a local university. But since 2007 the government of Geneva, Switzerland, has used QKD to protect election results.
In 2007, China began deploying a QKD network in Beijing. A larger network with over 45 nodes began operating in Hefei in 2010. Another large network began operating in Jinan in 2014, joining 28 organizations, including government agencies such as the China Banking Regulatory Commission plus commercial customers including the China Industrial and Commercial Bank, and the Xinhua News Agency. In 2016 China was planning to complete the world's longest QKD network with a 2000-kilometer backbone joining Beijing, Jinan, Hefei, and Shanghai.
In 2010, Japan began operating a QKD network developed by NEC, Toshiba, NTT and others and joining several nodes in Tokyo.
In 2013, Battelle Memorial Institute deployed a QKD network joining its headquarters in Columbus, Ohio, to a manufacturing facility 20 miles away.
In 2015 ETSI, the European Telecommunications Standards Institute, published a number of quantum key distribution specifications.
In 2016, a South Korean QKD network began operating between Seoul, Bundang, and Suwon.
In 2016, China was working on QKD between satellites and ground stations. See "China's quantum space pioneer: We need to explore the unknown", Nature, 13 Jan 2016.
In 2017, China demonstrated QKD between orbit and the ground in their Liángzĭ kēxué shíyàn wèixīng or Quantum Experiments at Space Scale. The Chinese Academy of Sciences operates the Micius satellite and the Chinese ground stations, while the University of Vienna and the Austrian Academy of Sciences run the European ground stations.
Coverage: Wikipedia, Science, Nature, BBC, Reuters, CNBC.
By 2020, the range was extended to over 1,100 kilometers.
Coverage: New Scientist, Nature.
ID Quantique in Geneva, Switzerland, sells QKD systems plus random-number generators based on quantum physics.
Related pages: Just Enough Cryptography, How Elliptic-Curve Cryptography Works, Selecting a Cipher and Mode.
Abstract: Abstract In this article, a historical overview of the development of the Periodic Table has been sketched. After Mendeleev published his Periodic Table in 1869, 55 more elements have been discovered. Of these 55 elements, 35 are radioactive; most of them never existed on Earth earlier. The excitement of the discovery of these unstable elements has been emphasized in this article. In conclusion, the dynamicity of the Periodic Table and its future have been projected.
Status of nuclear physics behind nucleosynthesis processes: The role of exotic neutron-rich nuclei
Abstract: Abstract We give a brief overview of the current status of important nuclear physics inputs, like reaction rates, in hydrostatic and explosive nucleosynthesis. Recently, it has been proposed that exotic neutron-rich nuclei play an important role even in the formation of heavy elements via the r-process. The main problems here are identification, abundance estimation of seed nuclei in these processes, and their pathways. We will try to highlight how improved nuclear structure and reaction calculations can affect our present understanding of radiative capture rates of light-mass and medium-mass nuclei, which in turn can drastically influence the abundance of heavier-mass elements.
Oxygen abundances of carbon-enhanced stellar population in the halo
Abstract: Abstract The large fraction of carbon-enhanced metal-poor (CEMP) stars at lower metallicities makes them an interesting class of objects to be probed further in greater detail. They show different abundance patterns of neutron-capture elements and based on that CEMP stars are further divided into four categories. Abundances of C, N and O, along with other elements, are required to understand the different nucleosynthetic origins of the subclasses and their progenitors. We studied nine bright carbon-enhanced stars from the Milky Way halo in a metallicity range from −0.8 to −2.5. They show enhancement in C, N, O and Ba and exhibit radial velocity variation. This indicates the presence of a binary companion which might have contributed to the enhanced carbon and s-process abundance through mass transfer during its asymptotic-giant-branch (AGB) phase of evolution. Their abundance pattern of C, N and O favors low-mass nature for their binary companion.
Evolution of lithium in low-mass giants: an observational perspective
Abstract: Abstract The overabundance of lithium in low-mass red giants has been a topic of interest for over four decades. Low-mass stars are expected to destroy lithium gradually throughout their lifetimes. Against this expectation, about \(1\%\) of red giants in the Galaxy show anomalously large Li which, in the literature, are known as lithium-rich giants. The advent of large-scale stellar surveys (LAMOST, GALAH, Kepler, Gaia) coupled with high-resolution spectra enabled to find important clues about Li enhancement origin in red giants. These new studies suggest Li enhancement is mostly associated with the red clump region, post-He-flash. Here, we will describe our recent results along with current updates in the field.
Merged white dwarfs and nucleosynthesis
Abstract: Abstract Orbital decay mechanisms argue that double white dwarf mergers are inevitable, but extremely rare. Whilst some mergers result in explosions, the survivors re-ignite helium and burn brightly for tens of thousands or millions of years. Candidate survivors include extreme helium stars, R CrB variables and various classes of helium-rich subluminous star. Nuclear waste on the survivors' surfaces provides evidence of the stars' nuclear history prior to and their nucleosynthesis during the merger. Extensive and deep spectroscopic surveys offer rich prospects for future discoveries.
UVIT/AstroSat studies of blue straggler stars and post-mass transfer systems in star clusters: detection of one more blue lurker in M67
Abstract: Abstract The blue straggler stars (BSSs) are main-sequence (MS) stars, which have evaded stellar evolution by acquiring mass while on the MS. The detection of extremely low mass (ELM) white dwarf (WD) companions to two BSSs and one yellow straggler star (YSS) from our earlier study using UVIT/AstroSat, as well as WD companions to main-sequence stars (known as blue lurkers) suggest a good fraction of post-mass transfer binaries in M67. Using deeper UVIT observations, here we report the detection of another blue lurker in M67, with an ELM WD companion. The post-mass transfer systems with the presence of ELM WDs, including BSSs, are formed from Case A/B mass transfer and are unlikely to show any difference in surface abundances. We find a correlation between the temperature of the WD and the \(v\ \sin i\) of the BSSs. We also find that the progenitors of the massive WDs are likely to belong to the hot and luminous group of BSSs in M67. The only detected BSS+WD system by UVIT in the globular cluster NGC 5466 has a normal WD and suggests that open cluster like environment might be present in the outskirts of low density globular clusters.
i-Process nucleosynthesis: Observational evidences from CEMP stars
Abstract: Abstract The surface chemical compositions of a large fraction of carbon-enhanced metal-poor (CEMP) stars, the so-called CEMP-r/s stars, are known to exhibit enhancement of both s-process and r-process elements. For these stars, the heavy-element abundances cannot be explained either by s-process or r-process nucleosynthesis alone, as the production sites of s-process and r-process elements are very different, and these two processes produce distinct abundance patterns. Thus, the observational evidence of the double enhancement seen in CEMP-r/s stars remains a puzzle as far as the origin of the elements is concerned. In this work, we have critically analysed the observed abundances of heavy elements in a sample of eight CEMP-r/s stars from the literature to trace the origin of the observed double enhancement. Towards this, we have conducted a parametric-model-based analysis to delineate the contributions of s-process and r-process nucleosynthesis to the observed elemental abundances. We have further examined if the i-process (intermediate-process) nucleosynthesis that occurs at high neutron density (n \({\sim }\,10^{15}\) cm \(^{-3}\) ) produced during proton ingestion from a H-rich envelope to the intershell region of an AGB star, which is capable of producing both r-process and s-process elements in a single stellar site, could explain the observed abundance patterns of the sample stars. Our analysis shows that the observed abundance patterns of the selected sample of CEMP-r/s stars could be fairly well reproduced using the i-process model yields.
Carbon-enhanced metal-poor stars enriched in s-process and r-process
Abstract: Abstract We present an on-going project consisting of analysis of a sample of twenty-five metal-poor stars, most of them carbon-enriched and thus tagged carbon-enhanced metal-poor (CEMP) stars, observed with the high-resolution HERMES spectrograph mounted on the Mercator telescope (La Palma), the UVES spectrograph on VLT (ESO Chile), or the HIRES spectrograph on KECK (Hawaii). This sample consists of CEMP-s stars, which are CEMP stars enriched in slow-neutron-capture (s-process) elements, as well as CEMP-rs stars enriched with both s-process and rapid-neutron-capture (r-process) elements. We also included an r-process-enriched star for comparison purposes. The origin of the abundance differences between CEMP-s and CEMP-rs stars is presently unknown. It has been claimed that the i-process (intermediate nucleosynthesis process), whose site still remains to be identified, could better reproduce CEMP-rs abundances than the s-process. We aim at understanding whether the i-process and its putative site can reproduce the abundance pattern measured in CEMP-rs stars.
Post-AGB stars as tracers of the origin of elements in the universe
Abstract: Abstract The chemical evolution of galaxies is governed by the chemical yields from stars, especially from Asymptotic Giant Branch (AGB) stars. This underlines the importance of understanding how AGB stars produce their elements by obtaining accurate stellar nucleosynthetic yields. Although AGB nucleosynthesis has general validity, critical uncertainties (such as the treatment of convective-driven mixing processes and mass loss) exist in current stellar models. Observations from post-Asymptotic Giant Branch (post-AGB) stars serve as excellent tools to quantify the strongest discrepancies, and eliminate crucial uncertainties that hamper stellar modelling. Our recent studies of post-AGB stars have shown an intriguing chemical diversity that ranges from stars that are extremely enriched in carbon and s-process elements to the discovery of the first post-AGB star with no traces of carbon nor s-process elements. Additionally, AGB nucleosynthesis is significantly affected by a binary companion. These results reflect the complexity that surrounds the element production in AGB stars. In this review, I will briefly present the intriguing chemical diversity observed in post-AGB stars and its implications on element/isotope production in AGB stars and stellar nucleosynthetic yields.
Recent advances in RV Tauri stars
Abstract: Abstract The availability of multi-wavelength observations and parallaxes from the space missions and very comprehensive models of AGB evolution that include the accretion of matter from the circumbinary disc have strongly impacted our understanding of these enigmatic objects. The important developments made in the recent times are summarized here. The revised estimates of luminosities (derived from better-defined Spectral Energy Distributions (SEDs) and new distances from Gaia DR2) further support the opinion that RV Tauri stars contain a mixture of post-AGB stars and post-RGB stars. Their locations in HR diagram also indicate that the instability strip (IS) of RV Tauri stars have a broader extension in the cooler edge than that of classical Cepheids. A new P−L relation has been calibrated for the galactic Cepheids which have a steeper slope than that derived for the Population II Cepheids and RV Tauri stars in Magellanic clouds . The most significant chemical peculiarity exhibited by RV Tauri stars and other post-AGB stars is the selective depletion of refractory elements that correlates with their condensation temperatures. A large range in the size of depletion as well as in the shapes of the depletion curves has been observed. Earlier models to explain this effect were mostly qualitative. Recent investigators model these depletions using evolutionary codes (e.g. MESA) to evolve stars in the post-AGB phase, while including accretion of metal-poor gas from circumbinary disc. These authors model the accretion rate onto a the binary post-AGB star from a viscously evolving disc for a range of initial accretion rates and disc masses. It is reported that large initial accretion rates and disc masses are required to explain the large depletion and saturated depletion curve that could extend the evolution time of post-AGB star. It is also proposed that the unsaturated depletion curve (with a plateau) are likely to be caused by post-RGB stars.
Recurrent novae: Single degenerate progenitors of Type Ia supernovae
Abstract: Abstract Type Ia supernovae are the result of explosive thermonuclear burning in CO white dwarfs. The progenitors of the Ia supernovae are white dwarfs in an interacting binary system. The donor companion is either a degenerate star (white dwarf) or a non-degenerate star (e.g. red giant). Recurrent novae are interacting binaries with a massive white dwarf accreting from either a main sequence, slightly evolved, or a red giant star. The white dwarf in these systems is a massive, hot white dwarf, accreting at a high rate. Recurrent novae are thought to be the most promising single degenerate progenitors of Type Ia supernovae. Presented here are the properties of a few recurrent novae based on recent outbursts. The elemental abundances and their distribution in the ejected shell are discussed.
Chemical elements in the Universe: Origin and evolution
On the cosmic origin of fluorine
Abstract: Abstract The cosmic origin of fluorine, the ninth element of the periodic table, is still under debate. The reason for this fact is the large difficulties in observing stellar diagnostic lines, which can be used for the determination of the fluorine abundance in stars. Here we discuss some recent work on the chemical evolution of fluorine in the Milky Way and discuss the main contributors to the cosmic budget of fluorine.
Galactic chemical evolution and chemical tagging with open clusters
Abstract: Abstract The article presents the consolidated results drawn from the chemical composition studies of Reddy et al. (2012, 2013, 2015, 2016) and Reddy & Lambert (2019), who through the high-dispersion echelle spectra ( \(R = 60000\) ) of red giant members in a large sample of Galactic open clusters (OCs), derived stellar parameters and chemical abundances for 24 elements by either line equivalent widths or synthetic spectrum analyses. The focus of this article is on the issues with radial-metallicity distribution and the potential chemical tags offered by OCs. Results of these studies confirm the lack of an age–metallicity relation for OCs but argue that such a lack of trend for OCs arise from the limited coverage in metallicity compared to that of field stars which span a wide range in metallicity and age. Results demonstrate that the sample of clusters constituting a steep radial metallicity gradient of slope −0.052 ± 0.011 dex kpc \(^{-1}\) at R \(_\mathrm{gc}<\) 12 kpc are younger than 1.5 Gyr and located close to the Galactic midplane ( \( z <\,\) 0.5 kpc). Whereas the clusters describing a shallow slope of −0.015 ± 0.007 dex kpc \(^{-1}\) at R \(_\mathrm{gc}>\) 12 kpc are relatively old with a striking spread in age and height above the midplane (0.5 \(\,< z <\,\) 2.5 kpc). Results of these studies reveal that OCs and field stars yield consistent radial metallicity gradients if the comparison is limited to samples drawn from the similar vertical heights. The computation of Galactic orbits reveals that the outer disk OCs were actually born inward of 12 kpc but the orbital eccentricity has taken them to present locations very far from their birthplaces. Published results for OCs show that the abundances of the heavy elements La, Ce, Nd and Sm but not so obviously Y and Eu vary from one cluster to another across a sample all having about the solar metallicity. For La, Ce, Nd and Sm the amplitudes of the variations at solar metallicity scale approximately with the main s-process contribution to solar system material. Consideration of published abundances of field stars suggest that such a spread in heavy element abundances is present for the thin and thick disk stars of different metallicity. This result provides an opportunity to chemically tag stars by their heavy elements and to reconstruct dissolved open clusters from the field star population.
Abundances of neutron-capture elements in CH and carbon-enhanced metal-poor (CEMP) stars
Abstract: Abstract All the elements heavier than Fe are produced either by the slow (-s) or rapid (-r) neutron-capture process. The neutron density prevailing in the stellar sites is one of the major factors that determines the type of neutron-capture processes. We present the results based on the estimates of corrected value of absolute carbon abundance, [C/N] ratio, carbon isotopic ratio and [hs/ls] ratio obtained from the high-resolution spectral analysis of six stars that include both CH stars and CEMP stars. All the stars show enhancement of neutron-capture elements. Location of these objects in the A(C) vs. [Fe/H] diagram shows that they are Group I objects, with external origin of carbon and neutron-capture elements. Low values of carbon isotopic ratios estimated for these objects may also be attributed to some external sources. As the carbon isotopic ratio is a good indicator of mixing, we have used the estimates of \(^{12}\) C/ \(^{13}\) C ratios to examine the occurrence of mixing in the stars. While the object HD 30443 might have experienced an extra mixing process that usually occurs after red giant branch (RGB) bump for stars with log(L/L \(_{\odot }\) ) > 2.0, the remaining objects do not show any evidence of having undergone any such mixing process. The higher values of [C/N] ratios obtained for these objects also indicate that none of these objects have experienced any strong internal mixing processes. Based on the estimated abundances of carbon and the neutron-capture elements, and the abundance ratios, we have classified the objects into different groups. While the objects HE 0110−0406, HD 30443 and CD−38 2151 are found to be CEMP-s stars, HE 0308−1612 and HD 176021 show characteristic properties of CH stars with moderate enhancement of carbon. The object CD−28 1082 with enhancement of both r- and s-process elements is found to belong to the CEMP-r/s group.
Fluorine detection in hot extreme helium stars
Abstract: Abstract The origin and evolution of hydrogen-deficient stars are not yet adequately understood. Their chemical peculiarities, along with hydrogen-deficiency, makes them stand out from the rest and sheds light on their possible origin. Severe fluorine enrichment (of the order of 800–8000) is one such characteristic feature of a class of hydrogen deficient stars, mainly the RCBs (R Coronae Borealis stars) and cool EHes (Extreme Helium stars) which enforces their close connection. For hot EHes, this relationship with the cooler EHes, based on their fluorine abundance is unexplored. Here, first estimates of fluorine abundances in hot EHes are presented and discussed in the light of their cooler counterparts to try to establish an evolutionary connection. The relation between these fluorine estimates with the other elemental abundances observed in these stars plays a pivotal role to predict the formation and evolution of these exotic stars.
New measurements of cross sections and S-factors for the \(d(p,\gamma)^{3}\text{He}\) reaction at BBN energies
Abstract: Abstract This communication is a summary of our measurements of cross sections and astrophysical S-factors for radiative proton capture on deuteron. The measurements are a part of a new program to study light-ion induced nuclear capture and inelastic scattering reactions relevant to nucleosynthesis and astrophysics. We are primarily interested in the capture reactions relevant to primordial or Big Bang Nucleosynthesis (BBN). Section 1 provides a brief and general overview of nuclear astrophysics and the primary experimental challenges in the measurements of cross sections and S-factors for reactions relevant to nuclear astrophysics. The next section discusses the significance of the \(d(p,\gamma )^{3}\text{He}\) reaction in context of BBN and the need for generating a more precise data in the relevant energy window. The subsequent section is devoted to the experiment and results of measurements of cross sections and astrophysical S-factors for the \(d(p,\gamma )^{3}\text{He}\) reaction at three new BBN energies. One salient features of the measurements is the use of large volume LaBr \(_{3}\) :Ce scintillation detector for measurement of the capture \(\gamma \) -rays. To the best of our knowledge, capture \(\gamma \) -rays for this reaction had so far been measured with NaI(Tl) or HPGe detectors only. The detection efficiency of the detector has been measured experimentally for different monochromatic \(\gamma \) -ray energies. In addition, realistic GEANT4 Monte Carlo simulations have been carried out to generate the response of the detector for \(\gamma \) -ray energies of interest. The measured cross section and astrophysical S(E)-factor for the three incident proton energies are found to be in agreement with the overall trend of the global data set for the BBN region, reported in the literature. The measured S-factors are also found to be in agreement with recent microscopic calculations of Marcucci et al. (2016).
Chemical composition of the solar surface
Abstract: Abstract The Sun provides a standard reference against which we compare the chemical abundances found anywhere else in the Universe. Nevertheless, there is not a unique 'solar' composition, since the chemical abundances found in the solar interior, the photosphere, the upper atmosphere, or the solar wind, are not exactly the same. The composition of the solar photosphere, usually preferred as a reference, changes with time due to diffusion, convection, and probably accretion. In addition, we do not know the solar photospheric abundances, inferred from the analysis of the solar spectrum using model atmospheres, with high accuracy, and uncertainties for many elements exceed 25%. This article gives an overview of the methods and pitfalls of spectroscopic analysis, and discusses the chemistry of the Sun in the context of the solar system.
[Rb/Zr] ratio in Ba stars as a diagnostic of the companion AGB stellar
Abstract: Abstract Understanding nucleosynthesis in and evolution of asymptotic giant branch (AGB) stars is of primary importance as these stars are the main producers of some of the key elements in the Universe. They are the predominant sites for slow-neutron-capture nucleosynthesis (s-process). The exact physical conditions and nucleosynthetic processes occurring in the interior of AGB stars are not clearly understood, and that hinders better understanding of the contribution of these stars to the Galactic chemical enrichment. Extrinsic-variable stars that are known to have received products of AGB phase of evolution via binary mass-transfer mechanisms are vital tools in tracing AGB nucleosynthesis. The [Rb/Zr] ratio is an important diagnostic to understand the average neutron density at the s-process site and provides important clues to the mass of companion AGB stars in binaries. In this work we have presented estimates of [Rb/Zr] ratios based on high-resolution spectroscopic analysis for a sample of Ba stars, and discussed how the ratio can be used to understand the characteristics of the AGB star. Results from an analysis based on a parametric model to confirm the mass of the companion AGB star are also presented.
The horizontal branch morphology of the globular cluster NGC 1261 using AstroSat
Abstract: Abstract We present the results obtained from the UV photometry of the globular cluster NGC 1261 using far-UV (FUV) and near-UV (NUV) images acquired with the Ultraviolet Imaging Telescope (UVIT) on-board the AstroSat satellite. We utilized the UVIT data combined with HST, GAIA, and ground-based optical photometric data to construct the different UV colour-magnitude diagrams (CMDs). We detected blue HB (BHB), and two extreme HB (EHB) stars in FUV, whereas full HB, i.e., red HB (RHB), BHB as well as EHB is detected in NUV CMDs. The 2 EHB stars, identified in both NUV and FUV, are confirmed members of the cluster. The HB stars form a tight sequence in UV-optical CMDs, which is almost aligned with Padova isochrones. This study sheds light on the significance of UV imaging to probe the HB morphology in GCs.