Connection between sum of normally distributed random variables and mixture of normal distributions
If you have two independent random variables that are normally distributed (not necessarily jointly so), then their sum is also normally distributed, which e.g. means that its excess kurtosis is $0$.
On the other hand, a mixture of one-dimensional normal distributions can display non-trivial higher-order moments such as skewness and excess kurtosis (fat tails), as well as multi-modality, even when the components themselves exhibit none of these features (see also this video for a simple example). This means that the result doesn't have to be normally distributed.
My question
How can you reconcile these two results and what is their connection?
normal-distribution mixture
vonjd
$\begingroup$ A mixture of two normals is not the sum of two normal random variables. It's a random variable whose PDF and CDF are weighted sums of the individual PDFs and CDFs. $\endgroup$ – Max Jul 29 '12 at 16:43
$\begingroup$ @Max, that's an answer! $\endgroup$ – Dilip Sarwate Jul 29 '12 at 16:49
$\begingroup$ I'll write it up now. $\endgroup$ – Max Jul 29 '12 at 17:05
$\begingroup$ If you have suspicions that something might not be true, it's good to look for simple (counter)examples. Consider $X_1$ and $X_2$ independent Bernoulli random variables. A mixture of these is still Bernoulli (why?), but the sum of them could, in general, take the value $2$ (in addition to $0$ or $1$). So, clearly they can't be the same thing. :) $\endgroup$ – cardinal Jul 29 '12 at 21:56
$\begingroup$ You don't have to go to higher moments to see that not all mixtures are normal. Mixtures of normal distributions don't have to be unimodal, although they can be. In addition, since distinct normal distributions have distinct tail asymptotics, you can read off the components from the tails. $\endgroup$ – Douglas Zare Jul 30 '12 at 4:48
It's important to make the distinction between a sum of normal random variables and a mixture of normal random variables.
As an example, consider independent random variables $X_1\sim N(\mu_1,\sigma_1^2)$, $X_2\sim N(\mu_2,\sigma_2^2)$, $\alpha_1\in\left[0,1\right]$, and $\alpha_2=1-\alpha_1$.
Let $Y=X_1+X_2$. $Y$ is the sum of two independent normal random variables. What's the probability that $Y$ is less than or equal to zero, $P(Y\leq0)$? It's simply the probability that a $N(\mu_1+\mu_2,\sigma_1^2+\sigma_2^2)$ random variable is less than or equal to zero because the sum of two independent normal random variables is another normal random variable whose mean is the sum of the means and whose variance is the sum of the variances.
Let $Z$ be a mixture of $X_1$ and $X_2$ with respective weights $\alpha_1$ and $\alpha_2$. Notice that $Z\neq \alpha_1X_1+\alpha_2X_2$. The fact that $Z$ is defined as a mixture with those specific weights means that the CDF of $Z$ is $F_Z(z)=\alpha_1F_1(z)+\alpha_2F_2(z)$, where $F_1$ and $F_2$ are the CDFs of $X_1$ and $X_2$, respectively. So what is the probability that $Z$ is less than or equal to zero, $P(Z\leq0)$? It's $F_Z(0)=\alpha_1F_1(0)+\alpha_2F_2(0)$.
Dilip Sarwate
Max
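To make the contrast in the answer above concrete, here is a minimal Python sketch of the two calculations (the specific means, standard deviations, and weights are illustrative assumptions, not values from the question):

```python
# Contrast the sum Y = X1 + X2 with the mixture Z whose CDF is a1*F1 + a2*F2.
import numpy as np
from scipy.stats import norm

mu1, s1 = 0.0, 1.0   # X1 ~ N(mu1, s1^2)  (illustrative values)
mu2, s2 = 3.0, 2.0   # X2 ~ N(mu2, s2^2)
a1 = 0.5             # mixture weight on X1
a2 = 1.0 - a1        # mixture weight on X2

# Sum: Y ~ N(mu1 + mu2, s1^2 + s2^2), so P(Y <= 0) is a single normal CDF.
p_sum = norm.cdf(0.0, loc=mu1 + mu2, scale=np.sqrt(s1**2 + s2**2))

# Mixture: P(Z <= 0) = F_Z(0) = a1*F1(0) + a2*F2(0), a weighted sum of CDFs.
p_mix = a1 * norm.cdf(0.0, loc=mu1, scale=s1) + a2 * norm.cdf(0.0, loc=mu2, scale=s2)

print(f"P(Y <= 0) = {p_sum:.4f}")  # sum of normals
print(f"P(Z <= 0) = {p_mix:.4f}")  # mixture of normals
```

The two probabilities generally differ, which is the whole point: the sum and the mixture are different random variables built from the same two components.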
Max has correctly pointed out the difference between sums of independent normals and mixtures of normals. However, he did not show directly how that answers your question.
Simply stated, the two distributions are vastly different. The sum is normally distributed; a mixture of two or three normals is not. In fact, a two-component mixture will be bimodal when the components have very different means and nearly equal weights. When one component has a high weight and the other, carrying a small weight, lies far to the right, the mixture will be skewed and can have excess kurtosis larger than that of a normal distribution. In the case of three components where the middle one has weight 0.8 and the other two, each with weight 0.1, are shifted equally to the right and to the left, the distribution will be symmetric with heavy tails. Plot some of the densities and the non-normality will be obvious.
Michael Chernick
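As a supplement to the answer above (not part of the original post), here is a small simulation sketch of the three-component example; the component shifts of ±3 and the unit variances are assumed values, since the answer leaves them unspecified:

```python
# Compare excess kurtosis of a 3-component normal mixture with that of a
# sum of two independent normals.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
n = 1_000_000

means = np.array([-3.0, 0.0, 3.0])   # equally shifted left/middle/right (assumed shifts)
weights = np.array([0.1, 0.8, 0.1])  # weights from the answer

# Mixture: first draw a component label for each sample, then draw from that component.
labels = rng.choice(3, size=n, p=weights)
z = rng.normal(loc=means[labels], scale=1.0)

# The sum of two independent standard normals stays exactly normal.
y = rng.normal(size=n) + rng.normal(size=n)

print("excess kurtosis of mixture:", kurtosis(z))  # clearly positive (heavy tails)
print("excess kurtosis of sum:    ", kurtosis(y))  # approximately 0
```

For these parameters the mixture's excess kurtosis works out to about 0.83 in closed form, while the sum's is exactly 0, matching the symmetric-with-heavy-tails description.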
$M=\{x: f(x)=0\}$ is orientable — non-vanishing form
Let $f\colon \mathbb{R}^n \to \mathbb{R}$ be a $\mathscr{C}^{\infty}$-function and let $M=\{x\in \mathbb{R}^n : f(x)=0\}$. Suppose that $df(p)\neq 0$ for all $p\in M$. Then $M$ is an orientable manifold (i.e. there exists a non-vanishing top form).
Let $x_1,\ldots,x_n$ denote the standard coordinates on $\mathbb{R}^n$. Let $p\in M$ and suppose that $\frac{\partial{f}}{\partial{x_i}}(p) \neq 0$. Let $\Psi_i(x_1,\ldots,x_n)=(x_1,\ldots, \hat{i},\ldots,x_n)$ denote the projection omitting the $i$-th coordinate. By the implicit function theorem, $\Psi_i$ is invertible locally around $p$; let $\varphi_i(x_1,\ldots,\hat{i},\ldots,x_n)$ denote the $i$-th component function of $\Psi_i^{-1}$. These charts are compatible and thus $M$ is a manifold. Let $dy_1,\ldots,\widehat{dy_i},\ldots,dy_n$ denote the basis of $T_p^\ast M$ induced by $\Psi_i$.
For proving that $M$ is orientable, it is suggested to look at $\omega_i=(-1)^{i} \frac{1}{\frac{\partial{f}}{\partial{x_i}}}\, dx_1\wedge \ldots \wedge \widehat{dx_i} \wedge \ldots \wedge dx_n$. The inclusion $\iota \colon M\hookrightarrow \mathbb{R}^n$ is differentiable and thus we may consider $\eta_i:=\iota^{\ast}(\omega_i)$. Clearly, $\iota^{\ast}(dx_k)=dy_k$ and therefore $$ \eta_i(p)=(-1)^{i} \frac{1}{\frac{\partial{f}}{\partial{x_i}}(p)} dy_1 \wedge \ldots \wedge \widehat{dy_i} \wedge \ldots \wedge dy_n. $$ It remains to prove that this is well-defined, i.e. if $\frac{\partial{f}}{\partial{x_j}}(p) \neq 0$, then $\eta_i(p)=\eta_j(p)$. Let $dz_1,\ldots, \widehat{dz_j},\ldots, dz_n$ denote the basis of $T_p^\ast M$ induced by $\Psi_j$. For $k\neq i,j$, we have $dy_k=\iota^\ast (dx_k)=dz_k$.
$\textbf{The problem}$ is to compute $dz_i$ in terms of $dy_1,\ldots,\widehat{dy_i},\ldots,dy_n$.
The coefficient $a_{\ell}$ in the unique expression $dz_i=\sum_{\ell \neq i} a_\ell dy_\ell$ is given by $$ a_{\ell} = \frac{\partial{(\Psi_i)_{\ell}}}{\partial{z_i}}=\frac{\partial{(\Psi_i \circ \Psi_j^{-1})_{\ell}}}{\partial{x_i}}. $$
Now, $$\Psi_i \circ \Psi_j^{-1}(x_1,\ldots,\widehat{x_j},\ldots,x_n)=(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_{j-1},\varphi_j(x_1,\ldots,\widehat{x_j},\ldots,x_n),x_{j+1},\ldots,x_n)$$
and thus $a_{\ell}=0$ for $\ell \neq j$. And for $\ell = j$, $$ a_j=\frac{\partial{\varphi_j(x_1,\ldots,\widehat{x_j},\ldots,x_n)}}{\partial{x_i}}.$$
$\textbf{Question}$ At this point I am stuck. How can I proceed?
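(A hedged sketch of one way to proceed, added for reference and not part of the original post: on $M$ the $i$-th coordinate is the function $z_i = x_i = \varphi_i(x_1,\ldots,\widehat{x_i},\ldots,x_n)$ of the $y$-coordinates, so the coefficients in $dz_i=\sum_{\ell \neq i} a_\ell\, dy_\ell$ are $a_\ell = \frac{\partial \varphi_i}{\partial x_\ell}$, and these are pinned down by implicit differentiation: since $f(x_1,\ldots,\varphi_i,\ldots,x_n)\equiv 0$ near $p$,
$$ 0 = \frac{\partial f}{\partial x_\ell} + \frac{\partial f}{\partial x_i}\,\frac{\partial \varphi_i}{\partial x_\ell} \qquad\Longrightarrow\qquad a_\ell = -\,\frac{\partial f/\partial x_\ell}{\partial f/\partial x_i}. $$
In the wedge product only the $\ell = j$ term survives, since $dy_\ell = dz_\ell$ already appears for $\ell \neq i,j$; moving the surviving $dy_j$ into its standard position costs a sign $(-1)^{j-i-1}$, and combining this with the factor $-\frac{\partial f/\partial x_j}{\partial f/\partial x_i}$ and the prefactor $(-1)^{j}\frac{1}{\partial f/\partial x_j}$ of $\eta_j$ gives exactly $\eta_i(p)=\eta_j(p)$.)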
real-analysis differential-geometry manifolds
To explain Thomas's answer a bit more: he chooses a basis $v_1,...,v_{n-1}$ for $T_pM$, which wlog we can assume is positively oriented. Since $\nabla f \not = 0$ and $\nabla f \not \in T_pM$, we can normalize it, and then $v_1,...,v_{n-1}, v:=\nabla f/\|\nabla f\|$ is a basis for $T_p\mathbb{R}^n$ which induces the positive orientation, i.e. if $(U, x^1,...,x^n)$ is a chart on $\mathbb{R}^n$ then $dx^1 \wedge \cdots \wedge dx^n$ gives the positive orientation, i.e.:
$$\alpha:=\underbrace{dx^1 \wedge \cdots \wedge dx^n}_{dV}(v_1,...,v_{n-1},v) >0$$
Now then, just define $\omega(v_1,...,v_{n-1}) := \alpha$ and this gives a non-vanishing top form.
Faraad Armwood
$\begingroup$ Thanks. It seems that we are implicitly using $\Lambda^{k}(T_p^\ast M) \cong (\Lambda^{k}(T_p M))^\ast \cong Alt^{k}(T_p M,\mathbb{R})$, and thus for a point $p\in M$, it is enough to say what $\omega(p) (v_1,\ldots,v_{n-1})$ is for a basis $v_1,\ldots,v_{n-1}$ of $T_pM$. Why do I need that $v_1,\ldots,v_{n-1},v$ is a basis of $T_p\mathbb{R}^n$; can't I define $\omega(p)(v_1,\ldots,v_{n-1}):=dV(v_1,\ldots,v_{n-1},v)$ for some arbitrary $v$? And how does it make sense to write $\nabla f \not\in T_p M$? $\endgroup$ – user363120 Mar 9 '17 at 14:16
$\begingroup$ Moreover, you consider $v_i$ as an element of $T_p\mathbb{R}^n$. By this you mean $\iota_p(v_i)$, where $\iota_p \colon T_pM \to T_p\mathbb{R}^n$ is induced by the inclusion? Why can we then say that $v_1,\ldots,v_{n-1}$ remain linearly independent in $T_p\mathbb{R}^n$ (just wondering, because it is not generally true that the induced map on the tangent spaces is injective if the map itself is injective)? Sorry for all the questions. $\endgroup$ – user363120 Mar 9 '17 at 14:23
$\begingroup$ @user363120 You are looking at this from a very technical perspective. But you need to understand the geometry in order to write down a proof. It is true because you have a globally defined nonzero normal vector, which allows you to reduce an $n$-form on the ambient space to an $(n-1)$-form you can restrict to $T_p M$. If you use a random $v$ then you ignore the information that $M$ is the zero set of $f$. Since $f$ is defined on the ambient space, a priori its gradient is not tangent to $M$, so it's not in $T_p M$; in fact it is normal to $T_p M$, as already mentioned. $\endgroup$ – Thomas Mar 9 '17 at 14:51
$\begingroup$ @Thomas Yes, I do look at this from a technical perspective. Before building up geometric intuition I want to make sure that I get things formally correct. So you are saying that in this case $\iota_p \colon T_p M \to T_p \mathbb{R}^n$ is injective? Is there any particular reason for normalizing $\nabla f$? $\endgroup$ – user363120 Mar 9 '17 at 15:59
$\begingroup$ @user363120 Yes, this inclusion is injective in this case (because such manifolds are embedded hypersurfaces). You don't have to normalize the gradient. I'm used to doing this since it makes some calculations easier, but you only need to know it's nowhere zero. You will get the same form multiplied by a nonzero scalar function. $\endgroup$ – Thomas Mar 9 '17 at 16:03
I would not try to do that in a coordinate system. You will have to patch them together and show that you can consistently keep the orientation, which is a complicated task.
Hint: $\frac{\nabla{f}}{|\nabla f|}$ is a nonvanishing unit normal vector field defined on all of $M$. Define the $(n-1)$-form $\omega$ on $TM$, for $v_1, \ldots, v_{n-1}$, by
$$\omega(v_1, \ldots, v_{n-1}):= dV(v_1, \ldots, v_{n-1},\frac{\nabla{f}}{|\nabla f|} )$$ where $dV$ is the volume form of the ambient space (which may be more or less the same as what you have written down, but it's defined without using coordinate patches).
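(A short remark, added for reference: the reason this $\omega$ is nowhere zero is that if $v_1,\ldots,v_{n-1}$ is a basis of $T_pM$, then, $\nabla f(p)$ being nonzero and normal to $T_pM$, the list $v_1,\ldots,v_{n-1},\nabla f(p)/|\nabla f(p)|$ is a basis of $T_p\mathbb{R}^n$, and the volume form $dV$ is nonzero on any basis of $T_p\mathbb{R}^n$; hence $\omega_p(v_1,\ldots,v_{n-1})\neq 0$.)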
$\begingroup$ Thanks for your reply. Would you mind explaining your notation? For me a differential $k$-form $\omega$ on $M$ is an assignment $p\mapsto \omega(p)\in \Lambda^{k} (T^{\ast}_p M)$. $\endgroup$ – user363120 Mar 9 '17 at 13:36
$\begingroup$ @user363120 For $p\in M$ choose $v_i \in T_p M$ and define, in that point, $\omega $ as in my answer. Note that $dV(p)\in \Lambda^n ((T_p\mathbb{R}^n)^*)$ $\endgroup$ – Thomas Mar 9 '17 at 13:48
Fine-Tuning in Cosmology
Edited by Yann Benétreau-Dupin (San Francisco State University)
Anthropic Principle (73)
Multiple Universes (37)
Observation in Cosmology (20)
Design and Observership in Cosmology, Misc (14)
The World's Haecceity is the Dual of My Thrownness. Jude Arnout Durieux - manuscript
We live in a contingent world, a world that could have been different. A common way to deal with this contingency is by positing the existence of all possibilities. This, however, doesn't get rid of the contingency – it merely moves it from the third-person view to the first-person view.
Anthropic Principle in Philosophy of Physical Science
Existentialism in Continental Philosophy
Fine-Tuning in Cosmology in Philosophy of Physical Science
Haecceitism in Metaphysics
Programming relativity as the mathematics of perspective in a Planck unit Simulation Hypothesis. Malcolm Macleod - manuscript
The Simulation Hypothesis proposes that all of reality is in fact an artificial simulation, analogous to a computer simulation. Outlined here is a method for programming relativistic mass, space and time at the Planck level as applicable for use in Planck Universe-as-a-Simulation Hypothesis. For the virtual universe the model uses a 4-axis hyper-sphere that expands in incremental steps (the simulation clock-rate). Virtual particles that oscillate between an electric wave-state and a mass point-state are mapped within this hyper-sphere, the oscillation driven by this expansion. Particles are assigned an N-S axis which determines the direction in which they are pulled along by the expansion, thus an independent particle motion may be dispensed with. Only in the mass point-state do particles have fixed hyper-sphere co-ordinates. The rate of expansion translates to the speed of light and so in terms of the hyper-sphere co-ordinates all particles (and objects) travel at the speed of light, time (as the clock-rate) and velocity (as the rate of expansion) are therefore constant, however photons, as the means of information exchange, are restricted to lateral movement across the hyper-sphere thus giving the appearance of a 3-D space. Lorentz formulas are used to translate between this 3-D space and the hyper-sphere co-ordinates, relativity resembling the mathematics of perspective.
Astrophysics in Philosophy of Physical Science
Computer Simulation, Misc in Philosophy of Computing and Information
General Relativity in Philosophy of Physical Science
Observation in Cosmology in Philosophy of Physical Science
Origin of the Universe in Philosophy of Physical Science
Philosophy of Mathematics, Misc in Philosophy of Mathematics
Physics of Time in Philosophy of Physical Science
Simulation Argument in Philosophy of Computing and Information
Simulation Hypothesis in Philosophy of Computing and Information
Space and Time, Misc in Philosophy of Physical Science
The Big Bang in Philosophy of Physical Science
The Early Universe, Misc in Philosophy of Physical Science
Why is there Something? in Philosophy of Physical Science
Mathematical Constants of Natural Philosophy. Michael A. Sherbon - manuscript
Plato's theory of everything is an introduction to a Pythagorean natural philosophy that includes Egyptian sources. The Pythagorean Table and Pythagorean harmonics from the ancient geometry of the Cosmological Circle are related to symbolic associations of basic mathematical constants with the five elements of Plato's allegorical cosmology: Archimedes constant, Euler's number, the polygon circumscribing limit, the golden ratio, and Aristotle's quintessence. Quintessence is representative of the whole, or the one in four, extraneously considered a separate element or fifth force. This relationship with four fundamental interactions or forces also involves the correlation of constants with the five Platonic solids: tetrahedron, hexahedron, octahedron, icosahedron, and dodecahedron. The values of several fundamental physical constants are also calculated, and a basic equation is given for a unified physical theory in the geometric universe of Plato's natural philosophy.
Design and Observership in Cosmology, Misc in Philosophy of Physical Science
Philosophy of Physical Science, Misc in Philosophy of Physical Science
Philosophy of Physics, General Works in Philosophy of Physical Science
Symmetry in Physics in Philosophy of Physical Science
The multiverse doesn't affect the Anthropic argument. Jude Arnout Durieux
Often, the possibility of a multiverse is given as a defeater for the anthropic argument: if there are many, possibly even an infinite number of worlds, then the probability of having a life-permitting world is no longer low. This article shows that the possibility of a multiverse doesn't defeat the anthropic argument.
Multiple Universes in Philosophy of Physical Science
Fine-tuning arguments and the concept of law. John Halpin - manuscript
The Myopic Anthropic Principle: an attempt to show that the popular anthropic reasoning of our time — often taken to show that laws of nature are fine-tuned by a god for us — should be seen merely as an indication of fine-tuning by us. This preference for short-sightedness in this case is shown (shown?) to support the best-system account of scientific law.
Best-Systems Analyses in General Philosophy of Science
Dispositions and Laws in Metaphysics
Splitting the (In)Difference: Why Fine-Tuning Supports Design. Chris Dorst & Kevin Dorst - forthcoming - Thought: A Journal of Philosophy 11 (1):14-23.
Given the laws of our universe, the initial conditions and cosmological constants had to be "fine-tuned" to result in life. Is this evidence for design? We argue that we should be uncertain whether an ideal agent would take it to be so—but that given such uncertainty, we should react to fine-tuning by boosting our confidence in design. The degree to which we should do so depends on our credences in controversial metaphysical issues.
Design Arguments for Theism, Misc in Philosophy of Religion
Epistemology, Misc in Epistemology
God and the Bayesian Conception of Evidence. David Manley - forthcoming - Religious Studies.
Contemporary arguments for and against the existence of God are often formulated within a broadly Bayesian framework. Arguments of this sort focus on a specific feature of the world that is taken to provide probabilistic evidence for or against the existence of God: the existence of life in a 'fine-tuned' universe, the magnitude of suffering, divine hiddenness, etc. In each case, the idea is that things were more likely to be this way if God existed than if God did not exist—or the other way around. Less attention, however, has been paid to the deeper question of what it takes for something to count as evidence for or against the existence of God. What exactly is being claimed when it is said that some feature of the world is more or less likely given the existence of God, and how should we go about assessing such a claim? This paper is about epistemological issues—and in particular, certain potential cognitive errors—that arise when we reason probabilistically about the existence of God. The moral is not that we should refrain from reasoning in this way, but that we should be mindful of potential errors when we do.
Arguments Against Theism in Philosophy of Religion
Evidentialism in Epistemology
Probability in the Philosophy of Religion, Misc in Philosophy of Probability
The Problem of Old Evidence in Philosophy of Probability
Merely Statistical Evidence: When and Why It Justifies Belief. Paul Silva - forthcoming - Philosophical Studies.
It is one thing to hold that merely statistical evidence is *sometimes* insufficient for rational belief, as in typical lottery and profiling cases. It is another thing to hold that merely statistical evidence is *always* insufficient for rational belief. Indeed, there are cases where (non-extreme) statistical evidence plainly does justify belief. This project develops a dispositional account of the normativity of statistical evidence, where the dispositions that ground justifying statistical evidence are connected to the goals (=proper function) of objects. There are strong intuitive motivations for doing this. For we can turn almost any case of non-justifying merely statistical evidence into a case of justifying merely statistical evidence by adding information about the dispositions and goals of the objects involved. The resulting view not only helps us understand when and why merely statistical evidence is normatively significant, but it also helps us understand how statistical evidence relates to more standard forms of evidence (perceptual, testimonial). The emerging view also has surprising applications, as it imposes limitations on the epistemic value of fine-tuning arguments for theism as well as undermines a standard class of case-based arguments for moral encroachment.
Philosophy of Statistics in Philosophy of Probability
Bayes, God, and the Multiverse. Richard Swinburne - forthcoming - Philosophical Explorations.
Bayesian Reasoning, Misc in Philosophy of Probability
Calling for Explanation. Dan Baras - 2022 - New York, NY: Oxford University Press.
The idea that there are some facts that call for explanation serves as an unexamined premise in influential arguments for the inexistence of moral or mathematical facts and for the existence of a god and of other universes. This book is the first to offer a comprehensive and critical treatment of this idea. It argues that calling for explanation is a sometimes-misleading figure of speech rather than a fundamental property of facts.
Debunking Arguments about Mathematics in Philosophy of Mathematics
Epistemic Norms in Epistemology
Theories of Explanation, Misc in General Philosophy of Science
Review of "Against Indifference Objections to the Fine-Tuning Argument". [REVIEW] Levi Durham - 2022 - Southwest Philosophy Review 38 (2):47-49.
Multiple Universes and Self-Locating Evidence. Yoaav Isaacs, John Hawthorne & Jeffrey Sanford Russell - 2022 - Philosophical Review 131 (3):241-294.
Is the fact that our universe contains fine-tuned life evidence that we live in a multiverse? Ian Hacking and Roger White influentially argue that it is not. We approach this question through a systematic framework for self-locating epistemology. As it turns out, leading approaches to self-locating evidence agree that the fact that our own universe contains fine-tuned life indeed confirms the existence of a multiverse. This convergence is no accident: we present two theorems showing that, in this setting, any updating rule that satisfies a few reasonable conditions will have the same feature. The conclusion that fine-tuned life provides evidence for a multiverse is hard to escape.
Conditionalization in Philosophy of Probability
Doomsday Argument in Philosophy of Probability
First-Person Contents in Philosophy of Mind
Prior Probabilities in Philosophy of Probability
Panpsychism and ensemble explanations. Han Li & Bradford Saad - 2022 - Philosophical Studies 179 (12):3583-3597.
Panpsychism claims that the vast majority of conscious subjects in our world are inanimate and physical. Ensemble explanations account for striking phenomena by placing them within an ensemble of outcomes, most of which are not striking. This paper develops an explanatory problem for panpsychism: panpsychism renders two appealing ensemble explanations unsatisfactory. Specifically, we argue that panpsychism renders unsatisfactory the multiverse explanation of why a universe supports life and the many-planets explanation of why a planet supports life.
Epistemology of Mind, Misc in Philosophy of Mind
Panpsychism, Misc in Philosophy of Mind
An Agnostic Defends God: How Science and Philosophy Support Agnosticism. Bryan Frances - 2021 - Palgrave-Macmillan.
This book contains a unique perspective: that of a scientifically and philosophically educated agnostic who thinks there is impressive—if maddeningly hidden—evidence for the existence of God. Science and philosophy may have revealed the poverty of the familiar sources of evidence, but they generate their own partial defense of theism. Bryan Frances, a philosopher with a graduate degree in physics, judges the standard evidence for God's existence to be awful. And yet, like many others with similar scientific and philosophical backgrounds, he argues that the usual reasons for atheism, such as the existence of suffering and success of science, are weak. In this book you will learn why so many people with scientific and philosophical credentials are agnostics despite judging all the usual evidence for theism to be fatally flawed.
Agnosticism in Philosophy of Religion
Arguments from Naturalism against Theism in Philosophy of Religion
Kalam Cosmological Argument in Philosophy of Religion
Are the psychophysical laws fine-tuned? Dan Cavedon-Taylor - 2020 - International Journal for Philosophy of Religion 89 (3):285-292.
Neil Sinhababu (American Philosophical Quarterly 54 (1):89–98, 2017) has recently argued against the fine-tuning argument for God. They claim that the question of the universe's fine-tuning ought not be 'why is the universe so hospitable to life?' but rather 'why is the universe so hospitable to morally valuable minds?' and that, moreover, the universe isn't so hospitable. For it is metaphysically possible that psychophysical laws be substantially more permissive than they in fact are, allowing for the realisation of morally valuable consciousness by exceptionally simple physical states and systems, rather than the complex states of brains. I reply that Sinhababu's argument rests upon unsupported claims and that we have reason to doubt that an omnibenevolent God would make the psychophysical laws more permissive than they in fact are.
Metaphysics of Mind, Misc in Philosophy of Mind
What Are the Odds that Everyone is Depraved? Scott Hill - 2020 - American Philosophical Quarterly 57 (3):299-308.
Why does God allow evil? One hypothesis is that God desires the existence and activity of free creatures but He was unable to create a world with such creatures and such activity without also allowing evil. If Molinism is true, what probability should be assigned to this hypothesis? Some philosophers claim that a low probability should be assigned because there are an infinite number of possible people and because we have no reason to suppose that such creatures will choose one way rather than another. Arguments like this depend on the principle of indifference. But that principle is rejected by most philosophers of probability. Some philosophers claim that a low probability should be assigned because doing otherwise violates intuitions about freewill. But such arguments can be addressed through strategies commonly employed to defend theories with counterintuitive results across ethics and metaphysics.
Divine Providence in Philosophy of Religion
A Philosophical Approach to MOND: Assessing the Milgromian Research Program in Cosmology. David Merritt - 2020 - Cambridge, UK: Cambridge University Press.
Dark matter is a fundamental component of the standard cosmological model, but in spite of four decades of increasingly sensitive searches, no-one has yet detected a single dark-matter particle in the laboratory. An alternative cosmological paradigm exists: MOND (Modified Newtonian Dynamics). Observations explained in the standard model by postulating dark matter are explained in MOND by proposing a modification of Newton's laws of motion. Both MOND and the standard model have had successes and failures – but only MOND has repeatedly predicted observational facts in advance of their discovery. In this volume, David Merritt outlines why such predictions are considered by many philosophers of science to be the 'gold standard' when it comes to judging a theory's validity. In a world where the standard model receives most attention, the author applies criteria from the philosophy of science to assess, in a systematic way, the viability of this alternative cosmological paradigm.
Imre Lakatos in 20th Century Philosophy
Popper: Philosophy of Science in 20th Century Philosophy
Scientific Progress in General Philosophy of Science
Freedom Giving Birth to Order: Philosophical Reflections on Peirce's Evolutionary Cosmology and its Contemporary Resurrections. Zeyad El Nabolsy - 2020 - Cosmos and History: The Journal of Natural and Social Philosophy 16 (1):1-23.
This paper seeks to show that Charles Sanders Peirce's interest in an evolutionary account of the laws of nature is motivated both by his desire to extend the scope of the application of the Principle of Sufficient Reason (PSR) and by his attempt to explain the success of our deployment of the PSR, which presupposes the existence of determinate causal structures. One can situate Peirce's concern with the explanation of the laws of nature in relation to the influences of Naturphilosophie on Peirce. I then show that some strands of contemporary physics can be understood as resurrections of Peirce's evolutionary cosmology. I show that we can understand Lee Smolin's theory of "cosmological natural selection" as a version of Peirce's evolutionary cosmology that is characterized by greater refinement and determinacy. However I argue that, contrary to Smolin's claim, an evolutionary account of the laws of nature need not require the abandonment of the relativity of simultaneity as established by the special theory of relativity. I also argue that Lee Smolin and Roberto Unger's characterization of the "original state" in their account of evolutionary cosmology raises philosophical problems of individuation that are best approached from the perspective of Chinese process metaphysics. Finally I turn to the wider consequences of evolutionary cosmology in relation to how we traditionally "rank" fields of knowledge that deal with atemporal structures as "more rigorous" than fields that deal with historical phenomena.
Charles Sanders Peirce in 19th Century Philosophy
Classical Chinese Philosophy in Asian Philosophy
Friedrich Schelling in 19th Century Philosophy
Evil in the Fine‐Tuned World. Ebrahim Azadegan - 2019 - Heythrop Journal 60 (5):795-804.
Cosmological Arguments for Theism in Philosophy of Religion
Evil in Philosophy of Religion
Religious Studies in Arts and Humanities
Why Do Certain States of Affairs Call Out for Explanation? A Critique of Two Horwichian Accounts. Dan Baras - 2019 - Philosophia 47 (5):1405-1419.
Motivated by examples, many philosophers believe that there is a significant distinction between states of affairs that are striking and therefore call for explanation and states of affairs that are not striking. This idea underlies several influential debates in metaphysics, philosophy of mathematics, normative theory, philosophy of modality, and philosophy of science but is not fully elaborated or explored. This paper aims to address this lack of clear explanation first by clarifying the epistemological issue at hand. Then it introduces an initially attractive account for strikingness that is inspired by the work of Paul Horwich and adopted by a number of philosophers. The paper identifies two logically distinct accounts that have both been attributed to Horwich and then argues that, when properly interpreted, they can withstand former criticisms. The final two sections present a new set of considerations against both Horwichian accounts that avoid the shortcomings of former critiques. It remains to be seen whether an adequate account of strikingness exists.
Mathematical Platonism in Philosophy of Mathematics
A Reasonable Little Question: A Formulation of the Fine-Tuning Argument. Luke A. Barnes - 2019 - Ergo: An Open Access Journal of Philosophy 6.
A new formulation of the Fine-Tuning Argument (FTA) for the existence of God is offered, which avoids a number of commonly raised objections. I argue that we can and should focus on the fundamental constants and initial conditions of the universe, and show how physics itself provides the probabilities that are needed by the argument. I explain how this formulation avoids a number of common objections, specifically the possibility of deeper physical laws, the multiverse, normalisability, whether God would fine-tune at all, whether the universe is too fine-tuned, and whether the likelihood of God creating a life-permitting universe is inscrutable.
Is God the Best Explanation of Things?: A Dialogue. Felipe Leon & Joshua Rasmussen - 2019 - Palgrave Macmillan.
This book provides an up to date, high-level exchange on God in a uniquely productive style. Readers witness a contemporary version of a classic debate, as two professional philosophers seek to learn from each other while making their cases for their distinct positions. In their dialogue, Joshua Rasmussen and Felipe Leon examine classical and cutting-edge arguments for and against a theistic explanation of general features of reality. The book also provides original lines of thought based on the authors' own contributions to the field, and offers a productive and innovative inquiry into one of the biggest questions people ask: what is the ultimate explanation of things?
Arguments for Theism, Misc in Philosophy of Religion
Atheism in Philosophy of Religion
Cosmological Arguments from Contingency in Philosophy of Religion
From a cosmic fine-tuner to a perfect being. Justin Mooney - 2019 - Analysis 79 (3):449-452.
Byerly has proposed a novel solution to the gap problem for cosmological arguments. I contend that his strategy can be used to strengthen a wide range of other theistic arguments as well, and also to stitch them together into a cumulative case for theism. I illustrate these points by applying Byerly's idea about cosmological arguments to teleological arguments.
Naturalness, Hierarchy, and Fine-Tuning. Joshua Rosaler, Robert Harlander, Gregor Schiemann & Miguel Ángel Carretero Sahuquillo - 2019 - Foundations of Physics 49 (9):855-859.
Particle Physics in Philosophy of Physical Science
Philosophy of Physics, Misc in Philosophy of Physical Science
Quantum Field Theory in Philosophy of Physical Science
Quantum Gravity in Philosophy of Physical Science
Fine-tuning in the context of Bayesian theory testing. Luke Barnes - 2018 - European Journal for Philosophy of Science 8 (2):253-269.
Fine-tuning in physics and cosmology is often used as evidence that a theory is incomplete. For example, the parameters of the standard model of particle physics are "unnaturally" small, which has driven much of the search for physics beyond the standard model. Of particular interest is the fine-tuning of the universe for life, which suggests that our universe's ability to create physical life forms is improbable and in need of explanation, perhaps by a multiverse. This claim has been challenged on the grounds that the relevant probability measure cannot be justified because it cannot be normalized, and so small probabilities cannot be inferred. We show how fine-tuning can be formulated within the context of Bayesian theory testing in the physical sciences. The normalizability problem is seen to be a general problem for testing any theory with free parameters, and not a unique problem for fine-tuning. Physical theories in fact avoid such problems in one of two ways. Dimensional parameters are bounded by the Planck scale, avoiding troublesome infinities, and we are not compelled to assume that dimensionless parameters are distributed uniformly, which avoids non-normalizability.
Bayesian Reasoning in Philosophy of Probability
Confirmation, Misc in General Philosophy of Science
Life in Philosophy of Biology
Knowledge, Belief, and God: New Insights in Religious Epistemology. Matthew A. Benton, John Hawthorne & Dani Rabinowitz (eds.) - 2018 - Oxford: Oxford University Press.
Recent decades have seen a fertile period of theorizing within mainstream epistemology which has had a dramatic impact on how epistemology is done. Investigations into contextualist and pragmatic dimensions of knowledge suggest radically new ways of meeting skeptical challenges and of understanding the relation between the epistemological and practical environment. New insights from social epistemology and formal epistemology about defeat, testimony, a priority, probability, and the nature of evidence all have a potentially revolutionary effect on how we understand our epistemological place in the world. Religion is the place where such rethinking can potentially have its deepest impact and importance. Yet there has been surprisingly little infiltration of these new ideas into philosophy of religion and the epistemology of religious belief. Knowledge, Belief, and God incorporates these myriad new developments in mainstream epistemology, and extends these developments to questions and arguments in religious epistemology. The investigations proposed in this volume offer substantial new life, breadth, and sophistication to issues in the philosophy of religion and analytic theology. They pose original questions and shed new light on long-standing issues in religious epistemology; and these developments will in turn generate contributions to epistemology itself, since religious belief provides a vital testing ground for recent epistemological ideas.
Epistemology of Religion, Misc in Philosophy of Religion
Faith in Philosophy of Religion
Philosophy of Religion, Misc in Philosophy of Religion
Infinite Cardinalities, Measuring Knowledge, and Probabilities in Fine-Tuning Arguments. Isaac Choi - 2018 - In Matthew A. Benton, John Hawthorne & Dani Rabinowitz (eds.), Knowledge, Belief, and God: New Insights in Religious Epistemology. Oxford: Oxford University Press. pp. 103-121.
This paper deals with two different problems in which infinity plays a central role. I first respond to a claim that infinity renders counting knowledge-level beliefs an infeasible approach to measuring and comparing how much we know. There are two methods of comparing sizes of infinite sets, using the one-to-one correspondence principle or the subset principle, and I argue that we should use the subset principle for measuring knowledge. I then turn to the normalizability and coarse tuning objections to fine-tuning arguments for the existence of God or a multiverse. These objections center on the difficulty of talking about the epistemic probability of a physical constant falling within a finite life-permitting range when the possible range of that constant is infinite. Applying the lessons learned regarding infinity and the measurement of knowledge, I hope to blunt much of the force of these objections to fine-tuning arguments.
Knowledge, Misc in Epistemology
The Infinite in Philosophy of Mathematics
A Fortunate Universe: Life in a Finely Tuned Cosmos. [REVIEW] William Lane Craig - 2018 - Philosophia Christi 20 (2):596-599.
Philosophy of Physics, Miscellaneous in Philosophy of Physical Science
Science and Religion in Philosophy of Religion
A Theological Critique of the Fine-Tuning Argument. Hans Halvorson - 2018 - In Matthew A. Benton, John Hawthorne & Dani Rabinowitz (eds.), Knowledge, Belief, and God: New Insights in Religious Epistemology. Oxford: Oxford University Press. pp. 122-135.
According to the premises of the fine-tuning argument, most nomologically possible universes lack intelligent life; and the fact that ours has intelligent life is best explained by supposing it was created. However, if our universe was created, then the creator chose the laws of nature, and hence chose in favor of lifeless universes. In other words, the fine-tuning argument shows that God prefers universes without intelligent life; and the fact that our universe has intelligent life provides no new evidence for God's existence.
Laws of Nature, Misc in General Philosophy of Science
Fine-Tuning Fine-Tuning. John Hawthorne & Yoaav Isaacs - 2018 - In Matthew A. Benton, John Hawthorne & Dani Rabinowitz (eds.), Knowledge, Belief, and God: New Insights in Religious Epistemology. Oxford: Oxford University Press. pp. 136-168.
Programming Planck units from a virtual electron; a Simulation Hypothesis (summary). Malcolm Macleod - 2018 - Eur. Phys. J. Plus 133:278.
The Simulation Hypothesis proposes that all of reality, including the earth and the universe, is in fact an artificial simulation, analogous to a computer simulation, and as such our reality is an illusion. In this essay I describe a method for programming mass, length, time and charge (MLTA) as geometrical objects derived from the formula for a virtual electron; $f_e = 4\pi^2r^3$ ($r = 2^6 3 \pi^2 \alpha \Omega^5$) where the fine structure constant $\alpha$ = 137.03599... and $\Omega$ = 2.00713494... are mathematical constants and the MLTA geometries are; M = (1), T = ($2\pi$), L = ($2\pi^2\Omega^2$), A = ($4\pi \Omega)^3/\alpha$. As objects they are independent of any set of units and also of any numbering system, terrestrial or alien. As the geometries are interrelated according to $f_e$, we can replace designations such as ($kg, m, s, A$) with a rule set; mass = $u^{15}$, length = $u^{-13}$, time = $u^{-30}$, ampere = $u^{3}$. The formula $f_e$ is unit-less ($u^0$) and combines these geometries in the following ratio M$^9$T$^{11}$/L$^{15}$ and (AL)$^3$/T, as such these ratios are unit-less. To translate MLTA to their respective SI Planck units requires an additional 2 unit-dependent scalars. We may thereby derive the CODATA 2014 physical constants via the 2 (fixed) mathematical constants ($\alpha, \Omega$), 2 dimensioned scalars and the rule set $u$. As all constants can be defined geometrically, the least precise constants ($G, h, e, m_e, k_B$...) can also be solved via the most precise ($c, \mu_0, R_\infty, \alpha$), numerical precision then limited by the precision of the fine structure constant $\alpha$.
Physics in Natural Sciences
Virtual Reality, Misc in Philosophy of Computing and Information
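As a quick arithmetic check (ours, not the abstract's), the claim in the abstract above that the two ratios are unit-less follows directly from the stated rule set mass = $u^{15}$, length = $u^{-13}$, time = $u^{-30}$, ampere = $u^{3}$:
$$ \text{M}^9\text{T}^{11}/\text{L}^{15}:\quad 9(15) + 11(-30) - 15(-13) = 135 - 330 + 195 = 0, $$
$$ (\text{AL})^3/\text{T}:\quad 3\,(3 + (-13)) - (-30) = -30 + 30 = 0, $$
so each combination carries total exponent $0$, i.e. $u^0$, as claimed.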
Fine-Tuned of Necessity? Ben Page - 2018 - Res Philosophica 95 (4):663-692.
This paper seeks to explicate and analyze an alternative response to fine-tuning arguments from those that are typically given—namely, design or brute contingency. The response I explore is based on necessity, the necessitarian response. After showing how necessity blocks the argument, I explicate the reply I claim necessitarians can give and suggest how its three requirements can be met: firstly, that laws are metaphysically necessary; secondly, that constants are metaphysically necessary; and thirdly, that the fundamental properties that determine the laws and constants are necessary. After discussing each in turn, I end the paper by assessing how the response fares when running the fine-tuning argument in two ways, as an inference to best explanation and as a Bayesian argument.
Dispositions and Powers, Misc in Metaphysics
Geraint F. Lewis and Luke A. Barnes. A Fortunate Universe: Life in a Finely Tuned Cosmos. [REVIEW] Yann Benétreau-Dupin - 2017 - Notre Dame Philosophical Reviews 201706.
This new book by cosmologists Geraint F. Lewis and Luke A. Barnes is another entry in the long list of cosmology-centered physics books intended for a large audience. While many such books aim at advancing a novel scientific theory, this one has no such scientific pretense. Its goals are to assert that the universe is fine-tuned for life, to defend that this fact can reasonably motivate further scientific inquiry as to why it is so, and to show that the multiverse and intelligent design hypotheses are reasonable proposals to explain this fine-tuning. This book's potential contribution, therefore, lies in how convincingly and efficiently it can make that case.
The fine-tuned universe and the existence of God. Man Ho Chan - 2017 - Dissertation, Hong Kong Baptist University
Recent research in science indicates that we are living in a fine-tuned universe. Only a very small parameter space of universal fundamental constants in Physics is congenial for the existence of life. Moreover, recent studies in Biological evolution also reveal that fine-tuning did exist in the evolution. It seems that we are so lucky to exist as all universal fundamental constants and life-permitting factors really fall into such a very small life-allowing region. This problem is known as the fine-tuning problem. Does this phenomenon need an explanation? Can the fine-tuning problem point to the existence of God? Modern Science invokes the idea of multiverse to address the fine-tuning problem. Some scientists suggest that each universe in a set of infinitely many universes contains a typical set of fundamental constants. We should not be surprised why our universe is fine-tuned because we would not exist if the constants are not the life-allowed values. Some suggest that the existence of God can explain this fine-tuning problem. The naturalistic multiverse theory and the existence of God are the two most robust proposals to address the fine-tuning problem. Moreover, some argue that the fine-tuning problem is not real because we are just subject to observational selection effect. In this thesis, I will provide a comprehensive discussion on the fine-tuning phenomena in our universe. In particular, I will use the confirmation principle and the inference to the best explanation simultaneously to evaluate different hypotheses in a more systematic way and give some of the new and updated scientific and philosophical arguments to respond to the recent criticisms of the fine-tuning arguments. I conclude that the theistic hypothesis is the best among all to address the fine-tuning problem.
The Fine-Tuning Argument and the Requirement of Total Evidence. Peter Fisher Epstein - 2017 - Philosophy of Science 84 (4):639-658.
According to the Fine-Tuning Argument (FTA), the existence of life in our universe confirms the Multiverse Hypothesis (HM). A standard objection to FTA is that it violates the Requirement of Total Evidence (RTE). I argue that RTE should be rejected in favor of the Predesignation Requirement, according to which, in assessing the outcome of a probabilistic process, we should only use evidence characterizable in a manner available before observing the outcome. This produces the right verdicts in some simple cases in which RTE leads us astray, and, when applied to FTA, it shows that our evidence does confirm HM.
Confirmation in General Philosophy of Science
Probabilistic Reasoning in Philosophy of Probability
The Fine-Tuning Argument and the Simulation Hypothesis. Moti Mizrahi - 2017 - Think 16 (47):93-102.
In this paper, I propose that, in addition to the multiverse hypothesis, which is commonly taken to be an alternative explanation for fine-tuning, other than the design hypothesis, the simulation hypothesis is another explanation for fine-tuning. I then argue that the simulation hypothesis undercuts the alleged evidential connection between 'designer' and 'supernatural designer of immense power and knowledge' in much the same way that the multiverse hypothesis undercuts the alleged evidential connection between 'fine-tuning' and 'fine-tuner' (or 'designer'). If this is correct, then the fine-tuning argument is a weak argument for the existence of God.
Divine Fine-Tuning vs. Electrons in Love. Neil Sinhababu - 2017 - American Philosophical Quarterly 54 (1):89-98.
I present a novel objection to fine-tuning arguments for God's existence: the metaphysical possibility of different psychophysical laws allows any values of the physical constants to support intelligent life forms, like protons and electrons that are in love.
Cosmological Arguments for Theism, Misc in Philosophy of Religion
Science and supernaturalism. Clement Dore - 2016 - Think 15 (42):35-52.
In the first section of this paper, I discuss a quantum mechanical account, which is endorsed by the MIT physicist, Alan Guth, of the origin of what Guth believes to have been an absolutely first universe. I argue that, though his explanation is unsound, there is no reason to think that it needs to be replaced by a supernaturalist one. In the second section, I argue that though Professor Steven Weinberg's tentative explanation of the apparent fine-tuning of the cosmological constant is unacceptable, we need not accept a supernaturalist account of the coming about of intelligent life.
Taking Pascal's wager: faith, evidence, and the abundant life. Michael Rota - 2016 - Downers Grove, Illinois: IVP Academic, an imprint of Intervarsity Press.
In part one of this book I argue for the conditional claim that if Christianity has at least a 50% epistemic probability, then it is rational to commit to living a Christian life (and irrational not to). This claim is supported by a contemporary version of Pascal's wager. In part two, I then proceed to argue that Christianity does have at least a 50% epistemic probability, by advancing versions of the cosmological argument, the fine-tuning argument, and historical arguments for the plausibility of the resurrection (along with a few other relevant considerations). Assessments of the problem of evil and divine hiddenness are also given. Finally, in part three, I discuss the lives of three Christians from the 20th century (Dietrich Bonhoeffer, Jean Vanier, and Immaculee Ilibagiza) in an effort to illustrate how a life of Christian commitment is not just reasonable, but worth desiring as well--satisfying both the head and the heart.
Divine Hiddenness in Philosophy of Religion
Pascal's Wager in Philosophy of Religion
Alone in the universe. Howard Smith - 2016 - Zygon 51 (2):497-519.
We are probably alone in the universe—a conclusion based on observations of over 4,000 exoplanets and fundamental physical constraints. This article updates earlier arguments with the latest astrophysical results. Since the discovery of exoplanets, theologians have asked with renewed urgency what the presence of extraterrestrial intelligence says about salvation and human purpose, but this is the wrong question. The more urgent question is what their absence says. The "Misanthropic Principle" is the observation that, in a universe fine-tuned for life, the circumstances necessary for intelligence are rare. Rabbis for 2,000 years discussed the existence of ETI using scriptural passages. We examine the traditional Jewish approaches to ETI, including insights on how ETI affects our perception of God, self, free-will, and responsibility. We explore the implications of our probable solitude, and offer a Jewish response to the ethical lessons to be drawn from the absence of ETI.
The Fine-Tuning Argument and the Problem of Poor Design. Jimmy Alfonso Licon - 2015 - Philosophia 43 (2):411-426.
My purpose, in this paper, is to defend the claim that the fine-tuning argument suffers from the poor design worry. Simply put, the worry is this: if God created the universe, specifically with the purpose of bringing about moral agents, we would antecedently predict that the universe and the laws of nature, taken as a whole, would be well-equipped to do just that. However, the rarity of a life-permitting universe, compared to all the ways the universe might have been life-prohibiting given the laws of physics, strongly suggests that the universe was poorly designed for that purpose. This casts doubt on the claim that God has much to do with designing the universe. First, I introduce the fine-tuning argument, and second, I explain and defend the poor design worry against objections that, while apparently compelling, I argue are misleading.
Worlds Without End: The Many Lives of the Multiverse. Mary-Jane Rubenstein - 2015 - Cambridge University Press.
"Multiverse" cosmologies imagine our universe as just one of a vast number of others. While this idea has captivated philosophy, religion, and literature for millennia, it is now being considered as a scientific hypothesis--with different models emerging from cosmology, quantum mechanics, and string theory. Beginning with ancient Atomist and Stoic philosophies, Mary-Jane Rubenstein links contemporary models of the multiverse to their forerunners and explores the reasons for their recent appearance. One concerns the so-called fine-tuning of the universe: nature's constants are so delicately calibrated that it seems they have been set just right to allow life to emerge. For some thinkers, these "fine-tunings" are evidence of the existence of God; for others, however, and for most physicists, "God" is an insufficient scientific explanation. Hence the allure of the multiverse: if all possible worlds exist somewhere, then like monkeys hammering out Shakespeare, one universe is bound to be suitable for life. Of course, this hypothesis replaces God with an equally baffling article of faith: the existence of universes beyond, before, or after our own, eternally generated yet forever inaccessible to observation or experiment. In their very efforts to sidestep metaphysics, theoretical physicists propose multiverse scenarios that collide with it and even produce counter-theological narratives. Far from invalidating multiverse hypotheses, Rubenstein argues, this interdisciplinary collision actually secures their scientific viability. We may therefore be witnessing a radical reconfiguration of physics, philosophy, and religion in the modern turn to the multiverse.
An Introduction to Design Arguments. Benjamin C. Jantzen - 2014 - New York: Cambridge University Press.
The history of design arguments stretches back to before Aquinas, who claimed that things which lack intelligence nevertheless act for an end to achieve the best result. Although science has advanced to discredit this claim, it remains true that many biological systems display remarkable adaptations of means to ends. Versions of design arguments have persisted over the centuries and have culminated in theories that propose an intelligent designer of the universe. This volume is the only comprehensive survey of 2,000 years of debate, drawing on both historical and modern literature to identify, clarify and assess critically the many forms of design argument for the existence of God. It provides a neutral, informative account of the topic from antiquity to Darwin, and includes concise primers on probability and cosmology. It will be of great value to upper-level undergraduates and graduates in philosophy of religion, theology, and philosophy of science.
Intelligent Design in Philosophy of Biology
Collins' core fine-tuning argument. Mark Douglas Saward - 2014 - International Journal for Philosophy of Religion 76 (2):209-222.
Collins (The Blackwell companion to natural theology, 2009) presents an argument he calls the 'core fine-tuning argument'. In this paper, I show that Collins' argument is flawed in at least two ways. First, the structure, depending on likelihoods, fails to establish anything about the posterior probability of God's existence given fine-tuning. As an argument for God's existence, this is a serious failing. Second, his analysis of what is appropriately restricted background knowledge, combined with the credences of a specially chosen 'alien', do not allow him to establish the premise $\Pr(LPU \mid NSU~ \& ~k') \ll 1$.
From Ought to Is: Physics and the Naturalistic Fallacy. Matthew Stanley - 2014 - Isis 105 (3):588-595.
In the eighteenth and nineteenth centuries there were many attempts to justify political and social systems on the basis of physics and astronomy. By the early twentieth century such moves increasingly also integrated the life and social sciences. The physical sciences gradually became less appealing as a sole source for sociopolitical thought. The details of this transition help explain the contemporary reluctance to capitalize on an ostensibly rich opportunity for naturalistic social reasoning: the anthropic principle in cosmology, which deals with the apparent "fine-tuning" of the universe for life.
History of Physics in Philosophy of Physical Science
The Is/Ought Gap in Meta-Ethics
The Naturalistic Fallacy in Meta-Ethics
Overturning Stumbling Blocks: A Review of A Reasonable Response Answers to Tough Questions on: God, Christianity and the Bible. [REVIEW] Scott D. G. Ventureyra - 2014 - Convivium 3 (14):38-39.
Arguments from Miracles in Philosophy of Religion
Moral Arguments for Theism in Philosophy of Religion
Scratching the Surface: A Review of Where the Conflict Really Lies: Science, Religion & Naturalism. [REVIEW] Scott D. G. Ventureyra - 2014 - Convivium 3 (16):38-40.
Evolution and Creationism in Philosophy of Biology
Mechanisms of Evolution in Philosophy of Biology
Fine Tuning Explained? Multiverses and Cellular Automata. Francisco José Soler Gil & Manuel Alfonseca - 2013 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 44 (1):153-172.
The objective of this paper is to analyze to what extent the multiverse hypothesis provides a real explanation of the peculiarities of the laws and constants in our universe. First we argue in favor of the thesis that all multiverses except Tegmark's "mathematical multiverse" are too small to explain the fine tuning, so that they merely shift the problem up one level. But the "mathematical multiverse" is surely too large. To prove this assessment, we have performed a number of experiments with cellular automata of complex behavior, which can be considered as universes in the mathematical multiverse. The analogy between what happens in some automata (in particular Conway's "Game of Life") and the real world is very strong. But if the results of our experiments can be extrapolated to our universe, we should expect to inhabit—in the context of the multiverse—a world in which at least some of the laws and constants of nature should show a certain time dependence. Actually, the probability of our existence in a world such as ours would be mathematically equal to zero. In consequence, the results presented in this paper can be considered as an inkling that the hypothesis of the multiverse, whatever its type, does not offer an adequate explanation for the peculiarities of the physical laws in our world.
Sensory optimization by stochastic tuning. Peter Jurica, Sergei Gepshtein, Ivan Tyukin & Cees van Leeuwen - 2013 - Psychological Review 120 (4):798-816.
Fine-tuning as evidence for a multiverse: why White is wrong. [REVIEW] Mark Douglas Saward - 2013 - International Journal for Philosophy of Religion 73 (3):243-253.
Roger White (God and design, Routledge, London, 2003) claims that while the fine-tuning of our universe, $\alpha$, may count as evidence for a designer, it cannot count as evidence for a multiverse. First, I will argue that his considerations are only correct, if at all, for a limited set of multiverses that have particular features. As a result, I will argue that his claim cannot be generalised as a statement about all multiverses. This failure to generalise, I will argue, is also a feature of design hypotheses. That is, design hypotheses can likewise be made insensitive or sensitive to the evidence of fine-tuning as we please. Second, I will argue that White is mistaken about the role that this evidence plays in fine-tuning discussions. That is, even if the evidence of fine-tuning appears to support one particular hypothesis more strongly than another, this does not always help us in deciding which hypothesis to prefer.
Design Arguments for Theism in Philosophy of Religion
Soft Computing and Intelligent Information Systems
A University of Granada research group
h-index and Variants
This website contains additional material for the SCI2S research paper reviewing the h-index and its variants:
S. Alonso, F.J. Cabrerizo, E. Herrera-Viedma, F. Herrera, h-index: A Review Focused in its Variants, Computation and Standardization for Different Scientific Fields. Journal of Informetrics 3:4 (2009) 273-289, doi:10.1016/j.joi.2009.04.001.
The web is organized according to the following summary:
h-index: definition, applications, advantages and disadvantages
New indices based on h-index
Early indices based on the h-index
Aggregation based indices
Indices that take into account time
Other h-index related indices
h-related indices to evaluate scientific production at different levels
Standardization of the h-index for comparing scientists that work in different scientific fields
Some studies analyzing the indices
Studies comparing h-index and other bibliometric indicators
Studies that analyze h- based indices and their correlations
Studies about how self-citations affect the h-index
Studies that establish some axioms and mathematical interpretations of h- based indices
Other studies that analyze the performance of different indices and their transformations
How to compute the h-index using different databases?
On the use of h- related indices to assess groups of individuals, institutions and journals
Empirical studies that use h- and related indices
WEB sites or journal special issues devoted to h-index
Bibliography compilation about the h-index and related areas
Definition: (Hirsch JE (2005) An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences 102:16569-16572, doi: 10.1073/pnas.0507655102 ) A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np - h) papers have ≤ h citations each.
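To make the definition concrete, here is a minimal Python sketch (our own illustration, not code from Hirsch's paper; the function name and input format are assumptions):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # papers by decreasing citations
    return max((rank for rank, c in enumerate(ranked, start=1) if c >= rank),
               default=0)

# Example: five papers with 6, 4, 4, 2 and 1 citations give h = 3.
print(h_index([6, 4, 4, 2, 1]))  # 3
```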
Applications: Hirsch originally suggested the h-index for application at the micro level, that is, as a measure to quantify the scientific output of a single researcher. However, the h-index can be used not only for the lifetime achievements of a single researcher but can be applied to any (more extensive) publication set (Rousseau R (2006) New developments related to the Hirsch index. Industrial Sciences and Technology, Belgium.).
Van Raan (Van Raan AFJ (2006) Comparison of the Hirsch-index with standard bibliometric indicators and with peer judgment for 147 chemistry research groups. Scientometrics 67(3):491-502, doi: 10.1007/s11192-006-0066-4 . ) calculates the h-index for university research groups in chemistry and chemical engineering in the Netherlands. With calculation of the h-index for individual research groups, van Raan is applying the index for quantification of scientific performance no longer at the micro but at the meso level.
Braun, Glanzel and Schubert (Braun T, Glänzel W, Schubert A (2006) A Hirsch-type index for journals. Scientometrics 69(1):169-173, doi: 10.1007/s11192-006-0147-4 . ) propose a Hirsch-type index for evaluating the scientific impact of journals as a robust alternative indicator that is an advantageous complement to journal impact factors.
Banks (Banks MG (2006) An extension of the Hirsch index: Indexing scientific topics and compounds. Scientometrics 69(1):161-168, doi: 10.1007/s11192-006-0146-5 . ) applies the h-index to the case of interesting topics and compounds: Bank's h - b index is found by entering a topic (search string, like "superstring" or "teleportation") or compound (name or chemical formula) into the Web of Science database and then ordering the results in terms of citations, by largest first. The h - b is then defined in the same manner as the h-index. With calculation of the h - b index, it can be determined how much work has already been done on certain topics or compounds, what the "hot topics" (or "older topics") of interest are, or what topic or compound is mainstream research at the present time.
Advantages: (Costas R, Bordons M (2007) The h-index: advantages, limitations and its relation with other bibliometric indicators at the micro level. Journal of Informetrics 1(3):193-203, doi: 10.1016/j.joi.2007.02.001.)
It combines a measure of quantity (publications) and impact (citations).
It allows us to characterize the scientific output of a researcher with objectivity and, therefore, may play an important role when making decisions about promotions, fund allocation and awarding prizes.
It performs better than other single-number criteria commonly used to evaluate the scientific output of a researcher (impact factor, total number of documents, total number of citations, citation per paper rate and number of highly cited papers).
The h-index can be easily obtained by anyone with access to the Thomson ISI Web of Science and, in addition, it is easy to understand.
Disadvantages: (Costas R, Bordons M (2007) The h-index: advantages, limitations and its relation with other bibliometric indicators at the micro level. Journal of Informetrics 1(3):193-203, doi: 10.1016/j.joi.2007.02.001.)
There are inter-field differences in typical h values due to differences among fields in productivity and citation practices (Hirsch JE (2005) An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences 102:16569-16572, doi: 10.1073/pnas.0507655102 . ), so the h-index should not be used to compare scientists from different disciplines.
The h-index depends on the duration of each scientist's career because the pool of publications and citations increases over time (Hirsch JE (2005) An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences 102:16569-16572, doi: 10.1073/pnas.0507655102 . ; Kelly CD, Jennions MD (2006) The h-index and career assessment by numbers. Trends in Ecology and Evolution 21(4):167-170, doi: 10.1016/j.tree.2006.01.005 . ). In order to compare scientists at different stages of their career, Hirsch (Hirsch JE (2005) An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences 102:16569-16572, doi: 10.1073/pnas.0507655102 . ) presented the "m parameter", which is the result of dividing h by the scientific age of a scientist (number of years since the author's first publication).
Highly cited papers are important for the determination of the h-index, but once they are selected to belong to the top h papers, the number of further citations they receive is unimportant. This is a disadvantage of the h-index which Egghe has tried to overcome through a new index, called g-index (Egghe L (2006) Theory and practice of the g-index. Scientometrics 69(1):131-152, doi: 10.1007/s11192-006-0144-7.).
Since the h-index is easy to obtain, we run the risk of indiscriminate use, such as relying only on it for the assessment of scientists. Research performance is a complex multifaceted endeavour that cannot be assessed adequately by means of a single indicator:
Martin BR (1996) The use of multiple indicators in the assessment of basic research. Scientometrics 36(3):343-362, doi: 10.1007/BF02129599 .
Carbo-Dorca R. A monodimensional scientific performance measure: the h index, can it be substituted by simple multidimensional descriptors? Journal of Mathematical Chemistry 47 (1) (2010) 548-550, doi: 10.1007/s10910-009-9573-x
Yin C.Y., Aris M.J., Chen X. Combination of Eigenfactor (TM) and h-index to evaluate scientific journals. Scientometrics 84 (3) (2010) 639-648, doi: 10.1007/s11192-009-0116-9
The use of the h-index could provoke changes in the publishing behaviour of scientists, such as an artificial increase in the number of self-citations distributed among the documents on the edge of the h-index (Van Raan AFJ (2006) Comparison of the Hirsch-index with standard bibliometric indicators and with peer judgment for 147 chemistry research groups. Scientometrics 67(3):491-502, doi: 10.1007/s11192-006-0066-4.).
There are also technical limitations, such as the difficulty of obtaining the complete output of scientists with very common names, or deciding whether self-citations should be removed or not. Self-citations can increase a scientist's h, but their effect on h is much smaller than on the total citation count, since only self-citations to papers with a number of citations close to h are relevant (Hirsch JE (2005) An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences 102:16569-16572, doi: 10.1073/pnas.0507655102.).
g-index: (Egghe L (2006) Theory and practise of the g-index. Scientometrics 69(1):131-152, doi: 10.1007/s11192-006-0144-7 . ) Holding that "a measure which should indicate the overall quality of a scientist ... should deal with the performance of the top articles," Egghe proposed the g-index as a modification of the h-index. For the calculation of the g-index, the same ranking of a publication set -papers in decreasing order of the number of citations received- is used as for the h-index. Egghe defines the g-index "as the highest number g of papers that together received $g^2$ or more citations. From this definition it is already clear that g ≥ h". In contrast to the h-index, the g-index gives more weight to highly cited papers. The aim is to avoid a disadvantage of the h-index that "once a paper belongs to the top h papers, its subsequent citations no longer 'count'" (Bornmann L, Mutz R, Daniel HD (2008) Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. Journal of the American Society for Information Science and Technology 59(5):830-837, doi: 10.1002/asi.20806 . ).
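As a rough illustration of this definition (a sketch under our own naming conventions, not code from Egghe's paper), the g-index can be computed by accumulating citations down the ranked list:

```python
def g_index(citations):
    """Largest g such that the top g papers together received g^2 citations."""
    ranked = sorted(citations, reverse=True)
    g, total = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c  # cumulative citations of the top `rank` papers
        if total >= rank * rank:
            g = rank
    return g

print(g_index([6, 4, 4, 2, 1]))  # 4, versus an h-index of 3 for the same list
```

Note that this basic version caps g at the number of papers; some formulations append fictitious zero-cited papers so that g can exceed it.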
a-index: (Jin BH, Liang LM, Rousseau R, Egghe L (2007) The R- and AR-indices: Complementing the h-index. Chinese Science Bulletin 52(6):855-863, doi: 10.1007/s11434-007-0145-9 . ) According to Burrell (Burrell QL (2007) On the h-index, the size of the Hirsch core and Jin's A-index. Journal of Informetrics 1(2):170-177, doi: 10.1016/j.joi.2007.01.003 . ) "the h-index seeks to identify the most productive core of an author's output in terms of most received citations". For this core, consisting of the first h papers, Rousseau (Rousseau R (2006) New developments related to the Hirsch index. Industrial Sciences and Technology, Belgium.) introduced the term Hirsch core. "The Hirsch core can be considered as a group of high-performance publications, with respect to the scientist's career". The a-index (as well as the m-index, r-index, and ar-index) includes in the calculation only papers that are in the Hirsch core. It is defined as the average number of citations of papers in the Hirsch core. The proposal to use this average number of citations as a variant of the h-index was made by Jin, the main editor of Science Focus (Jin B (2006) h-index: an evaluation indicator proposed by scientist. Science Focus 1(1):8-9). Rousseau referred to this index later as the a-index. The a-index is defined as:
$$A=\frac{1}{h}\displaystyle\sum_{j=1}^{h}cit_j$$
where h = h-index, and cit = citations counts.
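A small sketch of the computation (illustrative only; it inlines the h-index helper from above):

```python
def a_index(citations):
    """Average number of citations of the papers in the Hirsch core."""
    ranked = sorted(citations, reverse=True)
    h = max((r for r, c in enumerate(ranked, start=1) if c >= r), default=0)
    return sum(ranked[:h]) / h if h else 0.0

print(round(a_index([6, 4, 4, 2, 1]), 2))  # (6 + 4 + 4) / 3 ≈ 4.67
```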
h(2)-index: (Kosmulski M (2006) A new Hirsch-type index saves time and works equally well as the original h-index. ISSI Newsletter 2(3):4-6, . ) Like the g-index, calculation of the h(2)-index also gives more weight to highly cited articles: "A scientist's h(2)-index is defined as the highest natural number such that his h(2) most cited papers received each at least $[h(2)]^2$ citations". An h(2)-index of 20, for example, means that a scientist has published at least 20 papers, of which each has been cited at least 400 times. Obviously, for any scientist, the h(2)-index is always lower than the h-index. According to Jin et al. (Jin BH, Liang LM, Rousseau R, Egghe L (2007) The R- and AR-indices: Complementing the h-index. Chinese Science Bulletin 52(6):855-863, doi: 10.1007/s11434-007-0145-9 . ), the main advantage of the h(2)-index is that it reduces the precision problem. That means that when computing the h(2)-index using a publication set put together for a scientist using Web of Science data (Thomson Scientific), less work is needed to check the accuracy of the publications data, especially with regard to homographs -that is, to distinguish between scientists that have the same last name and first initial- than is needed when calculating the h-index. As only few papers in the set are sufficiently highly cited in order to fulfill the criterion of $[h(2)]^2$ citations, there are also fewer papers to check (Bornmann L, Mutz R, Daniel HD (2008) Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. Journal of the American Society for Information Science and Technology 59(5):830-837, doi: 10.1002/asi.20806 . ).
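The criterion is the same as for the h-index with the threshold squared, as this hedged sketch shows (our own code, not from Kosmulski's note):

```python
def h2_index(citations):
    """Largest k such that k papers have at least k**2 citations each."""
    ranked = sorted(citations, reverse=True)
    return max((r for r, c in enumerate(ranked, start=1) if c >= r * r),
               default=0)

# Three papers with 500, 420 and 7 citations: h = 3 but h(2) = 2,
# since the third paper would need at least 9 citations.
print(h2_index([500, 420, 7]))  # 2
```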
hg-index: (Alonso S, Cabrerizo FJ, Herrera-Viedma E, Herrera F (2010) hg-index: A new index to characterize the scientific output of researchers based on the h- and g- indices. Scientometrics 82(2):391-400 doi:10.1007/s11192-009-0047-5 . ) Alonso et al. present a new index, called hg-index, to characterize the scientific output of researchers which is based on both h-index and g-index to try to keep the advantages of both measures as well as to minimize their disadvantages. They do agree that both measures incorporate several interesting properties about the publications of a researcher and that both should be taken into account to measure the scientific output of scientists. Therefore, they present a combined index, the hg-index, which tries to fuse the benefits of both previous measures while minimizing the drawbacks that each of them presents. The hg-index of a researcher is computed as the geometric mean of his h- and g- indices, that is:
$$hg=\sqrt{h \cdot g}$$
It is trivial to demonstrate that h ≤ hg ≤ g and that hg - h ≤ g - hg, that is, the hg-index corresponds to a value nearer to h than to g. This property can be seen as a penalization of the g-index in the cases of a very low h-index, thus avoiding the problem of the big influence that a single very successful paper can introduce in the g-index.
$q^2$-index: (Cabrerizo FJ, Alonso S, Herrera-Viedma E, Herrera F (2009) q2-Index: Quantitative and Qualitative Evaluation Based on the Number and Impact of Papers in the Hirsch Core. Journal of Informetrics 4(1):23-28, doi:10.1016/j.joi.2009.06.005 . ) Cabrerizo et al. considered that as different indices measure different aspects in the scientific production of the researchers it is an interesting idea to merge some of those indices in order to obtain a simple but more complete measurement. They developed the q2-index, which is based on the geometric mean of both a quantitative measure (the h-index) and a qualitative measure (the m-index) of the h-core.
$$q^2=\sqrt{h \cdot m}$$
The h-index is used because it is robust and describes the number of the papers (quantitative dimension) in a researcher's productive core, while the m-index is used because it depicts the impact of the papers (qualitative dimension) in a researcher's productive core and because it correctly deals with citation distributions which are usually skewed. It can be noticed that the q2-index is based on two indices which stand for different dimensions of the scientist's research output. Therefore, it obtains a more global view of the scientific production of researchers.
Complementary material: Excel file with the articles/number of citations per paper for a case of study, h-index and q2-index .
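A sketch combining both geometric-mean indices (assuming, as in the m-index definition further down this page, that m is the median number of citations in the Hirsch core; names and input format are ours):

```python
import math
from statistics import median

def _ranked(citations):
    return sorted(citations, reverse=True)

def h_core(citations):
    """Citation counts of the papers in the Hirsch core."""
    ranked = _ranked(citations)
    h = max((r for r, c in enumerate(ranked, start=1) if c >= r), default=0)
    return ranked[:h]

def g_of(citations):
    """g-index, as defined earlier on this page."""
    g, total = 0, 0
    for rank, c in enumerate(_ranked(citations), start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def hg_index(citations):
    """Geometric mean of the h- and g-indices."""
    return math.sqrt(len(h_core(citations)) * g_of(citations))

def q2_index(citations):
    """Geometric mean of h and the median citations of the h-core."""
    core = h_core(citations)
    return math.sqrt(len(core) * median(core)) if core else 0.0
```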
r-index: (Jin BH, Liang LM, Rousseau R, Egghe L (2007) The R- and AR-indices: Complementing the h-index. Chinese Science Bulletin 52(6):855-863, doi: 10.1007/s11434-007-0145-9 ) Jin et al. critically observed that with the a-index, "the better scientist is 'punished' for having a higher h-index, as the a-index involves a division by h". Therefore, instead of dividing by h, the authors suggest taking the square root of the sum of citations in the Hirsch core to calculate the index. Jin et al. refer to this new index as the r-index, as it is calculated using a square root. As the r-index -similar to the a-index- measures the citation intensity in the Hirsch core, the index can be very sensitive to just a very few papers receiving extremely high citation counts (Bornmann L, Mutz R, Daniel HD (2008) Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. Journal of the American Society for Information Science and Technology 59(5):830-837, doi: 10.1002/asi.20806 ). The r-index is defined as:
$$R=\sqrt{\displaystyle\sum_{j=1}^{h}cit_j}$$
ar-index: (Jin B (2007) The AR-index: complementing the h-index. ISSI Newsletter 3(1):6.) The ar-index is an adaptation of the r-index. It takes into account not only the citation intensity in the Hirsch core but also makes use of the age of the publications in the core. This is an index that not only can increase but also decrease over time. For a good research evaluation indicator, Jin et al. (Jin BH, Liang LM, Rousseau R, Egghe L (2007) The R- and AR-indices: Complementing the h-index. Chinese Science Bulletin 52(6):855-863, doi: 10.1007/s11434-007-0145-9 . ) see it as a necessary condition that the index has sensitivity to performance changes. For this reason, Jin proposes the ar-index, "defined as the square root of the sum of the average number of citations per year of articles included in the h-core". To illustrate the necessity of a decreasing index in concrete application, Jin et al. calculated the h-index, r-index, and the ar-index for the articles written by BC Brookes (Brookes, who was the Derek de Solla Price Medallist in 1989, died in 1991): "Brookes' h-index over the whole period (2002-2007) stays fixed at h = 12 (hence here h > ar). Between 2002 and 2007 his r-index increased by 5% while the ar-index decreased by about 5%" (Bornmann L, Mutz R, Daniel HD (2008) Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. Journal of the American Society for Information Science and Technology 59(5):830-837, doi: 10.1002/asi.20806 . ). The ar-index is defined as:
$$AR=\sqrt{\displaystyle\sum_{j=1}^{h}\frac{cit_j}{a_j}}$$
where h = h-index, cit = citations counts, and a = number of years since publishing.
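The pair can be sketched as follows (illustrative code; the (citations, age) input format is our assumption, with ages in whole years and at least 1):

```python
import math

def r_index(citations):
    """Square root of the total number of citations in the Hirsch core."""
    ranked = sorted(citations, reverse=True)
    h = max((r for r, c in enumerate(ranked, start=1) if c >= r), default=0)
    return math.sqrt(sum(ranked[:h]))

def ar_index(papers):
    """papers: (citations, age_in_years) pairs, age >= 1."""
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    h = max((r for r, (c, _) in enumerate(ranked, start=1) if c >= r),
            default=0)
    return math.sqrt(sum(c / age for c, age in ranked[:h]))

print(round(r_index([6, 4, 4, 2, 1]), 2))  # sqrt(6 + 4 + 4) ≈ 3.74
```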
m quotient: (Hirsch JE (2005) An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences 102:16569-16572, doi: 10.1073/pnas.0507655102 . ) According to a stochastic model for an author's production/citation process, Burrell (Burrell QL (2007) Hirsch's h-index: a stochastic model. Journal of Informetrics 1(1):16-25, doi: 10.1016/j.joi.2006.07.001 . ) conjectures that the h-index is approximately proportional to career length. One way to compare scientists with different lengths of scientific careers is to divide the h-index by number of years of research activity. For this reason, Hirsch already proposed dividing the h-index by number of years since a scientist's first publication and called this quotient m.
$$m = \frac{h}{y}$$
where h = h-index, and y = number of years since publishing the first paper.
Contemporary h-index: (Sidiropoulos A, Katsaros D, Manolopoulos Y (2007) Generalized Hirsch h-index for disclosing latent facts in citation networks. Scientometrics 72(2):253-280, doi: 10.1007/s11192-007-1722-z ) The original h-index does not take into account the "age" of an article. It may be the case that some scientist contributed a number of significant articles that produced a large h-index, but now s/he is rather inactive or retired. Therefore, senior scientists, who keep contributing nowadays, or brilliant young scientists, who are expected to contribute a large number of significant works in the near future but for now have only a small number of important articles due to the time constraint, are not distinguished by the original h-index. Thus, the need arises to define a generalization of the h-index in order to account for these facts. Therefore, a novel score $S^c(i)$ for an article i based on citation counting was defined as follows:
$$S^c(i)=\gamma \cdot (Y(\mathrm{now})-Y(i)+1)^{-\delta}\cdot|C(i)|$$
where Y(i) is the publication year of an article i and C(i) is the set of articles citing the article i. If we set δ = 1, then $S^c(i)$ is the number of citations that the article i has received, divided by the "age" of the article. Since the number of citations is divided by the time interval, the quantities $S^c(i)$ will be too small to create a meaningful h-index; thus, the coefficient γ is used. This way, an old article gradually loses its "value", even if it still gets citations. In other words, in the calculations, we mainly take into account the newer articles. Therefore, the contemporary h-index is expressed as follows: A researcher has contemporary h-index hc if hc of its Np articles get a score of $S^c(i)$ ≥ hc each, and the rest (Np - hc) articles get a score of $S^c(i)$ < hc.
Trend h-index: (Sidiropoulos A, Katsaros D, Manolopoulos Y (2007) Generalized Hirsch h-index for disclosing latent facts in citation networks. Scientometrics 72(2):253-280, doi: 10.1007/s11192-007-1722-z ) The original h-index does not take into account the year when an article acquired a particular citation, i.e., the "age" of each citation. For instance, consider a researcher who contributed to the research community a number of really brilliant articles during the decade of 1960, which, say, got a lot of citations. This researcher will have a large h-index due to the works done in the past. If these articles are not cited anymore, it is an indication of an outdated topic or an outdated solution. On the other hand, if these articles continue to be cited, then we have the case of an influential mind, whose contributions continue to shape newer scientists' minds. There is also a second very important aspect in aging the citations. There is the potential of disclosing trendsetters, i.e., scientists whose work is considered pioneering and sets out a new line of research that currently is hot ("trendy"), thus this scientist's works are cited very frequently. To handle this case, the opposite approach to that of the contemporary h-index is taken. Instead of assigning to each scientist's article a decaying weight depending on its age, each citation of an article is assigned an exponentially decaying weight, which is expressed as a function of the "age" of the citation. This way, we aim at estimating the impact of a researcher's work at a particular time instance. We are not interested in how old the articles of a researcher are, but whether they still get citations. The score is defined as follows:
$$S^t(i)=\gamma \cdot \displaystyle\sum_{x\in C(i)} (Y(\mathrm{now})-Y(x)+1)^{-\delta}$$
where γ, δ, Y(i) and C(i) for an article i are defined as in the contemporary h-index. Therefore, the trend h-index is defined as follows: A researcher has trend h-index ht if ht of its Np articles get a score of $S^t(i)$ ≥ ht each, and the rest (Np - ht) articles get a score of $S^t(i)$ < ht. Apparently, for γ = 1, δ = 0, the trend h-index coincides with the original h-index.
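Both scores plug into the same generalized criterion: take the largest rank whose score is at least the rank. A hedged sketch (γ = 4 and δ = 1 are illustrative defaults of our own choosing, not a recommendation from this page):

```python
def generalized_h(scores):
    """Largest rank r such that the r-th highest score is at least r."""
    ranked = sorted(scores, reverse=True)
    return max((r for r, s in enumerate(ranked, start=1) if s >= r), default=0)

def contemporary_h(papers, now, gamma=4.0, delta=1.0):
    """papers: (publication_year, citation_count) pairs."""
    return generalized_h(gamma * (now - year + 1) ** -delta * cites
                         for year, cites in papers)

def trend_h(papers, now, gamma=4.0, delta=1.0):
    """papers: one list of citing-publication years per paper."""
    return generalized_h(gamma * sum((now - y + 1) ** -delta for y in citing)
                         for citing in papers)
```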
Dynamic h-Type index: (Rousseau R, Ye FY (2008) A proposal for a dynamic h-type index. Journal of the American Society for Information Science and Technology 59(11):1853-1855, doi: 10.1002/asi.20890 ) This index depends on the h-core, the actual number of citations received by articles belonging to the h-core, and the recent increase in h. The definition contains three time-dependent elements: the size and contents of the h-core, the number of citations received, and the h-velocity. It is indeed possible that two scientists have the same h-index and the same number of citations in the h-core, but that one has no change in his h-index for a long time while the other scientist's h-index is on the rise. For hiring purposes, the second scientist is probably the better choice. Consequently, it is proposed
$$R(T)\cdot v_h(T)$$
as a dynamic h-type index. Here R(T) denotes the R-index (Jin BH, Liang LM, Rousseau R, Egghe L (2007) The R- and AR-indices: Complementing the h-index. Chinese Science Bulletin 52(6):855-863, doi: 10.1007/s11434-007-0145-9 ), equal to the square root of the sum of all citations received by articles belonging to the h-core at Time T. In practice, we have to determine a starting point, T = 0, and a way of determining $v_h$. This starting point should not be the beginning of a scientist's career, but when T is "now", then T = 0 can be taken 10 or 5 years ago (or any other appropriate time). If one has a good-fitting continuous model for h(t) over this period, then this function should be used to determine $v_h(T)$. In practice, it is probably better to find a fitting for $h_{rat}(t)$ -and not for h(t)- as this function is more similar to a continuous function than the standard h-index. Otherwise, the increment $Δh_{rat}(T)=h_{rat}(T)-h_{rat}(T-1)$ can be used (if it is not an obvious outlier). Note that when $h_{rat}(t)$ is concave, this approximation will be larger than the real derivative; when $h_{rat}(t)$ is convex, it will be smaller. When using this approximation, it is certainly appropriate to use the rational h-index as otherwise Δ(h) will often be 0 or 1, and no meaning can be attached to these values. Note that Burrell's raw h-rate h(T)/T should not be used as it is equal for all scientists with the same h(T), and hence, one loses the dynamic aspect. If the equation above is actually used for evaluation purposes, self-citations should be removed.
k-index: (Ye FY, Rousseau R (2010) Probing the h-core: an investigation of the tail-core ratio for rank distributions. Scientometrics. In press, doi:10.1007/s11192-009-0099-6 ) In this recent contribution the authors worry not only about the citations in the h-core, but also in the h-tail. Thus, they defined the k-index as the ratio of impact over the tail-core ratio. Moreover, the k-index was studied as a time-dependent function. Concretely, with C(t), P(t), $C_H(t)$ and $C_T(t)$ denoting the number of citations, the number of publications, the citations received by the h-core and the citations received by the h-tail, respectively, the k-index is defined as:
$$k(t)=\frac{C(t)/P(t)}{C_T(t)/C_H(t)}=\frac{C(t)\,C_H(t)}{P(t)\,(C(t)-C_H(t))}$$
Using some practical observations, the authors conclude that this index decreases in most practical cases according to a power law model.
Seniority-independent Hirsch-type index: (Kosmulski M (2009) New seniority-independent Hirsch-type index. Journal of Informetrics 3(4):341-347, doi:10.1016/j.joi.2009.05.003 ) In this contribution the author presents an index which allows comparing the scientific output of researchers of different seniority. To do so, the hpd-index is defined in the following way: "A scientist has index hpd if hpd of his/her papers have at least hpd citations per decade each, and his/her other papers have less than hpd + 1 citations per decade each."
Specific-impact s-index: (De Visscher A. An Index to Measure a Scientist's Specific Impact. Journal of the American Society for Information Science and Technology 61 (2) (2010) 319-328. doi:10.1002/asi.21240) This index is defined as a measure of a scientist's projected impact per paper and aims to reduce the age bias from older papers (which had more time to accumulate citations than recent papers). This index correlates well with the h-index squared.
f-index: (Franceschini F., Maisano D. Analysis of the Hirsch index's operational properties. European Journal of Operational Research 203 (2) (2010) 494-504. doi:10.1016/j.ejor.2009.08.001) This index complements the h-index with the information related to the publication age. One of its main characteristics is that it does not compromise the original simplicity and immediacy of understanding of the h-index.
Impact vitality indicator: (Rons N., Amez L. Impact vitality: an indicator based on citing publications in search of excellent scientists. Research Evaluation 18 (3) (2009) 233-241. doi:10.3152/095820209X470563) This paper contributes to the quest for an operational definition of 'research excellence' and proposes a translation of the excellence concept into a bibliometric indicator. The impact vitality indicator is proposed. It reflects the vitality of the impact of a researcher's publication output, based on the change in volume over time of the citing publications.
m-index: (Bornmann L, Mutz R, Daniel HD (2008) Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. Journal of the American Society for Information Science and Technology 59(5):830-837, doi: 10.1002/asi.20806 . ) As the distribution of citation counts is usually skewed, the median and not the arithmetic average should be used as the measure of central tendency. Therefore, as a variation of the a-index, the m-index is proposed as the median number of citations received by papers in the Hirsch core.
$h_w$-index: (Egghe L, Rousseau R (2008) An h-index weighted by citation impact. Information Processing and Management 44(2):770-780, doi: 10.1016/j.ipm.2007.05.003 . ) Similar to the ar-index, the $h_w$-index (an h-index weighted by citation impact) developed by Egghe and Rousseau is sensitive to performance changes. The $h_w$-index is defined as:
$$h_w = \sqrt{\displaystyle\sum_{j=1}^{r_0}{cit_j}}$$
where $r_w(j)=\frac{1}{h}\sum_{i=1}^{j}cit_i$ is the weighted rank and $r_0$ is the largest row index j such that $r_w(j) \le cit_j$.
hm-index: (Schreiber M (2008) To share the fame in a fair way, $h_m$ for multi-authored manuscripts. New Journal of Physics 10(040201):1-9, doi: 10.1088/1367-2630/10/4/040201 . ) The $h_m$-index is determined in analogy to the h-index, but counting the papers fractionally according to the number of authors, for example, only as one third for three authors. This yields an effective number which is utilized to define the $h_m$-index as that effective number of papers that have been cited $h_m$ or more times. Let r be the rank that is attributed to a paper when the publication list of an author is sorted by the number c(r) of citations. This arrangement is offered, e.g., in the WoS database. Hirsch's h-index is determined from:
$$h=\max_r(r \le c(r))$$
where each paper is fully counted for the (trivial) determination of its rank
$$r=\displaystyle\sum_{r'=1}^{r}1$$
Counting a paper with a(r) authors only fractionally, i.e. by 1/a(r) yields an effective rank
$$r_{\mathrm{eff}}(r)=\displaystyle\sum_{r'=1}^{r}\frac{1}{a(r')}$$
which is used to define the $h_m$-index as
$$h_m=\max_r(r_{\mathrm{eff}}(r) \le c(r))$$
More information about this index can be found in a recent contribution: Schreiber M (2009) A Case Study of the Modified Hirsch Index h(m) Accounting for Multiple Coauthors. Journal of the American Society for Information Science and Technology 60(6):1274-1282, doi: 10.1002/asi.21057 .
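A sketch of the fractional counting (our own formulation of the formulas above):

```python
def hm_index(papers):
    """papers: (citations, n_authors) pairs."""
    ranked = sorted(papers, key=lambda p: p[0], reverse=True)
    r_eff, h_m = 0.0, 0.0
    for cites, n_authors in ranked:
        r_eff += 1.0 / n_authors   # fractional contribution to the rank
        if r_eff <= cites:         # same criterion as h, on the effective rank
            h_m = r_eff
    return h_m

# Papers with 5, 4 and 3 citations and 1, 2 and 3 authors:
print(round(hm_index([(5, 1), (4, 2), (3, 3)]), 2))  # 1 + 1/2 + 1/3 ≈ 1.83
```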
Normalized h-index: (Sidiropoulos A, Katsaros D, Manolopoulos Y (2007) Generalized Hirsch h-index for disclosing latent facts in citation networks. Scientometrics 72(2):253-280, doi: 10.1007/s11192-007-1722-z . ) Since scientists do not publish the same number of articles, the original h-index is not the fairest metric; thus, a normalized version of the h-index is defined as follows: A researcher has normalized h-index $h^n = h/N_p$, if h of its $N_p$ articles have received at least h citations each, and the rest ($N_p-h$) articles received no more than h citations.
Tapered h-index: (Anderson TR, Hankin KSH, Killworth PD (2008) Beyond the Durfee square: Enhancing the h-index to score total publication output. Scientometrics 76(3):577-588, doi: 10.1007/s11192-007-2071-2 . ) Consider a scientist who has 5 publications which, when ranked, have 6,4,4,2,1 citations. This publication output can be represented by a Ferrers graph, where each row represents a partition of the total 17 cites amongst papers (Fig. 2). The largest completed (filled in) square of points in the upper left hand corner of a Ferrers graph is called the Durfee square. The h-index is equal to the length of the side of the Durfee square (in the case of Fig. 2, h = 3), effectively assigning no credit (zero score) to all points that fall outside.
Let us start by considering h-index scores for sets of citation records that exactly match Durfee squares. If an author has a single paper that has one citation, this scores h = 1. Subsequently, h = 2 is achieved with two papers each with two citations. To move from h = 1 to h = 2, an additional 3 citations are required, one for the first paper and two for the second paper. In turn, moving from h = 2 to h = 3 requires a further 5 citations, reaching a 3, 3, 3 partitioning of the nine citations in the Ferrers graph (and so a Durfee square of side 3). Following this scheme, it is possible to score each citation individually, and in a manner that generates identical h-index scores when the relevant Durfee squares are complete (Fig. 2). Thus, the single citation in the Durfee square of side one has a score of 1, the three additional citations in the Durfee square of side 2 each score 1/3, and the five additional citations in the Durfee square of side 3 each score 1/5. Summing the relevant citations, scores of 1, 2, 3 are achieved for Durfee squares whose width is 1, 2, 3, matching the h-index. This notation immediately suggests a new index, $h_T$, which has the property that each additional citation increases the total score (the index has the property of being "marginally increasing"), whether or not it lies within the h-index Durfee square. The score of any citation on a Ferrers graph is now given by 1/(2L - 1), where L is the length of side of a Durfee square whose boundary includes the citation in question. The additional citations that fall outside the Durfee square (of side 3) in Fig. 2 can now be scored, the five papers achieving scores of 1.88, 1.01, 0.74, 0.29 and 0.11, leading to a total score for $h_T$ of 4.03. In mathematical terms, the most cited paper in a given list, with $n_1$ citations, generates a score, $h_{T(1)}$, of:
$$h_{T(1)}=\displaystyle\sum_{i=1}^{n_1}\frac{1}{2i-1}=\frac{\ln(n_1)}{2}+O(1)$$
where $\ln(n_1)$ is the (natural) log of $n_1$, and O(1) is mathematical shorthand for a term that remains bounded as $n_1$ approaches infinity. The resulting score is 2.13 for 10 citations, 3.28 for 100 citations, 4.44 for 1000 citations and 5.59 for 10000 citations (Fig. 3).
Fig. 3. Tapered h-index score for an author's top-ranked paper, $h_{T(1)}$, as a function of the number of citations $n_1$.
These scores are markedly higher than the score of 1 that the top-ranked paper would receive under the h-index, increasing asymptotically in proportion to $\log(n_1)$. The paper ranked second in the list scores 1/3 for its first citation, and then 1/3, 1/5, 1/7 etc., for further citations as for the top-ranked paper. Now, if an author has N papers with associated citations $n_1, n_2, n_3, ..., n_N$ (ranked in descending order as in a Ferrers graph), the score for any single paper ranked j in the list (with $n_j$ citations), $h_{T(j)}$, is:
$$h_{T(j)}=\frac{n_j}{2j-1}, \quad n_j \le j$$
$$h_{T(j)}=\frac{j}{2j-1}+\displaystyle\sum_{i=j+1}^{n_j}\frac{1}{2i-1}, \quad n_j > j$$
The total tapered h-index for a citation-ranked list of publications, $h_T$, is then calculated by summing over all the papers in the list:
$$h_T=\displaystyle\sum_{j=1}^{N}h_{T(j)}$$
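The two-case formula translates directly into code; the sketch below (our own transcription) reproduces the worked example from the text, where citations of 6, 4, 4, 2 and 1 give $h_T$ = 4.03:

```python
def tapered_h(citations):
    """Sum of the per-paper scores h_T(j) defined above."""
    ranked = sorted(citations, reverse=True)
    total = 0.0
    for j, n_j in enumerate(ranked, start=1):
        if n_j <= j:
            total += n_j / (2 * j - 1)
        else:
            total += j / (2 * j - 1) + sum(1.0 / (2 * i - 1)
                                           for i in range(j + 1, n_j + 1))
    return total

print(round(tapered_h([6, 4, 4, 2, 1]), 2))  # 4.03
```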
$h_{rat}$-index: (Ruane F, Tol RSJ (2008) Rational (successive) h-indices: An application to economics in the Republic of Ireland. Scientometrics 75(2):395-405, doi: 10.1007/s11192-007-1869-7 . ) This index is defined as (h+1) minus the relative number of scores necessary for obtaining a value h+1. It clearly satisfies the inequality $h \le h_{rat} < h+1$. More precisely, let n be the (least) number of citations necessary for obtaining an h-index 1 higher than h. This number n is divided by the highest possible n, namely, 2h+1. Indeed, the lowest possible situation leading to an h-index equal to h consists of h articles with h citations, followed by an article without any citation. To get an h-index equal to h+1, one needs one more score for each of the first h sources, h scores in total, and h+1 scores for the last one: a total of 2h+1. This h-index has the advantage of increasing in smaller steps than the standard h-index.
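In code (a sketch under our own conventions; padding with a zero-cited paper handles the case where every listed paper is in the core):

```python
def h_rat(citations):
    """(h + 1) minus the share of the 2h + 1 citations still needed for h + 1."""
    ranked = sorted(citations, reverse=True) + [0]
    h = max((r for r, c in enumerate(ranked, start=1) if c >= r), default=0)
    needed = sum(max(0, h + 1 - c) for c in ranked[:h + 1])
    return (h + 1) - needed / (2 * h + 1)

# h = 3 here; two more citations would yield h = 4, so h_rat = 4 - 2/7 ≈ 3.71.
print(round(h_rat([6, 4, 4, 2, 1]), 2))
```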
v-index: (Riikonen P, Vihinen M (2008) National research contributions: A case study on Finnish biomedical research. Scientometrics. 77(2):207-222, doi:10.1007/s11192-007-1962-y . ) Riikonen and Vihinen proposed the v-index as the percentage of articles forming the h-index. They suggest that taking together the h-index and the v-index into consideration it can be better measured the recognition of scientists, and the breadth of their productivity. As the h-index grows very slowly with an increase in the number of publications, the v-index indicates great variation in the proportion of highly cited articles for PIs with similar h-index values.
e-index: (Zhang CT (2009) The e-Index, Complementing the h-Index for Excess Citations. PLoS ONE. 4(5):e5429, doi:10.1371/journal.pone.0005429 . ) The e-index is presented as a simple complement to the h-index. This index tries to represent the excess citations that are ignored by the h-index. One of its advantages is that it is independent of the h-index, which is not the case for almost any other related index. Its mathematical formulation is as follows:
$$e^2=\displaystyle\sum_{j=1}^{h}{(cit_j-h)}=\displaystyle\sum_{j=1}^{h}cit_j-h^2$$
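A direct transcription (illustrative only):

```python
import math

def e_index(citations):
    """Square root of the h-core citations in excess of h^2."""
    ranked = sorted(citations, reverse=True)
    h = max((r for r, c in enumerate(ranked, start=1) if c >= r), default=0)
    return math.sqrt(sum(ranked[:h]) - h * h)

print(round(e_index([6, 4, 4, 2, 1]), 2))  # sqrt(14 - 9) ≈ 2.24
```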
Multidimensional h-index: (Garcia-Perez MA (2009) A multidimensional extension to Hirsch's h-index. Scientometrics 81(3):779-785, doi:10.1007/s11192-009-2290-1 . ) The multidimensional h-index is defined in order to be able to discriminate among researchers with similar h-indices. To do so, the author proposes to use the papers outside the h-core to compute a successive h-index which can help to differentiate among researchers with the same h-index. In this way, the new multidimensional h-index is able to obtain more granularity to compare scientists.
f-index: (Katsaros D, Akritidis L, Bozanis P (2009) The f Index: Quantifying the Impact of Coterminal Citations on Scientists' Ranking. Journal of the American Society for Information Science and Technology 60(5):1051-1056, doi:10.1002/asi.21040 . ) In this paper, the authors present the coterminal citations as an extension of cocitation in which some author has (co)authored multiple papers citing another paper. To avoid the impact that these coterminal citations can introduce in the h- and related indices, they propose the f-index, which discriminates those individuals whose work penetrates many scientific communities.
π-index: (Vinkler P (2009) The π-index: a new indicator for assessing scientific impact. Journal of Information Science 35(5):602-612, doi:10.1177/0165551509103601 . ) Vinkler suggested a new index, named the π-index, for comparative assessment of scientists active in similar subject fields. The π-index is equal to one hundredth of the number of citations obtained to the top square root of the total number of journal papers ("elite set of papers") ranked by the decreasing number of citations. The author also studies the relation of the π-index to other indexes and its dependence on the field is studied, using data of journal papers of "highly cited researchers".
RC- and CC- indices: (Abbasi A., Altmann J., Hwang J. Evaluating scholars based on their academic collaboration activities: two indices, the RC-index and the CC-index, for quantifying collaboration activities of researchers and scientific communities. Scientometrics 83 (1) (2010) 1-13. doi:10.1007/s11192-009-0139-2) This study addresses the problem of the evaluation of the collaboration activities of researchers. Based on three measures, namely the collaboration network structure of researchers, the number of collaborations with other researchers, and the productivity index of co-authors, two new indices, the RC-Index and CC-Index, are proposed for quantifying the collaboration activities of researchers and scientific communities.
ch-index: (Ajiferuke I., Wolfram D. Citer analysis as a measure of research impact: library and information science as a case study. Scientometrics 83 (3) (2010) 623-638. doi:10.1007/s11192-009-0127-6) This paper proposes to use the number of citers instead of citations for the researcher production. Thus, it is possible to obtain a complementary measure of the author's reach of influence in a field, minimizing the effects of a limited circle of researchers citing the author's works.
Citation speed s-index: (Bornmann L., Daniel H.D. The citation speed index: A useful bibliometric indicator to add to the h index. Journal of Informetrics 4 (3) (2010) 444-446. doi:10.1016/j.joi.2010.03.007) This proposal is constructed as a meaningful complement to the h-index. It uses the number of months that have elapsed since the first citation. It tries to reflect the reception of the publications by the scientific community. Particularly, the speed index is defined as: a group of papers has the index s if for s of its $N_p$ papers the first citation was at least s months ago, and for the other ($N_p − s$) papers the first citation was fewer than s months ago.
$h^2$-lower, $h^2$-center and $h^2$-upper: (Bornmann L., Mutz R., Daniel H.D. The h index research output measurement: Two approaches to enhance its accuracy. Journal of Informetrics 4 (3) (2010) 407-414. doi:10.1016/j.joi.2010.03.005) In this work the authors address the problem that the h-index (and many of its variants) center its attention in just a portion of the scientist's citation distribution. To avoid this problem the authors define three h variants to quantify three different areas in the scientist's citation distribution: the low impact area ($h^2$-lower), the area captured by the h index ($h^2$-center), and the area of publications with the highest visibility ($h^2$-upper).
Environment $H_j$-indices (Dorta-Gonzalez P., Dorta-Gonzalez M.I. Bibliometric indicator based on the h-index. Revista Española de Documentación Científica 33 (2) (2010) 225-245. doi:10.3989/redc.2010.2.733) These indices are introduced to help to discriminate among similar index values (for example, when two citation curves intersect each other). The main idea of this index is to take into account the areas above and under the h-square in the citation curve when the h-index is increased. Thus, it is able to better discriminate among researchers with similar h-indices but different citation distributions.
h̄-index (Hirsch J.E. An index to quantify an individual's scientific research output that takes into account the effect of multiple coauthorship. Scientometrics 85 (3) (2010) 741-754. doi:10.1007/s11192-010-0193-9) In this recent proposal, Hirsch presents the h̄-index ("hbar"), defined as the number of papers of an individual that have citation count larger than or equal to the h̄ of all coauthors of each paper. This new index is useful to characterize the scientific output of a researcher while taking into account the effect of multiple authorship.
Role based h-maj-index (Hu X.J., Rousseau R., Chen J. In those fields where multiple authorship is the rule, the h-index should be supplemented by role-based h-indices. Journal of Information Science 36 (1) (2010) 73-85. doi:10.1177/0165551509348133) As with other recent proposals, the authors are dealing with the problem of multiple co-authorship. In this paper they propose a new index which is computed as the h-index but only on the papers in which the author has played a major or core role. The authors suggest that it can be used as a supplementary index in the fields where "first authors" and / or "corresponding authors" are common.
2nd generation citations h-index (Kosmulski M. Hirsch-type approach to the 2nd generation citations. Journal of Informetrics 4 (3) (2010) 257-264. doi:10.1016/j.joi.2010.01.003) An alternative h-index based on 2nd generation citations (citations to the papers that cite a paper) is presented. This approach allows papers to be rated more accurately, as not all direct citations have the same weight.
n-index (Namazi M.R., Fallahzadeh M.K. n-index: A novel and easily-calculable parameter for comparison of researchers working in different scientific fields. Indian Journal of Dermatology Venereology & Leprology 76 (3) (2010) 229-230. doi:10.4103/0378-6323.62960) The n-index is presented as an easy solution for the comparison of researchers working on different disciplines. To do so, the n-index is computed as the researcher's h-index divided by the highest h-index of the journals of his/her major field of study.
p-index (Prathap G. The 100 most prolific economists using the p-index. Scientometrics 84 (1) (2010) 167-172. doi:10.1007/s11192-009-0068-0) The author criticizes the h-index as a poor indicator of performance. To overcome this issue a new index, the performance p-index, is presented. It is defined to provide the best balance between activity (total citations) and excellence (mean citation rate). The author uses this new indicator to rank the 100 most prolific economists.
Mock $h_m$-index (Prathap G. Is there a place for a mock h-index? Scientometrics 84 (1) (2010) 153-165. doi:10.1007/s11192-009-0066-2) A new index is proposed in order to enhance the resolving power of the original h-index. It has been designed using ideas from mathematical modeling.
w-index (Wu Q. The w-Index: A Measure to Assess Scientific Impact by Focusing on Widely Cited Papers. Journal of the American Society for Information Science and Technology 61 (3) (2010) 609-614. doi:10.1002/asi.21276) The w-index is defined in a similar way to the h-index but focusing only on excellent papers (or highly cited papers). To do so it is defined as: If w of a researcher's papers have at least 10w citations each and the other papers have fewer than 10(w+1) citations, that researcher's w-index is w. The author shows that there are noticeable differences between the h- and w-indices, as the w-index pays close attention to the more widely cited papers.
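The definition mirrors the h-index with a tenfold threshold, as this hedged sketch (our own code) shows:

```python
def w_index(citations):
    """Largest w such that w papers have at least 10 * w citations each."""
    ranked = sorted(citations, reverse=True)
    return max((r for r, c in enumerate(ranked, start=1) if c >= 10 * r),
               default=0)

print(w_index([120, 60, 30, 9]))  # 3: three papers with at least 30 citations
```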
b-index (Brown R.J.C. A simple method for excluding self-citation from the h-index: the b-index. Online Information Review 33 (6) (2009) 1129-1136. doi:10.1108/14684520911011043) The author addresses the problem of self-citations inflating h-related indices. To do so he assumes that the relative self-citation rate is constant across an author's publications and that the citation profile of a set of papers follows a Zipfian distribution. It is shown that a value called the b-index can be computed as the integer value of the author's external citation rate (non-self-citations) to the power three quarters, multiplied by their h-index. This value does not require an extensive analysis of the self-citation rates of individual papers to produce, and appropriately shows the biggest numerical decreases, as compared to the corresponding h-index, for very high self-citers. Thus, the presented method allows the user to assess quickly and simply the effects of self-citation on an author's h-index.
Generalized h-index (Glanzel W., Schubert A. Hirsch-type characteristics of the tail of distributions. The generalised h-index. Journal of Informetrics 4 (1) (2009) 118-123. doi:10.1016/j.joi.2009.10.002) In this paper a generalisation of the h-index and g-index is given on the basis of non-negative real-valued functionals defined on subspaces of the vector space generated by the ordered samples. Several Hirsch-type measures are defined and their basic properties are analysed.
w-index (Wohlin C. A new index for the citation curve of researchers. Scientometrics 81 (2)(2009) 521-533. doi:10.1007/s11192-008-2155-z) This paper observes that usual citation indices such as the h-index reduce the distribution of cites to a single point estimate, which can be seen as an over-simplification. Thus, the author proposes a new index that takes into account the whole citation curve of the researcher. He concludes that the new index provides added value as it balances citations and publications through the citation curve.
$IF^2$-index: ( Journal Impact Factors for evaluating scientific performance: use of h-like indicators. Scientometrics 82 (3) (2010) 613-626. doi:10.1007/s11192-010-0175-y) The Impact Factor squared index is presented in order to reflect the degree to which large entities (countries or states) participate in top-level research in a particular field. It uses the Journal Impact Factor instead of the number of citations and can be extended to other h-related indices. Its main advantages are: i) it provides a stable value that does not change over time, reflecting the degree to which a research unit participated in top-level research in a given year; ii) it can be calculated closely approximating the publication date of yearly datasets; iii) it provides an additional dimension when a full article-based citation analysis is not feasible.
Single paper h-index: ( Using the h-index for assessing single publications. Scientometrics 78 (3) (2009) 559-565. doi:10.1007/s11192-008-2208-3 ) This index is a simple extension to measure the direct impact of highly cited publication as well as its indirect influence through the citing papers. It is computed as the h-index of the set of papers citing the work in question.
hint-index: ( Hirsch-type index of international recognition. Journal of Informetrics 4 (3) (2010) 351-357. doi:10.1016/j.joi.2010.02.004) This index tries to measure the broad international recognition of a scientist. To do so it uses the number of countries of the citing papers instead of the number of citations for a paper. One of its advantages is that it prevents the overrating of a citation record by self-citations or citations of a narrow circle of co-workers.
mean h-index: ( Ranking university departments using the mean h-index. Scientometrics 82 (2) (2010) 211-216. doi:10.1007/s11192-009-0048-4) In this work the author proposes the mean h-index to rank universities: the h-index is computed for several related departments in each university, and the mean of those values is used to rank the research performance of the universities in a particular field.
$^nh_3$-index: ( A research impact indicator for institutions. Journal of Informetrics 4 (4) (2010) 581-590. doi:10.1016/j.joi.2010.06.006) The authors present another index to assess the scientific production of institutions. Their main argument is that using just the h-index (based on the number of citations and documents) to measure this performance produces a result that is strongly biased by institution size. Furthermore, the h-index when applied to institutions tends to retain a very small number of documents, making all other research production irrelevant for this indicator. The $^nh_3$ index proposed here is designed to measure solely the impact of research in a way that is independent of the size of the institution and is made relatively stable by making a 20-year estimate of the citations of the documents produced in a single year.
πv-index: ( The pi(v)-index: a new indicator to characterize the impact of journals. Scientometrics 82 (3) (2010) 461-475. doi:10.1007/s11192-010-0182-z) Vinkler presents a new indicator stressing the importance of papers in the "elite set" (i.e., highly cited papers). The number of papers in the elite set ($P_{\pi v}$) is calculated with the equation: (10 log P) − 10, where P is the total number of papers in the set. The one-hundredth of citations (C) obtained by the $P_{\pi v}$ papers is regarded as the πv-index, which is field and time dependent. The πv-index is closely correlated with the citedness (C/P) of the $P_{\pi v}$ papers, and it is also correlated with the Hirsch-index.
(Iglesias JE, Pecharromán C (2007) Scaling the h-index for different scientific ISI fields. Scientometrics 73(3):303-320, doi: 10.1007/s11192-007-1805-x . ) That the h-index cannot be used off-hand to compare research workers of different areas has been pointed out by Hirsch himself, by noting that the most highly cited scientists for the period 1983-2002 in the life sciences had h values that were almost twice those of the most cited physicists; and from a list of the 36 inductees in the US National Academy of Sciences in the biological and biomedical sciences he extracts the same trend, although perhaps with smaller relative differences with respect to the physical sciences. It is also well known that the usual journal citation indicators lack normalisation for reference practices and traditions in the different fields of science (Pinski G, Narin F (1976) Citation influence for journal aggregates of scientific publications: theory, with application to the literature of physics. Information Processing and Management 12(5):297-312, doi: 10.1016/0306-4573(76)90048-0 . ; Glänzel W, Moed HF (2002) Journal impact measures in bibliometric research. Scientometrics 53(2):171-193, doi: 10.1023/A:1014848323806 . ), among other flaws that have been pointed out in the specialised literature. Therefore, it should come as no surprise that the h-index is also flawed in similar ways. For this reason, different standardizations of the h-index for comparing scientists that work in different scientific fields have been developed.
Iglesias JE, Pecharromán C (2007) Scaling the h-index for different scientific ISI fields. Scientometrics 73(3):303-320, doi: 10.1007/s11192-007-1805-x .
In this paper, the authors suggest a rational method to account for different citation practices, introducing a simple multiplicative correction to the h-index which depends basically on the ISI field the worker is in, and to some extent, on the number of papers the researcher has published. They also propose a list of these normalizing factors, so the corrected h remains relatively simple to obtain.
Imperial J, Rodríguez-Navarro A (2007) Usefulness of Hirsch's h-index to evaluate scientific research in Spain. Scientometrics 71(2):271-282, doi: 10.1007/s11192-007-1665-4 .
In this paper, the authors suggest that, in general, publications in applied areas are less cited than publications in dynamic, basic areas, and therefore, scientists in the former areas show lower values of h. These differences are mainly caused by: (i) the different sizes of the populations that can potentially cite the publication, and (ii) the lower emphasis placed on research by scientists in applied areas. Although the complex dependence of h on the citing population size precludes an overall h normalization across scientific areas, they empirically observed that the highest h values attained for a given area correlate well with the impact factor of journals in that area. They calculated h-indices for the most highly cited scientists in different areas and subareas (reference h-index, or hR) and observed that hR indices are more dependent on journal impact factors than on specific publication patterns. In general, and for most areas, they observe
$$h_R \sim 16+11f$$
where f is the impact factor of the top journals that characterize that specific scientific area or subarea. Since hR exhibits a linear dependence on f, it is possible to compute it as an average for scientists who publish in more than one area.
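As a small illustration of this empirical relation (a sketch only; the 60/40 split used to average across areas is a hypothetical choice, since the summary above does not specify the weighting):

```python
def reference_h(f):
    """Empirical reference h-index, h_R ~ 16 + 11*f, for an area
    whose top journals have impact factor f."""
    return 16 + 11 * f

# a scientist publishing in two areas; the output shares are invented
areas = [(2.5, 0.6), (1.1, 0.4)]  # (impact factor, share of output)
h_r = sum(share * reference_h(f) for f, share in areas)
print(round(h_r, 1))
```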
Namazi M.R., Fallahzadeh M.K. n-index: A novel and easily-calculable parameter for comparison of researchers working in different scientific fields. Indian Journal of Dermatology Venereology & Leprology 76 (3) (2010) 229-230. doi: 10.4103/0378-6323.62960
The n-index is presented as an easy solution for comparing researchers working in different scientific fields. It is computed as the researcher's h-index divided by the highest h-index of the journals in his/her major field of study.
Bornmann L, Wallon G, Ledin A (2008) Is the h index related to (standard) bibliometric measures and to the assessments by peers? An investigation of the h index by using molecular life sciences data. Research Evaluation 17(2):149-156, doi: 10.3152/095820208X319166
In this paper, the authors used some comprehensive data sets of applicants to the long-term fellowship and young investigator programmes of the European Molecular Biology Organization. They determined the relationship between the h-index and three standard bibliometric indicators (total number of publications, total citation counts, and average journal impact factor) as well as peer assessments to test the convergent validity of the h-index. Their results suggest that the h-index is a promising rough measurement of the quality of a young scientist's work as it is judged by internationally renowned scientists.
Costas R, Bordons M (2008) Is g-index better than h-index? An exploratory study at the individual level. Scientometrics 77(2):267-288, doi: 10.1007/s11192-007-1997-0
In this paper, the authors analyse the ability of the g-index and h-index to discriminate between different types of scientists (low producers, big producers, selective scientists and top scientists) in the area of Natural Resources at the Spanish CSIC (WoS, 1994-2004). Their results show that these indicators clearly differentiate low producers and top scientists, but do not discriminate between selective scientists and big producers. However, they show that the g-index is more sensitive than the h-index in the assessment of selective scientists, since this type of scientist shows on average a higher g-index/h-index ratio and a better position in g-index rankings than in h-index ones. Therefore, current research suggests that these indices do not substitute for each other but are complementary.
Lehmann S, Jackson AD, Lautrup BE (2008) A quantitative analysis of indicators of scientific performance. Scientometrics 76(2):369-390, doi: 10.1007/s11192-007-1868-8
In this work, Bayesian statistics are used to analyze the h-index and several other indicators of scientific performance, in order to determine each indicator's ability to discriminate between scientific authors. The authors demonstrate that the best of the indicators they studied requires approximately 50 papers to draw conclusions regarding long-term scientific performance. In addition, they show that their approach allows a statistical comparison among scientists from different fields.
Van Leeuwen T (2008) Testing the validity of the Hirsch-index for research assessment purposes. Research Evaluation 17(2):157-160, doi: 10.3152/095820208X319175
A bibliometric study in the Netherlands has been conducted focusing on the level of the individual researcher in relation to an academic reward system. He compared the h-index with various bibliometric indicators and other characteristics of researchers and tested its usefulness in research assessment procedures. He found that there is a strong bias towards the research field(s) in which the researcher is active, and thus, he concludes that this limits the validity of the h-index for the specific interest of evaluation practices.
Zhang CT (2009) The e-Index, Complementing the h-Index for Excess Citations. PLoS ONE. 4(5):e5429, doi:10.1371/journal.pone.0005429
In the article where Zhang defined the e-index he also introduces some comparisons between the new index and some classical ones (the r-index, a-index and g-index). In this comparison the author is particularly concerned about the possible loss of citation information in the g-index.
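For reference, the e-index is defined from the citations of the h-core (the h most-cited papers); a minimal sketch under the standard convention that e² is the excess of h-core citations over h²:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cits = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cits, start=1) if c >= i)

def e_index(citations):
    """Zhang's e-index: e^2 equals the citations in the h-core
    in excess of the h^2 citations already accounted for by h."""
    cits = sorted(citations, reverse=True)
    h = h_index(cits)
    return (sum(cits[:h]) - h * h) ** 0.5

cits = [30, 18, 12, 7, 6, 3, 1]  # invented citation record
print(h_index(cits), round(e_index(cits), 2))  # -> 5 6.93
```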
Burrell QL (2009) On Hirsch's h, Egghe's g and Kosmulski's h(2). Scientometrics 79(1):79-91, doi:10.1007/s11192-009-0405-3
The paper investigates the inter-relationships between the h-index, the g-index and the h(2)-index and also their time dependence using the stochastic publication/citation model previously proposed by the author. Some tentative suggestions regarding the relative merits of these three proposed measures are also presented.
Guns R, Rousseau R (2009) Real and rational variants of the h-index and the g-index. Journal of Informetrics 3(1):64-71, doi:10.1016/j.joi.2008.11.004
In this contribution the authors review the definitions of the rational and real-valued variants of the h-index and g-index. They show how these can be obtained both graphically and by calculation. In addition they show that the relation between the real and the rational g-index depends on the number of citations of the article ranked g + 1.
Bador P., Lafouge T. Comparative Analysis of Impact Factor and h-index for Pharmacology Journals. Therapie 65 (2) (2010) 129-137, doi:10.2515/therapie/2009061
The authors compare the Impact Factor (IF) 2006 and the h-index 2006 for a sample of "Pharmacology and Pharmacy" journals computed from the ISI Web of Science using the same parameters (the same two publication years (2004-2005) and the same one-year citation window (2006)). They concluded that the IF and the h-index rankings of the journals are very different (the correlation coefficient between the IF and the h-index is low for journals in this area). The IF and h-index can be completely complementary when evaluating journals of the same scientific discipline. This study was later complemented in Bador P., Lafouge T. Comparative analysis between impact factor and h-index for pharmacology and psychiatry journals. Scientometrics 84 (1) (2010) 65-79, doi:10.1007/s11192-009-0058-2
Abramo G., D'Angelo C.A., Viel F. A Robust Benchmark for the h- and g-Indexes. Journal of the American Society for Information Science and Technology 61 (6) (2010) 1275-1280, doi:10.1002/asi.21330
This paper aims to shed some light on the problem of comparing h-related indices among different scientists. To do so, the authors have measured the h- and Egghe's g-indices of all Italian university researchers in the hard sciences over a 5-year window. Descriptive statistics are provided for all of the 165 subject fields examined, offering robust benchmarks for those who wish to compare their individual performance to that of their colleagues in the same subject field.
Schreiber M (2008) An empirical investigation of the g-index for 26 physicists in comparison with the h-index, the a-index, and the r-index. Journal of the American Society for Information Science and Technology, 59(9):1513-1522, doi: 10.1002/asi.20856
In this study, Schreiber works out 26 practical cases of physicists from the Institute of Physics at Chemnitz University of Technology, and compares the h and g values. It is demonstrated that the g-index discriminates better between different citation patterns. As expected, the g-index allows for a better discrimination between the datasets and yields some rearrangement of the order. The rearrangements can be traced to different individual citation patterns, in particular distinguishing between one-hit wonders and enduring performers: the one-hit wonders advance in the g-sorted list. In his opinion, this makes the g-index more suitable than the h-index to characterize the overall impact of the publications of a scientist. Especially for not-so-prominent scientists, the small values of h do not allow for a reasonable distinction between the datasets. This can also be achieved by evaluating the a-index, which reflects the average number of citations in the h-core, and interpreting it in conjunction with the h-index. h and a can be combined into the r-index to measure the h-core's citation intensity. He also determines the a and r values for the 26 datasets. For a better comparison, he utilizes interpolated indices. The correlations between the various indices as well as with the total number of papers and the highest citation counts are discussed. The largest Pearson correlation coefficient is found between g and r. Although the correlation between g and h is relatively strong, the arrangement of the datasets is significantly different depending on whether they are put into order according to the values of either h or g.
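To make the indices being compared concrete, here is a minimal sketch of the usual (non-interpolated) definitions of h, g, a and r from a list of citation counts; the toy record is invented for illustration:

```python
def h_index(citations):
    """Largest h such that the h most-cited papers have >= h citations each."""
    cits = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cits, start=1) if c >= i)

def g_index(citations):
    """Largest g such that the g most-cited papers together have >= g^2 citations."""
    cits = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cits, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

def a_index(citations):
    """Average number of citations of the papers in the h-core."""
    cits = sorted(citations, reverse=True)
    h = h_index(cits)
    return sum(cits[:h]) / h

def r_index(citations):
    """Citation intensity of the h-core: r = sqrt(h * a)."""
    return (h_index(citations) * a_index(citations)) ** 0.5

cits = [42, 17, 9, 8, 5, 5, 3, 2, 1, 0]  # hypothetical citation record
print(h_index(cits), g_index(cits), round(a_index(cits), 1), round(r_index(cits), 1))
# -> 5 6 16.2 9.0
```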
Bornmann L, Mutz R, Daniel HD (2008) Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. Journal of the American Society for Information Science and Technology 59(5):830-837, doi: 10.1002/asi.20806
In this study, the authors examined empirical results on the h-index and its most important variants in order to determine whether the variants developed provide an incremental contribution for evaluation purposes. They examined the h-index and the most important h-index variants that have been proposed and discussed in the literature: the m quotient, g-index, h(2)-index, a-index, r-index, ar-index and hw-index. They also included in their analysis the m-index, a variant of the a-index that they propose. The aim of the analysis is to determine empirically the extent to which the development of the variants of the h-index does in fact result in an incremental contribution. The results of the analysis indicate that the h-index and its variants fall into two types: (i) one type (h-index, m quotient, g-index and h(2)-index) describes the most productive core of the output of a scientist and tells us the number of papers in the core, and (ii) the other type (a-index, m-index, r-index, ar-index and hw-index) depicts the impact of the papers in the core.
Bornmann L, Marx W, Schier H (2009) Hirsch-type index values for organic chemistry journals: a comparison of new metrics with the Journal Impact Factor. European Journal of Organic Chemistry 10:1471-1476, doi: 10.1002/ejoc.200801243
Further empirical analysis on the h-index and several of its variants (g-index, h(2)-index, a-index and r-index) to measure the performance of journals is presented in this work. Concretely, the authors compare 20 organic chemistry journals with those indices and with the Journal Impact Factor and they found very high intercorrelations among all indices. Thus, the authors conclude that all the examined measures could be called redundant for empirical applications.
Costas R, Bordons M (2007) The h-index: advantages, limitations and its relation with other bibliometric indicators at the micro level. Journal of Informetrics 1(3):193-203, doi: 10.1016/j.joi.2007.02.001
The relationship of the h-index with other bibliometric indicators at the micro level is analysed for Spanish CSIC scientists in Natural Resources, using publications downloaded from the Web of Science (1994-2004). Different activity and impact indicators are obtained to describe the research performance of scientists in different dimensions; through factor analysis, the h-index is located in a quantitative dimension highly correlated with the absolute number of publications and citations. The need to include the remaining dimensions in the analysis of the research performance of scientists, and the risks of relying only on the h-index, are stressed. The hypothesis that the achievement of some highly visible but intermediately productive authors might be underestimated when compared with other scientists by means of the h-index is tested. The authors suggest that the h-index tends to underestimate the achievement of scientists with a "selective publication strategy", that is, those who do not publish a high number of documents but who achieve a very important international impact. In addition, a good correlation is found between the h-index and other bibliometric indicators, especially the number of documents and citations received by scientists; that is, the best correlation is found with absolute indicators of quantity. Finally, they note that the widespread use of the h-index in the assessment of scientists' careers might influence their publication behaviour. It could foster productivity instead of promoting quality, and it may increase the presence of least publishable units or salami publications, since the maximum h-index an author can obtain is that of his/her total number of publications.
Schubert A, Glanzel W (2007) A systematic analysis of Hirsch-type indices for journals. Journal of Informetrics 1(3):179-184, doi: 10.1016/j.joi.2006.12.002
In this paper, the authors presented a theoretical model of the dependence of h-related indices on the number of publications and the average citation rate. They successfully tested it against some empirical samples of journal h-indices. Their results demonstrated that it is possible to establish a kind of "similarity transformation" of h-indices between different fields of science. More information about this model compared to others can be found in:
Ye FY (2009) An investigation on mathematical models of the h-index. Scientometrics 81(2):493-498, doi: 10.1007/s11192-008-2169-6
Bar-Ilan J. Rankings of information and library science journals by JIF and by h-type indices. Journal of Informetrics 4 (2) (2010) 141-147, doi:10.1016/j.joi.2009.11.006
In this paper the author computes journal rankings in the Information and Library Science JCR category according to the JIF and according to several h-type indices. Even though the correlations between all the ranked lists are very high, there are considerable individual differences between the rankings, as can be seen by visual inspection, showing that the correlation measure is not sensitive enough. Spearman's footrule and the M-measure are also computed and found to be more sensitive to the differences between the rankings, in the sense that the range of their values is larger than the range of correlation values when comparing the JIF ranking to the rankings induced by the h-type indices.
Leydesdorff L. How are New Citation-Based Journal Indicators Adding to the Bibliometric Toolbox?. Journal of the American Society for Information Science and Technology 60 (7) (2009) 1327-1336, doi:10.1002/asi.21024
The paper studies how some of the new indicators for research assessment add to knowledge about scientific production. In particular, the author studies the h-index, the PageRank indicator and the SCImago Journal Rank indicator.
Liu Y.X., Rousseau R. Properties of Hirsch-type indices: the case of library classification categories. Scientometrics 79 (2) (2009) 235-248, doi:10.1007/s11192-009-0415-1
In this contribution the h-, g- and R-indices are found to be statistically equivalent for rankings of library classification categories. Moreover, the authors found that the discrimination power of these indices is equivalent, as measured by the Gini concentration index.
Schreiber M. Twenty Hirsch index variants and other indicators giving more or less preference to highly cited papers. Annalen der Physik 19 (8) (2010) 536-554, doi:10.1002/andp.201000046
This paper presents an empirical study of 26 physicists and compares different h-index variants (A, e, f, g, h(2), hw, hT, h̄, m, π, R, s, t, w, maxprod...). The author discusses the correlations among the results.
Schreiber M (2007) Self-citation corrections for the Hirsch index. Epl 78(3):30002, doi: 10.1209/0295-5075/78/30002
In this paper, Schreiber studies several anonymized datasets and concludes that self-citations do have a great impact on the h-index, especially in the case of young scientists with a low h-index. Moreover, he proposes three different ways to sharpen the h-index to avoid the self-citation problem. Each proposal has an increasing level of difficulty, as the usual citation databases do not make it easy to differentiate between self-citations and external citations.
Schreiber M (2008) The influence of self-citation corrections on Egghe's g index. Scientometrics 76(1):187-200, doi: 10.1007/s11192-007-1886-6
In a later work, Schreiber again studies how both the h- and g-indices are affected by self-citations, by means of an analysis of nine practical cases in the physics field. He concludes that the g-index is more influenced by self-citations than the h-index and thus proposes to exclude those citations in the computation of the g-index.
Engqvist L, Frommen JG (2008) The h-index and self-citations. Trends in Ecology & Evolution 23(5):250-252, doi: 10.1016/j.tree.2008.01.009
In this case, the authors argue that increasing one's own h-index would require citing many of one's own papers, and that it is difficult to predict which papers should be cited in order to improve the author's h-index. They performed a literature study, selecting 40 authors from the fields of evolutionary biology and ecology, and identified the citation causing each author's most recent increase in h. Next, they identified the first citation appearing thereafter which would have caused the same increase in the author's h. The difference between the publication dates of these two citations gives the time during which the h-index depends on one single citation. This time measure is an estimate of how long selective self-citation of target papers would be effective.
Gianoli E, Molina-Montenegro MA (2009) Insights Into the Relationship Between the h-Index and Self-Citations. Journal of the American Society for Information Science and Technology 60(6):1283-1285, doi: 10.1002/asi.21042
Analyzing the publication output of 119 Chilean ecologists, the authors found strong evidence that self-citations significantly affect the increase of the h-index, especially in the low h-index group, where self-citations have the greatest impact.
Costas R., van Leeuwen T.N., Bordons M. Self-citations at the meso and individual levels: effects of different calculation methods. Scientometrics 82 (3) (2010) 517-537, doi: 10.1007/s11192-010-0187-7
The authors focus on the study of self-citations at the individual level, on the basis of an analysis of the production (1994–2004) of individual researchers working at the Spanish CSIC in the areas of Biology and Biomedicine and Material Sciences. They describe two different types of self-citations: author self-citations (citations received from the author him/herself) and co-author self-citations (citations received from the researcher's co-authors but without his/her participation). They conclude that self-citations do not play a decisive role in the high citation scores of documents either at the individual or at the meso level, which are mainly due to external citations.
Egghe L. Influence of Adding or Deleting Items and Sources on the H-Index. Journal of the American Society for Information Science and Technology 61 (2) (2010) 370-373, doi: 10.1002/asi.21239
Egghe discusses the mathematical influence of adding or deleting items in the computation of the h-index of an author. Thus, he proves how self-citations or minor contributions contribute to the h-index. Moreover, this influence is modelled in the paper.
Engqvist L., Frommen J.G. New Insights Into the Relationship Between the h-Index and Self-Citations?. Journal of the American Society for Information Science and Technology 61 (7) (2010) 1514-1515, doi: 10.1002/asi.21298
MacRoberts M.H., MacRoberts B.R. Problems of citation analysis: A study of uncited and seldom-cited influences. Journal of the American Society for Information Science and Technology 61 (1) (2010) 1-12, doi: 10.1002/asi.21228
The authors examined articles in biogeography and found that most of the influence on a work is not cited, that specific types of influential articles are cited while other types that are also influential are not, and that work that is "uncited" or "seldom cited" is used extensively. As a result, they propose that evaluative citation analysis should take uncited work into account.
Brown R.J.C. A simple method for excluding self-citation from the h-index: the b-index. Online Information Review 33 (6) (2009) 1129-1136, doi: 10.1108/14684520911011043
The author addresses the problem of self-citations inflating h-related indices. To do so he assumes that the relative self-citation rate is constant across an author's publications and that the citation profile of a set of papers follows a Zipfian distribution. It is shown that a value called the b-index can be computed as the integer value of the author's external citation rate (the fraction of non-self-citations) raised to the power three quarters, multiplied by the h-index. This value does not require an extensive analysis of the self-citation rates of individual papers, and it appropriately shows the biggest numerical decreases, compared to the corresponding h-index, for very heavy self-citers; thus the presented method allows the user to assess quickly and simply the effect of self-citation on an author's h-index.
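A minimal sketch of this correction as described above, assuming the external citation rate is expressed as a fraction between 0 and 1:

```python
def b_index(h, external_rate):
    """Brown's b-index: h discounted by the author's external citation
    rate (fraction of citations that are not self-citations) raised to
    the power 3/4, truncated to an integer."""
    return int((external_rate ** 0.75) * h)

print(b_index(20, 0.9))  # mild self-citer  -> 18
print(b_index(20, 0.5))  # heavy self-citer -> 11 (larger decrease)
```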
Woeginger GJ (2008) An axiomatic characterization of the Hirsch-index. Mathematical Social Sciences 56(2):224-232, doi: 10.1016/j.mathsocsci.2008.03.001
In this paper a new axiomatic characterization of the h-index in terms of three natural axioms (concerning the addition of single publications, the addition of new citations to old publications and the joint case of adding new publications and citations) is provided. Some extensions to this work can be found in:
Woeginger GJ (2008) A symmetry axiom for scientific impact indices. Journal of Informetrics 2(4):298-303, doi: 10.1016/j.joi.2008.09.001
Woeginger GJ (2008) Generalizations of Egghe's g-Index. Journal of the American Society for Information Science and Technology 60(6):1267-1273, doi: 10.1002/asi.21061
Rousseau R (2008) Woeginger's axiomatisation of the h-index and its relation to the g-index, the h(2)-index and the R2-index. Journal of Informetrics 2(4):263-372, doi: 10.1016/j.joi.2008.07.001
Torra V, Narukawa Y (2008) The h-index and the number of citations: Two fuzzy integrals. IEEE Transactions on Fuzzy Systems 16(3):795-797, doi: 10.1109/TFUZZ.2007.896327
In this work, the authors have established the connection of the h-index (and the number of citations) with the Choquet and Sugeno integrals. In particular, they show that the h-index is a particular case of the Sugeno integral and that the number of citations corresponds to the Choquet integral (in both cases using the same fuzzy measure). This conclusion allows the authors to envision new indices defined in terms of fuzzy integrals using different types of fuzzy measures (a minimal illustration is sketched after the following reference). This work is extended in:
Narukawa Y, Torra V (2009) Multidimensional generalized fuzzy integral. Fuzzy Sets and Systems 160(6):802-815, doi: 10.1016/j.fss.2008.10.006
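As an illustration of the Sugeno-integral view mentioned above, the h-index of a citation record can be written as a max–min expression over ranks; this sketch uses the counting measure implicitly and is only meant to make the connection tangible:

```python
def h_as_sugeno(citations):
    """h-index in Sugeno-integral (max-min) form:
    h = max over ranks i of min(i, c_i), citations sorted decreasingly."""
    cits = sorted(citations, reverse=True)
    return max(min(i, c) for i, c in enumerate(cits, start=1))

def h_classic(citations):
    """Classic definition, for cross-checking the max-min form."""
    cits = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cits, start=1) if c >= i)

cits = [15, 8, 6, 4, 4, 2, 1]
assert h_as_sugeno(cits) == h_classic(cits)
print(h_as_sugeno(cits))  # -> 4
```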
Liang LM (2006) h-index sequence and h-index matrix: Constructions and applications. Scientometrics 69(1):153-159, doi: 10.1007/s11192-006-0145-6
In this early paper, Liang studied how the h-index changes over time using time series. After his initial work there have been several studies about time series, the h-index and its mathematical properties:
Egghe L (2009) Mathematical study of h-index sequences. Information Processing & Management 45(2):288-297, doi: 10.1016/j.ipm.2008.12.002
Guns R, Rousseau R (2009) Simulating growth of the h-index. Journal of the American Society for Information Science and Technology 60(2):410-417, doi: 10.1002/asi.20973
Liu YX, Rousseau R (2008) Definitions of time series in citation analysis with special attention to the h-index. Journal of Informetrics 2(3):202-210, doi: 10.1016/j.joi.2008.04.003
Rousseau R, Ye FY (2008) A proposal for a dynamic h-type index. Journal of the American Society for Information Science and Technology 59(11):1853-1855, doi: 10.1002/asi.20890
In this work, the authors complemented the previous work to find out if power law models for a specific type of h-index time series fit real data sets. Additional comments of this work can be found in:
Burrell QL (2009) Some Comments on "A Proposal for a Dynamic h-Type Index" by Rousseau and Ye. Journal of the American Society for Information Science and Technology 60(2):418-419, doi: 10.1002/asi.20969
Rousseau R (2007) The influence of missing publications on the Hirsch index. Journal of Informetrics 1(1):2-7, doi: 10.1016/j.joi.2006.05.001
Rousseau has also used a continuous power-law model to show that the influence of missing articles is largest when the total number of publications is small and non-existent when the number of publications is very large (the same conclusion is drawn for missing citations).
Deineko VG, Woeginger GJ (2009) A new family of scientific impact measures: The generalized Kosmulski-indices. Scientometrics 80(3):819-826, doi: 10.1007/s11192-009-2130-0
This article introduces the generalized Kosmulski-indices as a new family of scientific impact measures for ranking the output of scientific researchers. As special cases, this family contains the well-known Hirsch-index h and the Kosmulski-index h(2). The main contribution is an axiomatic characterization of every generalized Kosmulski-index in terms of three axioms.
Egghe's paper on h-index sequences listed above studies the mathematical properties of the sequences previously developed by Liang. The obtained results are confirmed for the h-, g- and R-sequences (forward and reverse time) of an author.
Quesada A (2009) Monotonicity and the Hirsch index. Journal of Informetrics 3(2):158-160, doi: 10.1016/j.joi.2009.01.002
In this short contribution, the Hirsch index is characterized, when indices are allowed to be real-valued, by adding to Woeginger's monotonicity two axioms in a way related to the concept of monotonicity.
Zhang C.T. Relationship of the h-index, g-index, and e-index. Journal of the American Society for Information Science and Technology 61 (3) (2010) 625-628, doi:10.1002/asi.21274
The author establishes a relationship among the h-, g- and e-indices when the citations of a scientist are ranked by a power law. In fact, he shows how the g-index can be computed from the h- and e-indices and the power parameter. The relationship of the h-, g- and e-indices shows that the g-index contains the citation information from the h-index, the e-index, and some papers beyond the h-core.
Beirlant J., Einmahl J.H.J. Asymptotics for the Hirsch Index. Scandinavian Journal of Statistics 37 (3) (2010) 355-364, doi:10.1111/j.1467-9469.2010.00694.x
In this paper, the authors establish the asymptotic normality of the empirical h-index. The rate of convergence is non-standard: $\sqrt{h}/(1+nf(h))$, where f is the density of the citation distribution and n is the number of publications of a researcher.
Franceschini F., Maisano D. Analysis of the Hirsch index's operational properties. European Journal of Operational Research 203 (2) (2010) 494-504, doi:10.1016/j.ejor.2009.08.001
The authors provide a detailed analysis of the h-index from the point of view of the indicator's operational properties. It can be helpful to better understand the peculiarities and limits of h and to avoid its misuse.
Henzinger M., Sunol J., Weber I. The stability of the h-index. Scientometrics 84 (2) (2010) 465-479, doi:10.1007/s11192-009-0098-7
The authors investigate whether ranking according to the h-index is stable with respect to (i) different choices of citation databases, (ii) normalizing citation counts by the number of authors or removing self-citations, (iii) small amounts of noise created by randomly removing citations or publications, and (iv) small changes in the definition of the index. They show that although the ranking of the h-index is stable under most of these changes, it is unstable when different databases are used. Therefore, comparisons based on the h-index should only be trusted when the rankings of multiple citation databases agree.
Quesada A. More axiomatics for the Hirsch index. Scientometrics 82 (2) (2010) 413-418, doi:10.1007/s11192-009-0026-x
This contribution by Quesada suggests three characterizations of the Hirsch index without adopting the monotonicity axiom.
Ye F.Y. An investigation on mathematical models of the h-index. Scientometrics 81 (2) (2009) 493-498, doi:10.1007/s11192-008-2169-6
Based on two large data samples from ISI databases, the author evaluated the Hirsch model, the Egghe-Rousseau model, and the Glänzel-Schubert model of the h-index. The results support the Glänzel-Schubert model as a better estimation of the h-index at both journal and institution levels.
Egghe L (2008) Examples of simple transformations of the h-index: Qualitative and quantitative conclusions and consequences for other indices. Journal of Informetrics 2(2):136-148, doi: 10.1016/j.joi.2007.12.003
Egghe L (2008) The Influence of transformations on the h-index and the g-index. Journal of the American Society for Information Science and Technology 59(8):1304-1312, doi: 10.1002/asi.20823
In these works, a comparative study is made of how the h-index, the g-index, the R-index and the hw-index are affected by simple transformations such as doubling the production per source, doubling the number of sources, doubling the number of sources but halving their production, halving the number of sources but doubling their production (fusion of sources), and some special cases of general power-law transformations. The author demonstrated that these kinds of transformations affect all the h-related indices he studied in a similar way.
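A toy sketch of two of these transformations applied to a finite citation record (purely illustrative; Egghe's papers work with continuous models rather than finite lists):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cits = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cits, start=1) if c >= i)

cits = [25, 14, 9, 7, 5, 4, 2, 1]
print(h_index(cits))                   # original record        -> 5
print(h_index([2 * c for c in cits]))  # doubling items/source  -> 6
print(h_index(cits + cits))            # doubling the sources   -> 7
```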
Egghe L (2008) The influence of merging on h-type indices. Journal of Informetrics 2(3):252-262, doi: 10.1016/j.joi.2008.06.002
In this case, the importance of merging h-type indices for different information production processes is studied. Concretely, the author studies two types of mergings of information production processes for the h-, g-, R- and hw-indices: one where common sources add their numbers of items, and one where common sources get the maximum of their numbers of items in the two information production processes.
The next link presents a page describing how to compute the h-index using different databases such as WoS, Scopus and Google Scholar.
Until just a few years ago, when citation information was needed, the single most comprehensive source was the ISI Citation Indexes. Although the Citation Indexes were often criticized for various reasons, there was no other source to rely on. Data from the ISI Citation Indexes and the Journal Citation Reports are routinely used by promotion committees at universities all over the world. Here we refer to the Web version of the Citation Indexes, i.e., to the Web of Science (WoS). Recently, two alternatives to the ISI Citation Indexes have become available: Scopus, developed by Elsevier, and the freely available Google Scholar. Each of these has a different collection policy, which affects both the publications covered and the number of citations to the publications (Bar-Ilan J (2008) Which h-index? - A comparison of WoS, Scopus and Google Scholar. Scientometrics 74(2):257-271, doi: 10.1007/s11192-008-0216-y ). In the literature there exist different studies analyzing them:
Bar-Ilan J (2008) Which h-index? - A comparison of WoS, Scopus and Google Scholar. Scientometrics 74(2):257-271, doi: 10.1007/s11192-008-0216-y
This paper compares the h-indices of a list of highly-cited Israeli researchers based on citation counts retrieved from the Web of Science, Scopus and Google Scholar, respectively. In several cases the results obtained through Google Scholar are considerably different from the results based on the Web of Science and Scopus. Data cleansing is discussed extensively.
Jacso P (2008) The plausibility of computing the h-index of scholarly productivity and impact using reference-enhanced databases. Online Information Review 32(2):266-283, doi: 10.1108/14684520810879872
This paper aims to provide a general overview, to be followed by a series of papers focusing on the analysis of pros and cons of the three largest, cited-reference-enhanced, multidisciplinary databases (Google Scholar, Scopus, and Web of Science) for determining the h-index. In addition, the practical aspects of determining the h-index also need scrutiny, because some content and software characteristics of reference-enhanced databases can strongly influence the h-index values.
Jacso P (2008) The pros and cons of computing the h-index using Google Scholar. Online Information Review 32(3):437-452, doi: 10.1108/14684520810889718
The aim of this paper is to focus on Google Scholar (GS), from the perspective of calculating the h-index for individuals and journals. The paper shows that effective corroboration of the h-index and its two component indicators can be done only on persons and journals with which a researcher is intimately familiar. Corroborative tests must be done in every database for important research. Furthermore, the paper highlights the very time-consuming process of corroborating data, tracing and counting valid citations and points out GS's unscholarly and irresponsible handling of data.
Jacso P (2008) Testing the calculation of a realistic h-index in Google Scholar, Scopus, and Web of Science for F. W. Lancaster. Library Trends 56(4):784-815.
This paper focuses on the practical limitations in the content and software of the databases that are used to calculate the h-index for assessing the publishing productivity and impact of researchers. To celebrate F.W. Lancaster's biological age of seventy-five, and "scientific age" of forty-five, this paper discusses the related features of Google Scholar, Scopus, and Web of Science (WoS), and demonstrates in the latter how a much more realistic and fair h-index can be computed for F.W. Lancaster than the one produced automatically. Browsing and searching the cited reference index of the 1945-2007 edition of WoS, which in his estimate has over a hundred million "orphan references" that have no counterpart master records to be attached to, and "stray references" that cite papers which do have master records but cannot be identified by the matching algorithm because of errors of omission and commission in the references of the citing works, can bring up hundreds of additional cited references given to works of an accomplished author but are ignored in the automatic process of calculating the h-index. The partially manual process doubled the h-index value for F.W. Lancaster from 13 to 26, which is a much more realistic value for an information scientist and professor of his stature.
Meho LI, Rogers Y (2008) Citation counting, citation ranking, and h-index of human-computer interaction researchers: A comparison of Scopus and Web of Science. Journal of the American Society for Information Science and Technology 59(11):1711-1726, doi: 10.1002/asi.20874
This study examines the differences between Scopus and Web of Science in the citation counting, citation ranking, and h-index of 22 top human-computer interaction (HCI) researchers from EQUATOR, a large British Interdisciplinary Research Collaboration project. Results indicate that Scopus provides significantly more coverage of HCI literature than Web of Science, primarily due to coverage of relevant ACM and IEEE peer-reviewed conference proceedings. No significant differences exist between the two databases if citations in journals only are compared. Although broader coverage of the literature does not significantly alter the relative citation ranking of individual researchers, Scopus helps distinguish between the researchers in a more nuanced fashion than Web of Science in both citation counting and h-index. Scopus also generates significantly different maps of citation networks of individual scholars than those generated by Web of Science. The study also presents a comparison of h-index scores based on Google Scholar with those based on the union of Scopus and Web of Science. The study concludes that Scopus can be used as a sole data source for citation-based research and evaluation in HCI, especially when citations in conference proceedings are sought, and that researchers should manually calculate h scores instead of relying on system calculations.
Meho LI, Yang K (2007) Impact of data sources on citation counts and rankings of LIS faculty: Web of science versus scopus and google scholar. Journal of the American Society for Information Science and Technology 58(13):2105-2125, doi: 10.1002/asi.20677
The Institute for Scientific Information's (ISI, now Thomson Scientific, Philadelphia, PA) citation databases have been used for decades as a starting point and often as the only tools for locating citations and/or conducting citation analyses. The ISI databases (or Web of Science [WoS]), however, may no longer be sufficient because new databases and tools that allow citation searching are now available. Using citations to the work of 25 library and information science (LIS) faculty members as a case study, the authors examine the effects of using Scopus and Google Scholar (GS) on the citation counts and rankings of scholars as measured by WoS. Overall, more than 10,000 citing and purportedly citing documents were examined. Results show that Scopus significantly alters the relative ranking of those scholars that appear in the middle of the rankings and that GS stands out in its coverage of conference proceedings as well as international, non-English language journals. The use of Scopus and GS, in addition to WoS, helps reveal a more accurate and comprehensive picture of the scholarly impact of authors. The WoS data took about 100 hours of collecting and processing time, Scopus consumed 200 hours, and GS a grueling 3,000 hours.
Bornmann L, Marx W, Schier H, Rahm E, Thor A, Daniel HD (2009) Convergent validity of bibliometric Google Scholar data in the field of chemistry-Citation counts for papers that were accepted by Angewandte Chemie International Edition or rejected but published elsewhere, using Google Scholar, Science Citation Index, Scopus, and Chemical Abstracts. Journal of Informetrics 3(1):27-35, doi: 10.1016/j.joi.2008.11.001
The authors compare the citations obtained from Google Scholar with the citations obtained from three fee-based databases (Science Citation Index, Scopus and Chemical Abstracts). The analyses using citations returned by the three fee-based databases show very similar results. On the other hand, the results of the analysis using GS citation data differ greatly from the findings based on the fee-based databases. The study therefore supports, on the one hand, the convergent validity of citation analyses based on data from the fee-based databases and, on the other hand, the lack of convergent validity of the citation analysis based on the GS data.
Armbruster C. Whose metrics? Citation, usage and access metrics as scholarly information service. Learned Publishing 23 (1) (2010) 33-38, doi: 10.1087/20100107
The author analyzes what kinds of metrics-based information services would serve scholars, with a view to constructing databases that deliver services of value to them.
Bar-Ilan J. Web of Science with the Conference Proceedings Citation Indexes: the case of computer science. Scientometrics 83 (3) (2010) 809-824, doi: 10.1007/s11192-009-0145-4
The author discusses how the inclusion of the Conference Proceedings Citation Indexes for Science and for the Social Sciences and Humanities in the ISI Web of Science influences the citation-based indices of highly cited scientists. As Computer Science is a field where proceedings are a major publication venue, it is shown that the most cited publications are journal papers but a large number of citations come from proceedings papers, thus increasing citation-based indices. In addition, the author discusses how some publications may be double-counted when a work is published in both a conference and a journal.
Derrick G.E., Sturk H., Haynes A.S., Chapman S., Hall W.D. A cautionary bibliometric tale of two cities. Scientometrics 84 (2) (2010) 317-320, doi: 10.1007/s11192-009-0118-7
The authors address the problem of different subscription policies to databases, particularly to Web of Science and Web of Knowledge. In fact, the authors compare simultaneous search returns at two sites to demonstrate discrepancies that can occur as a result of differences in institutional subscriptions to those databases. Such discrepancies may have significant implications for the reliability of bibliometric research in general, but also for the calculation of individual and group indices used for promotion and funding decisions.
Franceschet M. A comparison of bibliometric indicators for computer science scholars and journals on Web of Science and Google Scholar. Scientometrics 83 (1) (2010) 243-258, doi: 10.1007/s11192-009-0021-2
In this contribution, a case study of computer science scholars and journals evaluated on the Web of Science and Google Scholar databases is provided. The study concludes that Google Scholar computes significantly higher indicator scores than Web of Science. Nevertheless, citation-based rankings of both scholars and journals do not significantly change when compiled on the two data sources, while rankings based on the h-index show a moderate degree of variation.
Mikki S. Comparing Google Scholar and ISI Web of Science for Earth Sciences. Scientometrics 82 (2) (2010) 321-331, doi: 10.1007/s11192-009-0038-6
The author compares search results from Google Scholar with ISI WoS in order to measure the degree to which the former can compete with bibliographical databases. For earth science literature, 85% of documents indexed by ISI WoS were recalled by Google Scholar. The ranks of records displayed in Google Scholar and ISI WoS are compared by means of Spearman's footrule. For impact measures, the h-index is investigated. The similarity of the measures was significant for the two sources.
Mingers J., Lipitakis E.A.E.C.G. Counting the citations: a comparison of Web of Science and Google Scholar in the field of business and management. Scientometrics 85 (2) (2010) 613-625, doi: 10.1007/s11192-010-0270-0
Due to its less extensive coverage of the social sciences, ISI Web of Science is compared with Google Scholar for two datasets of 4,600 publications from three UK business schools. The results show that Web of Science is indeed poor in the area of management and that Google Scholar, whilst somewhat unreliable, has a much better coverage. The results suggest that Web of Science should not be used for measuring research impact in management.
Kousha K., Thelwall M., Rezaie S. Using the Web for research evaluation: The Integrated Online Impact indicator. Journal of Informetrics 4 (1) (2009) 124-135, doi: 10.1016/j.joi.2009.10.003
This paper presents a combined Integrated Online Impact (IOI) indicator. It is based on five online sources of citation data: Google Scholar, Google Books, Google Blogs, PowerPoint presentations and course reading lists. The results for a set of research articles published in the Journal of the American Society for Information Science & Technology (JASIST) and Scientometrics in 2003 are compared with the citation counts obtained from WoS and Scopus. The results show that the mean and median IOI were nearly twice as high as in both WoS and Scopus, confirming that online citations are sufficiently numerous to be useful for the impact assessment of research. The authors also found significant correlations between conventional and online impact indicators, confirming that both assess something similar in scholarly communication.
h- and related indices have been used to evaluate not only individual researchers but also higher-level research institutions. In addition to the specific indices for the evaluation of scientific production at different levels, here we present some studies that use the h-index and derived ones to assess the scientific production of groups of individuals, institutions and journals.
In the paper by Bornmann, Marx and Schier discussed above, a comparison of some of the well-known h-index variants is made. The aim of this analysis is to determine empirically the extent to which the usage of the h-index and its variants for measuring the performance of journals adds anything beyond the Journal Impact Factor. In particular, the authors used 20 organic chemistry journals for the study. This idea is also discussed in:
Bornmann L, Daniel HD (2009) The state of h index research. Is the h index the ideal way to measure research performance?. EMBO Reports 10(1):2-6, doi: 10.1038/embor.2008.233
Liu YX, Rao IKR, Rousseau R (2009) Empirical series of journal h-indices: The JCR category Horticulture as a case study. Scientometrics 80(1):59-74, doi: 10.1007/s11192-007-2026-z
In this work, two types of series of h-indices for journals published in the field of Horticulture during the period 1998–2007 are calculated. The authors show that the journals (in Horticulture) do not exhibit a linear increase in the h-index, as argued by Hirsch in the case of life-time achievements of scientists. They also studied the relative visibility of a journal and its change over time, based on the h-indices of journals.
Arencibia-Jorge R, Barrios-Almaguer I, Fernandez-Hernandez S, Carvajal-Espino R (2008) Applying successive H indices in the institutional evaluation: A case study. Journal of the American Society for Information Science and Technology 59(1):155-157, doi: 10.1002/asi.20729
Following a previous idea by Schubert, the authors apply successive h-indices to perform a case study of the scientific production of different institutions, using a researcher-department-institution hierarchy as the levels of aggregation. They enrich their study by using additional h-related indices as complementary indicators. That idea is further developed in:
Arencibia-Jorge R (2009) New indicators of institutional scientific performance based on citation analysis: the successive H indices. Revista Española de Documentacion Cientifica 32(3):101-106, doi: 10.3989/redc.2009.3.692
Arencibia-Jorge R, Rousseau R (2009) Influence of individual researchers' visibility on institutional impact: an example of Prathap's approach to successive h-indices. Scientometrics 79(3):507-516, doi: 10.1007/s11192-007-2025-0
Rodriguez-Navarro A (2009) Sound Research, Unimportant Discoveries: Research, Universities, and Formal Evaluation of Research in Spain. Journal of the American Society for Information Science and Technology 60(9):1845-1858, doi: 10.1002/asi.21104
In this work the author tries to relate the growth of the production of Spanish researchers to the growth of h-related indices, concluding that the two measures do not grow at the same speed.
Riikonen P, Vihinen M (2008) National research contributions: A case study on Finnish biomedical research. Scientometrics. 77(2):207-222, doi:10.1007/s11192-007-1962-y
Riikonen and Vihinen suggest that analyses of the scientific contribution of persons, disciplines, or nations should be based on actual publication and citation counts rather than on derived information like impact factors.
Egghe L, Liang LM, Rousseau R (2009) A Relation Between h-Index and Impact Factor in the Power-Law Model. Journal of the American Society for Information Science and Technology 60(11):2362-2365, doi:10.1002/asi.21144
Using a power-law model, the two best-known topics in citation analysis, namely the impact factor and the Hirsch index, are unified into one relation (not a function). The authors validate their model (in a qualitative way) using real data.
Jacso P (2009) The h-index for countries in Web of Science and Scopus. Online Information Review 33(4):831-837, doi:10.1108/14684520910985756
In this paper the h-index is used to rank 10 Ibero-American countries of South America using data from both Web of Science and Scopus. Although the data obtained from the two databases are quite different, the resulting rank correlations are very high.
Schubert A, Korn A, Telcs A (2009) Hirsch-type indices for characterizing networks. Scientometrics 78(2):375-382, doi:10.1007/s11192-008-2218-1
In this contribution, the role of h-type indices in characterizing networks and network elements is studied. The authors suggest that these kinds of indices are useful not only for bibliometric purposes but in an almost unlimited range of assessment applications.
Sypsa V, Hatzakis A (2009) Assessing the impact of biomedical research in academic institutions of disparate sizes. BMC Medical Research Methodology 9:33, doi:10.1186/1471-2288-9-33
The authors propose a new complementary measure to the h-index when comparing the research output of institutions of disparate sizes. The measure has a conceptual interpretation and, with the data provided in the paper, can be computed for the total research output as well as for field-specific publication sets of institutions in biomedicine.
Moussa S, Touzani M (2010) Ranking marketing journals using the Google Scholar-based hg-index. Journal of Informetrics 4(1):107-117, doi:10.1016/j.joi.2009.10.001
This paper provides a ranking of 69 marketing journals using a new Hirsch-type index, the hg-index, which is the geometric mean of the h- and g-indices. The applicability of this index is tested on data retrieved from Google Scholar on marketing journal articles published between 2003 and 2007. The authors investigate the relationship between the hg-ranking, the ranking implied by Thomson Reuters' Journal Impact Factor for 2008, and rankings in previous citation-based studies of marketing journals. They also test two models of consumption of marketing journals that take into account measures of citing (based on the hg-index), prestige, and reading preference.
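A one-line sketch of the hg-index as the geometric mean of h and g (reusing the h- and g-index definitions given earlier; the input values are invented):

```python
def hg_index(h, g):
    """hg-index: geometric mean of the h- and g-indices."""
    return (h * g) ** 0.5

print(round(hg_index(5, 6), 2))  # e.g. h = 5, g = 6 -> hg ~ 5.48
```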
A growing number of empirical studies using the h- and related indices can be found in the literature. In the following we list some of those studies to exemplify the broad range of areas in which these indices are gaining traction:
Economic Geography
Bodman A.R. Measuring the influentialness of economic geographers during the 'great half century': an approach using the h index. Journal of Economic Geography 10 (1) (2010) 141-156, doi:10.1093/jeg/lbp061
Guan J.C., Wang G.B. A comparative study of research performance in nanotechnology for China's inventor-authors and their non-inventing peers. Scientometrics 82 (2) (2010) 331-343, doi:10.1007/s11192-009-0140-9
Haddow G., Genoni P. Citation analysis and peer ranking of Australian social science journals. Scientometrics 85 (2) (2010) 471-487, doi:10.1007/s11192-010-0198-4
Haslam N., Laham S.M. Quality, quantity, and impact in academic publication. European Journal of Social Psychology 40 (2) (2010) 216-220, doi:10.1002/ejsp.727
Hunt G.E., Cleary M., Walter G. Psychiatry and the Hirsch h-index: The Relationship Between Journal Impact Factors and Accrued Citations. Harvard Review of Psychiatry 18 (4) (2010) 207-219, doi:10.3109/10673229.2010.493742
Lee J., Kraus K.L., Couldwell W.T. Use of the h index in neurosurgery Clinical article. Journal of Neurosurgery 111 (2) (2009) 387-392, doi:10.3171/2008.10.JNS08978
Liu Y.X., Rao I.K.R. Empirical series of journal h-indices: The JCR category Horticulture as a case study. Scientometrics 80 (1) (2009) 59-74, doi:10.1007/s11192-007-2026-z
Rad A.E., Brinjikji W., Cloft H.J., Kallmes D.F. The H-Index in Academic Radiology. Academic Radiology 17 (7) (2010) 817-821, doi:10.1016/j.acra.2010.03.011
Manufacturing and Quality Engineering
Franceschini F., Maisano D. The Hirsch Index in Manufacturing and Quality Engineering. Quality and Reliability Engineering International 25 (8) (2009) 987-995, doi:10.1002/qre.1016
Sierra-Flores M.M., Guzman M.V., Raga A.C., Perez I. The productivity of Mexican astronomers in the field of outflows from young stars. Scientometrics 81 (3) (2009) 765-777, doi:10.1007/s11192-008-2264-8
Ronald Rousseau's Homepage: articles related to the h-index and h-type indices.
Journal of Informetrics. Special Issue: The Hirsch Index. Volume 1, Issue 3, pages 179-213 (5 papers), July 2007.
Wikipedia: h-index.
A MATLAB script to compute the h-index.
Publish or Perish calculates various statistics, including the h-index and the g-index using Google Scholar data.
H index for Journals and Countries.
Scientometrics.
Journal of Informetrics.
Journal of the American Society for Information Science and Technology (JASIST).
El Profesional de la Información.
We have compiled a bibliography of journal papers on the h-index and related areas. It is maintained by F.J. Cabrerizo.
If you would like to include or correct any of the references on this page, please contact the maintainer at his e-mail address: [email protected]
Chronic diseases and productivity loss among middle-aged and elderly in India
Shamrin Akhtar, Sanjay K. Mohanty, Rajeev Ranjan Singh & Soumendu Sen
Chronic diseases are growing in India and largely affect the middle-aged and elderly population, many of whom are of working age. Though a large number of studies have estimated the out-of-pocket payments and financial catastrophe due to these conditions, there are no nationally representative studies on productivity loss due to health problems. This paper examines the pattern and prevalence of productivity loss due to chronic diseases among the middle-aged and elderly in India.
We used data on 72,250 respondents from the first wave of the Longitudinal Ageing Study in India (LASI), conducted in 2017-18. We used two dependent variables: limiting paid work and ever having stopped work due to ill health. We estimated the age-sex adjusted prevalence of ever having stopped work due to ill health and of limiting paid work across MPCE quintiles and socio-demographic characteristics. Propensity Score Matching (PSM) and logistic regression were used to examine the effect of chronic diseases on both variables.
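A minimal sketch of the kind of modelling described here, run on synthetic data; all variable names, codings and the model specification are illustrative assumptions, not the paper's actual ones:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic stand-in for a LASI-style analysis file
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(45, 80, n),
    "sex": rng.choice(["male", "female"], n),
    "residence": rng.choice(["rural", "urban"], n),
    "n_chronic": rng.poisson(1.0, n),
})
# outcome loosely increasing with age and chronic-disease count
lp = -5 + 0.04 * df["age"] + 0.5 * df["n_chronic"]
df["ever_stopped_work"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

# logistic regression for one of the two outcomes
model = smf.logit(
    "ever_stopped_work ~ n_chronic + age + C(sex) + C(residence)", data=df
).fit()
print(model.summary())

# first step of PSM: a propensity model for the exposure
df["any_chronic"] = (df["n_chronic"] > 0).astype(int)
ps = smf.logit("any_chronic ~ age + C(sex) + C(residence)", data=df).fit()
df["pscore"] = ps.predict(df)  # nearest-neighbour matching would follow
```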
We estimated that among middle-aged adults aged 45–64 years, 3,213 individuals, accounting for 6.9% (95% CI: 6.46–7.24), had ever stopped work and 6,300 individuals, accounting for 22.7% (95% CI: 21.49–23.95), had limiting paid work in India. The proportion of individuals who ever stopped or limited work due to health problems increased significantly with age and the number of chronic diseases. Limiting paid work was higher among females (25.1%) and in urban areas (24%), whereas ever having stopped work was lower among females (5.7%; 95% CI: 5.16–6.25) and in urban areas (4.9%; 95% CI: 4.20–5.69). The study also found that stroke (21.1%) and neurological or psychiatric problems (18%) were significantly associated with both ever stopping work and limiting paid work. The PSM model shows that those with chronic diseases are 4% and 11% more likely to stop and to limit their work, respectively. The regression model reveals that having more than one chronic condition had a consistent and significant positive impact on stopping work for over a year (increasing productivity loss) across all three models.
Individuals with any chronic disease have a higher likelihood of ever stopping work and of limiting paid work. Promoting awareness, screening, and treatment at the workplace is recommended to reduce the adverse consequences of chronic disease in India.
Ill-health, work, and productivity are interrelated. Prolonged ill-health due to chronic diseases carries a higher chance of premature mortality [1], increases the chance of disability [2], raises the use of medical services, and exerts a greater economic burden on households and the nation. At the household level, the economic burden can be both direct and indirect [3]. High out-of-pocket spending, catastrophic health spending, and impoverishment are direct consequences of increasing chronic diseases [4]. The indirect burden of chronic diseases includes work absenteeism, voluntary retirement from work [5], and a reduced propensity to work [6]. The cascading effect of ill-health reduces individual income [7], may lead to poor physical and mental health [8], and may result in a gradual loss of productivity and welfare.
Productivity loss reduces the income and well-being of individuals and households. Ill-health often reduces work participation, as it affects the prime working-age group. Productive time forgone due to ill-health costs both the household and the nation. Productivity loss is measured using multiple indicators: work absenteeism, presenteeism, permanent withdrawal from the workforce, and job interruption [9]. While work absenteeism refers to absence due to illness, presenteeism is low work performance during sickness [10]. Permanent withdrawal from the workforce includes voluntary retirement due to impairment or other health problems. Work-related injuries or accidents also add to productivity loss [11].
Most of the studies on the consequences of chronic diseases on work productivity were carried out in developed countries [12,13,14]. People with poor health are more likely to spend considerable time seeking healthcare, which may lead to work absenteeism [15]. Among respondents in Germany who experienced symptoms related to health conditions, the average number of workdays lost due to absenteeism and presenteeism was 27 days per respondent annually [16]. Results from a study in Australia show that full-time workers with mental disorders lost an average of one day to absenteeism and three days to presenteeism over a one-month reference period [17]. In the USA, absenteeism costs US$1,685 per employee per year, and about 71% of the total productivity loss was attributable to reduced performance at work [18]. Asthma, cancer, heart disease, and respiratory disorders were each estimated to have presenteeism costs of more than US$200 per person annually in the USA [19]. Presenteeism represents the largest component and leading driver of medical costs, specifically among patients with migraine/headache, allergies, and arthritis [20]. Depression ranked third among health conditions, with an annual productivity loss of US$878 per person [21]. A higher number of health risks is associated with lower on-the-job productivity [22]. Adults with multiple chronic diseases have a higher chance of reduced productivity [23]. In India, nearly a quarter of companies lose approximately 14% of total working days annually due to sickness [24].
Older adults in India are vulnerable to chronic diseases, which may affect their work temporarily or permanently [25]. The country has achieved the replacement level of fertility and is nearing completion of the demographic transition, resulting in an increasing share of older adults and elderly and an increasing burden of non-communicable disease (NCD). The share of the middle-aged and elderly population (45+) increased from 18.9% in 2001 to 25.1% by 2020 [26]. The median age of onset of NCDs also declined, from 57 years in 2004 to 53 years by 2018 [27]. Though a large number of studies have estimated OOP and catastrophic health spending and the socio-economic inequality and determinants of OOPS and CHE [28], there are no nationally representative studies on productivity loss due to health problems. The present study explores the pattern and prevalence of limiting paid work and productivity loss among the middle-aged and elderly in India and their association with chronic diseases. Figure 1 presents a schematic representation of productivity loss. It depicts the pathways through which the economic burden of ill-health leads to loss of income and welfare via medical and non-medical components. The non-medical component includes absenteeism, presenteeism, and job interruption.
A framework on economic burden of ill-health
Data and methods
The study utilizes data from the first wave of the Longitudinal Ageing Study in India (LASI), collected from April 2017 to December 2018. The survey was conducted by the International Institute for Population Sciences (IIPS) in collaboration with the Harvard T.H. Chan School of Public Health (HSPH), the University of Southern California (USC), and other national institutions. Using a multistage sampling method, a total of 42,949 households and 72,250 individuals aged 45 years and older and their spouses were successfully interviewed. Among these individuals, 3,213 had ever stopped working for a year or more due to a health problem and 6,300 had limiting paid work. The data were publicly available for all states except Sikkim at the time of drafting this paper. The household and individual response rates were 95.8% and 87.3%, respectively. Detailed information about the survey and its findings is available in the national report [29].
Outcome variables
The LASI survey collected a detailed module on ever work, current work, stopped work, and limiting paid work due to health issues. The question on stopped work begins with "have you ever stopped working for one year or more at a time due to reasons of family, health, education, economic recession, natural disasters, etc.?" and the question on limiting work reads "Do you have any impairment or health problem that limits the kind or amount of paid work you can do?". We used ever stopped work (1 = yes, 0 = no) for one year or more due to a health problem and whether a health problem limited paid work (1 = yes, 0 = no) as the two outcome variables.
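As an illustration, the two binary outcomes could be coded from the survey responses as sketched below; the column names are hypothetical placeholders, not the actual LASI variable codes.

```python
import pandas as pd

# Minimal sketch: derive the two binary outcomes from survey responses.
# All column names here are hypothetical placeholders, not LASI codes.
df = pd.DataFrame({
    "ever_stopped_work": ["yes", "no", "yes", "no"],
    "stopped_work_reason": ["health", None, "childcare", None],
    "health_limits_paid_work": ["yes", "no", "no", "yes"],
})

# Outcome 1: ever stopped work for one year or more due to a health problem.
df["stopped_health"] = ((df["ever_stopped_work"] == "yes")
                        & (df["stopped_work_reason"] == "health")).astype(int)

# Outcome 2: a health problem limits the kind or amount of paid work.
df["limiting_work"] = (df["health_limits_paid_work"] == "yes").astype(int)

print(df[["stopped_health", "limiting_work"]])
```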
Covariates
We used a set of demographic, economic, behavioural, and health covariates in the analyses. These include age (45–54, 55–64, 65–74, 75+), sex (male/female), educational attainment (illiterate, less than 5 years, 5–9 years completed, 10 years or more), monthly per capita expenditure (MPCE) quintile, place of residence (rural/urban), caste (scheduled caste, scheduled tribe, other backward classes, others), religion (Hindu, Muslim, Christian, others), marital status (currently married, widowed, others), and region (north, central, east, northeast, west, south). The MPCE quintile was used to depict the living standard of the household. In addition, the number of chronic diseases (hypertension, diabetes, chronic lung disease, chronic heart disease, stroke, arthritis, neurological or psychiatric problems), health insurance coverage (yes/no), practicing exercise (yes/rarely/never), and smoking tobacco (yes/no) were included to examine their association with limiting paid work or ever stopping work for one year or more among older adults.
Treatment variable for PSM
In LASI, respondents were asked if they had been diagnosed with chronic diseases such as hypertension, diabetes, cancer, chronic lung disease, chronic heart disease, stroke, arthritis, and neurological problems. Individuals who reported being diagnosed with any chronic disease (1 = yes, 0 = no) were considered the treatment group, and those who reported none of the chronic diseases were treated as the control group. The treatment and control groups did not overlap, as they were mutually exclusive.
Descriptive statistics, age-sex adjusted estimates, propensity score matching, and logistic regression models were used in the analysis.
Prevalence of ever stopped work and limiting paid work
We estimated the age-sex adjusted prevalence of ever stopped work and limiting paid work using logistic regression, taking the age-sex composition of the nationally representative full sample as the reference.
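One way to implement such adjustment is logistic regression followed by marginal standardization over the full sample's age-sex composition. The sketch below illustrates the idea on simulated data; the variable names are hypothetical and this is not the authors' actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
# Simulated stand-in for the analysis file; column names are hypothetical.
df = pd.DataFrame({
    "stopped_health": rng.integers(0, 2, n),
    "age_grp": rng.choice(["45-54", "55-64", "65-74", "75+"], n),
    "sex": rng.choice(["male", "female"], n),
    "chronic": rng.integers(0, 2, n),  # 1 = any chronic disease
})

# Logistic model of the outcome on group status plus age and sex.
model = smf.logit("stopped_health ~ chronic + C(age_grp) + C(sex)",
                  data=df).fit(disp=0)

# Marginal standardization: predict for the full sample as if everyone
# were in each group, so both estimates share the reference age-sex mix.
for grp in (0, 1):
    adj = model.predict(df.assign(chronic=grp)).mean()
    print(f"chronic={grp}: age-sex adjusted prevalence = {adj:.3f}")
```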
Propensity score matching analysis
Propensity score matching (PSM) addresses potential selectivity in the sample. PSM is a statistical technique that estimates the effect of an intervention or treatment by adjusting for covariates that predict receipt of the treatment [30]. The advantage of the PSM model is that it compares the treated and control groups on the basis of similar observed characteristics [31, 32]. PSM has been used to evaluate various programmes in a number of research studies [31,32,33,34]. To determine the average treatment effect (i.e., the effect of having any chronic disease), a counterfactual model is estimated.
The propensity score, the probability that a middle-aged or elderly individual with characteristics X has a chronic disease, may be written as
$$P(X) = \Pr(D = 1 \mid X)$$
where D = 1 if the individual has any chronic disease and D = 0 otherwise, and X is the vector of all covariates used in the model.
Generally, the PSM model estimates three quantities: the Average Treatment effect on the Treated (ATT), the Average Treatment effect on the Untreated (ATU), and the Average Treatment Effect (ATE).
The ATE is the average effect of the treatment variable on the outcome variable and can be expressed by the following equation:
$$\mathrm{ATE} = E(\delta) = E(Y_1 - Y_0)$$
where E(·) denotes the average, Y1 represents the potential outcome for those having any chronic disease, and Y0 represents the potential outcome for those having no chronic disease.
With the help of the counterfactual model, the ATT can be written as
$$\mathrm{ATT} = E(Y_1 \mid D = 1) - E(Y_0 \mid D = 1)$$
The counterfactual is the potential outcome that would have been observed had treatment status been reversed. Here, E(Y1 | D = 1) is the expected outcome (stopping work) for individuals with any chronic disease, and E(Y0 | D = 1) is the expected outcome for those same individuals had they not had any chronic disease.
Similarly, the average treatment effect on the untreated (ATU) is defined as:
$$\mathrm{ATU} = E(Y_1 \mid D = 0) - E(Y_0 \mid D = 0)$$
where E(Y1 | D = 0) is the counterfactual expected outcome if individuals without any chronic disease were to have one, and E(Y0 | D = 0) is the expected outcome actually observed for individuals without any chronic disease.
The average treatment effect (ATE) is the difference between the expected potential outcomes with and without any chronic disease, averaged over the whole population.
We used the psmatch2 command in Stata 16, which provides all of these estimates using the Mahalanobis matching technique.
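For readers who do not use Stata, the core of this procedure can be approximated as below: one-to-one nearest-neighbour matching on the Mahalanobis distance of the covariates, followed by ATT, ATU, and ATE estimates. This is a simplified sketch on simulated data, not a replication of psmatch2's exact algorithm.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 3))                  # matching covariates
D = rng.integers(0, 2, n)                    # 1 = any chronic disease (treated)
Y = rng.integers(0, 2, n)                    # outcome: ever stopped work

VI = np.linalg.inv(np.cov(X, rowvar=False))  # inverse covariance for Mahalanobis
Xt, Xc = X[D == 1], X[D == 0]
Yt, Yc = Y[D == 1], Y[D == 0]

# Distance from every treated unit to every control unit.
dist = cdist(Xt, Xc, metric="mahalanobis", VI=VI)

att = (Yt - Yc[dist.argmin(axis=1)]).mean()  # match treated -> nearest control
atu = (Yt[dist.argmin(axis=0)] - Yc).mean()  # match control -> nearest treated
ate = (att * len(Yt) + atu * len(Yc)) / n    # population-weighted average

print(f"ATT={att:.3f}  ATU={atu:.3f}  ATE={ate:.3f}")
```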
We used multivariate logistic regression as a robustness check in support of our PSM model. We used three different models to understand the impact of each covariate on ever stopping work and on limiting paid work separately. In Model 1, we adjusted only for the number of chronic diseases. In Model 2, socio-demographic variables were added (age, sex, residence, caste, religion, marital status, and region). Finally, in Model 3, the socioeconomic variables along with smoking/substance use, exercise, health insurance, and other predictors were adjusted for, to assess the adjusted effect of all covariates on ever stopping work for one year or more. The following regression equation was used.
$$\operatorname{logit}(Y_i) = \ln\!\left(\frac{p}{1-p}\right) = \alpha + \beta_i X_i$$
where p is the probability of the outcome event for the i-th individual. The model estimates the log odds of ever stopped work and limiting paid work, adjusted for a set of explanatory variables (Xi).
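A minimal sketch of fitting such a model and reporting adjusted odds ratios with 95% confidence intervals is shown below; the data are simulated and the variable names stand in for the study's covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
# Simulated stand-in data; variable names are hypothetical placeholders.
df = pd.DataFrame({
    "stopped_health": rng.integers(0, 2, n),
    "n_chronic": rng.integers(0, 6, n),
    "sex": rng.choice(["male", "female"], n),
    "residence": rng.choice(["rural", "urban"], n),
})

# Model 2-style specification: chronic-disease count plus socio-demographics.
res = smf.logit("stopped_health ~ C(n_chronic) + C(sex) + C(residence)",
                data=df).fit(disp=0)

# Exponentiate coefficients to obtain adjusted odds ratios and 95% CIs.
or_table = np.exp(pd.concat([res.params, res.conf_int()], axis=1))
or_table.columns = ["OR", "CI 2.5%", "CI 97.5%"]
print(or_table.round(2))
```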
Stata version 16 was used for cleaning, standardizing, and analysing the data. Independent variables were measured at the individual level.
Figure 2 shows a flow chart of participant selection for our analysis. Of the 72,250 participants interviewed in LASI, 50,941 (72.4%) had ever worked and 21,289 (27.6%) had never worked. Among those who had ever worked, 32,990 were currently working and 17,951 were not. Of those not currently working, about 31.5% had stopped work; health-related reasons accounted for 56.5% of these, followed by childcare (20%).
Schematic presentation of ever worked, never worked, stopped, and limiting work among middle-aged and elderly in India, 2017-18
Table 1 presents the socio-economic and demographic profile of the study samples of those who had ever worked and those currently working/temporarily laid off. Of the total surveyed individuals, 59.3% had ever worked and 40.7% were currently working/temporarily laid off. About 67.52% of the ever-worked sample were in the working age group (under 65), compared to 81.03% of the currently working sample. The sample was predominantly rural and currently married. About 56.99% of the ever-worked sample did not have any chronic disease, compared to 62.75% of those currently working/temporarily laid off. The majority of respondents were illiterate. The sample was proportionately distributed across regions.
Table 1 Descriptive statistics of sample profile by socioeconomic and demographic characteristics among middle aged and elderly in India, 2017–18
Figure 3 shows the reasons for ever stopped work among the elderly and non-elderly in India. Health issues (60%) were the major reason for ever stopped work, followed by childcare (21%) and other family issues (9%). The share of health issues was slightly higher for the elderly than for middle-aged people, whereas childcare was a more common reason among middle-aged people than among the elderly.
Percent distribution of middle-aged adults and elderly ever stopped work by reasons in India, 2017-18
Table 2 presents the age-sex adjusted estimates of ever stopped work and limiting work (paid work limited due to health reasons) by socioeconomic and demographic characteristics among individuals with and without chronic conditions. We estimated that 8.4% [95% CI: 7.52–9.24] of older adults in India with a chronic condition had ever stopped work, compared to 5.35% [95% CI: 4.82–5.96] of those without a chronic condition. Similarly, 31.1% [95% CI: 27.86–34.39] with a chronic condition had limiting paid work, compared to 18.3% [95% CI: 16.78–19.86] without. The proportion who ever stopped work for one year or more increased with age and declined with the level of education in both groups. The prevalence of stopped work in the treatment group was higher in urban areas (9.8%, 95% CI: 9.04–10.54), among males (9.9%, 95% CI: 9.03–10.77), and among those who smoke or use any substance. However, no differences in prevalence were observed across caste, religion, or marital status in either the treatment or the control group. Notably, the prevalence of ever stopped work for one year or more was highest in the poorest MPCE quintile (9.2%, 95% CI: 7.80–10.64) and lowest in the richest MPCE quintile (6.7%, 95% CI: 5.33–8.07). The prevalence of ever stopped work and limiting paid work varied across the regions of India, being highest in the western region in both groups. The proportion of participants whose paid work was limited due to health reasons also increased with age and was higher among females, in urban areas, and among those who smoke or use any substance. The prevalence of limiting paid work was higher in the richest MPCE quintile than in the poorest. Overall, for each background characteristic, prevalence was higher for limiting paid work than for ever stopped work for one year or more due to health reasons in both groups. The prevalence of both outcome variables was higher in the treatment group than in the control group.
Table 2 Age-sex adjusted estimates of ever stopped work for one year or more and limiting paid work by socioeconomic and demographic characteristics among middle aged and elderly in India, LASI 2017–18
Table 3 presents the age-sex adjusted estimates of ever stopped work and limiting work by type and number of chronic diseases. For each of the eight disease categories, the prevalence of ever stopped work and limiting paid work was higher among those who had the disease than among those who did not. For instance, among respondents diagnosed with hypertension, 8.3% had ever stopped work, compared to 6.4% of those without hypertension; similarly, 30.6% of those with hypertension had limiting work, compared to 20.8% of those without. The proportions of older adults who stopped work or had limiting work were highest in the case of stroke (21.1%, 95% CI: 15.29–28.26 and 51.6%, 95% CI: 40.82–62.16, respectively), followed by neurological or psychiatric problems. The prevalence of both outcome variables increased with the number of chronic diseases. For instance, the proportion of older adults who ever stopped work ranged from 5.4% (95% CI: 4.98–5.88) among those with no chronic condition to 19.3% (95% CI: 10.25–33.22) among those with five or more chronic conditions. The pattern was similar for limiting paid work. A significant gap was found in the prevalence of stopped work and limiting work between those diagnosed with diabetes/hypertension and those who were not.
Table 3 Proportion of middle-aged adults and elderly ever stopped work for 1 year or more and limiting paid work by type of chronic diseases in India, LASI 2017-18
Table 4 shows the results of propensity score matching for ever stopped work and limiting paid work, controlling for socio-demographic and economic covariates. The estimated mean outcomes in the treated and matched control groups for ever stopped work are 0.085 and 0.046, respectively, suggesting that if the population with chronic conditions had not had them, 3.6% of them would not have stopped working. The ATU result for the control group indicates that if the individuals without chronic disease had had one, 10.4% of them would have stopped working. The ATE result indicates that, after matching, the population with chronic disease is 4.8% more likely to stop working.
Table 4 Results of propensity score matching for ever stopped work and limiting paid work
Similarly, the unmatched sample estimate for limiting paid work shows that individuals with any chronic disease are 11% more likely to have limiting paid work than those without. The estimated mean outcomes in the treated and matched control groups are 0.253 and 0.141, respectively, indicating that if the population with chronic conditions had not had them, 12.5% of them would not have limited paid work. The ATU result for the control group indicates that if the individuals without chronic disease had had one, 25.3% of them would have limited paid work. The ATE result indicates that, after matching, the population with chronic disease is 11.8% (about 12%) more likely to limit paid work.
The propensity score results for ever stopped work for one year or more and for limiting paid work suggest that having any chronic disease is indeed associated with a greater likelihood of ever stopped work and of limiting paid work.
Table 5 presents the odds ratios for ever stopped work using three regression models. In the first model, we included only the number of chronic diseases; in Model 2, socio-demographic factors were added; and in Model 3, household economic condition, health insurance, and behavioural factors were also included. Noticeably, the odds ratios for the number of chronic diseases show significant variation even after adjusting for socio-economic and demographic covariates. The odds of stopping work among those with five or more chronic diseases were over four times higher (OR: 4.17, 95% CI: 1.99–8.75) than among those with no chronic disease. The odds of ever stopped work were significantly lower among females (OR: 0.70, 95% CI: 0.62–0.79) than males. By type of residence, the likelihood of ever stopped work was 1.6 times higher among rural residents (OR: 1.60, 95% CI: 1.34–1.90) than urban residents. For all other demographic variables except the number of chronic diseases, the pattern remained similar to that of Model 2. The odds of stopping work were 1.13 times higher (OR: 1.13, 95% CI: 0.95–1.34) among the richer quintile than the poorer. The odds of stopping work declined with each gradient of educational level. Among those using any substance, the odds of stopping work were 1.26 times higher (OR: 1.26, 95% CI: 1.12–1.43) than among those not using any. Similarly, among those who never or rarely exercise, the odds of stopping work were 1.15 times higher (OR: 1.15, 95% CI: 0.94–1.40) than among those who exercise.
Table 5 Adjusted odds ratio for ever stopped work by socioeconomic and demographic characteristics among middle aged and elderly people in India, 2017-18
Table 6 shows the unadjusted and adjusted odds ratios for limiting paid work. The odds ratios for the number of chronic diseases show significant variation even after adjusting for socio-economic and demographic covariates. For instance, compared to those with no chronic disease, persons with two chronic diseases were significantly more likely to have limiting paid work (OR: 2.58, 95% CI: 1.84–3.62). The likelihood of limiting work was significantly higher among females (OR: 1.17, 95% CI: 1.00–1.36) than males and among rural residents (OR: 1.08, 95% CI: 0.86–1.34) than urban residents. Similarly, the odds of limiting paid work were higher among STs (OR: 1.31, 95% CI: 1.10–1.55) and SCs (OR: 1.34, 95% CI: 1.10–1.63) than among other castes. For all other demographic variables in Model 3, the pattern remained similar to that of Model 2; however, for MPCE quintile, the chances of limiting paid work were 1.45 times higher among the richest quintile (OR: 1.45, 95% CI: 1.11–1.90) than among the poorer.
Table 6 Adjusted odds ratio for limiting paid work by socioeconomic and demographic characteristics among middle aged and elderly people in India, 2017-18
Additional file 1: Appendix 2 presents the estimated proportions of ever stopped work and limiting work among the working-age population (under 65) and those aged 65+ by chronic disease. For each variable, the proportion who stopped work was higher among those with any chronic disease than among those without. The proportion who ever stopped work was higher for each disease among the working-age group than among the elderly (65+). However, the proportion with limiting work was higher among those aged 65+ for most chronic diseases.
This is the first population-based study to estimate the prevalence of ever stopped work and limiting paid work among the middle-aged and elderly in India. The key strength of our study is the use of the first and latest data from a high-quality, nationally representative, population-based ageing survey in India. Our study included a sample of the middle-aged as well as the elderly population who had ever worked. It fills critical gaps in knowledge by investigating the pattern and prevalence of limiting paid work and productivity loss among the middle-aged and elderly in India and their association with chronic diseases, and the validity of the findings has been confirmed through robustness checks.
The age-sex adjusted estimates suggest that 7% of older adults had ever stopped working and 23% had limiting work due to health-related issues. The prevalence of ever stopped work and limiting work due to ill health was higher among those with a chronic condition than among those without, across socio-economic characteristics. As expected, both outcomes were more prevalent among people with even a single disease than among those without any, and both were positively associated with age. The propensity score matching results show ATE differences of 4.8% and 12%, indicating that, after matching, the population with chronic disease is 4.8% more likely to stop working and 12% more likely to limit paid work. Moreover, the prevalence of ever stopped work was higher among the working-age group than among the elderly (65+), whereas the probability of limiting paid work was higher among the elderly. Controlling for socio-demographic and economic factors, the probability of ever stopped work was lower among females but higher among rural dwellers. The probability of limiting paid work was higher among females, rural dwellers, and people with health insurance, and was also high among people in the higher MPCE groups. These findings are consistent with literature from low- and middle-income countries [35]. Second, we found educational attainment to be a significant predictor of ever stopped work and limiting paid work: in the full model (Model 3), a significant decrease in both outcomes was observed with higher levels of education. Zimmerman et al. addressed this, noting that adults with higher educational levels are expected to have greater socio-economic resources to attain a healthy lifestyle and are better equipped with the health literacy required later in their lives [36].
We found that each of the chronic diseases was significantly associated with stopping work and limiting paid work. Among the eight chronic health conditions, those with the strongest association with stopping work or limiting paid work were stroke, followed by neurological or psychiatric problems. Many stroke survivors experience post-stroke spasticity, resulting in an inability to perform daily activities and necessitating further management and treatment; this exerts a considerable economic burden due to treatment costs and lost productive days [37]. Results from another study also indicate that inability to complete neuropsychological tests at one year post-injury is associated with non-productive activity [38]. The chance of ever stopped work for each of the chronic diseases was higher among adults in the prime working-age group, suggesting that chronic diseases significantly inhibit work. Even after adjusting for other socioeconomic and demographic characteristics, the number of chronic diseases was found to be an important factor for ever stopped work and limiting paid work.
According to the WHO fact sheet on non-communicable diseases (2021), NCDs cause 71% of all deaths globally; each year 15 million people aged 30 to 69 die from NCDs, 85% of these premature deaths occur in low- and middle-income countries, and 77% of all NCD deaths take place in low- and middle-income countries. Chronic disease not only hinders individual productivity and well-being but also entails losses of economic output and human capital for the nation. The increased burden of chronic diseases among the working population in low- and middle-income countries with inadequate health systems may increase productivity loss as well as global inequality and instability.
The occurrence of chronic diseases among the working-age group is expected to increase along with the increasing share of the elderly population in India [39]. Chronic disease poses a greater risk of high medical expenditure and productivity loss at work for the working population, and our study reflects this. The evidence from this study on chronic diseases and productivity loss in India is new and striking, and demands policy attention. At present, there is no official programme focusing on the workplace and chronic diseases in India. The first step in this direction is to create awareness, followed by screening for the growing non-communicable diseases, at least for employees in the public and private sectors, to optimise productivity potential. The burden of ill-health in terms of productivity loss will further increase if no programmes are implemented to manage, control, or prevent chronic diseases among the working middle-aged and elderly population in India. Policymakers and employers need to invest in carefully designed workplace interventions at the population and individual levels to avert the adverse economic and health consequences of chronic diseases.
We acknowledge the following limitations of this study. First, the chronic diseases we used are self-reported medical diagnoses; a proportion of the population with chronic diseases may therefore remain undiagnosed. Second, we could not analyse the actual loss of wages/income due to a lack of data. Despite these limitations, we believe the findings serve as the first population-based estimates of productivity loss due to chronic diseases in India.
This study has demonstrated that stopping work and limiting paid work are significantly associated with chronic diseases. Chronic diseases have their greatest impact on the performance domain of productivity, that is, on limiting paid work. These measures could be used as indicators of the performance of workplace health interventions and guide employers and policymakers towards better adjustments for employees with chronic diseases.
The datasets generated and/or analysed during the current study are available in the International Institute for Population Sciences, Mumbai, India repository and can be accessed via the following link: https://iipsindia.ac.in/sites/default/files/LASI_DataRequestForm_0.pdf. The link leads to a data request form designed by the International Institute for Population Sciences. After completing the form, it should be mailed to [email protected] for further processing, after which the requester will receive the data within a reasonable time.
Carter HE, Schofield D, Shrestha R. The long-term productivity impacts of all cause premature mortality in Australia. Aust N Z J Public Health. 2017;41(2):137–43.
Tillett W, et al. A threshold of meaning for work disability improvement in Psoriatic Arthritis measured by the Work Productivity and Activity Impairment Questionnaire. Rheumatol Ther. 2019;6(3):379–91.
Meerding WJ, et al. Health problems lead to considerable productivity loss at work among workers with high physical load jobs. J Clin Epidemiol. 2005;58(5):517–23.
Amaya-Lara JL. Catastrophic expenditure due to out-of-pocket health payments and its determinants in colombian households. Int J Equity Health. 2016;15(1):182.
Cloostermans L, et al. The effectiveness of interventions for ageing workers on (early) retirement, work ability and productivity: a systematic review. Int Arch Occup Environ Health. 2015;88(5):521–32.
Biron C, et al. At work but ill: psychosocial work environment and well-being determinants of presenteeism propensity. J Public Mental Health. 2006;5(4):26–37.
Olivera MJ. Dexamethasone and COVID-19: strategies in low- and Middle-Income Countries to tackle steroid-related Strongyloides Hyperinfection. Am J Trop Med Hyg. 2021;104(5):1611–2.
Fox J, et al. Mental-health conditions, barriers to care, and productivity loss among officers in an urban police department. Conn Med. 2012;76(9):525.
Besen E, Pranksy G. Assessing the relationship between chronic health conditions and productivity loss trajectories. J Occup Environ Med. 2014;56(12):1249–57.
Howard KJ, Howard JT, Smyth AF. The problem of absenteeism and presenteeism in the workplace. InHandbook of occupational health and wellness. Boston: Springer; 2012. p. 151–79.
Kessler RC, et al. The world health organization health and work performance questionnaire (HPQ). J Occup Environ Med. 2003;45:156–74.
Vuong TD, Wei F, Beverly CJ. Absenteeism due to functional limitations caused by seven common chronic diseases in US workers. J Occup Environ Med. 2015;57(7):779.
Leijten FR, et al. The influence of chronic health problems on work ability and productivity at work: a longitudinal study among older employees. Scand J Work Environ Health. 2014;40(5):473–82.
Alavinia SM, Molenaar D, Burdorf A. Productivity loss in the workforce: associations with health, work demands, and individual characteristics. Am J Ind Med. 2009;52(1):49–56.
Van den Heuvel SG, et al. Productivity loss at work; health-related and work-related factors. J Occup Rehabil. 2010;20(3):331–9.
Iverson D, et al. The cumulative impact and associated costs of multiple health conditions on employee productivity. J Occup Environ Med. 2010;52:1206–11.
Doki S, et al. Relationship between sickness presenteeism and awareness and presence or absence of systems for return to work among workers with mental health problems in Japan: an internet-based cross‐sectional study. J Occup Health. 2015;57(6):532–9.
Stewart WF, et al. Lost productive time and cost due to common pain conditions in the US workforce. JAMA. 2003;290(18):2443–54.
Goetzel RZ, et al. Health, absence, disability, and presenteeism cost estimates of certain physical and mental health conditions affecting US employers. J Occup Environ Med. 2004;46:398–412.
Schultz AB, Chen C-Y, Edington DW. The cost and impact of health conditions on presenteeism to employers. PharmacoEconomics. 2009;27(5):365–78.
Mitchell RJ, Bates P. Measuring health-related productivity loss. Popul Health Manag. 2011;14(2):93–8.
Riedel JE, et al. Use of a normal impairment factor in quantifying avoidable productivity loss because of poor health. J Occup Environ Med. 2009;51:283–95.
Meraya AM, Sambamoorthi U. Chronic condition combinations and productivity loss among employed nonelderly adults (18 to 64 years). J Occup Environ Med. 2016;58(10):974.
Alka C, Ali M, Garima M. Impact of Preventive Health Care on Indian Industry and Economy. Indian Council for Research on International Economic Relations (ICRIER), New Delhi; 2007. Working Paper, No. 198.
Fouad AM, et al. Effect of chronic diseases on work productivity: a propensity score analysis. J Occup Environ Med. 2017;59(5):480–5.
INDIA P. Census of India 2011 provisional population totals. New Delhi: Office of the Registrar General and Census Commissioner; 2011.
Mohanty SK, et al. Awareness, treatment, and control of hypertension in adults aged 45 years and over and their spouses in India: a nationally representative cross-sectional study. PLoS Med. 2021;18(8):e1003740.
Marthias T, et al. Impact of non-communicable disease multimorbidity on health service use, catastrophic health expenditure and productivity loss in Indonesia: a population-based panel data analysis study. BMJ Open. 2021;11(2):e041870.
International Institute for Population Sciences (IIPS), National Programme for Health Care of Elderly (NPHCE), MoHFW, Harvard T.H. Chan School of Public Health (HSPH), and the University of Southern California (USC). Longitudinal Ageing Study in India (LASI) wave 1, 2017–18, India report. 2020.
Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70(1):41–55.
Sen S, et al. Unintended effects of Janani Suraksha Yojana on maternal care in India. SSM-population health. 2020;11:100619.
Dixit P, Dwivedi LK, Ram F. Strategies to improve child immunization via antenatal care visits in India: a propensity score matching analysis. PLoS ONE. 2013;8(6):e66175.
Yanovitzky I, Zanutto E, Hornik R. Estimating causal effects of public health education campaigns using propensity score methodology. Eval Program Plan. 2005;28(2):209–20.
Burton WN, et al. The association of health risks with on-the-job productivity. J Occup Environ Med. 2005;4:769–77.
Saha A, Alleyne G. Recognizing noncommunicable diseases as a global health security threat. Bull World Health Organ. 2018;96(11):792.
Zimmerman E, Woolf SH. Understanding the relationship between education and health. Washington, DC: Institute of Medicine; 2014. Discussion Paper.
Ganapathy V, et al. Caregiver burden, productivity loss, and indirect costs associated with caring for patients with poststroke spasticity. Clin Interv Aging. 2015;10:1793.
Atchison T, et al. Relationship between neuropsychological test performance and productivity at 1-year following traumatic brain injury. Clin Neuropsychol. 2004;18(2):249–65.
Abegunde DO, et al. The burden and costs of chronic diseases in low-income and middle-income countries. The Lancet. 2007;370(9603):1929–38.
The authors did not receive any funding to carry out this research.
International Institute for Population Sciences, Mumbai, Maharashtra, 400088, India
Shamrin Akhtar, Rajeev Ranjan Singh & Soumendu Sen
Department of Population and Development, International Institute for Population Sciences, Mumbai, 400088, India
Shamrin Akhtar
Rajeev Ranjan Singh
Soumendu Sen
Conceptualization of study: SKM; analysis and/or interpretation of data: SA and RRS; SA, RRS, SKM, and SS participated in the writing of the manuscript; drafting the manuscript: SA; revising the manuscript critically for important content: SKM. The manuscript was prepared under the overall supervision of SKM. All authors read and approved the final manuscript.
Correspondence to Shamrin Akhtar.
The authors were not involved in the data collection process and therefore did not require ethical approval or consent to participate. The LASI data are secondary in nature; the data are freely available on request, and the survey agencies that conducted the field survey obtained prior consent from the respondents. Ethical clearance was provided by the Indian Council of Medical Research (ICMR), India. The survey agencies that collected the data followed all protocols. To maximize the cooperation of the sampled households and individuals, participants were provided with information brochures explaining the purpose of the survey, the ways their privacy would be protected, and the safety of the health assessments, as part of the ethics protocols. As per the ethics protocols, consent forms were administered to each household and age-eligible individual. In accordance with Human Subjects Protection, four consent forms were used in the LASI: Household Informed Consent, Individual Informed Consent, Consent for Blood Samples Collection for Storage and Future Use (DBS), and Proxy Consent. For each survey participant, the study protocol was described and the steps of each biomarker test were demonstrated by the trained health investigators. Participants' informed consent was obtained for the interviews. Since the survey obtained either signed or oral consent, it was feasible for each participant to provide his/her consent. All methods were performed in accordance with the relevant guidelines and regulations.
The authors declare that they have no competing interests or other interests that might be perceived to influence the results and/or discussion reported in this paper.
Additional file 1: Appendix 1.
Shows the questions asked to generate the two outcome variables. Appendix 2. Estimates of ever stopped work for 1 year or more and limiting paid work by types of chronic diseases, socioeconomic and demographic characteristics among elderly and non-elderly in India, 2017-18.
Akhtar, S., Mohanty, S.K., Singh, R.R. et al. Chronic diseases and productivity loss among middle-aged and elderly in India. BMC Public Health 22, 2356 (2022). https://doi.org/10.1186/s12889-022-14813-2
Ever-stopped work
Limiting paid work
Productivity loss | CommonCrawl |
Study protocol
Effectiveness of dry needling for improving pain and disability in adults with tension-type, cervicogenic, or migraine headaches: protocol for a systematic review
Mohammadreza Pourahmadi ORCID: orcid.org/0000-0001-5202-54781,2,
Mohammad Ali Mohseni-Bandpei ORCID: orcid.org/0000-0001-6638-04381,3,
Abbasali Keshtkar4,
Bart W. Koes5,6,
César Fernández-de-Las-Peñas7,8,
Jan Dommerholt9,10,11,12 &
Mehrdad Bahramian2
Headache is among the most common neurological symptoms worldwide, as over 90% of people have noted at least one headache during their lifetime. Tension-type headaches, cervicogenic headaches, and migraines are common types of headache which can have a significant impact on social, physical, and occupational functioning. Therapeutic management of headaches mainly includes physical therapy and pharmacological interventions. Dry needling is a relatively new therapeutic approach that uses a thin filiform needle without injectate to penetrate the skin and stimulate underlying tissues for the management of neuromusculoskeletal pain and movement impairments.
The main objective of this systematic review and meta-analysis is to evaluate the effectiveness of dry needling in comparison to other interventions on pain and disability in patients with tension-type headache, cervicogenic headache, and migraine.
Methods/design
We will focus on clinical trials with concurrent control group(s) and comparative observational studies assessing the effect of dry needling in patients with tension-type headache, cervicogenic headache, and migraine. Electronic databases from relevant fields of research (PubMed/Medline, Scopus, Embase®, PEDro, Web of Science, Ovid, AMED, CENTRAL, and Google Scholar) will be searched from inception to June 2019 using defined search terms. No restrictions on language of publication or geographic location will be applied. Moreover, grey literature, citation tracking, and the reference lists of the selected studies will be searched manually. Primary outcomes of this study are pain intensity and disability; secondary outcomes are cervical spine ROM, frequency of headaches, health-related quality of life, and TrP tenderness. Studies will be selected by three independent reviewers based on prespecified eligibility criteria. Three reviewers will independently extract data from each eligible study using a pre-piloted Microsoft Excel data extraction form. Risk of bias will be assessed using the Cochrane Back and Neck Review Group 13-item criteria and the NOS. Direct meta-analysis will be performed using a fixed or random effects model to estimate effect sizes such as the standardized mean difference (Morris's dppc) with 95% confidence intervals. Statistical heterogeneity will be evaluated using the I2 statistic and the χ2 test. All meta-analyses will be performed using Stata V.11 and V.14 software. The overall quality of the evidence for the primary outcomes will be assessed using GRADE.
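As a concrete illustration of the planned pooling, the sketch below runs a fixed-effect inverse-variance meta-analysis and computes Cochran's Q and the I2 heterogeneity statistic on invented study-level standardized mean differences; it shows the general method only, not the protocol's exact Morris's dppc computation.

```python
import numpy as np

# Invented study-level standardized mean differences and variances.
smd = np.array([-0.42, -0.15, -0.60, -0.31])
var = np.array([0.040, 0.055, 0.070, 0.048])

w = 1.0 / var                               # inverse-variance weights
pooled = np.sum(w * smd) / np.sum(w)        # fixed-effect pooled SMD
se = np.sqrt(1.0 / np.sum(w))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se

# Cochran's Q and the I^2 statistic for heterogeneity.
Q = np.sum(w * (smd - pooled) ** 2)
dof = len(smd) - 1
I2 = max(0.0, (Q - dof) / Q) * 100

print(f"pooled SMD = {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f}), I^2 = {I2:.0f}%")
```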
All analyses in this study will be based on previously published papers. Therefore, ethical approval and patient consent are not required. The findings of this study will provide important information on the value of dry needling for the management of tension-type headache, cervicogenic headache, and migraine.
PROSPERO registration number: CRD42019124125.
Headache is a major health concern and one of the most common of all symptoms in the worldwide population [1, 2]. According to the 2016 Global Burden of Disease study [3], tension-type headache and migraine, which are described as primary headache syndromes, had the third and sixth highest prevalence among 328 diseases and injuries in 195 countries from 1990 to 2016. Haldeman and Dagenais [4] reported that the prevalence of tension-type headaches, migraines, chronic daily headaches, and cervicogenic headaches in the general population is 38%, 10%, 3%, and 0.4–2.5%, respectively.
Tension-type headache is identified by a bilateral pressing or tightening (non-pulsating) quality, mild to moderate intensity, and pain that is not aggravated by routine physical activity; it occurs in the absence of nausea and vomiting but may be accompanied by either photophobia or phonophobia [5,6,7,8]. These symptoms, however, do not present simultaneously during the same episode [9]. This neurological disorder is more common among female patients (female-to-male ratio of 5:4). The peak prevalence occurs between the ages of 30 and 39 [10]. The International Headache Society [5] classifies tension-type headache into three subtypes according to headache frequency: infrequent episodic (< 1 day of headache per month), frequent episodic (1–14 days of headache per month), and chronic (≥ 15 days per month).
Despite extensive neurophysiological and clinical studies, the exact cause of tension-type headache remains unknown [6, 7]; however, peripheral nociceptive mechanisms appear to be the main cause of episodic tension-type headache, while chronic tension-type headache may be caused by central sensitization, inadequate endogenous pain control, and peripheral myofascial mechanisms (myofascial nociception) [11,12,13,14,15]. Previous experimental studies demonstrated that referred pain originating in myofascial TrPs within neck and shoulder muscles and surrounding soft tissues, such as fascia, tendons, and ligaments, may reproduce headaches in patients with tension-type headache [16,17,18,19,20]. TrPs can be defined as hyperirritable palpable spots within taut bands of myofascial tissue, which are known to cause non-dermatomal referred pain and discomfort [21]. Muscles commonly involved in tension-type headache include the sub-occipital, sternocleidomastoid, upper trapezius, levator scapulae, splenius, temporalis, and masseter [14, 22, 23].
Cervicogenic headache is characterized by chronic pain that originates from bony structures or soft tissues of the neck and is referred to the head [24]. The pain of cervicogenic headache is usually unilateral with an occipitofrontal distribution of spread [25]. The prevalence of cervicogenic headache has been estimated at 15–20% among patients with chronic headaches [26]. The most accepted mechanism of cervicogenic headache is convergence between the trigeminal nerve and the C1–3 nerves in the trigeminocervical nucleus [27]. The characteristics of tension-type headache and cervicogenic headache are similar; however, according to the Cervicogenic Headache International Study Group criteria [28], most cervicogenic headaches can be differentiated from tension-type headache and migraine, albeit with some overlap. In addition, according to Linde et al. [29], some patients may suffer from both types of headaches.
Migraine is defined as a severe throbbing headache with nausea or vomiting and photophobia that is aggravated by routine physical activity such as walking or climbing stairs [30, 31]. Migraine typically lasts between 4 and 72 h and has a unilateral location [31]. Despite many migraine publications, the mechanism of migraine is not yet well understood [32]. It is believed to involve the trigeminocervical complex, which receives nociceptive information via afferent projections from the dura mater and large intracranial vessels [33]. A study conducted by Florencio et al. [34] indicated that patients with migraine exhibit active TrPs in their neck extensor muscles. According to the IHS, migraine is diagnosed if a person has had at least five attacks fulfilling the abovementioned criteria [35].
Therapeutic management of headaches mainly comprises physical therapy and pharmacological approaches [36,37,38]. In the last decade, there has been increasing interest in the use of dry needling for the treatment of headache as well as for neck and shoulder pain syndromes [38]. Dry needling is a skilled intervention frequently performed by physical therapists, physicians, chiropractors, and acupuncturists for the relief of myofascial pain disorders [39, 40]. In this technique, a fine sterile needle is used to penetrate the skin, subcutaneous tissues, fascia, and muscle, with the goal of deactivating TrPs without the use of an anesthetic [41]. Once a TrP is deactivated, the fine needle is removed [42]. It is an efficient, easy-to-learn-and-perform procedure with a low risk profile [43]. Hong [44] suggested that local twitch responses should be elicited during dry needling for a successful technique; however, recent studies have questioned this notion [45, 46]. The duration of application depends on the irritability of the TrP [38]. Although dry needling might not change all aspects of central sensitization, it is probable that local and referred pain will be reduced; muscle blood flow, oxygenation, patterns of muscle activation, and range of motion will be improved; and the biochemical environment of TrPs will be changed [47,48,49]. Linde et al. [50] reported that the physiological mechanism of dry needling includes a combination of peripheral effects, spinal (i.e., gate control) and supraspinal (i.e., endogenous opioid system) mechanisms, and cortical effects (such as psychological or placebo mechanisms). It is hypothesized that dry needling may activate the serotonergic (5-HT) and noradrenergic descending inhibitory systems, which in turn may decrease pain [48]. Furthermore, Cagnie et al. [48] hypothesized that dry needling, via stimulation of nociceptive fibers, may stimulate the enkephalinergic inhibitory dorsal horn interneurons. It is unclear whether the needle manipulation, the electrical stimulation, or both are responsible for these effects [48].
To the best of our knowledge, one systematic review, conducted by France et al. [6], has investigated the effectiveness of dry needling and conventional physiotherapy in the management of cervicogenic headache or tension-type headache. Ten electronic databases were searched up to October 2012, and three relevant studies (two clinical trials and one case report) were identified. The two included clinical trials with tension-type headache participants (40 male and 35 female) demonstrated statistically significant improvements following dry needling, but no significant differences between groups [6]. Furthermore, one case report of a female with cervicogenic headache included in the review showed significant improvement in pain and neck disability index after nine treatment sessions of dry needling combined with manual therapy [6]. A formal meta-analysis was not performed because the number of included studies was insufficient [6]. Additionally, grey literature was not included in that systematic review, so the comprehensiveness of its search strategy was limited [6]. Although no systematic review with meta-analysis has been conducted to evaluate the effectiveness of dry needling for headache, several Cochrane systematic reviews have examined the effectiveness of acupuncture in headaches [29, 50, 51]. In 2016, Linde et al. [29] investigated whether acupuncture is more effective than routine care, 'sham' acupuncture, or other interventions in reducing headache frequency in adults with episodic or chronic tension-type headache. Twelve randomized trials with 2,349 participants (median 56, range 10 to 1,265) were included, and the results indicated that acupuncture is effective for treating frequent episodic or chronic tension-type headaches, but further trials, particularly comparing acupuncture with other treatment options such as physical therapy, massage, or exercise, are needed [29]. In another systematic review, Linde et al. [51] assessed the effectiveness of acupuncture in reducing headache frequency in patients with migraine. Twenty-two trials with 4,419 participants (median 42, range 27 to 1,715) were included, and the results showed consistent evidence that acupuncture provides additional benefit to the treatment of acute migraine attacks alone or to routine care. However, Linde et al. [51] found no evidence of an effect of 'true' acupuncture over 'sham'/'placebo' acupuncture. Moreover, it has been suggested that acupuncture is at least as effective as, or possibly more effective than, prophylactic drug treatment, and has fewer adverse effects [51]. Finally, Linde et al. [51] concluded that acupuncture should be considered a treatment option for patients with migraine willing to undergo this treatment.
Despite the widespread use of dry needling in the treatment of headaches, its effectiveness compared with other techniques remains controversial. Furthermore, because the previously published systematic review on this topic is out of date, a new systematic review of the literature is needed. Hence, the main objective of this systematic review and meta-analysis is to evaluate the effectiveness of dry needling in comparison to other interventions on pain and disability in patients with tension-type headache, cervicogenic headache, and migraine.
This systematic review will be performed in accordance with the PRISMA statement [52] and the principles outlined in the Cochrane Handbook for Systematic Reviews of Interventions [53]. This protocol has been prepared with regard to the PRISMA-P 2015 guidelines [54] and was registered on PROSPERO (International Prospective Register of Systematic Reviews, http://www.crd.york.ac.uk/PROSPERO/; #CRD42019124125) on 4 March 2019. Ethical approval and patient consent are not required, since this is a systematic review of previously published studies and no new data collection will be undertaken.
Search strategy and study selection
A comprehensive electronic database search will be performed from inception to June 2019 on the following databases: Medline (NLM) via PubMed, Scopus, Embase®, PEDro, Web of Science, Ovid, AMED via EBSCO, CENTRAL via The Cochrane Library, and Google Scholar. Electronic search strategies are constructed from the combined keywords tension-type headache, cervicogenic headache, migraine, and dry needling, to identify human studies that investigated the effectiveness of dry needling in adult patients (≥ 18 years) with tension-type headache, cervicogenic headache, or migraine. A combination of MeSH (Medline) terms, Emtree (Embase®) terms, and free-text words will be used in research equations with the 'OR' and 'AND' Boolean operators. Free-text words will be selected from the indexed keywords of the most relevant original studies and reviews in Scopus. To retrieve all possible variations of a specific root word, wildcards and truncation will also be applied. The search strategy is customized according to the database being searched. In addition, if additional relevant keywords are detected during the electronic searches, we will modify and re-formulate the search strategies to incorporate them. Three authors (M.R.P., M.A.M.B., and M.B.) will develop the search syntax, and after piloting and finalizing it, the search of the electronic databases will be conducted by one author (M.R.P.). Moreover, we will consult a biomedical librarian to review our search strategy using the PRESS 2015 evidence-based checklist [55] in order to minimize errors in our search strategies. Details of the PubMed/Medline (NLM) database search syntax are presented in Additional file 1. PubMed's 'My NCBI' (National Center for Biotechnology Information) email alert service will be employed to identify newly published systematic reviews using a basic search strategy.
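As an illustration only, a Boolean query of the kind described can be executed programmatically against PubMed through NCBI's E-utilities (here via Biopython). The search string below is a simplified stand-in, not the full syntax given in Additional file 1, and the email address is a required placeholder.

```python
from Bio import Entrez

Entrez.email = "[email protected]"  # placeholder; NCBI requires a contact address

# Simplified illustrative query combining free-text terms, Boolean operators,
# field tags, and truncation; NOT the authors' full Additional file 1 syntax.
query = ('"dry needling"[tiab] AND ("tension-type headache"[tiab] OR '
         '"cervicogenic headache"[tiab] OR migrain*[tiab])')

handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found; first PMIDs: {record['IdList'][:5]}")
```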
Citation tracking and scanning of the reference lists of the selected studies and relevant systematic reviews will be performed to find eligible studies. A manual internet search of keywords will also be conducted. Additionally, the tables of contents of the journal Cephalalgia and the Journal of Bodywork & Movement Therapies will be reviewed; the key journals are identified through searches in the Web of Science and Scopus. To minimize publication bias, grey literature will be identified by searching conference proceedings (via ProQuest, Scopus, and the Web of Science Conference Proceedings Citation Index), unpublished masters and doctoral theses (via ProQuest and OpenGrey, the System for Information on Grey Literature in Europe), and unpublished trials (via the US National Institutes of Health Ongoing Trials Register [ClinicalTrials.gov], the WHO International Clinical Trials Registry Platform, and the International Standard Randomized Controlled Trials Number registry [ISRCTN]). Abstracts from the annual meetings of the American Headache Society and the European Headache Federation congress in the last 5 years and abstracts from the congress of the International Headache Society in the last 4 years will also be searched. In addition, experts with clinical and research experience on the role of dry needling for headaches will be consulted. Finally, one author (M.R.P.) will complete the search process by manual searching in Google. We will not review content from file sources that are from mainstream publishers (e.g., BMJ, Sage, Wiley, ScienceDirect, Springer, and Taylor & Francis), as we expect these to be captured in our broader search strategy.
If the full text of a relevant article is not accessible, the corresponding author(s) will be contacted. In addition, when unpublished works are retrieved in our search, an email will be sent to the corresponding author(s) to determine whether the work has subsequently been published. If no response is received from the corresponding author(s) after three emails, the study will be excluded.
All publications identified by the searches will be imported into EndNote reference management software (version X9.1; Clarivate Analytics Inc., Philadelphia, PA, USA), and duplicates will be removed automatically and double-checked manually. The titles and abstracts of each citation will be screened independently by three reviewers (M.R.P., M.A.M.B., and M.B.) according to a checklist developed for this purpose (Table 1) with the following criteria:
The study design should be a clinical trial with concurrent comparison group(s) or a comparative observational study;
Study participants should have at least one of the three types of headache (tension-type headache, cervicogenic headache, and/or migraine);
Study participants should be ≥18 years of age;
The studies should have at least one of the primary outcomes (i.e., pain and disability) of this review; and,
Dry needling should be the main intervention in the study.
Table 1 PICOS criteria for the study
If a study meets all of the criteria, the full text of the study will be assessed for eligibility. In addition, a full-text review will be undertaken if the title and abstract do not provide adequate information. The selection process will be conducted strictly according to the inclusion and exclusion criteria by three independent reviewers simultaneously (M.R.P., M.A.M.B., and M.B.) (Table 1). The three reviewers are physical therapists with experience in performing systematic reviews. Disagreements will be resolved by discussion and, if necessary, consultation with a fourth reviewer (A.A.K.). The eligibility criteria are based on the PICOS acronym (Table 1) and will be piloted prior to conducting the review process. The entire process of study selection is summarized in the PRISMA flow diagram (Fig. 1).
Flow diagram of study selection process
Risk of bias
The risk of bias of each clinical trial will be evaluated independently by three reviewers (M.R.P., M.A.M.B., and M.B.) using the Cochrane Back and Neck Review Group 13-item criteria [64]. The guideline examines six specific domains of bias, and the scoring options for each item are "Yes," "No," and "Unclear" (when there is insufficient information to make an accurate judgment). We will categorize studies as "low risk" (at least six of the 13 criteria are met) or "high risk" (fewer than six criteria are met) [65]. In addition, the risk of bias of each comparative observational study will be judged independently by the same reviewers (M.R.P., M.A.M.B., and M.B.) on the basis of the NOS [66]. The NOS is recommended by the Cochrane Non-Randomized Studies Methods Working Group for assessing the quality of observational studies. The scale consists of three subscales: Selection (4 items), Comparability (1 item), and Outcome or Exposure (3 items) [67]. A total score of 3 or less will be considered high, 4–6 moderate, and ≥ 7 low risk of bias [68]. Unacceptable bias will be defined as a zero score on any of the NOS subscales. The level of inter-rater agreement will be assessed using the weighted Cohen's kappa coefficient, a method for quantifying agreement on categorical data, together with its 95% confidence interval (κ 0–0.20 = poor agreement; 0.21–0.40 = fair agreement; 0.41–0.60 = moderate agreement; 0.61–0.80 = good agreement; and 0.81–1 = very good agreement) [69]. Disagreements will be resolved by discussion and, where required, with input from a fourth reviewer (A.A.K.).
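As a minimal sketch of how such an agreement statistic can be computed (an illustration with hypothetical ratings, not the authors' analysis code), the weighted kappa is available in scikit-learn:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal risk-of-bias ratings from two reviewers:
# 0 = low, 1 = moderate, 2 = high risk of bias
reviewer_1 = [0, 1, 2, 0, 1, 0, 2, 1]
reviewer_2 = [0, 1, 1, 0, 2, 0, 2, 1]

# Linear weights penalize disagreements by their distance on the ordinal scale
kappa = cohen_kappa_score(reviewer_1, reviewer_2, weights="linear")
print(f"Weighted kappa: {kappa:.2f}")  # interpret against the 0-0.20 ... 0.81-1 bands
```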
The graphical presentation of the risk of bias assessment will be generated with Review Manager software (RevMan V.5.3.5) or Stata V.14 (Stata Corp., College Station, TX, USA).
Data extraction and abstraction from each eligible study will be performed independently by three reviewers (M.R.P., M.A.M.B., and M.B.), using a Microsoft Excel spreadsheet (Microsoft, Redmond, WA, USA) designed according to the Cochrane meta-analysis guidelines and adjusted to the needs of this review. The data-extraction form will be pilot-tested before its use. Pilot testing will be performed on two published studies that are not included in the present systematic review but are relatively similar to the eligible studies. During pilot testing, we will assess the characteristics of the variables (e.g., categorical or continuous) and whether all pre-defined variables in the data-extraction form are useful for the systematic review and meta-analysis. Moreover, we will check whether it is possible to include additional variables in the data-extraction form in order to perform further post-hoc sensitivity analyses. The following data will be extracted from all the eligible studies:
Study characteristics: first author's name, journal name, publication year, country where the study was performed, study year, study design, single- versus multicenter, sample size, and duration of follow-up.
Participants' characteristics: ethnicity, age, gender, body mass, stature, BMI, and type of headache.
Intervention and comparator details: sample size for each treatment group, muscle names, features of the dry needling treatment (such as type of dry needling [superficial or deep], needle size, needling technique, and whether the technique elicited a local twitch response), features of the control interventions (sham/placebo methods or standard treatment details), duration of treatment sessions, frequency of treatment sessions per week or month, withdrawals, dropouts, and any other relevant detail.
Outcome measures: pain intensity, scales and questionnaires used to assess pain, total score of functional disability, disability questionnaires, cervical spine ROM, instruments used to measure cervical spine ROM, questionnaire used to measure health-related quality of life, and instruments used to assess TrPs tenderness. Primary and secondary outcomes will be documented at both baseline and endpoint.
Following the completion of this process, one author (M.R.P.) will double-check the extracted data to avoid any omissions or inaccuracies.
Dealing with missing data
If there are missing data or insufficient details in relation to the characteristics of the studies included in the meta-analysis, we will try to contact the study authors for further information. However, if the authors do not respond to queries, we will apply the following strategies to address missing data:
If ITT analyses were conducted in the eligible studies, we will use the ITT data in place of the missing data as the first option.
For continuous missing outcome data, we will try to re-calculate mean difference, standard deviation, or effect size values when the test statistics, medians, p-values, standard errors, or confidence intervals are reported in the selected studies using the Campbell Collaboration effect size calculator (http://www.campbellcollaboration.org/escalc/html/EffectSizeCalculator-SMD-main.php).
If required data are presented only in graphs of the included studies, we will extract the data by using WebPlotDigitizer V.4.2 (https://automeris.io/WebPlotDigitizer/index.html).
If none of the above strategies can be implemented, we will try to estimate mean difference and standard deviation values from the most similar study [65, 70].
Assessment of heterogeneity
Statistical heterogeneity among the included studies will be assessed using the I² statistic and Cochran's Q test (χ²), as recommended by the Cochrane Handbook for Systematic Reviews of Interventions [71]. The I² statistic will be interpreted using the following guide: 0–40% = no important heterogeneity; 30–60% = moderate heterogeneity; 50–90% = substantial heterogeneity; 75–100% = considerable heterogeneity [72]. Heterogeneity will be considered before conducting a pooled analysis. When I² values are higher than 50% and there is overlap between the confidence intervals of the included studies and the summary estimate on the forest plot, the results of all eligible studies will be combined. Potential sources of heterogeneity will be explored by sensitivity and subgroup analyses/meta-regression.
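For concreteness, the sketch below computes Cochran's Q and I² from per-study effect sizes and sampling variances (hypothetical numbers; not the authors' code):

```python
import numpy as np

def q_and_i2(effects, variances):
    """Cochran's Q and the I^2 heterogeneity statistic."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    pooled = np.sum(w * y) / np.sum(w)             # fixed-effect pooled estimate
    q = np.sum(w * (y - pooled) ** 2)              # Cochran's Q
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical example with five studies
q, i2 = q_and_i2([0.4, 0.6, 0.2, 0.9, 0.5], [0.04, 0.06, 0.05, 0.09, 0.03])
```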
Assessment of publication bias
Publication bias will be explored by constructing a funnel plot and performing Begg and Mazumdar's rank correlation test [73] and Egger's linear regression test [74]. A p-value < 0.05 for these tests indicates statistically significant publication bias; however, the threshold will be set at 0.10 if the number of included studies is < 10. Moreover, Duval and Tweedie's 'trim and fill' method will be used to explore the potential influence of publication bias [72]. Publication bias will not be assessed with a funnel plot when < 10 studies are available per primary outcome of interest, since the plot then yields unreliable results [70]. Publication bias will be assessed using Stata V.14 (Stata Corp., College Station, TX, USA).
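A minimal sketch of Egger's regression test is given below (illustrative only; it assumes per-study effects and standard errors, and requires SciPy ≥ 1.6 for intercept_stderr):

```python
import numpy as np
from scipy import stats

def egger_test(effects, se):
    """Regress the standard normal deviate (effect / SE) on precision (1 / SE);
    an intercept far from zero suggests small-study effects / publication bias."""
    snd = np.asarray(effects, dtype=float) / np.asarray(se, dtype=float)
    precision = 1.0 / np.asarray(se, dtype=float)
    res = stats.linregress(precision, snd)
    t = res.intercept / res.intercept_stderr        # t-test on the intercept
    p = 2 * stats.t.sf(abs(t), df=len(snd) - 2)
    return res.intercept, p

# Hypothetical example with five studies
intercept, p = egger_test([0.4, 0.6, 0.2, 0.9, 0.5],
                          [0.20, 0.25, 0.22, 0.30, 0.17])
```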
Data synthesis
Pooled effects of continuous variables will be expressed as Morris's delta (Morris's dppc), if the same primary outcomes are used in the eligible studies. Morris described a pre-post control effect size as "the mean pre-post change in the treatment group minus the mean pre-post change in the control group, divided by the pooled baseline standard deviation of both the treatment and control groups" [75, 76]:
$$ {d}_{ppc}={c}_p\left[\frac{\left({M}_{post,T}-{M}_{pre,T}\right)-\left({M}_{post,C}-{M}_{pre,C}\right)}{SD_{pre}}\right] $$
The pooled pretest standard deviation is calculated as [75, 76]:
$$ {SD}_{pre}=\sqrt{\frac{\left({n}_T-1\right){SD}_{pre,T}^2+\left({n}_C-1\right){SD}_{pre,C}^2}{n_T+{n}_C-2}} $$ where T denotes the treatment group and C the control group.
The small sample size bias-correction is calculated as [75, 76]:
$$ {c}_p=1-\frac{3}{4\left({n}_T+{n}_C-2\right)-1} $$
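A minimal numeric sketch of the three equations above (an illustration; the protocol itself uses the Campbell and Psychometrica calculators referenced below):

```python
import numpy as np

def morris_dppc(m_pre_t, m_post_t, sd_pre_t, n_t,
                m_pre_c, m_post_c, sd_pre_c, n_c):
    """Morris's pre-post-control effect size d_ppc (T: treatment, C: control)."""
    # Pooled pretest standard deviation
    sd_pre = np.sqrt(((n_t - 1) * sd_pre_t ** 2 + (n_c - 1) * sd_pre_c ** 2)
                     / (n_t + n_c - 2))
    # Small-sample bias correction c_p
    c_p = 1 - 3 / (4 * (n_t + n_c - 2) - 1)
    return c_p * ((m_post_t - m_pre_t) - (m_post_c - m_pre_c)) / sd_pre

# Hypothetical example: pain scores in treatment and control groups
d = morris_dppc(m_pre_t=6.8, m_post_t=3.9, sd_pre_t=1.6, n_t=30,
                m_pre_c=6.7, m_post_c=5.8, sd_pre_c=1.5, n_c=28)
```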
Effect sizes (Morris's dppc) will be calculated using the Campbell Collaboration effect size calculator (http://www.campbellcollaboration.org/escalc/html/EffectSizeCalculator-SMD-main.php) and the Psychometrica online tool (https://www.psychometrica.de/effect_size.html#cohc). If the continuous outcome measures differ between studies, we will also express pooled effects with Morris's dppc, but we will first convert the different outcome measures to a 0 to 100 scale [65]. Three levels of effect size are defined: small (dppc < 0.40), medium (0.40 ≤ dppc ≤ 0.70), and large (dppc > 0.70). Although no minimally clinically important differences (MCIDs) are available for pain and disability in adult patients with headache, a clinically important effect for the primary outcomes will be assumed when the magnitude of the effect size is at least medium [65]. Meta-analyses will be conducted separately on studies with a clinical trial design and on studies with a comparative observational design, and separately on tension-type headache, cervicogenic headache, and migraine within each study design. Given a sufficient number of studies, we will also conduct a priori subgroup analyses based on the overall risk of bias score (high, moderate, and low risk of bias). All meta-analysis results will be reported in forest plots with 95% confidence intervals. A random-effects model with the DerSimonian–Laird (D + L) method [77] will be used to pool the data from individual studies. Stata V.11 and V.14 (Stata Corp., College Station, TX, USA) will be used for the meta-analyses. Wherever applicable, the NNT will be presented to help the reader understand how the results can be applied to the individual patient; the Campbell Collaboration effect size calculator and the Psychometrica online tool will be used to calculate the NNT.
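For illustration, a minimal sketch of DerSimonian–Laird random-effects pooling (a hypothetical helper, not the Stata routines specified in the protocol):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                  # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)            # fixed-effect pooled mean
    q = np.sum(w * (y - mu_fe) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, (mu_re - 1.96 * se, mu_re + 1.96 * se)  # estimate and 95% CI
```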
In addition, where quantitative synthesis is not deemed suitable due to a low number of studies, a qualitative synthesis of results will be undertaken. We will conduct a meta-analysis when ≥ 2 studies are available, since two is the minimum number of studies required for meta-analysis [78]. If meta-analysis is not possible, we will summarize study results as either statistically significant (p-value < 0.05) or nonsignificant and describe the effect of the intervention on the outcomes of this study.
Unit of analysis issues
The unit of analysis will be based on aggregated outcome data, as individual patient data are not available for any study.
Analysis problems
If sufficiently homogeneous studies are available for statistical pooling, a meta-analysis will be performed for the following time points: short-term (< 3 months after the baseline measurements were taken), intermediate-term (at least 3 months but < 12 months after baseline), and long-term (12 months or more after baseline) follow-up. If multiple time points fall within the same category, the ones closest to the end of treatment, 6 months, and 12 months will be used [70].
Sensitivity analysis using the leave-one-out method will be performed to determine the effect of each individual study on the pooled results [79]. Furthermore, sensitivity analyses restricted to high-quality studies will be conducted to explore the robustness of the conclusions. All sensitivity analyses will be performed using Stata V.14 (Stata Corp., College Station, TX, USA).
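A minimal sketch of the leave-one-out procedure, reusing the hypothetical dersimonian_laird() helper sketched above (illustrative only):

```python
import numpy as np

def leave_one_out(effects, variances):
    """Re-pool the meta-analysis k times, omitting one study each time."""
    results = []
    for i in range(len(effects)):
        sub_e = np.delete(np.asarray(effects, dtype=float), i)
        sub_v = np.delete(np.asarray(variances, dtype=float), i)
        results.append(dersimonian_laird(sub_e, sub_v))
    return results  # one (pooled estimate, 95% CI) pair per omitted study
```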
Summary of evidence
The overall quality of the evidence and the strength of the recommendations for the primary outcomes will be assessed using GRADE [80]. The 'Summary of findings' tables will be generated with the GRADE working group online tool (GRADEpro GDT, www.gradepro.org). The downgrading process is based on five domains: study limitations (e.g., risk of bias), inconsistency (e.g., heterogeneity between study results), indirectness of evidence (including other patient populations or use of surrogate outcomes), imprecision (e.g., small sample size), and reporting bias (e.g., publication bias). The quality of evidence is classified as follows: (i) high quality—further research is unlikely to change confidence in the estimate of effect; the Cochrane criteria and NOS identify no risks of bias and all domains in the GRADE classification are fulfilled; (ii) moderate quality—further research is likely to have an important impact on the confidence in the estimate of effect, and one of the domains in the GRADE classification is not fulfilled; (iii) low quality—further research is likely to have an important impact on the confidence and may change the estimate; two of the domains in the GRADE classification are not fulfilled; and (iv) very low quality—we are uncertain about the estimate; three of the domains in the GRADE classification are not fulfilled [70, 80].
Headaches are one of the main reasons for absenteeism from work and for avoidance of physical and social activities [81]. From 2007 to 2017, the number of all-age years lived with disability attributed to headaches increased by 15.4% (95% UI, 14.6–16.2) [2]. Tension-type headache, cervicogenic headache, and migraine are three common types of headache that can have a considerable impact on individuals' quality of life. Physical therapy is a treatment option that consists of interventions such as manual therapy, electrotherapy, exercise, and various maneuvers intended to improve pain, disability, and quality of life in patients with headaches. Dry needling is a physical therapy modality that involves inserting a fine filiform needle into the TrPs of soft tissues. Many theoretical models have influenced physical therapists and clinicians practicing dry needling [82]. The 'fast-in-and-fast-out' technique described by Hong [44] is probably one of the most widely used for the management of neuromusculoskeletal pain and dysfunction [82].
Despite an increasing number of studies evaluating the effectiveness of dry needling for musculoskeletal disorders, no systematic review with meta-analysis has been carried out to examine the effectiveness of dry needling in patients with headaches. It is hoped that this study will provide useful information for physical therapists and clinicians on the treatment of tension-type headache, cervicogenic headache, and migraine.
This review will not capture studies that assess the secondary outcomes (i.e., cervical spine ROM, frequency of headaches, health-related quality of life, and TrPs tenderness) but do not report on pain or disability. Therefore, the findings regarding the secondary outcomes will be limited to the studies included under the eligibility criteria.
This study is the protocol for a systematic review; materials are not yet being collected and no data are yet available. After publication of the systematic review results, the dataset will be available from the corresponding author on reasonable request.
AMED:
Allied and Complementary Medicine
BMI:
Body mass index
CENTRAL:
Cochrane Central Register of Controlled Clinical Trials
FRI:
Functional Rating Index
GRADE:
Grading of Recommendations Assessment, Development and Evaluation
IHS:
International Headache Society
ITT:
Intention-to-treat
MeSH:
Medical Subject Headings
NLM:
National Library of Medicine
NNT:
Number-needed-to-treat
NOS:
Newcastle-Ottawa Scale
NPRS:
Numeric Pain Rating Scale
PEDro:
Physiotherapy Evidence Database
PICOS:
Participants, intervention, comparison, outcomes, study design
PPC:
Pre-post change
PRESS:
Peer Review of Electronic Search Strategies
PRISMA:
Preferred Reporting Items for Systematic Reviews and Meta-analyses
PRISMA-P:
Preferred Reporting Items for Systematic review and Meta-analysis Protocols
TrPs:
Trigger points
UI:
Uncertainty interval
VAS:
Visual analogue scale
Davies PT, Lane RJ, Astbury T, Fontebasso M, Murphy J, Matharu M. The long and winding road: the journey taken by headache sufferers in search of help. Prim Health Care Res Dev. 2019;20:1–6.
James SL, Abate D, Abate KH, Abay SM, Abbafati C, Abbasi N, Abbastabar H, Abd-Allah F, Abdela J, Abdelalim A. Global, regional, and national incidence, prevalence, and years lived with disability for 354 diseases and injuries for 195 countries and territories, 1990–2017: a systematic analysis for the global burden of disease study 2017. Lancet. 2018;392:1789–858.
Vos T, Abajobir AA, Abate KH, Abbafati C, Abbas KM, Abd-Allah F, Abdulkader RS, Abdulle AM, Abebo TA, Abera SF. Global, regional, and national incidence, prevalence, and years lived with disability for 328 diseases and injuries for 195 countries, 1990–2016: a systematic analysis for the global burden of disease study 2016. Lancet. 2017;390:1211–59.
Haldeman S, Dagenais S. Cervicogenic headaches: a critical review. Spine J. 2001;1:31–46.
Olesen J, Steiner T. The international classification of headache disorders, 2nd ed (ICDH-II). BMJ Publishing Group Ltd; 2004;75:808–11.
France S, Bown J, Nowosilskyj M, Mott M, Rand S, Walters J. Evidence for the use of dry needling and physiotherapy in the management of cervicogenic or tension-type headache: a systematic review. Cephalalgia. 2014;34:994–1003.
Fumal A, Schoenen J. Tension-type headache: current research and clinical management. Lancet Neurol. 2008;7:70–83.
Loder E, Rizzoli P. Tension-type headache. BMJ (Clinical research ed). 2008;336:88–92.
Arnold M. Headache classification committee of the International Headache Society (IHS): the international classification of headache disorders. Cephalalgia. 2018;38:1–211.
Chowdhury D. Tension type headache. Ann Indian Acad Neurol. 2012;15:S83.
Bendtsen L. Central sensitization in tension-type headache—possible pathophysiological mechanisms. Cephalalgia. 2000;20:486–508.
Bezov D, Ashina S, Jensen R, Bendtsen L. Pain perception studies in tension-type headache. Headache. 2011;51:262–71.
Fernández-de-las-Peñas C, Cuadrado ML, Arendt-Nielsen L, Simons DG, Pareja JA. Myofascial trigger points and sensitization: an updated pain model for tension-type headache. Cephalalgia. 2007;27:383–93.
Gildir S, Tüzün EH, Eroğlu G, Eker L. A randomized trial of trigger point dry needling versus sham needling for chronic tension-type headache. Medicine. 2019;98:1–7.
Arendt-Nielsen L, Castaldo M, Mechelli F, Fernández-de-las-Peñas C. Muscle triggers as a possible source of pain in a subgroup of tension-type headache patients? Clin J Pain. 2016;32:711–8.
Fernández-de-las-Peñas C, Alonso-Blanco C, Cuadrado ML, Gerwin RD, Pareja JA. Trigger points in the suboccipital muscles and forward head posture in tension-type headache. Headache. 2006;46:454–60.
Fernández-de-las-Peñas C, Cuadrado ML, Gerwin RD, Pareja JA. Referred pain from the trochlear region in tension-type headache: a myofascial trigger point from the superior oblique muscle. Headache. 2005;45:731–7.
Fernández-de-las-Peñas C, Ge H-Y, Arendt-Nielsen L, Cuadrado ML, Pareja JA. The local and referred pain from myofascial trigger points in the temporalis muscle contributes to pain profile in chronic tension-type headache. Clin J Pain. 2007;23:786–92.
Fernández-Mayoralas DM, Ortega-santiago R, Ambite-quesada S, Palacios-ceña D, Pareja JA. Referred pain from myofascial trigger points in head and neck-shoulder muscles reproduces head pain features in children with chronic tension type headache. J Headache Pain. 2011;12:35.
Kamali F, Mohamadi M, Fakheri L, Mohammadnejad F. Dry needling versus friction massage to treat tension type headache: a randomized clinical trial. J Bodyw Mov Ther. 2019;23:89–93.
Simons DG, Travell JG, Simons LS. Myofascial pain and dysfunction: the trigger point manual, vol 1. Upper half of body. Baltimore: Williams & Wilkins; 1999.
Fernández-de-las-Peñas C, Cuadrado ML, Pareja JA. Myofascial trigger points, neck mobility, and forward head posture in episodic tension-type headache. Headache. 2007;47:662–72.
Melchart D, Streng A, Hoppe A, Brinkhaus B, Witt C, Wagenpfeil S, Pfaffenrath V, Hammes M, Hummelsberger J, Irnich D. Acupuncture in patients with tension-type headache: randomised controlled trial. BMJ. 2005;331:376–82.
Bir SC, Nanda A, Patra DP, Maiti TK, Liendo C, Minagar A, Chernyshev OY. Atypical presentation and outcome of cervicogenic headache in patients with cervical degenerative disease: a single-center experience. Clin Neurol Neurosurg. 2017;159:62–9.
Vij B, Tepper SJ. Secondary headaches. In: Cheng J, Rosenquist RW, editors. Fundamentals of Pain Medicine. Cham: Springer; 2018. p. 291–300.
Alix ME, Bates DK. A proposed etiology of cervicogenic headache: the neurophysiologic basis and anatomic relationship between the dura mater and the rectus posterior capitis minor muscle. J Manip Physiol Ther. 1999;22:534–9.
Narouze SN, Casanova J, Mekhail N. The longitudinal effectiveness of lateral atlantoaxial intra-articular steroid injection in the treatment of cervicogenic headache. Pain Med. 2007;8:184–8.
Sjaastad O, Fredriksen T, Pfaffenrath V. Cervicogenic headache: diagnostic criteria. Headache. 1998;38:442–5.
Linde K, Allais G, Brinkhaus B, Fei Y, Mehring M, Shin BC, Vickers A, White AR. Acupuncture for the prevention of tension-type headache. Cochrane Database Syst Rev. 2016;4:CD007587.
Borodic GE, Acquadro MA. The use of botulinum toxin for the treatment of chronic facial pain. J Pain. 2002;3:21–7.
Dowson A. The burden of headache: global and regional prevalence of headache and its impact. Int J Clin Pract. 2015;69:3–7.
Bayani A, Jafari S, Sprott J, Hatef B. A chaotic model of migraine headache considering the dynamical transitions of this cyclic disease. EPL (Europhys Lett). 2018;123:10006.
Saunders BJ, Aberkorn IS, Nye BL. Laboratory investigation in CDH. In: Green M, Cowan R, Cham FF, editors. Chronic headache. Switzerland: Springer; 2019. p. 169–83.
Florencio LL, Ferracini GN, Chaves TC, Palacios-Ceña M, Ordás-Bandera C, Speciali JG, Falla D, Grossi DB, Fernández-de-las-Peñas C. Active trigger points in the cervical musculature determine the altered activation of superficial neck and extensor muscles in women with migraine. Clin J Pain. 2017;33:238–45.
Headache Classification Committee of the International Headache Society (IHS). The international classification of headache disorders (beta version). Cephalalgia. 2013;33:629–808.
Fernández-de-las-Peñas C, Cuadrado ML. Physical therapy for headaches. Cephalalgia. 2016;36:1134–42.
Fernández-de-las-Peñas C, Cuadrado ML. Therapeutic options for cervicogenic headache. Expert Rev Neurother. 2014;14:39–49.
Fernández-de-las-Peñas C, Cuadrado ML. Dry needling for headaches presenting active trigger points. Expert Rev Neurother. 2016;16:365–6.
Berrigan WA, Whitehair C, Zorowitz R. Acute spinal epidural hematoma as a complication of dry needling: a case report. PM&R. 2018. https://doi.org/10.1016/j.pmrj.2018.07.009.
Liu L, Huang Q-M, Liu Q-G, Thitham N, Li L-H, Ma Y-T, Zhao J-M. Evidence for dry needling in the management of myofascial trigger points associated with low back pain: a systematic review and meta-analysis. Arch Phys Med Rehabil. 2018;99:144–52.e2.
de Abreu Venâncio R, Guedes Pereira Alencar F, Zamperini C. Different substances and dry-needling injections in patients with myofascial pain and headaches. CRANIO®. 2008;26:96–103.
Brady S, McEvoy J, Dommerholt J, Doody C. Adverse events following trigger point dry needling: a prospective survey of chartered physiotherapists. J Man Manip Ther. 2014;22:134–40.
Liu L, Huang Q-M, Liu Q-G, Ye G, Bo C-Z, Chen M-J, Li P. Effectiveness of dry needling for myofascial trigger points associated with neck and shoulder pain: a systematic review and meta-analysis. Arch Phys Med Rehabil. 2015;96:944–55.
Hong C-Z. Lidocaine injection versus dry needling to myofascial trigger point. The importance of the local twitch response. Am J Phys Med Rehabil. 1994;73:256–63.
Koppenhaver SL, Walker MJ, Rettig C, Davis J, Nelson C, Su J, Fernández-de-las-Peñas C, Hebert JJ. The association between dry needling-induced twitch response and change in pain and muscle function in patients with low back pain: a quasi-experimental study. Physiotherapy. 2017;103:131–7.
Perreault T, Dunning J, Butts R. The local twitch response during trigger point dry needling: is it necessary for successful outcomes? J Bodyw Mov Ther. 2017;21:940–7.
Cagnie B, Barbe T, De Ridder E, Van Oosterwijck J, Cools A, Danneels L. The influence of dry needling of the trapezius muscle on muscle blood flow and oxygenation. J Manip Physiol Ther. 2012;35:685–91.
Cagnie B, Dewitte V, Barbe T, Timmermans F, Delrue N, Meeus M. Physiologic effects of dry needling. Curr Pain Headache Rep. 2013;17:348.
Dommerholt J. Dry needling—peripheral and central considerations. J Man Manip Ther. 2011;19:223–7.
Linde K, Allais G, Brinkhaus B, Manheimer E, Vickers A, White AR. Acupuncture for tension-type headache. Cochrane Database Syst Rev. 2009;1:CD007587.
Linde K, Allais G, Brinkhaus B, Manheimer E, Vickers A, White AR. Acupuncture for migraine prophylaxis. Cochrane Database Syst Rev. 2009;1:CD001218.
Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Int J Surg. 2010;8:336–41.
Lefebvre C, Manheimer E, Glanville J, Higgins J, Green S. Cochrane handbook for systematic reviews of interventions. Version 5.0. 2; 2009.
Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, Shekelle P, Stewart LA. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1.
McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS peer review of electronic search strategies: 2015 guideline statement. J Clin Epidemiol. 2016;75:40–6.
Cairns BE, Gazerani P. Sex-related differences in pain. Maturitas. 2009;63:292–6.
Malo-Urriés M, Tricás-Moreno JM, Estébanez-de-Miguel E, Hidalgo-García C, Carrasco-Uribarren A, Cabanillas-Barea S. Immediate effects of upper cervical Translatoric mobilization on cervical mobility and pressure pain threshold in patients with Cervicogenic headache: a randomized controlled trial. J Manip Physiol Ther. 2017;40:649–58.
Gao Z, Giovanardi CM, Li H, Hao C, Li Q, Zhang X, Mansmann U. Acupuncture for migraine: a protocol for a meta-analysis and meta-regression of randomised controlled trials. BMJ Open. 2018;8:e022998.
Gattie E, Cleland JA, Snodgrass S. The effectiveness of trigger point dry needling for musculoskeletal conditions by physical therapists: a systematic review and meta-analysis. J Orthop Sports Phys Ther. 2017;47:133–49.
Georgoudis G, Felah B, Nikolaidis P, Damigos D. The effect of myofascial release and microwave diathermy combined with acupuncture versus acupuncture therapy in tension-type headache patients: a pragmatic randomized controlled trial. Physiother Res Int. 2018;23:e1700.
Simons DG, Travell JG, Simons LS. Travell & Simons' myofascial pain and dysfunction: the trigger point manual, volume 1: upper half of body. Baltimore: Lippincott williams & wilkins; 1999.
Bagg MK, McLachlan AJ, Maher CG, Kamper SJ, Williams CM, Henschke N, Wand BM, Moseley G, Hübscher M, O'Connell NE. Paracetamol, NSAIDS and opioid analgesics for chronic low back pain: a network meta-analysis. Cochrane Database Syst Rev. 2018;6:CD013045.
Hofmann M, Fehlinger T, Stenzel N, Rief W. The relationship between skill deficits and disability–a transdiagnostic study. J Clin Psychol. 2015;71:413–21.
Furlan AD, Malmivaara A, Chou R, Maher CG, Deyo RA, Schoene M, Bronfort G, Van Tulder MW. 2015 updated method guideline for systematic reviews in the Cochrane Back and neck group. Spine. 2015;40:1660–73.
Saragiotto BT, Maher CG, Yamato TP, Costa LO, Menezes Costa LC, Ostelo RW, Macedo LG. Motor control exercise for chronic non-specific low-back pain. Cochrane Database Syst Rev 2016;1:CD012004.
Stang A. Critical evaluation of the Newcastle-Ottawa scale for the assessment of the quality of nonrandomized studies in meta-analyses. Eur J Epidemiol. 2010;25:603–5.
Griffin DW, Harmon D, Kennedy N. Do patients with chronic low back pain have an altered level and/or pattern of physical activity compared to healthy individuals? A systematic review of the literature. Physiotherapy. 2012;98:13–23.
Yong W, Sanguankeo A, Upala S. Association between primary Sjögren's syndrome, cardiovascular and cerebrovascular disease: a systematic review and meta-analysis. Clin Exp Rheumatol. 2018;36:S190–7.
Altman DG. Practical statistics for medical research. London: Chapman & Hall; 1991.
Pourahmadi MR, Taghipour M, Takamjani IE, Sanjari MA, Mohseni-Bandpei MA, Keshtkar AA. Motor control exercise for symptomatic lumbar disc herniation: protocol for a systematic review and meta-analysis. BMJ Open. 2016;6:e012426.
Higgins J, Green S. Cochrane handbook for systematic reviews of interventions version 5.1. 0.[updated march 2011]. Chichester: The Cochrane Collaboration; 2018.
Deeks JJ, Higgins JP, Altman DG. Analysing data and undertaking meta-analyses. Cochrane handbook for systematic reviews of interventions: Cochrane book series; 2008. p. 243–96.
Begg CB, Mazumdar M. Operating characteristics of a rank correlation test for publication bias. Biometrics. 1994;50:1088–101.
Egger M, Smith GD, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ. 1997;315:629–34.
Karr JE, Areshenkoff CN, Rast P, Garcia-Barrera MA. An empirical comparison of the therapeutic benefits of physical exercise and cognitive training on the executive functions of older adults: a meta-analysis of controlled trials. Neuropsychology. 2014;28:829.
Morris SB. Estimating effect sizes from pretest-posttest-control group designs. Organ Res Methods. 2008;11:364–86.
DerSimonian R, Laird N. Meta-analysis in clinical trials. Control Clin Trials. 1986;7:177–88.
Valentine JC, Pigott TD, Rothstein HR. How many studies do you need? A primer on statistical power for meta-analysis. J Educ Behav Stat. 2010;35:215–47.
Patsopoulos NA, Evangelou E, Ioannidis JP. Sensitivity of between-study heterogeneity in meta-analysis: proposed metrics and empirical evaluation. Int J Epidemiol. 2008;37:1148–57.
Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, Schünemann HJ. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336:924–6.
Bermas H, Najafi N, Masafi S. A comparison between personality characteristic of the people suffering from migraine headache and personality characteristic of healthy people. Procedia Soc Behav Sci. 2011;30:1183–90.
Kearns G, Fernández-de-las-Peñas C, Brismée J-M, Gan J, Doidge J. New perspectives on dry needling following a medical model: are we screening our patients sufficiently? J Man Manip Ther. 2019;27:172–9.
None of the authors has received any funding from any commercial or non-commercial agency with regard to the preparation of this article.
Pediatric Neurorehabilitation Research Center, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
Mohammadreza Pourahmadi
& Mohammad Ali Mohseni-Bandpei
Department of Physiotherapy, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran
& Mehrdad Bahramian
University Institute of Physical Therapy, Faculty of Allied Health Sciences, University of Lahore, Lahore, Pakistan
Mohammad Ali Mohseni-Bandpei
Department of Health Sciences Education Development, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
Abbasali Keshtkar
Department of General Practice, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
Bart W. Koes
Center for Muscle and Joint Health, University of Southern Denmark, Odense, Denmark
Department of Physical Therapy, Occupational Therapy, Rehabilitation and Physical Medicine, Universidad Rey Juan Carlos, Alcorcón, Madrid, Spain
César Fernández-de-Las-Peñas
Cátedra de Investigación y Docencia en Fisioterapia: Terapia Manual y Punción Seca, Universidad Rey Juan Carlos, Alcorcón, Madrid, Spain
Bethesda Physiocare, Inc., Bethesda, MD, USA
Jan Dommerholt
Myopain Seminars, LLC, Bethesda, MD, USA
PhysioFitness, LLC, Rockville, MD, USA
Department of Physical Therapy and Rehabilitation Science, School of Medicine, University of Maryland, Baltimore, MD, USA
M.R.P., M.A.M.B., and J.D. contributed to conception and design of the project. M.R.P. wrote the manuscript. A.A.K. participated in reviewing the statistical section of the manuscript. All authors read and approved the final manuscript.
Correspondence to Mohammad Ali Mohseni-Bandpei.
We have read BioMed Central's guidance on competing interests and declare that none of the authors have any competing interests in the article.
Search strategies for PubMed/Medline (NLM), Scopus, Web of Science, and Embase®. (DOCX 24 kb)
Pourahmadi, M., Mohseni-Bandpei, M.A., Keshtkar, A. et al. Effectiveness of dry needling for improving pain and disability in adults with tension-type, cervicogenic, or migraine headaches: protocol for a systematic review. Chiropr Man Therap 27, 43 (2019). https://doi.org/10.1186/s12998-019-0266-7
Tension-type headache
RRB Group-D 23rd Sep 2018, Shift 2
The LCM of $$15x^3 y^4$$ and $$12x^2y^5$$ is:
$$25x^3y^5$$
An object of mass 1 kg has a potential energy of 2 J relative to the ground when it is at a height of: (Take g = 10 $$m/s^2$$)
Find the next number in the series:
13, 18, 28, 43, 63, ?
$$84 \div [50 - \left\{4^3 - (30 - 128 \div \overline{8 \times 4})\right\}] = ?$$
Babur, an Afghan ruler, established the Mughal dynasty in India in .........
Rahim started a journey 15 minutes late, and as a result, had to drive at a speed of 54 km/h instead of 45 km/h to reach his destination on time. What is the distance covered during the course of the journey?
In a square park having a side of 23 m, two 3 m wide paths run through the centre of the park. What will be the total cost of gravelling the paths at the rate of ₹ 1/$$dm^2$$?
Read the given statements and the following conclusions carefully and select which of the conclusions logically follow(s) from the statements.
• All emerald are gems.
• All gems are rocks.
1. All emeralds are rocks.
2. All rocks are emerald.
Only conclusion 2 follows.
No conclusions follow.
All the conclusions follow.
Who has been appointed as the first female chief justice of Pakistan?
Syeda Tahira Safdar
Fauziya Ahmed
Jameela Khan
Nabeela Hussain
How many banks were nationalised in 1969?
Earthquake Research in China 2019, Vol. 33 Issue (4): 584-595 DOI: 10.19743/j.cnki.0891-4176.201904009
YE Beng, CAO Wenzhong, HUO Yuanhang, CHEN Jia, LI Xiaobin. Comparative Study on Two Methods for Measuring Wave Velocity Change of Crustal Medium Based on Large Volume Airgun Excitation Data[J]. Earthquake Research in China, 2019, 33(4): 584-595.
Comparative Study on Two Methods for Measuring Wave Velocity Change of Crustal Medium Based on Large Volume Airgun Excitation Data
YE Beng1,2, CAO Wenzhong2, HUO Yuanhang2, CHEN Jia1, LI Xiaobin1
1. Office of the Western Yunnan Earthquake Prediction Study Area, CEA, Dali 671000, Yunnan, China;
2. University of Science and Technology of China, Hefei 230000, China
Received on February 13, 2019; revised on May 5, 2019
This project is sponsored by the Yunnan Youth Fund (2017K01) and the Assistantship Project of the Yunnan Earthquake Agency
About the Author: YE Beng, born in 1984, is an assistant research fellow at the Yunnan Earthquake Agency. His major research interests are digital seismology and large-capacity airgun active sources. E-mail: [email protected]
Abstract: This paper proposes the application of the dynamic programming method to calculate relative changes in wave velocity and compares its similarities and differences with the interference-based cross-correlation delay estimation method. The results show that: ① the wave-velocity trends obtained by the cross-correlation method and the dynamic programming method are consistent, and the result calculated with the cross-correlation delay method is considered reliable; ② compared with the cross-correlation delay method, the result of the dynamic programming method has a magnifying effect and is more sensitive to small disturbances; ③ under ideal conditions, the wave-velocity trends calculated from the P-wave and S-wave phases should be consistent. In addition, the cross-correlation delay method is used to calculate the wave-velocity change, and under appropriate conditions, a process of suspected wave-velocity recovery before the ML1.1 earthquake near the airgun source can be observed.
Key words: Artificial source; Airgun source; Cross-correlation; Wave velocity change; Dynamic programming
Earthquakes are destructive natural disasters, and every strong earthquake causes huge losses. Understanding how earthquakes originate and occur, and thereby reducing earthquake losses, is an urgent goal. The complex physical processes of earthquakes and the inaccessibility of the Earth's interior hinder our understanding of the seismogenic environment, so finding a physical quantity with a clear meaning related to earthquakes is a primary condition for predicting them (Wang Weitao et al., 2009). Wave velocity is jointly determined by the intrinsic properties of a rock and the external environment in which it sits. The mineral composition and content of a rock, its physical properties, distribution state, porosity, pore distribution, fluid properties in the pores, temperature, pressure, and pore pressure all have a great influence on wave velocity (Yang Xiaosong et al., 2003), making it a suitable physical quantity for describing changes in the properties of the medium in the seismogenic environment. A change in wave velocity reflects a change in the physical properties of the subsurface medium. Therefore, by measuring wave-velocity changes, the spatial and temporal evolution of the underground medium can be inferred, and its relationship with the occurrence of earthquakes can be studied.
Previous workers have conducted extensive research on wave-velocity changes before and after seismic events. For example, Japanese scholars, after using interferometry to account for seasonal effects, found that the near-surface shear-wave velocity varied by about 5% before and after the 2011 "3·11" earthquake in Japan (Nakata N. et al., 2011). Chinese scholars have also done numerous studies in this area. Xu Z. J. et al. (2009) found that after the three Sumatra earthquakes of 2004, 2005, and 2007, the Rayleigh-wave velocity in the Sumatra region changed significantly. Cheng Xin (2010) used a noise-correlation method to measure the Rayleigh-wave velocity on the northwest side of the Longmenshan fault after the Wenchuan earthquake and found that it decreased by 0.4%. Liu Mian et al. (2014) further confirmed that the stress on the Longmenshan fault changed correspondingly before the Lushan earthquake. Pei Shunping et al. (2019) also showed, by picking the body waves of small earthquakes, that the body-wave velocity in the area decreased to some extent before the Wenchuan earthquake. The difficulty in studying wave-velocity changes before and after large earthquakes lies in the accuracy of the measurement. Previously, reliable high-precision wave-velocity data came from rock physics experiments, and high-precision measurement of wave-velocity variation at the regional scale has always been a difficult point in geophysics (Lin Jianmin et al., 2006). Because events generated by active sources have the advantages of controllable position, high repeatability, and large detection scale (Wang Bin et al., 2016; Yang Wei et al., 2013), whereas natural earthquake locations have limited accuracy and ambient noise sources are weak in energy, active sources are often preferred. Active-source wave-velocity measurement can enhance measurement accuracy and has therefore attracted the attention of many scholars. In early periods, the active sources used by geophysicists included airguns (Yang Wei et al., 2013), ultrasound (Wang Zijie et al., 1997), piezoelectric ceramics (Niu Fenglin et al., 2008), repeated earthquakes (Zhou Longquan et al., 2007), and repeated blasting (Li Le et al., 2007). With the development of active-source technology and high-precision observation systems, the airgun has received much attention as an important source, and large-volume airgun sources have been widely used in China since the beginning of the 21st century.
Extracting wave-velocity variations from the experimental data collected with an airgun source is an important scientific approach to studying changes in the crustal medium before and after an earthquake (Chen Yong et al., 2007). Studies have suggested that there are anomalous fluctuations in wave velocity before and after earthquakes, possibly related to factors such as the solid tide, atmospheric pressure, and rainfall (Silver P.G. et al., 2007; Wang Baoshan et al., 2008).
In this study, the regional velocity fluctuation of the subsurface medium is extracted from seismic waveforms generated by the airgun active source using the cross-correlation delay method (Wang Weitao et al., 2009; Luo Guichun, 2006; Wang Bin, 2009). Compared with previous studies, this method exploits the advantages of the airgun source system, i.e., the high repeatability of the source and receiving stations, making the measurement more straightforward. In practical applications, a calculation time window must be set, and it is assumed that the amplitude of the phase within the time window is constant across events. Since the waveform of the airgun active-source signal is complex, the method may fail when the displacement is large and the amplitude changes drastically (Liu Lanfeng et al., 2015; Hale D., 2013; Hall S. A., 2006). This paper therefore introduces the dynamic programming method and determines which method is more advantageous by comparing the calculation results of the two.
1 METHODOLOGY
1.1 Cross-Correlation Delay Method
Due to the great uncertainty in the occurrence of natural earthquakes, directly measuring absolute wave-velocity changes poses numerous problems. In recent years, however, it has become possible to use artificial sources to measure wave-velocity changes; among these studies, Niu Fenglin et al. (2008), who implemented such measurements in the San Andreas Fault deep drilling program, is the most famous. Both the airgun-source excitation environment and the recorded phases are complex. Generally, when calculating the wave-velocity change, a cross-correlation-based measurement technique is used, i.e., the delay time of a certain phase is measured repeatedly over a fixed path; the measurement precision can reach sub-sample accuracy and achieve the desired level (Wang Baoshan et al., 2011). The principle is as follows:
For the active source excitation signal repeated twice in the same place, the waveform information correlation function is expressed as
$ {R_{xy}}(\tau) = \mathop {\lim }\limits_{T \to \infty } \frac{1}{T}\int_{ - \frac{T}{2}}^{\frac{T}{2}} x (t){y^*}(t - \tau){\rm{d}}t $ (1)
$ {R_{yx}}(\tau) = \mathop {\lim }\limits_{T \to \infty } \frac{1}{T}\int_{ - \frac{T}{2}}^{\frac{T}{2}} y (t){x^*}(t - \tau){\rm{d}}t $ (2)
Here, the seismic waveform record is typically a power-limited signal; the two records are represented as real signals x(t) and y(t), with a delay τS between them.
$ {R_{xy}}(\tau) = \frac{{\int_{t - \frac{T}{2}}^{t + \frac{T}{2}} x (t)y(t - \tau){\rm{d}}t}}{{\sqrt {\int_{t - \frac{T}{2}}^{t + \frac{T}{2}} {{x^2}} (t){\rm{d}}t\int_{t - \frac{T}{2}}^{t + \frac{T}{2}} {{y^2}} (t){\rm{d}}t} }} $ (3)
For the airgun active-source signal received twice by the same station, the waveforms x(t) and y(t) are highly similar but shifted in time; when the correlation coefficient reaches its maximum, the corresponding τS is the delay between the two signals.
$ {R_{xy}}\left({{\tau _{\rm{S}}}} \right) = \frac{{\int_{t - \frac{T}{2}}^{t + \frac{T}{2}} x (t)y\left({t - {\tau _{\rm{S}}}} \right){\rm{d}}t}}{{\sqrt {\int_{t - \frac{T}{2}}^{t + \frac{T}{2}} {{x^2}} (t){\rm{d}}t\int_{t - \frac{T}{2}}^{t + \frac{T}{2}} {{y^2}} (t){\rm{d}}t} }} $ (4)
In the analysis, one excitation signal is selected as the reference, and the delay between each remaining excitation signal and the reference indicates the relative travel-time difference between the two excitation events. For the same phase, the path from the active source to the station is fixed, so the travel-time difference represents the velocity variation of the medium along the path of that phase.
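A minimal Python sketch of this measurement (an illustration under simplifying assumptions — a single windowed phase and circular shifting of the window — rather than the authors' processing code):

```python
import numpy as np

def cc_delay(x, y, dt, max_lag):
    """Delay of waveform window y relative to template x from normalized
    cross-correlation, refined to sub-sample precision by parabolic
    interpolation. dt: sampling interval (s); max_lag: lags searched (samples).
    np.roll wraps the window edges, which is acceptable for a tapered window
    and lags small compared with the window length."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.empty(lags.size)
    for k, lag in enumerate(lags):
        ys = np.roll(y, lag)
        cc[k] = np.dot(x, ys) / np.sqrt(np.dot(x, x) * np.dot(ys, ys))
    i = int(np.argmax(cc))
    # Three-point parabolic interpolation around the correlation peak
    if 0 < i < cc.size - 1:
        denom = cc[i - 1] - 2 * cc[i] + cc[i + 1]
        frac = 0.5 * (cc[i - 1] - cc[i + 1]) / denom if denom != 0 else 0.0
    else:
        frac = 0.0
    return (lags[i] + frac) * dt, cc[i]   # delay in seconds, peak correlation
```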
1.2 Dynamic Programming
The dynamic programming method was first applied in the field of speech recognition (Sakoe H. et al., 1978) to identify waveform changes caused by different pronunciations of the same words. Anderson K. R. et al. (1983) first applied it to high-precision seismic imaging research; building on earlier work, Anderson discussed global path constraints and proposed that higher slope accuracy can be obtained when the slope constraint is relaxed. Keysers D. et al. (2013) introduced it into 2-D seismic imaging and found the method computationally intensive. Pishchulin L. (2010) proposed an approximate calculation that reduces the computational load. Based on Mottl's approximation method, Hall S. A. (2006) proposed dividing the optimization problem into two steps, accumulation and backtracking; he also gave the mathematical solution process and extended it to multidimensional image calculations. Liu Lanfeng et al. (2015) used Hale's method for multidimensional calculations and found that dynamic image warping is less sensitive to noise than local correlation imaging. In this framework, the dynamic programming method finds a path through a set of grid points such that the accumulated value along the path is minimized. At each step only the frame-to-frame matching distance is computed, and a time-warping function that maps the test template onto the reference template is sought, subject to constraints, so that the cumulative distance between the two templates is minimized when they match. The process is as follows:
Suppose there are two series f[i] and g[i].
$ f=f_{1}, f_{2}, \cdots \cdots, f_{i} $ (5)
$ g=g_{1}, g_{2}, \cdots \cdots, g_{j} $ (6)
To find the similarity of two series, we can build a matrix, i.e.
$ c(k)=(f(k), g(k)) $ (7)
The above formula represents the pairing of any two points in the two series.
$ \mathrm{d}(c(k))=\mathrm{d}(f(k), g(k))=\mathrm{d}(i, j)=\left\|f_{i}-g_{j}\right\| $ (8)
Wherein, the matrix element d(c(k)) represents the distance between any two points in the two series.
$ D(f, g) = \frac{1}{N}\mathop{\min}\limits_{F}\left[ {\sum\limits_{k = 1}^K {\rm{d}} (c(k)) \cdot w(k)} \right] $ (9)
where D(f, g) is the total accumulated distance; when it reaches its minimum, the two series have maximum similarity. w(k) is the weight of each step; here, w(k) = 1 for all steps.
According to the theory, the constraints are:
(1) Relationship between the previous step and the latter step
$ \begin{aligned} i(k-1) \leqslant i(k) & \text { and } j(k-1) \leqslant j(k) \\ \end{aligned} $ (10)
$ \begin{aligned} i(k)-i(k-1) & \leqslant 1 \text { and } j(k)-j(k-1) \leqslant 1 \end{aligned} $ (11)
$ c(k-1)=\min \left\{\begin{array}{c} {(i(k), j(k)-1)} \\ {(i(k)-1, j(k)-1)} \\ {(i(k)-1, j(k))} \end{array}\right\} $ (12)
(2) Boundary conditions
$ i(1)=1, j(1)=1, i(k)=I, j(k)=J $ (13)
(3) Sliding window size before and after
$ |i(k)-j(k)| \leqslant r $ (14)
Where i and j represent the lengths of f[i] and g[i], respectively. r is the sliding window set according to the actual situation.
Through the above conditions, the rate of change of the time shift is constrained so that a globally optimal time shift can be found. That is, when the minimum value of D(f, g) is reached, the relative time difference between the corresponding windows of the two series is recorded, giving the time delay.
In this paper, since seismic imaging is not required, the amount of computation is relatively small and the approximate calculation method is not used.
As the theory above shows, the dynamic programming method can tolerate a greater degree of compression or stretching of the data when the correlation is evaluated.
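A minimal sketch of this dynamic-programming alignment under the constraints of Eqs. (10)–(14) (an illustration, not the authors' code; the band half-width r corresponds to the sliding window of Eq. (14)):

```python
import numpy as np

def dp_align(f, g, r):
    """Align two series with monotonic unit steps inside a band |i - j| <= r.
    Returns the normalized minimum cumulative distance and the optimal path;
    the mean of (j - i) along the path gives an average shift in samples."""
    f = np.asarray(f, dtype=float)
    g = np.asarray(g, dtype=float)
    n, m = len(f), len(g)
    d = np.abs(np.subtract.outer(f, g))        # |f_i - g_j| distance matrix
    D = np.full((n, m), np.inf)                # accumulated distance
    D[0, 0] = d[0, 0]
    for i in range(n):
        for j in range(max(0, i - r), min(m, i + r + 1)):
            if i == 0 and j == 0:
                continue
            prev = min(D[i - 1, j] if i > 0 else np.inf,
                       D[i, j - 1] if j > 0 else np.inf,
                       D[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            D[i, j] = d[i, j] + prev
    # Backtrack from (n-1, m-1) to (0, 0) to recover the warping path
    path, (i, j) = [(n - 1, m - 1)], (n - 1, m - 1)
    while (i, j) != (0, 0):
        candidates = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min((c for c in candidates if c[0] >= 0 and c[1] >= 0),
                   key=lambda c: D[c])
        path.append((i, j))
    return D[n - 1, m - 1] / len(path), path[::-1]
```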
2 TEMPLATE INSPECTION
2.1 Building a Theoretical Model
There are many methods for simulating seismic waves in complex media (Zhang Wei, 2006). This paper uses a purely numerical, grid-based finite-difference method. Data are generated by two-dimensional forward modeling to test the two wave-velocity-change calculation methods described above. The relevant parameters of the medium model are set as follows.
According to the parameters listed in Table 1, a simple medium model of the Binchuan area is constructed. A Ricker wavelet is set as the source on the surface of the model, and 10 receivers are arranged at 1km intervals at the same level as the source. A medium-disturbing body is set 5km from the source. The body is a rectangle 3km long and 1km wide that moves between depths of 2km and 23km; it moves down 1km after each experiment. The medium model and the received data are shown in Fig. 2.
Table 1 Theoretical model related parameters
Crust thickness/km vP/(km·s-1) vS/(km·s-1) Density/(g·cm-3)
15 6.1 3.58 2.32
7 6.3 3.7 2.76
7 6.4 3.76 3.06
21 6.64 3.91 3.31
Fig. 1 (a) Corresponding point-point diagram of the cross-correlation calculation of similarity time.(b) Corresponding point-point diagram of the dynamic programming calculation of similarity time
Fig. 2 Waveforms recorded for the medium model and receivers (a) Medium model parameters and receiving station distribution. (b) All event waveforms received by Station 3; all event waveforms received by Station 9; waveforms received at all stations when the disturbing body is at a depth of 3km
After 24 simulation experiments, the data from the stations with epicentral distances of 6km and 12km are selected and analyzed. The results show that during the sinking of the disturbing body, the multiples are obviously deformed while the body waves are not. For the body waves, the influence of the disturbing body on the S-wave is greater than that on the P-wave (Fig. 2).
2.2 Analysis of Experimental Results
The cross-correlation delay method and the dynamic programming method are both used to calculate the relative change of wave velocity, with the same calculation time window in the two cases. According to ray theory, the ray paths of different phases differ; therefore, the travel-time windows of the most easily identified P- and S-phases are used to calculate the wave-velocity variation, i.e., the red and green marker bands in the first row of Fig. 3, which shows the waveforms and wave-velocity variations for the stations at epicentral distances of 6km and 12km. The second row gives the result of the cross-correlation delay method, and the third row the result of the dynamic programming method; the abscissa is time and the ordinate is the event number.
Fig. 3 The relative travel time of the P phase (red) and S phase (green) relative to their respective template phases in each event, calculated from the theoretical waveforms (a) Template waveform for all events of the station at 6km epicentral distance. (b) Template waveform for all events of the station at 12km epicentral distance. (c) and (d) Calculation results for the cross-correlation delay method. (e) and (f) Calculation results for the dynamic programming method. (c) and (e) show the relative travel-time difference between the corresponding phases of the station at 6km epicentral distance and the template phase; (d) and (f) show the relative travel-time difference between the corresponding phases of the station at 12km epicentral distance and the template phase
As shown in Fig. 3, the cross-correlation delay method and the dynamic programming method have the following similarities and differences:
(1) The result of the dynamic programming method is one order of magnitude larger than that of the cross-correlation delay method, but the basic trend of the wave-velocity change is consistent. Since the disturbing body is a rectangle 3km long and 1km wide, and the wave velocity inside the body is 9km/s, the maximum travel-time difference between two events caused by the disturbing body is 0.11s, and the maximum P-wave travel-time difference is 0.061s. The results of the dynamic programming method are too large, while those of the interference-based cross-correlation method are too small; in addition, the P-wave results are close to the theoretical values. The reason for this difference may be related to the principles of the methods, and may also be related to the different body-wave paths in different events.
(2) Compared with the cross-correlation delay method, the dynamic programming method has a more obvious amplification effect and is more sensitive to small disturbances. Correspondingly, the cross-correlation method has strong stability.
(3) For both methods, the P-wave results show almost the same trend. The S-wave results differ more between the two methods, although their basic trends are consistent; the S-wave results are more sensitive to the disturbance and show an amplification effect.
(4) The difference between the two methods increases with the epicentral distance.
In summary, the experimental results indicate that the interference-based cross-correlation method is closer to the theoretical values. This may be because the dynamic programming method, owing to its tolerance of waveform shape changes when calculating the correlation coefficient, amplifies small changes in the waveform.
3 CALCULATION RESULTS AND DISCUSSION OF EXPERIMENTAL DATA IN BINCHUAN AREA
During December 12-16, 2015, the Binchuan Seismic Signal Transmitting System continuously conducted a 4-day experiment at a rate of one excitation per hour. In this period, three small earthquakes occurred within the coverage of the seismic signal transmitting station. Among them, an ML1.1 earthquake occurred at 20:46 on December 12, 2015 with a 5km focal depth, and an ML1.4 earthquake occurred at 12:05 on December 15 with a 5km focal depth. The two earthquakes are located between the airgun source and Station 53262; the distance between this station and the airgun source is 4.5km. In this paper, the airgun signals recorded by stations close to the two earthquake epicenters are selected. According to ray theory, signal rays received by Station 53262 are determined to have passed through the earthquake occurrence area, while those received by Station 53272 are not considered to have passed through the source region.
The theoretical model experiments above indicate that the cross-correlation delay method is more reliable than the dynamic programming method. Therefore, the cross-correlation delay method is used in the following to calculate the wave velocity variation and analyze its possible correlation with the earthquakes.
It can be seen from Fig. 5 that the actual active source signal is more complicated than the theoretical signal, and the phases are difficult to identify. The reason is probably that the seismic wave triggered by the airgun source relies on the bubble oscillation process in the water. The airgun wavelet, which includes pressure pulses and bubble pulses, is much more complicated than the Ricker wavelet used in the experiment (Tang Jie et al., 2009). Therefore, three relatively stable seismic phases are selected to calculate the wave velocity change. For Station 53262, the three phases have arrival time windows of 2.5-3.0s, 3.0-3.5s and 3.5-4.2s, respectively. The seismic phases of Station 53272 are selected with reference to the corresponding phases of Station 53262. The calculated delay results are shown in Fig. 5.
Fig. 4 Locations of stations, airgun sources and earthquakes. F1 is the Chenghai fault; F2 is the Honghe fault
Fig. 5 Relative arrival time changes before and after the earthquakes. (a) is the template waveforms of Station 53262. (b), (c) and (d) are the wave-velocity changes of the three phases of Station 53262. (e), (f) and (g) are the wave-speed changes of the corresponding phases of Station 53272. The numbers in the figure indicate the sequence number of the events, where adjacent events differ by 1 hour. Red and green colors indicate the relative changes in delay
First, according to the theoretical model experiment, for a rectangular disturbing body 3km long and 1km wide with a P-wave velocity of 9km/s inside it, the maximum delay that may occur along the ray path is 0.01s, much larger than the current measurement result. Usually, a travel-time change on the order of 10⁻⁵s can be produced even by a magnitude 1.0 earthquake (Zhou Qingyun et al., 2018), which is significantly less than the measurement results herein. This situation may be related to the excessive setting of parameters such as the disturbance scale and the medium change in the theoretical model. There are also other interference factors, including the solid tide.
Furthermore, there are obvious daily variations in wave velocity changes, which is consistent with previous research results.
Again, from the calculation results of Station 53262, it can be seen that the wave speed turns from falling to rising 5 hours before the ML1.1 earthquake, reaching its maximum change of 0.02% when the earthquake occurs. This phenomenon is more obvious before the ML1.4 earthquake: the first two phases show a maximum time delay 5 hours before the earthquake, with relative time delays of 9.50×10⁻⁴s and 8.07×10⁻⁴s respectively, while there is no obvious change in the third phase. This may be because the ray path of the third phase does not pass through the area affected by the ML1.4 earthquake. Also, the adjacent Station 53272 does not show any drop before the ML1.1 earthquake. The three phases received at Station 53272 before the ML1.4 earthquake show the wave speed falling and then rebounding, but the change range is smaller than at Station 53262. As shown in Fig. 4, the distance from Station 53262 to the airgun source is 4.5km, the distance from Station 53272 to the airgun source is 5.2km, and the two stations are 3.8km apart. Besides, the two earthquakes lie on the line connecting the airgun source and Station 53262 and are farther away from Station 53272. The magnitudes of the two earthquakes differ by 0.3, which is a possible reason why the wave velocity at Station 53272 changed before the ML1.4 earthquake but not before the ML1.1 earthquake.
Finally, when using cross-correlation interferometry to measure the wave velocity, the main factors affecting the measurement results are the water level of the reservoir where the airgun source is located, the local temperature, the performance of the observation system, the calculation error, and the solid tide. During the roughly 170 hours of this experiment, the water level of the reservoir was 22.67m at 08:00:00 on December 12, 2015 and 22.76m at 20:00:00 on December 18, 2015; in this period the water level first rose and then fell, with a maximum of 22.83m, and the change was stable with a small amplitude. Zhou Qingyun et al. (2018) calculated that a time delay of 3×10⁻⁴s can be caused by a 1m change of the reservoir water level. During the experiment, the temperature in the Binchuan area was stable, and the relationship between temperature and travel time is about 0.5μs/℃ (Niu Fenglin et al., 2008), much smaller than the current measurement results. If the performance of the observation system and the calculation error are not considered, the main interference factor in this experiment should be the solid tide. According to Wang Weitao et al. (2017), the change in travel time responds strongly to the solid tide, which was confirmed by this experiment. However, the trend of the wave velocity before the two earthquakes is not consistent with the trend of the solid tide. To some extent, therefore, the measured wave velocity changes before the two earthquakes are reliable.
In this paper, focusing on the needs of dynamic monitoring of underground media, the large-capacity airgun source was taken as the research object, and the method of calculating the relative change of wave velocity using an artificial source was analyzed. Consequently, several new insights were obtained:
(1) In theory, the dynamic programming method can tolerate a greater degree of deformation of the waveform than the interference method.
(2) In the actual calculation with airgun source data, the wave velocity variation obtained by the dynamic programming method is an order of magnitude larger than that obtained by the cross-correlation delay method, but the trend of the wave velocity change is basically the same, which indicates that the interference-based cross-correlation delay method is more reliable for wave velocity measurement.
(3) Under appropriate conditions, the Binchuan active source airgun signal is able to capture the process of small wave velocity reduction and recovery associated with an earthquake as small as ML1.1.
Anderson K. R., Gaby J. E. Dynamic waveform matching[J]. Information Sciences, 1983, 31(3): 221-242. DOI:10.1016/0020-0255(83)90054-3
Chen Yong, Wang Baoshan, Ge Hongkui, Xu Ping, Zhang Wei. Proposal of transmitted seismic stations[J]. Advances in Earth Science, 2007, 22(5): 441-446 (in Chinese with English abstract).
Cheng Xin, Niu Fenglin, Wang Baoshan. Coseismic velocity change in the rupture zone of the 2008 MW 7.9 Wenchuan earthquake observed from ambient seismic noise[J]. Bulletin of the Seismological Society of America, 2010, 100(5B): 2539-2550. DOI:10.1785/0120090329
Hale D. Dynamic warping of seismic images[J]. Geophysics, 2013, 78(2): S105-S115. DOI:10.1190/geo2012-0327.1
Hall S.A. A methodology for 7D warping and deformation monitoring using time-lapse seismic data[J]. Geophysics, 2006, 71(4): O21-O31.
Keysers D., Unger W. Elastic image matching is NP-complete[J]. Pattern Recognition Letters, 2003, 24(1/3): 445-453.
Li Le, Chen Qifu, Cheng Xin, Niu Fenglin. Spatial clustering and repeating of seismic events observed along the 1976 Tangshan fault, north China[J]. Geophysical Research Letters, 2007, 34(23): L23309. DOI:10.1029/2007GL031594
Lin Jianmin, Wang Baoshan, Ge Hongkui, Chen Qifu, Chen Yong. Doublet and its potential application in active exploration[J]. Earthquake Research in China, 2006, 22(1): 1-9 (in Chinese with English abstract).
Liu Lanfeng, Wei Xiucheng, Huang Zhongyu, Wang Lu, Li Lihua. PP- and PS-wave automatic time match based on dynamic image warp[J]. Oil Geophysical Prospecting, 2015, 50(4): 626-632 (in Chinese with English abstract).
Liu Mian, Luo Gang, Wang Hui. The 2013 Lushan Earthquake in China tests hazard assessments[J]. Seismological Research Letters, 2014, 85(1): 40-43. DOI:10.1785/0220130117
Luo Guichun. Precise Measurement of Seismic Velocity and Velocity Variation by Correlated Detection[D]. Beijing: Institute of Earthquake Science, CEA, 2006 (in Chinese with English abstract).
Nakata N., Snieder R. Near-surface weakening in Japan after the 2011 Tohoku-Oki earthquake[J]. Geophysical Research Letters, 2011, 38(17): L17302.
Niu Fenglin, Silver P. G., Daley T. M., Cheng Xin, Majer E. L. Preseismic velocity changes observed from active source monitoring at the Parkfield SAFOD drill site[J]. Nature, 2008, 454(7201): 204-208. DOI:10.1038/nature07111
Pei Shunping, Niu Fenglin, Ben-Zion Y., Sun Quan, Liu Yanbing, Xue Xiaotian, Su Jinrong, Shao Zhigang. Seismic velocity reduction and accelerated recovery due to earthquakes on the Longmenshan fault[J]. Nature Geoscience, 2019, 12(5): 387-392. DOI:10.1038/s41561-019-0347-1
Pishchulin L. Matching Algorithms for Image Recognition[D]. Aachen: Rheinisch-Westfälische Technische Hochschule Aachen, 2010.
Sakoe H., Chiba S. Dynamic programming algorithm optimization for spoken word recognition[J]. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1978, 26(1): 43-49. DOI:10.1109/TASSP.1978.1163055
Silver P. G., Daley T. M., Niu Fenglin, Majer E. L. Active source monitoring of cross-well seismic travel time for stress-induced changes[J]. Bulletin of the Seismological Society of America, 2007, 97(1B): 281-293. DOI:10.1785/0120060120
Tang Jie, Wang Baoshan, Ge Hongkui, Chen Yong. Study of experiment and simulation of large volume air-gun in deep structures exploration[J]. Earthquake Research in China, 2009, 25(1): 1-10 (in Chinese with English abstract).
Wang Baoshan, Zhu Ping, Chen Yong, Niu Fenglin, Wang Bin. Continuous subsurface velocity measurement with coda wave interferometry[J]. Journal of Geophysical Research, 2008, 113(B12): B12313. DOI:10.1029/2007JB005023
Wang Baoshan, Wang Weitao, Ge Hongkui, Xu Ping, Wang Bin. Monitoring subsurface changes with active sources[J]. Advances in Earth Science, 2011, 26(3): 249-256 (in Chinese with English abstract).
Wang Bin. The Experiment Study on Measurement of Seismic Velocity Variation Using Different Seismic Sources[D]. Hefei: University of Science and Technology of China, 2009 (in Chinese with English abstract).
Wang Bin, Li Xiaobin, Liu Zifeng, Yang Jun, Yang Runhai, Wang Baoshan, Yang Wei. The source and observation system of Binchuan earthquake signal transmitting seismic station and its preliminary observation results[J]. Earthquake Research in China, 2016, 32(2): 193-201 (in Chinese with English abstract).
Wang Weitao. Using Active Source to Study the Velocity Character of Media in Regional Scale[D]. Hefei: University of Science and Technology of China, 2009 (in Chinese with English abstract).
Wang Weitao, Wang Baoshan, Ge Hongkui, Chen Yong, Yuan Songyong, Yang Wei, Li Yijin. Using active source to monitor velocity variation in shallow sediment caused by the Wenchuan earthquake[J]. Earthquake Research in China, 2009, 25(3): 223-233 (in Chinese with English abstract).
Wang Weitao, Wang Baoshan, Jiang Shengmiao, Hu Jiupeng, Zhang Yuansheng. A perspective review of seismological investigation on the crust at regional scale using the active airgun source[J]. Journal of Seismological Research, 2017, 40(4): 514-524 (in Chinese with English abstract). DOI:10.3969/j.issn.1000-0666.2017.04.002
Wang Zijie, Zhao Pengda. Primary results in studying geologic structural effect on earthquake ground motion with supersonic modelling technique[J]. Geoscience (Journal of Graduate School, China University of Geosciences), 1997(3): 339, 353 (in Chinese with English abstract).
Xu Z. J., Song Xiaodong. Temporal changes of surface wave velocity associated with major Sumatra earthquakes from ambient noise correlation[J]. Proceedings of the National Academy of Sciences of the United States of America, 2009, 106(34): 14207-14212. DOI:10.1073/pnas.0901164106
Yang Xiaosong, Ma Jin. Continental lithosphere decoupling: implication for block movement[J]. Earth Science Frontiers, 2003, 10(S1): 240-247 (in Chinese with English abstract).
Yang Wei, Wang Baoshan, Ge Hongkui, Wang Weitao, Chen Yong. The active monitoring system with large volume airgun source and experiment[J]. Earthquake Research in China, 2013, 29(4): 399-410 (in Chinese with English abstract).
Zhang Wei. Finite Difference Seismic Wave Modelling in 3D Heterogeneous Media with Surface Topography and its Implementation in Strong Ground Motion Study[D]. Beijing: Peking University, 2006 (in Chinese with English abstract).
Zhou Longquan, Liu Guiping, Ma Hongsheng, Hua Wei. Monitoring crustal media variation by using repeating earthquakes[J]. Earthquake, 2007, 27(3): 1-9 (in Chinese with English abstract).
Zhou Qingyun, Liu Zifeng, He Suge. Influence factors of velocity variation of airgun seismic waves and travel time changes related to earthquakes[J]. Earthquake, 2018, 38(3): 144-157 (in Chinese with English abstract).
[Received] 2019-02-13; [Revised] 2019-05-05 | CommonCrawl
mersenneforum.org (https://www.mersenneforum.org/index.php)
- Viliam Furik (https://www.mersenneforum.org/forumdisplay.php?f=174)
- - Prime representing constant program for calculation of the digits (https://www.mersenneforum.org/showthread.php?t=26455)
Viliam Furik 2021-01-28 23:57
Prime representing constant program for calculation of the digits
Hello, not long ago I posted in the y-cruncher subforum about a program for the Prime representing constant (PRC). Since then I have developed quite a fast program, thanks to the help of casevh, author of the gmpy2 library, and successfully computed 1 billion (10[SUP]9[/SUP]) digits of PRC. I have also finally managed to figure out how to parallelize the algorithm, but that led me to another problem...
Next big milestone after 10[SUP]9[/SUP] digits is 10[SUP]12[/SUP]. That requires 1 TB of disk space to contain all digits in a text file (or less if any compression is used), computing through primes up to about 2.5 * 10[SUP]12[/SUP], and basically the same amount of RAM (that problem has a solution - writing the intermediary numbers to disk and reading them from there, as y-cruncher does, or at least how I understand it does it), but most importantly, it needs to be able to sieve out the primes themselves. Storing 2.5 * 10[SUP]12[/SUP] bits in RAM would require about 300 GB of RAM, which is too much for what I seek. Sieving them in a file would bring a lot of hassle with making a procedure that could do such a thing and another that can read the actual values from there.
These might not be extremely hard problems, but there may be easier ways to do this. So I would like to ask whether anyone can help with some idea on how to efficiently handle such gigantic numbers and amounts of data.
BTW, I am also thinking about making it into a GPU program; it could play nicely with GPUs that perform well at TF (basically INT32), as the primes are not that big compared to the blob of prime jelly that is being computed. If somebody has some ideas about this too, I will be grateful for any advice.
BTW2: Are there any volunteers who would like to test the upcoming C version of the code? It will have the multi-core computation implemented in it.
kruoli 2021-01-29 09:53
What are the requirements for the way you want to store the primes (either in RAM or on disk)? If none, you should be able to cut the memory requirement roughly in half, maybe even more. There are nearly [URL="http://www.wolframalpha.com/input/?i=primes+up+to+2.5e12"]91 billion primes[/URL] up to 2.5[$]\cdot[/$]10[SUP]12[/SUP]. If you would store the differences between primes like [URL="http://en.wikipedia.org/wiki/Variable-length_quantity"]this[/URL] instead of their exact values, it will save a lot of memory, but you can only access a single item by reading all values before it, so random access is in O(n). One could optimize this a lot by having intermediate values every so often, which might slightly increase memory demands but will improve random access greatly.
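To illustrate the gap-plus-checkpoints idea in Python (names and the checkpoint spacing are arbitrary choices, so treat this as a sketch):

[CODE]def encode_gaps(primes, ck=1024):
    """Store primes as VLQ-encoded gaps, with a checkpoint (prime value,
    byte offset just past its gap) every `ck` primes for random access."""
    buf, cps, prev = bytearray(), [], 0
    for i, p in enumerate(primes):
        gap = p - prev
        prev = p
        while gap >= 0x80:                   # 7 payload bits per byte,
            buf.append(0x80 | (gap & 0x7F))  # high bit set = more follows
            gap >>= 7
        buf.append(gap)
        if i % ck == 0:
            cps.append((p, len(buf)))
    return bytes(buf), cps

def nth_prime(n, buf, cps, ck=1024):
    """Decode the n-th prime (0-based), scanning from the nearest checkpoint."""
    c = n // ck
    value, off = cps[c]
    for _ in range(n - c * ck):
        gap, shift = 0, 0
        while True:
            b = buf[off]; off += 1
            gap |= (b & 0x7F) << shift
            shift += 7
            if b < 0x80:
                break
        value += gap
    return value[/CODE]

With ck = 1024, a random access costs at most 1024 gap decodes instead of a scan from the start.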
[QUOTE=Viliam Furik;570370]BTW2: Are there any volunteers who would like to test the upcoming C version of the code? It will have the multi-core computation implemented in it.[/QUOTE]
Yes. :smile:
Batalov 2021-01-29 11:16
[QUOTE=Viliam Furik;570370]Hello, not so long ago I have posted in the y-cruncher subforum about a program for the Prime generating constant (PGC). Since then I have developed quite a fast program, thanks to the help of casevh, author of gmpy2 library, and successfully computed 1 billion (10[SUP]9[/SUP]) digits of PGC.[/QUOTE]
...but then you can generate a 1 billion digits prime, can you?
Which 'Prime generating constant' are you calculating? Mills'?
Or are you calculating Euler–Mascheroni gamma?
In the last thread he mentioned ([URL="http://mersenneforum.org/showthread.php?t=26298"]this one[/URL]), he wanted to compute the constant described in [URL="http://arxiv.org/pdf/2010.15882.pdf"]this[/URL] paper.
Dr Sardonicus 2021-01-29 12:24
Yes, I believe the original "Prime [i][b]representing[/b][/i] constant" was a better word choice.
Viliam Furik

[QUOTE=Dr Sardonicus;570400]Yes, I believe the original "Prime [i][b]representing[/b][/i] constant" was a better word choice.[/QUOTE]
Oopsie... My bad, I confused the name. Will correct it.
@kruoli: As of now, the program uses a bit array for the primes, which it iterates over. The parallelized algorithm relies on some way to store the primes; it chops the list of primes into roughly equal pieces (whether by the length of the range or by the prime count in that range - about the same for these quantities of primes), saves the actual value of the last prime in each range (one range per worker), and when the actual values are needed in the computation, it calculates them from the position in the range, the range number (used for finding the last prime of the previous range) and, if running from a savefile, also the last prime used in the computation of the savefile.
BTW, I forgot to mention that I need a way to efficiently sieve them, too. Sieving is done quite easily in RAM, but sieving on disk is a bit more complicated, albeit not too complicated. To be clear, I am not thinking about using methods other than Eratosthenes (unless they are more efficient), but the storage of the bits representing the integers is the tricky part when you need a lot of them.
Storing only the prime gaps seems to be a lot better, but is there a better way to store the sieving data?
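A minimal Python sketch of a segmented sieve (segment size chosen arbitrarily; illustrative only) keeps just the base primes up to sqrt(N) plus one segment bitmap in memory:

[CODE]import numpy as np

def segmented_primes(limit, segment_size=1 << 22):
    """Yield all primes below `limit`, holding only the base primes
    up to sqrt(limit) and one segment bitmap in RAM at a time."""
    root = int(limit ** 0.5) + 1
    base = np.ones(root, dtype=bool); base[:2] = False
    for p in range(2, int(root ** 0.5) + 1):
        if base[p]:
            base[p * p::p] = False
    base_primes = np.flatnonzero(base)
    yield from base_primes.tolist()          # the primes below sqrt(limit)
    for lo in range(root, limit, segment_size):
        hi = min(lo + segment_size, limit)
        seg = np.ones(hi - lo, dtype=bool)
        for p in base_primes:
            start = max(p * p, (lo + p - 1) // p * p)
            seg[start - lo::p] = False       # cross out multiples of p
        for q in np.flatnonzero(seg):
            yield int(lo + q)[/CODE]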
@Batalov: As kruoli kindly explained, I am calculating the Prime representing constant, which can only generate (thus my confusion in the name) primes which were used in its computation, using the recursive formula. So 1 billion digits of the constant only suffice for primes under about 2 302 586 000.
Using the formula to generate the primes back is perhaps the only way to confirm the correctness of the digits, apart from computing them with some different method (there is another formula, but it is equivalent to the one mentioned in the paper - it is basically the same, only the one in the paper has its terms grouped, and thus has fewer terms for the same amount of digits, but the terms take longer to compute, so it evens out).
kruoli

Sieving itself should not be a problem. You should use primesieve if you are not eager to implement everything yourself:
[CODE]$ /usr/bin/time -v primesieve 2500000000000
Sieve size = 256 KiB
Threads = 16
Seconds: 62.344
Primes: 90882915772
Command being timed: "primesieve 2500000000000"
User time (seconds): 992.74
System time (seconds): 0.12
Percent of CPU this job got: 1592%
Elapsed (wall clock) time (h:mm:ss or m:ss): 1:02.35
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
[B]Maximum resident set size (kbytes): 47436[/B]
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 6
Minor (reclaiming a frame) page faults: 15683
Voluntary context switches: 210
Involuntary context switches: 4092
Swaps: 0
File system inputs: 928
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0[/CODE]
That's not even 50 MB. The space requirement for that command was in [$]\mathcal{O}(\sqrt n)[/$]. If you use the [URL="http://primesieve.org/api/"]API[/URL] of primesieve and do execute a command that is not only counting, it will of course take a bit longer, but not in a way that would cause delays.
So my next question would be: Do you really need to store the primes? According to your last thread and the paper, you will only use one prime at a time, in order. Only in the division step in the end, you use the primorial up to your largest used prime, and you do not need all primes stored for that. [STRIKE]For an efficient implementation, you'd have to store in the order of [$]\mathcal{O}(\log p_\text{count})[/$] primes at any given time.[/STRIKE] The problem would be the result (divisor) getting large quickly.
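If you end up using the primesieve bindings for Python (package name primesieve; I believe the iterator API below is current, but check the docs), consuming one prime at a time looks like this:

[CODE]# pip install primesieve
import primesieve

it = primesieve.Iterator()   # streams primes in order, small memory footprint
p = it.next_prime()          # 2
total = 0.0
while p <= 10**7:            # toy bound; the real run would go to 2.5 * 10**12
    total += 1.0 / p         # placeholder for "use one prime at a time, in order"
    p = it.next_prime()
print(total)[/CODE]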
[QUOTE=kruoli;570407]So my next question would be: Do you really need to store the primes?[/QUOTE]
[QUOTE=Viliam Furik;570401]Oopsie... My bad, I confused the name. Will correct it.[/QUOTE]:tu:
Blogging, as it should be! :grin:
Quick question: does GMP, when used from C on Windows, have to be compiled or something? It seems to me it does, as there is only source code available on the website. I assume the answer will be the same for the primesieve library. | CommonCrawl
Cross section shape optimization of wire strands subjected to purely tensile loads using a reduced helical model
Francesco Maria Filotto ORCID: orcid.org/0000-0003-2173-35641,
Falk Runkel2 &
Gerald Kress3
This paper introduces a shape optimization of wire strands subjected to tensile loads. The structural analysis relies on a recently developed reduced helical finite element model characterized by extreme computational efficiency while accounting for complex geometries of the wires. The model is extended to consider interactions between components, and its applicability is demonstrated by comparison with analytical and finite element models. The reduced model is exploited in a design optimization identifying the optimal shape of a 1 + 6 strand by means of a genetic algorithm. A novel geometrical parametrization is applied, and different objectives, such as stress concentration and area minimization, and constraints, corresponding to operational limitations and requirements, are analyzed. The optimal shape is finally identified and its performance improvements are compared and discussed against the reference strand. Operational benefits include lower stress concentration and higher load at plastification initiation.
Wire ropes are basic structural elements in engineering and construction. Thanks to their complex hierarchical composition, wire ropes design permits to achieve a response tailored for specific applications and load cases. They are used as a structural link in bridges and cranes, for lifting objects or as tracks in cable-ways. They offer high longitudinal stiffness, while keeping a low transversal bending stiffness. This allows for easy storage, movement and deployment, thanks to the use of drums, sheaves and pulleys [1].
Stranded rope and its constituents, adapted from [22]. This work focuses on the core strand
a Examples of stationary ropes (adapted from [5]). bChords Bridge, a cable-stayed bridge in Jerusalem [23]. c Kolbelco CKE1800, a crawler crane with the main boom suspended by ropes [24]
Even though many different designs have been proposed throughout history [2, 3], the general composition of a rope (see Fig. 1) has not changed. The basic element is the helical wire, arranged in a bundle to form a strand. The obtained strands can themselves be helically arranged to obtain the stranded rope. Compared to fibre materials used in fibre ropes (that have been in use for millennia [2]), the use of structural materials allows for an increased load carrying capability.
The mining field in the 19th century was a driving industry for the development of wire ropes: the aim was to replace the metal chains employed at the time, characterized by low damage tolerance, with an element of comparable structural response. This is achieved by rope designs whose multiple-load-path architecture provides time for servicing and replacing a damaged component, avoiding catastrophic failure [4].
Out of the vast number of rope applications, this work focuses on those where loads are mainly tensile, in which case stationary ropes are employed [5]. They are utilized, for example, in cable-stayed bridges, in cable-ways and in cranes as guy lines for boom suspension (see Fig. 2). They can also be found in civil constructions in the form of pre-stressed concrete strands [6]. Stationary ropes are usually multi-layered strands, therefore having each component in a single helical configuration, as opposed to the stranded ropes described above.
Ropes have many geometrical parameters defining the overall response of the strand [5]. While at the level of the strand and the rope there are numerous combinations of parameters (number of wires, layout, lay-factor), the most basic component, the wire, most often offers only its diameter as a degree of freedom, due to the ease and lower cost of manufacturing circular wires. As a consequence, strands present local stress concentrations due to the radial pressure concentrated at the wire contact locations. By departing from the geometrical constraint of round wires, the aforementioned design drawbacks are mitigated, permitting optimized overall operational characteristics.
For the case of tensile-dominated applications where single strands are used, the focus will be on providing examples of how the geometry can be optimized for minimum weight or minimum stress concentration, while satisfying application-dependent requirements such as limit load, axial stiffness, axial load at plastification or bending stiffness. The 1 + 6 strand is a basic, yet among the most used, strand designs. It has been chosen as the reference strand in this work for its relatively straightforward geometry. Accordingly, a novel geometrical parametrization for 1 + 6 wire strands is proposed.
For the analysis to come, a reduced model [7] is employed. Rope theory literature has been developed since the 1860s and a plethora of models have been proposed. The complexity of analytical models for wire strands spans from the simple assumption of helical springs in parallel to a more refined curved beam theory, mainly based on Love's theory [8] or on the general theory of rods by Green and Laws [9], accounting for bending and torsion. Besides, finite element (FE) models have also been employed to model more complex phenomena such as residual stress after manufacturing [10], contact and friction [11] and electromagnetic interactions within power cables [12]. Reduced helical models [7, 13,14,15], introduced in more recent years, utilize the concept of helical symmetry to reduce the computational domain and have been successfully used in various fields. The computational efficiency of reduced models and their ability to model complex geometries permit challenging the limitation of purely circular wires and proposing an alternative approach to strand design, by means of a shape optimization. In order to permit such a procedure, part of the considered domain needs to be modified to allow for the contact definition.
This paper is structured as follows: "Modeling techniques comparison" presents the modeling technique used and how it stands against alternative techniques; in "Optimization procedure" section the optimization framework is introduced and the selection of objectives and constraints is discussed; "Results" section contains the discussion on the performance benefits of the optimal shape compared to the reference and a sensitivity analysis carried on the resulting strand; finally, conclusions are drawn in "Conclusions" section.
Modeling techniques comparison
Reduced helical model
When a helical structure is deformed uniformly along its entire length, the state variables (strains and stresses) are uniform along helical lines. Its overall response can be exactly analysed by taking a representative two-dimensional surface. This is a property called translational invariance [14], and it is exploited to derive a reduced finite element model [7] whose formulation is similar in idea to the generalized plane strain elements [16]. Other models have been proposed that use this same property, such as those by Zubov [17], Treyssede [13], Frikha et al. [14] and Karathanasopoulos and Kress [15]. Differently from the aforementioned models, the one used in this work has been derived within the finite strain framework, therefore being able to better describe the wire motions. Additionally, it was developed for complex geometries and interactions on the transverse cross section.
Axial response of the wire strand 1 + 6. Geometrical parameters are listed in Table 3 and material properties in Table 2
The reduced model permits a complex geometry while keeping a low number of elements. This allows fine meshes, so that local strains and stresses can be studied without the need of a volumetric FE model and very computationally expensive simulations. On the other hand, it is limited by its derivation assumption: only uniform load cases can be studied, such as axial elongation and twist, radial compaction and thermal expansion [15]. Accordingly, any load case for which each transverse cross section of the structure behaves identically can be considered.
Requirements on modeling approaches
For our optimization, four requirements must be satisfied by the chosen modeling technique. An analytical model as found in Feyrer [5] and two three-dimensional FE models (based either on solid volumetric or on beam elements) are compared to the reduced model.
Axial response As the axial elongation is the load case to optimize for, our model needs to be able to fully capture the interaction between wires, including stiffening due to contact among wires and material plasticity. Figure 3 shows how all models are able to predict the overall axial behaviour.
Computational efficiency A main focus when approaching an optimization routine is to ensure that the core simulation, which computes the objective value, is as efficient as possible, as it is run many times. Therefore, Fig. 4 shows a comparison between solution times to quantify the speed of each model. Apart from the analytical model, the beam and reduced models are comparable in solving the analysis, with the solid FE model being significantly slower.
Complex geometries With the goal of setting up a shape optimization, the chosen model will need to be able to fully describe the geometry of the strand (and in particular of the outer wire). Solid and reduced FE models are the only ones that satisfy this requirement, because both the analytical and the beam FE models rely on a narrow database of cross sections for contact definition.
Solid continuum elements (left), beam elements (center) and reduced elements (right), with corresponding computational times for the simulations shown in Fig. 3
Table 1 Requirements met by each model
Bending response A calculation of the bending response is also required in the optimization routine, to constrain the strand flexibility. Solid and beam FE models and analytical models can directly describe such a load case. The reduced model, on the other hand, is inherently incapable of modeling bending, because under bending the response of a transverse slice depends on its axial location, violating the translational invariance the model is built on.
Table 1 highlights how the reduced model stands out against the alternative modeling approaches.
Extension of the reduced helical model to account for contact
Because the influence of contact between wires is important to fully characterize the stress state within the strand, an extension of the model found in [7] was required (Fig. 5b). The model was originally developed for the analysis of a single constituent, either free helices or solid regions (e.g. a solid cylinder with inclusions). Strands, instead, have distinct components that are free to rotate and move relative to each other. Therefore, an interaction law needs to be introduced. Instead of simply merging the contact points [15], the current work uses a contact law with exponential pressure-overclosure behaviour.
In order to use the contact definitions already available in Abaqus, a geometrical expedient is introduced. Since each component is locally planar and there is a relative out-of-plane rotation, an auxiliary master surface must be defined to enable a three-dimensional contact. This allows the interaction to represent a surface-to-surface contact rather than a line-to-line one, which would eventually create an artificial, localized kink. This surface is obtained by extruding the nodes of the inner core perpendicularly to the reference plane. These nodes are then connected by shell elements and rigidly constrained to the corresponding parent nodes to guarantee the helical symmetry. Figure 5b shows such a contact surface, with the nodes connected to the corresponding master node lying on the reference cross section highlighted.
a Cross section of the 1 + 6 strand, with the reduced model domain highlighted. b Auxiliary surface for contact definition. Nodal degrees of freedom are fully constrained to the corresponding node lying on the original cross section by constraint equations. c Extruded strand, corresponding to the cross section found in a
Approximation of the bending stiffness
Results by Foti [18] and stiffness values computed analytically
As suggested in the work by Foti [18], the bending of a strand exhibits two distinct regimes.
Stick phase, where the bending curvature is low enough that the friction between components prevents them from sliding relative to each other. All wires form a cross section with connected elements, associated with a high bending stiffness.
Slip phase, curvatures are high enough that friction can be ignored and each component is assumed to freely bend about its neutral plane, determining an overall reduction in bending stiffness.
The two values of stiffnesses, both in stick and in slip phase, are well approximated by the bending stiffness of the straight rod having the same transverse cross section.
$$\begin{aligned} K_{stick} = E_{0} I_{0} + \sum \limits _{i=1}^6 E_{i} {\tilde{I}}_{i} \end{aligned}$$
$$\begin{aligned} K_{slip} = E_{0} I_{0} + \sum \limits _{i=1}^6 E_{i} I_{i} \end{aligned}$$
where E is the Young modulus, I is the moment of inertia of each wire with respect to its own neutral plane and \({\tilde{I}}\) is the moment of inertia with respect to the strand neutral plane. Subscript 0 refers to the core wire, while values of \(i>0\) refer to the outer wires (\(i=1 \cdots 6\)).
This approximation allows us to consider bending without involving more complex models. Figure 6 shows how the analytically computed stiffness values match the results obtained by Foti [18]. However, the ability to characterize the transition between the two phases (that depends on the friction coefficient \(\mu \)) is not maintained.
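As an illustration, the two stiffness bounds of Eqs. (1) and (2) can be evaluated for circular wires with a few lines of code. The sketch below is not part of the published model: it assumes circular cross sections, neglects the helix-angle correction to the transported section, and uses an assumed steel modulus of 200 GPa, since the values of Table 2 are not reproduced here.

import numpy as np

def stick_slip_stiffness(E0, r0, E_out, r_out, r_helix, n=6):
    # K_stick and K_slip (Eqs. 1-2) for a core wire plus n equally
    # spaced circular outer wires whose centers lie on a circle of
    # radius r_helix; sum(sin^2) over n equal spacings equals n/2.
    I0 = np.pi * r0**4 / 4.0           # core wire, own neutral plane
    Ii = np.pi * r_out**4 / 4.0        # outer wire, own neutral plane
    Ai = np.pi * r_out**2
    I_tilde = n * (Ii + Ai * r_helix**2 / 2.0)   # parallel-axis terms
    K_slip = E0 * I0 + n * E_out * Ii
    K_stick = E0 * I0 + E_out * I_tilde
    return K_stick, K_slip

# Reference geometry: core diameter 2.50 mm, outer diameter 2.25 mm,
# helix radius (2.50 + 2.25)/2 mm; E = 200 GPa is an assumption.
K_st, K_sl = stick_slip_stiffness(200e9, 1.25e-3, 200e9, 1.125e-3, 2.375e-3)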
The axial force applied to the strand also influences the bending response [18], due to the increased friction at the contact between wires when the strand is elongated. Considering the fact that, for the applications considered in this work, axial forces are high and curvatures are low, the stick phase stiffness \(K_{stick}\) will be considered.
Material model
Throughout all simulations presented here the material model is an elastic-ideally plastic constitutive law. Figure 7 shows the stress-strain curve corresponding to the material parameters as in Table 2. This choice of constitutive law allows to model failure by a limit load analysis. The material of the analysed structure is replaced by an ideally plastic material with lower yield stress. This makes the limit load, i.e. the maximum load the structure can sustain before plastic collapse, representative of the breaking load.
Stress–strain curve of a linear elastic-ideally plastic material
Table 2 Material properties used for both the reference and the optimized strand; for the limit load analysis \(H=0.0\) GPa
Optimization procedure
The aim is to obtain wire shapes which reduce local stress concentration and therefore reduce plastification and fatigue damage, thereby extending lifetime. In addition, lightweight design increases structural efficiency and decreases material costs. As a result, it has been chosen to consider two objectives.
The first is stress concentration minimization, defined as
$$\begin{aligned} \gamma = \max \left( \frac{\sigma _{VM}^{max}}{\sigma _{VM}^n}\right) \end{aligned}$$
where \(\sigma _{VM}^{max}\) is the largest Von Mises stress acting in the cross section (located at the wire-to-wire contact point) and \(\sigma _{VM}^n\) is the nominal value at the center of the core wire, i.e. the tensile stress occurring as a result of the applied deformation. Because of the nonlinear local response, the stress concentration at the contact point varies with the applied load history. In particular, it reaches its maximum value \(\gamma \) at the initiation of plastification (Fig. 10).
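In code, Eq. (3) amounts to tracking, over the load history, the ratio between the largest Von Mises stress in the cross section and the nominal stress. A minimal sketch follows; the extraction of the stress fields from the FE output is assumed to happen elsewhere.

import numpy as np

def stress_concentration(vm_fields, vm_nominals):
    # vm_fields: one array of Von Mises stresses per load increment;
    # vm_nominals: nominal stress at the core-wire center, per increment.
    return max(float(np.max(vm)) / nom
               for vm, nom in zip(vm_fields, vm_nominals))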
The second objective is area minimization, which, at constant lay-length, directly translates into weight reduction. The area is taken as the effective area covered by the material in the transverse cross section. Due to the choice of the ideally plastic constitutive law, when a limit load is prescribed, the minimum value of the area is bounded by the yield stress.
Optimization procedures need constraints that prevent infeasible solutions from being accepted. For instance, simplifying a rope structure to a single isotropic rod would prevent any stress concentration, therefore minimizing the objective. In such a case though, the rope would lose the favourable bending flexibility and damage tolerance, thereby not fulfilling fundamental requirements of rope structures. Such characteristics are main factors in the selection of ropes for an application and need to be maintained. While the damage tolerance is preserved by considering solely a shape optimization (which keeps the multi-component nature of the strand, contrary to a topology optimization), the bending stiffness is taken as an inequality constraint, where the upper bound is defined by the bending stiffness \(K_{stick}\) of the reference strand.
Additionally, each application sets a maximum load the rope is required to carry. The breaking load of the selected rope needs to be higher than such value. Therefore, because the optimal shape needs to satisfy the same requirements as its respective initial geometry, the breaking load is considered as a constraint as well.
Geometrical setup
Geometrical parametrization of the strand cross section. b Strand cross section corresponding to the helical slice depicted in a, where the five parameters required to define the geometry are highlighted and taken as design variables. c Examples of feasible cross sections of the outer wire
Figure 8 shows the geometrical parametrization used in the considered procedure. The parametrization aims at wide variability while keeping the number of design parameters reasonably low. It presents a straight core wire and 6 helical wires around it. The analysis keeps the number of wires and the lay-length (i.e. the axial length corresponding to a full turn of an outer wire) constant.
Figure 8b shows the degrees of freedom of our shape parametrization. Besides the total strand radius R and the outer wire diameter d, the shape is parametrized by the use of two auxiliary circles that can be moved and scaled on the cross section. These fillets bring in a total of 3 parameters (\(\rho _1\), \(r_1\) and \(r_2\)).
To fully define the geometry, the following geometrical constraints are imposed as well:
Minimum interwire distance (gap) is set to be at the mirror plane (highlighted point 1 in Fig. 8b), to allow for the contact initiation;
Concave outer shape, with a curvature corresponding to the radius of the strand, R (point 2);
Flat outer wire surface (point 3) with given angular distance \(\Omega \), that permits relative movement between adjacent outer wires without contact.
In the case in which reducing the concentration at the contact point is our objective, the optimal shape is expected to morph into one that allows a surface-to-surface contact. Doing so provides a larger area for radial force transmission and thus reduces the localization. We nevertheless encode the geometry such that the contact surface can be either concave or convex, in order not to restrict the design space. Figure 8c shows potential candidates satisfying the geometrical constraints.
Optimization routine
Because of the complex geometry and the geometrical constraints to be considered, a genetic algorithm has been chosen to find a global minimum of the considered problem. A pool of 100 different feasible geometries, based on the parametrization, has been created as the initial population. The optimization is allowed to run for up to 100 generations, with Matlab default values for mutation and crossover [19].
Each optimization has either area minimization or stress concentration minimization as single objective, as discussed in Section 3.1.
Constraints are enforced by a multiplicative penalty factor [20] as follows:
$$\begin{aligned} {\tilde{f}} = f \prod _{i=1}^{n} \left( 1+\frac{|g_i-{\hat{g}}_i|}{{\hat{g}}_i} \right) \prod _{j=1}^{m} \left( 1+\frac{\mathrm {max}(0,h_j-{\hat{h}}_j)}{{\hat{h}}_j} \right) \end{aligned}$$
where \(g_i\) and \(h_j\) are the current values of the n equality constraint functions and m inequality constraint functions. \({\hat{g}}_i\) and \({\hat{h}}_j\) are the given constraint values and \({\tilde{f}}\) is the objective value of the constrained problem.
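A direct transcription of Eq. (4) is given below; the numbers in the usage example are purely illustrative and not the actual constraint values of this study.

def penalized(f, eq, eq_target, ineq, ineq_limit):
    # Multiplicative penalty of Eq. (4): `eq` are equality-constraint
    # values with targets `eq_target`; `ineq` must stay below `ineq_limit`.
    scale = 1.0
    for g, g_hat in zip(eq, eq_target):
        scale *= 1.0 + abs(g - g_hat) / g_hat
    for h, h_hat in zip(ineq, ineq_limit):
        scale *= 1.0 + max(0.0, h - h_hat) / h_hat
    return f * scale

# Hypothetical example: stress-concentration objective with a limit-load
# equality constraint and a normalized bending-stiffness upper bound.
f_tilde = penalized(f=1.04, eq=[33.9], eq_target=[34.7],
                    ineq=[1.01], ineq_limit=[1.00])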
The reference strand taken as the initial design is characterized by a 1 + 6 layout, with a core diameter of 2.50 mm and an outer wire diameter of 2.25 mm. Additional constant geometry properties, such as the number of wires n, lay-factor LL, gap g and angular distance \(\Omega \), are listed in Table 3. The material properties correspond to an ideally plastic law, as discussed in "Material Model" section. The strand is extended axially to a nominal axial strain of \(1\%\). The objective is stress concentration minimization and the selected constraints are the limit load and bending flexibility of the reference strand.
The optimization procedure is coordinated by the built-in Matlab R2018a Optimization Toolbox [19], while each simulation is solved by Abaqus 6.14 [21] with a custom subroutine (called User Element) developed in a previous work [7].
The optimal shape is shown in Fig. 9 on the right, where the contour of the Von Mises stress concentration, i.e. the stress normalized with the nominal value measured at the center of the core wire, is displayed. The plotted increment refers to the nominal strain at which plastification starts, which corresponds (as shown in Fig. 10) to the highest stress concentration within the loading history. The pressure distribution associated with the surface-to-surface contact turns the stress concentration into a more homogeneous field in the new geometry. The reference strand has a maximum local Von Mises stress \(148\%\) higher than the nominal stress (corresponding to a concentration \(\gamma = 2.48\)), while the optimal one presents only a \(4\%\) higher Von Mises stress (\(\gamma = 1.04\)). The initiation of plastification therefore happens at significantly larger strains, as shown in Fig. 11, where the accumulated equivalent plastic strain at the location of plastic initiation is plotted against the loading history. Delayed plastification also means that the axial load that the structure can bear without any local defects is increased, as highlighted in Fig. 12 by the value \(F_p^{opt}\) (\(34.8\,\mathrm {kN}\)) being 3.74 times higher than \(F_p^{ref}\) (\(9.3\,\mathrm {kN}\)).
Contrarily to Fig. 11, the curves in Fig. 12 do not show any effect of the early local plastification. This is due to the considerably small area affected by this phenomenon, making its contribution to the axial force negligible. Figure 12 shows the force-strain curves; it can be seen that the limit load is kept as required by the constraint, with the optimal strand being more compliant than the reference strand by less than \(2.5\%\). As for the bending flexibility, it shows the same value as the reference, with less than \(0.1\%\) variation.
Table 3 Constant geometrical parameters of the reference strand
Stress concentration field, computed as local stress over nominal stress at the center of the core wire. Because of the bending of the outer wire, a gradient is present. \(\gamma \) is dimensionless. Blue corresponds to the minimum value of 0.8 and red to the maximum value of 2.48
Plot showing the evolution of the stress concentration at the contact point (corresponding to point 1 in Fig. 8) with the applied macro-elongation. As plastification starts, \(\gamma \) decreases due to the local plastification
Accumulated plastic strain. A lower Von Mises stress determines a delayed plastic initiation and a lower overall accumulated strain
Global axial response of the reference and the optimal strand. The optimal shape delivers a \(2.5 \%\) lower limit load
Figure 13 presents another optimal shape, obtained with an alternative choice of objectives and constraints. When the axial force at plastic initiation \(F_p\) is considered as the only constraint, the required transverse area is allowed to decrease, because it is not bounded by the limit load requirement. If area minimization is considered, the resulting shape shown in Fig. 13b is obtained, with an area of \(10.8\, \text {mm}^2\), corresponding to \(37\%\) of the reference strand area (\(A = 28.9\,\text {mm}^2\)). A delayed start of plastification can provide margin for reducing the safety factor and consequently the required weight.
Optimal cross sections obtained with different objectives and constraints. a Shape obtained by minimizing the stress concentration and constraining the limit load and bending stiffness. b Shape resulting from area minimization with unconstrained limit load, but given axial load at plastification \(F_p\)
Influence of a perturbation of the geometrical parameter \(\rho _1\). Below the plot, the corresponding resulting cross sections are shown. The change of \(\Delta \rho _1\) has been scaled for visualization purposes, on the left for negative \(\Delta \rho _1\), on the right for positive variations
Any production process is subject to tolerances, and thus it can be expected that the manufactured optimal wire would not match the computed one perfectly. To study this sensitivity, the parameter \(\rho _1\), which determines the inner surface curvature of the outer wire, is slightly varied. Figure 14 shows the maximum stress concentration measured when adding a small perturbation \(\Delta \rho _1\) to the optimal \({\hat{\rho }}_1\). In both directions (whose corresponding geometries are illustrated in Fig. 14) there is a detrimental effect, due to the reintroduction of a local concentration, partially losing the benefit of the optimal shape. The values are nevertheless significantly lower than the reference strand value (\( \gamma = 2.48\)). In particular, the results show that a larger \(\rho _1\) (\(\Delta \rho _1 > 0\), corresponding to a smaller curvature of the inner contact surface) is safer, as \(\gamma \) increases less for such values.
The capability of the reduced helical model to resolve local stresses has proven essential for the proposed optimization. It computes stress concentrations without recurring to a solid FE model, which would have rendered the entire routine computationally very expensive. Within the limitations set by the reduced helical model assumptions, the applicability and potential of the chosen approach were demonstrated by showing that an optimized design of the strand, and in particular of the outer wires, was found.
Such an optimization framework complements the state-of-the-art design of strands, since an optimal cross section, providing beneficial characteristics, could be tailored to each application. The strand manufacturer would need ways to produce the resulting geometry by successive drawing through custom-made dies. While this surely increases the complexity and cost of the strand manufacturing processes, it is feasible, as non-circular wires have already been used in full-locked spiral ropes [5]. Compared to the compaction of strands (the process of radially compressing a strand that originally had round wires), the approach proposed in this work reduces dirt infiltration and determines a better contact pressure distribution, without introducing unwanted pre-stresses. This analysis can be directly extended to more complex geometries such as multi-layer strands, and it could also be coupled with other models to analyse the next hierarchical level, the wire rope. For instance, the reduced model could compute the homogenized properties of the wire strand to be used in a beam model, which would effectively simulate a stranded wire rope.
Data available on request from the authors. Plot data is found in the additional material "Plot_data.xlsx".
Cardou A, Jolicoeur C. Mechanical models of helical strands. Appl Mech Rev. 1997;50(1):1.
Verreet R. Die Geschichte des Drahtseiles. Drahtwelt. 1989;75(6):100–6.
Sayenga D. The Birth and Evaluation of the American Wire Rope Industry. First Annual Wire Rope Proceedings. Pullman, Washington 99164: Engineering Extension Service. Washington: Washington State University; 1980.
Costello GA. Theory of wire rope., Mechanical engineering seriesNew York: Springer; 1997.
Feyrer K. Wire ropes: tension, endurance, reliability, vol. 14. Berlin: Springer; 2015.
Raoof M. Wire recovery length in a helical strand under axial-fatigue loading. Int J Fatig. 1991;13(2):127–32.
Filotto FM, Kress G. Nonlinear planar model for helical structures. Comput Struct. 2019;224:106111.
Love AEH. A treatise on the mathematical theory of elasticity. 4th edn. 1944. p. 643.
Green AE, Laws N. A General Theory of Rods. Mech Gener Cont. 1968;293:49–56.
Frigerio M, Buehlmann PB, Buchheim J, Holdsworth SR, Dinser S, Franck CM, et al. Analysis of the tensile response of a stranded conductor using a 3D finite element model. Int J Mech Sci. 2016;106:176–83.
Xiang L, Wang HY, Chen Y, Guan YJ, Dai LH. Elastic-plastic modeling of metallic strands and wire ropes under axial tension and torsion loads. Int J Solids Struct. 2017;129:103–18.
Del-Pino-López JC, Hatlo M, Cruz-Romero P. On simplified 3D finite element simulations of three-core armored power cables. Energies. 2018;11:11.
Treyssède F. Elastic waves in helical waveguides. Wave Motion. 2008;45(4):457–70.
Frikha A, Cartraud P, Treyssède F. Mechanical modeling of helical structures accounting for translational invariance. Part 1: static behavior. Int J Solids Struct. 2013;50(9):1373–82.
Karathanasopoulos N, Kress G. Two dimensional modeling of helical structures, an application to simple strands. Comput Struct. 2016;174:79–84. https://doi.org/10.1016/j.compstruc.2015.08.016.
Cheng AHD, Rencis JJ, Abousleiman Y. Generalized plane strain elasticity problems. Trans Model Simul. 1995;10:167–74.
Zubov LM. Exact nonlinear theory of tension and torsion of helical springs. Doklady Phys. 2002;47(8):623–6.
Foti F, Martinelli L. An analytical approach to model the hysteretic bending behavior of spiral strands. Appl Math Modell. 2016;40(13–14):6451–67.
The MathWorks I. Global Optimization Toolbox User's Guide; 2018.
Puzzi S, Carpinteri A. A double-multiplicative dynamic penalty approach for constrained evolutionary optimization. Struct Multidiscip Optimiz. 2008;35(5):431–45.
Dassault Systèmes. Abaqus 6.14 Online Documentation; 2014.
Bergen Cable Technology I. Cable 101. https://bergencable.com/cable-101.
Wikipedia. Chords Bridge. https://en.wikipedia.org/wiki/Chords_Bridge.
Kobelco. Kobelco Construction Machinery Europe B.V. https://www.kobelco-europe.com.
The authors acknowledge the support of the Swiss National Science Foundation (project No. 159583 and Grant No. 200020_1595831).
Experimental Continuum Mechanics Group, IMES, ETH Zürich, Leonhardstrasse 21, 8092, Zurich, Switzerland
Francesco Maria Filotto
Inspire AG, Innovative Composite Structures, Technoparkstrasse 1, 8005, Zurich, Switzerland
Falk Runkel
Laboratory of Composite Materials and Adaptive Structures, Department of Mechanical and Process Engineering, ETH Zurich, Tannenstrasse 3, 8092, Zurich, Switzerland
Gerald Kress
FMF performed the optimization simulations and analyses and wrote the paper. FR created the beam and solid models used in the present work and reviewed the paper. GK commented and reviewed the paper. All authors read and approved the final manuscript.
Correspondence to Francesco Maria Filotto.
Filotto, F.M., Runkel, F. & Kress, G. Cross section shape optimization of wire strands subjected to purely tensile loads using a reduced helical model. Adv. Model. and Simul. in Eng. Sci. 7, 23 (2020). https://doi.org/10.1186/s40323-020-00159-0
Wire strand
Reduced model
Helical structures
Shape optimization
Genetic algorithms | CommonCrawl |
Home / IT & Computer Science / Data Science / Advanced Machine Learning / Advanced Machine Learning: Clustering
Advanced Machine Learning: Clustering
We examine sequential, hierarchical and optimization-based clustering approaches, including a section on graph-based clustering methods.
© Dr Michael Ashcroft
In this article, we will introduce common approaches to clustering algorithms: sequential, hierarchical and optimization-based. The sequential and hierarchical approaches do not fit cleanly into the definition of statistical models we gave in the first module of this course. Proximity functions play a central role in all these clustering approaches, and we will begin by defining what these functions are. We will then look at the two approaches individually, explain their basic concepts and give basic algorithms for each. We will also look briefly at graph-based clustering methods that can be used with these techniques.
The optimization-based approach matches our definition of statistical models, and it is no surprise that the dominant tools in such methods are statistical and probabilistic in nature. We will look at some important examples of optimization-based clustering in the following steps.
Proximity Measures
Proximity measures come in two forms: dissimilarity and similarity measures. They give a measure of difference between two vectors, between one vector and one set of vectors, and between two sets of vectors. We begin with the case of proximity between two vectors, and provide as concrete examples the most common family of dissimilarity measures, \(l_p\) measures, and two common, and related, similarity measures: the inner product and cosine similarity measures.
\(l_p\) Dissimilarity Measures
Let \(X\) and \(Y\) be two vectors.
\[d_p(X,Y)=\big(\sum_{i=1}^l w_i | X_i - Y_i|^p \big)^{1/p}\]
Where \(w_i = 1\) for \(1 \leq i \leq l\) these are called unweighted \(l_p\) measures. The unweighted \(l_2\) measure is Euclidean distance. The unweighted \(l_1\) measure is the Manhattan distance. The weights are used to make certain directions more important than others when calculating dissimilarity.
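As a minimal sketch in Python (NumPy assumed; the function name lp_dissimilarity is our own, not from the course), the weighted \(l_p\) measure can be computed as:

```python
import numpy as np

def lp_dissimilarity(x, y, p=2.0, w=None):
    """Weighted l_p dissimilarity between two vectors.

    With w=None all weights are 1 (the unweighted case); p=2 gives
    Euclidean distance and p=1 the Manhattan distance.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    w = np.ones_like(x) if w is None else np.asarray(w, dtype=float)
    return float((w * np.abs(x - y) ** p).sum() ** (1.0 / p))

print(lp_dissimilarity([0, 0], [3, 4]))        # 5.0 (Euclidean)
print(lp_dissimilarity([0, 0], [3, 4], p=1))   # 7.0 (Manhattan)
```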
Inner Product Similarity Measure
\[s_{inner}(X_i,X_j)=X_i^TX_j\]
Cosine Similarity Measure
\[s_{cos}(X_i,X_j)=\frac{X_i^TX_j}{\|X_i\| \|X_j\|}\]
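A corresponding sketch for the two similarity measures (again, the function names are our own illustration):

```python
import numpy as np

def inner_similarity(x, y):
    """Inner-product similarity between two vectors."""
    return float(np.dot(x, y))

def cosine_similarity(x, y):
    """Cosine similarity: the inner product of the normalized vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

print(cosine_similarity([1, 0], [1, 1]))  # ~0.707, i.e. cos(45 degrees)
```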
Extending proximity measures to sets of vectors
Proximity measures between two vectors are simple to extend to cases between vectors and sets of vectors, and between two sets of vectors, using min and max (and mean, though this is less common) functions. Let \(d\) be some proximity measure defined on vectors, and \(\Gamma\) and \(\Delta\) be two sets of vectors:
\[d_{min}(X,\Gamma)=min_{Y \in \Gamma}(d(X,Y))\] \[d_{max}(X,\Gamma)=max_{Y \in \Gamma}(d(X,Y))\] \[d_{min}(\Gamma,\Delta)=min_{X \in \Gamma,Y \in \Delta}(d(X,Y))\] \[d_{max}(\Gamma,\Delta)=max_{X \in \Gamma,Y \in \Delta}(d(X,Y))\]
Alternatively, some representor for the cluster, \(R(\Gamma)\) may be used. A common representor is the mean value of the vectors. In which case, we would have:
\[d_{rep}(X,\Gamma)=d(X,R(\Gamma))\] \[d_{rep}(\Gamma,\Delta)=d(R(\Gamma),R(\Delta))\]
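These set-level extensions are only a few lines of Python; here d is any vector-vs-vector proximity function (the euclid lambda below is a stand-in, not the course's code):

```python
import numpy as np

euclid = lambda x, y: float(np.linalg.norm(np.asarray(x) - np.asarray(y)))

def d_min(d, X, cluster):
    """Minimum vector-vs-cluster dissimilarity."""
    return min(d(X, Y) for Y in cluster)

def d_max(d, X, cluster):
    """Maximum vector-vs-cluster dissimilarity."""
    return max(d(X, Y) for Y in cluster)

def d_rep(d, cluster_a, cluster_b):
    """Representor-based proximity using each cluster's mean vector."""
    rep_a = np.mean(np.asarray(cluster_a, dtype=float), axis=0)
    rep_b = np.mean(np.asarray(cluster_b, dtype=float), axis=0)
    return d(rep_a, rep_b)

print(d_min(euclid, [0, 0], [[1, 0], [3, 4]]))  # 1.0
```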
Euclidean Distance and High Dimensionality
It is common when working with a large number of dimensions (i.e. many columns in the data) to avoid \(l_p\) dissimilarity measures such as Euclidean distance, and prefer alternatives such as cosine similarity.
This is because such dissimilarity measures fail to be discriminative as the number of dimensions increases. To explain things in a very hand-waving fashion, as the number of dimensions increases, points (from finite samples from some distribution) end up getting farther away from each other. The result is that in high dimensions, all points are a long way from all other points, and the difference in distances from a point to other points becomes a less useful way of distinguishing similar from dissimilar pairs. (In the general \(l_p\) case, some researchers who have paid a lot of attention to this phenomenon claim that you can avoid this if you reduce \(p\) fast enough as you increase dimensions, so you end up working with very small, fractional \(p\) values. Most practitioners just use cosine similarity.)
If this seems odd to you, don't worry – it seems odd to everyone. It turns out that our intuitions about distances that we have from visualizing the behaviour of sampling from distributions in one, two or three dimensions simply do not serve us very well in higher dimensions.
Sequential Clustering
Sequential clustering algorithms are the simplest type of clustering algorithms, and are typically very fast to run. The basic sequential clustering algorithm is:
For some specified threshold, \(\theta\), some dissimilarity measure \(d\) (alternatively, some similarity measure, \(s\)) and, optionally, some limit on number of clusters, \(q\):
\[m=1\] \[C_m=\{X_1\}\]
For \(i=2,…,N\):
Find \(C_k\), such that \(k = arg min_{1 \leq j \leq m} d(X_i,C_j)\) (\(k = arg max_{1 \leq j \leq m} s(X_i,C_j)\))
If \(d(X_i,C_k) \geq \theta\) (\(s(X_i,C_k) \leq \theta\)) and \(m<q\) then set \(m=m+1\), \(C_m=\{X_i\}\)
Otherwise set \(C_k=C_k \cup \{X_i\}\)
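As an illustration only (not the course's reference implementation), here is a minimal Python version of this scheme, using the distance to the cluster mean as the vector-vs-cluster dissimilarity:

```python
import numpy as np

def sequential_clustering(data, theta, q=np.inf, d=None):
    """Basic sequential clustering with a dissimilarity measure.

    A vector joins its nearest existing cluster unless the dissimilarity
    to that cluster is >= theta and fewer than q clusters exist, in which
    case it starts a new cluster. Vector-vs-cluster dissimilarity here is
    the distance to the cluster mean (a representor); min/max variants
    work equally well. Note the result depends on the input order.
    """
    if d is None:
        d = lambda x, y: float(np.linalg.norm(np.asarray(x) - np.asarray(y)))
    clusters = [[data[0]]]
    for x in data[1:]:
        dists = [d(x, np.mean(c, axis=0)) for c in clusters]
        k = int(np.argmin(dists))
        if dists[k] >= theta and len(clusters) < q:
            clusters.append([x])
        else:
            clusters[k].append(x)
    return clusters
```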
Characteristics of this algorithm are:
A single clustering is returned
The number of clusters present is discovered by the algorithm
The result is dependent on the order the data vectors are presented
Hierarchical Clustering
Hierarchical clustering algorithms iteratively combine or divide clusters using the chosen proximity measure.
Agglomerative Hierarchical Clustering
Initialize clusters such that \(C=\{C_i : 1\leq i \leq N\}\), where \(C_i = \{X_i\}\), for \(1 \leq i \leq N\).
m=N
While \(m>1\):
Find \(C_j,C_k\) such that \(arg min_{j,k} d(C_j,C_k)\) (\(arg max_{j,k} s(C_j,C_k)\)), \(1 \leq j \leq m\), \(1 \leq k \leq m\), \(j \neq k\).
Set \(C_j=C_j \cup C_k\), \(m=m-1\), and remove \(C_k\) from \(C\).
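A brute-force sketch of this merge loop, for illustration only (in practice one would use a library routine such as scipy.cluster.hierarchy.linkage):

```python
import numpy as np

def agglomerative(data, d=None, linkage="min"):
    """Greedy agglomerative clustering; returns the full merge hierarchy.

    linkage="min" uses d_min (single linkage), "max" uses d_max
    (complete linkage). Each iteration merges the closest pair.
    """
    if d is None:
        d = lambda x, y: float(np.linalg.norm(np.asarray(x) - np.asarray(y)))
    agg = min if linkage == "min" else max
    clusters = [[x] for x in data]
    hierarchy = [list(clusters)]
    while len(clusters) > 1:
        # Score every pair of current clusters and merge the best one.
        pairs = [(agg(d(x, y) for x in a for y in b), i, j)
                 for i, a in enumerate(clusters)
                 for j, b in enumerate(clusters) if i < j]
        _, i, j = min(pairs)
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
        hierarchy.append(list(clusters))
    return hierarchy
```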
Divisive Hierarchical Clustering
Divisive clustering works analogously to agglomerative clustering, but begins with all data vectors in a single cluster, which is then iteratively divided until all vectors belong to their own cluster.
Let \(S(C_i)=\{\langle \Gamma, \Delta \rangle : \Gamma \cup \Delta = C_i \land \Gamma \cap \Delta = \emptyset \land \Gamma, \Delta \neq \emptyset\}\). In words, \(S(C_i)\) gives all binary splits of \(C_i\), such that neither resulting subset is empty.
Initialize clusters such that \(C=\{C_1\}\), where \(C_1 = \{X_i : 1 \leq i \leq N\}\).
m=1
While \(m<N\):
Find \(C_i, \langle \Gamma,\Delta \rangle\) such that they are the solution to \(argmax_{C_i,\Gamma,\Delta} d(\Gamma,\Delta)\) where \(\Gamma, \Delta \in S(C_i)\) (alt. \(argmin_{C_i,\Gamma,\Delta} s(\Gamma,\Delta)\) where \(\Gamma, \Delta \in S(C_i)\))
Set \(C= (C \setminus C_i) \cup \{\Gamma,\Delta\}\) and \(m=m+1\)
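Because \(S(C_i)\) contains exponentially many splits, divisive clustering is rarely run exactly. The following brute-force sketch (our own, feasible only for small clusters) makes the inner splitting step concrete, using \(d_{max}\) between the two parts:

```python
from itertools import combinations

def best_binary_split(cluster, d):
    """Search all non-empty binary splits of a cluster, returning the one
    whose two parts are most dissimilar (d_max used here). Exponential in
    cluster size, so only usable for small clusters."""
    n = len(cluster)
    best = None
    for r in range(1, n):
        for idx in combinations(range(n), r):
            gamma = [cluster[i] for i in idx]
            delta = [cluster[i] for i in range(n) if i not in idx]
            score = max(d(x, y) for x in gamma for y in delta)
            if best is None or score > best[0]:
                best = (score, gamma, delta)
    return best

euclid = lambda x, y: sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5
print(best_binary_split([[0, 0], [0, 1], [5, 5]], euclid))
```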
Characteristics of both types of hierarchical algorithms are:
A hierarchy of clusterings is returned
The final clustering is not chosen by the algorithm
Dendrograms
If we record the proximity values whenever clusters are merged or divided in the above hierarchical algorithms, we can form a tree structure known as a dendrogram. A dendrogram gives an overview of the different sets of clusters existing at different levels of proximity. An example is given below.
The above dendrogram has a property called monotonicity, meaning that each cluster is formed at a higher dissimilarity level than any of its components. Not all clustering algorithms will produce monotonic dendrograms.
Selecting a clustering from the hierarchy
Since hierarchical clustering algorithms do not provide a single clustering but rather a hierarchy of possible clusterings, a choice is required about which member of the hierarchy should be selected.
Visual or analytic analysis of dendrograms can be used to make this decision. In this case, an important concept is the lifetime of a cluster: the difference between the dissimilarity level at which the cluster is merged with another and the level at which it was formed. We would seek to use the clusters present at a level of dissimilarity such that all clusters existing at that level have long lifetimes.
Another option is to measure the self-proximity, \(h\), of clusters, making use of set vs set proximity measures. For example, we might choose to define self-proximity as:
\[h(C)=max_{X,Y \in C} \big(d_{l_2} (X,Y)\big)\]
Where we remember that \(d_{l_2}\) is Euclidean distance. We would then have to specify some threshold value, \(\theta\), for self-proximity such that we take the clustering at level \(L\) in the hierarchy if at level \(L+1\) there exists some cluster \(C_i\) such that \(h(C_i)>\theta\). In this and the following paragraph we treat the first layer in the hierarchy as the clustering where all vectors are given their own cluster, and the last as the clustering where all vectors are assigned to the same cluster.
A popular choice is to try to read the self-proximity threshold from the data. For example, we might choose the largest layer such that the following condition is fulfilled:
\[d(C_i,C_j) \geq max\big(h(C_i),h(C_j)\big)\]
In words, the last layer where the dissimilarity of each pair of clusters is greater than or equal to the self-proximity of each of the clusters in the pair.
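A sketch of this stopping rule in Python, assuming \(d_{min}\) as the between-cluster measure (the text leaves the choice of set-vs-set measure open):

```python
def self_proximity(cluster, d):
    """h(C): the largest pairwise dissimilarity inside a cluster."""
    if len(cluster) < 2:
        return 0.0
    return max(d(x, y) for i, x in enumerate(cluster)
                       for y in cluster[i + 1:])

def level_is_acceptable(clusters, d):
    """True if every pair of clusters is at least as dissimilar (d_min)
    as either cluster's self-proximity -- the data-driven rule above."""
    for i, a in enumerate(clusters):
        for b in clusters[i + 1:]:
            d_ab = min(d(x, y) for x in a for y in b)
            if d_ab < max(self_proximity(a, d), self_proximity(b, d)):
                return False
    return True
```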
Graph Clustering
To look at basic graph clustering, we need to introduce a number of concepts. Firstly, a graph, \(G=\langle N,E\rangle\), consists of a set of nodes, \(N\), and a set of edges \(E\). Graphs can be directed or undirected depending on whether the edges between nodes are directed from one node to the other, or not. In directed graphs, the edges are ordered pairs of nodes and the edge is directed from the first node to the second. In undirected graphs, they are sets of two nodes.
The threshold graph of a data set is the undirected graph that results from associating a node with each data vector with edges between any two (non-identical) nodes whose associated data vectors are below some dissimilarity threshold (above some similarity threshold). The threshold graph is an unweighted graph, by which we mean its edges have no associated weight values.
Take the following data set:
(Table of five two-dimensional data vectors \(X_1,\dots,X_5\); only the first row is preserved here: \(X_1 = (7.5, 8.9)\).)
Using Euclidean distance as our dissimilarity measure, the associated proximity matrix is:
\[\begin{bmatrix} 0 & 5.16 & 1.12 & 7.59 & 2.73 \\ 5.16 & 0 & 4.43 & 2.48 & 2.96 \\ 1.12 & 4.43 & 0 & 6.77 & 1.70 \\ 7.59 & 2.48 & 6.77 & 0 & 5.15 \\ 2.73 & 2.96 & 1.70 & 5.15 & 0 \end{bmatrix}\]
Given a threshold of 3, the resulting threshold graph can be represented by the adjacency matrix:
\[\begin{bmatrix} 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 \end{bmatrix}\]
A proximity graph is a threshold graph whose edges are weighted by the proximity of the associated data vectors. The proximity graph of the above data, with a threshold of 3, is:
\[\begin{bmatrix} 0 & 0 & 1.12 & 0 & 2.73 \\ 0 & 0 & 0 & 2.48 & 2.96 \\ 1.12 & 0 & 0 & 0 & 1.70 \\ 0 & 2.48 & 0 & 0 & 0 \\ 2.73 & 2.96 & 1.70 & 0 & 0 \end{bmatrix}\]
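The construction is easy to reproduce with NumPy. The vectors below are hypothetical stand-ins, since the article's original data table is not fully reproduced here:

```python
import numpy as np

# Hypothetical 2D data vectors standing in for X_1..X_5.
X = np.array([[7.5, 8.9], [3.1, 6.2], [6.5, 8.3], [1.2, 4.1], [5.0, 7.0]])

# Pairwise Euclidean proximity matrix via broadcasting.
P = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

theta = 3.0
adjacency = ((P < theta) & (P > 0)).astype(int)   # unweighted threshold graph
proximity_graph = np.where(adjacency == 1, P, 0)  # edge-weighted version
```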
A subgraph of a graph \(G=\langle N_G, E_G \rangle\) is another graph, \(S=\langle N_S, E_S \rangle\) such that \(N_S \subseteq N_G\) and \(E_S \subseteq E_G^*\) where \(E_G^*\) are the edges in \(E_G\) such that they connect a pair of nodes both of which are in \(N_S\). In proceeding, we will assume that all subgraphs are such that \(E_S = E_G^*\).
A subgraph, \(S=\langle N_S, E_S \rangle\), is maximally connected if all pairs of nodes in \(N_S\) are connected by an edge in \(E_S\).
We are now able to explain how basic graph clustering proceeds as extensions of the basic sequential and hierarchical clustering algorithms explained above. We first decide on some property \(f\), of subgraphs (examples will be given below), and use this as an additional constraint in the sequential or hierarchical clustering algorithms.
In the sequential algorithm, a datum is added to a cluster so long as the proximity measure satisfies some threshold. In basic sequential graph clustering, we would also check that the cluster resulting from the addition of the new datum has a corresponding subgraph that is either maximally connected or satisfies some property, \(f\).
In the agglomerative hierarchical clustering algorithm, we choose which clusters to merge based on the proximity measures between current clusters. In basic hierarchical graph clustering, we do the same but consider only pairs of clusters such that the cluster resulting from their merger has a corresponding subgraph that is either maximally connected or satisfies some property, \(f\).
Basic examples of properties of subgraphs that can be included in \(f\) include:
Node Degree: The node degree of a subgraph is the largest integer k such that every node in the subgraph has at least k incident edges.
Edge Connectivity: The edge connectivity of a subgraph is the largest integer k such that every pair of nodes in the subgraph is connected by at least k independent paths, where two paths are considered independent if they have no edges in common.
Node Connectivity: The node connectivity of a subgraph is the largest integer k such that every pair of nodes in the subgraph is connected by at least k independent paths, where two paths are considered independent if they have no nodes in common (excluding start and finish nodes).
Obviously, \(f\) might include logically complex combinations of such properties, such as that a subgraph should have node degree of 4 or edge connectivity of 3.
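As a sketch of such a check using the networkx library (the function and thresholds are our own illustration, not a standard API):

```python
import numpy as np
import networkx as nx

def satisfies_property(adjacency, min_degree=None, min_edge_conn=None):
    """Check simple validity properties of a candidate cluster's subgraph.

    adjacency is the 0/1 matrix of the subgraph induced by the cluster;
    the thresholds play the role of the property f in the text.
    """
    G = nx.from_numpy_array(np.asarray(adjacency))
    if min_degree is not None:
        if min(deg for _, deg in G.degree()) < min_degree:
            return False
    if min_edge_conn is not None and nx.edge_connectivity(G) < min_edge_conn:
        return False
    return True
```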
Optimization-based methods
Optimization-based clustering methods specify a loss function. This loss function will take as arguments the training data and cluster assignment, and the aim will be to find the cluster assignment that minimizes the loss.
For example, the clusters may be identified with probability distributions taken to have generated the training data. The loss function is then the negative log likelihood of the training data given the clusters. We minimize this by finding the set of generative probability distributions (their parameter values, and perhaps the number of distributions) that make the data the most likely. Data vectors can be identified as 'softly' belonging to particular clusters in the sense of having different probabilities of being generated by the different distributions. We will see how this works in the next few steps.
It should be noted that this form of optimization-based clustering is not the only one. As well as thinking of clusters as generative distributions and datum membership of a cluster as the probability it was generated by the particular distribution, other optimization-based methods exist. For example, fuzzy clustering algorithms are typically optimization based.
Continuous and discrete statistical distributions: Probability density/mass function, cumulative distribution function and the central limit theorem
In the real world, all variables are random and the randomness is modelled by statistical distributions. In this post, various types of statistical distributions for both continuous and discrete random variables are explained.
Jan 8, 2022 • 14 min read
As a starter, we will discuss the basic concept of probability density and mass functions as well as cumulative distribution functions.
In addition, the underlying concepts of the mean and variance of a statistical distribution are also explained.
By the end of this post, the reader will understand the basic concept of statistical distributions and their types, as well as the central limit theorem.
Probability density function, probability mass function and cumulative distribution function
Random variable
A random variable is defined as a real variable, call it X, that is drawn from a random experiment, call it e. This experiment e is within a sample set, call it S.
Since the result of the experiment e is not yet known, the value of the variable X is also not yet known. The random variable X is a function that associates a real value with each outcome e within the sample set S of a random experiment.
Commonly, a random variable is notated as a capital letter, such as X, Y, Z and $A$. The measured value from the random variable is commonly written as a lower-case letter, such as x, y, z and $a$ for the random variables X, Y, Z and $A$.
For example, X is a random variable of a temperature. Hence, x is the measured value of X, for example x = 25 degree C.
There are two types of random variables:
Continuous variable: a variable that takes values in a real interval, which can have a finite or infinite limit. For example, length x = 2.567 mm and temperature t = 12.57 K.
Discrete variable: a variable that takes values in a finite (integer) set with a finite limit. For example, the number of defect components per 1000 components = 5 and the number of people in a queue = 15 people.
Probability function and cumulative distribution function
The probability function of a random variable X is a function that describes the probability of a value of X being obtained.
The notation for the probability function is $f(x)$. Meanwhile, the notation for the cumulative distribution function (CDF) is $F(x)$.
Based on the random variable types, probability distributions are also divided into two:
Probability density function (PDF): the probability function of continuous random variable.
Probability mass function (PMF): the probability function of discrete random variable.
Figure 1 shows an example of a PDF and PMF. In figure 1, we can observe that the PDF has real interval values (continuous) and the PMF has finite interval values (discrete).
Figure 1: (a) An example of probability density function (PDF) and (b) an example of probability mass function (PMF).
Discrete random variable
PMF is formulated as:
Where $X$ is a discrete random variable. $x_{i1}, x_{i2},…., x_{ik}$ represent the value of $X$ such that $H(x_{ij})=x_{i}$ for the set of index values $\Phi _{i}={j:j=1,2,…,s_{i}}$.
The properties of PMF are as follows: $f(x_{i}) \geq 0$, $\sum_{i} f(x_{i})=1$ and $f(x_{i})=P(X=x_{i})$.
For discrete probability, we commonly say:
The probability of $X=a$ being drawn from a sample set $S$ is: $P(X=a)=\frac{n}{n(S)}$
Where $a$ is a value of the discrete random variable from the sample set $S$, $n$ is the number of occurrences of $a$ in $S$, and $n(S)$ is the total number of elements of $S$.
For example, a value of $a=2$ occurs from a discrete distribution $X$, so the probability of $x=2$, or $P(x=2)$, is visually represented in figure 2 below.
Figure 2: The example of $P(x=2)$ from a discrete distribution $X$.
The cumulative distribution function (CDF) $F(x)$ of a PMF is formulated as: $F(x)=P(X \leq x)=\sum_{x_{i} \leq x} f(x_{i})$
For a discrete random variable, the properties of $F(x)$ are as follows: $0 \leq F(x) \leq 1$, and if $x \leq y$ then $F(x) \leq F(y)$.
Continuous random variable
A PDF is formulated as a function $f(x)$ such that $f(x) \geq 0$, $\int_{-\infty}^{\infty} f(x)dx=1$ and $P(a \leq X \leq b)=\int_{a}^{b} f(x)dx$.
For a continuous probability distribution, we commonly speak of the probability of a value $X<a$ occurring from a sample set $S$, or of a value $a<X<b$ occurring from a sample set $S$. These probabilities are formulated as: $P(X<a)=\int_{-\infty}^{a} f(x)dx$ and $P(a<X<b)=\int_{a}^{b} f(x)dx$
For example, the probability that a continuous random variable $X$ takes a value between 1 and 2, that is $P(1<x<2)$ with, in this example, a Gaussian probability function, is presented visually in figure 3.
Figure 3: The example of $P(1<x<2)$ from a continuous distribution $X$.
The cumulative distribution function (CDF) $F(x)$ of a PDF is formulated as: $F(x)=P(X \leq x)=\int_{-\infty}^{x} f(u)du$
Statistical mean and variance
Mean or average $\mu$ is defined as a value that quantifies or describes the central tendency of a statistical probability distribution. The mean $\mu$ is expressed as the expected value of a random variable, $E(x)$, that is, the sample average from a long run of repetitions of the random variable.
Mean $\mu$ or expected value $E(x)$ is formulated as: $\mu = E(x)=\int_{-\infty}^{\infty} x f(x)dx$ for a continuous variable, or $\mu = E(x)=\sum_{i} x_{i} f(x_{i})$ for a discrete variable.
The mean is also called the first moment of a statistical distribution.
Variance $Var(x)=\sigma ^2$ is a value describing or quantifying the dispersion of a statistical probability distribution. Variance $Var(x)=\sigma ^2$ is formulated as: $\sigma ^2=\int_{-\infty}^{\infty} (x-\mu)^2 f(x)dx$ (with the analogous sum for a discrete variable).
The variance $Var(x)=\sigma ^2$ can also be defined in terms of expected values as: $\sigma ^2 = E(x^2)-[E(x)]^2$
The square root of $\sigma ^2$, that is $\sigma$, is called the standard deviation. The other name of variance is the second central moment of a statistical distribution.
Important properties of expected values and variance
Important properties of expected values $E(x)$ and variance $Var(x)$ are as follows: for constants $a$ and $b$, $E(ax+b)=aE(x)+b$ and $Var(ax+b)=a^2 Var(x)$.
If there are two random variables $x$ and $y$, then $E(x+y)=E(x)+E(y)$; if $x$ and $y$ are independent, $Var(x+y)=Var(x)+Var(y)$.
The properties of expected value $E(x)$ and variance $Var(x)$ mentioned above are very important and useful to know. One use of these properties is to derive the GUM formula to estimate the uncertainty of measurement results.
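These identities are easy to verify numerically. The sketch below (with NumPy, not part of the original post) checks $E(ax+b)=aE(x)+b$ and $Var(ax+b)=a^2 Var(x)$ on simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=1_000_000)
a, b = 4.0, -1.0

# E(aX + b) = a E(X) + b and Var(aX + b) = a^2 Var(X)
print(np.mean(a * x + b), a * np.mean(x) + b)   # both close to 7.0
print(np.var(a * x + b), a**2 * np.var(x))      # both close to 144.0
```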
In the next sections, we will see many types of continuous and discrete statistical distributions. In addition, we will demystify the meaning of the central limit theorem.
Continuous statistical distribution
Types of continuous statistical distributions are: normal (Gaussian), uniform continuous, exponential, Gamma, Weibull and lognormal distributions.
Normal (Gaussian) distribution
Normal or Gaussian distribution is the most well-known statistical probability distribution. There are several reasons for this, such as that many natural quantities follow the normal distribution, and that the distribution has been studied to understand the symmetrical pattern of measurement errors since the beginning of the 18th century.
Gauss was the first to publish this normal distribution, in 1809. That is why the distribution is also called the Gaussian distribution.
The normal distribution has a very distinctive symmetric shape like a "bell". This type of distribution has a very important role in determining Type A measurement uncertainty and in analysing other phenomena using regression and analysis of variance (ANOVA).
The PDF $f(x)$ of the normal (Gaussian) distribution is formulated as follows: $f(x)=\frac{1}{\sigma \sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma ^2}}$
Where $-\infty \leq x \leq \infty , -\infty \leq \mu \leq \infty , \sigma >0 $. $\mu$ is the mean of the distribution and $\sigma ^2$ is the variance of the distribution.
A shortened notation to present the normal distribution of a random variable $X$ is $X \sim N(\mu , \sigma ^2)$.
The PDF of the normal distribution with mean = 0 and variance = 1, and with different values of mean and variance, are presented in figure 4 and figure 5, respectively.
Figure 4: The PDF of normal distribution with $\mu =0$ and $\sigma ^2 =1$.
Figure 5: The PDF of normal distribution with various mean and variance.
The properties of the normal distribution are as follows: the distribution is symmetric about its mean $\mu$; the mean, median and mode coincide; and the total area under the PDF equals 1.
The mean $\mu$ of the normal distribution is formulated as: $E(x)=\int_{-\infty}^{\infty} x \frac{1}{\sigma \sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma ^2}}dx$
By using the substitution variable $z=\frac{x-\mu}{\sigma}$, we get: $E(x)=\mu$
The variance $\sigma ^2$ of the normal distribution is formulated as: $Var(x)=\int_{-\infty}^{\infty} (x-\mu)^2 \frac{1}{\sigma \sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma ^2}}dx$
Similarly, by using the same substitution variable, we get: $Var(x)=\sigma ^2$
The CDF $F(x)$ of the normal distribution is formulated as follows: $F(x)=\int_{-\infty}^{x}\frac{1}{\sigma \sqrt{2\pi}}e^{-\frac{(u-\mu)^2}{2\sigma ^2}}du$
Figures 6 and 7 show some examples of the normal distribution CDF with different means and variances.
Figure 6: The CDF of a basic normal distribution.
Figure 7: The CDF of normal distribution with various mean and variance.
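As a quick illustration (using scipy.stats, not part of the original post), the normal PDF and CDF can be evaluated directly:

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 0.0, 1.0
x = np.linspace(-4, 4, 9)
pdf = norm.pdf(x, loc=mu, scale=sigma)   # f(x)
cdf = norm.cdf(x, loc=mu, scale=sigma)   # F(x)
print(norm.cdf(1.96))                    # ~0.975: the familiar 95% quantile
```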
Uniform continuous distribution
Uniform distribution has an important application in measurement to determine the standard uncertainty from a calibration certificate or from a technical drawing (the tolerance).
The PDF of the uniform distribution is: $f(x)=\frac{1}{b-a}$ for $a \leq x \leq b$, and $f(x)=0$ otherwise.
Where $a$ and $b$ are the minimum and maximum value of the distribution.
The mean $\mu$ and variance $\sigma ^2$ of the uniform distribution are formulated as: $\mu = \frac{a+b}{2}$ and $\sigma ^2 = \frac{(b-a)^2}{12}$
The CDF of the uniform distribution is: $F(x)=0$ for $x<a$, $F(x)=\frac{x-a}{b-a}$ for $a \leq x \leq b$, and $F(x)=1$ for $x>b$.
Figures 8 and 9 show the PDF and CDF of the continuous uniform distribution.
Figure 8: The PDF of continuous uniform distribution.
Figure 9: The CDF of continuous uniform distribution.
Exponential distribution
Exponential distribution is commonly used to model arrival times in computer simulation. The PDF of the exponential distribution is as follows: $f(x)=\lambda e^{-\lambda x}$ for $x \geq 0$, where $\lambda >0$.
The CDF of the exponential distribution is: $F(x)=1-e^{-\lambda x}$ for $x \geq 0$.
The mean $\mu$ of the exponential distribution is: $\mu = \frac{1}{\lambda}$
The variance $\sigma ^2$ of the exponential distribution is: $\sigma ^2 = \frac{1}{\lambda ^2}$
Figure 10 shows the PDF of the exponential distribution with different means and variances.
Figure 10: The PDF of exponential distribution.
Gamma distribution
The PDF of the Gamma distribution is defined as follows: $f(x)=\frac{\lambda ^r x^{r-1} e^{-\lambda x}}{\Gamma (r)}$ for $x>0$.
Where the parameters $\lambda >0$ and $r>0$. $\lambda$ is a scale parameter and $r$ is a shape parameter. $\Gamma (r)$ is the Gamma function, which equals $(r-1)!$ for integer $r$. Examples of the PDF of the Gamma distribution are shown in figure 11.
From figure 11, we can observe from the PDF of the Gamma distribution that the exponential distribution is a special case of the Gamma distribution where the parameter $r=1$.
Figure 11: The PDF of the Gamma distribution with different parameters $r$ and $\lambda$. It can be observed that when $r=1$, the Gamma distribution becomes the exponential distribution.
The mean $\mu$ and variance $\sigma ^2$ of the Gamma distribution are formulated as: $\mu = \frac{r}{\lambda}$ and $\sigma ^2 = \frac{r}{\lambda ^2}$
The CDF of the Gamma distribution is formulated as the integral of the PDF: $F(x)=\int_{0}^{x}\frac{\lambda ^r u^{r-1} e^{-\lambda u}}{\Gamma (r)}du$
Weibull distribution
Weibull distribution is very commonly used to model the time-to-failure of mechanical and electrical systems.
The PDF of the Weibull distribution is: $f(x)=\frac{\beta}{\delta}\left(\frac{x-\gamma}{\delta}\right)^{\beta -1}e^{-\left(\frac{x-\gamma}{\delta}\right)^{\beta}}$ for $x \geq \gamma$.
Where the parameter $\gamma$ is a location parameter with values $-\infty <\gamma <\infty $, $\delta$ is a scale parameter and $\beta$ is a shape parameter.
A unique feature of Weibull distribution is that, by adjusting its parameters, various types of probability distribution can be approximated.
Figure 12 shows Weibull distribution with various parameters.
Figure 12: The PDF of Weibull distribution with different parameters.
The mean $\mu$ and variance $\sigma ^2$ of the Weibull distribution are formulated as: $\mu = \gamma + \delta \Gamma\left(1+\frac{1}{\beta}\right)$ and $\sigma ^2 = \delta ^2 \left[\Gamma\left(1+\frac{2}{\beta}\right)-\Gamma\left(1+\frac{1}{\beta}\right)^2\right]$
The CDF of the Weibull distribution is as follows: $F(x)=1-e^{-\left(\frac{x-\gamma}{\delta}\right)^{\beta}}$ for $x \geq \gamma$.
Lognormal distribution
The PDF of the lognormal distribution is formulated as: $f(x)=\frac{1}{x\omega \sqrt{2\pi}}e^{-\frac{(\ln x-\theta)^2}{2\omega ^2}}$ for $x>0$.
Where $\theta$ and $\omega$ are the lognormal parameters. Figure 13 shows lognormal distributions with different parameters.
Figure 13: The PDF of lognormal distributions with different parameters.
The mean $\mu$ and variance $\sigma ^2$ of the lognormal distribution are formulated as: $\mu = e^{\theta + \omega ^2/2}$ and $\sigma ^2 = e^{2\theta + \omega ^2}\left(e^{\omega ^2}-1\right)$
Discrete statistical distribution
Types of discrete statistical distributions are: Bernoulli, uniform discrete, binomial, geometric and Poisson distributions.
The probability mass function (PMF), that is, the distribution function for a discrete random variable, is visualised or presented as a line diagram.
The PMF of discrete variables is very useful to model qualitative measurement results that are commonly presented as integer values, for example, the colour code of a paint: 1 (black) and 0 (white). Other examples are the number of defective parts per batch and the number of people arriving in a queue.
Bernoulli distribution
Bernoulli distribution has only two outputs. The PMF $f(x)$ of the Bernoulli distribution is: $f(1)=p$ and $f(0)=1-p=q$.
Figure 14 shows an example of the PMF of Bernoulli distribution.
Figure 14: The PMF of the Bernoulli distribution with $p=0.6$ and $q=0.4$.
The mean $\mu$ and variance $\sigma ^2$ of the Bernoulli distribution are formulated as: $\mu = p$ and $\sigma ^2 = p(1-p)=pq$
Uniform discrete distribution
The PMF of the discrete uniform distribution is formulated as: $f(x_{i})=\frac{1}{n}$
Where $n$ is the number of possible values.
The PMF of discrete uniform distribution is shown in figure 15.
Figure 15: The PMF of the uniform discrete distribution with $a=1$ and $b=5$.
Meanwhile, the mean $\mu$ and variance $\sigma ^2$ of the discrete uniform distribution are formulated as: $\mu = \frac{a+b}{2}$ and $\sigma ^2 = \frac{(b-a+1)^2-1}{12}$
Where $a$ and $b$ are the minimum and maximum values of the random variable of the discrete uniform distribution.
Binomial distribution
The PMF of the binomial distribution is: $f(x)=\binom{n}{x}p^x (1-p)^{n-x}$ for $x=0,1,\dots,n$.
Where $n$ is the number of independent trials. Each trial has only two outputs, for example 0 and 1. $p$ is the probability of output 1 at each trial and has values $0<p<1$.
Figure 16 shows the PMF of binomial distributions with different $n$ and $p$ values.
Figure 16: The PMF of binomial distributions with different $n$ and $p$ values.
The mean $\mu$ and variance $\sigma ^2$ of the binomial distribution are formulated as: $\mu = np$ and $\sigma ^2 = np(1-p)$
Geometric distribution
Geometric distribution describes the number of trials of a random variable $X$ until a certain output $A$ is obtained, for example the first occurrence of a failure or the first success.
The PMF of the geometric distribution is: $f(x)=(1-p)^{x-1}p$ for $x=1,2,3,\dots$
Where $p$ is the distribution parameter with value $0<p<1$. Figure 17 shows the PMF of geometric distributions.
Figure 17: The PMF of geometric distributions with different $p$.
The mean $\mu$ and variance $\sigma ^2$ of the geometric distribution are formulated as: $\mu = \frac{1}{p}$ and $\sigma ^2 = \frac{1-p}{p^2}$
Poisson distribution
The PMF of the Poisson distribution is: $f(x)=\frac{e^{-\lambda}\lambda ^x}{x!}$ for $x=0,1,2,\dots$
Where the parameter $\lambda$ has value $\lambda > 0$; as a limit of the binomial, $\lambda = pn$, where $n$ is the number of trials and $p$ is the probability that an output with a certain value $A$ occurs at each trial.
The PMF of Poisson distribution with different values of $\lambda$ is shown in figure 18.
Figure 18: The PMF of the Poisson distribution with different values of $\lambda$.
The mean $\mu$ and variance $\sigma ^2$ of the Poisson distribution are formulated as: $\mu = \lambda$ and $\sigma ^2 = \lambda$
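This equality of mean and variance is easy to check by simulation (a sketch with NumPy, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 4.0
samples = rng.poisson(lam, size=1_000_000)
print(samples.mean(), samples.var())  # both close to lambda = 4.0
```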
The central limit theorem: Demystifying its meaning
In this section, we will demystify the true meaning of the central limit theorem.
Very commonly, people think that the central limit theorem says that any random variable, if the number of trials or repetitions is very large towards infinity, will follow a normal distribution.
This is not correct!
This is because any event in nature has its own statistical distribution. For example, the distribution of dice outputs will always follow a uniform distribution no matter how many times we repeat the dice experiments.
The true meaning of the central limit theorem is that when there are several or many random variables with different statistical distributions, if these variables are summed together, then the sum will follow a normal distribution. In addition, the theorem says that the arithmetic average of a random variable with a specific distribution will, for a high number of repetitions, follow a normal distribution.
The central limit theorem can be summarised mathematically as follow:
If $y_{1}, y_{2},\dots,y_{n}$ is a series of $n$ trials, that is, independent random variables with $E(y_{i})=\mu$ and $V(y_{i})=\sigma ^2$ (both of these finite numbers), and $x=y_{1}+ y_{2}+\dots+ y_{n}$, then $Z_{n}=\frac{x-n\mu}{\sigma \sqrt{n}}$
has an approximately normal distribution $N(0,1)$, in the sense that if $F_{n}(z)$ is the distribution function of $Z_{n}$ and $\varphi(z)$ is the distribution function of a random variable ~$N(0,1)$, then $\lim_{n \to \infty} F_{n}(z)=\varphi(z)$.
Figure 19: The illustration of the central limit theorem.
Figure 19 above shows the illustration of the central limit theorem based on the theorem's definition. In figure 19, there are three random variables following uniform distributions with different parameters ($a$, $b$ and number of trials); when the average of the summed three random variables is calculated, this average value will follow a normal distribution.
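A small simulation (our own, with NumPy) illustrates this: averaging uniform draws produces approximately normal averages, even though each individual draw is uniform:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 30, 100_000

# Average of n uniform draws (mean 0.5, variance 1/12), repeated many
# times: by the central limit theorem the averages are close to normal.
averages = rng.uniform(0.0, 1.0, size=(reps, n)).mean(axis=1)
print(averages.mean())                       # ~0.5
print(averages.std(), np.sqrt(1 / 12 / n))   # both ~0.0527
```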
The application of the central limit theorem to measurement is, for example, to estimate measurement uncertainty using the Monte-Carlo (MC) method. The MC method commonly has inputs of variables with different statistical distributions that model the random variables. The output of the MC simulation, with a sufficiently large number of repetitions, will follow a normal distribution.
Very commonly, the estimated uncertainty from an MC simulation will be calculated as the standard deviation of the normal distribution of the simulation outputs.
In this post, the fundamentals of statistical distributions and their mean and variance parameters are presented in detail.
Following the explanation of these fundamentals, various statistical distributions of continuous and discrete random variables are presented, both their mathematical definitions and their visual plots.
Finally, the true meaning of the central limit theorem is explained at the end of the post to correct a common misunderstanding of the theorem.
This knowledge of statistical distribution is essential to perform scientific analyses for research. For example, just to mention a few, to perform statistical data analyses and to estimate measurement uncertainty.
Physics Molecular, Cell, and Developmental Biology or - Physiological Sciences Up to three XL-level electives may be taken in other non-science disciplines such as humanities or social sciences. Physics | UCLA Graduate Programs. UCLA Concurrent courses are also accepted for credit. About 5a Ucla Physics Final. Known for its accuracy, clarity, and dependability, Meriam, Kraige, and Bolton's Engineering Mechanics: Dynamics 8th Edition has provided a solid foundation of mechanics principles for more than 60 years. Physics 5B Lecture Switch. The aspect ratios of the nanowires are controlled by the concentration of boride in molten aluminum, and the nanowires grow along the boron-boron chains, confirmed via electron diffraction. As of now, you can simulate the dynamic body balancing abilities and try out the shot wounds and see how the peds react with Euphoria and. And those like me who rely heavy on the Editor, the screen can start to blend together after a long day or just getting frustrated. A picosecond is a millionth of a millionth of a second. Students currently taking the Physics 6: "Physics for Life. 3: Students may choose to take the Physics 7 series or the Physics 5 series. I see my papers (Journal of Physics: Conference Series, 2018, v. Category: Mods for GTA San Andreas. Environmental Systems and Society. Our rigorous post-baccalaureate certificate program for pre-health students is approved for Federal Financial Aid and provides a structured academic preparation for those planning to apply to health professional programs including medical, dental, veterinary, nursing, pharmacy, physician assistant, and physical therapy, among others. 100% Upvoted. The easy to use interface offers features such as searching and replacing, exporting, checksums/digests, insertion of byte patterns, a file shredder. For the 5B physics, we show in Figure 4 the comparison between results obtained with the LR CMIP5 grid and the old source code version (taking the results from the CMIP5 archive) and with the LR CMIP6 grid and the new source code, as well as a simulation on the CMIP6 grid run with the stabilized version of the boundary layer scheme 5B s. Spring 2021 Summer A 2021 Summer C 2021. Production of X-rays. Hello, welcome to The Law of physics the FULL game. As a land grant institution, we pay our respects to the honuukvetam (ancestors) 'ahiihirom (elders), and 'eyoohiinkem (our relatives/relations) past, present, and emerging. Recommended: knowledge of differential equations equivalent to Mathematics 134 or 135 or Physics 131 and of analytic mechanics equivalent to Physics 105A. Transfer students enter UCLA as juniors (third-year students), having taken enough courses at another institution. Survey of modern physics intended for general UCLA students. Religious Studies. 5 B can be prepared as nanowires through flux growth. ClueWeb09 - 1B web pages 2 ClueWeb12 - 733M web pages CommonCrawl Web Data over 7 years 1 Criteo click-through data Internet-Wide Scan Data Repository. js a JS client-side library for creating graphic and interactive experiences, based on the core principles of Processing. Mastering Physics. P2P - ONE FTP LINK - TORRENT. UCLA is not responsible for your admission to any particular school. arXiv is a free distribution service and an open-access archive for 2,000,301 scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics. 
Physics 2: Score of 3, 4 or 5 earns 8 units elective credit. An introduction to the intellectual enterprises of computer science and the art of programming. Also entertainment, business, science, technology and health news. Biochemistry (CHEM) Chem 153A, 153B, 153C, 153L, 154, 156. The Institute for Society and Genetics (ISG) is an interdisciplinary unit that encourages scholarly research and educates students and the public about the ethical, legal and societal implications and interconnections of modern biotechnology, genetics and genomics. Physics 5A, 5B/5BL, 5C/5CL, or Physics 7A/B/C (5C or 7C can be taken concurrently) Hours & Format Fall and/or spring: 15 weeks - 3 hours of lecture and 1 hour of laboratory per week. Brain and Behavioral Health. Hong-Kai Zhao, Ph. Write your answers in scientific notation. The most popular research, guides, news and more in artificial intelligence. physics xl 5b Physics for Life Sciences Majors: Thermodynamics, Fluids, Waves, Light, and Optics This is the second course in the pre-medical physics sequence, which provides an introduction to thermal properties of fluids, sound, light, waves, and optics. Bert Weckhuysen FRSC, Utrecht University, The Netherlands. Hello everyone! I have a time conflict next quarter for physics 5b so if anyone has lecture 3, 4, or 5 and is willing to switch into lecture 1 please pm me! (I have disc 1D which is Friday 1-1:50 pm) Thank you! 0 comments. Fraser Stoddart (chemistry, 2016) and. With over 2 million users on its platform today, Crypto. The goal of this journal is to provide a platform for scientists and academicians all over the world to publish papers (manuscripts), promote, share, and discuss various new issues and developments in different areas of modern physics. School of Engineering students only. Ut tortor urna; ultricies at sem nec. In Fall 2017. Carnegie Mellon. The entire school switching to Canvas at the same time as having the first 2 weeks online is a recipe for disaster. Apart from the discovery of weak neutral currents [1] the most striking development has been the observation of v and v̄ interactions with two muons in the final state [2a] (dimuon events) and more recently of v. Physics 5B Experimental Physics I A few months into founding Reddit, Inc. Tagged a github release of Bullet Physics and PyBullet, both version 3. The areas embraced by UCLA physics research span the range from the well-established disciplines of "big science," e. It is home to the College of Letters, Arts and Sciences and 21 exceptional academic schools and units. The MCDB department does not approve Biochemistry/MCDB or. Oklahoma flips UCF transfer QB Dillon Gabriel from UCLA. physics 5b lab switch hey if anyone is interested in switching into an 8pm lab on monday (lab g28), please pm me! i'd prefer if u had a lab on tuesday or thursday but any day works :) 0 comments. A freshly made crystal of lithium niobate has an electric field of 15 MV/cm! See more. Not all majors will be listed on this guide, for Majors Agreements not listed please refer to ASSIST. You can also talk to older premeds about what to expect from different classes. org 2019-20 agreements. Designed for students who are planning to transfer to a university in the CSU or UC system. Hand Physics Lab v1. Try it free!. Bullet Physics is a physics library used by various games and movies, including GTA 4 and GTA 5. The energy of a sound wave in a fluid can concentrate by 12 orders of magnitude to create flashes of light that can be shorter than 50 picoseconds. 
Phy Sci M106 Neurobiology of Bias and Discrimination (4). Helpfulness 4. University of California, Los Angeles. International students must submit a TOEFL with a score of at least 87 (computer-based) or 560 (paper-based), or an IELTS with a score of at least 7. Create your own all-in-one digital storefront. 7A, 7B, 7C , 23L (H) indicates that an Honors section may be available [ ] Choose one series enclosed in brackets. No technical skills required. Term All Terms Summer 2021 Fall 2021 Winter 2022 Spring 2022. AP Physics C: Electricity and Magnetism with a minimum score of 4. of Physics at the University of Oxford!. Kris is a serial entrepreneur with a strong track record of building and scaling technology businesses. Chris Vuille - This updated Eleventh Edition of COLLEGE PHYSICS helps students master physical concepts, improve their problem-solving skills, and enrich their under (). Requisite or corequisite: Physics 1C or 5B or 6C. Python | PRAW - Python Reddit API Wrapper. Online Education template Based on HTML5. Book digitized by Google and uploaded to the Internet Archive by user tpb. Restriction: School of Physical Sciences students only. Sears and Zemansky's University Physics with Modern Physics, Technology Update. So I took Physics 5A this past Fall semester and I honestly didn't learn anything. Physiological Science 107 Physiological Science 111A Physiological Science 111B Physiological Science 111L Chemistry 153A Biochemistry Five upper division physiological science electives. Is Lancaster a good teacher? Heard his tests are kinda wack, but as long as he is a good teacher, it shuld be fine lol. Has anybody taken both 5B and 5C together in a quarter? If so, how was that? I personally didn't do it but would NOT recommend it. Laurence Lavelle's classes from 1997-2013. AP Calculus BC with a minimum score of 5 Overlaps with MATH 3A, ICS 6N. These will be scheduled during the time slots of the tutorial sessions. Reviews, ratings and grades for PHYSICS 5B - Physics for Life Sciences Majors: Thermodynamics, Fluids, Waves, Light, and Optics | Bruinwalk is your guide to the best professors, courses and apartments in UCLA. Upper Division Major Requirements. This high valuable Computer Science Community College Reddit will give you access to many aspects of jobs. First Year Fall Winter Spring LS 30A (OR MATH 3A) LS 7A GE LS 30B (OR MATH 3B) LS 7B GE STATS 13 OR LS40 LS 7C LS 23L Second […]. 1015) in "Web of Sc. Pre-Medical and General Science Studies. 0 Classroom Classroom Physics for Life Sciences Majors: Electricity, Magnetism, and Modern Physics UCLA Extension also offers post-bacc courses which, while they don't count towards the certificate, can be of value for a more in-depth look at various science and math topics. Translation into English started in 1958 with Russian volume 66. On top of that, depending on who's. Biggest Online Tutorials Library - The Best Content on latest technologies including C, C++, Java, Python, PHP, Machine Learning, Data Science, AppML, AI with Python, Behave, Java16, Spacy. For specific information regarding degree requirements for each, please refer to the information below, and the appropriate Major Requirements and College. 0001) and COMB-RP (p<0. 
Physics for Physics Majors—Fluids, Waves, Statistical and Thermal Physics (4) Continuation of PHYS 4A covering forced and damped oscillations, fluid statics and dynamics, waves in elastic media, sound waves, heat and the first law of thermodynamics, kinetic theory of gases, Brownian motion, Maxwell-Boltzmann distribution, second law of. de; [email protected] Join the web's most supportive community of creators and get high-quality tools for hosting, sharing, and streaming videos in gorgeous HD with no ads. Title, Subject, Instructor or Keyword. This game was made based off a minigame in another game and now i am gonna start to develop the actual FULL game for fun. HxD is a carefully designed and fast hex editor which, additionally to raw disk editing and modifying of main memory (RAM), handles files of any size. PHYSICS 5B-1: Arisaka, Katsushi: Physics for Life Sciences Majors: Thermodynamics, Fluids, Waves, Light, and Optics: PHYSICS 5B-2: Many UCLA courses on CCLE will be migrating to a new Learning Management System (LMS) called Bruin Learn, built on the Canvas platform, in Winter Quarter. ; Any upper division MCDB course will be accepted as an MCDB elective, EXCLUDING MCDB 100, 104AL, 138, 144, 150AL, 165A, 187AL, 187C, 187D, 190A-C, 192A, 192B, 193, 194A, and 199. Content will be added as time allows. Physics is a course about how the world works. and in Human Biology and Society, B. Prerequisite: MATH 2B or MATH 5B or MATH 7B or AP Calculus BC. More than 100 majors are offered within four academic divisions: Humanities, Life Sciences, Physical Sciences and Social Sciences. a) klase ieguva gada klases titulu Siguldas novada Domes organizētajā konkursā un balvā saņēma finansējumu klases ekskursijai. University of Reddit with corresponding subreddit.
szo uzh sdy gqk kwo val lie dzq oxv wzk jbs odi ufl wdm jty cno nfg sbd ubn tkm | CommonCrawl |
Selected articles from the International Conference on Intelligent Biology and Medicine (ICIBM) 2018: systems biology
An experimental design framework for Markovian gene regulatory networks under stationary control policy
Roozbeh Dehghannasiri1,
Mohammad Shahrokh Esfahani2 and
Edward R. Dougherty3, 4
BMC Systems Biology 2018, 12(Suppl 8): 137
A fundamental problem for translational genomics is to find optimal therapies based on gene regulatory intervention. Dynamic intervention involves a control policy that optimally reduces a cost function based on phenotype by externally altering the state of the network over time. When a gene regulatory network (GRN) model is fully known, the problem is addressed using classical dynamic programming based on the Markov chain associated with the network. When the network is uncertain, a Bayesian framework can be applied, where policy optimality is with respect to both the dynamical objective and the uncertainty, as characterized by a prior distribution. In the presence of uncertainty, it is of great practical interest to develop an experimental design strategy and thereby select experiments that optimally reduce a measure of uncertainty.
In this paper, we employ mean objective cost of uncertainty (MOCU), which quantifies uncertainty based on the degree to which uncertainty degrades the operational objective, that being the cost owing to undesirable phenotypes. We assume that a number of conditional probabilities characterizing regulatory relationships among genes are unknown in the Markovian GRN. In sum, there is a prior distribution which can be updated to a posterior distribution by observing a regulatory trajectory, and an optimal control policy, known as an "intrinsically Bayesian robust" (IBR) policy. To obtain a better IBR policy, we select an experiment that minimizes the MOCU remaining after applying its output to the network. At this point, we can either stop and find the resulting IBR policy or proceed to determine more unknown conditional probabilities via regulatory observation and find the IBR policy from the resulting posterior distribution. For sequential experimental design this entire process is iterated. Owing to the computational complexity of experimental design, which requires computation of many potential IBR policies, we implement an approximate method utilizing mean first passage times (MFPTs) – but only in experimental design, the final policy being an IBR policy.
Comprehensive performance analysis based on extensive simulations on synthetic and real GRNs demonstrate the efficacy of the proposed method, including the accuracy and computational advantage of the approximate MFPT-based design.
Gene regulatory networks
Mean objective cost of uncertainty (MOCU)
Network intervention
Markov chains
Dynamical intervention
A salient aim of translational genomics is to develop new drugs by constructing gene regulatory network (GRN) models that characterize the interactions among genes and then using these models to design therapeutic interventions. Most intervention strategies in the literature assume perfect knowledge of the network model. Unfortunately, this assumption is unrealistic in many real-world biomedical applications, as uncertainty is inherent in genomics owing to the complexity of biological systems, experimental limitations, noise, etc. The presence of model uncertainty degrades the performance of interventions.
Markovian genetic networks, an example being probabilistic Boolean networks (PBNs), have received great attention in recent years [1–5]. These networks have been shown to be effective in mimicking the behavior of biological systems, particularly because they capture the randomness of biological phenomena by means of a transition probability matrix (TPM). The long-run behavior of a Markovian network is determined by a steady-state distribution over network states. Designing therapeutic interventions for these networks, often studied in the context of Markov decision processes (MDPs), has been extensively studied over the past two decades [6]. The basic assumption behind many intervention algorithms is that the TPM is perfectly known.
When dealing with network models possessing uncertainty, it is prudent to design a robust intervention that provides acceptable performance across an uncertainty class of possible models compatible with the current state of knowledge. In general, the problem of designing robust operators (or interventions in this paper) is typically viewed from two different perspectives: minimax robustness and Bayesian robustness. Under a minimax criterion, the robust operator has the best worst-case performance across the uncertainty class. The main problem with minimax robustness is that it is very conservative and gives too much attention to outlier models in the uncertainty class that may possess negligible likelihood.
Bayesian robustness addresses this issue by assigning a prior probability distribution reflecting existing knowledge about the model. Under this criterion, the aim is to find a robust operator possessing the optimal performance on average relative to this prior distribution. In the context of Bayesian robustness, when optimality is relative to the prior distribution, the resulting operator is called an intrinsically Bayesian robust (IBR) operator, examples being IBR Kalman filter in signal estimation [7], IBR signal compression [8], and IBR network structural intervention for gene regulatory networks [9, 10]. When optimality is relative to the posterior distribution obtained by incorporating observations into the prior distribution, the robust operator is called an optimal Bayesian operator [11–14].
It is of prime interest to reduce model uncertainty via additional experiments and thereby improve the performance of the intervention. Since conducting all potential experiments is not feasible in many biomedical applications owing to operational constraints such as budget, time, and equipment limitations, it is imperative to utilize an experimental design strategy to rank experiments and then conduct only those experiments with high priority [15–18].
As experiments are aimed at reducing uncertainty, a crucial step in experimental design is uncertainty quantification. From a translational perspective, we are not concerned with overall uncertainty, but rather with the degradation induced by the uncertainty in the intervention performance. Taking this into account, we employ an objective-based uncertainty quantification scheme called the mean objective cost of uncertainty (MOCU) [10]. MOCU has been successfully used for developing experimental design in gene regulatory networks when structural interventions are concerned [15, 19–21].
In this paper, we extend the application of the objective-based experimental design for GRNs to the realm of dynamical interventions. The interactions among genes are characterized by a set of conditional probability matrices where the conditional probabilities in each matrix correspond to the regulatory relationship between a gene and its regulating genes. We address the experimental design problem involving a GRN model in which a number of probabilities across conditional probability matrices are missing. Unknown conditional probabilities are represented by conjugate prior distributions which are closed under consecutive observations. In this paper, we show how the uncertainty in the conditional probabilities can be translated into the uncertainty in an unknown transition probability matrix. Furthermore, we show how additional information in terms of a trajectory of consecutive state transitions from the true system, if available, can be integrated to update prior distributions to posterior distributions containing lesser uncertainty. Deriving IBR control policies, which involves minimizing the average cost relative to the prior distribution among all stationary control policies, is at the very core of our experimental design calculations. In this regard, we take advantage of the fact that an IBR control policy can be derived by using an effective transition probability matrix that represents the uncertainty class of transition probability matrices. We should emphasize the optimality of the IBR control policy, which is selected from all possible stationary policies as opposed to the model-constrained Bayesian robust (MCBR) control policy [22], which is selected from among only the policies that are optimal for networks belonging to the uncertainty class.
It is worth mentioning that due to the computational complexity limitation, we are only concerned with stationary control policies in this paper. Another approach for designing a Bayesian robust control policy is to design a non-stationary policy, referred to as the optimal Bayesian robust (OBR) control policy. In addition to the expected immediate cost and different future costs obtained due to being in different states at the next time steps, an OBR policy also considers the effect of observations obtained by different actions on the sequence of different posterior distributions, which makes an OBR policy be non-stationary. In an OBR setting, the control problem is transformed into an equivalent problem in which each state, being referred to as a hyperstate, contains both the ordinary state of the system and the state of the knowledge reflecting the prior information and the history of observations from the system. Utilizing the concept of hyperstates for designing OBR control policies has roots in the classical works of Bellman and Kalaba [23], Silver [24], Gozzolino [25], and Martin [26]. The major obstacle of the OBR theory is its enormous computational complexity [27–29], such that it cannot be applied to networks of larger than 4 genes, even when only network control is concerned [29], let alone experimental design whose complexity is several-fold more than that of the control problem. Hence, taking into account complexity considerations with OBR, we focus on IBR stationary policies for our experimental design problem, which still requires massive computations but at a more tolerable cost compared to OBR policies.
To mitigate the computational complexity burden of experimental design, and considering the fact that computing the IBR control policy can be computationally demanding, we approximate it by using the method of mean first passage time (MFPT) [30]. The main motivation behind utilizing MFPT for controlling GRN networks is the desire to reach desirable states and leave undesirable states in the shortest time possible. Using this intuition, MFPT is used in [31] to derive a stationary control policy that can be used as an approximation for the optimal control policy and in [32] to find the best control gene. Using the concept of MFPT, we approximate the IBR control policy required for the experimental design and thereby lower the complexity of the experimental design. We emphasize that the MFPT approximation is only used for experimental design and that the implemented control policy will always be the optimal stationary control policy.
We summarize the main contributions of the paper. (1) Despite all the previous MOCU-based experimental design methods whose focus was on structural interventions [15, 19, 20], in this paper we consider the class of stationary interventions and derive a closed-form solution for the IBR stationary intervention when the TPM is unknown. (2) While in the previous works, uncertain parameters involve a number of regulatory edges between genes, in this paper we consider the case that a number of conditional probabilities characterizing regulatory relationships between genes are unknown. Given that conditional probabilities can be estimated using time series gene expression data generated through a biological experiment, it is more realistic to consider these probabilities, rather than regulatory edges, as the outcomes of biological experiments. This new uncertainty assumption requires us to define a new uncertainty class and prior probability model. (3) To address the complexity concerns of the proposed method, we propose an approximate experimental design method utilizing mean first passage times (MFPTs) in which we extend the application of MFPT-based controls to unknown TPMs.
Markovian regulatory networks
In a network with \(n\) genes, a set of binary variables \(V=\{X_1,X_2,\dots,X_n\}\), \(X_i\in\{0,1\}\), determines the expression states of the genes. The vector of gene expression values at time \(t\), \(\mathbf{X}(t)=(X_1(t),\dots,X_n(t))\), referred to as the gene activity profile (GAP), defines the network state at each time step. In a Markovian regulatory network, network dynamics involves a trajectory of states over time governed by the transition rule \(\mathbf{X}(t+1)=f(\mathbf{X}(t),w(t))\), \(t\geq 0\), where \(w(t)\in\Xi\) captures randomness in the system and \(f:\mathcal{S}\times\Xi\rightarrow\mathcal{S}\), \(\mathcal{S}=\{0,1,\dots,2^{n}-1\}\) being the set of corresponding decimal representations for the network states, is a mapping that characterizes the state transitions in the network. The sequence of states over time can be viewed as a Markov chain characterized by a transition probability matrix (TPM) \(\mathbf{P}=[P_{ij}]_{i,j=0}^{2^{n}-1}\), where \(P_{ij}=\text{Pr}[\mathbf{X}(t+1)=j|\mathbf{X}(t)=i]\), \(\text{Pr}[\cdot]\) being the probability operator. An ergodic Markov chain is guaranteed to possess a unique steady-state distribution \(\boldsymbol{\pi}\), such that \(\boldsymbol{\pi}^{T}=\boldsymbol{\pi}^{T}\mathbf{P}\), \(T\) being the transpose operator.
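As a concrete illustration of the steady-state relation, the minimal sketch below (assuming numpy; the 2-state TPM is hypothetical) computes \(\boldsymbol{\pi}\) as the left eigenvector of \(\mathbf{P}\) associated with eigenvalue 1.

```python
import numpy as np

# Hypothetical 2-state transition probability matrix (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# The steady-state distribution satisfies pi^T = pi^T P, i.e., pi is the
# left eigenvector of P for eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi /= pi.sum()

print(pi)  # approximately [0.8, 0.2] for this P
```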
Assume that the expression state of a gene is solely determined by its regulating genes. In other words, given the values of its regulating genes, the expression state of a gene is conditionally independent of those of other genes. Let the vector of expression states of the regulating genes for \(X_i\) be denoted by \(\Gamma_{X_i}\), where the ordering in \(\Gamma_{X_i}\) is induced from the ordering \(X_1,X_2,\dots,X_n\). In a binary setting, if \(X_i\) has \(k_i\) regulating genes, then \(\Gamma_{X_i}\) can take \(2^{k_i}\) possible vector values. To define the regulatory relationship between gene \(X_i\) and its regulating genes, we can construct a conditional probability matrix (CPM) \(C(X_i)\) of size \(2^{k_i}\times 2\), where each row of the matrix corresponds to a certain combination of gene expressions in \(\Gamma_{X_i}\) and the first and second columns correspond to the conditional probability of gene \(X_i\) being 0 and 1, respectively, i.e.,
$$\begin{array}{*{20}l} & C_{j,1}(X_{i})=\text{Pr}\left[X_{i}=0|\Gamma_{X_{i}}=j\right], \\ & C_{j,2}(X_{i})=\text{Pr}\left[X_{i}=1|\Gamma_{X_{i}}=j\right], \end{array} $$
where by \(\Gamma_{X_{i}}=j\) we mean that the equivalent decimal value of the vector of expression states for the regulating genes of \(X_i\) is \(j\). A network of \(n\) genes, with each gene \(X_i\) having \(k_i\) regulating genes, can be completely defined by \(n\) CPMs \(C(X_i)\), \(1\leq i\leq n\), each of size \(2^{k_{i}}\times 2\). These matrices can be used to construct the transition probability matrix. Owing to the mutual conditional independence of all genes given the values of all regulating genes, the entry \(P_{ij}\) of the TPM can be found as
$$\begin{array}{*{20}l} P_{ij}& =\prod_{k=1}^{n}\text{Pr}\left[X_{k}=j_{k}\left|\bigcup\limits_{l=1}^{n}\right.\Gamma_{X_{l}}[i]\right] \\ & =\prod_{k=1}^{n}\text{Pr}\left[X_{k}=j_{k}\left|\Gamma_{X_{k}}\right.[i]\right] \notag \\ & =\prod_{k=1}^{n}C_{\Gamma_{X_{k}}[i],j_{k}+1}(X_{k}), \end{array} $$
where \(j_k\) is the binary value of the \(k\)-th gene in state \(j\) and \(\Gamma_{X_{k}}[i]\) is the vector of binary values of the regulating genes for \(X_k\) extracted from the representation of state \(i\). For example, consider a 3-gene network, \(n=3\), in which gene \(X_1\) (\(k=1\) in (2)) is regulated by genes \(X_2\) and \(X_3\). For this network, when computing \(P_{14}\) (\(i=1\) and \(j=4\) in (2)), \(j_k=1\) (as \(X_1=1\) for \(j=4\)) and \(\Gamma_{X_{1}}[i]=(0,1)\) (as \((X_2,X_3)=(0,1)\) for \(i=1\)).
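The construction in (2) is mechanical enough to sketch in code. The following minimal Python sketch (the network wiring and CPM values are hypothetical, chosen only for illustration) builds the full \(2^n\times 2^n\) TPM from per-gene CPMs; states are encoded as decimals with \(X_1\) as the most significant bit, matching the example above.

```python
import numpy as np

def bits(state, n):
    """Binary representation of a decimal state as a tuple (X1, ..., Xn)."""
    return tuple((state >> (n - 1 - k)) & 1 for k in range(n))

def build_tpm(n, regulators, cpm):
    """Construct the 2^n x 2^n TPM from conditional probability matrices, Eq. (2).

    regulators[k]: tuple of (0-indexed) genes regulating gene k.
    cpm[k]: array of shape (2**len(regulators[k]), 2); row j holds
            Pr[X_k = 0 | Gamma_{X_k} = j] and Pr[X_k = 1 | Gamma_{X_k} = j].
    """
    P = np.zeros((2 ** n, 2 ** n))
    for i in range(2 ** n):
        xi = bits(i, n)
        for j in range(2 ** n):
            xj = bits(j, n)
            p = 1.0
            for k in range(n):
                # Decimal value of the regulators' expressions in state i.
                gamma = int("".join(str(xi[r]) for r in regulators[k]), 2)
                p *= cpm[k][gamma, xj[k]]
            P[i, j] = p
    return P

# Hypothetical 3-gene example: X1 regulated by (X2, X3), X2 by X1, X3 by X2.
regulators = [(1, 2), (0,), (1,)]
cpm = [np.array([[0.8, 0.2], [0.3, 0.7], [0.6, 0.4], [0.1, 0.9]]),
       np.array([[0.7, 0.3], [0.2, 0.8]]),
       np.array([[0.9, 0.1], [0.4, 0.6]])]
P = build_tpm(3, regulators, cpm)
assert np.allclose(P.sum(axis=1), 1.0)  # each row is a probability distribution
```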
From a translational perspective, the states of a network can be partitioned into two sets: desirable states \(\mathcal {D}\), being associated with healthy phenotypes, and undesirable states \(\mathcal {U}\), corresponding to pathological cell functions such as cancer. The goal of therapeutic interventions is to alter the dynamical behavior of the network in such a way as to reduce the steady-state probability \(\pi _{\mathcal {U}}={\sum \nolimits }_{i\in \mathcal {U}}\pi _{i}\) of the network entering the undesirable states. There are two different approaches for network interventions: structural interventions and dynamical interventions. In a structural intervention, the goal is to modify the dynamical behavior of the network via a one-time change in its underlying regulatory structure [9, 33–35]. Dynamical interventions are typically studied in the framework of Markov decision processes and are characterized by control policies. These interventions usually involve the change in the expression of one or more genes, being called control genes, and can be applied either over a finite-time [36–38] or an infinite-time horizon [31, 39].
Optimal dynamical control
Network interventions in this paper belong to the category of dynamical interventions. We assume that there is a control gene \(g\in V\) whose expression value is affected by a binary control input \(c\in\mathcal{C}\), \(\mathcal{C}=\{0,1\}\). The value of \(g\) is flipped when \(c=1\) and not flipped when \(c=0\). It is straightforward to extend the results to \(m\) control genes, where there are \(2^m\) different control actions. Let \(\mathbf{P}(c)=\left[P_{ij}(c)\right]_{i,j=0}^{2^{n}-1}\) denote the controlled TPM, i.e.,
$$\begin{array}{*{20}l} P_{ij}(c)=\text{Pr}\left[\mathbf{X}(t+1)=j|\mathbf{X}(t)=i,c(t)=c\right]. \end{array} $$
The controlled TPM can be found using the uncontrolled TPM P as
$$\begin{array}{*{20}l} P_{ij}(c)=\left\{ \begin{array}{l} P_{ij}\qquad \qquad \text{if}\,\,c=0 \\ P_{\tilde{i}j}\qquad \qquad \text{if}\,\,c=1 \\ \end{array} \right., \end{array} $$
where states \(\tilde {i}\) and i differ only in the value of gene g.
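Equation (4) amounts to a row permutation of the uncontrolled TPM. A minimal sketch (assuming numpy and the same decimal state encoding as above; the 1-indexed gene-position convention is an assumption made for illustration):

```python
import numpy as np

def controlled_tpm(P, n, g):
    """Controlled TPM P(c=1) per Eq. (4): row i of P(1) equals row i~ of P,
    where i~ and i differ only in the value of control gene g (1-indexed
    from the left in the n-bit state representation)."""
    P1 = np.empty_like(P)
    mask = 1 << (n - g)            # bit position of gene g in the state word
    for i in range(P.shape[0]):
        P1[i, :] = P[i ^ mask, :]  # flip gene g in the current state
    return P1
```

P(c=0) is simply the uncontrolled TPM itself, so stacking `P` and `controlled_tpm(P, n, g)` gives the full family \(\mathbf{P}(c)\) over \(c\in\{0,1\}\).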
The problem of optimal control can be modeled as an optimal stochastic control problem [39]. Let the cost function \(r(i,j,c):\mathcal{S}\times\mathcal{S}\times\mathcal{C}\rightarrow\mathbb{R}\) determine the immediate cost accrued when the network transitions from state \(i\) to state \(j\) under control action \(c\). This cost reflects both the desirability of states and the cost of imposing control actions. Usually, larger cost values are assigned to the undesirable states and to applying the control action. This cost function is assumed to be time-invariant, bounded, and nonnegative. We consider an infinite-horizon discounted cost approach as proposed in [39], in which a discount factor \(0<\zeta<1\) is used to guarantee convergence [40]. Control actions are chosen over time according to a control policy \(\mu=(\mu_1,\mu_2,\dots)\), \(\mu_{t}:\mathcal{S}\rightarrow\mathcal{C}\). In this setting, given a policy \(\mu\) and an initial state \(X_0\), the expected total cost is
$$\begin{array}{*{20}l} J_{\mu}(X_{0})={\lim}_{M\rightarrow \infty}\mathrm{E}\left[\left.\sum_{t=0}^{M-1}\zeta^{t}\, r\left(\mathbf{X}(t),\mathbf{X}(t+1),\mu_{t}(\mathbf{X}(t))\right)\right| X_{0}\right], \end{array} $$
where the expectation is taken relative to the probability measure over the space of state and control action trajectories. If \(\Pi\) denotes the space of all admissible policies, we seek an optimal control policy \(\mu^{\ast}(X_0)\) such that
$$\begin{array}{*{20}l} \mu^{\ast}(X_{0})=\underset{\mu\in\Pi}{\arg\min}\, J_{\mu}(X_{0})\qquad\qquad \forall X_{0}\in\mathcal{S}. \end{array} $$
The corresponding optimal expected total cost is denoted by \(J^{\ast}(X_0)\). It has been shown that the optimal policy \(\mu^{\ast}(X_0)\) exists and can be found by solving Bellman's optimality equation [40],
$$\begin{array}{*{20}l} J^{\ast}(i)\,=\,\underset{c\in\mathcal{C}}{\min}\left[\sum\limits_{j=0}^{2^{n}-1}P_{ij}(c)\left(r(i,j,c)+\zeta J^{\ast}(j)\right)\right]\quad\forall i\in\mathcal{S}. \end{array} $$
The optimal cost \(\mathbf{J}^{\ast}=(J^{\ast}(0),\dots,J^{\ast}(2^{n}-1))\) is the unique solution of (7) among all bounded functions, and the control policy \(\mu^{\ast}\) that attains the minimum in (7) is stationary, i.e., \(\mu^{\ast}=(\mu^{\ast},\mu^{\ast},\dots)\) [40]. In order to find the fixed point of Bellman's optimality equation and thereby find the optimal control policy, dynamic programming algorithms can be used, including the value iteration algorithm, which iteratively estimates the cost function.
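A compact value-iteration sketch for (7) follows (a minimal implementation assuming numpy; the stacked arrays of controlled TPMs and costs are hypothetical inputs):

```python
import numpy as np

def value_iteration(P, r, zeta=0.9, tol=1e-8):
    """Solve Bellman's optimality equation (7) by value iteration.

    P: array of shape (|C|, 2^n, 2^n), the controlled TPMs P(c).
    r: array of shape (|C|, 2^n, 2^n), immediate costs r(i, j, c).
    Returns the optimal cost J* and a stationary policy mu*.
    """
    n_states = P.shape[1]
    J = np.zeros(n_states)
    while True:
        # Q[c, i] = sum_j P_ij(c) * (r(i, j, c) + zeta * J(j))
        Q = np.einsum('cij,cij->ci', P, r) + zeta * np.einsum('cij,j->ci', P, J)
        J_new = Q.min(axis=0)
        if np.max(np.abs(J_new - J)) < tol:
            return J_new, Q.argmin(axis=0)   # converged cost and policy
        J = J_new
```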
MOCU-based optimal experimental design framework
In this section, we review the general framework of the experimental design method in [15], which is based on the concept of the mean objective cost of uncertainty (MOCU) [10]. Let \(\boldsymbol{\theta}=(\theta_1,\theta_2,\dots,\theta_T)\) be composed of \(T\) uncertain parameters in a network model. The set of all possible realizations for \(\boldsymbol{\theta}\) is denoted by \(\Theta\) and is called an uncertainty class. A prior distribution \(f(\boldsymbol{\theta})\) is assigned to \(\boldsymbol{\theta}\), reflecting the likelihood of each realization of \(\boldsymbol{\theta}\) being the true value.
For each possible intervention \(\psi\in\Psi\), the class of interventions, and each model \(\boldsymbol{\theta}\) in the uncertainty class, an error function \(\xi_{\boldsymbol{\theta}}(\psi)\) determines the error of \(\psi\) when applied to the network model \(\boldsymbol{\theta}\). The optimal intervention \(\psi(\boldsymbol{\theta})\) has the lowest error relative to model \(\boldsymbol{\theta}\), i.e., \(\xi_{\boldsymbol{\theta}}(\psi(\boldsymbol{\theta}))\leq\xi_{\boldsymbol{\theta}}(\psi)\), \(\forall\psi\in\Psi\). When dealing with an uncertainty class \(\Theta\), the intrinsically Bayesian robust (IBR) intervention \(\psi_{\text{IBR}}(\Theta)\) is defined as
$$\begin{array}{*{20}l} \psi_{\text{IBR}}(\Theta)=\underset{\psi\in\Psi}{\arg\min}\,\mathrm{E}_{\boldsymbol{\theta}}\left[\xi_{\boldsymbol{\theta}}(\psi)\right], \end{array} $$
where the expectation is taken relative to the prior distribution f(θ).
An IBR intervention is optimal on average rather than at each specific network model θ; therefore, relative to θ an objective cost of uncertainty (OCU) can be defined as
$$\begin{array}{*{20}l} \mathrm{U}_{\Psi,\xi}(\boldsymbol{\theta})=\xi_{\boldsymbol{\theta}}(\psi_{\text{IBR}}(\Theta))-\xi_{\boldsymbol{\theta}}(\psi(\boldsymbol{\theta})). \end{array} $$
Taking the expectation of \(\mathrm{U}_{\Psi,\xi}(\boldsymbol{\theta})\) relative to \(f(\boldsymbol{\theta})\), we obtain the mean objective cost of uncertainty (MOCU):
$$\begin{array}{*{20}l} \mathrm{M}_{\Psi,\xi}(\Theta)&=\mathrm{E}_{\boldsymbol{\theta}}[\mathrm{U}_{\Psi,\xi}(\boldsymbol{\theta})] \\ &=\mathrm{E}_{\boldsymbol{\theta}}[\xi_{\boldsymbol{\theta}}(\psi_{\text{IBR}}(\Theta))-\xi_{\boldsymbol{\theta}}(\psi(\boldsymbol{\theta}))]. \end{array} $$
MOCU measures the model uncertainty in terms of the expected increased error due to applying an IBR intervention (the chosen intervention in the presence of uncertainty) instead of an optimal intervention (the chosen intervention in the absence of uncertainty). Uncertainty quantification based on MOCU can lay the groundwork for objective-based experimental design.
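For a finite intervention class, MOCU in (10) can be estimated by straightforward Monte Carlo over the prior. The sketch below is a generic illustration; `sample_theta`, `error`, and `Psi` are hypothetical placeholders standing in for the prior sampler, the error function \(\xi_{\boldsymbol{\theta}}(\psi)\), and the intervention class.

```python
import numpy as np

def mocu(sample_theta, error, Psi, n_samples=1000):
    """Monte Carlo estimate of MOCU, Eq. (10).

    sample_theta(): draws one theta from the prior f(theta).
    error(theta, psi): error of intervention psi on model theta.
    Psi: finite list of candidate interventions.
    """
    thetas = [sample_theta() for _ in range(n_samples)]
    # Average error of each intervention across the uncertainty class.
    avg_err = [np.mean([error(t, psi) for t in thetas]) for psi in Psi]
    psi_ibr = Psi[int(np.argmin(avg_err))]        # IBR intervention, Eq. (8)
    # MOCU: expected excess error of the IBR intervention over the
    # model-specific optimal intervention, Eq. (10).
    return np.mean([error(t, psi_ibr) - min(error(t, psi) for psi in Psi)
                    for t in thetas])
```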
Assume that corresponding to each parameter \(\theta_i\), there is an experiment \(\mathcal{E}_{i}\) that results in the exact determination of \(\theta_i\). The goal of experimental design is to determine which experiment should be conducted first so that model uncertainty is reduced optimally. Focusing on experiment \(\mathcal{E}_{i}\) and parameter \(\theta_i\), consider the case where the outcome of experiment \(\mathcal{E}_{i}\) is \(\theta^{\prime}_{i}\). Then the remaining MOCU given \(\theta_{i}=\theta^{\prime}_{i}\) is defined as
$$\begin{array}{*{20}l} &\mathrm{M}_{\Psi,\xi}\left(\Theta|\theta_{i}=\theta^{\prime}_{i}\right) \\ &\qquad=\mathrm{E}_{\boldsymbol{\theta}|\theta^{\prime}_{i}}\left[\xi_{\boldsymbol{\theta}}\left(\psi_{\text{IBR}}\left(\Theta|\theta_{i}=\theta^{\prime}_{i}\right)\right)-\xi_{\boldsymbol{\theta}}\left(\psi\left(\boldsymbol{\theta}|\theta_{i}=\theta^{\prime}_{i}\right)\right)\right], \end{array} $$
where the expectation is taken relative to the conditional distribution \(f\left(\boldsymbol{\theta}|\theta_{i}=\theta^{\prime}_{i}\right)\), \(\Theta|\theta_{i}=\theta^{\prime}_{i}\) is the reduced uncertainty class obtained after observing \(\theta_{i}=\theta^{\prime}_{i}\), and the vector \(\boldsymbol{\theta}|\theta_{i}=\theta^{\prime}_{i}\) is obtained from the vector \(\boldsymbol{\theta}\) by setting \(\theta_{i}\) to \(\theta_{i}^{\prime}\). Taking the expectation of (11) relative to the marginal distribution \(f\left(\theta^{\prime}_{i}\right)\), which is in fact the marginal distribution of the parameter \(\theta_i\), we obtain the expected remaining MOCU given that experiment \(\mathcal{E}_{i}\) is carried out (or, equivalently, that parameter \(\theta_i\) is determined):
$$\begin{array}{*{20}l} {}&\mathrm{M}_{\Psi,\xi}(\Theta;\theta_{i}) \\ {}&=\mathrm{E}_{\theta^{\prime}_{i}}\left[\mathrm{M}_{\Psi,\xi}\left(\Theta|\theta_{i}=\theta^{\prime}_{i}\right)\right] \\ {}&=\mathrm{E}_{\theta_{i}^{\prime}}\left[\mathrm{E}_{\boldsymbol{\theta}|\theta^{\prime}_{i}}\left[\xi_{\boldsymbol{\theta}}\left(\psi_{\text{IBR}}\left(\Theta|\theta_{i}=\theta^{\prime}_{i}\right)\right)-\xi_{\boldsymbol{\theta}}\left(\psi\left(\boldsymbol{\theta}|\theta_{i}=\theta^{\prime}_{i}\right)\right)\right]\right]. \end{array} $$
\(\mathrm{M}_{\Psi,\xi}(\Theta;\theta_{i})\) measures the pertinent uncertainty expected to remain in the model after conducting experiment \(\mathcal{E}_{i}\). The experiment \(\mathcal{E}_{i^{\ast}}\) that attains the minimum value of the expected remaining MOCU is called the optimal experiment and is suggested as the first experiment [15]:
$$\begin{array}{*{20}l} i^{\ast}=\underset{i\in\{1,2,\dots,T\}}{\arg\min}\,\mathrm{M}_{\Psi,\xi}(\Theta;\theta_{i}). \end{array} $$
The parameter \(\theta_{i^{\ast}}\) corresponding to \(\mathcal{E}_{i^{\ast}}\) is called the primary parameter. Note that (13) can be further simplified through some mathematical manipulations and by removing expressions that do not depend on the optimization variable [20]:
$$\begin{array}{*{20}l} i^{\ast}=\underset{i\in\{1,2,\dots,T\}}{\arg\min}\,\mathrm{E}_{\theta_{i}^{\prime}}\left[\mathrm{E}_{\boldsymbol{\theta}|\theta^{\prime}_{i}}\left[\xi_{\boldsymbol{\theta}}\left(\psi_{\text{IBR}}\left(\Theta|\theta_{i}=\theta^{\prime}_{i}\right)\right)\right]\right]. \end{array} $$
A number of experimental design methods based on the MOCU framework have been proposed in the literature [15, 19, 20]. In all of these cases, the MOCU-based experimental design can reduce the number of needed experiments significantly in comparison to other selection policies such as entropy-based experimental design, pure exploitation, or random selection policy.
Uncertainty in transition probability matrix
Assume that regulatory information between a gene and its regulating genes is missing for a number of genes in the network. In other words, a number of rows in the \(n\) conditional probability matrices are unknown. We represent the unknown conditional probabilities by a set of random variables \(\boldsymbol{\theta}=(\theta_1,\theta_2,\dots,\theta_T)\). Since each row of a CPM adds up to one, i.e., \(C_{j,1}(X_{i})+C_{j,2}(X_{i})=1\), there is only one degree of freedom per row. The uncertainty in the CPMs eventually shows up in the corresponding TPM and can thereby affect the performance of the control policy. Therefore, it is of interest to reduce the uncertainty in the CPMs. We seek an experimental design method that efficiently guides us on which unknown conditional probability to determine first.
We need to assign prior distributions to the random variables representing unknown conditional probabilities. Assigning accurate priors is highly challenging. A prior distribution must accurately describe the current state of knowledge regarding the unknown model. It is also desirable that the prior distribution and the posterior distribution, obtained by incorporating data into the prior, belong to the same family of distributions, referred to as a conjugate prior. Using conjugate priors, we can easily update the priors to the posteriors, which facilitates the computations in a Bayesian setting, as it suffices to keep track of only the hyperparameters of the distributions. With this in mind, we utilize the beta distribution as the prior distribution for each unknown parameter \(\theta_i\). Relative to a random variable \(\theta_i\), the beta distribution \(\text{Beta}(\alpha_i,\beta_i)\) with hyperparameters \(\alpha_i\) and \(\beta_i\) is of the following form:
$$\begin{array}{*{20}l} \text{Beta}(\alpha_{i},\beta_{i})=\frac{\theta_{i}^{\alpha_{i}-1}(1-\theta_{i})^{\beta_{i}-1}}{B(\alpha_{i},\beta_{i})}, \end{array} $$
where \(B(\alpha_i,\beta_i)\) is the beta function. The expected value of \(\theta_i\sim\text{Beta}(\alpha_i,\beta_i)\) is \(\mathrm{E}[\theta_{i}]=\frac{\alpha_{i}}{\alpha_{i}+\beta_{i}}\). When \(\alpha_i=\beta_i=1\), the beta distribution becomes a uniform distribution over the interval \([0,1]\).
We assume that \(\theta_1,\theta_2,\dots,\theta_T\) are independent and each parameter \(\theta_i\), \(1\leq i\leq T\), has a beta distribution \(\text{Beta}(\alpha_i,\beta_i)\); therefore, the prior distribution of \(\boldsymbol{\theta}=(\theta_1,\theta_2,\dots,\theta_T)\) is
$$\begin{array}{*{20}l} f(\boldsymbol{\theta})=\prod_{i=1}^{T}\text{Beta}(\alpha_{i},\beta_{i})\propto \prod_{i=1}^{T} \theta_{i}^{\alpha_{i}-1}(1-\theta_{i})^{\beta_{i}-1}. \end{array} $$
In addition to the set of CPMs, containing unknown conditional probabilities, it is possible that observations from network dynamics in terms of a trajectory \(\mathcal {X}_{L}=\{\mathbf {X}(0),\mathbf {X}(1),\dots,\mathbf {X}(L)\}\) of L consecutive state transitions are also available. The state trajectory \(\mathcal {X}_{L}\) can be utilized as an additional source of information to update the initial beta distributions to the posterior beta distributions. If θi represents the unknown conditional probability \(C_{j,1}(X_{i})=\text {Pr}\left [X_{i}(t+1)=0|\Gamma _{X_{i}}=j\right ]\), then given a state trajectory \(\mathcal {X}_{L}\) the posterior distribution \(f(\theta _{i}|\mathcal {X}_{L})\) is again a beta distribution with new hyperparameters
$$\begin{array}{*{20}l} &\alpha^{\prime}_{i}=\alpha_{i}+\sum\limits_{l=0}^{L-1}\mathbbm{1}\left[\Gamma_{X_{i}}[\mathbf{X}(l)]=j,X_{i}(l+1)=0\right] \end{array} $$
$$\begin{array}{*{20}l} &\beta^{\prime}_{i}=\beta_{i}+\sum\limits_{l=0}^{L-1}\mathbbm{1}\left[\Gamma_{X_{i}}[\mathbf{X}(l)]=j,X_{i}(l+1)=1\right], \end{array} $$
where \(X_i(l)\) denotes the value of gene \(X_i\) at the \(l\)-th state in the trajectory, and \(\mathbbm{1}[\cdot]\) is the indicator function. In other words, those state transitions in which the event corresponding to the unknown conditional probability \(\theta_i\) occurs can be used to update the information about that unknown probability. Note that \(\Gamma_{X_{i}}[\mathbf{X}(l)]=j\) means that the equivalent decimal value of the gene expression vector for the regulating genes of \(X_i\), extracted from network state \(\mathbf{X}(l)\), equals \(j\). The conditional expectation of \(\theta_i\) given \(\mathcal{X}_{L}\) is
$$\begin{array}{*{20}l} \mathrm{E}[\!\theta_{i}|\mathcal{X}_{L}]=\frac{\alpha^{\prime}_{i}}{\alpha^{\prime}_{i}+\beta^{\prime}_{i}}. \end{array} $$
Since the uncertainty of an unknown conditional probability is governed by the corresponding hyperparameters \(\alpha^{\prime}\) and \(\beta^{\prime}\), and since observations can only increase the \(\alpha\)'s and \(\beta\)'s according to (17) and (18), the availability of a state trajectory \(\mathcal{X}_{L}\) is equivalent to less initial uncertainty, and hence a simpler experimental design problem.
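The updates (17)–(19) reduce to simple counting over the trajectory. A minimal Python sketch follows (the decimal state encoding and the representation of an unknown parameter by a gene index plus regulator context are assumptions made for illustration):

```python
def update_beta(alpha, beta, traj, gene, context, regulators, n):
    """Update Beta hyperparameters for the unknown probability
    Pr[X_gene = 0 | Gamma_{X_gene} = context], Eqs. (17)-(18).

    traj: list of decimal states X(0), ..., X(L).
    gene: 0-indexed gene whose conditional probability is unknown.
    regulators: indices of the genes regulating `gene`.
    """
    def bits(s):
        return [(s >> (n - 1 - k)) & 1 for k in range(n)]

    for s_cur, s_next in zip(traj[:-1], traj[1:]):
        x = bits(s_cur)
        gamma = int("".join(str(x[r]) for r in regulators), 2)
        if gamma == context:                 # the relevant event occurred
            if bits(s_next)[gene] == 0:
                alpha += 1                   # Eq. (17)
            else:
                beta += 1                    # Eq. (18)
    return alpha, beta                       # posterior mean: alpha / (alpha + beta), Eq. (19)
```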
Optimal experimental design for determining unknown conditional probabilities
Building on the general MOCU-based experimental design framework in (8)-(14), we propose an experimental design method for dynamical controls characterized by stationary control policies. A schematic diagram of the proposed experimental design framework is given in Fig. 1. We first assign beta distributions with initial hyperparameters \((\alpha_i,\beta_i)\) to each unknown conditional probability. Then, if a state trajectory \(\mathcal{X}_{L}\) is available as an additional source of knowledge, it is incorporated to update the initial hyperparameters to \((\alpha^{\prime}_{i},\beta^{\prime}_{i})\) according to (17) and (18). These updated hyperparameters characterize the uncertainty class for finding the best parameter to determine using the proposed MOCU-based framework. When the first experiment is chosen and carried out, its outcome (the true value of the chosen unknown conditional probability) is incorporated into the uncertainty class, leading to a reduced uncertainty class that contains fewer uncertain parameters. If operational resources allow more experiments, this new uncertainty class can be used to find the next parameter for determination (and this process can be iterated). Otherwise, the experimental design step is finished and the reduced uncertainty class is used to derive the IBR control policy, based on which control actions are applied to the underlying true network at each time step.
Fig. 1 A schematic diagram of the proposed experimental design framework
As (14) suggests, in order to implement the experimental design, we need to derive IBR interventions. Therefore, we first explain how an IBR control policy can be derived. Considering an uncertainty class \(\Theta\) of TPMs, relative to an initial state \(X_0\), the average expected total discounted cost across \(\Theta\) for control policy \(\mu=(\mu_1,\mu_2,\dots)\) is:
$$\begin{array}{*{20}l} J^{\Theta}(\mu;X_{0})&=\mathrm{E}_{\boldsymbol{\theta}}\left[J^{\boldsymbol{\theta}}(\mu;X_{0})\right] \\ &=\mathrm{E}_{\boldsymbol{\theta}}\left[{\lim}_{M\rightarrow \infty}\mathrm{E}\left[\left.\sum_{t=0}^{M-1}\zeta^{t}\, r\left(\mathbf{X}(t),\mathbf{X}(t+1),\mu_{t}(\mathbf{X}(t))\right)\right| X_{0}\right]\right] \\ &={\lim}_{M\rightarrow \infty}\sum_{t=0}^{M-1}\mathrm{E}^{\ast}_{\boldsymbol{\theta}}\left[\left.\zeta^{t}\, r\left(\mathbf{X}(t),\mathbf{X}(t+1),\mu_{t}(\mathbf{X}(t))\right)\right| X_{0}\right], \end{array} $$
where \(\mathrm{E}^{\ast}_{\boldsymbol{\theta}}[\cdot]=\mathrm{E}_{\boldsymbol{\theta}}[\mathrm{E}[\cdot]]\) is the expectation over both within-model stochasticity and model uncertainty. For initial state \(X_0\), the optimal average cost is defined as
$$\begin{array}{*{20}l} J^{\Theta}(X_{0})=\underset{\mu\in\Pi}{\min}\,J^{\Theta}(\mu;X_{0}), \end{array} $$
and the minimum is attained by the IBR control policy \(\mu ^{\Theta }(X_{0})=\left (\mu ^{\Theta }_{1}(X_{0}),\mu _{2}^{\Theta }(X_{0}),\dots \right)\).
This control problem can be transformed into a dynamic programming problem of the following form for each \(i\in \mathcal {S}\) and t≥0:
$$\begin{array}{*{20}l} J_{t}(i)&=\underset{c\in\mathcal{C}}{\min}\,\mathrm{E}_{\boldsymbol{\theta}}\left[\mathrm{E}\left[r(i,j,c)+\zeta J_{t+1}(j)\right]\right] \notag \\ &=\underset{c\in\mathcal{C}}{\min}\,\mathrm{E}_{\boldsymbol{\theta}}\left[\sum\limits_{j=0}^{2^{n}-1}P^{\boldsymbol{\theta}}_{ij}(c)\left(r(i,j,c)+\zeta J_{t+1}(j)\right)\right] \notag \\ &=\underset{c\in\mathcal{C}}{\min}\left[\sum\limits_{j=0}^{2^{n}-1}\mathrm{E}_{\boldsymbol{\theta}}\left[P^{\boldsymbol{\theta}}_{ij}(c)\right]\left(r(i,j,c)+\zeta J_{t+1}(j)\right)\right]. \end{array} $$
We call \(\mathrm{E}_{\boldsymbol{\theta}}\left[P^{\boldsymbol{\theta}}_{ij}(c)\right]\) the effective controlled transition probability matrix. It is obtained similarly to \(P_{ij}(c)\) by plugging \(P_{ij}^{\Theta}=\mathrm{E}_{\boldsymbol{\theta}}\left[P_{ij}^{\boldsymbol{\theta}}\right]\) into (4). The effective transition probability matrix (ETPM) \(\mathbf{P}^{\Theta}=\left[P_{ij}^{\Theta}\right]_{i,j=0}^{2^{n}-1}\) is obtained as
$$\begin{array}{*{20}l} P_{ij}^{\Theta}=\mathrm{E}_{\boldsymbol{\theta}}\left[P_{ij}^{\boldsymbol{\theta}}\right]=\prod_{k=1}^{n}\mathrm{E}_{\boldsymbol{\theta}}\left[\text{Pr}_{\boldsymbol{\theta}}\left[X_{k}=j_{k}\left|\Gamma_{X_{k}}\right.[i]\right]\right], \end{array} $$
where \(\text{Pr}_{\boldsymbol{\theta}}[\cdot]\) is the probability operator relative to \(\boldsymbol{\theta}\) and \(\mathrm{E}_{\boldsymbol{\theta}}[\cdot]\) is taken relative to the updated prior (posterior) distribution \(f(\boldsymbol{\theta}|\mathcal{X}_{L})\). The ETPM \(\mathbf{P}^{\Theta}\) represents the uncertainty class of TPMs and enables finding the IBR control policy \(\mu^{\Theta}\) for an uncertainty class of TPMs in the same way that the optimal control policy is found for a known TPM. Since the \(\theta_i\)'s are independent, the expectation can be brought inside the product. Each conditional probability term in (23) is either known, in which case its known value is used in the multiplication, or unknown, in which case it corresponds to an unknown parameter \(\theta_i\) whose expected value, obtained according to (19), is used in the multiplication.
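Because the expectation in (23) factorizes across genes, computing the ETPM reduces to replacing each unknown CPM entry with its posterior mean and then reusing the ordinary TPM construction. A minimal sketch (building on the hypothetical `build_tpm` helper above; the bookkeeping for unknown entries is an assumption made for illustration):

```python
# Effective CPMs: replace each unknown entry by its posterior mean
# E[theta_i | X_L] = alpha_i' / (alpha_i' + beta_i'), Eq. (19); known
# entries are left untouched. The ETPM then follows from Eq. (23) by
# reusing the same construction as for a known TPM (see build_tpm above).
def effective_cpm(cpm, unknown, hyper):
    """unknown: list of (gene k, context j) pairs; hyper[(k, j)] = (alpha, beta)."""
    cpm_eff = [m.copy() for m in cpm]
    for (k, j) in unknown:
        a, b = hyper[(k, j)]
        cpm_eff[k][j, 0] = a / (a + b)     # E[Pr[X_k = 0 | Gamma = j]]
        cpm_eff[k][j, 1] = 1.0 - cpm_eff[k][j, 0]
    return cpm_eff
```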
The dynamic formulation in (22) is similar to the dynamic programming used for optimal control except that the known TPM has been replaced by \(\mathrm{E}_{\boldsymbol{\theta}}\left[P^{\boldsymbol{\theta}}_{ij}(c)\right]\); therefore, an approach similar to that used for solving the optimal control dynamic programming can be utilized here, with the distinction that all theorems are defined relative to the ETPM. Keeping this in mind, we define a mapping \(TJ:\mathcal{S}\rightarrow\mathbb{R}\) for a bounded function \(J:\mathcal{S}\rightarrow\mathbb{R}\) and \(\forall i\in\mathcal{S}\) as
$$\begin{array}{*{20}l} TJ(i)=\underset{c\in\mathcal{C}}{\min}\left[\sum_{j=0}^{2^{n}-1}\mathrm{E}_{\boldsymbol{\theta}}\left[P^{\boldsymbol{\theta}}_{ij}(c)\right]\left(r(i,j,c)+\zeta J(j)\right)\right]. \end{array} $$
The following three theorems, whose proofs are similar to those in [40] relative to a known TPM, lay out the theoretical foundation for finding the IBR control policy.
Theorem 1
(Convergence of the algorithm) Let \(J:\mathcal{S}\rightarrow\mathbb{R}\) be a bounded function. For any \(i\in\mathcal{S}\), the optimal average cost function \(J^{\Theta}(i)\) satisfies \(J^{\Theta}(i)={\lim}_{M\rightarrow\infty}T^{M}J(i)\).
Theorem 2
(Bellman's optimality equation) The optimal average cost function \(J^{\Theta}\) satisfies
$$\begin{array}{*{20}l} {}J^{\Theta}(i)=\underset{c\in\mathcal{C}}{\min}\left[\sum\limits_{j=0}^{2^{n}-1}\mathrm{E}_{\boldsymbol{\theta}}\left[ P^{\boldsymbol{\theta}}_{ij}(c)\right]\left(r(i,j,c)+\zeta J^{\Theta}(j)\right)\right]\,\forall i\in\mathcal{S}. \end{array} $$
Theorem 3

(Necessary and sufficient condition) A stationary policy μΘ is an IBR control policy if and only if for each \(i\in \mathcal {S}\), μΘ(i) attains the minimum in Bellman's optimality equation.
Based on Theorem 1, JΘ can be computed recursively using the value iteration algorithm in the same way that this algorithm is used to find the optimal control policy for a known TPM. The converged cost satisfies Bellman's optimality equation (Theorem 2). Also, the corresponding policy is a stationary IBR control policy (Theorem 3): μΘ attains the minimum in Bellman's optimality equation, where the ordinary TPM is replaced by the ETPM.
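To make the procedure concrete, here is a minimal value-iteration sketch against the ETPM. The array layout (one S x S matrix per control action) and the fixed iteration count are assumptions of the example, not the paper's implementation.

```python
import numpy as np

def ibr_value_iteration(P_eff, r, zeta, n_iter):
    """Value iteration against the ETPM (Eqs. 22-25).

    P_eff[c] : (S x S) effective TPM under control action c
    r[c]     : (S x S) cost matrix r(i, j, c)
    n_iter   : number of iterations (assumed >= 1)
    Returns the converged cost J and the IBR policy mu.
    """
    S = P_eff[0].shape[0]
    J = np.zeros(S)
    for _ in range(n_iter):
        # Q[c, i] = sum_j P_eff[c][i, j] * (r[c][i, j] + zeta * J[j])
        Q = np.stack([(P_eff[c] * (r[c] + zeta * J[None, :])).sum(axis=1)
                      for c in range(len(P_eff))])
        J = Q.min(axis=0)
    mu = Q.argmin(axis=0)   # policy attaining the minimum (Theorem 3)
    return J, mu
```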
The concept of effective quantities has also been used for deriving IBR operators in other problems: for example, in [7] effective noise statistics are used to derive the IBR Kalman filter, and in [8] an effective covariance matrix is used to achieve IBR signal compression.
To define the experimental design problem in the context of the framework laid out in (8)-(14), let the class of interventions Ψ be the set of all admissible control policies Π. Each ψ∈Ψ is characterized by a control policy μ and the cost of intervention is
$$\begin{array}{*{20}l} \xi_{\boldsymbol{\theta}}(\psi)=\mathrm{E}_{X_{0}}\left[J^{\boldsymbol{\theta}}_{\mu}(X_{0})\right], \end{array} $$
where \(\phantom {\dot {i}\!}J^{\boldsymbol {\theta }}_{\mu }(X_{0})\) is obtained according to (5) with E[ ·] relative to the probability measure defined by the TPM Pθ. To find a single value as the cost of a control policy for a specific TPM, in (26) we take the expectation over all possible initial network states X0, assuming that the possible initial states are equally likely. For the IBR intervention, we similarly define a single value, the average cost of the IBR intervention:
$$\begin{array}{*{20}l} \mathrm{E}_{\boldsymbol{\theta}}\left[\xi_{\boldsymbol{\theta}}\left(\mu^{\Theta}\right)\right]=\mathrm{E}_{X_{0}}\left[J^{\Theta}(X_{0})\right], \end{array} $$
where JΘ(X0) is obtained using (25). The definitions of cost and intervention in (26) and (27) set the stage for objective-based uncertainty quantification in the context of dynamical control according to (10). After defining MOCU, the MOCU-based experimental design framework can be used and the primary parameter \(\phantom {\dot {i}\!}\theta _{i^{\ast }}\) can be found by plugging (27) in (14):
$$\begin{array}{*{20}l} i^{\ast}&=\underset{i\in\{1,2,\dots,T\}}{\arg\min}\,\mathrm{E}_{\theta_{i}^{\prime}}\left[\mathrm{E}_{\boldsymbol{\theta}|\theta^{\prime}_{i}}\left[\xi_{\boldsymbol{\theta}}\left(\psi_{\text{IBR}}\left(\Theta|\theta_{i}=\theta^{\prime}_{i}\right)\right)\right]\right] \notag \\ &=\underset{i\in\{1,2,\dots,T\}}{\arg\min}\,\mathrm{E}_{\theta_{i}^{\prime}}\left[\xi_{\left(\Theta|\theta_{i}=\theta^{\prime}_{i}\right)}\left(\mu^{\Theta|\theta_{i}=\theta^{\prime}_{i}}\right)\right] \notag \\ &=\underset{i\in\{1,2,\dots,T\}}{\arg\min}\,\mathrm{E}_{\theta_{i}^{\prime}}\left[\mathrm{E}_{X_{0}}\left[J^{\mathbf{P}^{\Theta|\theta_{i}=\theta^{\prime}_{i}}}_{\mu^{\Theta|\theta_{i}=\theta^{\prime}_{i}}}(X_{0})\right]\right], \end{array} $$
where the IBR control policy for the reduced uncertainty class \(\phantom {\dot {i}\!}\Theta |\left (\theta _{i}=\theta ^{\prime }_{i}\right)\) is found using the ETPM \(\mathbf {P}^{\Theta |\theta _{i}=\theta ^{\prime }_{i}}\phantom {\dot {i}\!}\) obtained relative to the conditional probability distribution \(f\left (\boldsymbol {\theta }|\theta _{i}=\theta ^{\prime }_{i}\right)\phantom {\dot {i}\!}\).
According to (28), to evaluate the determination of each unknown parameter θi, for each realization \(\theta ^{\prime }_{i}\) of θi, we need to obtain the average cost of the IBR control policy \(\mu ^{\Theta |\theta _{i}=\theta ^{\prime }_{i}}\phantom {\dot {i}\!}\) across the reduced uncertainty class \(\Theta |\left (\theta _{i}=\theta ^{\prime }_{i}\right)\phantom {\dot {i}\!}\) and then take the average of all these average costs relative to the marginal distribution of parameter θi. In practice, the expression in (28) is approximated via Monte-Carlo simulations. We draw a number of samples from the marginal distribution of θi and then approximate the expression being minimized in (28) as the average of all inner expectations computed for each generated sample. The steps required for obtaining the primary parameter \(\phantom {\dot {i}\!}\theta _{i^{\ast }}\) are summarized in Algorithm 1. The inputs to this algorithm are n CPMs characterizing the GRN, T unknown parameters θi corresponding to unknown conditional probabilities, hyperparameters (αi,βi) for the prior beta distributions, the state trajectory \(\mathcal {X}_{L}\), and ζ, r, and I, which determine the discount factor, cost function, and the number of iterations for value iteration, respectively.
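A minimal sketch of this Monte-Carlo approximation follows; the sampling and inner-cost routines are left as placeholder callables standing in for the corresponding steps of Algorithm 1, not a verbatim implementation.

```python
import numpy as np

def rank_parameters(sample_theta, reduced_cost, T, M):
    """Monte-Carlo approximation of the objective minimized in Eq. (28).

    sample_theta(i) draws one realization theta'_i from the marginal
    (Beta) distribution of parameter i; reduced_cost(i, theta_prime)
    returns E_X0[J(X0)] for the IBR policy of the reduced uncertainty
    class Theta | theta_i = theta'_i.  Both callables are placeholders.
    """
    eta = np.zeros(T)
    for i in range(T):
        costs = [reduced_cost(i, sample_theta(i)) for _ in range(M)]
        eta[i] = np.mean(costs)          # outer expectation over theta'_i
    return int(np.argmin(eta)), eta      # primary parameter index i*
```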
Finding IBR control policies for an uncertainty class (like finding the optimal control policy for a known TPM [41]) is computationally expensive and the complexity grows exponentially with the number of genes. Therefore, most of the computational burden of the experimental design is in finding IBR control policies. To mitigate the complexity of the proposed method, in the next section we propose an approximate method for computing IBR control policies.
Approximate experimental design based on MFPT
The mean first passage time (MFPT) from state i to j measures how long, on average, it takes the network to transition from state i to state j.
For a Markovian GRN, if the sets of desirable states \(\mathcal {D}\) and undesirable states \(\mathcal {U}\) are determined, we can have the following partitioning for the TPM:
$$\begin{array}{*{20}l} \mathbf{P}= \left[\begin{array}{ll} \mathbf{P}_{\mathcal{D},\mathcal{D}} & \mathbf{P}_{\mathcal{D},\mathcal{U}}\\ \mathbf{P}_{\mathcal{U},\mathcal{D}} & \mathbf{P}_{\mathcal{U},\mathcal{U}} \end{array}\right], \end{array} $$
where \(\mathbf {P}_{\mathcal {S}_{1},\mathcal {S}_{2}}\) involves the transition probabilities from each state in the set \(\mathcal {S}_{1}\) to the states in set \(\mathcal {S}_{2}\). The vectors \(\mathbf {K}_{\mathcal {D},\mathcal {U}}\) and \(\mathbf {K}_{\mathcal {U},\mathcal {D}}\) of MFPTs from each state in \(\mathcal {D}\) to \(\mathcal {U}\) and from each state in \(\mathcal {U}\) to \(\mathcal {D}\), respectively, can be computed as [30]
$$\begin{array}{*{20}l} &\mathbf{K}_{\mathcal{D},\mathcal{U}}=\mathbf{e}+\mathbf{P}_{\mathcal{D},\mathcal{D}}\,\mathbf{K}_{\mathcal{D},\mathcal{U}} \end{array} $$
$$\begin{array}{*{20}l} &\mathbf{K}_{\mathcal{U},\mathcal{D}}=\mathbf{e}+\mathbf{P}_{\mathcal{U},\mathcal{U}}\,\mathbf{K}_{\mathcal{U},\mathcal{D}}, \end{array} $$
where e is an all-unity column vector of appropriate size. If g is the control gene and \(\tilde {X}^{g}\) is the flipped state corresponding to state X obtained by flipping g in state X, then to find the MFPT-based stationary control policy \(\mu _{\mathbf {P}}^{\text {MFPT}}:\mathcal {S}\rightarrow \mathcal {C}\), the control action for each desirable state \(X \in \mathcal {D}\) is obtained as [31]
$$\begin{array}{*{20}l} \mu^{\text{MFPT}}_{\mathbf{P}}(X)=\left\{ \begin{array}{l} 1\qquad \text{if}\,\,\mathbf{K}_{\mathcal{D},\mathcal{U}}(\tilde{X}^{g})-\mathbf{K}_{\mathcal{D},\mathcal{U}}(X)>\Delta \\ 0 \qquad \text{otherwise} \\ \end{array} \right., \end{array} $$
and for each undesirable state \(X\in \mathcal {U}\) as
$$\begin{array}{*{20}l} {}\mu^{\text{MFPT}}_{\mathbf{P}}(X)=\left\{ \begin{array}{l} 1\qquad \text{if}\,\,\mathbf{K}_{\mathcal{U},\mathcal{D}}(X)-\mathbf{K}_{\mathcal{U},\mathcal{D}}(\tilde{X}^{g})>\Delta \\ 0\qquad \text{otherwise} \\ \end{array} \right., \end{array} $$
where Δ in (32) and (33) is a tuning parameter that should be adjusted based on the definition for the cost function r(i,j,u).
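A hedged sketch of the MFPT computation and the resulting policy is given below. The linear systems follow directly from (30)-(31) rearranged as (I - P_DD) K_DU = e and (I - P_UU) K_UD = e; the handling of the boundary case where the flipped state falls in the other partition is a simplification of this sketch, not specified by the equations above.

```python
import numpy as np

def mfpt(P, D, U):
    """Mean first passage times of Eqs. (30)-(31).
    D, U: lists of state indices partitioning the state space."""
    K_DU = np.linalg.solve(np.eye(len(D)) - P[np.ix_(D, D)], np.ones(len(D)))
    K_UD = np.linalg.solve(np.eye(len(U)) - P[np.ix_(U, U)], np.ones(len(U)))
    return K_DU, K_UD

def mfpt_policy(P, D, U, flip, delta):
    """MFPT-based stationary policy of Eqs. (32)-(33).
    flip(x) returns the state obtained by flipping the control gene."""
    K_DU, K_UD = mfpt(P, D, U)
    pos_D = {x: k for k, x in enumerate(D)}   # state -> index in K_DU
    pos_U = {x: k for k, x in enumerate(U)}
    mu = {}
    for x in D:
        xf = flip(x)
        # if the flipped state leaves D, we conservatively apply no control
        gain = K_DU[pos_D[xf]] - K_DU[pos_D[x]] if xf in pos_D else 0.0
        mu[x] = 1 if gain > delta else 0
    for x in U:
        xf = flip(x)
        gain = K_UD[pos_U[x]] - K_UD[pos_U[xf]] if xf in pos_U else 0.0
        mu[x] = 1 if gain > delta else 0
    return mu
```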
In the spirit of the MFPT-based approximation for optimal control, we approximate the IBR control policy needed in experimental design via MFPT. Taking into account that the IBR control policy μΘ is in fact the optimal control policy relative to the ETPM and that MFPT can be used as an approximation for the optimal control policy, we approximate the IBR control policy by finding the MFPT-based control policy relative to the ETPM and denote it by \(\mu ^{\text {MFPT}}_{\Theta }\), i.e.,
$$\begin{array}{*{20}l} \mu^{\Theta}\approx\mu^{\text{MFPT}}_{\Theta}=\mu^{\text{MFPT}}_{\mathbf{P}^{\Theta}}. \end{array} $$
\(\mu ^{\text {MFPT}}_{\mathbf {P}^{\Theta }}\) is obtained by solving (29)-(33) for the effective transition probability matrix PΘ. When we approximate the IBR control policy via MFPT in experimental design, the average cost needed in (28) is computed via Monte-Carlo simulations (over different initial network states) as
$$ {\begin{aligned} &\mathrm{E}_{X_{0}}\left[J^{\mathbf{P}^{\Theta|\theta_{i}=\theta^{\prime}_{i}}}_{\mu^{\Theta|\theta_{i}=\theta^{\prime}_{i}}}(X_{0})\right]\\ &\qquad\approx\ \frac{1}{N}\sum_{n=1}^{N}\left\{{\lim}_{M\rightarrow\infty}\sum_{t=0}^{M-1}\zeta^{t} r^{(n)}\left(\mathbf{X}(t),\mathbf{X}(t+1),\mu^{\text{MFPT}}_{\mathbf{P}^{{\Theta|\theta_{i}=\theta^{\prime}_{i}}}}(\mathbf{X}(t))\right)\right\}, \end{aligned}} $$
where N is the total number of simulations and r(n)(·) is the accrued total discounted cost in the n-th simulation. The pseudo-code for the approximate method is the same as Algorithm 1 except for steps 19 and 20. For the approximate method, in step 19, we find \(\mu ^{\text {MFPT}}_{\Theta |\hat {\theta }_{i}}\) via the MFPT approach. Then in step 20, we plug \(\mu ^{\text {MFPT}}_{\Theta |\hat {\theta }_{i}}\), obtained from step 19, in (35) to compute \(\eta (\hat {\theta }_{i})\).
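For concreteness, a sketch of this Monte-Carlo estimate follows. The infinite horizon is truncated at a finite length, which is adequate once ζ^t is negligible (ζ=0.2 decays fast), and the controlled-TPM accessor is an interface choice for the example rather than code from the paper.

```python
import numpy as np

def mc_cost(P_true_ctrl, r, mu, zeta, n_runs, horizon, rng):
    """Monte-Carlo estimate of the expected discounted cost in Eq. (35).

    P_true_ctrl(i, c) returns the row of transition probabilities out of
    state i under action c (a stand-in for the controlled TPM);
    r[c] is the (S x S) cost matrix and mu maps states to actions."""
    S = len(r[0])
    total = 0.0
    for _ in range(n_runs):
        x = rng.integers(S)                      # uniform initial state X0
        cost, disc = 0.0, 1.0
        for _ in range(horizon):
            c = mu[x]
            x_next = rng.choice(S, p=P_true_ctrl(x, c))
            cost += disc * r[c][x, x_next]
            disc *= zeta
            x = x_next
        total += cost
    return total / n_runs
```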
The MFPT approach is used solely for selecting experiments to reduce the uncertainty class; once an experiment has been conducted, the IBR control policy itself is still obtained by solving Bellman's equation using value iteration.
Computational complexity analysis
The computationally demanding step in the proposed experimental design method is to find IBR control policies. If there are T different unknown parameters and we generate M different samples for Monte-Carlo simulations for each one, then we need to find IBR control policies in our experimental design calculations T×M times. Since we use value iteration to solve Bellman's equation, if we assume that value iteration converges in I iterations, then we need to compute \((|\mathcal {S}|\times |\mathcal {C}|)^{I}\) terminal costs to obtain the control policy, \(|\mathcal {S}|\) being the number of network states and \(|\mathcal {C}|\) being the number of control actions. In this paper, we focus on binary networks and binary control actions, i.e., \(|\mathcal {S}|=2^{n}\), n being the number of genes, and \(|\mathcal {C}|=2\). Therefore, the order of complexity when experimental design based on the IBR control policy is implemented is \(\mathcal {O}\left (T\times M\times (2^{n+1})^{I}\right)\). The complexity grows exponentially with the number of genes and linearly with both the number of unknown parameters and the number of Monte-Carlo samples.
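As a rough worked example of these counts (the parameter values below are illustrative only):

```python
# Rough cost counts for the two designs (illustrative only).
n, I, T, M = 6, 4, 5, 100          # genes, VI iterations, params, MC samples
optimal = T * M * (2 ** (n + 1)) ** I      # O(T*M*(2^{n+1})^I) terminal costs
print(f"optimal design: ~{optimal:.2e} terminal-cost evaluations")
# The MFPT alternative replaces the (2^{n+1})^I factor with two linear
# solves over subsets of the 2^n states, plus Monte-Carlo cost estimation.
print(f"MFPT solves: two linear systems of size <= {2 ** n}")
```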
The complexity of the approximate experimental design is much lower because there is no iterative calculation in MFPT: it suffices to solve the two linear equations in (30) and (31), which involves two matrix inversions. Although applying MFPT for experimental design requires us to find the average cost of the MFPT-based IBR control policy via Monte-Carlo simulations, this overhead is minor, and the overall complexity of the MFPT-based approach remains much smaller than that of the optimal experimental design. Since the calculations for each unknown parameter and each realization of that parameter can be done independently, a parallel implementation can be used for the proposed experimental design methods.
In Fig. 2, we provide the run times required for finding the primary parameter among 5 unknown parameters for GRNs with different numbers of genes. The codes are scripted in MATLAB and run in a parallel framework on a machine with an Intel® quad-core 2.67 GHz CPU and 12 GB RAM. The number of iterations for value iteration is I=4. While the execution time grows exponentially with the number of genes and the runs may be prohibitively time-consuming beyond six genes for the optimal experimental design, the MFPT-based approximate method can still be implemented for networks of larger size.
Approximate run time in seconds elapsed for the optimal (based on the value iteration method) and approximate experimental design methods (based on MFPT)
In this section, we study the performance of the proposed methods based on synthetic and real gene networks. As a class of Markovian regulatory networks, we consider Boolean networks with perturbation for the simulations.
Boolean networks with perturbation
An n-gene Boolean network is defined by a set of binary variables V={X1,X2,..,Xn}, and a set of Boolean functions F={f1,f2,…,fn}, where \(f_{i}\!:\{0,1\}^{k_{i}}\rightarrow \{0,1\}\) determines the value of gene Xi when it has ki regulating genes. The transition rule X(t+1)=F(X(t)) governs the evolution of states over time. In a Boolean network with perturbation (BNp), each gene may flip its value with a small perturbation probability p. In this network, the next state at time t+1 is F(X(t)) with probability (1−p)n or F(X(t))⊕γ with probability 1−(1−p)n, where γ is a binary vector of size n and ⊕ is the component-wise addition modulo 2 operator. The underlying state evolution of a BNp over time can be viewed as a Markov chain with a transition probability matrix P. The TPM can be derived using the regulatory structure of the network and the perturbation probability [22]. When p > 0 the Markov chain is guaranteed to possess a unique steady-state distribution π.
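The following sketch builds the TPM of a BNp directly from this definition, applying the perturbation to F(X(t)) exactly as stated above. Weighting each non-zero perturbation vector γ by p^|γ|(1-p)^{n-|γ|} (the i.i.d. per-gene flip model) and the MSB-first state encoding are assumptions of the example.

```python
import numpy as np
from itertools import product

def bnp_tpm(F, n, p):
    """TPM of a Boolean network with perturbation (illustrative sketch).

    F maps a state tuple x in {0,1}^n to its successor F(x).  With
    probability (1-p)^n the network follows F; otherwise a non-zero
    perturbation vector gamma flips the corresponding components of
    F(x), with weight p^{|gamma|} (1-p)^{n-|gamma|}."""
    S = 2 ** n
    P = np.zeros((S, S))
    def idx(x):                      # binary tuple (MSB first) -> decimal
        return int("".join(map(str, x)), 2)
    for x in product((0, 1), repeat=n):
        i = idx(x)
        fx = F(x)
        P[i, idx(fx)] += (1 - p) ** n
        for gamma in product((0, 1), repeat=n):
            w = sum(gamma)
            if w == 0:
                continue
            y = tuple(a ^ g for a, g in zip(fx, gamma))
            P[i, idx(y)] += p ** w * (1 - p) ** (n - w)
    return P                          # each row sums to 1 by construction
```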
Synthetic networks
We first randomly generate a number of BNps and then from the corresponding TPMs we extract the set of conditional probabilities for each gene in the network. We consider BNps with 6 genes. The number of regulating genes for each gene is set to 2 and they are randomly selected from the set of genes. Therefore, the size of the CPM for each gene is 4×2. The bias (probability) that a Boolean function takes on the value 1 is randomly selected from a beta distribution with variance 0.0001 and mean 0.5. The perturbation probability p is set to 0.01. We use this protocol to generate 100 random BNps from which we generate 100 different sets of CPMs.
The conditional probability \(\phantom {\dot {i}\!}\mathbf {C}_{j,1}(X_{i})\,=\,\text {Pr}[\!X_{i}\,=\,0|\Gamma _{X_{i}}=j]\) characterizing the regulation of gene Xi is obtained from the generated TPM P as
$$\begin{array}{*{20}l} \mathbf{C}_{j,1}(X_{i})=\sum\limits_{S^{\prime}}P_{SS^{\prime}}, \end{array} $$
where \(\Gamma _{X_{i}}[\!S]=j\) and \(S^{\prime }_{i}=0\). In other words, to find the conditional probability for gene Xi being down regulated when the equivalent decimal value of its regulating genes \(\Gamma _{X_{i}}\) is j, we look for the row in the TPM in which \(\Gamma _{X_{i}}\) is j and then in that row we take the summation of all TPM entries corresponding to gene Xi equal to 0. Similarly, we extract \(\mathbf {C}_{j,2}(X_{i})=\text {Pr}[\!X_{i}=1|\Gamma _{X_{i}}=j]\phantom {\dot {i}\!}\) from the generated TPM as
$$\begin{array}{*{20}l} \mathbf{C}_{j,2}(X_{i})=\sum\limits_{S^{\prime}}P_{SS^{\prime}}, \end{array} $$
where \(\Gamma _{X_{i}}[\!S]=j\) and \(S^{\prime }_{i}=1\). Since more than one row in a TPM might correspond to \(\Gamma _{X_{i}}=j\), in order to have a consistent procedure for extracting conditional probabilities, we take the average of all the values found for the rows corresponding to \(\Gamma _{X_{i}}=j\).
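A sketch of this extraction procedure is given below; the MSB-first bit encoding of states and of the regulator pattern is an encoding choice made for the example, not fixed by the paper.

```python
import numpy as np

def extract_cond_prob(P, n, i, regulators, j, bit):
    """Extract Pr[X_i = bit | Gamma_{X_i} = j] from a TPM (Eqs. 36-37).

    regulators: indices of the genes regulating X_i.  Rows whose
    regulator pattern equals j are averaged, as described in the text."""
    S = 2 ** n
    def gene_bit(state, k):           # value of gene k (0-based, MSB first)
        return (state >> (n - 1 - k)) & 1
    rows = [s for s in range(S)
            if sum(gene_bit(s, g) << (len(regulators) - 1 - t)
                   for t, g in enumerate(regulators)) == j]
    vals = []
    for s in rows:
        mass = sum(P[s, s2] for s2 in range(S) if gene_bit(s2, i) == bit)
        vals.append(mass)
    return float(np.mean(vals))
```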
To define the control problem, we assume that states with down-regulated genes X1 and X2 are undesirable, i.e., \(\mathcal {U}=\{1,\dots,16\}\). The control gene whose expression can be flipped via control actions is X6. We use the following cost function for the simulations:
$$\begin{array}{*{20}l} r(i,j,c)=\left\{ \begin{array}{l} 6\qquad \qquad \text{if}\,\,j\in \mathcal{U}\quad \text{and}\quad c=1 \\ 5\qquad \qquad \text{if}\,\,j\in \mathcal{U} \quad\text{and}\quad c=0 \\ 1\qquad \qquad \text{if}\,\,j\in \mathcal{D}\quad \text{and}\quad c=1 \\ 0 \qquad \qquad \text{if}\,\,j\in \mathcal{D} \quad\text{and}\quad c=0 \end{array} \right.. \end{array} $$
This cost function reflects penalties assigned to undesirable states and also to the transitions to which the control action is applied. The discount factor ζ is set to 0.2. The tuning parameter Δ for the MFPT method is set to Δ=0.3. We use value iteration with 4 iterations to find the control policies in the optimal design method and for evaluating chosen experiments. All initial beta distributions for unknown conditional probabilities θi in the network are Beta(1,1), a uniform distribution. We run simulations for different numbers L of initial data used for updating priors.
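In code, the cost function of (38) is a direct transcription; the set of undesirable state indices is passed in explicitly.

```python
def r(i, j, c, undesirable):
    """Cost of Eq. (38): penalize transitions into an undesirable state,
    plus a unit penalty whenever the control action is applied."""
    if j in undesirable:
        return 6 if c == 1 else 5
    return 1 if c == 1 else 0

# Example usage with the setting described above:
undesirable = set(range(1, 17))   # states with X1 and X2 down-regulated
assert r(0, 3, 1, undesirable) == 6 and r(0, 20, 0, undesirable) == 0
```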
In the first set of simulations, we generate 100 synthetic BNps. After extracting corresponding conditional probabilities for each network, we randomly select 5 conditional probabilities in each network and assume they are unknown. The aim is to decide which unknown conditional probability should be determined first. For each network, we generate a state trajectory \(\mathcal {X}_{L}=\{\mathbf {X}(0),\dots,\mathbf {X}(L)\}\), used for updating initial hyperparameters, by simulating the underlying true network.
In the simulations, when we want to evaluate the determination of an unknown probability θi, we put back its true value ϕi, which was discarded during experimental design calculations, in the network, thereby resulting in a new uncertainty class Θ|(θi=ϕi) of remaining unknown probabilities. We find the IBR control policy \(\mu ^{\Theta |(\theta _{i}=\phi _{i})}\phantom {\dot {i}\!}\) for this new uncertainty class by solving Bellman's equation relative to \(\mathbf {P}^{\Theta |(\theta _{i}=\phi _{i})}\phantom {\dot {i}\!}\). We then apply \(\phantom {\dot {i}\!}\mu ^{\Theta |(\theta _{i}=\phi _{i})}\) to the underlying true network and run the controlled network until the horizon length 6 according to the underlying true TPM and \(\mu ^{\Theta |(\theta _{i}=\phi _{i})}\phantom {\dot {i}\!}\), and record the cost at each time based on the network state at that time and the cost function r(i,j,c) in (38). Then we compute the total discounted cost over the horizon by accumulating the costs incurred over the horizon length according to the discount factor ζ. We repeat this process of calculating the total discounted cost for 10,000 iterations over different network initial states X0 and state transition paths. Note that although the underlying controlled TPM is fixed, there are still different state transition paths over the horizon due to the randomness characterized by the TPM. We represent the cost corresponding to determining parameter θi as the average of all 10,000 total discounted costs and denote it by J(θi). For comparing different experimental design approaches, we report the average of J(θi) over 100 generated synthetic networks and 100 different sets of assumed true values for the unknown probabilities in each network drawn from the beta prior distributions.
Using either the optimal or the approximate experimental design method, we can rank the potential experiments \(\mathcal {E}_{1}\) up to \(\mathcal {E}_{5}\), from the optimal experiment, denoted by \(\mathcal {E}_{1^{\prime }}\) (obtained according to (28)), to the least optimal experiment, denoted by \(\mathcal {E}_{5^{\prime }}\), which corresponds to the maximum value of the expression being minimized in (28). In Table 1, for different lengths L of the trajectory data used for updating priors, we rank experiments based on both experimental design methods and show the average cost \(J(\theta _{i^{\prime }})\), 1≤i≤5, obtained after conducting experiment \(\mathcal {E}_{i^{\prime }}\). This table suggests that the average cost obtained after conducting experiments with higher priority is smaller. Also, although the approximate experimental design method based on MFPT has much lower complexity, its performance is close to that of the optimal method. Note that the average cost obtained after high-ranked experiments is lower when they are chosen by the optimal method, but as we go towards low-priority experiments the performance of the approximate method becomes better. This is because the optimal method yields a better ranking compared to the approximate method, and more experiments resulting in lower average cost are given high priority in the optimal method. Another observation from the table is that, as we use more data for updating prior distributions, the difference between the performances of the different experiments gets smaller. For example, the difference between the average costs of \(\mathcal {E}_{1^{\prime }}\) and \(\mathcal {E}_{5^{\prime }}\) is larger when no data are used for the prior update than when initial data \(\mathcal {X}_{L}\) of length L=50 are used. This is because, by using more data in the prior update step, the posterior distribution becomes tighter around the true model and less uncertainty remains in the model.
Table 1 Comparison of the ranked experiments \(\mathcal {E}_{1^{\prime }}\) through \(\mathcal {E}_{5^{\prime }}\) according to the optimal and approximate methods, with panels (a) L=0 (no initial data), (b) L=10, (c) L=20, and (d) L=50 (table entries not recoverable from the extraction)
Let J(θopt), J(θapprox), and J(θrnd) be the costs corresponding to the determination of the unknown probability chosen by the optimal method, the approximate method, and randomly, respectively. Table 2 shows the average of these costs over different networks and assumed true values. For different L, both optimal and approximate methods provide close performance and clearly outperform the random selection policy.
Table 2 The comparison of the average costs J(θrnd), J(θapprox), and J(θopt) obtained after choosing the experiment via the random, approximate, and optimal selection policies, for different values of L (table entries not recoverable from the extraction)
When comparing the optimal experiment \(\mathcal {E}_{1^{\prime }}\) with an experiment \(\mathcal {E}_{i^{\prime }}\), i≠1 (when experiments are ranked based on either the optimal or the approximate method), we say that a success occurs if \(J(\theta _{1^{\prime }})-J(\theta _{i^{\prime }})<-0.002\), a failure happens if \(J(\theta _{1^{\prime }})-J(\theta _{i^{\prime }})>0.002\), and a tie corresponds to \(|J(\theta _{1^{\prime }})-J(\theta _{i^{\prime }})|<0.002\). Table 3 shows the ratio of success, failure, and tie for both methods and different L. Regardless of the experimental design approach, the ratio of success is always higher than the ratio of failure and gets larger when we compare the optimal experiment \(\mathcal {E}_{1^{\prime }}\) with the lowest-ranked experiments. Note that the ratio of tie increases for larger values of L because a tighter prior leads to closer experimental performance.
Table 3 The percentage of success, failure, and tie for performing the chosen experiment \(\mathcal {E}_{1^{\prime }}\) rather than the suboptimal experiments \(\mathcal {E}_{2^{\prime }},\dots,\mathcal {E}_{5^{\prime }}\) (table entries not recoverable from the extraction)
Now, we evaluate the experimental design methods for a sequence of experiments. At each step in the sequential experiments, we choose experiment \(\mathcal {E}_{i^{\ast }}\) based on the experimental design method. After incorporating the true value \(\phantom {\dot {i}\!}\phi _{i^{\ast }}\) of the corresponding unknown probability \(\theta _{i^{\ast }}\phantom {\dot {i}\!}\) in the model, we compute the cost \(\phantom {\dot {i}\!}J(\theta _{i^{\ast }})\). The distribution for the new uncertainty class \(\phantom {\dot {i}\!}\Theta |(\theta _{i^{\ast }}=\phi _{i^{\ast }})\) is the product of the beta distributions for the remaining unknown probabilities as we assume that all unknown probabilities are statistically independent. This distribution is used as the new prior distribution to find the next best experiment. This process continues until all unknown parameters are estimated and the underlying true network model is fully identified. Figure 3 presents the average cost over 50 different 6-gene networks and 100 different sets of assumed true values for optimal experimental design, approximate experimental design, and the random selection policy when there are T=5 unknown probabilities and no initial data is used for updating priors, i.e., L=0. Since the first data-point corresponds to the cost before any experiment and the final point corresponds to the cost after conducting all experiments, they are the same for all three curves. However, the decrease in the cost obtained by either experimental design method is faster in comparison to that of the random policy.
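A skeleton of this sequential loop is sketched below; all the callables are placeholders for the steps described above, not the paper's implementation.

```python
def sequential_design(unknown, choose_next, reveal, update_model, evaluate):
    """Sequential experimental design skeleton.

    choose_next: picks the next experiment via Eq. (28) or its MFPT proxy
    reveal:      conducts the experiment, returning the true value phi
    update_model: collapses the uncertainty class Theta | theta_i* = phi
    evaluate:    estimates the current cost J via simulation
    """
    costs = [evaluate()]                  # cost before any experiment
    remaining = set(unknown)
    while remaining:
        i_star = choose_next(remaining)
        phi = reveal(i_star)
        update_model(i_star, phi)         # new prior = product of the
        remaining.discard(i_star)         # remaining Beta densities
        costs.append(evaluate())          # cost after this experiment
    return costs
```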
Performance evaluation of different experimental design approaches for a sequence of experiments. The size of initial data used for updating priors is L=0
Figure 4 compares approximate experimental design with the random selection policy when the network is of size n=9 and there are T=8 unknown probabilities. For this network size and number of unknown probabilities, the computational burden of optimal experimental design is prohibitively large. Therefore, we only implement the approximate method and the random selection policy. Recall that although we use the MFPT-based approximate approach for the experimental design step, the robust control policies after each experiment are still obtained by solving Bellman's equation using value iteration. We see the promising performance of the approximate method in this figure. By following the approximate method, after conducting only four experiments the optimal cost is almost reached.
Performance evaluation of the approximate experimental design method and random selection policy for networks with 9 genes and 8 unknown probabilities. The length of \(\mathcal {X}_{L}\) is L=5
Real network example: TP53 pathways
In this section, we consider the set of pathways involving the TP53 gene as shown in Fig. 5 [42]. TP53 is a tumor suppressor playing a major role in cellular activities in response to stress signals such as DNA damage. When DNA damage occurs, a mutant TP53 may lead to the abundance of abnormal cells, which eventually results in tumors. For example, it has been observed that mutated TP53 is present in 30 to 50% of human cancers [43]. In normal conditions, TP53 remains low-expressed under the control of MDM2, which is an oncogene often highly expressed in tumor cells. We model the pathways shown in Fig. 5 via a BNp with perturbation probability p=0.01. The six nodes DNA DSBs, MDM2, TP53, WIP1, CHK2, and ATM are named X1 through X6, respectively. DNA DSBs is a signal that indicates the existence of double strand breaks. The dynamics of the network are governed via a majority vote rule, for which a regulatory matrix R defining the regulatory interactions between genes is defined as
$$ R_{ij}=\left\{ \begin{array}{l} \,\,\,\,1\quad \text{activating relation from}\,\,j\,\,\text{to}\,\,i \\ -1\quad \text{suppressive relation from}\,\,j\,\,\text{to}\,\,i \\ \,\,\,\,0\quad \text{no relation from}\,\,j\,\,\text{to}\,\,i \end{array} \right.. $$
Regulatory relationships between genes in a signal pathway regulating the TP53 gene [42]
Using matrix R, the value of gene Xi is updated as
$$ {}X_{i}(t+1)=f_{i}\left(\mathbf{X}(t)\right)=\left\{ \begin{array}{l} \,\,\,\,\,1\qquad \text{if}\,\,{\sum\nolimits}_{j}R_{ij}X_{j}(t)>0 \\ \,\,\,\,\,0\qquad \text{if}\,\,{\sum\nolimits}_{j}R_{ij}X_{j}(t)<0 \\ X_{i}(t)\qquad\!\!\!\!\!\! \text{if}\,\,{\sum\nolimits}_{j}R_{ij}X_{j}(t)=0 \end{array} \right. $$
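A direct transcription of this update rule is given below as a small sketch; R is taken as an n x n list of lists with entries in {-1, 0, +1}.

```python
def majority_vote_step(x, R):
    """One synchronous update of the majority-vote rule above.
    x: current binary state (length-n list); R: regulatory matrix."""
    n = len(x)
    nxt = []
    for i in range(n):
        s = sum(R[i][j] * x[j] for j in range(n))
        nxt.append(1 if s > 0 else 0 if s < 0 else x[i])
    return nxt

# Example: a gene with one activator (on) and one suppressor (off) turns on.
assert majority_vote_step([1, 0, 0], [[0, 0, 0], [1, 0, -1], [0, 0, 0]])[1] == 1
```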
In Fig. 5, blunt arrows represent suppressive regulations and normal arrows represent activating regulations. It has been observed that in the presence of DNA damage (X1=1), up-regulated MDM2 (X2=1) and down-regulated TP53 (X3=0) would lead to cancerous cells [15, 44, 45]. Therefore, the set of undesirable states \(\mathcal {U}\) includes those states with X1=1, X2=1, and X3=0, i.e., \(\mathcal {U}=\{48,\dots,55\}\). The cost function r(i,j,c) is the same as the one given in (38). We also use gene ATM as the control gene.
After building the BNp model from pathways, we extract the CPMs based on the procedure explained for the simulations on synthetic networks. Since the network is fixed in this example, we randomly select 10 different sets of 5 conditional probabilities and assume that they are unknown. We run the experimental design simulations for 100 different assumed true values for each set of unknown probabilities. The length of \(\mathcal {X}_{L}\) used for updating beta priors is L=5. Figure 6 illustrates the average cost obtained (over 1,000 different simulations) after each experiment in a sequence of experiments when optimal experimental design, approximate experimental design, or the random selection policy is employed. The better performance of both proposed approaches in comparison to the random selection policy is evident from this figure.
Performance evaluation of different experimental design approaches for a sequence of experiments based on the TP53 regulatory model
Real network example: mammalian cell cycle
As another example of a real gene regulatory network, consider the 9-gene mutated cell cycle network model. The cell cycle is a tightly controlled process that, under normal conditions, is initiated only in response to external stimuli such as growth factors. A regulatory model containing 10 genes is proposed for the normal cell cycle in [46]. These 10 genes are the genes in Table 4 along with gene p27. A permanently down-regulated gene p27 in the cell cycle network results in a mutated cell cycle network consisting of 9 genes. The Boolean functions for the mutated network are summarized in Table 4 [46]. We use these Boolean functions and build a BNp model with perturbation probability p=0.01. The index of each gene in the BNp model is given in the table. In this network, if both Cyclin D (CycD) and retinoblastoma (Rb) are down-regulated, then the cell cycle continues even in the absence of stimuli, thereby leading to the growth of tumors. Hence, states with down-regulated CycD (X1=0) and down-regulated Rb (X2=0) are undesirable. To define a control problem, we use the cost function in (38) and choose gene CycA as the control gene.
Table 4 The set of Boolean functions for a mutated cell cycle [46]

Gene: Boolean function
CycD (X1): extracellular signal
Rb (X2): \((\overline {X_{1}}\wedge \overline {X_{4}}\wedge \overline {X_{9}}\wedge \overline {X_{8}})\)
E2F (X3): \((\overline {X_{2}}\wedge \overline {X_{9}}\wedge \overline {X_{8}})\)
CycE (X4): \((X_{3}\wedge \overline {X_{2}})\)
Cdh1 (X6): \((\overline {X_{9}}\wedge \overline {X_{8}})\vee X_{5}\)
UbcH10 (X7): \(\overline {X_{6}}\vee (X_{6}\wedge X_{7} \wedge (X_{5}\vee X_{9}\vee X_{8}))\)
CycB (X8): \((\overline {X_{5}}\wedge \overline {X_{6}})\)
CycA (X9): \((X_{3}\wedge \overline {X_{2}}\wedge \overline {X_{5}}\wedge (\overline {X_{6}}\wedge \overline {X_{7}}))\vee (X_{9}\wedge \overline {X_{2}}\wedge \overline {X_{5}}\wedge (\overline {X_{6}\wedge X_{7}}))\)
(The row for Cdc20 (X5) is not recoverable from the extraction; the gene labels Rb, E2F, and Cdh1 are restored from [46].)
Due to the size of the mammalian cell cycle network, the optimal experimental design is not applicable. Therefore, for this network, we compare the approximate method and the random selection policy. The simulation settings are exactly the same as those for the TP53 model. We generate a state trajectory of size L=5 for updating priors. The simulation results in Fig. 7 are averaged over 10 different selections of sets of 5 unknown probabilities and 100 different assumed true values for each. The promising performance of the approximate method is clear in this figure.
Performance evaluation of the approximate experimental design for a sequence of experiments based on the mutated mammalian cell cycle model
An inherent problem of dealing with stationary control policies in a Markovian network is computational complexity, which is due to the exponential increase of the number of states with the network size. We have been able to mitigate the complexity of the experimental design, and thereby push the size limit (as demonstrated in Fig. 2), by proposing an approximate experimental design approach based on mean first passage time. However, further complexity reduction is needed to address experimental design for extremely large gene networks. Considering these intrinsic computational issues, we plan to find more efficient approximations and investigate accelerated implementation of the method via efficient computer architecture platforms, such as graphics processing units (GPUs).
Another consideration is the accuracy of the prior distributions used in the experimental design calculation. The performance of the experimental design depends on the degree to which the prior probabilities can describe the existing knowledge regarding uncertain parameters accurately. The problem of finding optimal prior probabilities in this context can be solved under a prior construction optimization framework, which involves constructing a mapping that transforms signaling relations to constraints on the prior distribution. Constructing optimal priors has been done for genomic classification [14, 47]. In future work, we aim to develop an optimization framework to address prior construction for experimental design in gene regulatory networks.
Given the complexity of biological systems and the cost of experiments, experimental design is of great practical significance in translational genomics. In this paper, we address the problem of optimal experimental design for gene regulatory networks controlled with stationary control policies. The proposed experimental design framework is based on the notion of mean objective cost of uncertainty, which views model uncertainty in terms of the increased cost it induces. Future work includes further reducing the computational cost of the method and designing optimization frameworks for constructing optimal prior distributions. Another avenue of research is to implement an integrative experimental design method that can utilize RNA-Seq data for the genes on the same pathway [48] for optimal uncertainty reduction.
BNp:
Boolean network with perturbation
CPM:
Conditional probability matrix
ETPM:
Effective transition probability matrix
GRN:
Gene regulatory network
IBR:
Intrinsically Bayesian robust
MDP:
Markov decision process
MFPT:
Mean first passage time
MOCU:
Mean objective cost of uncertainty
TPM:
Transition probability matrix
The authors would like to acknowledge Texas A&M High Performance Research Computing for providing computational resources to perform simulations in this paper.
Publication costs for this article were funded by the senior author's institution.
Data and MATLAB source code are available from the corresponding author upon request.
This article has been published as part of BMC Systems Biology Volume 12 Supplement 8, 2018: Selected articles from the International Conference on Intelligent Biology and Medicine (ICIBM) 2018: systems biology. The full contents of the supplement are available online at https://bmcsystbiol.biomedcentral.com/articles/supplements/volume-12-supplement-8.
RD conceived the method, developed the algorithm, performed the simulations, analyzed the results, and wrote the first draft. MSE helped with the prior update step, analyzed the results, and commented on the manuscript. ERD conceived the method, oversaw the project, analyzed the results, and edited the manuscript. All authors have read and approved the final manuscript.
Department of Biochemistry, Stanford University, Stanford, CA 94305, USA
Division of Oncology, Stanford School of Medicine, Stanford, CA 94305, USA
Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA
Center for Bioinformatics and Genomic Systems Engineering, Texas A&M University, College Station, TX 77845, USA
1. Kim S, Li H, Dougherty ER, Cao N, Chen Y, Bittner M, Suh EB. Can Markov chain models mimic biological regulation? J Biol Syst. 2002;10(04):337-57.
2. Zhao W, Serpedin E, Dougherty ER. Inferring gene regulatory networks from time series data using the minimum description length principle. Bioinformatics. 2006;22(17):2129-35.
3. Friedman N, Murphy K, Russell S. Learning the structure of dynamic probabilistic networks. In: Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence. San Francisco: Morgan Kaufmann Publishers Inc.; 1998. p. 139-47.
4. Chen PC, Chen JW. A Markovian approach to the control of genetic regulatory networks. Biosystems. 2007;90(2):535-45.
5. Liang J, Han J. Stochastic Boolean networks: An efficient approach to modeling gene regulatory networks. BMC Syst Biol. 2012;6(1):113.
6. Datta A, Pal R, Choudhary A, Dougherty ER. Control approaches for probabilistic gene regulatory networks - what approaches have been developed for addressing the issue of intervention? IEEE Signal Process Mag. 2007;24(1):54-63.
7. Dehghannasiri R, Esfahani MS, Dougherty ER. Intrinsically Bayesian robust Kalman filter: An innovation process approach. IEEE Trans Signal Process. 2017;65(10):2531-46.
8. Dehghannasiri R, Qian X, Dougherty ER. Intrinsically Bayesian robust Karhunen-Loeve compression. Signal Process. 2018;144:311-22.
9. Xiao Y, Dougherty ER. The impact of function perturbations in Boolean networks. Bioinformatics. 2007;23(10):1265-73.
10. Yoon BJ, Qian X, Dougherty ER. Quantifying the objective cost of uncertainty in complex dynamical systems. IEEE Trans Signal Process. 2013;61(9):2256-66.
11. Qian X, Dougherty ER. Bayesian regression with network prior: Optimal Bayesian filtering perspective. IEEE Trans Signal Process. 2016;64(23):6243-53.
12. Broumand A, Esfahani MS, Yoon B-J, Dougherty ER. Discrete optimal Bayesian classification with error-conditioned sequential sampling. Pattern Recog. 2015;48(11):3766-82.
13. Dehghannasiri R, Qian X, Dougherty ER. A Bayesian robust Kalman smoothing framework for state-space models with uncertain noise statistics. EURASIP J Adv Signal Process. 2018;2018(1):55.
14. Boluki S, Esfahani MS, Qian X, Dougherty ER. Constructing pathway-based priors within a Gaussian mixture model for Bayesian regression and classification. IEEE/ACM Trans Comput Biol Bioinf. 2017. https://doi.org/10.1109/TCBB.2017.2778715.
15. Dehghannasiri R, Yoon B-J, Dougherty ER. Optimal experimental design for gene regulatory networks in the presence of uncertainty. IEEE/ACM Trans Comput Biol Bioinf. 2015;12(4):938-50.
16. Sverchkov Y, Craven M. A review of active learning approaches to experimental design for uncovering biological networks. PLoS Comput Biol. 2017;13(6):1005466.
17. Kim M, Tagkopoulos I. Data integration and predictive modeling methods for multi-omics datasets. Mol Omics. 2018;14(1):8-25.
18. Steiert B, Raue A, Timmer J, Kreutz C. Experimental design for parameter estimation of gene regulatory networks. PLoS ONE. 2012;7(7):40052.
19. Dehghannasiri R, Yoon B-J, Dougherty ER. Efficient experimental design for uncertainty reduction in gene regulatory networks. BMC Bioinformatics. 2015;16(13):2.
20. Mohsenizadeh D, Dehghannasiri R, Dougherty E. Optimal objective-based experimental design for uncertain dynamical gene networks with experimental error. IEEE/ACM Trans Comput Biol Bioinf. 2018;15(1):218-30.
21. Imani M, Dehghannasiri R, Braga-Neto UM, Dougherty ER. Sequential experimental design for optimal structural intervention in gene regulatory networks based on the mean objective cost of uncertainty. Cancer Informat. 2018;17:1-10.
22. Pal R, Datta A, Dougherty ER. Robust intervention in probabilistic Boolean networks. IEEE Trans Signal Process. 2008;56(3):1280-94.
23. Bellman R, Kalaba R. Dynamic programming and adaptive processes: Mathematical foundation. IRE Trans Autom Control. 1960;AC-5(1):5-10.
24. Silver EA. Markovian decision processes with uncertain transition probabilities or rewards. Cambridge: MIT; 1963.
25. Gozzolino JM, Gonzalez-Zubieta R, Miller RL. Markovian decision processes with uncertain transition probabilities. Technical report. 1965.
26. Martin JJ. Bayesian Decision Problems and Markov Chains. New York: Wiley; 1967.
27. Kumar P. A survey of some results in stochastic adaptive control. SIAM J Control Optim. 1985;23(3):329-80.
28. Van Hee KM. Bayesian Control of Markov Chains. Vol. 95. Amsterdam: Mathematisch Centrum; 1978.
29. Yousefi MR, Dougherty ER. A comparison study of optimal and suboptimal intervention policies for gene regulatory networks in the presence of uncertainty. EURASIP J Bioinf Syst Biol. 2014;2014(1):6.
30. Norris JR. Markov Chains. Vol. 2. Cambridge: Cambridge University Press; 1998.
31. Vahedi G, Faryabi B, Chamberland J, Datta A, Dougherty ER. Intervention in gene regulatory networks via a stationary mean-first-passage-time control policy. IEEE Trans Biomed Eng. 2008;55(10):2319-31.
32. Shmulevich I, Dougherty ER, Zhang W. Gene perturbation and intervention in probabilistic Boolean networks. Bioinformatics. 2002;18(10):1319-31.
33. Shmulevich I, Dougherty ER, Zhang W. Control of stationary behavior in probabilistic Boolean networks by means of structural intervention. J Biol Syst. 2002;10(4):431-46.
34. Qian X, Dougherty ER. Effect of function perturbation on the steady-state distribution of genetic regulatory networks: Optimal structural intervention. IEEE Trans Signal Process. 2008;56(10):4966-76.
35. Hu M, Shen L, Zan X, Shang X, Liu W. An efficient algorithm to identify the optimal one-bit perturbation based on the basin-of-state size of Boolean networks. Sci Rep. 2016;6(26247):1-11.
36. Datta A, Choudhary A, Bittner ML, Dougherty ER. External control in Markovian genetic regulatory networks. Mach Learn. 2003;52(1-2):169-91.
37. Yang C, Wai-Ki C, Nam-Kiu T, Ho-Yin L. On finite-horizon control of genetic regulatory networks with multiple hard-constraints. BMC Syst Biol. 2010;4(2):14.
38. Ching W-K, Zhang S-Q, Jiao Y, Akutsu T, Tsing N-K, Wong A. Optimal control policy for probabilistic Boolean networks with hard constraints. IET Syst Biol. 2009;3(2):90-9.
39. Pal R, Datta A, Dougherty ER. Optimal infinite-horizon control for probabilistic Boolean networks. IEEE Trans Signal Process. 2006;54(6):2375-87.
40. Bertsekas DP. Dynamic Programming and Optimal Control. Vol. 1. Belmont: Athena Scientific; 1995.
41. Akutsu T, Hayashida M, Ching W-K, Ng MK. Control of Boolean networks: Hardness results and algorithms for tree structured networks. J Theor Biol. 2007;244(4):670-9.
42. Batchelor E, Loewer A, Lahav G. The ups and downs of p53: understanding protein dynamics in single cells. Nat Rev Cancer. 2009;9(5):371.
43. Weinberg R. The Biology of Cancer. Princeton: Garland Science; 2007.
44. Leach FS, Tokino T, Meltzer P, Burrell M, Oliner JD, Smith S, Hill DE, Sidransky D, Kinzler KW, Vogelstein B. p53 mutation and MDM2 amplification in human soft tissue sarcomas. Cancer Res. 1993;53(10):2231-4.
45. Reis RM, Könü-Lebleblicioglu D, Lopes JM, Kleihues P, Ohgaki H. Genetic profile of gliosarcomas. Am J Pathol. 2000;156(2):425-32.
46. Fauré A, Naldi A, Chaouiya C, Thieffry D. Dynamical analysis of a generic Boolean model for the control of the mammalian cell cycle. Bioinformatics. 2006;22(14):124-31.
47. Boluki S, Esfahani MS, Qian X, Dougherty ER. Incorporating biological prior knowledge for Bayesian learning via maximal knowledge-driven information priors. BMC Bioinformatics. 2017;18(14):552.
48. Broumand A, Hu T. A length bias corrected likelihood ratio test for the detection of differentially expressed pathways in RNA-Seq data. In: IEEE Global Conference on Signal and Information Processing. IEEE; 2015. p. 1145-9.
Exact Separation of Eigenvalues of Large Dimensional Sample Covariance Matrices
Z. D. Bai, Jack W. Silverstein
Ann. Probab. 27(3): 1536-1555 (July 1999). DOI: 10.1214/aop/1022677458
Let $B _n = (1/N) T_n^{1/2} X _n X _n^*T_n^{1/2}$ where $X_n$ is $n \times N$ with i.i.d. complex standardized entries having finite fourth moment, and $T_n^{1/2}$ is a Hermitian square root of the nonnegative definite Hermitian matrix $T_n$. It was shown in an earlier paper by the authors that, under certain conditions on the eigenvalues of $T_n$, with probability 1 no eigenvalues lie in any interval which is outside the support of the limiting empirical distribution (known to exist) for all large $n$. For these $n$ the interval corresponds to one that separates the eigenvalues of $T_n$. The aim of the present paper is to prove exact separation of eigenvalues; that is, with probability 1, the number of eigenvalues of $B_n$ and $T_n$ lying on one side of their respective intervals are identical for all large $n$.
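A quick numerical illustration consistent with the exact-separation statement follows (a sketch only; all parameter values are chosen for the demonstration and are not from the paper).

```python
import numpy as np

# T_n with eigenvalues 1 and 4 in equal multiplicity, n/N = 0.2: the
# limiting support splits into two intervals, and exact separation
# predicts exactly n/2 sample eigenvalues on each side of the gap.
rng = np.random.default_rng(0)
n, N = 200, 1000
T_half = np.diag(np.sqrt(np.repeat([1.0, 4.0], n // 2)))
X = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
B = T_half @ X @ X.conj().T @ T_half / N
eig = np.linalg.eigvalsh(B)                  # ascending order
k = np.diff(eig).argmax()                    # widest spectral gap
print(k + 1, n - k - 1)                      # expect 100 and 100
```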
Primary: 15A52, 60F15
Secondary: 62H99
Keywords: empirical distribution function of eigenvalues, random matrix, Stieltjes transform
Rights: Copyright © 1999 Institute of Mathematical Statistics
Additive noise
An interference added to the signal during its transmission over a communication channel. More precisely, one says that a given communication channel is a channel with additive noise if the transition function $ Q(y, \cdot ) $ of the channel is given by a density $ q(y, \widetilde{y} ) $, $ y \in {\mathcal Y} $, $ \widetilde{y} \in \widetilde {\mathcal Y} = {\mathcal Y} $ ($ {\mathcal Y} $ and $ \widetilde {\mathcal Y} $ are the spaces of the values of the signals at the input and output of the channel, respectively), depending only on the difference $ \widetilde{y} - y $, i.e. $ q(y, \widetilde{y} ) = q( \widetilde{y} -y) $. In this case the signal $ \widetilde \eta $ at the output of the channel can be represented as the sum of the input signal $ \eta $ and a random variable $ \zeta $ independent of it, called additive noise, so that $ \widetilde \eta = \eta + \zeta $.
If one considers channels with discrete or continuous time over finite or infinite intervals, the notion of a channel with additive noise is introduced by the relation $ \widetilde \eta (t) = \eta (t) + \zeta (t) $, where $ t $ is in the given interval, $ \eta (t) $, $ \widetilde \eta (t) $ and $ \zeta (t) $ are random processes representing the signals at the input and the output of the channel with additive noise, respectively; moreover, the process $ \zeta (t) $ is independent of $ \eta (t) $. In particular, if $ \zeta (t) $ is a Gaussian random process, then the considered channel is called a Gaussian channel.
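As a small illustration (a sketch, not part of the article): with Gaussian $ \zeta $ the construction below is a Gaussian channel, and conditioning on the input empirically recovers the shifted noise density $ q( \widetilde{y} - y ) $.

```python
import numpy as np

# Additive channel: the output is the input plus independent noise.
rng = np.random.default_rng(1)
eta = rng.uniform(-1.0, 1.0, size=100_000)     # input signal (any law)
zeta = rng.normal(0.0, 0.5, size=eta.size)     # noise, independent of eta
eta_tilde = eta + zeta
# Conditional on eta = y, the output is N(y, 0.25): check first moments.
mask = np.abs(eta - 0.3) < 0.01                # condition on eta near 0.3
print(eta_tilde[mask].mean(), eta_tilde[mask].std())  # ~0.3 and ~0.5
```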
More generally, especially in system and control theory and stochastic analysis, the term additive noise is used for describing the following way noise enters a stochastic differential equation or observation equation: $ d x = f ( x , t ) d t + d w $, $ d y = h ( x , t ) d t + d v $, where $ w $ and $ v $ are Wiener noise processes. The general situation of a stochastic differential equation of the form $ d x = f ( x , t ) d t + g ( x , t ) d w $ is referred to as having multiplicative noise.
Additive noise. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Additive_noise&oldid=18399
This article was adapted from an original article by R.L. DobrushinV.V. Prelov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Encyclopedia of Electrical Engineering
Faraday's Law of Induction
Faraday's Law of Induction describes how a changing magnetic field generates an electric current in a conductor, the converse of the way an electric current produces a magnetic field. English physicist Michael Faraday is credited with discovering magnetic induction in 1831; however, the American physicist Joseph Henry independently made the same discovery at about the same time, according to the University of Texas.
It is impossible to overstate the significance of Faraday's discovery. Magnetic induction makes possible the electric motors, generators, and transformers that form the foundation of modern technology.
Faraday's law states: The electromotive force around a closed path is equal to the negative of the time rate of change of the magnetic flux enclosed by the path.
$$\mathcal{E} = -\frac{d\phi_B}{dt} \tag{1}$$
where $\mathcal {E}$ is the electromotive force (EMF) and $\phi_B$ is the magnetic flux.
The direction of the electromotive force is given by Lenz's law.
The law of induction of electric currents was put into mathematical form by Franz Ernst Neumann in 1845. Faraday's law contains information about the relationships between both the magnitudes and the directions of its variables. However, the relationships between the directions are not explicit; they are hidden in the mathematical formula.
Left Hand Rule for Faraday's Law
It is possible to find the direction of the electromotive force (EMF) directly from Faraday's law, without invoking Lenz's law. A left-hand rule helps to do that, as follows:
Fig. 3: Left-hand rule (magnetic induction)
Align the curved fingers of the left hand with the loop (yellow line).
Stretch your thumb. The stretched thumb indicates the direction of n (brown), the normal to the area enclosed by the loop.
Find the sign of the change in flux. Determine the initial and final fluxes (whose difference is $\Delta \phi_B$) with respect to the normal n, as indicated by the stretched thumb.
If the change in flux, $\Delta \phi_B$, is positive, the curved fingers show the direction of the electromotive force (yellow arrowheads).
If $\Delta \phi_B$ is negative, the direction of the electromotive force is opposite to the direction of the curved fingers (opposite to the yellow arrowheads).
For a tightly wound coil of wire, composed of N identical turns, each with the same $\phi_B$, Faraday's law of induction states that $$\mathcal{E} = -N \frac{d\phi_B}{dt} \tag{2}$$
where N is the number of turns of wire and $\phi_B$ is the magnetic flux through a single loop.
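As a small worked example (all values hypothetical), the following computes the EMF induced in an N-turn coil whose flux changes linearly in time, using Eq. (2).

```python
# EMF = -N * dPhi/dt for a coil of N identical turns.
N = 100               # number of turns
phi_initial = 2e-3    # flux through one loop at t = 0 (Wb)
phi_final = 5e-3      # flux through one loop at t = 0.1 s (Wb)
dt = 0.1              # time interval (s)

dphi_dt = (phi_final - phi_initial) / dt
emf = -N * dphi_dt    # volts; the minus sign encodes Lenz's law
print(f"Induced EMF: {emf:.2f} V")  # -> Induced EMF: -3.00 V
```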
In order to understand Faraday's Law of Induction, it is important to have a basic understanding of magnetic fields. A magnetic field is often depicted as lines of magnetic flux. In the case of a bar magnet, the flux lines exit from the north pole and curve around to reenter at the south pole. In this model, the number of flux lines passing through a given surface in space represents the flux density, or the strength of the field.
How does magnetic induction work?
If we run an electric current through a wire, it will produce a magnetic field around the wire. The direction of this magnetic field can be determined by the right-hand rule: if you extend your thumb and curl the fingers of your right hand, your thumb points in the direction of the current, and your fingers curl in the direction of the magnetic field.
Fig. 1: Left-hand and right-hand rule for a magnetic field due to a current in a straight wire.
If you bend the wire into a loop, the magnetic field lines will bend with it, forming a toroid, or doughnut shape.
Fig. 2: A current-carrying circular loop
If we run a current through a wire loop in a magnetic field, the interaction of these magnetic fields will exert a twisting force, or torque, on the loop causing it to rotate. However, it will only rotate so far until the magnetic fields are aligned. If we want the loop to continue rotating, we have to reverse the direction of the current, which will reverse the direction of the magnetic field from the loop. The loop will then rotate 180 degrees until its field is aligned in the other direction. This is the basis for the electric motor.
Conversely, if we rotate a wire loop in a magnetic field, the field will induce an electric current in the wire. The direction of the current will reverse every half turn, producing an alternating current. This is the basis for the electric generator. It should be noted here that it is not the motion of the wire but rather the opening and closing of the loop with respect to the direction of the field that induces the current. When the loop is face-on to the field, the maximum amount of flux passes through the loop. However, when the loop is turned edge-on to the field, no flux lines pass through the loop. It is this change in the amount of flux passing through the loop that induces the current.
Membrane lipid raft organization during cotton fiber development
XU Fan, SUO Xiaodong, LI Fang, BAO Chaoya, HE Shengyang, HUANG Li & LUO Ming
Cotton fiber is a single-celled seed trichome that originates from the ovule epidermis and is an excellent model for studying cell elongation. As the cotton fiber cell elongates, its plasma membrane also expands enormously. Despite progress in understanding cotton fiber cell elongation, the role of the plasma membrane in fiber cell development remains elusive.
The plasma membrane of cotton fiber cells at different stages of development was labeled with a low-toxicity fluorescent dye, di-4-ANEPPDHQ. Fluorescence images were obtained using confocal laser scanning microscopy. Subsequently, we investigated the relationship between lipid raft activity and cotton fiber development by calculating generalized polarization (GP) values and dual-channel ratio images.
We demonstrated that the optimum staining conditions were treatment with 3 μmol·L⁻¹ di-4-ANEPPDHQ for 5 min at room temperature, and that the optimal fluorescence images were obtained with 488 nm excitation and 500–580 nm and 620–720 nm dual-channel emission. First, we examined lipid raft organization over the course of fiber development. The GP values were high in the fiber elongation stage (5–10 DPA, days post anthesis) and relatively low in the initiation (0 DPA), secondary cell wall synthesis (20 DPA), and stable synthesis (30 DPA) stages. The GP value peaked in 10 DPA fibers, and the value in 30 DPA fibers was the lowest. Furthermore, we examined the differences in lipid raft activity in fiber cells between the short-fiber cotton mutant, Li-1, and its wild type. The GP values of the Li-1 mutant fibers were lower than those of the wild-type fibers at the elongation stage, and in the Li-1 mutant the GP values of 10 DPA fibers were lower than those of 5 DPA fibers.
We established a system for examining membrane lipid raft activity in cotton fiber cells. We verified that lipid raft activity followed a low-high-low pattern during cotton fiber cell development, and that this pattern was disrupted in the short-lint-fiber Li-1 mutant, suggesting that membrane lipid order and lipid raft activity are closely linked to fiber cell development.
Cotton is the premier natural fiber for textiles. Cotton fibers are highly elongated single cells of the seed epidermis. This extremely elongated unicellular structure makes the cotton fiber cell an ideal model for studying plant cell growth (Kim and Triplett 2001; Shi et al. 2006; Singh et al. 2009a; Qin and Zhu 2011). The developmental process of cotton fiber consists of five distinctive but overlapping stages: initiation, elongation, transition, secondary cell wall deposition, and maturation (Haigler et al. 2012). Lint fibers initiate elongation near the day of anthesis and continue up to approximately 21 days post anthesis (DPA). During this period, the elongation rate exhibits a slow-fast-slow regularity; it peaks at approximately 10 DPA, and the fibers finally reach 30–40 mm in length (Liu et al. 2012). Subsequently, the elongation of fiber cells completely stops, and the fiber enters a stable secondary wall deposition period (20–45 DPA) (Singh et al. 2009b), followed by a dehydration period (45–50 DPA), which generates mature fibers.
Extreme elongation of fiber cells requires changes in cell turgor pressure, plasmodesmatal regulation, and transporter activities (Ruan et al. 2004; Zhu et al. 2003). A large-scale transcriptome analysis revealed that lipid metabolism pathways are significantly up-regulated during fiber cell elongation (Gou et al. 2007). According to a cotton lipid spectrum analysis, the amount of unsaturated fatty acids in elongation-stage fiber cells (α-linolenic acid: C18:3) is greater than that in ovules, and the amount of very-long-chain fatty acids (VLCFAs, from C20 to C26) eventually increases three- to five-fold (Wanjie et al. 2005). In addition, treating in vitro cultured cotton ovules with VLCFAs significantly promotes the elongation of cotton fibers, while treatment with the VLCFA-synthesis inhibitor acephrachlor (ACE) completely inhibits fiber growth, indicating that VLCFAs are involved in the cotton fiber elongation process (Qin et al. 2007). A study on Δ12 fatty acid desaturases revealed that the formation of unsaturated fatty acids under cold stress could maintain the specific membrane structure required for fiber elongation (Kargiotidou et al. 2008). Furthermore, COBL, a gene encoding a plant-specific glycosylphosphatidylinositol (GPI)-anchored protein, influences the orientation and crystallinity of fiber microfilaments and is closely linked to fiber development (Roudier et al. 2010; Niu et al. 2019). High phytosterol concentrations were also observed during the rapid elongation stage, and numerous plant sterol biosynthesis genes were down-regulated in the short fibers of the Li-1 mutant, indicating that plant sterols also participate in the development of cotton fiber (Deng et al. 2016). VLCFAs are among the substrates required for the synthesis of sphingolipids, and GPI is a precursor in the synthesis of complex sphingolipids. Sphingolipids and sterols are critical structural components of cell membranes, organelle membranes, and vacuolar membranes, and they form membrane lipid rafts (Hill et al. 2018). The relationships between these substances and fiber development indicate that the fiber membrane plays an important role in fiber development. However, the role of membrane lipid rafts in the development of cotton fiber remains unclear.
Fluorescent probes have been extensively used as biomarkers in biological studies. Laurdan and di-4-ANEPPDHQ are two phase-sensitive membrane probes; they respond uniquely to lipid packing, in a manner different from membrane-associated peptides (Dinic et al. 2011). Laurdan and di-4-ANEPPDHQ display blue shifts of approximately 50 nm in their emission peaks for membranes in the liquid-ordered (lo) phase relative to membranes in the liquid-disordered (ld) phase (Jin et al. 2005; Jin et al. 2006), which can be quantified by calculating generalized polarization (GP) values (Aron et al. 2017). Laurdan is an ultraviolet-excited dye that is usually imaged using a two-photon excited fluorescence (TPF) microscope to avoid the photobleaching tendency observed under single-photon excitation (Jin et al. 2005). The peak emission of Laurdan has been reported to be at 440 nm for the lo phase and 490 nm for the ld phase (Dinic et al. 2011). Di-4-ANEPPDHQ is a single-photon excited dye whose emission spectrum spans 500–750 nm, covering the spectral range of most microscope systems (Owen and Gaus 2010), and the spectral blue shift of the di-4-ANEPPDHQ dye is 60 nm (Aron et al. 2017). Quantitative in vivo imaging of lipid rafts using di-4-ANEPPDHQ in artificial membrane systems and animal cells is well established (Owen et al. 2006; Owen and Gaus 2010; Owen et al. 2012). A few studies have reported the application of di-4-ANEPPDHQ for visualizing plasma membrane microdomains in plant cells (Roche et al. 2008; Liu et al. 2009; Zhao et al. 2015).
Plant materials
Wild-type Jimian 14 (Gossypium hirsutum L. cv. Jimian 14) was provided by professor MA Zhiying (Hebei Agricultural University) and was propagated and preserved at the Biotechnology Research Center of Southwest University. Short fiber cotton mutant Ligon lintless (Li-1) was provided by the Institute of Cotton Research, Chinese Academy of Agricultural Sciences, and the corresponding wild type (TM-1) for Li-1 mutant was segregated from a heterozygous Li-1 mutant. All plants were grown under natural conditions in the experimental field of the Biotechnology Research Center of Southwest University in Chongqing.
In vitro cotton ovule culture
Cotton ovules were collected 1 day after flower opening (defined as 1 DPA), soaked in 75% ethanol for 1 min, rinsed in distilled and deionized water, and soaked again in 0.1% (w/v) HgCl2 solution containing 0.05% Tween-80 at 100 g for 10 min for sterilization. Ovules were then placed in Beasley and Ting's medium under aseptic conditions (Beasley and Ting 1973).
Di-4-ANEPPDHQ staining
Di-4-ANEPPDHQ was purchased from Invitrogen (CAT #D36802). The stock solution of di-4-ANEPPDHQ [5 mmol·L⁻¹ in dimethyl sulphoxide (DMSO)] was stored in the dark at −20 °C. For di-4-ANEPPDHQ staining, the in vitro cultured cotton ovules were incubated in staining solution as described in Results.
Confocal laser scanning microscope observation
An SP8 confocal laser scanning microscope (SP8 CLSM, Leica, Germany) was used for the imaging of di-4-ANEPPDHQ-labelled cotton fibers. The sample was excited using a 488-nm laser, and the emission spectra were 500–580 nm (green) and 620–720 nm (red). A 63× oil immersion objective (N.A. = 1.3) was used in this study. Identical microscope settings were maintained for quantitative imaging of membrane components.
GP processing
After CLSM imaging, generalized polarization (GP) images were generated by a previously described protocol (Owen et al. 2012), with some modifications. Briefly, the GP values were calculated according to the following equations:
$$ \mathrm{GP}=\frac{I_{500-580}-G\times I_{620-720}}{I_{500-580}+G\times I_{620-720}} $$

$$ G=\frac{\mathrm{GP}_{\mathrm{ref}}+\mathrm{GP}_{\mathrm{ref}}\,\mathrm{GP}_{\mathrm{mes}}-\mathrm{GP}_{\mathrm{mes}}-1}{\mathrm{GP}_{\mathrm{mes}}+\mathrm{GP}_{\mathrm{ref}}\,\mathrm{GP}_{\mathrm{mes}}-\mathrm{GP}_{\mathrm{ref}}-1} $$
where I denotes the fluorescence intensity of each pixel of the image within the emission range of the corresponding channel; G is the calibration constant; $\mathrm{GP}_{\mathrm{mes}}$ is the GP value measured for di-4-ANEPPDHQ in pure DMSO solution under similar device parameters; and $\mathrm{GP}_{\mathrm{ref}} = 0$ when the ordered phase and the disordered phase are fully separated. The fluorescence intensity of each image was calculated using ImageJ 1.46 (https://imagej.nih.gov/ij/download.html).
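For illustration, the per-pixel GP computation can be scripted as below; this is not the authors' code, and the channel images and calibration constant are placeholders.

```python
import numpy as np

# GP image from the two emission channels, following the GP equation above.
def gp_image(img_green, img_red, G=1.0, eps=1e-9):
    green = img_green.astype(float)              # 500-580 nm channel
    red = G * img_red.astype(float)              # 620-720 nm channel, calibrated
    return (green - red) / (green + red + eps)   # eps avoids division by zero

rng = np.random.default_rng(0)
img_green = rng.integers(0, 255, size=(64, 64))
img_red = rng.integers(0, 255, size=(64, 64))
gp = gp_image(img_green, img_red, G=0.8)         # GP values lie in [-1, 1]
```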
Ratio image processing
Dual-channel ratio imaging can be used for quantitative analysis of membrane organization (Jin et al. 2005). In the present study, FluoView FV1000 was used for ratio imaging of di-4-ANEPPDHQ stained cotton fiber cells. The ratio images were calculated as follows:
$$ \mathrm{Ratio}_{r/g}=\frac{\mathrm{int}_{r}-\mathrm{bkg}_{r}}{\mathrm{int}_{g}-\mathrm{bkg}_{g}}\times \mathrm{MF} $$
where $\mathrm{int}_r$ and $\mathrm{int}_g$ represent the fluorescence intensities of the red-channel and green-channel images, respectively; $\mathrm{bkg}_r$ and $\mathrm{bkg}_g$ represent the background values set for $\mathrm{int}_r$ and $\mathrm{int}_g$, respectively, and are set to similar levels to avoid interference; MF is the multiplication factor.
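A corresponding sketch for the ratio image, again with hypothetical arrays, background values, and multiplication factor:

```python
import numpy as np

# Dual-channel ratio image, following the Ratio equation above.
def ratio_image(img_red, img_green, bkg_r=10.0, bkg_g=10.0, mf=100.0, eps=1e-9):
    num = img_red.astype(float) - bkg_r      # background-subtracted red
    den = img_green.astype(float) - bkg_g    # background-subtracted green
    return num / (den + eps) * mf

rng = np.random.default_rng(1)
ratio = ratio_image(rng.integers(20, 255, (64, 64)),
                    rng.integers(20, 255, (64, 64)))
```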
Di-4-ANEPPDHQ is a low toxicity fluorescent probe dye for cotton fibers
During the growth and development of cotton fiber cells, the changes in cell morphology are closely related to the cell membrane. Since di-4-ANEPPDHQ is used as a probe for detecting membrane order, in the present study we used this fluorescent dye to examine membrane lipid raft organization in cotton fiber cells. First, we investigated the toxicity of di-4-ANEPPDHQ to cotton fiber growth and development. Following exogenous application of di-4-ANEPPDHQ at various concentrations, we observed that the lengths of di-4-ANEPPDHQ-treated cotton fibers were similar to those of untreated fibers after culture for similar periods (Fig. 1). These results indicated that di-4-ANEPPDHQ is a low-toxicity fluorescent probe for cotton fiber and could be used for observing lipid raft microregions of cotton fiber cells.
Effects of di-4-ANEPPDHQ on the growth of cotton fiber cells. Cotton ovules were treated with 0 μmol·L⁻¹, 2 μmol·L⁻¹, 4 μmol·L⁻¹, or 6 μmol·L⁻¹ di-4-ANEPPDHQ and cultured in vitro for 5, 10, and 15 days
The fluorescence emission spectra of di-4-ANEPPDHQ in cotton fibers
Di-4-ANEPPDHQ is a fluorescent probe that responds rapidly to changes in electric potential in its environment, and its spectral characteristics depend on the environment, the cell type, and the electric potential. To determine the spectral characteristics of di-4-ANEPPDHQ in fiber cells, we used 488 nm excitation to collect emission spectra at 10-nm intervals in lambda acquisition mode and calculated the spectral intensity. The emission of di-4-ANEPPDHQ-stained cotton fiber cells spanned 550–660 nm, with the peak emission intensity observed at 580 nm (Fig. 2).
Spectral characteristics of di-4-ANEPPDHQ in cotton fiber cells. a Raw image series taken in the lambda mode of the confocal laser scanning microscope (CLSM) of a WT cotton fiber cell stained with di-4-ANEPPDHQ. b Raw emission profile of di-4-ANEPPDHQ in a wild-type cotton fiber cell
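Locating the emission peak from such a lambda stack is a simple computation; the sketch below uses synthetic data in place of the recorded images.

```python
import numpy as np

# One image per 10-nm emission window, as described above.
wavelengths = np.arange(550, 661, 10)                  # nm
stack = np.random.default_rng(2).random((len(wavelengths), 64, 64))
mean_intensity = stack.mean(axis=(1, 2))               # intensity per window
peak_nm = wavelengths[np.argmax(mean_intensity)]       # wavelength of maximum
```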
The optimum labeling conditions of di-4-ANEPPDHQ for cotton fibers
Considering that di-4-ANEPPDHQ is a very sensitive fluorescent probe, treatment time and concentration are two key factors to take into account when applying the dye. To determine the optimal conditions for labeling cotton fibers with di-4-ANEPPDHQ, we examined the fluorescence of stained fiber cells under different concentrations and treatment times. When the staining time was 3 min, the fluorescence signal could hardly be detected under the low-concentration treatments (Fig. 3a), whereas after treatment for 5 min with 3 μmol·L⁻¹ di-4-ANEPPDHQ, clear fluorescence images could be recorded (Fig. 3b). Therefore, we selected 3 μmol·L⁻¹ and a 5 min treatment as the optimal conditions for staining cotton fiber cells with di-4-ANEPPDHQ.
Optimization of the di-4-ANEPPDHQ staining parameters for cotton fiber cells. a Di-4-ANEPPDHQ treatment for 3 min. b Di-4-ANEPPDHQ treatment for 5 min. 1 μmol·L⁻¹, 2 μmol·L⁻¹, 3 μmol·L⁻¹, 4 μmol·L⁻¹, 5 μmol·L⁻¹, and 6 μmol·L⁻¹ indicate the treatment concentrations
Membrane lipid raft order of wild-type cotton fibers
To understand how plasma membrane lipid raft organization changes over the course of cotton fiber cell development, we examined the fluorescence signals of fiber cells stained with di-4-ANEPPDHQ at 0 DPA, 5 DPA, 10 DPA, 20 DPA, and 30 DPA. The fluorescence intensity of the green channel (liquid-ordered phase, 500–580 nm) was stronger in 0 DPA, 5 DPA, and 10 DPA fibers, but weaker at 20 DPA and 30 DPA. Conversely, the fluorescence intensity of the red channel (liquid-disordered phase, 620–720 nm) was weaker in 0 DPA, 5 DPA, and 10 DPA fibers, but stronger in 20 DPA and 30 DPA fibers (Fig. 4). These results indicated that the plasma membrane order was higher at the early stages of fiber development, while the plasma membrane was more disordered at the later stages. Furthermore, the ratio (red/green) of the dual-channel fluorescence intensity was plotted; white indicates low membrane order and lipid raft activity, while blue represents higher membrane order and lipid raft activity. The ratio images showed that the fiber membranes at 0 DPA were almost entirely white and red; blue coloring was most extensive at 10 DPA, with almost no white dots; the fiber membranes showed a certain amount of white at 20 DPA; and the fiber membranes were almost entirely white at 30 DPA (Fig. 4). These observations suggest that at the initiation stage, the lipid order of the fiber cell membrane and the lipid raft activity were low, and that both increased gradually as the fibers transitioned into the rapid elongation stage. With the termination of fiber cell elongation, fiber cells entered the secondary wall synthesis phase, and membrane lipid order and lipid raft activity decreased. Therefore, during cotton fiber cell development, membrane lipid order and lipid raft activity change from low to high and then back to low, and the physiological and biochemical activities of fiber cells are greatest in the elongation phase.
Fluorescence imaging of di-4-ANEPPDHQ labeled wild-type cotton fiber cells. 500–580 nm, the green channel; 620–720 nm, the red channel; Merged, the merge of the green and red channels; Ratio, the ratio images of red channel/green channel. 0 DPA, 5 DPA, 10 DPA, 20 DPA, and 30 DPA indicate the stages of cotton fiber cell development
For quantitative analysis of the images, we calculated the generalized polarization (GP) and red/green ratio (Ratio_r/g) values across a larger set of images. The GP values in fiber cells at 0 DPA were relatively low. With the development of fiber cells, the GP values increased gradually, reaching a peak at 10 DPA, and then decreased to their lowest level at 30 DPA (Fig. 5a). These results are consistent with the fluorescence signal observations, again indicating that fiber cell membrane lipid order and lipid raft activity are higher at the rapid elongation stage and lower in the secondary wall synthesis phase. The Ratio_r/g value in 10 DPA fiber cells was 0.557 ± 0.131, the lowest value observed during fiber development, whereas the value at 30 DPA was 1.410 ± 0.090, the highest (Fig. 5b). These results suggest that lipid raft activity, cell membrane order, and the physiological activity of fiber cells are greatest during the rapid elongation phase.
The GP and ratio values of wild-type cotton fibers. a The GP values of 0 DPA, 5 DPA, 10 DPA, 20 DPA, and 30 DPA wild-type cotton fibers. b The Ratio_r/g values of 0 DPA, 5 DPA, 10 DPA, 20 DPA, and 30 DPA wild-type cotton fibers
Membrane lipid raft order of Li-1 mutant cotton fibers
To further verify the relationship between lipid raft activity and fiber cell elongation, we examined changes in GP value during the elongation of fiber cells of the short-fiber mutant, Ligon lintless-1 (Li-1). Notably, the GP values in Li-1 mutant fibers were remarkably lower than those in wild-type fibers at similar developmental stages (Fig. 6), indicating that lipid raft activity was lower in mutant fiber cells. In addition, the GP value in wild-type fibers increased from 5 DPA to 10 DPA, while the GP value in mutant fibers decreased from 5 DPA to 10 DPA (Fig. 6), further indicating that membrane lipid order and lipid raft activity are closely correlated with cotton fiber development.
Fluorescence images and the GP values of Li-1 mutant fibers. a Confocal laser scanning microscope images of di-4-ANEPPDHQ stained 5 DPA (top) and 10 DPA (bottom) Li-1 fibers. 500–580 nm, the green channel; 620–720 nm, the red channel; Merged, the merge of the green and red channels; Ratio, the ratio images of red channel/green channel. b The GP values of 5 DPA and 10 DPA wild-type and Li-1 fibers
Biomembranes play an important role in cell growth and development. The cotton fiber cell is one of the longest cells in plants. Owing to its highly elongated structure and high cellulose content, the cotton fiber serves as an excellent system for studying cell elongation, cell wall formation, and other fundamental aspects of plant cell growth and development (Kim and Triplett 2001). Therefore, it is assumed that the membrane also plays an important role in fiber growth. On the one hand, as cell size expands, the area of the plasma membrane and the inner membranes needs to increase correspondingly; on the other hand, the membrane serves as the site of attachment of most enzymes (about 80% of enzymes are membrane-binding proteins; for example, the cellulose synthase complex is located in the plasma membrane). However, studying membrane functions and properties is challenging due to the membrane's dynamic structure and limited technologies. In recent years, following advances in technology, researchers have demonstrated through diverse approaches that lipid rafts (lipid microdomains) are the functional domains of membranes (Mongrand et al. 2004; Borner et al. 2005).
Plasma membranes (PMs) are composed of three major classes of lipids, namely glycerolipids, sphingolipids, and sterols, which may account for up to 100 000 distinct molecular species (Yetukuri et al. 2008; Shevchenko and Simons 2010). Overall, glycerolipids share similar molecular moieties in plants, animals, and fungi. In contrast, sterols and sphingolipids are varied and specific to each kingdom, and they are the major components of lipid rafts (Cacas et al. 2016). Membrane lipid raft activity has become a major index for characterizing membrane properties. Highly ordered cell membranes and high lipid raft activity can offer stable reaction platforms and an ordered dynamic environment for various physiological and biochemical reactions (Maccioni et al. 2002; Yu et al. 2004), which are closely linked to the polar elongation of cells (Meder et al. 2006; Sorek et al. 2007; Cánovas and Pérez-Martín 2009). Over the last two decades, considerable progress has been made in cotton fiber studies, and components of lipid rafts have been verified to play key roles in fiber development. For example, VLCFAs are required for fiber development and mainly serve as precursors of sphingolipid biosynthesis (Qin et al. 2007). In addition, the composition and concentration of plant sterols influence fiber growth (Deng et al. 2016; Niu et al. 2019). Furthermore, it has been demonstrated that inhibiting sphingolipid synthesis severely suppresses fiber cell growth. These results indicate that lipid rafts play a key role in the development of fiber cells. Therefore, it is critical to examine membrane lipid raft activity during fiber development.
The fluorescence probe di-4-ANEPPDHQ can bind both liquid-ordered (lo) and liquid-disordered (ld) phase membranes. Due to its strong polarity-dependent spectral shifts, di-4-ANEPPDHQ stains lo and ld phase membranes in different colors (Klymchenko and Kreder 2014). Through fluorescence lifetime imaging and CLSM, the optical properties of di-4-ANEPPDHQ in animal cells have been well studied and applied to detect lipid raft activity in living cells (Owen et al. 2006, 2012). However, few studies have focused on plant cells. Roche et al. (2008) used the dye to explore changes in membrane lipid activity in BY2 cells following treatment with the plant sterol chelator methyl-β-cyclodextrin, and demonstrated that sterols significantly influence lipid raft activity in plant cells. Liu et al. (2009) used di-4-ANEPPDHQ to investigate the aggregation of lipid microdomains (lipid rafts) at the tip of Picea meyeri pollen tubes, which is associated with the polar elongation of the pollen tube. In addition, Zhao et al. (2015) studied root epidermal cells and root hair cells in Arabidopsis using di-4-ANEPPDHQ, and reported that the degree of order of the plasma membrane was higher than that of the inner membranes in these cells.
In the present study, we investigated the toxicity and optical properties of di-4-ANEPPDHQ in cotton fiber cells. The toxicity of di-4-ANEPPDHQ to fiber cells was relatively low; there was no obvious difference in fiber cell elongation between fibers treated with 6 μmol·L⁻¹ di-4-ANEPPDHQ and the control, similar to the results observed in root hair cells (Zhao et al. 2015). For fiber cells, incubation with 3 μmol·L⁻¹ of the di-4-ANEPPDHQ probe in the culture medium for 5 min at room temperature was adequate, while for root epidermal cells and root hairs, incubation with 5 μmol·L⁻¹ of the probe for 5 min at room temperature was adequate (Zhao et al. 2015). These results suggest that the optimal staining conditions depend on the material.
Changes in lipid raft activity in the plasma membrane were observed over the course of fiber cell development. The rapid elongation phase of the fiber cell exhibited higher lipid raft activity, while the elongation termination and early elongation stages exhibited relatively low lipid raft activity, showing that plasma membrane lipid raft activity and plasma membrane organization are closely related to fiber cell elongation. Phytosterols are components of lipid rafts. During fiber growth, higher concentrations of sitosterol and campesterol, the two major phytosterols, were detected in rapid-elongation-phase fiber cells than in the early elongation and secondary cell wall deposition stages (Deng et al. 2016). The trend in membrane lipid raft activity was consistent with the phytosterol concentration trends observed over the course of fiber cell development. Roche et al. (2008) used the cyclic oligosaccharide methyl-β-cyclodextrin, commonly used in animal cells to decrease cholesterol levels, to induce a drastic reduction (50%) in the total free sterol concentration in the PM of BY2 cells, and this sterol depletion increased lipid acyl chain disorder. These results confirm that higher phytosterol concentrations are associated with higher membrane lipid raft activity (membrane order). Since phytosterols and sphingolipids are two key components of lipid rafts, further studies should focus on the roles of sphingolipids and the various molecular species of phytosterols and sphingolipids in lipid raft activity in the cotton fiber cell.
In the present study, we investigated lipid raft activity in cotton fiber cells during development by labeling with di-4-ANEPPDHQ. Using an in vitro cotton ovule culture system, we verified that the dye exhibits low toxicity to cotton fiber, and we established the optimal labeling conditions for the cotton fiber plasma membrane: incubation with 3 μmol·L⁻¹ of the di-4-ANEPPDHQ probe for 5 min at room temperature. Based on the phase separation characteristics of di-4-ANEPPDHQ, dual-channel images were obtained using CLSM (Leica SP8) and were processed using GP values and a Ratio_r/g processing algorithm. According to the results, the membrane order of cotton fiber cells followed a low-high-low pattern during development, and this pattern was disrupted in the short-lint-fiber Li-1 mutant. Overall, the results imply a close relationship between cotton fiber cell development and cell membrane lipid organization and lipid raft activity.
Abbreviations
ACE: Acephrachlor
CLSM: Confocal laser scanning microscope
DMSO: Dimethyl sulphoxide
DPA: Days post anthesis
GP: Generalized polarization
Li-1: Ligon lintless-1
ld: Liquid disorder
lo: Liquid order
MF: Multiplication factor
VLCFA: Very-long-chain fatty acid
Aron M, Browning R, Carugo D, et al. Spectral imaging toolbox: segmentation, hyperstack reconstruction, and batch processing of spectral images for the determination of cell and model membrane lipid order. BMC Bioinformatics. 2017;18(1):254. https://doi.org/10.1186/s12859-017-1656-2.
Beasley CA, Ting IP. The effects of plant growth substances on in vitro fiber development from fertilized cotton ovules. Am J of Botany. 1973;60(2):130–9. https://doi.org/10.1002/j.1537-2197.1973.tb10209.x.
Borner GH, Sherrier DJ, Weimar T, et al. Analysis of detergent-resistant membranes in Arabidopsi. Evidence for plasma membrane lipid rafts. Plant Physiol. 2005;137:104–16. https://doi.org/10.1104/pp.104.053041.
Cánovas D, Pérez-Martín J. Sphingolipid biosynthesis is required for polar growth in the dimorphic phytopathogen Ustilago maydis. Fungal Genet Biol. 2009;46(2):190–200. https://doi.org/10.1016/j.fgb.2008.11.003.
Deng S, Wei T, Tan K, et al. Phytosterol content and the campesterol:sitosterol ratio influence cotton fiber development: role of phytosterols in cell elongation. Sci China Life Sci. 2016;59(2):183–93. https://doi.org/10.1007/s11427-015-4992-3.
Dinic J, Biverståhl H, Mäler L, Parmryd I. Laurdan and di-4-ANEPPDHQ do not respond to membrane-inserted peptides and are good probes for lipid packing. Biochim Biophys Acta. 2011;1808(1):298–306. https://doi.org/10.1016/j.bbamem.2010.10.002.
Gou JY, Wang LJ, Chen SP, et al. Gene expression and metabolite profiles of cotton fiber during cell elongation and secondary cell wall synthesis. Cell Res. 2007;17(5):422–34. https://doi.org/10.1038/sj.cr.7310150.
Haigler CH, Betancur L, Stiff MR, Tuttle JR. Cotton fiber: a powerful single-cell model for cell wall and cellulose research. Front Plant Sci. 2012;3:104. https://doi.org/10.3389/fpls.2012.00104.
Hill CH, Cook GM, Spratley SJ, et al. The mechanism of glycosphingolipid degradation revealed by a GALC-SapA complex structure. Nat Commun. 2018;9:151. https://doi.org/10.1038/s41467-017-02361-y.
Cacas JL, Buré C, Grosjean K, et al. Revisiting plant plasma membrane lipids in tobacco: a focus on sphingolipids. Plant Physiol. 2016;170:367–84. https://doi.org/10.1104/pp.15.00564.
Jin L, Millard AC, Wuskell JP, et al. Cholesterol-enriched lipid domains can be visualized by di-4-ANEPPDHQ with linear and nonlinear optics. Biophys J. 2005;89(1):L4–6. https://doi.org/10.1529/biophysj.105.064816.
Jin L, Millard AC, Wuskell JP, et al. Characterization and application of a new optical probe for membrane lipid domains. Biophys J. 2006;90(7):2563–75. https://doi.org/10.1529/biophysj.105.072884.
Kargiotidou A, Deli D, Galanopoulou D, et al. Low temperature and light regulate delta 12 fatty acid desaturases (FAD2) at a transcriptional level in cotton (Gossypium hirsutum). J Exp Bot. 2008;59(8):2043–56. https://doi.org/10.1093/jxb/ern065.
Kim HJ, Triplett BA. Cotton fiber growth in planta and in vitro. Models for plant cell elongation and cell wall biogenesis. Plant Physiol. 2001;127(4):1361–6. https://doi.org/10.1104/pp.010724.
Klymchenko AS, Kreder R. Fluorescent probes for lipid rafts: from model membranes to living cells. Chem Biol. 2014;21(1):97–113. https://doi.org/10.1016/j.chembiol.2013.11.009.
Liu K, Sun J, Yao L, Yuan Y. Transcriptome analysis reveals critical genes and key pathways for early cotton fiber elongation in Ligon lintless-1 mutant. Genomics. 2012;100(1):42–50. https://doi.org/10.1016/j.ygeno.2012.04.007.
Liu P, Li RL, Zhang L, et al. Lipid microdomain polarization is required for NADPH oxidase-dependent ROS signaling in Picea meyeri pollen tube tip growth. Plant J. 2009;60(2):303–13. https://doi.org/10.1111/j.1365-313X.2009.03955.x.
Maccioni HJF, Giraudo CG, Daniotti JL. Understanding the stepwise synthesis of glycolipids. Neurochem Res. 2002;27(7–8):629–36. https://doi.org/10.1023/a:1020271932760.
Meder D, Moreno MJ, Verkade P, et al. Phase coexistence and connectivity in the apical membrane of polarized epithelial cells. Proc Natl Acad Sci USA. 2006;103(2):329–34. https://doi.org/10.1073/pnas.0509885103.
Mongrand S, Morel J, Laroche J, et al. Lipid rafts in higher plant cells purification and characterization of triton X-100-insoluble microdomains from tobacco plasma membrane. J Biol Chem. 2004;279:36277–86. https://doi.org/10.1074/jbc.M403440200.
Niu Q, Tan K, Zang Z, et al. Modification of phytosterol composition influences cotton fiber cell elongation and secondary cell wall deposition. BMC Plant Biol. 2019;19(1):208. https://doi.org/10.1186/s12870-019-1830-y.
Owen DM, Gaus K. Optimized time-gated generalized polarization imaging of Laurdan and di-4-ANEPPDHQ for membrane order image contrast enhancement. Microsc Res Tech. 2010;73(6):618–22. https://doi.org/10.1002/jemt.20801.
Owen DM, Lanigan PMP, Dunsby C, et al. Fluorescence lifetime imaging provides enhanced contrast when imaging the phase-sensitive dye di-4-ANEPPDHQ in model membranes and live cells. Biophys J. 2006;90(11):L80–2. https://doi.org/10.1529/biophysj.106.084673.
Owen DM, Rentero C, Magenau A, et al. Quantitative imaging of membrane lipid order in cells and organisms. Nat Protoc. 2012;7(1):24–35. https://doi.org/10.1038/nprot.2011.419.
Qin YM, Hu CY, Pang Y, et al. Saturated very-long-chain fatty acids promote cotton fiber and Arabidopsis cell elongation by activating ethylene biosynthesis. Plant Cell. 2007;19(11):3692–704. https://doi.org/10.1105/tpc.107.054437.
Qin YM, Zhu YX. How cotton fibers elongate: a tale of linear cell-growth mode. Curr Opin Plant Biol. 2011;14(1):106–11. https://doi.org/10.1016/j.pbi.2010.09.010.
Roche Y, Gerbeau-Pissot P, Buhot B, et al. Depletion of phytosterols from the plant plasma membrane provides evidence for disruption of lipid rafts. FASEB J. 2008;22(11):3980–91. https://doi.org/10.1096/fj.08-111070.
Roudier F, Gissot L, Beaudoin F, et al. Very-long-chain fatty acids are involved in polar auxin transport and developmental patterning in Arabidopsis. Plant Cell. 2010;22(2):364–75. https://doi.org/10.1105/tpc.109.071209.
Ruan YL, Xu SM, White R, Furbank RT. Genotypic and developmental evidence for the role of plasmodesmatal regulation in cotton fiber elongation mediated by callose turnover. Plant Physiol. 2004;136(4):4104–13. https://doi.org/10.1104/pp.104.051540.
Shevchenko A, Simons K. Lipidomics: coming to grips with lipid diversity. Nat Rev Mol Cell Biol. 2010;11:593–8. https://doi.org/10.1038/nrm2934.
Shi YH, Zhu SW, Mao XZ, et al. Transcriptome profiling, molecular biological, and physiological studies reveal a major role for ethylene in cotton fiber cell elongation. Plant Cell. 2006;18(3):651–64. https://doi.org/10.1105/tpc.105.040303.
Singh B, Avci U, Inwood SEE, et al. A specialized outer layer of the primary cell wall joins elongating cotton fibers into tissue-like bundles. Plant Physiology. 2009a;150(2):684–99. https://doi.org/10.1104/pp.109.135459.
Singh B, Cheek HD, Haigler CH. A synthetic auxin (NAA) suppresses secondary wall cellulose synthesis and enhances elongation in cultured cotton fiber. Plant Cell Rep. 2009b;28(7):1023–32. https://doi.org/10.1007/s00299-009-0714-2.
Sorek N, Poraty L, Sternberg H, et al. Activation status-coupled transient S acylation determines membrane partitioning of a plant rho-related GTPase (retracted article. See vol 37, Artn e00321-17, 2017). Mol Cell Biol. 2007;27(6):2144–54. https://doi.org/10.1128/MCB.02347-06.
Wanjie SW, Welti R, Moreau RA, Chapman KD. Identification and quantification of glycerolipids in cotton fibers: reconciliation with metabolic pathway predictions from DNA databases. Lipids. 2005;40(8):773–85. https://doi.org/10.1007/s11745-005-1439-4.
Yetukuri L, Ekroos K, Vidal-Puig A, Oresic M. Informatics and computational strategies for the study of lipids. Mol BioSyst. 2008;4:121–7.
Yu RK, Bieberich E, Xia T, Zeng GC. Regulation of ganglioside biosynthesis in the nervous system. J Lipid Res. 2004;45(5):783–93. https://doi.org/10.1194/jlr.R300020-JLR200.
Zhao X, Li R, Lu C, et al. Di-4-ANEPPDHQ, a fluorescent probe for the visualisation of membrane microdomains in living Arabidopsis thaliana cells. Plant Physiol Biochem. 2015;87:53–60. https://doi.org/10.1016/j.plaphy.2014.12.015.
Zhu YQ, Xu KX, Luo B, et al. An ATP-binding cassette transporter GhWBC1 from elongating cotton fibers. Plant Physiol. 2003;133(2):580–8. https://doi.org/10.1104/pp.103.027052.
We are grateful to Professor MA Zhiying (Hebei Agricultural University) for kindly providing the Jimian 14 seeds. We thank the Institute of Cotton Research, Chinese Academy of Agricultural Sciences for providing the Li-1 mutant seeds.
This work was financially supported by the National Natural Science Foundation of China (31571722 and 31971984), the Funds for Creative Research Groups of China (31621005), and the Genetically Modified Organisms Breeding Major Project of China (No. 2018ZX0800921B). The funding bodies did not play any role in the design of the study and collection, analysis, and interpretation of data or in writing the manuscript.
Xu F and Suo XD contributed equally to this work.
Key Laboratory of Biotechnology and Crop Quality Improvement, Ministry of Agriculture/Biotechnology Research Center, Southwest University, Chongqing, 400716, China
XU Fan, SUO Xiaodong, LI Fang, BAO Chaoya, HE Shengyang, HUANG Li & LUO Ming
SX and XF performed most of the experiments. LF, BC, HS, and HL performed some of the experiments. LM designed the experiments. XF and LM analyzed the data and wrote the manuscript. The author(s) read and approved the final manuscript.
Correspondence to LUO Ming.
Keywords: Cotton fiber, Lipid raft, Di-4-ANEPPDHQ
Predicting disease-related genes using integrated biomedical networks
Volume 18 Supplement 1
Proceedings of the 27th International Conference on Genome Informatics: genomics
Jiajie Peng, Kun Bai, Xuequn Shang, Guohua Wang, Hansheng Xue, Shuilin Jin, Liang Cheng, Yadong Wang & Jin Chen
BMC Genomics volume 18, Article number: 1043 (2017)
Identifying the genes associated with human diseases is crucial for disease diagnosis and drug design. Computational approaches, especially network-based approaches, have recently been developed to identify disease-related genes effectively from existing biomedical networks. Meanwhile, advances in biotechnology enable researchers to produce multi-omics data, enriching our understanding of human diseases and revealing the complex relationships between genes and diseases. However, none of the existing computational approaches is able to integrate the huge amount of omics data into a weighted integrated network and utilize it to enhance disease-related gene discovery.
We propose a new network-based disease gene prediction method called SLN-SRW (Simplified Laplacian Normalization-Supervised Random Walk) to generate and model the edge weights of a new biomedical network that integrates biomedical data from heterogeneous sources, thereby enhancing disease-related gene discovery.
The experimental results show that SLN-SRW significantly improves the performance of disease gene prediction on both real and synthetic data sets.
One crucial step toward understanding the molecular basis of diseases, such as cancer, diabetes, and cardiovascular disorders, is to identify their predisposing or virulence genes, which will lead to early disease diagnosis and effective drug design [1]. With the availability of big biomedical data, researchers tend to gain insights into human diseases by identifying genes that might cause or relate to them. Given that experimentally identifying the complete list of disease-related genes is generally impractical due to the high cost, computational methods have been proposed over the last decades to predict the relationships between genes and human diseases [2–10]. Among these tools, which include filtering methods based on sets of criteria [11], text mining of biomedical literature [12], integration of genomic data [13–15], and semantic-similarity-based disease gene prioritization [16–22], the highly robust network-analysis-based approaches [8, 23–26] remain pre-eminent [27].
A human cell consists of several functionally inter-dependent molecular components. A human disease rarely results from an abnormality in a single gene but reflects the perturbations of the complex molecular network induced by different kinds of factors, such as genetic variations, pathogens and epigenetic changes [28]. The molecular network links molecular states to physiological states associated with diseases in a whole system view [29]. Therefore, network-based approaches may offer better targets for drug development, and may lead to multiple potential biological and clinical applications including disease gene discovery [28].
The network-based approaches for disease gene identification can be loosely grouped into three categories. The simplest approach, direct neighbor counting, checks whether two genes are connected directly in a molecular network: if a gene is connected to one of the known disease genes, it may be associated with the same disease [30]. Experimental results demonstrate that using molecular networks can effectively increase the likelihood of identifying candidate disease genes. The direct neighbor counting method, however, does not consider the situation in which two genes are not connected directly but still have certain biological associations. To address this problem, Krauthammer et al. employed the shortest path length to measure the closeness between a disease gene and a candidate gene. This method has been successfully applied to predict the genes associated with Alzheimer's disease, and the predictions agree with the manually curated candidates [31]. Since both the direct neighbor counting method and the shortest path method are local distance measurements, they largely ignore the global structure of the whole molecular network and cannot fully capture the complex relationships between network nodes [32]. Subsequently, methods have been proposed to predict gene-disease relations using the global network structure, such as random walk with restart (RWR) [33], propagation flow [34], Markov clustering [35], and graph partitioning [36]. Performance evaluation on the HPRD [37], OPHID [38], and OMIM [39] datasets showed that RWR was the best among the then-existing methods [5].
Rapidly evolving biotechnologies promote the collection of multiple types of biological data, including diverse genome-scale data, clinical phenotype data, environment data, and data on daily activities [40], making it feasible and attractive to build integrated biomedical networks from multiple sources rather than focus on a single data set. An integrated network that includes multiple heterogeneous types of resources greatly extends the scope and ability of disease gene prediction [41]. For example, BioGraph [42] uses data from 21 publicly available curated databases to identify relations between heterogeneous biomedical entities. The work by Ganegoda et al. runs RWR on an integrated network and has successfully identified disease-related genes with significantly improved performance [43].
Using integrated networks for gene-disease relationship discovery is still a difficult task due to the existence of multiple biomedical entities in the integrated networks. In a network built from a single type of biomedical data, there is only one type of node and one type of edge. For example, in a protein-protein interaction network, nodes and edges represent proteins and protein interactions, respectively. The integrated network, on the contrary, contains multiple types of nodes and edges representing different biomedical entities (such as genes, diseases, and ontology terms) and relationships (such as DNA-protein binding and gene ontology annotation). In order to differentiate these edge types, edge weights in the integrated biomedical network should be appropriately assigned [44].
In this article, we present a new algorithm called SLN-SRW (Simplified Laplacian Normalization-Supervised Random Walk) to define edge weights in an integrated network and use the weighted network to predict gene-disease relationships. Compared with the existing approaches, SLN-SRW has the following advantages:
SLN-SRW is the first approach, to the best of our knowledge, to predict gene-disease relationships based on a weighted integrated network with its edge weight being computed to precisely describe the importance of different edge types.
The performance of random walk may be strongly affected by the super hub nodes in an integrated network. SLN-SRW adopts a Laplacian normalization based method to avoid such bias.
To prepare inputs for SLN-SRW, we constructed a new heterogeneous integrated network based on three widely used biomedical ontologies, i.e. the Human Phenotype Ontology [45], Disease Ontology [46], and Gene Ontology [47, 48], and biological databases such as STRING [49]. This integrated network combines biomedical knowledge from manually curated ontologies with big biomedical data deposited in databases. Based on these two distinctively different types of information, the network forms a foundation for disease gene discovery.
We propose SLN-SRW to compute and model the edge weights of an integrated network and then predict disease genes. SLN-SRW consists of three steps. First, it integrates knowledge and data from multiple ontologies and databases to construct an integrated network G(V,E), where V is a set of nodes and E is a set of edges connecting the nodes in V. Second, it uses a Laplacian normalization based supervised random walk algorithm to learn the edge weights of network G, resulting in a weighted integrated network $G_w$. Third, it employs the RWR method on $G_w$ to predict disease-gene relationships. A diagram of the whole process of SLN-SRW is shown in Fig. 1. We introduce the key steps of SLN-SRW in the rest of this section.
The framework of SLN-SRW. Framework of SLN-SRW for estimating the edge weights of the integrated network automatically and predicting disease genes based on it. The second step is the essential part of the SLN-SRW algorithm
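As a rough illustration of the third step, the sketch below implements plain RWR on a toy network; the adjacency matrix and restart probability are hypothetical, and in SLN-SRW the transition matrix would instead be derived from the learned edge weights of $G_w$.

```python
import numpy as np

# Random walk with restart (RWR): iterate p <- (1-r) W p + r p0 to convergence.
def rwr(W, seed, r=0.3, tol=1e-8, max_iter=1000):
    p0 = np.zeros(W.shape[0])
    p0[seed] = 1.0                     # restart at the disease node
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1 - r) * W @ p + r * p0
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p                           # visiting probabilities = association scores

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=0)                  # column-normalized transition matrix
scores = rwr(W, seed=0)
```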
Step 1. Integrating heterogeneous knowledge and data sources for integrated network construction
In the first step of SLN-SRW, an integrated network is constructed from eleven heterogeneous data sources in four distinct forms: ontologies, networks, unified vocabularies, and relational databases. The data sources, listed in Table 1, are mainly used for relation extraction, name mapping, and unified vocabulary. They can be grouped into two categories: 1) curated data collected from the literature and other high-quality data sources, such as the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) and Online Mendelian Inheritance in Man (OMIM), and 2) curated ontologies constructed manually by domain experts, such as the Gene Ontology (GO) and Disease Ontology (DO).
Table 1 Integrated databases and ontologies. The first, second, and third columns give the abbreviation of the data source, a brief description of the data source, and the relationships extracted from it, respectively. Eleven data sources are used to construct the integrated network. Specific types of nodes and edges are extracted from the various data sources and integrated into a network
The workflow for constructing the integrated network out of the heterogeneous data sources is shown in Fig. 2. Specifically, the network construction process has the following four steps:
Extracting information from heterogeneous data sources. An ontology parser and a database parser were developed for ontology and database data extraction, respectively. The ontology parser processes the OBO file and the ontology annotation file, since HPO, DO, and GO are all in Open Biomedical Ontologies (OBO) format. The database parser processes files in Tab-Separated Values (TSV), Comma-Separated Values (CSV), and Extensible Markup Language (XML) formats. The outputs of the two parsers are pair-wise relations between two biomedical entities, together with their properties.
The workflow of constructing the integrated network based on multiple data sources
Unifying biomedical entity IDs. The same pair-wise relation may be extracted from multiple data sources with different identifiers. To avoid confusion, we assign a distinct ID number to each biomedical entity by mapping all types of identifiers to the ones provided in the Unified Medical Language System (UMLS). The challenge is that some types of identifiers cannot be mapped directly to UMLS. For example, only a part (61%) of HPO and DO terms can be mapped to UMLS. Therefore, we adopted ClinVar [50] to map all the HPO terms to UMLS, and utilized SIDD [51] to map all the disease names in DO to MeSH IDs, given that there are direct mappings between MeSH IDs and UMLS. Please see Additional file 1 for more details. After unifying the entity IDs from multiple data sources, each entity has exactly one identifier in the database. We removed the identifiers that could not be mapped to UMLS.
Constructing the integrated network. The binary relations extracted from multiple data sources form an integrated network G, in which nodes are biomedical entities (i.e. ontology terms and genes), and edges are the relationships between the entities, which have seven different types: GO term - GO term, GO term - gene, DO term - DO term, DO term - gene, HPO term - HPO term, HPO term - gene, and gene - gene.
Edge initial weight assignment. We assign an initial edge weight t(u,v) to every edge <u,v> according to its edge type and the evidence code associated with the edge, where both u and v are nodes in G. Specifically, for the edge types that have confidence scores in the source databases, we use the confidence scores directly. For the edge types that do not have confidence scores but are associated with evidence codes, we manually assign initial edge weights based on the evidence codes (see Additional file 2 for the manually assigned weights). The initial edge weights lie between 0 and 1, and experimentally verified edges have higher initial weights than computational predictions. For example, an edge between a GO term and a gene with evidence code "EXP" has a high weight (1.0), whereas an edge with the "IEA" code has a low weight (0.4), since "EXP" indicates that the GO-gene relationship has been experimentally verified while "IEA" means computational prediction. Note that for edges that have two or more evidence codes in E, the initial weight is the maximum weight over all valid evidence codes.
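A minimal sketch of this assignment rule follows; the EXP and IEA weights come from the text, while the TAS entry is a hypothetical placeholder (the full table is in Additional file 2).

```python
# Map evidence codes to initial edge weights and take the maximum over all
# codes attached to an edge.
EVIDENCE_WEIGHT = {
    "EXP": 1.0,  # experimentally verified (weight given in the text)
    "IEA": 0.4,  # computational prediction (weight given in the text)
    "TAS": 0.9,  # hypothetical value for illustration
}

def initial_edge_weight(evidence_codes):
    """Return the maximal weight over the edge's valid evidence codes."""
    weights = [EVIDENCE_WEIGHT[c] for c in evidence_codes if c in EVIDENCE_WEIGHT]
    return max(weights) if weights else 0.0

print(initial_edge_weight(["IEA", "EXP"]))  # -> 1.0
```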
Step 2. Weighing the importance of different types of edges in integrated network
Given an integrated network G with manually assigned initial edge weights, the aim of this step is to automatically re-assign all the edge weights, such that the weighted network $G_w$ can be used for more precise disease gene prediction. To achieve this goal, we develop a new edge weight optimization algorithm based on supervised random walks (SRW) [52]. SRW combines information from the network structure with node- and edge-level attributes, which can guide the random walk on the network. By running SRW, we expect to re-assign weights to all the edges, such that a random walker starting from a disease node is more likely to visit the nodes representing the associated genes. However, the training process of supervised random walks (i.e., RWR) can be significantly affected by the hub nodes in the network. To solve this problem, we propose a Laplacian normalization method to weigh the importance of different types of edges in an integrated network, described as follows.
Given an integrated network G(V,E), let node v d ∈V represent a disease and let V g ⊂V be the set of candidate genes of v d . The disease gene prediction problem can then be converted into the problem of predicting all the new edges between v d and a subset of nodes in V g , where a critical step is to weigh the edges such that a random walker starting from v d has a higher probability of reaching the known disease genes in V g than the other genes. To provide the training set for learning the edge weights, we generate a positive set V p and a negative set V n for every disease node v d , where V p includes known disease genes associated with v d and V n includes genes not associated with v d .
The approach to weigh the importance of different edge types consists of the following three steps:
Laplacian normalization on edge weights. To avoid the biases caused by the hub nodes in the integrated network, we adopt the Laplacian normalization method [53] to normalize all the edge weights. Given an edge (u,v)∈E, its weight is normalized by the weights of all edges connecting to node u or node v. Mathematically, the Laplacian-normalized edge weight a(u,v) is defined as:
$$ a(u,v) = \frac{f(u,v)}{\sqrt{\sum_{i \in N(u)}f(u,i)\sum_{j \in N(v)} f(v,j)}} $$
where N(x) is the set of neighbors of node x; f(x,y) = 1/(1 + e^{-w·t(x,y)}); w is the edge-type importance vector for graph G that we will learn in the next step through an optimization process, and its length equals the number of possible edge types (in our case, seven); t(x,y) is the initial-weight vector of edge <x,y>, which has the same length as w. t(x,y) is all zeros except for one entry, because each edge has one and only one edge type. Note that the edge type is determined by the types of the nodes it connects. For example, gene - gene and HPO term - gene are two different types of edges in the integrated network. a(u,v) integrates and normalizes both the edge-type importance w and the initial edge weight t; it can be used to model the random walk transition probability.
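As a minimal sketch of this normalization (an illustration, not the authors' implementation), the computation can be written as a matrix operation when the graph is undirected. Here WT[u,v] stands for the precomputed dot product w·t(u,v) and adj is a 0/1 adjacency matrix; both inputs are hypothetical.

```r
# Laplacian normalization of edge weights, assuming an undirected graph:
# a(u,v) = f(u,v) / sqrt(sum_i f(u,i) * sum_j f(v,j))
laplacian_normalize <- function(WT, adj) {
  F <- adj * (1 / (1 + exp(-WT)))  # f(u,v) = 1/(1 + e^{-w.t(u,v)}) on edges only
  deg <- rowSums(F)                # sum of f over each node's neighbors
  F / sqrt(outer(deg, deg))        # divide each f(u,v) by sqrt of the two sums
}

adj <- rbind(c(0, 1, 1), c(1, 0, 0), c(1, 0, 0))  # toy 3-node graph
WT  <- adj * 0.5                                  # pretend w.t(u,v) = 0.5 on every edge
laplacian_normalize(WT, adj)
```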
Edge weight optimization - problem formulation. In order to learn the optimal w for all seven edge types in an integrated network, we minimize the objective function defined in Eq. 2, such that the random walker in the network is more likely to reach the genes in V p than the genes in V n .
$$ w = \arg\min_{w} O(w) = \arg\min_{w}\left(\frac{1}{2}\|w\|^{2} + \lambda\sum\limits_{v_{d} \in D}\sum\limits_{v_{p}\in V_{p}, v_{n} \in V_{n}} h\left(S_{v_{n}}-S_{v_{p}}\right)\right) $$
where ||w|| is the Euclidean norm, and D is the set of starting nodes representing the diseases in the training set. For each disease node v d ∈D, V p and V n represent the positive training set and the negative training set, respectively. \(S_{v_{p}}\) (\(S_{v_{n}}\)) is the association value between v d and v p ∈V p (v d and v n ∈V n ), which can be calculated by running RWR on G [54]. λ is a penalty weight that decides to what extent the constraints can be violated. Given the value of \(S_{v_{n}}-S_{v_{p}}\), h() is a loss function that returns a non-negative value:
$$ h(x)= \begin{cases} 0, & x < 0 \\ \frac{1}{1+ e^{-x/b}}, & x \geq 0 \end{cases} $$
where b is a constant positive parameter and \(x=S_{v_{n}}-S_{v_{p}}\). The smaller b is, the more sensitive the loss function is (see Additional file 3). If \(S_{v_{n}}-S_{v_{p}}<0\), the association between a disease and a gene in the positive training set is stronger than the association between the same disease and a gene in the negative training set, so h()=0. Otherwise, the constraint is violated, so h()>0.
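In R, the loss can be written in one line (an illustrative sketch; b = 0.03 is the value used later in the synthetic experiments):

```r
# Loss function h(x): zero when the constraint holds (x < 0),
# a sigmoid penalty scaled by b otherwise.
h <- function(x, b = 0.03) ifelse(x < 0, 0, 1 / (1 + exp(-x / b)))

h(-0.1)   # 0: positive-set gene ranked above negative-set gene
h(0.05)   # close to 1: constraint violated, penalty applied
```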
Edge weight optimization - our solution. To optimize the edge-type importance parameter w and minimize Eq. 2, we adopt a widely used gradient-based optimization method [20], which has been successfully applied to link prediction problems in social networks and collaboration networks [52].
For completeness, we briefly describe the gradient-based optimization method below.
First, we construct the stochastic transition matrix Q′_{uv} of the RWR using Eq. 1.
$$ Q_{uv}'= \begin{cases} \frac{a(u,v)}{\sum_{w}a(u,w)}, & \text{if } (u,v) \in E \\ 0, & \text{otherwise} \end{cases} $$
Then, based on the transition matrix Q′_{uv}, the RWR can be described as:
$$ Q_{uv} = (1-\alpha)Q_{uv}' + \alpha\mathbf{1}(v=s) $$
where u and v represent two arbitrary nodes in G; α is the restart probability, a user-specified parameter (in our case, we find the best value based on the training data set); and node s is a disease node, which is the starting node of the random walk. Let \(p_{i}^{(k)}\) be the probability of reaching node i from s after k iterations. The probability vector at the kth iteration can be represented as \(P^{(k)} = (p_{1}^{(k)}, p_{2}^{(k)},..., p_{|V|}^{(k)})^{T}\). The stationary probability vector P, which can be obtained after a certain number of iterations, is the solution of the following equation:
$$ P^{T} = P^{T}Q $$
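As an illustrative sketch (in R, whereas the paper's implementation is in Java), the stationary vector P can be obtained by power iteration. Here A is a matrix of the normalized edge weights a(u,v), s is the index of the disease start node, and the default restart probability 0.2 is the value that later performs best for SLN-SRW.

```r
# Random walk with restart by power iteration: solves P^T = P^T * Q with
# Q = (1 - alpha) * Q' + alpha * 1(v = s). Assumes every node has an edge.
rwr <- function(A, s, alpha = 0.2, tol = 1e-10, max_iter = 10000) {
  Qp <- A / rowSums(A)              # row-normalize a(u,v) into Q'
  n  <- nrow(A)
  p  <- rep(1 / n, n)               # uniform starting probabilities
  e  <- replace(numeric(n), s, 1)   # restart vector: all mass on node s
  for (k in seq_len(max_iter)) {
    p_new <- (1 - alpha) * as.vector(t(Qp) %*% p) + alpha * e
    if (max(abs(p_new - p)) < tol) return(p_new)
    p <- p_new
  }
  p                                 # stationary probabilities S_v
}
```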
The next step is to apply a gradient-based method to identify the w that minimizes O(w) in Eq. 2. The derivative of O(w) can be calculated as follows.
$$ \begin{aligned} \frac{\partial O(w)}{\partial w} &= w + \lambda\sum\limits_{v_{n},v_{p}}\frac{\partial h(S_{v_{n}}-S_{v_{p}})}{\partial w} \\&= w + \lambda\sum\limits_{v_{n},v_{p}}\frac{\partial h(S_{v_{n}}-S_{v_{p}})}{\partial (S_{v_{n}}-S_{v_{p}})}\left(\frac{\partial S_{v_{n}}}{\partial w} - \frac{\partial S_{v_{p}}}{\partial w}\right) \end{aligned} $$
\(\frac {\partial S_{v_{x}}}{\partial w}\) can be calculated as follows:
$$ \frac{\partial S_{v_{x}}}{\partial w} = \sum\limits_{v_{i}}\left(Q_{v_{i}v_{x}}\frac{\partial S_{v_{i}}}{\partial w}+S_{v_{i}}\frac{\partial Q_{v_{i}v_{x}}}{\partial w}\right) $$
This derivative can be computed repeatedly, based on the estimate obtained in the previous iteration. The iteration stops when \(\frac {\partial S_{v_{i}}}{\partial w}\) and \(S_{v_{i}}\) no longer change. The initial value of \(\frac {\partial S_{v_{i}}}{\partial w}\) is 0, and \(S_{v_{i}}\) is initialized to \(\frac {1}{|V|}\). This initialization is the same as in the traditional SRW method. \(\frac {\partial Q_{v_{i}v_{x}}}{\partial w}\) can be calculated as follows.
In particular, \(\frac {\partial Q_{v_{i}v_{x}}}{\partial w} = 0\) if edge (v i ,v x ) does not exist in the network.
$$ \begin{aligned} \frac{\partial Q_{v_{i}v_{x}}}{\partial w} = (1-\alpha)\frac{\frac{\partial a(v_{i},v_{x})}{\partial w}\left(\sum_{v_{j}}a(v_{i},v_{j})\right) - a(v_{i},v_{x})\sum_{v_{j}}\frac{\partial a\left(v_{i},v_{j}\right)}{\partial w}} {\left(\sum_{v_{j}}a(v_{i},v_{j})\right)^{2}} \end{aligned} $$
$$ \frac{\partial a(v_{i},v_{x})}{\partial w} = \frac{\frac{\partial f(v_{i},v_{x})}{\partial w}\pi(f(v_{i},v_{x})) - f(v_{i},v_{x})\frac{\partial \pi(f(v_{i},v_{x}))}{\partial w}} {\pi(f(v_{i},v_{x}))^{2}} $$
where π(f(v i ,v x )) and \(\frac {\partial \pi (f(v_{i},v_{x}))}{\partial w}\) are:
$$ \pi(f(v_{i},v_{x})) = \sqrt{\sum_{v_{j} \in N(v_{i})}f(v_{i},v_{j})\sum_{v_{y} \in N(v_{x})}f(v_{x},v_{y})} $$
$$ \begin{aligned} \frac{\partial \pi(f(v_{i},v_{x}))}{\partial w} \,=\, \frac{\sum_{v_{j}\in N(v_{i})}\sum_{v_{y} \in N(v_{x})}\left(\frac{\partial f(v_{j},v_{i})}{\partial w}f\left(v_{y},v_{x}\right) \,+\, \frac{\partial f(v_{y},v_{x})}{\partial w}f(v_{j},v_{i})\right)} {2\sqrt{\sum_{v_{j} \in N(v_{i})}f(v_{j},v_{i})\sum_{v_{y}\in N(v_{x})}f(v_{y},v_{x})}} \end{aligned} $$
where N(v) is the set of neighbors of node v. After obtaining the solution of Eq. 7, we can apply a gradient descent based method to minimize O(w).
Practically, the process of obtaining w has four steps (Fig. 3). First, we initialize O(w) based on the initial parameters. Second, the derivative \(\frac {\partial O(w)}{\partial w}\) is calculated; power iteration is applied to compute \(\frac {\partial S_{v_{i}}}{\partial w}\) and \(\frac {\partial Q_{v_{i}v_{x}}}{\partial w}\). Third, based on the derivative, we update the gradient to obtain an updated parameter w. Fourth, taking the updated w as input, we calculate the stationary probability of the RWR. In this process, the iteration for the derivative calculation (step 2 in Fig. 3) and the RWR algorithm (step 4 in Fig. 3) are the two key steps. After estimating the edge weights of the integrated network, we can directly apply RWR on the weighted integrated network to predict the relations between diseases and genes.
The process of training the parameter w
We compare SLN-SRW with SRW and RWR, the latter of which has been widely used in network-based disease gene prediction, on a real and a synthetic data set. SLN-SRW was implemented with Java 7 in Linux.
As shown in Table 1, eleven data sources, i.e. STRING [49], CTD [55], OMIM [56], ClinVar [50], HGNC [57], MeSH [58], UMLS [59], SIDD [51], DO [60], HPO [61] and GO [62], are used to construct the integrated network G, which has 78,786 nodes and 504,517 edges.
To test the performance of SLN-SRW, we select 430 disease-gene edges from the integrated network as the positive set. The rules for data selection are similar to the rules used in [42]. In the positive set, there are 16 diseases, each of which has at least five known disease-associated genes in the integrated network. More details about the positive set are listed in Additional file 4. The disease-gene pairs included in the negative set are generated in two steps. First, we select a disease d from the positive set. Second, we repeatedly and randomly select genes that do not connect to d in the integrated network G. The number of randomly selected genes is the same as the number of genes that connect to d in the positive set. We repeat the process until all disease nodes in the positive set are covered. Note that the positive set is removed from the integrated network in the testing process. Both positive and negative sets are randomly and evenly divided into two parts, one for training and the other for testing.
A synthetic data set is generated following the rules in [52]. Specifically, we generated a scale-free network with 1,000 nodes using the copying model [63]. The generation process starts with three connected nodes. We connect each new node u to one of the existing nodes, selected uniformly at random with probability 0.8 or with probability proportional to node degree otherwise. Parameter b is equal to 0.03 in all the experiments. For each edge in the network, we set the gold-standard edge-type parameter w′ ∈ {1, −1}. Then, we randomly choose one of the original three nodes as the start point v. Based on the edge strengths determined by w′, we run RWR starting from v and rank the other nodes by their stationary probabilities. We select the top 20 nodes that directly connect with v as the positive training set, and the nodes that do not connect with v as the negative set. Note that both the positive set and the negative set are removed from the integrated network in the testing process. In the subsection "Performance evaluation on synthetic data set", we test whether w′ can be estimated precisely.
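A toy R sketch of this growth process follows (our illustration of the copying-model idea, not the exact generator used in [52] or [63]):

```r
# Grow a scale-free network: start from three connected nodes; each new node
# attaches to an existing node chosen uniformly at random with probability
# 0.8, otherwise with probability proportional to node degree.
copying_model <- function(n, p_uniform = 0.8) {
  edges <- rbind(c(1, 2), c(2, 3), c(1, 3))  # seed: three connected nodes
  deg <- c(2, 2, 2)
  for (u in 4:n) {
    if (runif(1) < p_uniform) {
      target <- sample(u - 1, 1)              # uniform choice of existing node
    } else {
      target <- sample(u - 1, 1, prob = deg)  # preferential attachment
    }
    edges <- rbind(edges, c(u, target))
    deg <- c(deg, 1)
    deg[target] <- deg[target] + 1
  }
  edges                                       # two-column edge list
}

net <- copying_model(1000)
```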
Disease gene prediction
The parameters of the SLN-SRW and SRW methods are estimated based on the training set. The RWR method does not need the training set for edge weight assignment; instead, the training set is used to estimate the best restart probability for RWR. Finally, the performance of all three methods is evaluated on the testing set.
Varying the restart probability α from 0.1 to 0.9, the AUC (area under the receiver operating characteristic curve) scores [64] of all three methods are shown in Fig. 4. At α=0.2, the SLN-SRW method reaches its highest AUC score (0.81), whereas SRW and RWR reach their highest AUC scores at α=0.6, indicating that SLN-SRW can find disease genes that are far from the disease node. Based on the edge weights learned using the training data, we predicted the disease-gene relationships in the testing set. We compared the performance of all three methods using the receiver operating characteristic (ROC) curve. In our test, the AUC score of SLN-SRW (0.79) is the highest (see Fig. 5). In particular, the true positive rate of SLN-SRW is significantly higher than that of RWR and SRW while its false positive rate remains low. This is important for disease gene prediction, since researchers usually select candidate disease genes with a stringent threshold, which corresponds to a low false positive rate.
The AUC score for each given restart probability for the three methods. The red, blue and yellow lines represent the SLN-SRW, SRW and RWR methods, respectively
ROC curves for the experimental results on the testing set, calculated with SLN-SRW (green), SRW (red) and RWR (blue)
Finally, we ranked the predicted disease genes to check whether the true disease-related genes rank higher than the other genes. Figure 6 shows that the prediction results of SLN-SRW contain more known disease-related genes than those of SRW and RWR at a majority of the top k levels, indicating that the edge weighing process in SLN-SRW contributes significantly to the high recall of the results.
True disease-gene pair rates at different top k levels
Performance evaluation on synthetic data set
To compare SLN-SRW with SRW, we ran both methods on synthetic data, following the method described in [52]. For SRW and SLN-SRW, we estimated the edge-type parameter based on the synthetic network and the training set described in the "Data preparation" subsection, resulting in w∗. We measure the performance of SRW and SLN-SRW by comparing the true edge-type parameter w′ with w∗, using \(error = \sum _{i}|w'_{i} - w^{*}_{i}|\). After repeating the experiment 100 times, we find that the error of SLN-SRW is statistically significantly lower than that of SRW (t-test p-value < 0.05), indicating that SLN-SRW performs better than SRW (see Fig. 7). The error of SLN-SRW is also lower at the first and third quartiles.
The boxplot of the error score for SLN-SRW and SRW
Identifying the relationships between diseases and genes is vital for disease diagnosis and drug design. Recently, researchers have started to employ integrated biomedical networks to extend the scope and power of disease gene prediction. In this article, we proposed a novel network-based method named SLN-SRW to define the weights of edges in an integrated network and then use it to predict gene-disease relationships. SLN-SRW has the following advantages: 1) it can estimate edge weights while differentiating between edge types; 2) it adopts a Laplacian normalization based method to avoid the bias caused by the super hub nodes in an integrated network; 3) three widely used biomedical ontologies are used to construct a new heterogeneous integrated network. To demonstrate the advantages of SLN-SRW, we compared it with two existing methods, SRW and RWR. The experiment on a real data set shows that SLN-SRW performs the best of the three methods. Furthermore, the experiment on a synthetic data set indicates that the edge weights predicted by SLN-SRW are more precise than those predicted by SRW. Compared with the existing methods, SLN-SRW is uniquely able to identify disease genes that are not close to any disease node in the disease-gene networks. This could benefit clinicians in discovering new disease-associated genes that have not been identified by the existing methods. In addition, SLN-SRW provides a novel approach to automatically assign weights to the heterogeneous edge types in disease-gene networks, whereas the existing methods can only define the edge weights manually.
In the future, SLN-SRW will be applied to networks with different edge densities and qualities to test its robustness. Furthermore, we will apply SLN-SRW on more recent datasets and examine the results using both biological experiments and literature.
Wang X, Gulbahce N, Yu H.Network-based methods for human disease gene prediction. Brief Funct Genomics. 2011; 10(5):280–93.
Ala U, Piro RM, Grassi E, Damasco C, Silengo L, Oti M, Provero P, Di Cunto F. Prediction of human disease genes by human-mouse conserved coexpression analysis. PLoS Comput Biol. 2008; 4(3):e1000043.
Kann MG. Advances in translational bioinformatics: computational approaches for the hunting of disease genes. Brief Bioinformatics. 2010; 11(1):96–110.
Jiang Q, Wang J, Wu X, Ma R, Zhang T, Jin S, Han Z, Tan R, Peng J, Liu G. LncRNA2Target: a database for differentially expressed genes after lncRNA knockdown or overexpression. Nucleic Acids Res. 2015; 43(Database issue):193–6.
Navlakha S, Kingsford C. The power of protein interaction networks for associating genes with diseases. Bioinformatics. 2010; 26(8):1057–63.
Jiang Q, Wang G, Zhang T, Wang Y. Predicting human microRNA-disease associations based on support vector machine. Int J Data Mining Bioinformatics. 2013; 8(3):282–93.
Browne F, Wang H, Zheng H. A computational framework for the prioritization of disease-gene candidates. BMC Genomics. 2015; 16(Suppl 9):S2.
Chen B, Li M, Wang J, Shang X, Wu FX. A fast and high performance multiple data integration algorithm for identifying human disease genes. BMC Med Genomics. 2015; 8(Suppl 3):S2.
Chen B, Shang X, Li M, Wang J, Wu FX. Identifying individual-cancer-related genes by re-balancing the training samples. IEEE Transactions on Nanobioscience. 2016; 15(4):309–315.
Jiang Q, Hao Y, Wang G, Juan L, Zhang T, Teng M, Liu Y, Wang Y. Prioritization of disease microRNAs through a human phenome-microRNAome network. BMC Syst Biol. 2010; 4:1.
Bush WS, Dudek SM, Ritchie MD. Biofilter: a knowledge-integration system for the multi-locus analysis of genome-wide association studies. In: Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing. The Big Island of Hawaii: NIH Public Access: 2009. p. 368.
Yu S, Van Vooren S, Tranchevent LC, De Moor B, Moreau Y. Comparison of vocabularies, representations and ranking algorithms for gene prioritization by text mining. Bioinformatics. 2008; 24(16):i119–25.
Aerts S, Lambrechts D, Maity S, Van Loo P, Coessens B, De Smet F, Tranchevent LC, De Moor B, Marynen P, Hassan B, et al.Gene prioritization through genomic data fusion. Nat Biotechnol. 2006; 24(5):537–44.
Hu Y, Zhou W, Ren J, Dong L, Wang Y, Jin S, Cheng L. Annotating the function of the human genome with gene ontology and disease ontology. BioMed Res Int. 2016;4130861.
Zhang T, Hu Y, Wu X, Ma R, Jiang Q, Wang Y. Identifying liver cancer-related enhancer SNPs by integrating GWAS and histone modification ChIP-seq data. BioMed Res Int. 2016; 6968:2395341.
Peng J, Uygun S, Kim T, Wang Y, Rhee SY, Chen J. Measuring semantic similarities by combining gene ontology annotations and gene co-function networks. BMC Bioinformatics. 2015; 16:1.
Cheng L, Li J, Hu Y, Jiang Y, Liu Y, Chu Y, Wang Z, Wang Y. Using semantic association to extend and infer literature-oriented relativity between terms. IEEE/ACM Trans Comput Biol Bioinformatics. 2015; 12(6):1219–26.
Cheng L, Jiang Y, Wang Z, Shi H, Sun J, Yang H, Zhang S, Hu Y, Zhou M. DisSim: an online system for exploring significant similar diseases and exhibiting potential therapeutic drugs. Sci Rep. 2016; 6:30024.
Peng J, Wang Y, Chen J. Towards integrative gene functional similarity measurement. BMC Bioinformatics. 2014; 15(2):1.
Peng J, Li H, Jiang Q, Wang Y, Chen J. An integrative approach for measuring semantic similarities using gene ontology. BMC Syst Biol. 2014; 8(Suppl 5):S8.
Peng J, Li H, Liu Y, Juan L, Jiang Q, Wang Y, Chen J. InteGO2: a web tool for measuring and visualizing gene semantic similarities using gene ontology. BMC Genomics. 2016; 17(s5):530.
Schlicker A, Lengauer T, Albrecht M. Improving disease gene prioritization using the semantic similarity of gene ontology terms. Bioinformatics. 2010; 26(18):i561–7.
Peng J, Wang T, Hu J, Wang YW, Chen J. Constructing Networks of Organelle Functional Modules in Arabidopsis. Curr Genomics. 2016; 5:427–38.
Cheng L, Shi H, Wang Z, Hu Y, Yang H, Zhou C, Sun J, Zhou M. IntNetLncSim: an integrative network analysis method to infer human lncRNA functional similarity. Oncotarget. 2016; 7(30):47864–74.
Hu Y, Zhang Y, Ren J, Wang Y, Wang Z, Zhang J. Statistical approaches for the construction and interpretation of human protein-protein interaction network. BioMed Res Int. 2016;5313050.
Song S, Hao J, Liu Y, Sun J. Improved EGT-Based Robustness Analysis of Negotiation Strategies in Multiagent Systems via Model Checking. IEEE Trans Human-Mach Syst. 2015; 86(86):1–12.
Moreau Y, Tranchevent LC. Computational tools for prioritizing candidate genes: boosting disease gene discovery. Nat Rev Genet. 2012; 13(8):523–36.
Barabási AL, Gulbahce N, Loscalzo J. Network medicine: a network-based approach to human disease. Nat Rev Genet. 2011; 12:56–68.
Schadt EE. Molecular networks as sensors and drivers of common human diseases. Nature. 2009; 461(7261):218–23.
Oti M, Snel B, Huynen MA, Brunner HG. Predicting disease genes using protein–protein interactions. J Med Genet. 2006; 43(8):691–8.
Krauthammer M, Kaufmann CA, Gilliam TC, Rzhetsky A. Molecular triangulation: bridging linkage and molecular-network information for identifying candidate genes in Alzheimer's disease. Proc Natl Acad Sci U S A. 2004; 101(42):15148–53.
Köhler S, Bauer S, Horn D, Robinson PN. Walking the interactome for prioritization of candidate disease genes. Am J Hum Genet. 2008; 82(4):949–58.
Li Y, Patra JC. Genome-wide inferring gene–phenotype relationship by walking on the heterogeneous network. Bioinformatics. 2010; 26(9):1219–24.
Vanunu O, Magger O, Ruppin E, Shlomi T, Sharan R. Associating genes and protein complexes with disease via network propagation. PLoS Comput Biol. 2010; 6:e1000641.
Van Dongen S. Graph clustering via a discrete uncoupling process. SIAM J Matrix Anal Appl. 2008; 30:121–41.
Navlakha S, White J, Nagarajan N, Pop M, Kingsford C. Finding biologically accurate clusterings in hierarchical tree decompositions using the variation of information. In: Research in Computational Molecular Biology. Springer: 2009. p. 400–17.
Goel R, Harsha H, Pandey A, Prasad TK. Human protein reference database and human proteinpedia as resources for phosphoproteome analysis. Mol bioSystems. 2012; 8(2):453–63.
Brown KR, Jurisica I. Online predicted human interaction database. Bioinformatics. 2005; 21(9):2076–82.
Amberger JS, Bocchini CA, Schiettecatte F, Scott AF, Hamosh A. OMIM.org: Online Mendelian Inheritance in Man (OMIM®), an online catalog of human genes and genetic disorders. Nucleic Acids Res. 2015; 43(D1):D789–98.
Wang B, Mezlini AM, Demir F, Fiume M, Tu Z, Brudno M, Haibe-Kains B, Goldenberg A. Similarity network fusion for aggregating data types on a genomic scale. Nat Methods. 2014; 11(3):333–7.
Wang J, Chen G, Li M, Pan Y. Integration of breast cancer gene signatures based on graph centrality. BMC Syst Biol. 2011; 5(3):1.
Liekens AM, De Knijf J, Daelemans W, Goethals B, De Rijk P, Del-Favero J, et al.BioGraph: unsupervised biomedical knowledge discovery via automated hypothesis generation. Genome Biol. 2011; 12(6):R57.
Ganegoda GU, Wang J, Wu FX, Li M. Prediction of disease genes using tissue-specified gene-gene network. BMC Syst Biol. 2014; 8(Suppl 3):S3.
Eronen L, Toivonen H. Biomine: predicting links between biological entities using network models of heterogeneous databases. BMC Bioinformatics. 2012; 13:1.
Groza T, Köhler S, Moldenhauer D, Vasilevsky N, Baynam G, Zemojtel T, Schriml LM, Kibbe WA, Schofield PN, Beck T, et al.The human phenotype ontology: semantic unification of common and rare disease. Am J Hum Genet. 2015; 97:111–24.
Kibbe WA, Arze C, Felix V, Mitraka E, Bolton E, Fu G, Mungall CJ, Binder JX, Malone J, Vasant D, et al. Disease Ontology 2015 update: an expanded and updated database of human diseases for linking biomedical knowledge through disease data. Nucleic Acids Res. 2015; 43(D1):D1071–8.
Consortium GO, et al. Gene ontology consortium: going forward. Nucleic Acids Res. 2015; 43(D1):D1049–56.
Peng J, Wang T, Wang J, Wang Y, Chen J. Extending gene ontology with gene association networks. Bioinformatics. 2016; 32(8):1185–94.
Szklarczyk D, Franceschini A, Wyder S, Forslund K, Heller D, Huerta-Cepas J, Simonovic M, Roth A, Santos A, Tsafou KP, et al.STRING v10: protein–protein interaction networks, integrated over the tree of life. Nucleic Acids Res. 2015; 43(D1):D447–D452.
Landrum MJ, Lee JM, Riley GR, Jang W, Rubinstein WS, Church DM, Maglott DR. ClinVar: public archive of relationships among sequence variation and human phenotype. Nucleic Acids Res. 2014; 42(D1):D980–5.
Cheng L, Wang G, Li J, Zhang T, Xu P, Wang Y. SIDD: a semantically integrated database towards a global view of human disease. PloS ONE. 2013; 8(10):e75504.
Backstrom L, Leskovec J. Supervised random walks: predicting and recommending links in social networks. In: Proceedings of the fourth ACM international conference on Web search and data mining. Kowloon: ACM: 2011. p. 635–44.
Johnson R, Zhang T. On the Effectiveness of Laplacian Normalization for Graph Semi-supervised Learning. J Mach Learn Res. 2007; 8(4):1489–1517.
Tong H, Faloutsos C, Pan JY. Random walk with restart: fast solutions and applications. Knowl Inf Syst. 2008; 14(3):327–46.
Mattingly C, Rosenstein M, Colby G, Forrest J, Boyer J. The Comparative Toxicogenomics Database (CTD): a resource for comparative toxicological studies. J Exp Zool Part A Comparative Exp Biol. 2006; 305(9):689–92.
Hamosh A, Scott AF, Amberger JS, Bocchini CA, McKusick VA. Online Mendelian Inheritance in Man (OMIM), a knowledgebase of human genes and genetic disorders. Nucleic Acids Res. 2005; 33(suppl 1):D514–7.
Povey S, Lovering R, Bruford E, Wright M, Lush M, Wain H. The HUGO gene nomenclature committee (HGNC). Hum Genet. 2001; 109(6):678–80.
Lipscomb CE. Medical subject headings (MeSH). Bull Med Libr Assoc. 2000; 88(3):265.
Bodenreider O. The unified medical language system (UMLS): integrating biomedical terminology. Nucleic Acids Res. 2004; 32(suppl 1):D267–70.
Schriml LM, Arze C, Nadendla S, Chang YWW, Mazaitis M, Felix V, Feng G, Kibbe WA. Disease Ontology: a backbone for disease semantic integration. Nucleic Acids Res. 2012; 40(D1):D940–6.
Köhler S, Doelken SC, Mungall CJ, Bauer S, Firth HV, Bailleul-Forestier I, Black GC, Brown DL, Brudno M, Campbell J, et al. The Human Phenotype Ontology project: linking molecular biology and disease through phenotype data. Nucleic Acids Res. 2014; 42(D1):D966–74.
Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, Cherry JM, Davis AP, Dolinski K, Dwight SS, Eppig JT, et al.Gene Ontology: tool for the unification of biology. Nat Genet. 2000; 25:25–9.
Kumar R, Raghavan P, Rajagopalan S, Sivakumar D, Tomkins A, Upfal E. Stochastic models for the web graph. In: Foundations of Computer Science, 2000. Proceedings. 41st Annual Symposium on. Redondo Beach: IEEE: 2000. p. 57–65.
Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982; 143:29–36.
We would like to thank Dr. Qingcai Chen, Professor at Harbin Institute of Technology, Shenzhen Graduate School, for invaluable comments and suggestions to the project.
This article has been published as part of BMC Genomics Volume 18 Supplement 1, 2016: Proceedings of the 27th International Conference on Genome Informatics: genomics. The full contents of the supplement are available online at http://bmcgenomics.biomedcentral.com/articles/supplements/volume-18-supplement-1.
This project has been funded by the National Natural Science Foundation of China (Grant No. 61332014, 61272121); the Start Up Funding of the Northwestern Polytechnical University (Grant No. G2016KY0301); the Fundamental Research Funds for the Central Universities (Grant No. 3102016QD003); the National High Technology Research and Development Program of China grant (no. 2015AA020101, 2015AA020108, 2014AA021505).
The publication costs for this article were funded by Northwestern Polytechnical University.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
JP, JC and YW conceived the project; JP, KB and JC designed the algorithm and experiments; JC and JP wrote this manuscript; XS, GW, HX, SJ and LC helped to test the algorithm. All authors read and approved the final manuscript.
The authors declare that there are no competing interests.
School of Computer Science, Northwestern Polytechnical University, Xi'an, China
Jiajie Peng & Xuequn Shang
School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
Kun Bai, Guohua Wang, Hansheng Xue & Yadong Wang
Department of Mathematics, Harbin Institute of Technology, Harbin, China
Shuilin Jin
College of Bioinformatics Science and Technology, Harbin Medical University, Harbin, China
Liang Cheng
Institue of Biomedical Informatics, College of Medicine, University of Kentucky, Lexington, 40536, KY, USA
Jin Chen
Department of Energy Plant Research Lab, Michigan State University, East Lansing, 48824, MI, USA
Current address: Tencent, Inc., Shenzhen, China
Kun Bai
Jiajie Peng
Xuequn Shang
Guohua Wang
Hansheng Xue
Yadong Wang
Correspondence to Yadong Wang or Jin Chen.
Additional file 1
Process of mapping different types of IDs. Additional file 1 is a figure to illustrate how different types of IDs are unified. (PDF 54.5 kb)
Initial weights for different evidence codes. Additional file 2 is a table that lists the weight values for different evidence codes. (PDF 42 kb)
Relation between parameter b and loss value. Additional file 3 is a figure showing the relation between parameter b and loss value. (PNG 155 kb)
Diseases selected as the evaluation set. Additional file 4 is a table of diseases selected as the evaluation set. (PDF 708 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Peng, J., Bai, K., Shang, X. et al. Predicting disease-related genes using integrated biomedical networks. BMC Genomics 18 (Suppl 1), 1043 (2017). https://doi.org/10.1186/s12864-016-3263-4
Laplacian normalization
Supervised random walk
Integrated network
Analyzing agent-based models: a brief survey
A proposed schema combining exploratory, sensitivity analysis and data mining techniques
Illustration: implementing the proposed schema on the 'DITCH' agent-based model
Conclusions and outlook
M. Hammad Patel1,
Mujtaba Ahmed Abbasi1,
M. Saeed1 and
Shah Jamal Alam2
Received: 14 October 2017
Accepted: 30 December 2017
Exploring and understanding outputs from agent-based models is challenging due to the relatively high number of parameters and the multidimensionality of the generated data. We use a combination of exploratory and data mining techniques to analyze data generated from an existing agent-based model, in order to understand the model's behavior and its sensitivity to the initial configuration of its parameters. This is a step in the direction of ongoing research in the social simulation community to incorporate more sophisticated techniques to better understand how different parameters and internal processes influence the outcomes of agent-based models.
Agent-based modeling
Exploratory data analysis
Agent-based models simulating social reality generate outputs which result from a complex interplay of processes related to agents' rules of interaction and the model's parameters. As such agent-based models become more descriptive and evidence-driven, they become a useful tool for simulating and understanding social reality; however, the number of parameters and agents' rules of interaction grows rapidly. Such models often have unvalidated parameters that must be introduced by the modeler in order for the model to be fully functional. These unvalidated parameters are often informed by the modeler's intuition only and may represent gaps in existing knowledge about the underlying case study. Hence, a rather long list of model parameters is not a limitation but an inherent feature of descriptive, evidence-driven models that simulate social complexity.
Theoretical exploration of a model's behavior with respect to its parameters, in particular those that are not constrained by validation, is important but has been, until recently, limited by the lack of available computational resources and analysis tools to explore the vast parameter space. An agent-based model of moderate complexity will, when run across different parameter configurations (i.e., the total number of configurations times the number of simulation runs per configuration), generate output data that could easily be on the scale of gigabytes or more. With high performance computing (HPC), it has become possible for agent-based modelers to explore their models' (vast) parameter space, and while generating this simulated 'big data' is becoming (computationally) cheaper, analyzing agent-based model outputs over a (relatively) large parameter space remains a big challenge for researchers.
In this paper we present a selection of practical exploratory and data mining techniques that might be useful to understand outputs generated from agent-based models. We propose a simple schema and demonstrate its application on an evidence-driven agent-based model of inter-ethnic partnerships (dating and marriages), called 'DITCH'. The model is available on OpenABM1 and reported by Meyer et al. (2014). In the analysis reported in this paper, we focus on the dynamics and interplay of the key model parameters and their effect on model output(s). We do not consider the model's validation in terms of the case studies on which it is based.
The next section ("Analyzing agent-based models: a brief survey" section) reviews selected papers that have previously addressed the issue of analyzing agent-based models. "A proposed schema combining exploratory, sensitivity analysis and data mining techniques" section presents a general schema to analyze outputs generated by agent-based models and gives an overview of the exploratory and data mining techniques that we have used in this paper. In "Illustration: implementing the proposed schema on the 'DITCH' agent-based model" section, we present an overview of the DITCH agent-based model and discuss its parameters, with the default values reported by Meyer et al. (2014). This section also describes the experimental setup and results. Finally, "Conclusions and outlook" section concludes with the next steps in this direction.
Agent-based models tend to generate large volumes of simulated data that are dynamic and high-dimensional, making them (sometimes extremely) difficult to analyze. Various exploratory data analysis (EDA) and data mining (DM) techniques have been reported to explore and understand a model's outcomes under different input configurations (e.g., Villa-Vialaneix et al. 2014). These techniques include heat maps, box and whisker plots, sensitivity analysis, classification trees, the K-means clustering algorithm and the ranking of model parameters by their influence on the model's outcomes.
Several papers have proposed and explored data mining techniques to analyze agent-based simulations. One such study is by Remondino and Correndo (2006), in which the authors applied 'parameter tuning by repeated execution', a technique in which multiple runs are performed for different parameter values at discrete intervals to find the parameters that turn out to be most influential. The authors suggested different data mining techniques such as regression, cluster analysis, analysis of variance (ANOVA), and association rules for this purpose. For illustration, Remondino and Correndo (2006) presented a case study in which a biological phenomenon involving some species of cicadas was analyzed by performing multiple runs of simulations and aggregating the results. In another work, Arroyo et al. (2010) proposed a methodological approach involving a data mining step to validate and improve the results of an agent-based model. They presented a case study in which cluster analysis was applied to validate simulation results of the 'MENTAT' model. Their aim was to study the factors influencing the evolution of Spanish society from 1998 to 2000. The clustering results were found to be consistent with the survey data that was used to initially construct the model.
Edmonds et al. (2014) used clustering and classification techniques to explore the parameter space of a voter behavior model. The goal of this study was to understand the social factors influencing voter turnout. The authors used machine learning algorithms such as K-means clustering, hierarchical clustering, and decision trees to evaluate data generated from the simulations. Recently, Broeke et al. (2016) used sensitivity analysis to study the behavior of agent-based models. The authors applied OFAT ('One Factor at a Time'), global, and regression-based sensitivity analysis to an agent-based model in which agents harvest a diffusing renewable resource. Each of these methods was used to evaluate robustness and outcome uncertainty, and to understand the emergence of patterns in the model.
The above cited references are by no means exhaustive but provide some interesting examples of the use of data mining techniques in analyzing agent-based models. In the next section, we give an overview of some of the EDA and sensitivity analysis (SA) techniques used in this paper. "Illustration: implementing the proposed schema on the 'DITCH' agent-based model" section of this paper further discusses the EDA, SA and DM techniques vis-à-vis the analysis of simulated outputs of an agent-based model.
We propose a schematic approach as a step towards combining the different analysis techniques that are typically used in the analysis of agent-based models. We present a methodological approach that uses exploratory, statistical and data mining techniques for analyzing the relationships between the input parameters and outputs of an agent-based model. Applying the appropriate technique (or set of techniques) to analyze a model's behavior and parameter sensitivity is key to validating and predicting any real-world phenomenon with an agent-based model. In "Illustration: implementing the proposed schema on the 'DITCH' agent-based model" section, we demonstrate the application of various exploratory data analysis, sensitivity analysis, and data mining techniques to understand the impact of various input parameters on the model output.
Figure 1 shows a schema that combines exploratory, statistical and data mining techniques to analyze outputs of agent-based models. We first begin with a broader, exploratory analysis of a selected model's input variables (parameters) to understand their effect on the given agent-based model's outputs. This is a typical way of understanding agent-based models, where a wide range of parameters is explored to visually see their relationship with the model outputs. Performing model sensitivity analysis follows next. With many input parameters, understanding outputs through eyeballing is difficult. Hence, techniques such as the partial rank correlation coefficient (PRCC) help to measure 'monotonic relationships between model parameters and outputs' (Marino et al. 2008). The use of data mining techniques further allows us to find patterns in the generated output across a wider range of the model's input parameters.
A schema for analyzing outputs generated by agent-based (social simulation) models using a combination of exploratory, statistical and data mining techniques
Next, we present an overview of some of the techniques that may be applied for each step in the schema, as shown in Fig. 1.
Data analysis in exploratory data analysis (EDA) is typically visual. EDA techniques help in highlighting important characteristics in a given dataset (Tukey 1977). Choosing EDA as a starting point in our proposed schema provides a simple yet effective way to analyze the relationship between our model's input and output parameters. Graphical EDA techniques such as box and whisker plots, scatter plots, and heat maps (Seltman 2012) are often applied to the data generated by an (agent-based) simulation. Heat maps are often good visual indicators of patterns in the simulated output when parameter values change, whereas scatter plots are often good indicators of the association between two independent variables (model parameters) for a particular dependent variable (model output). Box and whisker plots, on the other hand, summarize a data distribution by showing the median, the inter-quartile range, skewness and the presence of outliers (if any) in the data. Other techniques such as histograms and violin plots are used to describe the full distribution of an output variable for given input parameter configurations and are more descriptive than box and whisker plots (Lee et al. 2015).
In this paper, we used the ggplot2 package in R to generate heat maps and box and whisker plots of the output variables against the most influential varying parameters. The results shown in "Illustration: implementing the proposed schema on the 'DITCH' agent-based model" section highlight tipping points in the heat maps where the percentage of the dependent variable changes significantly. To explore the variation in output across the varying parameters, box plots were drawn for different parameter configurations. The box plots can thus be used to identify the subsets of the dataset that contribute most to increasing the proportion of a target variable.
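As an illustration of this step, the sketch below builds box plots and a heat map with ggplot2 over a mock BehaviorSpace-style output; the data frame layout and the random placeholder values are assumptions, not the authors' data.

```r
library(ggplot2)

# Mock simulation output: one row per run (layout is hypothetical)
runs <- expand.grid(city = c("Newham", "Birmingham", "Bradford", "Dover"),
                    num_agents = c(1000, 2500, 5000, 7500, 10000),
                    love_radar = 1:3, rep = 1:10)
runs$crossethnic <- runif(nrow(runs), 0, 30)  # placeholder output values

# Box and whisker plots of the output across one varying parameter
ggplot(runs, aes(factor(num_agents), crossethnic)) +
  geom_boxplot() + facet_wrap(~city)

# Heat map of the mean output across two parameters
agg <- aggregate(crossethnic ~ num_agents + love_radar + city, runs, mean)
ggplot(agg, aes(factor(num_agents), factor(love_radar), fill = crossethnic)) +
  geom_tile() + facet_wrap(~city)
```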
The purpose of performing sensitivity analysis is to study the sensitivity of the input parameters of our ABM in generating the output variables, and thus provide a more focused insight than exploratory analysis techniques. Several techniques may be used to perform sensitivity analysis. For the results reported in "Illustration: implementing the proposed schema on the 'DITCH' agent-based model" section, we applied multiple sensitivity analysis techniques, such as the variable importance method, the recursive feature elimination method, and PRCC (partial rank correlation coefficient).
Following step 2 of the proposed schema (Fig. 1), we identify two useful methods that are used in the analysis in "Illustration: implementing the proposed schema on the 'DITCH' agent-based model" section: variable importance and recursive feature elimination.
Variable importance
For a given output variable, the ranking of each input variable (model parameter) with respect to its importance can be estimated by using model information (the training data set). Variable importance thus quantifies the contribution of each input variable (parameter) to a given output variable. The method assumes a linear model, whereby the absolute value of each fitted model coefficient is used to estimate the importance of the corresponding input variable. In our case, we used the caret package in R, which fits a linear model of the dependent attribute against the input attributes and then ranks the inputs by their estimated importance.
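A minimal caret sketch of this ranking step is shown below; the mock data frame stands in for the Phase-II results described later, and its column names and generating coefficients are invented purely for illustration.

```r
library(caret)
set.seed(1)

# Mock Phase-II results: five input parameters plus the output crossethnic
n <- 500
runs <- data.frame(love_radar = sample(1:3, n, replace = TRUE),
                   new_link_chance = sample(c(0.25, 0.5, 0.75), n, replace = TRUE),
                   sd_education_pref = runif(n),
                   mean_dating = runif(n, 1, 3),
                   sd_dating = runif(n, 0.1, 1))
runs$crossethnic <- 5 * runs$love_radar + 2 * runs$new_link_chance + rnorm(n)

# Fit a linear model and rank the inputs by estimated importance
fit <- train(crossethnic ~ ., data = runs, method = "lm")
print(varImp(fit))
```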
Recursive feature elimination
The recursive feature elimination (RFE) method builds many models based on different subsets of the attributes, using the caret package in R. This part of the analysis explores subsets of the attributes and estimates the predictive accuracy achieved by the different attribute subset sizes.
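Continuing with the mock runs data frame from the previous sketch, RFE with caret can be set up as follows (the cross-validation settings are illustrative choices):

```r
library(caret)

# Evaluate all subset sizes from 1 to 5 with 10-fold cross-validation
ctrl <- rfeControl(functions = lmFuncs, method = "cv", number = 10)
rfe_fit <- rfe(x = runs[, setdiff(names(runs), "crossethnic")],
               y = runs$crossethnic, sizes = 1:5, rfeControl = ctrl)
print(rfe_fit)  # reports RMSE for each attribute subset size
```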
Using data mining to analyze ABM outputs
There is growing interest in the social simulation community in the application of data mining techniques to analyze the multidimensional outputs that are generated from agent-based simulations across a vast parameter space. In this section, we present an overview of some of the common data mining techniques that have been used to analyze agent-based model outputs.
Classification and regression trees
A classification/regression tree is based on a supervised learning algorithm which provides a visual representation for the classification or regression of a dataset (Russell and Norvig 2009). It provides an effective way to generalize and predict output variables for a given dataset. In such trees, nodes represent the input attributes, and edges represent their values. One way to construct such a decision tree is to use a divide-and-conquer approach to reach the desired output by performing a sequence of tests on each attribute node and splitting the node on each of its possible values. The process is repeated recursively, each time selecting a different attribute node to split on, until there are no more nodes left to split and a single output value is obtained.
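The paper grows its trees in Weka; the sketch below does the analogous thing in R with rpart (a tool swap, not the authors' Weka workflow), reusing the mock runs frame from the variable-importance sketch.

```r
library(rpart)

# Regression tree: recursively split on the input parameters to predict
# the percentage of cross-ethnic marriages
tree <- rpart(crossethnic ~ ., data = runs, method = "anova")
printcp(tree)            # complexity table for the fitted splits
plot(tree); text(tree)   # visual representation of the tree
```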
K-Means clustering
K-Means clustering is one of the most widely implemented clustering algorithms and has been used to analyze agent-based models, e.g., by Edmonds et al. (2014). It is often used in situations where the input variables are quantitative, and a squared Euclidean distance is used as the dissimilarity measure to find clusters in a given dataset (Friedman et al. 2009). The accuracy of the K-means clustering algorithm depends upon the number of clusters specified at initialization; depending upon the choice of the initial centers, the clustering results can vary significantly.
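A short R sketch, again on the mock runs frame (the choice of k = 3 and of the clustered columns is arbitrary):

```r
# Scale first, since squared Euclidean distance is the dissimilarity measure
X <- scale(runs[, c("love_radar", "new_link_chance", "crossethnic")])

set.seed(42)  # results depend on the randomly chosen initial centers
km <- kmeans(X, centers = 3, nstart = 25)
table(km$cluster)  # cluster sizes
```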
In this section, we present an overview of the 'DITCH' agent-based model followed by a description of the experimental setup through which the data was generated. We then report analysis of the generated output using the techniques introduced in the previous section.
An overview of the DITCH agent-based model (Meyer et al. 2014)
We have used the DITCH ("Diversity and Inter-ethnic marriage: Trust, Culture and Homophily") agent-based model by Meyer et al. (2014, 2016) for our analysis. Written in NetLogo,2 the model is evidence-driven and simulates inter-ethnic partnerships leading to cross-ethnic marriages, as reported in different cities of the UK.
Agents in the DITCH model are characterized by traits that influence their preferences for choosing suitable partner(s) over the course of a simulation run. The model assumes heterosexual partnerships/marriages within and across different ethnicities.
Agents' traits in the DITCH model (source: Meyer et al. 2016):
Gender {Male, Female}: Agents choose partners of opposite gender.
Age {18–35}: Preference based on a range with (default) mean 1.3 years and (default) standard deviation of 6.34 years.
Ethnicity (w, x, y, z): Agents have a preference for selecting partners of their own ethnicity or a different ethnicity.
Compatibility (score: 0–1): Agents prefer partners with a compatibility score that is closer to their own.
Education (levels: 0–4): Agents are assigned different levels of education, which influences their partner selection.
Environment: Agents in the DITCH model are situated in a social space where they interact with each other and use their pre-existing social connections to search for potential partners. The choice of a potential partner depends upon an agent's aforementioned traits as well as other model parameters, which we discuss later on. Once a partnership is formed, agents date each other to determine if the partner's education and ethnicity satisfy their requirements. They continue dating for a specified period, after which they reveal their compatibility scores to each other; if the scores are within their preferred range, they become marriage partners. Once a marriage link is formed, agents remain in the network without searching for any more potential partners. There is no divorce or break-up of marriages in the model. The model runs on a monthly scale, i.e., a time step/tick corresponds to 1 month in the model.
DITCH model parameters: The following model parameters set up the initial conditions at the start of a simulation run.
ethproportion: Proportions of different ethnicities in the agent population.
num-agents: Total number of agents. The population remains constant during simulation.
love-radar (values: 1, 2, 3): Defines the search range by an agent for a potential partner in its network as the 'social distance'.
new-link-chance: Probability that two unconnected agents will form a new link during a simulation run.
mean-dating/sd-dating: Mean and standard deviation of an agent's dating period (in years).
sd-education-pref: An agent's tolerance for the difference in education level vis-à-vis its romantic partner.
Experimental setup
Initialization of ethnic proportions: The DITCH model uses the UK census data of 2001 as the basis for the parameter ethproportion. In all of our simulation experiments reported in this paper, the following four cases were used, based on four UK cities differentiated with respect to the proportions of different ethnicities (Meyer et al. 2014):
Newham, London (Super-diverse3): White: British (WB): 49.8%; Asian/Asian British: Indian (A/ABI): 17.9%; Asian/Asian British: Bangladeshi (A/ABB): 13.0%; Black/Black British: African (B/BBA): 19.3%.
Birmingham, W. Midlands (Cosmopolitan): WB: 75.53%; Asian/Asian British: Pakistani (A/ABP): 12.26%; A/ABI: 6.57%; Black/Black British: Caribbean (B/BBC): 5.64%.
Bradford, W. Yorkshire (Bifurcated): WB: 80.3%; White: Other (WO): 1.6%; A/ABI: 2.8%; A/ABP: 15.3%.
Dover, Kent: WB: 98.17%; WO: 1.83%.
We conducted experiments using the BehaviorSpace tool in NetLogo, which allows exploring a model's parameter space. The approach we used is also called "parameter tuning by repeated execution", i.e., varying one input parameter at a time while keeping the remaining parameters (e.g., update-threshold, second-chance-interval) unchanged (Remondino and Correndo 2006).
The DITCH model generates several outputs, and a complete description is given by its developers in Meyer et al. (2014; 2016). In the analyses reported in this paper, we have focused on one variable as the primary output: crossethnic, the percentage of cross-ethnic marriages in the system. The values of this variable were taken at the end of a simulation run (120 time steps; 10 years) and averaged over 10 replications per parameter configuration.
Given our resource constraints, we performed the experiments in two phases: in the first phase, we looked into the model's sensitivity to scale (in terms of the number of agents) and to the extent to which agents search for their potential partners in the network (i.e., love-radar). In the second phase, we explored the model's parameters specific to expanding agents' social networks and those related to agents' compatibility with their potential partners.
Phase-I: We first explored the model by varying two parameters, with 10 repetitions, for a total of 600 runs. All other parameters remained unchanged. Each simulation ran for 120 ticks (10 years).
Phase-II: In the second phase, we kept the number of agents fixed at 3000 (see "Conclusions and outlook" section for a discussion of this choice). We then varied the other five model parameters for the four UK cities' ethnic configurations (see Table 2), for a total of 9720 runs. Each simulation ran for 120 ticks (10 years).
Model parameters and their range of values that were explored in Phase-I of simulation experiments

Parameter: num-agents. Values explored: 1000, 2500, 5000, 7500, 10,000. Description: The number of agents in the model.

Parameter: love-radar. Values explored: 1, 2, 3. Description: The diameter of an agent's ego network through which potential partners are sought.
Simulation results and analyses
Here we present the results of the simulation experiments. For box plots and heat maps, we used R4 and its ggplot2 package. For regression/parameter-importance analyses and for cluster analyses, we used R's caret and cluster packages, respectively. For classification trees, we used the Weka3 software.5
Results from simulation experiments (Phase-I)
In Phase-I, we varied the number of agents and the three values of the model parameter love-radar. For the rest of the parameters, default values were used, as reported in Meyer et al. (2016). The purpose of running experiments in Phase-I was to gain a broader sense of the model's outcomes, in particular the outcome of interest, which is the percentage of cross-ethnic marriages occurring over a course of 10 years. Primarily, we were interested in testing the model's sensitivity to scale (the number of agents) and to the availability of potential partners as the social distance (love-radar parameter) increases (Table 1).
Model parameters and their range of values that were explored in Phase-II of simulation experiments

Parameter: love-radar. Values explored: {1, 2, 3}. Description: Represents the social distance with respect to an agent's ego network through which potential partners are sought.

Parameter: new-link-chance. Values explored: {0.25, 0.5, 0.75}. Description: The probability for an agent to form a new link at each time step (month).

Parameter: sd-education-pref. Description: "Standard deviation of the normal distribution governing the agents' preference for difference in education level (mean is always 0)."—Meyer et al.

Parameters: mean-dating and sd-dating. Description: "Mean and standard deviation (in years) of the normal distribution governing the duration of the agents' dating period."—Meyer et al.

The number of agents was kept fixed at 3000
To summarize the results, we generated box and whisker plots and heat maps (Janert 2010; Seltman 2012; Tukey 1977) to explore the variation in output across the two varying parameters and within each parameter configuration when repeated 10 times. Figure 2 clearly indicates that the average percentage of cross-ethnic marriages across all four cases (UK cities) is sensitive to the number of agents in the system. In particular, there is a sharp decrease in the average percentage of cross-ethnic marriages when the number of agents increases from 1000 to 2500, which is most evident in the case of Newham, where ethnic diversity was greatest, in contrast to the case of Dover, where 98% of the agent population belonged to the White ethnic group. While sensitivity to scale is observed, the decline slows and levels off as the number of agents reaches 10,000.
Box and whisker plots showing the output variable crossethnic (percentage of cross-ethnic marriages) for the four cities in the UK across two parameters num-agents and love-radar in the DITCH model (with box plots drawn across the five different values of num-agents)
For a fixed size of agent population, the love-radar parameter in the DITCH model does influence the percentage of cross-ethnic marriages for all four cases (UK cities). This is unsurprising, as increasing the value of this parameter gives agents a wider search space to find potential partners, and thus the possibility of finding a potential partner belonging to a different ethnic group increases as well. However, the relation between increasing values of love-radar and the output variable crossethnic is nonlinear for all four cases (see Fig. 3). In Newham, which has the greatest ethnic diversity among the four cities considered, the percentage of cross-ethnic marriages increases as the allowable social distance (the value of the love-radar parameter) increases, whereas in the case of Bradford and Dover, an increase in the love-radar from 1 to 2 results in an increase in the average cross-ethnic marriages but a further increase from 2 to 3 has the opposite effect. The heat map plot shown in Figure S1 in Additional file 1: Appendix further highlights this effect.
Box and whisker plots showing the output variable crossethnic (percentage of cross-ethnic marriages) for the four cities in the UK across two parameters num-agents and love-radar in the DITCH model (with box plots drawn across the three different values of love-radar)
From an exploratory analysis of the Phase-I simulations, it is clear that the DITCH model is sensitive to the number of agents in the system. As the effect dampens when the agent population increases further, we fix the number of agents at 3000 for the simulation experiments in Phase-II. In the case of the love-radar, the observed nonlinear relation indicates that the other model parameters that were kept fixed in Phase-I also contribute to the output. A further exploration and a deeper analysis of these model parameters is presented next.
Results from simulation experiments (Phase-II)
In Phase-II, we fixed the agent population at 3000 and ran simulations across different values of the five other model parameters, as described in the previous section. Here we demonstrate the use of several predictive and data mining techniques that might be useful in exploring and analyzing outputs generated from agent-based models.
First, we estimate the 'importance' of parameters by building a predictive model from the simulated data (Brownlee 2016). For instance, the importance of parameters can be estimated (subject to the underlying assumptions) using a linear regression model. We used the caret package in R for this purpose. The method ranks attributes by importance with respect to a dependent variable, here crossethnic (the percentage of cross-ethnic marriages), as shown in Fig. 4 (left). As Fig. 4 (left) shows, the model parameters love-radar and new-link-chance were identified as the most important parameters, while the parameter mean-dating was ranked last. Figure 4 (right) shows the RMSE (root mean square error) quantifying the predictive model's accuracy in the presence and absence of model parameters through the automated feature selection method. Again, love-radar and new-link-chance were found to be the most significant (the top two independent variables). Having identified love-radar and new-link-chance as the two most important parameters, we explore variation in the generated dataset for the four cases (UK cities) with respect to these two parameters, as shown in the box plots in Fig. 5.
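A minimal sketch of this importance ranking with caret is given below. The formula and data frame are assumptions standing in for the authors' actual setup, and varImp here reports the linear model's importance scores rather than reproducing the exact figure shown in Fig. 4.

library(caret)
# 'sims' is a hypothetical data frame holding the five varied parameters
# and the output variable for each simulation run.
fit <- train(crossethnic ~ love_radar + new_link_chance + sd_education_pref +
               mean_dating + sd_dating,
             data = sims, method = "lm")
print(varImp(fit))  # ranks the parameters by importance for the fitted model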
Ranking of the five parameters as predictors of the output variable crossethnic. Left: the importance ranking of model parameters. Right: RMSE score against different models built using the automatic feature selection algorithm
Box and whisker plots showing the output variable crossethnic (percentage of cross-ethnic marriages) for the four cities in the UK across two parameters love-radar and new-link-chance in the DITCH model (with box plots drawn across the three different values of love-radar)
As Fig. 5 shows, increasing the value of love-radar parameter does result in increasing average cross-ethnic marriages in the DITCH model. Increasing chances of new links formation also contributes albeit less significantly. The variations observed in the box and whisker plots also suggest the role of other three parameters, which seem to play a role when the values of love-radar and new-link-chance are increased (see heat map in Figure S2 in Additional file 1: Appendix).
Evaluating partial rank correlation coefficients
We further explored a subspace of the parameter space to identify the most admissible parameters by evaluating partial rank correlation coefficients (PRCC) for all output variables (Blower and Dowlatabadi 1994). The rationale behind calculating the PRCC is that, for a particular output, not all input parameters may contribute equally; thus, the PRCC can be useful for identifying the most relevant parameter(s). One major advantage of identifying the most relevant parameters based on the PRCC is that, given a large parameter space, if only a few input parameters contribute significantly to a particular output, the dimensionality of the parameter space is reduced considerably. For our analysis, we calculated the PRCCs for all output variables using a package in R called knitr.6 Table 3 shows the top three contributing inputs for each output variable when the PRCC was estimated.
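For reference, a PRCC can be computed directly in base R by rank-transforming all variables, regressing out the remaining inputs, and correlating the residuals. This is a generic sketch of the standard definition, not the authors' actual script; the variable names are placeholders.

# Partial rank correlation of input x with output y, controlling for the
# remaining inputs Z (a data frame or matrix of the other parameters).
prcc <- function(x, y, Z) {
  rZ <- apply(as.matrix(Z), 2, rank)
  ex <- residuals(lm(rank(x) ~ rZ))  # part of rank(x) unexplained by Z
  ey <- residuals(lm(rank(y) ~ rZ))  # part of rank(y) unexplained by Z
  cor(ex, ey)
}
# Example: PRCC of love-radar with the overall cross-ethnic output
# prcc(sims$love_radar, sims$crossethnic,
#      sims[, c("new_link_chance", "sd_education_pref", "mean_dating", "sd_dating")])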
Table 3. PRCC values of the top three contributing input parameters against each output variable. Only fragments of the table were recoverable: the column headers (Input-variable-1, Value of input1); the output-variable rows (cross-ethnic marriages overall, number of agents per ethnicity, married agents per ethnicity, cross-ethnic marriages per ethnicity); the top inputs for the overall cross-ethnic marriages row (loveradar, newlinkchance, sddating); and one coefficient value, − 0.04572220.
Following our proposed schema, we proceed with generating a classification and regression tree using Weka's decision tree builder, as shown in Fig. 6.
Classification and regression tree generated from the simulated data across the five parameters in the DITCH model against the average percentage of cross-ethnic marriages shown in the leaf nodes
The decision tree shown in Fig. 6 was built using Weka's REPTree algorithm.7 It is a 'fast decision tree learner and builds a decision/regression tree using information gain/variance reduction' (Hall et al. 2011). Since here we are predicting the cross-ethnic parameter, which is a continuous variable, the REPTree algorithm uses variance reduction to select the best node to split. We used the five varied parameters to build the tree shown in Fig. 6, in which the DITCH model parameters love-radar, sd-education-pref, mean-dating, new-link-chance, sd-dating were the predictors while the output parameter cross-ethnic was the target variable. We set the minNum (the minimum total weight of the instances in a leaf) property of the classifier to 200 to avoid overfitting. The resulting tree had the following accuracy/error metrics on the test/unseen data.
Mean Absolute Error: 0.9582; Root Mean Squared Error: 1.2995.
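A broadly analogous regression tree can be grown in R with the rpart package, shown below as a hedged sketch: rpart implements CART with variance-based ('anova') splitting rather than Weka's REPTree, so the resulting tree and error metrics will differ. Here minbucket plays a role similar to REPTree's minNum, and the data frame and column names are placeholders.

library(rpart)
tree <- rpart(crossethnic ~ love_radar + sd_education_pref + mean_dating +
                new_link_chance + sd_dating,
              data = sims, method = "anova",              # regression tree
              control = rpart.control(minbucket = 200))   # cf. Weka minNum = 200
plot(tree); text(tree, use.n = TRUE)                      # leaf means and counts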
As the constructed tree shows (Fig. 6), ethnic diversity (or the lack of it) in the agent population was the strongest determinant of cross-ethnic marriages. Once again, love-radar was found to be the second most important determinant, especially in situations where some ethnic diversity existed. When the value of the love-radar was set to 1 (i.e., only immediate neighbors in the social network were sought), it alone determined the percentage of cross-ethnic marriages; however, for higher values of the love-radar parameter (i.e., 2 and 3), the output was further influenced by new-link-chance and, in other instances, by the parameters related to agents' dating in the simulation.
K-Means clustering on all 13 DITCH output variables
We now turn to the K-means clustering algorithm to find clusters in the generated dataset. We performed the cluster analysis on the 13 output variables of the DITCH model that were recorded from our simulation experiments. We chose the data from Phase-II, which involved five varied parameters for each sample area (a UK city), with 9720 runs altogether. Our purpose in applying this technique was to group output instances that were similar in nature into clusters. All output variables were first normalized before proceeding to the next step of finding the optimal number of clusters (k). We then followed the technique used by Edmonds et al. (2014), in which the within-group sum of squares is calculated against the number of clusters for multiple randomly initialized runs. The optimal number of clusters can then be identified as the point at which the plot shows a bend or elbow-like curve. Figure 7 (left) suggests the optimal number of clusters to be around 3 or 4, where a bend is observed.
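The elbow heuristic can be reproduced in base R as follows; 'outputs' stands for the matrix of the 13 recorded output variables and is a placeholder for the authors' data.

set.seed(1)
X <- scale(outputs)   # normalize the 13 output variables
wss <- sapply(1:10, function(k)
  kmeans(X, centers = k, nstart = 20)$tot.withinss)
plot(1:10, wss, type = "b", xlab = "Number of clusters k",
     ylab = "Within-group sum of squares")   # look for the elbow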
Finding the optimal number of clusters for the K-means clustering algorithm. Left: using the within group sum of squares technique. Right: using the silhouette analysis
The silhouette analysis8 shown in Fig. 7 (right) also indicates that the optimal value for k is around 3 or 4 in this case. The plot displays a measure of similarity between the instances in each cluster and thus provides a way to assess choices such as the number of clusters (Rousseeuw 1987). The results from this analysis confirm that the optimal number of clusters should be around 4. Hence, we ran the K-means clustering algorithm on all thirteen outputs; the centroids of the four K-means clusters are given in Table 3. The partitioning of the data into four clusters gives a good split across the parameters explored.
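Under the same placeholder setup, the silhouette analysis can be run with the cluster package (the pairwise distance matrix is expensive for large datasets, so one may subsample):

library(cluster)
km  <- kmeans(X, centers = 4, nstart = 20)
sil <- silhouette(km$cluster, dist(X))   # dist(X) is O(n^2); subsample if needed
summary(sil)                             # average silhouette width per cluster
plot(sil)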
As Table 3 shows, the goodness of fit is high (~ 87%), indicating that the clusters are distinct, with an almost equal number of instances in each of the four clusters. The mean percentage of cross-ethnic marriages was highest in Cluster 2 (19.78%) and lowest in Cluster 4 (1.25%), while Clusters 1 and 3 were closer in terms of average cross-ethnic marriages. These results are expected, as they present quite an accurate picture of the population distribution of ethnicities in the four UK cities (Newham, Dover, Bradford and Birmingham). We can check the distribution of the input parameter eth-proportions across these four clusters; the resulting matrix in Table 4 shows that each region is quite accurately labeled in each cluster. The ethnicity accounting for most cross-ethnic marriages differs by cluster: in Cluster 1, representing the sample area Birmingham, it is ethnicity-z (Black/Black British: Caribbean, B/BBC), 5.64% of the total population; in Cluster 2, ethnicity-y (Asian/Asian British: Indian, A/ABI), 6.57%; in Cluster 3, ethnicity-x (Asian/Asian British: Indian, A/ABI), 17.9%; and in Cluster 4, ethnicity-x (White: Other, WO), 1.83% of the total population.
Table: Centroids of the four K-means clusters for all thirteen output variables in the DITCH model, based on the data generated through simulations in Phase-II (between_SS/total_SS = 87.1%). Recoverable structure: one column per cluster (Cluster 1, 2431 instances; Cluster 2, 2430 instances; and the remaining two clusters), with rows for the output variables num-turtles-w/x/y/z, married-turtles-w/x/y/z, out-percent-w/x/y/z (%), cross-ethnic (%), and the within-cluster sum of squares; the individual centroid values were not recoverable.
Figure 8 (top) shows a 2D representation of all the data points in the four clusters. As discussed earlier, Clusters 1 and 3 have some overlapping points, while Clusters 2 and 4 are distinct and separate. Finally, Fig. 8 (bottom) shows the variability in average cross-ethnic marriages across the four clusters.
Top: 2D plot of the four clusters found using the K-means clustering algorithm for the data generated in Phase-II of the simulation experiments. Bottom: box plot showing the distribution of data points against the output variable crossethnic
As agent-based models of social phenomena become more complex, with many model parameters and endogenous processes, exploring and analyzing the generated data becomes even more difficult. We need a whole suite of analyses to examine the data that such agent-based models generate, incorporating traditional or dynamic social network analysis, spatio-temporal analysis, machine learning, or more recent approaches such as deep learning algorithms. A growing number of social simulation researchers are employing different data mining and machine learning techniques to explore agent-based simulations.
The techniques discussed in this paper are by no means exhaustive, and the exploration of useful analysis techniques for complex agent-based simulations is an active area of research. Lee et al. (2015), for example, examined multiple approaches to understanding ABM outputs, including both statistical and visualization techniques. The authors proposed methods to determine a minimum sample size, followed by an exploration of model parameters using sensitivity analysis. Finally, the authors focused on transient dynamics, using spatio-temporal methods to gain insight into how the model evolves over time.
In this paper, we propose a simple step-by-step approach that combines three different analysis techniques. For illustration, we selected an existing evidence-driven agent-based model by Meyer et al. (2014, 2016), called the 'DITCH' model. As a starting point, we recommend the use of exploratory data analysis (EDA) techniques for analyzing agent-based models. EDA provides a simple yet effective set of techniques for analyzing the relationship between a model's input and output variables. These techniques are useful for spotting patterns and trends in a model's output across varying input parameter(s) and for gaining insight into the distribution of the generated data. Sensitivity analysis (SA) techniques follow the exploratory analysis and are useful, e.g., for ranking input parameters in terms of their contribution to a particular model output. SA techniques are useful not only for identifying those parameters but also for quantifying the variability of the effect these input parameters may have on different model output variables. The application of data mining (DM) techniques to analyze agent-based social simulations is relatively new. While traditional techniques such as EDA or SA (or other statistical techniques) are useful, they may fail to fully capture the complex, multidimensional output that may result from agent-based simulations. DM can be useful in providing a better and more holistic understanding of the role of parameters and processes in generating such output.
http://www.openabm.org.
http://www.netlogoweb.org/.
The case labels 'super diverse', 'cosmopolitan', 'bifurcated' and 'parochial' are taken from Meyer et al. (2014, 2016); as reported in their original paper.
https://cran.r-project.org/.
http://www.cs.waikato.ac.nz/ml/weka/.
https://cran.r-project.org/web/packages/knitr/index.html.
http://weka.sourceforge.net/doc.dev/weka/classifiers/trees/REPTree.html.
https://stat.ethz.ch/R-manual/R-devel/library/cluster/html/silhouette.html.
HP, MA and SJA drafted the manuscript; SJA and MS designed the study; HP and MA generated the data; HP, MA, SJA and MS analyzed and interpreted the data. All authors read and approved the final manuscript.
We are thankful to the anonymous reviewers for their useful feedback and also to the reviewers of the Social Simulation 2017 conference where an earlier version of this paper was presented.
No funding received.
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
40294_2018_52_MOESM1_ESM.pdf Additional file 1. Additional figures and table.
Department of Computer Science, University of Karachi, Karachi, 75270, Pakistan
School of Science & Engineering, Habib University, Karachi, 75290, Pakistan
Arroyo J, Hassan S, Gutiérrez C, Pavón J (2010) Re-thinking simulation: a methodological approach for the application of data mining in agent-based modelling. Comput Math Organ Theory 16:416–435
Blower SM, Dowlatabadi H (1994) Sensitivity and uncertainty analysis of complex models of disease transmission: an HIV model as an example. Int Stat Rev 62:229–243
Broeke GT, van Voorn G, Ligtenberg A (2016) Which sensitivity analysis method should I use for my agent-based model? J Artif Soc Soc Simul 19:1. http://jasss.soc.surrey.ac.uk/19/1/5.html
Edmonds B, Little C, Lessard-Phillips L, Fieldhouse E (2014) Analysing a complex agent-based model using data-mining techniques. In: Social Simulation Conference 2014. http://ddd.uab.cat/record/125597
Friedman J, Hastie T, Tibshirani R (2009) The elements of statistical learning, 2nd edn. Springer, New York
Hall M, Witten I, Frank E (2011) Data mining: practical machine learning tools and techniques. Morgan Kaufmann, Burlington
Janert PK (2010) Data analysis with open source tools: a hands-on guide for programmers and data scientists. O'Reilly, Newton
Brownlee J (2016) Master machine learning algorithms: discover how they work and implement them from scratch. https://machinelearningmastery.com/master-machine-learning-algorithms/. Accessed 1 Dec 2017
Lee JS, Filatova T, Ligmann-Zielinska A, Hassani-Mahmooei B, Stonedahl F, Lorscheid I, Voinov A, Polhill JG, Sun Z, Parker DC (2015) The complexities of agent-based modeling output analysis. J Artif Soc Soc Simul 18:4. http://jasss.soc.surrey.ac.uk/18/4/4.html
Meyer R, Lessard-Phillips L, Vasey H (2014) DITCH: a model of inter-ethnic partnership formation. In: Social Simulation Conference 2014. http://fawlty.uab.cat/SSC2014/ESSA/socialsimulation2014_037.pdf
Meyer R, Lessard-Phillips L, Vasey H (2016) DITCH—a model of inter-ethnic partnership formation (version 2). CoMSES computational model library. https://www.openabm.org/model/4411/version/
Marino S, Hogue IB, Ray CJ, Kirschner DE (2008) A methodology for performing global uncertainty and sensitivity analysis in systems biology. J Theor Biol 254(1):178–196
Remondino M, Correndo G (2006) MABS validation through repeated execution and data mining analysis. Int J Simul 7:6
Rousseeuw PJ (1987) Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J Comput Appl Math 20:53–65
Russell S, Norvig P (2009) Artificial intelligence: a modern approach. Pearson, Upper Saddle River
Seltman HJ (2012) Experimental design and analysis. Carnegie Mellon University, Pittsburgh, p 428
Tukey JW (1977) Exploratory data analysis. Pearson, Upper Saddle River
Villa-Vialaneix N, Sibertin-Blanc C, Roggero P (2014) Statistical exploratory analysis of agent-based simulations in a social context. Case Stud Bus Ind Gov Stat 5:132–149
Dynamic scan control in STEM: spiral scans
Xiahan Sang1,2,
Andrew R. Lupini2,3,
Raymond R. Unocic1,2,
Miaofang Chi1,2,
Albina Y. Borisevich2,3,
Sergei V. Kalinin1,2,
Eirik Endeve4,
Richard K. Archibald4 &
Stephen Jesse1,2
Advanced Structural and Chemical Imaging volume 2, Article number: 6 (2016)
Scanning transmission electron microscopy (STEM) has emerged as one of the foremost techniques to analyze materials at atomic resolution. However, two practical difficulties are inherent to STEM imaging: radiation damage imparted by the electron beam, which can potentially damage or otherwise modify the specimen, and slow-scan image acquisition, which limits the ability to capture dynamic changes at high temporal resolution. Furthermore, due in part to scan flyback corrections, typical raster scan methods result in an uneven distribution of dose across the scanned area. A method allowing extremely fast scanning with a uniform residence time would enable imaging at low electron doses, ameliorating radiation damage while permitting image acquisition at higher frame rates without sacrificing atomic resolution. The practical complication is that rastering the STEM probe at higher speeds causes significant image distortions. Non-square scan patterns provide a solution to this dilemma and can be tailored for low-dose imaging conditions. Here, we develop a method for imaging with alternative scan patterns and investigate their performance at very high scan speeds. A general analysis of spiral scanning is presented for three spiral scan functions (Archimedean, Fermat, and constant linear velocity spirals), which were tested for STEM imaging. The quality of spiral-scan STEM images is generally comparable with that of STEM images from conventional raster scans, and the dose uniformity can be improved.
Beam damage, drift distortion, and scan distortion are inherent issues that hinder quantitative interpretation of scanning transmission electron microscopy (STEM) imaging [1–6]. Beam damage occurs when the electron beam used to form the image transfers a critical amount of energy to the sample being examined, potentially damaging or otherwise changing the subject of the experiment. This energy transfer can be very useful, for example enabling electron energy loss spectroscopy (EELS), the deliberate sculpting of nano-device components [7], or the excitation of diffusion of single atoms [8, 9] and vacancies [10]. In most cases, however, such damage to the sample is considered detrimental. Thus, various strategies are employed to minimize beam damage, and the optimal method will depend on the properties of the sample and the microscope imaging parameters. If damage is dominated by knock-on mechanisms, a viable option is to reduce the accelerating voltage below the threshold at which significant damage occurs. Conversely, if the damage is dominated by ionization, then it may be beneficial to increase the accelerating voltage to reduce the ionization cross section [1]. Additional experimental procedures might also be useful, such as coating the sample with a conductive layer (such as carbon), imaging inside a liquid [11], or operating at cryogenic temperature [12].
Similarly, a variety of imaging strategies can be employed to minimize the electron dose, chief among which is reducing the beam current. Using more source demagnification can improve spatial resolution, but the lower signal level may degrade the signal-to-noise ratio. Other possibilities include control of the beam dose via 'blanking', adjusting operating parameters (such as focus and astigmatism) on an area slightly away from the area of interest, using repeated fast scans [13], or making more efficient use of the available signals [14]. The recent development of sparse sampling methods also appears to be extremely promising [15].
On the other hand, acquisition of multiple fast scans can both reduce the dose rate and allow sequential imaging, which is particularly useful for samples that are beam-sensitive or that experience charging. Also, there has been a recent resurgence of interest in applying methods to correct scan and drift distortions in STEM using frame averaging [2–6]. However, the success of these methods raises the question of whether the scan itself can be improved to eliminate some of the distortions during data acquisition rather than by post-processing. Extremely high-speed scanning and the possibility of dynamic stabilization seem to be promising routes for further exploration.
Advantages of using non-traditional scan paths have been demonstrated in scanning probe microscopy (SPM), including improved speed and accuracy and the ability to automate the targeting of regions of interest for higher resolution measurements [16, 17]. However, customization or optimization of the scanning path has rarely been used in STEM. There are several technical difficulties associated with scanning in STEM. These mostly arise because of the competing demands on the probe response: the user might wish to move the probe rapidly, requiring a fast response, whereas the probe also has to be highly stable and not wobble about each position during a slower scan. Typically, the scan speed used for spectroscopy might be 3–6 orders of magnitude slower than for imaging. Obviously, these competing demands place stringent requirements on the scan amplification electronics. Moreover, STEM scans are usually 'double-deflection' to obtain tilt-free scans or coma-free scans. Here, we will largely ignore such details and treat the magnification and scan purification as separate problems.
In this paper, we show for the first time that aberration-corrected STEM images can be formed at high speed using paths that are significantly different from traditional orthogonal rastering. Advantages and disadvantages of different scan paths will be compared in terms of sampling uniformity and distortion. To differentiate from conventional rastering mode scans, this new scanning method will be referred to as general-scan STEM (G-STEM).
For test purposes, we used a SrTiO3 (STO) sample viewed down the [110] zone axis. STO is a very common substrate for thin-film growth, which is a major topic of interest for electron microscopy, meaning that there will likely be an STO reference region available on many technologically important samples. Moreover, STO is reasonably stable and does not charge significantly under typical electron doses.
STEM images were acquired using an FEI Titan 80–300 operating at 300 kV, equipped with a Fischione high-angle annular dark-field (HAADF) detector. We developed a custom field-programmable gate array (FPGA)-based scan system (in a National Instruments PXIe-1073 chassis) capable of interfacing with a variety of different microscopes. A LabView program was developed to control the scan unit with input coordinates from customizable Matlab code. This system generates voltage waveforms that are sent to the x- and y-scan controls to enable arbitrary and dynamic beam positioning. The maximum readout frequency of the FPGA scan system is 2 MHz, with an equivalent shortest dwell time of 0.5 μs.
At this stage, it is important to point out that the unconventional scan patterns used here induce a paradigm shift in how image data are considered. In a traditional scanning mode, the data are essentially stored as an array of intensities, which are assigned to elements within a 2D matrix. However, for more complicated scan patterns, it is also necessary to specify the (nominal) position where each data point was acquired. A simple interpolation algorithm (herein called reconstruction) is used to map each data point to an element of the displayed or printed image. Thus, rather than a simple list of intensities (I_i), the data are better envisioned as a list of positions and intensities (x_i, y_i, I_i).
In practice, we have begun to store the nominal positions in this manner. Of course, it is possible to store just the scan-generation algorithm, but the factor-of-3 increase in storage requirements is largely irrelevant here. Moreover, if distortions are significant, the true probe position may be quite different from the nominal position. Scan distortion correction consists of constructing the map from nominal to 'true' probe positions. Thus, this paradigm also highlights the analogy to the usual post-processing distortion correction, where a per-pixel map of corrections is generated [2–4].
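As a rough illustration of the reconstruction step (not the authors' actual code), the following R sketch bins each (x_i, y_i, I_i) sample into its nearest pixel and averages the samples that land in the same pixel; real implementations typically use smoother interpolation.

reconstruct <- function(x, y, I, n = 200) {
  # Map coordinates onto pixel indices 1..n
  px <- pmin(floor((x - min(x)) / diff(range(x)) * n) + 1, n)
  py <- pmin(floor((y - min(y)) / diff(range(y)) * n) + 1, n)
  acc <- matrix(0, n, n)   # running intensity sums
  cnt <- matrix(0, n, n)   # sample counts per pixel
  for (k in seq_along(I)) {
    acc[px[k], py[k]] <- acc[px[k], py[k]] + I[k]
    cnt[px[k], py[k]] <- cnt[px[k], py[k]] + 1
  }
  acc / cnt   # mean intensity per pixel; NaN where no sample was recorded
}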
In this paper, every G-STEM data set contains a series of twenty frames, each acquired with a 0.2 s frame time at the maximum frequency of 2 MHz. The 400,000 data points in each frame were then reconstructed to form a 200 × 200 image. The twenty image frames were aligned using cross-correlation and averaged to increase the signal-to-noise ratio (SNR). The final images presented in the figures were further smoothed in the frequency domain using a Gaussian filter.
Sawtooth scans
A typical STEM image acquired using the conventional raster scan path with a dwell time of 20 μs and a 512 × 512 frame size is shown in Fig. 1a. Here, the brighter atom columns are Sr and the fainter columns are Ti. The drift distortion is evident, as the angle between [\(1\bar{1}0\)] and [001] deviates from 90°. We begin our G-STEM experiments with a simple sawtooth scan path that resembles the conventional raster scan from left to right and top to bottom. Here, we use a simple version of this path such that the beam flies directly from the end of the last line to the start of the next line and continues to scan without any flyback time or line synchronization. The probe location (x_i, y_i) as a function of time is shown in Fig. 1b. Here, (x_i, y_i) scales with the voltages applied along the two directions. The X-axis (red) is defined as the horizontal direction, also known as the fast scan direction in conventional STEM. The Y-axis (black) is defined as the vertical direction, also known as the slow scan direction. The scans for both the X- and Y-axes are sawtooth waves of appropriate frequencies. Practically, the amplitude of this wave is controlled by the microscope electronics and defines the magnification of the STEM image. To better illustrate the scan path, we also plot the beam locations in 2D, as shown in Fig. 1c. The black zigzagging line connecting the dots illustrates the scan path.
a Conventional STEM image acquired from STO along [110] zone axis. Schematic illustration of a sawtooth scan. b Voltages applied to the X and Y scan coils over the time for a single frame acquisition. c Probe positions, shaded from dark to light as a function of time. d A reconstructed G-STEM image acquired with a sawtooth scan
Figure 1d shows a processed G-STEM image acquired using a fast sawtooth scan with a frame time of 0.2 s and 20 frames, as discussed earlier. Note that although we use the HAADF signal in this paper, it is possible to simultaneously acquire multiple signals, such as both bright- and dark-field signals. The image is significantly distorted at the left edge of the displayed region, although the rest of the image is relatively undistorted. This distortion likely results from the phase lag of the scan electronics responding to a sudden change of beam location. When the beam moves from the end of the last line to the start of the next line, the actual location takes some extra time to reach the nominal position. Therefore, one way to compensate for the lag is to add in some extra shifts or a delay time, as in a conventional cathode ray tube.
Conventionally, a 'flyback' delay at the start of each fast scan line is used to reduce such distortion. For a present state-of-the-art STEM, flyback delays of 10–1000 μs are typical. As a specific example, the Nion UltraSTEM 200 typically needs more than 500 μs to yield images without noticeable distortions. Thus, for a scan of 512 × 512 pixels at 1 µs/pixel, using this flyback delay would result in losing roughly half of the available imaging time. If a fast enough blanker is available, the beam could be blanked during the flyback; otherwise, there might also be additional unnecessary damage at the edges of the scan where the beam spends extra time. The distribution of the electron dose is an important topic that will recur later. Clearly, a method of eliminating the flyback delay would allow an increase in scanning rate and potentially reduce the beam damage.
Another method to reduce the distortion and lateral shift along slow scan direction is called line-synchronization, i.e., tying each line to the same part of the wave of the electrical supply. Such synchronization has the added advantage that the effects of mains interference should be similar for each scan line and each frame, facilitating its correction [18]. However, this method either requires a delay time at the start of each line or imposes additional restrictions on the per-pixel dwell time.
Serpentine scans
An obvious improvement over the sawtooth scan to avoid a flyback delay is to perform a 'serpentine' scan, alternately moving the probe from left to right on one scan line and then right to left on the next, using what is sometimes called a triangle wave. A serpentine scan is shown in Fig. 2a, b, where the X- and Y- directions are the same as in Fig. 1b. Double serpentine scans (i.e., performing a second scan after rotating the slow-scan axis by 90°) can also be implemented.
Schematic illustration of a serpentine scan. a Voltages applied to the X and Y scan coils over the time for a single frame. b Probe positions, shaded from dark to light as a function of time. c Reconstructed G-STEM images (forward and backward) acquired with a serpentine scan
Figure 2c shows the result of such a serpentine scan. Unfortunately, these scans initially appear worse than the conventional scan at high scan speeds, because the distortions are different for the leftwards and rightwards trajectories. For display purposes, it is best to separate out these two paths. Notably, unwarping this distortion might present an easier problem to solve than the regular sawtooth wave, because the triangle wave provides two images of the same area with different distortions. To a reasonable level of approximation, we might, therefore, expect the distortions to be similar, but reversed. Thus, a digital correction of serpentine scans could be a promising route for further development.
The obvious lesson from the serpentine scans is that the sharp changes in direction at the edges of the scan contribute significantly to the distortions. There is a clear difference in the acceleration of the probe between the abrupt changes at the end of each scan line as compared with the rest of the pixels. The relevance should be obvious in scanning tunneling microscopy (STM), in which the moving probe/stage has mass, but is perhaps a little surprising in STEM where the 'probe' does not really correspond to a physical object. However, it seems clear that there is a non-ideal response of the 'true' probe movement to the 'nominal' probe positions. The cause of this lag is inductance in the scan coils and other current-flow limitations, which limit how fast the scan can be changed, in an analogous way as to how inertia can limit mechanical movement. One route to address this problem would be with faster electronics or rapid electrostatic deflectors. However, such new hardware would introduce other complications and, thus, scan paths without sharp changes in acceleration merit further investigation.
Spiral scans
We now focus on smooth curves that can fill the 2D space without crossing themselves. The distortions can hopefully be reduced due to the relatively smooth acceleration. Spiral curves are natural solutions to this problem. The mathematical study of spirals has a long and interesting history, dating back thousands of years [19]. In this paper, we focus on spirals with coordinates (x, y) as a function of time t defined by:
$$x = t^{a} \cos(\omega t^{b}),\quad y = t^{a} \sin(\omega t^{b})$$
where ω is the scanning frequency, and a and b are parameters that control the shape of the spiral. The scanning frequency ω can be adjusted to change the sampling rate. The spiral can go both inward and outward. As the drift distortion is different but correlated for inward and outward scans, this is a promising way to decouple drift distortion from scan distortion, which will be considered in more detail in future work.
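To make the parameterization concrete, the following R sketch samples points along this family of spirals at a constant dwell time per point; the sample count, frequency, and duration are arbitrary illustrative values.

# x = t^a cos(w t^b), y = t^a sin(w t^b)
#   a = 1,   b = 1   : Archimedean spiral
#   a = 0.5, b = 1   : Fermat spiral (uniform areal sampling density)
#   a = 0.5, b = 0.5 : constant linear velocity spiral
spiral <- function(npts, a, b, w, tmax = 1) {
  t <- seq(0, tmax, length.out = npts)   # equal dwell time per sample
  data.frame(t = t, x = t^a * cos(w * t^b), y = t^a * sin(w * t^b))
}
path <- spiral(4e5, a = 0.5, b = 1, w = 400)   # an outward Fermat scan
plot(path$x, path$y, type = "l", asp = 1)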
Here, we explore the physical properties of the spiral curves, as they are closely related to the quality of STEM images reconstructed from those scan paths. We begin with the velocity \(\vec{v}\), which basically determines the distance between adjacent sampling points. For each point on the spiral, \(\vec{v}\) is the first derivative of Eq. (1), with a magnitude:
$$|\vec{v}| = t^{a-1}\sqrt{a^{2} + \omega^{2} b^{2} t^{2b}} \approx \omega b t^{a+b-1}$$
The term a^2 inside the square root can usually be neglected for large ωt^b. We can see that when a + b = 1, the velocity magnitude is approximately constant for all points on the spiral. If a + b > 1, the beam moves faster as it moves away from the center.
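For completeness, differentiating Eq. (1) makes this explicit (the cross terms cancel when squaring and adding):

$$\dot{x} = a t^{a-1}\cos(\omega t^{b}) - \omega b t^{a+b-1}\sin(\omega t^{b}),\quad \dot{y} = a t^{a-1}\sin(\omega t^{b}) + \omega b t^{a+b-1}\cos(\omega t^{b}),$$

$$|\vec{v}|^{2} = \dot{x}^{2} + \dot{y}^{2} = t^{2a-2}\left(a^{2} + \omega^{2} b^{2} t^{2b}\right).$$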
The angular velocity magnitude Ω is defined by:
$$\Omega = \frac{|\vec{v}|}{r} = \frac{\omega b t^{a+b-1}}{t^{a}} = \omega b t^{b-1}$$
Here, we assume that the velocity \(\vec{v}\) is perpendicular to \(\vec{r}\) = (x, y), which is a reasonable approximation: the angle between \(\vec{v}\) and \(\vec{r}\) is θ = arccos(a/(ωb t^b)). As t increases, cos(θ) approaches zero and θ approaches 90°. Equation 3 tells us that the angular velocity is approximately constant if b = 1.
Another potentially interesting feature of the spiral curves is the sampling density. To ensure uniform sampling, the dose should ideally be the same across the whole area. For a first approximation, we consider how the spiral sweeping area A increases as a function of time t,
$$\frac{dA}{dt} = \frac{d\left(\pi (t^{a})^{2}\right)}{dt} = 2a\pi t^{2a-1}$$
For a = 0.5, the area increases linearly with time. For a < 0.5, the increase slows down over time, resulting in more dose at the edges, while for a > 0.5, the center is exposed to more electron dose. With this understanding of the physical properties of spiral scans, we now investigate the behavior of spiral curves with different a and b parameters.
Archimedean spiral
The first type of spiral we consider is an 'Archimedean' spiral with a = 1 and b = 1:
$$x = t \cos(\omega t),\quad y = t \sin(\omega t)$$
The beam scan path in Fig. 3a shows that the magnitudes of x and y increase slowly without any sharp turns, and the frequency of the sinusoids remains constant. Taking coordinates from Fig. 3a, we can form the outward scan trajectory, as shown in the left part of Fig. 3b. The inward scan shown in the right part is constructed by reversing the scan path and also the y-direction. Two typical reconstructed images using Archimedean inward and outward spirals are shown in Fig. 3c. Note that the resulting STEM images do not display any obvious nonlinear distortion. This is attributed to the constant frequency (and constant angular velocity) for b = 1. Both inward and outward images are rotated at the same angle with respect to the sawtooth scan images in Fig. 1d. The distortion is likely from the scan lag, which is related to the angular velocity. As the spiral scan direction is clockwise for both the inward and outward scans, the distortion is the same for both images.
Schematic illustration of an Archimedean spiral scan. a Voltages applied to the X and Y scan coils over the time for a single frame. b Probe positions, shaded from dark to light as a function of time. The left part shows an outward scan starting from the center. The right part shows an inward scan ending at the center. c Reconstructed outward and inward G-STEM images acquired with Archimedean spiral scan paths. The black striations (indicated by the arrows) at the edges are due to undersampling
The main problem with an Archimedean scan is the sampling density. This can be seen simply by recognizing that the number of points scanned in time t is proportional to t, while the area scanned is approximately proportional to the square of the time, t^2. Thus, the sampling density and dose at the sample vary with position in the image. This is also evident from the corrupted regions close to the edge of the reconstructed images, which result from very sparse sampling in those areas. Conversely, due to the very dense sampling at the center, the beam dose there is much larger than the average, resulting in extra beam damage. Clearly, if a uniform sampling distribution is desired, Eq. 4 reveals that we should investigate solutions with a = 0.5.
Fermat spiral
Here, we use a different spiral with a = 0.5 and b = 1 to give both uniform sampling and constant angular velocity:
$$x = \sqrt{t} \cos(\omega t),\quad y = \sqrt{t} \sin(\omega t)$$
This spiral is known as the Fermat spiral, which has the more general form r^2 = ωt. Since the square root has two solutions (positive and negative), a natural approach is to use one branch as the outward scan and the other as the inward scan. Figure 4a shows how x and y change as a function of time for the outward scan. Figure 4b shows the scan path for both outward and inward scans. Note that the end point (A) of the outward scan and the starting point (B) of the inward scan are on opposite sides. Therefore, a smooth wave was added to move the probe from A to B for a smooth transition between the outward and inward scans. The outward and inward scan paths move clockwise and counterclockwise, respectively. The two reconstructed STEM images are shown in Fig. 4c. Again, the distortion appears to be purely linear due to the constant angular velocity. The rotation distortions are opposite, as expected from the different spiral rotation directions. However, the image quality is not uniform; the edge area is noticeably more blurred than the center area. This non-uniformity is attributed to the anisotropic sampling spacing. Near the center, the spacing between adjacent points along the tangential direction is much shorter than the spacing along the radial direction. Near the edge, the spacing along the tangential direction is much longer than along the radial direction. Therefore, despite the nominally uniform areal distribution, the actual sampling is still not ideal.
Schematic illustration of a Fermat spiral scan. a Voltages applied to the X and Y scan coils over the time for a single frame. b Probe positions, shaded from dark to light as a function of time. c Reconstructed STEM images from outward and inward scans using Fermat spiral scan paths
Constant linear velocity spiral
We seek a spiral that retains the constant sampling density, but where the distance between samples is isotropic. The solution is known as a constant linear velocity spiral. From the previous discussion on the physical properties of spirals, the two parameters should satisfy a = 0.5 and a + b = 1. The spiral equation is thus:
$$x = \sqrt{t} \cos(\omega \sqrt{t}),\quad y = \sqrt{t} \sin(\omega \sqrt{t})$$
This spiral has both a constant sampling density (dose distribution) and evenly, isotropically spaced points. A similar scan path was proposed for atomic force microscopy (AFM) [20]. The scan path is shown in Fig. 5a. Examples of the sampling trajectories for both outward and inward scans are shown in Fig. 5b, where we can see that the data points are evenly distributed along both the tangential and radial directions.
Schematic illustration of a constant linear velocity scan. a Voltages applied to the X and Y scan coils over the time for a single frame. b Probe positions, shaded from dark to light as a function of time. c Reconstructed STEM images from outward and inward scans using constant linear velocity scan paths
Experimental images with the constant linear velocity spiral are shown in Fig. 5c. The outward and inward parts of the scan are displayed separately. Significant distortions are apparent at the center of the images, where the scan frequency changes the fastest. The two images have opposite rotation distortion directions in the center, which results from the different spiral rotation directions. The drawback of this spiral is therefore that the angular frequency changes. Since the distortions depend on frequency, the disadvantage is that the distortions are non-uniform across a single frame. Another way to look at this problem is that, to keep a constant linear velocity, the angular velocity has to be large near the center and smaller at the edges. Thus, the angular distortion changes with angular velocity, which results in much more severe distortions at the center.
All three spiral scans we tested have successfully eliminated the flyback delay common in conventional STEM. Both Archimedean and Fermat scans yield STEM images with a quality comparable with conventional scan paths, but the Archimedean scan has a non-uniform sampling density (dose distribution) and the Fermat scan has anisotropic sample spacing. The constant linear velocity scan solves the sampling problem but introduces significant distortion in the center. For ease of use, the Fermat scan seems to be the best choice due to its relatively uniform sampling density and easy interpretation of the reconstructed image.
A possible solution to the sampling problem might be truncated spiral functions, which have the same functional form but start from some finite t_0 instead of from zero. Spirals with varying a and b could also be investigated in future work.
As the drift distortion depends critically on the relative drift direction with respect to the scan direction [2], the varying scan directions in spiral scans lead to an abundance of information for further drift correction within one frame. Other areas for future work could involve hybrid scans, scans adapted on the fly, or changes in the dwell-time per pixel.
We have demonstrated for the first time that aberration-corrected STEM images can be acquired at high speed with different spiral scans. By completely eliminating the flyback effect in STEM imaging, the spiral scans provide new possibilities to reduce beam damage, image distortion, and drift distortion. Combined with conventional image processing methods, the spiral scans can be used to significantly improve the quality of STEM images. In the future, this system could be extended with high-speed feedback in the FPGA unit. Such capabilities could allow dynamic position correction or atom tracking in hardware, without having to wait for relatively slow data transfers to and from a computer.
Egerton, R.F.: Electron energy-loss spectroscopy in the electron microscope. Springer, Berlin (2011)
Sang, X., LeBeau, J.M.: Revolving scanning transmission electron microscopy: correcting sample drift distortion without prior knowledge. Ultramicroscopy 138, 28–35 (2014)
Yankovich, A.B., Berkels, B., Dahmen, W., Binev, P., Sanchez, S.I., Bradley, S.A., Li, A., Szlufarska, I., Voyles, P.M.: Picometre-precision analysis of scanning transmission electron microscopy images of platinum nanocatalysts. Nat Commun 5, 4155 (2014)
Jones, L., Nellist, P.D.: Identifying and correcting scan noise and drift in the scanning transmission electron microscope. Microsc Microanal 19, 1050–1060 (2013)
Ophus, C., Ciston, J., Nelson, C.T.: Correcting nonlinear drift distortion of scanning probe and scanning transmission electron microscopies from image pairs with orthogonal scan directions. Ultramicroscopy 162, 1–9 (2016)
Jones, L., Yang, H., Pennycook, T.J., Marshall, M.S.J., Van Aert, S., Browning, N.D., Castell, M.R., Nellist, P.D.: Smart align—a new tool for robust non-rigid registration of scanning microscope data. Adv Struct Chem Imaging 1, 8 (2015)
Lin, J., Cretu, O., Zhou, W., Suenaga, K., Prasai, D., Bolotin, K.I., Cuong, N.T., Otani, M., Okada, S., Lupini, A.R., Idrobo, J.-C., Caudel, D., Burger, A., Ghimire, N.J., Yan, J., Mandrus, D.G., Pennycook, S.J., Pantelides, S.T.: Flexible metallic nanowires with self-adaptive contacts to semiconducting transition-metal dichalcogenide monolayers. Nat Nanotechnol 9, 436–442 (2014)
Ishikawa, R., Mishra, R., Lupini, A.R., Findlay, S.D., Taniguchi, T., Pantelides, S.T., Pennycook, S.J.: Direct observation of dopant atom diffusion in a bulk semiconductor crystal enhanced by a large size mismatch. Phys Rev Lett 113, 155501 (2014)
Zan, R., Ramasse, Q.M., Bangert, U., Novoselov, K.S.: Graphene reknits its holes. Nano Lett. 12, 3936–3940 (2012)
Kotakoski, J., Mangler, C., Meyer, J.C.: Imaging atomic-level random walk of a point defect in graphene. Nat. Commun. 5, 3991 (2014)
de Jonge, N., Peckys, D.B., Kremers, G.J., Piston, D.W.: Electron microscopy of whole cells in liquid with nanometer resolution. Proc Natl Acad Sci USA 106, 2159–2164 (2009)
van Heel, M., Gowen, B., Matadeen, R., Orlova, E.V., Finn, R., Pape, T., Cohen, D., Stark, H., Schmidt, R., Schatz, M., Patwardhan, A.: Single-particle electron cryo-microscopy: towards atomic resolution. Q Rev Biophys 33, 307–369 (2000)
Zhou, W., Oxley, M.P., Lupini, A.R., Krivanek, O.L., Pennycook, S.J., Idrobo, J.-C.: Single atom microscopy. Microsc Microanal 18, 1342–1354 (2012)
Pennycook, T.J., Lupini, A.R., Yang, H., Murfitt, M.F., Jones, L., Nellist, P.D.: Efficient phase contrast imaging in STEM using a pixelated detector. Part 1: experimental demonstration at atomic resolution. Ultramicroscopy 151, 160–167 (2015)
Stevens, A., Yang, H., Carin, L., Arslan, I., Browning, N.D.: The potential for Bayesian compressive sensing to significantly reduce electron dose in high-resolution STEM images. Microscopy. 63, 41–51 (2014)
Ziegler, D., Meyer, T.R., Farnham, R., Brune, C., Bertozzi, A.L., Ashby, P.D.: Improved accuracy and speed in scanning probe microscopy by image reconstruction from non-gridded position sensor data. Nanotechnology. 24, 335703 (2013)
Ovchinnikov, O.S., Jesse, S., Kalinin, S.V.: Adaptive probe trajectory scanning probe microscopy for multiresolution measurements of interface geometry. Nanotechnology. 20, 255701 (2009)
Sanchez, A.M., Galindo, P.L., Kret, S., Falke, M., Beanland, R., Goodhew, P.J.: An approach to the systematic distortion correction in aberration-corrected HAADF images. J. Microsc. 221, 1–7 (2006)
Cook T.A.: Spirals in nature and art: A study of spiral formations based on the manuscripts of Leonardo Da Vinci (1903). Literary Licensing LLC (2014)
Mahmood, I.A., Reza Moheimani, S.O.: Spiral-scan atomic force microscopy: a constant linear velocity approach. 2010 10th IEEE Conf. Nanotechnology. NANO 2010, 115–120 (2010)
SJ built the scan control system. ARL, RRU, MC, and SJ interfaced the controller to the microscope and performed the microscopy experiments. XS and ARL drafted the manuscript. SJ, ARL, RRU, MC, AYB, and SVK conceived and designed the study. XS, SJ, EE, and RKA participated in image analysis. All authors read and approved the final manuscript.
Research supported by Oak Ridge National Laboratory's (ORNL) Center for Nanophase Materials Sciences (CNMS), which is a U.S. Department of Energy (DOE), Office of Science User Facility (XS, RRU, MC, SVK, SJ); by the Division of Materials Sciences and Engineering, Office of Basic Energy Sciences, DOE (ARL and AYB); by ORNL's Laboratory Directed Research and Development Program, which is managed by UT-Battelle LLC for the U.S. DOE (SJ); and by the Office of Advanced Scientific Computing Research, Applied Mathematics program, under the ACUMEN project (EE and RKA).
This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, TN, 37831, USA
Xiahan Sang, Raymond R. Unocic, Miaofang Chi, Sergei V. Kalinin & Stephen Jesse
Institute for Functional Imaging of Materials, Oak Ridge National Laboratory, Oak Ridge, TN, 37831, USA
Xiahan Sang, Andrew R. Lupini, Raymond R. Unocic, Miaofang Chi, Albina Y. Borisevich, Sergei V. Kalinin & Stephen Jesse
Materials Sciences and Technology, Oak Ridge National Laboratory, Oak Ridge, TN, 37831, USA
Andrew R. Lupini & Albina Y. Borisevich
Computer Science and Mathematics, Oak Ridge National Laboratory, Oak Ridge, TN, 37831, USA
Eirik Endeve & Richard K. Archibald
Xiahan Sang
Andrew R. Lupini
Raymond R. Unocic
Miaofang Chi
Albina Y. Borisevich
Sergei V. Kalinin
Eirik Endeve
Richard K. Archibald
Stephen Jesse
Correspondence to Andrew R. Lupini or Stephen Jesse.
Xiahan Sang and Andrew R. Lupini contributed equally to the paper
Sang, X., Lupini, A.R., Unocic, R.R. et al. Dynamic scan control in STEM: spiral scans. Adv Struct Chem Imag 2, 6 (2016). https://doi.org/10.1186/s40679-016-0020-3
Keywords: Aberration-corrected STEM; Scan control; Spiral scan
Forward Dividend Yield
What is a Forward Dividend Yield?
A forward dividend yield is an estimation of a year's dividend expressed as a percentage of the current stock price. The year's projected dividend is measured by taking a stock's most recent actual dividend payment and annualizing it. The forward dividend yield is calculated by dividing a year's worth of future dividend payments by a stock's current share price.
A forward dividend yield is the percentage of a company's current stock price that it expects to pay out as dividends over a certain time period, generally 12 months.
Forward dividend yields are generally used in circumstances where the yield is predictable based on past instances.
If not, trailing yields, which indicate the same value over the previous 12 months, are used.
Understanding Forward Dividend Yields
For example, if a company pays a Q1 dividend of 25 cents, and you assume the company's dividend will be consistent, the firm will be expected to pay $1.00 in dividends over the course of the year. If the stock price is $10, the forward dividend yield is 10%.
The opposite of a forward dividend yield is a trailing dividend yield, which shows a company's actual dividend payments relative to its share price over the previous 12 months. When future dividend payments are not predictable, the trailing dividend yield can be one way to measure value. When future dividend payments are predictable or have been announced, the forward dividend yield is a more accurate tool.
An additional form of dividend yield is the indicated yield, or the dividend yield that one share of stock would return based on its current indicated dividend. To calculate the indicated yield, multiply the most recent dividend issued by the number of annual dividend payments (the indicated dividend), then divide the product by the current share price.
$$\text{Indicated Yield}=\frac{\text{MRD}\times(\#\text{ of DPEY})}{\text{Stock Price}}$$
where MRD = most recent dividend and DPEY = dividend payments each year.
For example, if a stock trading at $100 has a most recent quarterly dividend of $0.50, the indicated yield would be:
$$\text{Indicated Yield of Stock ABC}=\frac{\$0.50\times 4}{\$100}=2\%$$
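A small sketch of these calculations in R (the function names are placeholders; any language would do):

# Yields expressed as percentages of the current share price.
forward_yield   <- function(expected_annual_dividend, price)
  100 * expected_annual_dividend / price
indicated_yield <- function(most_recent_dividend, payments_per_year, price)
  100 * most_recent_dividend * payments_per_year / price

forward_yield(1.00, 10)        # 10% -- the $0.25-per-quarter example above
indicated_yield(0.50, 4, 100)  #  2% -- the stock ABC example above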
Forward Dividend Yields and Corporate Dividend Policy
A company's board of directors determines the dividend policy of the company. In general, more mature and established companies issue dividends, while younger, rapidly growing firms often choose to put any excess profits back into the company for research, development, and expansion purposes. Common types of dividend policies include the stable dividend policy, in which the company issues a steady, predictable dividend whether earnings are up or down.
The goal of a stable dividend policy is to align with the firm's goal of long-term growth instead of its quarterly earnings volatility. With a constant dividend policy, a company issues a dividend each year based on a percentage of the company's earnings; investors therefore experience the full volatility of company earnings. Finally, with a residual dividend policy, a company pays out any earnings that remain after it pays for its own capital expenditures and working capital needs.
Methodology Article
Enhanced construction of gene regulatory networks using hub gene information
Donghyeon Yu1,
Johan Lim2,
Xinlei Wang3,
Faming Liang4 &
Guanghua Xiao5 (ORCID: orcid.org/0000-0001-9387-9883)
BMC Bioinformatics volume 18, Article number: 186 (2017)
Gene regulatory networks reveal how genes work together to carry out their biological functions. Reconstructions of gene networks from gene expression data greatly facilitate our understanding of underlying biological mechanisms and provide new opportunities for biomarker and drug discoveries. In gene networks, a gene that has many interactions with other genes is called a hub gene, which usually plays an essential role in gene regulation and biological processes. In this study, we developed a method for reconstructing gene networks using a partial correlation-based approach that incorporates prior information about hub genes. Through simulation studies and two real-data examples, we compare the performance in estimating the network structures between the existing methods and the proposed method.
In simulation studies, we show that the proposed strategy reduces errors in estimating network structures compared to the existing methods. When applied to Escherichia coli data, the regulatory network constructed by our proposed ESPACE method is more consistent with current biological knowledge than the one constructed by the SPACE method. Furthermore, application of the proposed method to lung cancer has identified hub genes whose mRNA expression predicts cancer progression and patient response to treatment.
We have demonstrated that incorporating hub gene information in estimating network structures can improve the performance of the existing methods.
A gene regulatory network (GRN) describes interactions and regulatory relationships among genes. It provides a systematic understanding of the molecular mechanisms underlying biological processes by revealing how genes work together to form modules that carry out cell functions [1–4]. In addition, the visualization of genetic dependencies through the GRN facilitates the systematic interpretation and comprehension of analysis results from genome-wide studies using high-throughput data. GRNs have proven valuable in a variety of contexts, including identifying druggable targets [5], detecting driver genes in diseases [6], and even optimizing prognostic and predictive signatures [7].
Gene expression microarrays monitor the transcription activities of thousands of genes simultaneously, which provides a great opportunity to study the "relationships" among genes on a large scale. However, challenges lie in constructing large-scale GRNs from gene expression microarray data due to the small sample sizes of microarray studies and the extremely large solution space. Computational techniques and algorithms have been proposed to reconstruct GRNs from gene expression data, including probability-based approaches such as Bayesian networks [8–12], correlation-based approaches [13], likelihood-based approaches [14–16], partial-correlation-based approaches [17, 18], and information-theory-based approaches [19–22]. The existing methods are briefly reviewed in the Methods Section. Readers can also refer to Bansal et al. [23] and Allan et al. [24] for a more detailed review of network construction methods.
The sparse partial correlation estimation (SPACE) method, proposed by Peng et al. [18], considers a penalized regression approach to estimate edges in the GRN, which utilizes the sparse feature of the GRN. Comparative studies have shown that the SPACE method performs well in estimating sparse networks with high accuracy [24]. Peng et al. [18] also showed that the method was able to identify functional relevant molecular networks. In addition, recent studies of network analysis have revealed its advantage in detecting genes or modules associated with phenotypes [25–27].
In gene networks, genes that have many interactions with other genes are defined as hub genes. Because of these interactions, hub genes usually play an important role in a biological system. For example, a transcription factor (TF), a protein that binds to specific DNA sequences, can regulate a given set of genes. In humans, approximately 10% of genes in the genome code for around 2600 TFs [28]. Combinatorial regulation by human TFs accounts for most of the regulation activities in the human genome, especially during the development stage. As a result, the genes that code for TFs, called TF-encoding genes, are usually regarded as hub genes. Furthermore, in cancer research, cancer genes (oncogenes or tumor suppressor genes) take part in tumorigenesis and are likely to be hub genes in the genetic networks of tumors [29, 30]. Through decades of biological studies, knowledge on important genes (such as TFs or cancer genes) has accumulated. Our hypothesis is that incorporating prior knowledge about hub genes can improve accuracy in estimating the gene network structure. It is worth noting that there is a reweighted ℓ1 regularization method [31] that repeatedly estimates the structure and modifies the weights of the penalties using the degree information from the previous estimate, so as to encourage the appearance of hubs. That method, however, does not use prior information from other resources, whereas our method uses additional information not contained in the observed dataset.
To explicitly account for the information on hub genes, we propose an extension of the SPACE method, which introduces an additional tuning parameter to open up the possibility of reducing penalization and increasing the likelihood of selecting the edges connected to such genes. We numerically show that the proposed method reduces errors in estimating network structures. Although we focus on extending the SPACE method in this paper, the idea can also be applied to penalized likelihood methods as well as to other penalized regression methods. Note that there is no rigorous definition of a hub in the context of a network; the definition of a hub varies depending on the sparsity of the network. For sparse protein networks, a hub is defined in [32] as a protein whose degree lies over the 0.95 quantile of the degree distribution or in [33] and [7] as a protein whose degree is greater than 7. In this paper, we conservatively define a hub as a node whose degree is both greater than 7 and above the 0.95 quantile of the degree distribution, because most nodes in sparse networks have relatively small degrees between 0 and 3.
In this study, we briefly introduce seven existing methods, including the SPACE and the graphical lasso, and propose the extended SPACE (ESPACE) method to incorporate biological knowledge about important genes, i.e. network hubs. It is worth noting that, unlike the other existing methods, ESPACE incorporates previously known biological information that is not contained in the observed dataset. Through simulation studies, we show that the proposed approach reduces error in estimating network structures compared to the seven existing methods reviewed in the "Methods" section. Finally, we demonstrate the improvement of the ESPACE method over the SPACE method with two real-data examples.
Review of existing methods
Here, we briefly review the existing methods: GeneNet [34], NS [17], GLASSO [15], GLASSO-SF [31], PCACMI [21], CMI2NI [22], and SPACE [18]. Let \(X_{i}^{k}\) be the expression level of the ith gene on the kth array for i=1,2,…,p and k=1,2,…,n. Let \(\mathbf{X}_{i} = (X_{i}^{1}, X_{i}^{2}, \ldots, X_{i}^{n})^{T}\), so that the observed gene expression data can be denoted by an n×p matrix \(\mathbf{X}=(\mathbf{X}_{1};\mathbf{X}_{2};\ldots;\mathbf{X}_{p})\) whose rows and columns correspond to arrays and genes, respectively. Suppose the row vectors \(\mathbf{X}^{k}=(X_{1}^{k},X_{2}^{k},\ldots,X_{p}^{k})\) for k=1,2,…,n are independently and identically distributed random vectors from the multivariate normal distribution with mean 0 and covariance matrix Σ. We assume that Σ is positive definite, and let \(\Omega \equiv \Sigma^{-1}=(\omega_{ij})_{1\le i,j\le p}\) be the inverse of the covariance matrix Σ, which is referred to as a concentration matrix or a precision matrix.
GeneNet
Schäfer and Strimmer [34] propose a linear shrinkage estimator for the covariance matrix and a Gaussian graphical model (GGM) selection based on the partial correlations obtained from their shrinkage estimator. With a multiple testing procedure using the local false discovery rate [35], the GGM selection controls the false discovery rate under a pre-determined level α. Since Schäfer and Strimmer [34] provide their GGM selection procedure in the R package GeneNet, we denote their GGM selection procedure as GeneNet in this paper. To be specific, one of the most commonly used linear shrinkage estimators \(S^{*}\) for the covariance matrix Σ is
$$ S^{*} = \lambda^{*} T + (1-\lambda^{*}) S, $$
where \(S=(s_{ij})_{1\le i,j\le p}\) is the sample covariance matrix, \(T=\mathrm{diag}(s_{11},s_{22},\ldots,s_{pp})\) is the shrinkage target matrix, and \(\lambda^{*} = \sum_{i\neq j} \widehat{\text{Var}}(s_{ij}) / \left(\sum_{i\neq j} s_{ij}^{2}\right)\) is the optimal shrinkage intensity. With this estimator \(S^{*}\), the matrix of partial correlations \(P = (\hat{\rho}^{ij})_{1 \le i,j \le p}\) is defined by \(\hat{\rho}^{ij} = -\hat{\omega}_{ij} / \sqrt{\hat{\omega}_{ii} \hat{\omega}_{jj}}\), where \(\hat{\Omega} = (\hat{\omega}_{ij})_{1 \le i,j \le p} = (S^{*})^{-1}\).
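As a concrete illustration, a minimal R sketch of this shrinkage step and the resulting partial correlations is given below; it uses one standard estimate of \(\widehat{\text{Var}}(s_{ij})\) from centered cross-products and is illustrative only, not the GeneNet implementation itself.

```r
# Sketch: linear shrinkage estimator S* = lambda* T + (1 - lambda*) S with
# diagonal target T, then partial correlations from the inverse of S*.
shrink_partial_cor <- function(X) {
  n <- nrow(X); p <- ncol(X)
  Xc <- scale(X, center = TRUE, scale = FALSE)
  S  <- crossprod(Xc) / (n - 1)                 # sample covariance
  var_s <- matrix(0, p, p)                      # Var-hat(s_ij), one common choice
  for (i in 1:p) for (j in 1:p) {
    w <- Xc[, i] * Xc[, j]                      # per-array cross-products
    var_s[i, j] <- n / (n - 1)^3 * sum((w - mean(w))^2)
  }
  off <- row(S) != col(S)
  lam <- min(1, max(0, sum(var_s[off]) / sum(S[off]^2)))  # clipped intensity
  S_star <- lam * diag(diag(S)) + (1 - lam) * S
  Om <- solve(S_star)                           # shrunken precision matrix
  P  <- -Om / sqrt(diag(Om) %o% diag(Om))       # rho^ij = -w_ij / sqrt(w_ii w_jj)
  diag(P) <- 1
  P
}
```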
To identify the significant edges, Schäfer and Strimmer [34] model the distribution of the partial correlations as the mixture
$$f(\rho) = \eta_{0} f_{0} (\rho,\nu) + (1-\eta_{0}) f_{1}(\rho), $$
where \(f_{0}\) is the null distribution, \(f_{1}\) is the alternative distribution corresponding to the true edges, and \(\eta_{0}\) is the unknown mixing parameter. Using the algorithm in [35], GeneNet identifies significant edges whose local false discovery rate
$$\text{fdr}(\rho) = \frac{\hat{\eta}_{0} f_{0} (\rho, \hat{\nu})}{\hat{f}(\rho)} $$
is smaller than the pre-determined level α, where \(f_{0}(\rho,\nu)=|\rho|\,\mathrm{Be}(\rho^{2};0.5,(\nu-1)/2)\), \(\mathrm{Be}(x;a,b)\) is the density of the Beta distribution, and ν is the reciprocal variance of the null ρ.
Neighborhood selection (NS)
Meinshausen and Bühlmann [17] propose the neighborhood selection (NS) method, which separately solves the lasso [36] problem and identifies edges with nonzero estimated regression coefficients for each node. Meinshausen and Bühlmann [17] prove that the NS method is asymptotically consistent in identifying the neighborhood of each node when the neighborhood stability condition is satisfied. Note that the neighborhood stability condition is related to the irrepresentable condition in linear model literature [37].
To be specific, for each node i∈V={1,2,…,p}, NS solves the following lasso problem
$$\hat{\beta}^{i,\lambda} = \operatornamewithlimits{arg\min}_{\beta\in \mathbb{R}^{p}: \beta_{i} = 0} ~\frac{1}{2} \| \mathbf{X}_{i} - \mathbf{X}\beta\|_{2}^{2} + \lambda \|\beta\|_{1}, $$
where \(\|\mathbf{x}\|_{2}^{2} = \sum_{i=1}^{p} x_{i}^{2}\) and \(\|\mathbf{x}\|_{1} = \sum_{i=1}^{p} |x_{i}|\) for \(\mathbf{x} \in \mathbb{R}^{p}\). With the estimate \(\hat{\beta}^{i,\lambda}\), NS identifies the neighborhood of node i as \(N_{i}(\lambda) = \{ k \mid \hat{\beta}_{k}^{i,\lambda} \neq 0\}\), which defines an edge set \(E_{i}^{\lambda} = \{(i,j) \mid j \in N_{i}(\lambda)\}\). Since NS solves the p lasso problems separately, contradictory edges may occur when we define the total edge set \(E^{\lambda} = \cup_{i=1}^{p} E_{i}^{\lambda}\), i.e., \(\hat{\beta}_{k}^{i,\lambda} \neq 0\) but \(\hat{\beta}_{i}^{k,\lambda} = 0\). To avoid such contradictory edges, NS suggests two types of edge sets, \(E^{\lambda,\wedge}\) and \(E^{\lambda,\vee}\), defined as follows:
$$\begin{array}{*{20}l} E^{\lambda,\wedge} = \left\{(i,j)~|~ i \in N_{j}(\lambda) ~\text{and}~ j \in N_{i}(\lambda)\right\},\\ E^{\lambda,\vee} = \left\{(i,j)~|~ i \in N_{j}(\lambda) ~\text{or}~ j \in N_{i}(\lambda)\right\}. \end{array} $$
Meinshausen and Bühlmann [17] note that, in their experience, these two edge sets differ only slightly, and the differences vanish asymptotically. They also propose the following choice of the tuning parameter \(\lambda_{i}(\alpha)\) for the ith node:
$$\lambda_{i}(\alpha) = \|\mathbf{X}_{i}\|_{2} \tilde{\Phi}^{-1}\left(\frac{\alpha}{2p^{2}}\right), $$
where \(\tilde{\Phi} = 1 - \Phi\) and Φ is the distribution function of the standard normal distribution. With this choice of \(\lambda_{i}(\alpha)\) for i=1,2,…,p, the probability of falsely identifying edges in the network is bounded by the level α. Note that, in this paper, we estimate the edge set with \(E^{\lambda,\wedge}\) and solve the lasso problems using the R package CDLasso proposed by [38].
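A sketch of the AND-rule neighborhood selection is shown below, using the glmnet package rather than CDLasso; note that glmnet scales its penalty by 1/n relative to the display above, so the λ values are not directly comparable. Illustrative only.

```r
# Neighborhood selection with the AND rule E^{lambda, AND}.
library(glmnet)
ns_and <- function(X, lambda) {
  p <- ncol(X)
  A <- matrix(FALSE, p, p)
  for (i in 1:p) {
    fit <- glmnet(X[, -i], X[, i], lambda = lambda, intercept = FALSE)
    A[i, -i] <- as.numeric(fit$beta[, 1]) != 0  # neighborhood N_i(lambda)
  }
  A & t(A)  # keep edge (i, j) only if i selects j AND j selects i
}
```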
Graphical lasso (GLASSO)
Friedman et al. [15] propose the graphical lasso method that estimates a sparse inverse covariance matrix Ω by maximizing the ℓ 1 penalized log-likelihood
$$ l(\Omega) = \log |\Omega| - \text{tr}(S\Omega) - \lambda \|\Omega\|_{1}, \tag{1} $$
where S is the sample covariance matrix, tr(A) is the trace of A and ∥A∥1 is the ℓ 1 norm of A for \(A \in \mathbb {R}^{p \times p}\).
To be specific, let W be the estimate of the covariance matrix Σ and consider partitioning W and S
$$ W = \left(\begin{array}{cc} W_{11} & w_{12}\\ w_{12}^{T} & w_{22} \end{array}\right),~ S = \left(\begin{array}{cc} S_{11} & s_{12}\\ s_{12}^{T} & s_{22} \end{array}\right), ~ \Omega = \left(\begin{array}{cc} \Omega_{11} & \omega_{12}\\ \omega_{12}^{T} & \omega_{22} \end{array}\right) $$
Motivated by [39], Friedman et al. [15] show that the solution \(\hat{\Omega}\) of (1) is equivalent to the inverse of W whose partitioned entry \(w_{12}\) satisfies \(w_{12}=W_{11}\beta^{*}\), where \(\beta^{*}\) is the solution of the lasso problem
$$ \min_{\beta} ~ \frac{1}{2} \left\| W^{1/2}_{11} \beta - W_{11}^{-1/2} s_{12} \right\|_{2}^{2} + \lambda \|\beta\|_{1}. \tag{2} $$
Based on the above property, the graphical lasso sets the diagonal elements \(w_{ii}=s_{ii}+\lambda\) and obtains the off-diagonal elements of W by repeatedly applying the following two steps:
Permuting the columns and rows to locate the target elements at the position of w 12.
Finding the solution \(w_{12}=W_{11}\beta^{*}\) by solving the lasso problem (2).
until convergence occurs. After finding W, the estimate \(\hat {\Omega }\) is obtained from the relationships \(\omega _{12} = - \hat {\beta } \hat {\omega }_{22}\) and \(\hat {\omega }_{22} = 1/(w_{22} - w_{12}^{T}\hat {\beta })\), where \(\hat {\beta } = W_{11}^{-1} w_{12}\). This graphical lasso algorithm was proposed in [15] and had its computational efficiency improved in [16] and [40]. Witten et al. [16] provide the R package glasso version 1.7.
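A minimal usage sketch of the glasso package on simulated data (variable names are illustrative):

```r
library(glasso)
set.seed(1)
X <- matrix(rnorm(100 * 20), 100, 20)   # n = 100 arrays, p = 20 genes
fit <- glasso(cov(X), rho = 0.1)        # rho plays the role of lambda in (1)
Omega_hat <- fit$wi                     # estimated precision matrix
edges <- which(Omega_hat != 0 & upper.tri(Omega_hat), arr.ind = TRUE)
```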
GLASSO with reweighted strategy for scale-free network (GLASSO-SF)
Liu and Ihler [31] propose the reweighted ℓ1 regularization method to improve estimation for scale-free networks, whose degrees follow a power-law distribution. Motivated by the fact that existing methods work poorly for scale-free networks, Liu and Ihler [31] consider replacing the ℓ1 norm penalty in the existing methods with the power-law regularization
$$ p_{\lambda,\gamma}(\Omega) = \lambda \sum\limits_{i=1}^{p} \log \left(\|\omega_{-i}\|_{1} + \epsilon_{i} \right) + \gamma \sum\limits_{i=1}^{p} |\omega_{ii}|, \tag{3} $$
where λ and γ are nonnegative tuning parameters, \(\omega_{-i}=\{\omega_{ij} \mid j\neq i\}\), \(\|\omega_{-i}\|_{1} = \sum_{j\neq i} |\omega_{ij}|\), and \(\epsilon_{i}\) is a small positive number for i=1,2,…,p. Thus, Liu and Ihler [31] consider optimizing the following objective function
$$ f(\Omega; \mathbf{X}, \lambda, \gamma) = L(\mathbf{X},\Omega) + u_{L} \cdot p_{\lambda,\gamma}(\Omega), \tag{4} $$
where L(X,Ω) denotes the objective function of the existing method without its penalty terms, with \(u_{L}=1\) if L is convex and \(u_{L}=-1\) if L is concave in Ω. Note that the choice of L is flexible. For instance, L(X,Ω) can be the log-likelihood function of Ω as in the graphical lasso, or the squared loss function as in the NS and the SPACE. In this section, we suppose that L is concave for notational simplicity.
To obtain the maximizer of f(Ω;X,λ,γ), Liu and Ihler [31] propose the iteratively reweighted ℓ 1 regularization procedure based on the minorization-maximization (MM) algorithm [41]. The reweighted procedure iteratively solves the following problem:
$$ \Omega^{(k+1)} = \operatornamewithlimits{arg\max}_{\Omega}~ L(\mathbf{X}, \Omega) - \sum\limits_{i=1}^{p} \sum\limits_{j\neq i} \eta_{ij}^{(k)} |\omega_{ij}| - \gamma \sum\limits_{i=1}^{p} |\omega_{ii}|, \tag{5} $$
where \(\Omega^{(k)}= (\omega_{ij}^{(k)})\) is the estimate at the kth iteration, \(\|\omega_{-i}^{(k)}\|_{1} = \sum_{l \neq i} |\omega_{il}^{(k)}|\), and \(\eta_{ij}^{(k)} = \lambda\left(1/(\|\omega_{-i}^{(k)}\|_{1} + \epsilon_{i}) + 1/(\|\omega_{-j}^{(k)}\|_{1} + \epsilon_{j})\right)\). In practice, [31] suggest \(\epsilon_{i}=1\), \(\gamma=2\lambda/\epsilon_{i}\), and the initial estimate \(\Omega^{(0)}=I_{p}\), where \(I_{p}\) is the p-dimensional identity matrix. Note that this reweighting strategy encourages hub nodes by adjusting the weights in the penalty term, but the weights are updated solely from the observed dataset, without previously known information from other sources.
In this paper, we consider L(X,Ω)= log|Ω|−tr(SΩ), which is the same as the corresponding component in the objective function of the GLASSO. Thus, we call this procedure the GLASSO with a reweighted strategy for scale-free networks (GLASSO-SF). As in [31], we stop the reweighting after 5 iterations. The R package glasso version 1.7 is used to obtain the solution of (5) at each iteration with the penalty matrix \(E^{(k)} = (e_{ij}^{(k)})\), where \(e_{ij}^{(k)} = \eta_{ij}^{(k)}\) for i≠j and \(e_{ii}^{(k)} = 2\lambda\) for i=1,2,…,p.
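A sketch of this reweighting loop, built on glasso's matrix-valued penalty with \(\epsilon_{i}=1\) and \(e_{ii}^{(k)}=2\lambda\) as above; this is illustrative and not the authors' code.

```r
library(glasso)
glasso_sf <- function(S, lambda, n_iter = 5, eps = 1) {
  p <- nrow(S)
  Omega <- diag(p)                               # Omega^(0) = I_p
  for (k in 1:n_iter) {
    d <- rowSums(abs(Omega)) - abs(diag(Omega))  # ||omega_{-i}||_1
    Eta <- lambda * outer(1 / (d + eps), 1 / (d + eps), `+`)  # eta_ij^(k)
    diag(Eta) <- 2 * lambda                      # gamma = 2 * lambda / eps_i
    Omega <- glasso(S, rho = Eta)$wi             # weighted problem (5)
  }
  Omega
}
```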
Path consistency algorithm based on conditional mutual information (PCACMI)
Mutual information (MI) is a widely used measure of dependency between variables in information theory. MI captures even non-linear dependency between variables and can be considered a generalization of correlation. Several MI-based methods have been developed, such as ARACNE [20], CLR [42], and minet [43]. However, like correlation, MI only measures pairwise dependency between two variables. Thus, it usually identifies many indirect interactions between variables. To resolve this difficulty, Zhang et al. [21] propose an information-theoretic method for reconstructing gene regulatory networks based on conditional mutual information (CMI).
To be specific, let H(X) and H(X,Y) be the entropy of a random variable X and the joint entropy of random variables X and Y, respectively. For two random variables X and Y, H(X) and H(X,Y) can be expressed as
$$ H(X) = E \left(-\log f_{X}(X)\right),~ H(X,Y) = E\left(-\log f_{XY}(X,Y)\right), $$
where \(f_{X}(x)\) is the marginal probability density function (PDF) of X and \(f_{XY}(x,y)\) is the joint PDF of X and Y. With these notations, MI is defined as
$$ \begin{array}{lll} I(X,Y) &=& E\left(- \text{log} \frac{f_{XY}(X,Y)}{f_{X}(X)f_{Y}(Y)}\right)\\ &=& H(X) + H(Y) - H(X,Y). \end{array} \tag{6} $$
It is known that MI measures a dependency between two variables that includes both direct dependency and indirect dependency through other variables. While MI cannot distinguish direct from indirect dependency, CMI can measure the direct dependency between two variables by conditioning on the other variables. The CMI of X and Y given Z is defined as
$$ I(X,Y|Z) = H(X,Z)+ H(Y,Z) - H(Z) - H(X,Y,Z). \tag{7} $$
To estimate the entropies in (7), Zhang et al. [21] consider the Gaussian kernel density estimator used in [19]. Using the Gaussian kernel density estimator, MI and CMI are defined as
$$\begin{array}{*{20}l} \widehat{I}(X,Y) = \frac{1}{2} \log \frac{|C(X)|~|C(Y)|}{|C(X,Y)|},\\ \widehat{I}(X,Y|Z) = \frac{1}{2} \log \frac{|C(X,Z)|~|C(Y,Z)|}{|C(Z)|~|C(X,Y,Z)|}, \end{array} \tag{8} $$
where |A| is the determinant of a matrix A, C(X), C(Y) and C(Z) are the variances of X, Y and Z, respectively, and C(X,Z), C(Y,Z) and C(X,Y,Z) are the covariance matrices of (X,Z), (Y,Z) and (X,Y,Z), respectively.
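These covariance-determinant formulas translate directly into R; a minimal sketch, assuming x and y are numeric vectors and z a matrix of conditioning variables:

```r
# Gaussian estimates of MI and CMI from covariance determinants, as in (8).
mi_gauss <- function(x, y) {
  0.5 * log(var(x) * var(y) / det(cov(cbind(x, y))))
}
cmi_gauss <- function(x, y, z) {
  z <- as.matrix(z)
  0.5 * log(det(cov(cbind(x, z))) * det(cov(cbind(y, z))) /
            (det(cov(z)) * det(cov(cbind(x, y, z)))))
}
```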
To efficiently identify dependent pairs of variables, Zhang et al. [21] adopt the path consistency algorithm (PCA) of [44], and accordingly call their procedure PCA based on CMI (PCACMI). PCACMI starts with L=0 and computes the L-order CMI, which is equivalent to MI when L=0. It then removes the pairs of variables for which the maximal CMI given L+1 adjacent variables is less than a given threshold α, where α determines whether two variables are treated as independent, and adjacent variables are those connected to the two target variables at the previous step. PCACMI increases L and repeats the above steps until no higher-order connection remains. The MATLAB code for PCACMI is provided by [21] at the author's website https://sites.google.com/site/xiujunzhangcsb/software/pca-cmi.
Conditional mutual inclusive information-based network inference (CMI2NI)
Recently, Zhang et al. [22] proposed the conditional mutual inclusive information-based network inference (CMI2NI) method that improves the PCACMI method [21]. CMI2NI considers the Kullback-Leibler divergences from the joint probability density function (PDF) of target variables to the interventional PDFs removing the dependency between two variables of interest. Instead of using CMI, CMI2NI uses the conditional mutual inclusive information (CMI2) as the measure of dependency between two variables of interest given other variables. To be specific, we consider three random variables X, Y and Z. For these three random variables, the CMI2 between X and Y given Z is defined as
$$ \text{CMI2}(X,Y|Z) = \left(D_{\text{KL}}(P || P_{X \rightarrow Y}) + D_{\text{KL}}(P || P_{Y \rightarrow X}) \right)/2, \tag{9} $$
where \(D_{\mathrm{KL}}(f\,\|\,g)\) is the Kullback-Leibler divergence from f to g, P is the joint PDF of X, Y and Z, and \(P_{X \rightarrow Y}\) is the interventional probability of X, Y and Z with the connection from X to Y removed.
With Gaussian assumption on the observed data, the CMI2 for two random variables X and Y given m-dimensional vector Z can be expressed as
$$ \begin{aligned} CMI2(X,Y|Z) &= \frac{1}{4}\left(\text{tr}(C^{-1} \Sigma) + \text{tr}(\tilde{C}^{-1} \tilde{\Sigma}) + \log C_{0}\right.\\ &\left.\qquad+ \log \tilde{C}_{0} - 2n \right), \end{aligned} \tag{10} $$
where Σ is the covariance matrix of \((X,Y,Z^{T})^{T}\), \(\tilde{\Sigma}\) is the covariance matrix of \((Y,X,Z^{T})^{T}\), \(\Sigma_{XZ}\) is the covariance matrix of \((X,Z^{T})^{T}\), \(\Sigma_{YZ}\) is the covariance matrix of \((Y,Z^{T})^{T}\), n=m+2, and C, \(\tilde{C}\), \(C_{0}\) and \(\tilde{C}_{0}\) are defined from the elements of \(\Sigma,\Sigma_{XZ},\Sigma_{YZ},\Sigma^{-1},\Sigma_{XZ}^{-1}\) and \(\Sigma_{YZ}^{-1}\) (see Theorem 1 in [22] for details). As in PCACMI, CMI2NI adopts the path consistency algorithm (PCA) to efficiently calculate the CMI2 estimates. All steps of the PCA in CMI2NI are the same as those of PCACMI with CMI replaced by CMI2. In the PCA steps of CMI2NI, two variables are regarded as independent if the corresponding CMI2 estimate is less than a given threshold α. The MATLAB code for CMI2NI is available at the author's website https://sites.google.com/site/xiujunzhangcsb/software/cmi2ni.
Sparse partial correlation estimation (SPACE)
In Gaussian graphical models [45], the conditional dependencies among p variables can be represented by a graph \(\mathcal{G}=(V,E)\), where V={1,2,…,p} is a set of nodes representing the p variables and \(E=\{(i,j) \mid \omega_{ij}\neq 0,\ 1\le i\neq j\le p\}\) is a set of edges corresponding to the nonzero off-diagonal elements of Ω.
To describe the SPACE method, we consider linear models such that for i=1,2,…,p,
$$ \mathbf{X}_{i}=\sum\limits_{j\neq i}\beta_{ij}\mathbf{X}_{j}+\boldsymbol{\epsilon}_{i} \tag{11} $$
where \(\boldsymbol{\epsilon}_{i}\) is an n-dimensional random vector from the multivariate normal distribution with mean 0 and covariance matrix \((1/\omega_{ii})I_{n}\), and \(I_{n}\) is the identity matrix of size n×n. Under normality, the regression coefficients \(\beta_{ij}\) can be replaced with the partial correlations \(\rho^{ij}\) by the relationship
$$ \beta_{ij}=-\frac{\omega_{ij}}{\omega_{ii}}=\rho^{ij}\sqrt{\frac{\omega_{jj}}{\omega_{ii}}}, \tag{12} $$
where \(\rho^{ij}=\text{corr}(X_{i},X_{j} \mid X_{k}, k\neq i,j)=-\omega_{ij}/\sqrt{\omega_{ii}\omega_{jj}}\) is the partial correlation between \(X_{i}\) and \(X_{j}\). Motivated by the relationship (12), Peng et al. [18] propose the SPACE method, which solves the following ℓ1-regularized problem:
$$ {\begin{aligned} \min_{\rho}\frac{1}{2}\sum\limits_{i=1}^{p}\left\{ w_{i}\sum\limits_{k=1}^{n}\left(X_{i}^{k}-\sum\limits_{j\neq i}\rho^{ij}\sqrt{\frac{\omega_{jj}}{\omega_{ii}}}X_{j}^{k}\right)^{2}\right\} +\lambda\sum\limits_{1\le i<j \le p}|\rho^{ij}|, \end{aligned}} \tag{13} $$
where \(w_{i}\) is a nonnegative weight for the ith squared error loss.
Proposed approach incorporating previously known hub information
Extended sparse partial correlation estimation (ESPACE)
In this paper, we assume that some genes (or nodes), which are referred to as hub genes (or hub nodes), regulate many other genes, and we also assume that many of these hub genes were identified from previous experiments. To incorporate information about hub nodes, we propose the extended SPACE (ESPACE) method, which extends the model space by using an additional tuning parameter α on edges connected to the given hub nodes. This additional tuning parameter can reflect the hub gene information by reducing the penalty on edges connected to hub nodes. To be specific, let \(\mathcal {H}\) be the set of hub nodes that were previously identified. The ESPACE method we propose solves
$$ {\begin{aligned} &\min_{\rho}\frac{1}{2}\sum\limits_{i=1}^{p}\left\{w_{i} \sum\limits_{k=1}^{n}\left(X_{i}^{k}-\sum\limits_{j\neq i}\rho^{ij}\sqrt{\frac{\omega_{jj}}{\omega_{ii}}}X_{j}^{k}\right)^{2}\right\}\\ &\quad+\alpha \lambda \sum\limits_{{i<j,\atop \{i \in \mathcal{H}\} \cup \{j\in \mathcal{H}\}}}|\rho^{ij}|+ \lambda \sum\limits_{ {i<j,\atop i,j\in \mathcal{H}^{c}}}|\rho^{ij}|, \end{aligned}} \tag{14} $$
where 0<α≤1. Note that in this paper we set the weights \(w_{i}\) for the squared error loss to one. To summarize the proposed method, we depict its flowchart in Fig. 1. As described in Fig. 1, ESPACE takes the prior knowledge about hub genes as an additional input, which is the novelty of the proposed method compared to the other existing methods.
Flowchart of ESPACE
Extended graphical lasso (EGLASSO)
In the Background, we mentioned that the proposed procedure is applicable to other methods such as the graphical lasso. For a fair comparison and to investigate the performance, we also applied the proposed strategy to the GLASSO, yielding a version of the GLASSO that incorporates hub gene information. We call this procedure the extended graphical lasso (EGLASSO). Similar to the ESPACE, the EGLASSO maximizes
$$ \log |\Omega| - \text{tr}(S\Omega) -\alpha \lambda \sum\limits_{{i<j,\atop \{i \in \mathcal{H}\} \cup \{j\in \mathcal{H}\}}}|\omega_{ij}| - \lambda \sum\limits_{{i<j,\atop i,j\in \mathcal{H}^{c}}}|\omega_{ij}|, \tag{15} $$
where λ≥0 and 0<α≤1 are two tuning parameters, S is the sample covariance matrix, tr(A) is the trace of A and \(\mathcal {H}\) is the set of hub nodes that were previously identified. Note that we can use the R package glasso version 1.7 for the EGLASSO by defining the penalty matrix corresponding to the penalty term in (15).
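Following this remark, a sketch of EGLASSO through glasso's matrix-valued penalty is given below; `hub` marks the previously known hub genes, and leaving the diagonal unpenalized mirrors the i<j sums in (15). Illustrative only.

```r
library(glasso)
eglasso <- function(S, hub, lambda, alpha) {
  # hub: logical vector of length p, TRUE for previously known hub genes
  p <- nrow(S)
  Rho <- matrix(lambda, p, p)
  touches_hub <- outer(hub, hub, `|`)   # pairs (i, j) with i or j a hub
  Rho[touches_hub] <- alpha * lambda    # reduced penalty on hub edges
  diag(Rho) <- 0                        # diagonal left unpenalized
  glasso(S, rho = Rho)$wi
}
```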
Active shooting algorithm for ESPACE
To solve (14), we adopt the active shooting algorithm introduced in [18]. We rewrite the problem (14) as
$$ \min_{\rho}\frac{1}{2}\left\|\mathbf{Y}-\tilde{\mathbf{X}}\boldsymbol{\rho}\right\|_{2}^{2}+\alpha \lambda \sum\limits_{{i<j,\atop \{i \in \mathcal{H}\} \cup \{j\in \mathcal{H}\}}}|\rho^{ij}|+ \lambda \sum\limits_{ {i<j,\atop i,j\in \mathcal{H}^{c}}}|\rho^{ij}|, \tag{16} $$
where \(\mathbf{Y}=(\mathbf{X}_{1}^{T},\mathbf{X}_{2}^{T},\ldots,\mathbf{X}_{p}^{T})^{T}\) is an np-dimensional column vector; \(\mathbf{X}^{k,l}=\left(\mathbf{0}_{n(k-1)\times 1}^{T},\mathbf{X}_{l(k)}^{T},\mathbf{0}_{n(l-k-1)\times 1}^{T},\mathbf{X}_{k(l)}^{T},\mathbf{0}_{n(p-l)\times 1}^{T}\right)^{T}\) is an np-dimensional column vector as well, with \(\mathbf{X}_{k(l)}=\sqrt{\frac{\omega_{kk}}{\omega_{ll}}}\mathbf{X}_{k}\);
$$ \tilde{\mathbf{X}}= \left(\begin{array}{cccccccc} \mathbf{X}^{1,2}; & \mathbf{X}^{1,3}; & \cdots; & \mathbf{X}^{1,p}; & \mathbf{X}^{2,3}; & \mathbf{X}^{2,4}; & \cdots; & \mathbf{X}^{(p-1),p} \end{array}\right), $$
and \(\boldsymbol{\rho}=(\rho^{12};\rho^{13};\ldots;\rho^{1p};\rho^{23};\rho^{24};\ldots;\rho^{(p-1)p})^{T}\). Let \(\widehat{\boldsymbol{\rho}}^{(m)}\) and \(\widehat{\omega}_{ii}^{(m)}\) be the estimates of ρ and \(\omega_{ii}\) at the m-th iteration, respectively. Then, the steps of the modified algorithm are outlined below:
Step 1: (Initialization of \(\widehat {\omega }_{ii}\)) For i=1,2,…,p, \(\widehat {\omega }_{ii}^{(0)} = 1\) and s=0.
Step 2: (Initialization of \(\widehat {\boldsymbol {\rho }}\)) For 1≤i<j≤p and m=0,
$$ \begin{array}{llll} \widehat{\rho}^{ij,(0)} &=& \text{sign}\left(\mathbf{Y}^{T}\mathbf{X}^{i,j}\right)\frac{\left(\left|\mathbf{Y}^{T}\mathbf{X}^{i,j}\right|-\alpha\lambda\right)_{+}}{\left(\mathbf{X}^{i,j}\right)^{T}\mathbf{X}^{i,j}} &\text{for}~\{i \in \mathcal{H}\}\cup\{ j \in \mathcal{H}\},\\ \widehat{\rho}^{ij,(0)} &=& \text{sign}\left(\mathbf{Y}^{T}\mathbf{X}^{i,j}\right)\frac{\left(\left|\mathbf{Y}^{T}\mathbf{X}^{i,j}\right|-\lambda\right)_{+}}{\left(\mathbf{X}^{i,j}\right)^{T}\mathbf{X}^{i,j}} &\text{for}~i,j\in\mathcal{H}^{c}, \end{array} $$
where \((x)_{+}=\max(x,0)\) and the \(\mathbf{X}^{i,j}\) are defined in (16) with \(\widehat{\omega}_{ii}^{(s)}\).
Step 3: Define an active set \(\Lambda =\{(i,j)~|~\widehat {\rho }^{ij,(m)}\neq 0\}\).
Step 4: Iteratively update \(\widehat {\boldsymbol {\rho }}^{(m)}\) for (k,l)∈Λ,
$$ \begin{array}{lll} \widehat{\rho}^{kl,(m)}&= \text{sign}\left((\mathbf{X}^{k,l})^{T}\boldsymbol{\epsilon}'\right)\frac{\left(\left|(\mathbf{X}^{k,l})^{T}\boldsymbol{\epsilon}'\right|-\alpha\lambda\right)_{+}}{(\mathbf{X}^{k,l})^{T}\mathbf{X}^{k,l}}\\ &\quad\text{for}~\{k \in \mathcal{H}\}\cup\{ l \in \mathcal{H}\}, \\ \medskip \widehat{\rho}^{kl,(m)}&= \text{sign}\left((\mathbf{X}^{k,l})^{T}\boldsymbol{\epsilon}'\right)\frac{\left(\left|(\mathbf{X}^{k,l})^{T}\boldsymbol{\epsilon}'\right|-\lambda\right)_{+}}{(\mathbf{X}^{k,l})^{T}\mathbf{X}^{k,l}}&\text{for}~k,l\in\mathcal{H}^{c}, \end{array} $$
where \(\boldsymbol {\epsilon }'= \mathbf {Y}-{\sum \nolimits }_{(i,j)\neq (k,l)} \tilde {\rho }^{ij}\mathbf {X}^{i,j}\) and \(\tilde {\rho }^{ij}\)s are current estimates at the step for updating the (k,l)-th partial correlation.
Step 5: Repeat Step 4 until convergence occurs on the active set Λ.
Step 6: Update \(\widehat {\boldsymbol {\rho }}^{(m+1)}\) for 1≤i<j≤p by using the equations in Step 4. If the maximum difference between \(\widehat {\boldsymbol {\rho }}^{(m+1)}\) and \(\widehat {\boldsymbol {\rho }}^{(m)}\) is less than a pre-determined tolerance τ, then go to Step 7 with the estimates \(\widehat {\boldsymbol {\rho }}^{(m+1)}\). Otherwise, consider m=m+1 and go back to Step 3.
Step 7: Update \(\widehat {\omega }_{ii}^{(s+1)}\) for i=1,2,…,p,
$$\begin{aligned} \frac{1}{\widehat{\omega}_{ii}^{(s+1)}}&= \frac{1}{n}\left\|\mathbf{X}_{i}-\sum\limits_{j\neq i}\widehat{\rho}^{ij,(m+1)}\sqrt{\frac{\widehat{\omega}_{jj}^{(s)}}{\widehat{\omega}_{ii}^{(s)}}}\mathbf{X}_{j}\right\|_{2}^{2}\\ &\quad\text{for}~i=1,2,\ldots,p. \end{aligned} $$
Step 8: Repeat Step 2 through Step 7 with s=s+1 until convergence occurs on \(\widehat {\omega }_{ii}\)s.
Note that the number of iterations over the \(\widehat{\omega}_{ii}\) is usually small for stabilizing the estimates of ρ. In our numerical study, the estimates of \(\omega_{ii}\) converge within 10 iterations. Moreover, inner products such as \(\mathbf{Y}^{T}\mathbf{X}^{i,j}\), whose naive complexity is O(np), can be computed efficiently by rewriting \(\mathbf{Y}^{T}\mathbf{X}^{i,j} = \sum_{k=1}^{n} \left(\sqrt{\omega_{jj}/\omega_{ii}} + \sqrt{\omega_{ii}/\omega_{jj}}\right) X_{i}^{k} X_{j}^{k}\), whose complexity is O(n). We implemented the R package espace, which is available from https://sites.google.com/site/dhyeonyu/software.
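The updates in Steps 2 and 4 are soft-thresholding operations; a one-line R building block is given below, with a comment sketching a single coordinate update (names illustrative):

```r
# (x)_+ soft-thresholding used by the active shooting updates.
soft_threshold <- function(z, t) sign(z) * pmax(abs(z) - t, 0)

# A single update for edge (k, l), with penalty alpha * lambda if k or l
# is a hub and lambda otherwise (inner = (X^{k,l})' eps', xtx precomputed):
#   rho_kl <- soft_threshold(inner, pen) / xtx
```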
Choice of tuning parameters
We have introduced the ESPACE method, which relaxes the penalty on edges connected to hub genes (penalizing them by αλ with α<1) while keeping the penalty λ on edges between non-hub genes. When no hub genes are involved in a network, ESPACE reduces to SPACE. For a given λ, this modification allows us to find more edges connected to the hub genes by reducing α. In practice, however, we do not know the values of λ and α. In this paper, we consider the GIC-type criterion used in [46] for the Gaussian graphical model to choose the optimal tuning parameters (λ∗,α∗). Let \(\widehat{\rho}_{(\lambda,\alpha)}^{ij}\) be the (i,j)-th estimated partial correlation for given λ and α. The GIC-type criterion is defined as
$${\begin{aligned} \text{GIC}(\lambda,\alpha)&=\sum\limits_{i=1}^{p}\left\{ n\cdot\log {RSS}_{i}+\log{\log{n}}\log (p-1){\vphantom{\widehat{\rho}_{\lambda,\alpha}^{ij}}}\right.\\&\qquad\quad\times\left.\left|\left\{j:j\neq i,\widehat{\rho}_{\lambda,\alpha}^{ij}\neq0\right\}\right|\right\}, \end{aligned}} $$
where \({RSS}_{i} = \left\|\mathbf{X}_{i}-\sum_{j\neq i}\widehat{\rho}_{(\lambda,\alpha)}^{ij}\mathbf{X}_{j(i)}\right\|_{2}^{2}\) and |A| denotes the cardinality of a set A. We choose the tuning parameters that minimize the GIC-type criterion,
$$(\lambda^{*},\alpha^{*})={\operatornamewithlimits{argmin}_{\lambda,\alpha}\text{GIC}(\lambda,\alpha).} $$
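A sketch of the criterion, assuming the per-node residual sums of squares \({RSS}_{i}\) and neighbor counts have already been computed for a candidate (λ,α):

```r
# GIC-type score: rss and df are length-p vectors of RSS_i and of
# |{j : rho_hat^{ij} != 0}| for a fitted (lambda, alpha).
gic_score <- function(n, p, rss, df) {
  sum(n * log(rss) + log(log(n)) * log(p - 1) * df)
}
# (lambda*, alpha*) is the grid point minimizing gic_score.
```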
Simulation studies
Simulation settings
In this simulation, we consider four real protein-protein interaction (PPI) networks used in a comparative study [24], which were partially selected from the human protein reference database [47]. As mentioned earlier, genes whose degrees are greater than 7 and above the 0.95 quantile of the degree distribution are thought of as hub genes. Figure 2 shows the four PPI networks and their hub genes. Let p be the number of nodes in a network. We consider the number of samples as p/2 and p and generate samples from the multivariate normal distribution with mean 0 and covariance matrix Σ defined with \((\Sigma)_{ij} = (\Omega ^{-1})_{ij}/\sqrt {(\Omega ^{-1})_{ii}(\Omega ^{-1})_{jj}}\), where Ω is a concentration matrix corresponding to a given network structure. To generate a positive definite concentration matrix, we use the following procedure as described in [18]:
Step G1: For a given edge set E, we generate an initial concentration matrix \(\tilde {\Omega }=(\tilde {\omega }_{ij})_{1\le i,j\le p}\) with
$$\tilde{\omega}_{ij}=\left\{ \begin{array}{ll} 1 & \quad i=j\\ 0 & \quad i\neq j,~(i,j)\notin E\\ \sim~Unif(D) & \quad i\neq j,~(i,j)\in E \end{array}\right., $$
where D=[−1,−0.5]∪[0.5,1].
The network structures of the four simulated networks. The structure of the real protein-protein interaction networks [47] were used to construct networks of different sizes by varying the number of references required to support each connection. In the degree distribution, the 0.95 quantile is 7 (connections), so the nodes with more than 7 connections were defined as hub nodes, which are represented as black nodes in the network structure. a 52 edges among 44 nodes (3 hubs), b 103 edges among 83 nodes (3 hubs), c 290 edges among 231 nodes (8 hubs) and d 837 edges among 612 nodes (33 hubs)
Step G2: For positive definiteness and symmetry of the concentration matrix, we define a concentration matrix Ω=(ω ij )1≤i,j≤p as
$$\Omega=\frac{1}{2}\left(A+A^{T}\right), $$
where \(A=(a_{ij})_{1\le i,j\le p}\), \(a_{ij}=\tilde{\omega}_{ij}/(1.5\cdot d_{i})\) and \(d_{i}=\sum_{k\neq i}|\tilde{\omega}_{ik}|\) for i=1,2,…,p.
Step G3: Set \(\omega_{ii}=1\) for i=1,2,…,p and \(\omega_{ij} = 0.1\cdot\mathrm{sign}(\omega_{ij})\) if \(0<|\omega_{ij}|<0.1\). (A short code sketch of Steps G1-G3 is given below.)
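A sketch of Steps G1-G3 in R, assuming E is a two-column matrix of node pairs and every node has at least one edge (otherwise \(d_{i}=0\) in Step G2):

```r
make_omega <- function(p, E) {
  # Step G1: Unif(D) entries on edges, D = [-1, -0.5] U [0.5, 1]
  Om0 <- diag(p)
  v <- runif(nrow(E), 0.5, 1) * sample(c(-1, 1), nrow(E), replace = TRUE)
  Om0[E] <- v; Om0[E[, c(2, 1)]] <- v
  # Step G2: rescale row i by 1.5 * d_i, then symmetrize
  d <- rowSums(abs(Om0)) - diag(Om0)   # d_i = sum_{k != i} |omega_ik|
  A <- Om0 / (1.5 * d)
  Om <- (A + t(A)) / 2
  # Step G3: unit diagonal, floor small off-diagonals at magnitude 0.1
  diag(Om) <- 1
  small <- Om != 0 & abs(Om) < 0.1 & row(Om) != col(Om)
  Om[small] <- 0.1 * sign(Om[small])
  Om
}
```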
With these four networks, we have conducted the numerical comparisons of the ESPACE and the SPACE methods, as well as seven other methods including the other reviewed existing methods and EGLASSO. For the purpose of fair comparison, we select the optimal model by the GIC for SPACE, ESPACE, GLASSO, GLASSO-SF, and EGLASSO. Since there is no specific rule for the model selection in the other methods, we set the level α=0.2 for GeneNet and NS, and the threshold α=0.03 for PCACMI and CMI2NI. Note that the pre-determined level α=0.2 is a default of the GeneNet package and used in [35]. The pre-determined threshold α=0.03 was used in [21, 22].
Note that all the existing methods need O(p²) memory space to store and compute values corresponding to the interactions between variables. This memory consumption can be reduced when the variables can be divided into several conditionally independent blocks, using the condition described in [16].
Sensitivity analysis on random noise in the observed data
To investigate the effect of random noise contained in the observed data, we conduct a sensitivity analysis for the variance of the random noise. To be specific, suppose that a random vector \(\mathbf{X}=(X_{1},X_{2},\ldots,X_{p})^{T}\) follows the multivariate normal distribution with mean 0 and covariance matrix Σ, a vector of random noise \(\varepsilon=(\varepsilon_{1},\varepsilon_{2},\ldots,\varepsilon_{p})^{T}\) follows the multivariate normal distribution with mean 0 and covariance matrix \(\sigma_{\epsilon}^{2} I\), and X and ε are independent, where I is the identity matrix. Furthermore, we assume that we observe the random vector \(\mathbf{Z}=(Z_{1},Z_{2},\ldots,Z_{p})^{T}\) given by
$$ \mathbf{Z} = \mathbf{X} + \varepsilon. $$
Thus, the covariance matrix of Z becomes \(\Sigma + \sigma_{\epsilon}^{2} I\), which may have a conditional dependence structure different from that of X.
For example, if we consider \(\sigma _{\epsilon }^{2} = 0.5\) and the following Σ and Σ Z
$$ \Sigma= \left(\begin{array}{ccc} 15/11 & -8/11 & 2/11 \\ -8/11& 16/11& -4/11 \\ 2/11& -4/11& 12/11 \\ \end{array}\right),~ \Sigma_{\mathbf{Z}}= \Sigma + \sigma_{\epsilon}^{2} I, $$
then the inverse matrices of Σ and Σ Z are calculated as
$$ {\begin{aligned} &\Sigma^{-1}= \left(\begin{array}{ccc} 1 & 0.5 & 0 \\ 0.5& 1& 0.25 \\ 0& 0.25& 1 \\ \end{array}\right)\text{and}\\ &\Sigma_{\mathbf{Z}}^{-1}= \left(\begin{array}{ccc} 0.63 &0.23 &-0.02 \\ 0.23 &0.62 & 0.12 \\ -0.02& 0.12 & 0.66 \end{array}\right), \text{respectively.} \end{aligned}} $$
Thus, we can see that \(Z_{1}\) and \(Z_{3}\) are conditionally dependent given \(Z_{2}\), while \(X_{1}\) and \(X_{3}\) are conditionally independent given \(X_{2}\). Moreover, the nonzero partial correlations shrink in magnitude as the variance of the random noise increases. From these observations, we expect estimation performance to worsen as the noise variance increases.
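The 3×3 example can be checked numerically in a few lines of R:

```r
# Adding noise variance 0.5 on the diagonal turns the zero (1,3) entry
# of the precision matrix into a nonzero one.
Sigma <- matrix(c(15, -8, 2, -8, 16, -4, 2, -4, 12), 3, 3) / 11
round(solve(Sigma), 2)                   # (1,3) entry is 0
round(solve(Sigma + 0.5 * diag(3)), 2)   # (1,3) entry is about -0.02
```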
In this sensitivity analysis, we consider \(\sigma_{\epsilon}^{2} = 0, 0.01, 0.1, 0.25, 0.5\), p=231, and n=115,231 with the same network structure as that of p=231 in Fig. 2. To focus on the proposed method, we apply the SPACE and the ESPACE methods to the 50 generated datasets containing random noise with variance \(\sigma_{\epsilon}^{2}\).
To investigate the gains from the extension, we use five performance measures: sensitivity (SEN), specificity (SPE), false discovery rate (FDR), mis-specification rate (MISR) and Matthews correlation coefficients (MCC). Note that the MCC, which lies between −1 and +1, has been used to measure the performance of binary classification, where +1, 0, and −1 denote a perfect classification, a random classification, and a total discordance of classification, respectively. Let ρ and \(\widehat {\rho }_{\lambda,\alpha }\) be (p(p−1)/2)-dimensional vectors of the true and estimated partial correlation, respectively. The above five measures are defined as
$${\begin{aligned} \begin{array}{l} \text{SEN} \equiv \text{TP}/(\text{TP}+\text{FN}),~~ \text{SPE} \equiv \text{TN}/(\text{TN}+\text{FP}), \\ \text{FDR} \equiv \text{FP}/(\text{TP} + \text{FP}),~~ \text{MISR} \equiv (\text{FN} + \text{FP})/\left(p(p-1)/2\right)~~\text{and}\\ \text{MCC} \equiv \frac{\text{TP} \times \text{TN} - \text{FP} \times \text{FN}}{\sqrt{(\text{TP}+\text{FP})(\text{TP}+\text{FN})(\text{TN}+\text{FP})(\text{TN}+\text{FN})}}, \end{array} \end{aligned}} $$
where \(\text {TP} = {\sum \nolimits }_{i<j} I(\rho ^{ij} \neq 0) I(\widehat {\rho }^{ij}_{\lambda,\alpha } \neq 0)\), \(\text {FP} = {\sum \nolimits }_{i<j} I(\rho ^{ij} = 0) I(\widehat {\rho }^{ij}_{\lambda,\alpha } \neq 0)\), \(\text {FN} = {\sum \nolimits }_{i<j} I(\rho ^{ij} \neq 0) I(\widehat {\rho }^{ij}_{\lambda,\alpha }=0)\) and \(\text {TN} = {\sum \nolimits }_{i<j} I(\rho ^{ij} = 0) I(\widehat {\rho }^{ij}_{\lambda,\alpha } = 0)\).
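A compact R sketch computing the five measures from true and estimated partial-correlation (or adjacency) matrices:

```r
perf_measures <- function(true_mat, est_mat) {
  ut <- upper.tri(true_mat)              # the p(p-1)/2 node pairs
  t1 <- true_mat[ut] != 0; e1 <- est_mat[ut] != 0
  TP <- sum(t1 & e1);  FP <- sum(!t1 & e1)
  FN <- sum(t1 & !e1); TN <- sum(!t1 & !e1)
  c(SEN  = TP / (TP + FN),
    SPE  = TN / (TN + FP),
    FDR  = FP / (TP + FP),
    MISR = (FN + FP) / length(t1),
    MCC  = (TP * TN - FP * FN) /
           sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN)))
}
```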
Application to Escherichia coli dataset
We applied the ESPACE method to the largest public Escherichia coli (E.coli) microarray dataset available from the Many Microbe Microarrays database (M3D) [48]. The M3D contains 907 microarrays measured under 466 experimental conditions using Affymetrix GeneChip E.coli genome arrays. Microarrays from the same experimental conditions were averaged to derive the mean expression. The data set ("E_coli_v4_Build_6" from the M3D) contains the expression levels of 4297 genes from 446 samples. In the E.coli genome, a number of studies have been conducted to identify transcriptional regulations. The RegulonDB [49] curates the largest and best-known information on the transcriptional regulation of E.coli. To combine the information from the above two databases, we focus on the 1623 genes reported in both the M3D and the RegulonDB. As mentioned before, the TFs are known to regulate many other genes in the genome and can be considered potential hubs. To incorporate the information about the potential hubs, we used a list of 180 known TF-encoding genes from the RegulonDB. The RegulonDB also provides 3811 transcriptional interactions among the 1623 genes, which were used as the gold standard to evaluate the accuracy of the constructed networks.
Application to lung cancer adenocarcinoma dataset
Lung cancer is the leading cause of death from cancer, both in the United States and worldwide; it has a 5-year survival rate of approximately 15% [50]. The progression and metastasis of lung cancer vary greatly among early-stage lung cancer patients. To customize treatment plans for individual patients, it is important to identify prognostic or predictive biomarkers, which allow for more precise classification of lung cancer patients. In this study, we applied the extended SPACE method to reconstruct the gene regulatory network in lung cancer. Exploring network structures can facilitate comprehension of biological mechanisms underlying lung cancer and identification of important genes that could be potential lung cancer biomarkers. We constructed the gene network using microarray data from 442 lung cancer adenocarcinoma patients in the Lung Cancer Consortium study [51]. For details about preprocessing this dataset, please refer to [7]. First, univariate Cox regression was used to identify the genes whose expression levels are correlated with patient survival outcomes, after adjusting for clinical factors such as study site, age, gender, and stage. The false discovery rate (FDR) was then calculated using a Beta-Uniform model [52]. By controlling the FDR to less than 10%, we identified 794 genes that were associated with the survival outcome of lung cancer patients. Among these 794 genes, 22 appear among the 236 carefully curated cancer genes of the FoundationOneTM gene panel (Foundation Medicine, Inc.). Current biological knowledge indicates genes from this panel play a key role in various types of cancer. These 22 genes were then input as known hub genes to the ESPACE method.
Comparison results for existing methods
For each network, we generated 50 datasets and reconstructed the network from each dataset using nine different network construction methods, including both the SPACE and the ESPACE methods. In addition to the five performance measures, we also measure the computation time (Time) of each method to compare the efficiency. Note that all methods are executed on R software [53] for the purpose of fair comparison. We implemented the R codes for PCACMI and CMI2NI using the authors' MATLAB codes. The computation times are measured in CPU time (seconds) by using a desktop PC (Intel Core(TM) i7-4790K CPU (4.00 GHz) and 32 GB RAM).
Tables 1, 2, 3 and 4 report the averages and standard errors of the number of the estimated edges, the five performance measures of the estimation of the network structures and computation times with the optimal tuning parameter λ ∗ for SPACE, GLASSO, GLASSO-SF; the optimal tuning parameters α ∗ and λ ∗ for ESPACE and EGLASSO; and the pre-determined α for GeneNet, NS, PCACMI, and CMI2NI.
Table 1 The averages of the number of estimated edges, the five performance measures and the computation time (sec.) over 50 datasets
Overall, ESPACE has the best performance in estimating network structures in terms of the MCC and the MISR, except for the case (p,n)=(83,41), where ESPACE has the second smallest FDR while its MCC and MISR are moderate among all methods. In the case (p,n)=(83,41), the CMI-based methods perform better than the others in terms of the MCC and the MISR, but they also have large FDRs (≈41%), more than double those of the other methods. As described in the Methods section, the MCC has been used to measure the performance of binary classification, and the MISR denotes the total error rate. Thus, these comparison results show that ESPACE is favorable for identifying edges in networks from high-dimensional data.
In addition, we made several interesting observations from the results of our simulation study. First, ESPACE and EGLASSO improve on SPACE and GLASSO, respectively, in terms of the FDR, the MISR, and the MCC for almost all scenarios. The only exception is the case (p,n)=(231,115) for the ESPACE and SPACE methods: although the FDR of ESPACE increases by 2.17% compared to that of SPACE, ESPACE still improves on SPACE in terms of the SEN, the MISR, and the MCC. This suggests that our proposed strategy, which incorporates previously known hub information, can reduce the errors in estimating network structures compared to existing methods that ignore known hub information. Second, GeneNet controls the FDR relatively close to the given level α, while the FDRs of NS are controlled conservatively. For instance, the FDRs of GeneNet are measured between 3.94 and 22.24%, and NS has FDRs less than 3.48%. Note that GeneNet and NS control the FDR under 20% (α=0.2) in this simulation study. Third, all methods except the CMI-based methods (PCACMI and CMI2NI) have similar efficiency in the relatively low dimensions (p=44,83). The CMI-based methods are relatively slower than the other methods in all scenarios except the case (p,n)=(612,612), where GLASSO-SF is the slowest, 1.4 times slower than CMI2NI. CMI2NI is slightly slower than PCACMI in the relatively high dimensions (p=231,612). Finally, even though ESPACE is not the fastest of the nine methods we consider, there is no overall winner, and ESPACE is the third best in computation time for p=231,612, except for the case (p,n)=(612,612), where ESPACE is faster than SPACE, GLASSO-SF, PCACMI and CMI2NI.
To investigate another property of the proposed approach, we depict barplots of the average degrees of known hub genes over 50 datasets for ESPACE and SPACE in Fig. 3. Figure 3 shows that ESPACE tends to find more edges connected to known hub genes than SPACE. The only exception is the case (p,n)=(612,306), where the average for ESPACE is 0.57 less than that for SPACE. We conjecture this is simply due to the difference in the number of estimated edges, which for ESPACE is 15.54 less than for SPACE on average. This property reflects the fact that the averages of α∗ selected by the GIC in the ESPACE method lie between 0.76 and 0.97 for all scenarios, which indicates that ESPACE has incorporated prior information about the hub genes and reduced the penalty on edges connected to known hub genes.
Plots of the averages of degrees of hub nodes over the simulated 50 datasets. Vertical lines denote 95% confidence intervals of the averages
Results of sensitivity analysis on random noise
Table 5 reports the averages of the number of estimated edges and the five performance measures. From the results in Table 5, we can see that estimation performance decreases as the variance of the random noise increases for both SPACE and ESPACE. For a relatively small sample size (n=115), both SPACE and ESPACE are more sensitive to the variance \(\sigma_{\epsilon}^{2}\) than in the case of n=231. Even though the performance of the two methods decreases by similar amounts as the variance \(\sigma_{\epsilon}^{2}\) increases, ESPACE outperforms SPACE in terms of the MCC and the MISR.
Table 5 The averages of the number of estimated edges and the five performance measures over 50 datasets
Comparison of the identified GRNs in Escherichia coli dataset
In this study, we compared the performance of network construction using the SPACE and ESPACE methods for the model selected by the GIC. We report the number of estimated edges and the true positives, which match the transcriptional interactions in the RegulonDB, in Table 6. The SPACE method estimated 368 edges among 524 genes, which contain 16 TF-encoding genes, and identified 16 transcriptional interactions as true positives. In comparison, the ESPACE method estimated 349 edges among 478 genes containing 29 TF-encoding genes and found 45 transcriptional interactions in the RegulonDB. The ESPACE method found more interactions than the SPACE method and increased the ratio of the number of TPs to the number of estimated edges by 8.55 percentage points. Figure 4 shows the number of TPs vs. the number of estimated edges for various λ values with α∗ in Table 6. The number of TPs of the ESPACE method is consistently greater than that of the SPACE method at similar sparsity. These results clearly indicate that incorporating potential hub gene information improves the accuracy of network construction.
Plot of the number of the TPs vs. the number of the estimated edges for various λs with α ∗ in Table 6
Table 6 Summary of the estimated networks using the SPACE and ESPACE methods from the E.coli dataset. We denote a set of estimated edges and a set of the interactions from the RegulonDB by \(\widehat {E}\) and T, respectively
Comparison of the identified GRNs in lung cancer adenocarcinoma dataset
We again compared the performance of network construction using the SPACE and ESPACE methods. An overview of the networks constructed using both methods is shown in Fig. 5. The SPACE method estimated 234 edges between 114 genes, and the ESPACE method found 272 edges between 132 genes. Although the numbers of estimated edges from the SPACE and ESPACE methods are quite similar, 16.7 and 28.3% of the estimated edges in the networks by SPACE and ESPACE, respectively, are different. We identified hub genes using the criterion mentioned at the beginning of this paper. The lists of hub genes identified in both networks are reported in Table 7. Interestingly, all hub genes identified by the SPACE method were also found using ESPACE. Note that this is not always the case; for instance, if we define a hub as a node whose degree is greater than 5, the set of hub genes identified by SPACE is not a subset of those identified by ESPACE. To investigate the gains of the ESPACE method, therefore, we focused on the hub genes identified only by ESPACE (AURKA, APC, CDKN3), among which AURKA and APC are among the 22 pre-specified hub genes while CDKN3 is not.
Estimated networks structure using the SPACE and ESPACE methods. The nodes with more than 7 connections (the 0.95 quantile in the degree distribution) were defined as hub nodes, which are represented as black nodes in the network structure. The details of hub genes are reported in Table 7. a SPACE (114 nodes, 234 edges) and b ESPACE (132 nodes, 272 edges)
Table 7 Hub genes from the estimated graphs using the SPACE and ESPACE methods
The CDKN3 (Cyclin-Dependent Kinase Inhibitor 3) protein coded by the CDKN3 gene is a cyclin-dependent kinase inhibitor. Recent studies [54, 55] show that CDKN3 overexpression is associated with poorer survival outcomes in lung adenocarcinoma, but not in lung squamous cell carcinoma. We validated that CDKN3 is associated with the prognosis of lung adenocarcinoma patients in two independent datasets (see Fig. 6). The CDKN3 expression allowed us to separate the lung adenocarcinoma patients into high CDKN3 and low CDKN3 groups with significantly different survival outcomes: in the GSE13213 dataset [56] (n=117), hazard ratio = 2.02 (high CDKN3 vs. low CDKN3), p=0.0146; in the GSE1037 dataset [57] (n=61), hazard ratio = 3.39 (high CDKN3 vs. low CDKN3), p=0.0126. Note that we divided patients into "high" and "low" groups by their gene expression levels with the K-means clustering method.
Kaplan-Meier curves for the CDKN3 gene from GSE13213 and GSE1037 datasets. For each gene, we divide patients into two groups, "High" and "Low", by their gene expression levels with the K-means clustering method. Red solid lines denote the "High" group and black dashed lines denote the "Low" group. a CDKN3 (GSE13213 dataset) and b CDKN3 (GSE1037 dataset)
APC (Adenomatous Polyposis Coli) is a tumor suppressor gene, and is involved in the Wnt signaling pathway as a negative regulator. It has been identified as one of the key mutated genes in lung adenocarcinoma by a comprehensive study of somatic mutations in lung adenocarcinoma [58]. AURKA (aurora kinase A) is a protein-coding gene found to be associated with many different types of cancer. Aurora kinase inhibitors have been studied as a potential cancer treatment [59]. Using the GSE42127 dataset [7] (n=209), we found that AURKA expression can predict lung cancer patients' response to chemotherapy. The dataset contains expression profiles and treatment information for 209 lung cancer patients from MD Anderson Cancer Center, among whom 62 received adjuvant chemotherapy (ACT group) and the remaining 147 did not (no ACT group). The AURKA gene expression allowed us to separate the 209 patients into a low AURKA group (n=104) and a high AURKA group (n=105) using the median AURKA expression as a cut-off. The patients in the low AURKA group (Fig. 7 a) showed significant improvement in survival after ACT: hazard ratio = 0.289 (ACT vs. no ACT) and p value = 0.0312. The patients in the high AURKA group (Fig. 7 b), on the other hand, showed no significant survival benefit after ACT: hazard ratio = 0.679 (ACT vs. no ACT) and p value = 0.241. These results indicate that AURKA expression could potentially be a predictive biomarker for lung cancer adjuvant chemotherapy, since only patients with low AURKA expression benefit from the treatment, while those with high AURKA expression are less likely to benefit. In addition, it is possible that Aurora kinase inhibitors, which suppress the expression of AURKA genes, may synergize the effect of adjuvant chemotherapy, i.e. improve the chance that a patient responds to adjuvant chemotherapy. In fact, a recent study has demonstrated that Aurora kinase inhibitors may synergize the effect of adjuvant chemotherapy in ovarian cancer, which is consistent with our results in lung cancer.
Kaplan-Meier curves for low and high groups of the AURKA gene expression in GSE42127 dataset [7]. The AURKA expression separates the 209 lung cancer patients into two groups. In the AURKA low expression group (left panel), lung cancer patients with ACT (green line) have significantly longer survival than patients without ACT (observational group, purple line). In the AURKA high expression group (right panel), patients with ACT do not have a significant survival benefit compared to patients without ACT. a low expression group. b high expression group
We have demonstrated the benefit of incorporating hub gene information in estimating network structures by extending SPACE with an additional tuning parameter. Our simulation study shows that the ESPACE method reduces errors in the construction of networks when the networks have previously known hub nodes. Through two applications, we illustrate that the ESPACE method can improve on the SPACE method by using information about potential hub genes. Although we adopted the GIC to select the optimal tuning parameters in this paper, the ESPACE method can be applied directly with other model selection criteria. The performance of the ESPACE method varies with the chosen criterion; however, it is at least comparable to the SPACE method, since ESPACE includes SPACE as a special case.
ACT:
Adjuvant chemotherapy
APC:
Adenomatous polyposis coli
AURKA:
Aurora kinase A
CDKN3:
Cyclin-dependent kinase inhibitor 3
CMI2NI:
Conditional mutual inclusive information-based network inference
E. coli:
Escherichia coli
EGLASSO:
Extended GLASSO
ESPACE:
Extended SPACE
FDR:
False discovery rate
GIC:
Generalized information criterion
GGM:
Gaussian graphical model
GLASSO:
Graphical lasso
GLASSO-SF:
GLASSO with reweighted strategy for scale-free network
GRN:
Gene regulatory network
M3D:
Many Microbe Microarrays database
MCC:
Matthews correlation coefficients
MI:
Mutual information
MISR:
Mis-specification rate
MM:
Minorization-maximization
NS:
Neighborhood selection
PCACMI:
path consistency algorithm based on conditional mutual information
PPI:
Protein-protein interaction
SEN:
Sensitivity
SPACE:
Sparse partial correlation estimation
SPE:
Specificity
Friedman N. Inferring cellular networks using probabilistic graphical models. Science. 2004; 303(5659):799–805. doi:10.1126/science.1094068.
Ihmels J, Friedlander G, Bergmann S, Sarig O, Ziv Y, Barkai N. Revealing modular organization in the yeast transcriptional network. Nat Genet. 2002; 31(4):370–7. doi:10.1038/ng941.
Segal E, Shapira M, Regev A, Pe'er D, Botstein D, Koller D, Friedman N. Module networks: identifying regulatory modules and their condition-specific regulators from gene expression data. Nat Genet. 2003; 34(2):166–76. doi:10.1038/ng1165.
Sachs K, Perez O, Pe'er D, Lauffenburger DA, Nolan GP. Causal protein-signaling networks derived from multiparameter single-cell data. Science. 2005; 308(5721):523–9. doi:10.1126/science.1105809.
Zhong R, Allen JD, Xiao G, Xie Y. Ensemble-based network aggregation improves the accuracy of gene network reconstruction. PLoS ONE. 2014; 9(11):e106319. doi:10.1371/journal.pone.0106319.
Akavia UD, Litvin O, Kim J, Sanchez-Garcia F, Kotliar D, Causton HC, Pochanard P, Mozes E, Garraway LA, Pe'er D. An integrated approach to uncover drivers of cancer. Cell. 2010; 143(6):1005–17. doi:10.1016/j.cell.2010.11.013.
Tang H, Xiao G, Behrens C, Schiller J, Allen J, Chow CW, Suraokar M, Corvalan A, Mao J, White MA, Wistuba I, Minna JD, Xie Y. A 12-gene set predicts survival benefits from adjuvant chemotherapy in non-small cell lung cancer patients. Clin Cancer Res. 2013; 19(6):1577–86. doi:10.1158/1078-0432.CCR-12-2321.
Cooper GF, Herskovits E. A Bayesian method for the induction of probabilistic networks from data. Mach Learn. 1992; 9(4):309–47. doi:10.1023/A:1022649401552.
Ellis B, Wong WH. Learning causal Bayesian network structures from experimental data. J Am Stat Assoc. 2008; 103(482):778–89. doi:10.1198/016214508000000193.
Liang FM, Zhang J. Learning Bayesian networks for discrete data. Comput Stat Data Anal. 2009; 53(4):865–76. doi:10.1016/j.csda.2008.10.007.
Needham CJ, Bradford JR, Bulpitt AJ, Westhead DR. Inference in Bayesian networks. Nat Biotechnol. 2006; 24(1):51–3. doi:10.1038/nbt0106-51.
Sachs K, Gifford D, Jaakkola T, Sorger P, Lauffenburger DA. Bayesian network approach to cell signaling pathway modeling. Sci STKE. 2002; 2002(148):pe38. doi:10.1126/stke.2002.148.pe38.
Langfelder P, Horvath S. WGCNA: an R package for weighted correlation network analysis. BMC Bioinforma. 2008; 9:559. doi:10.1186/1471-2105-9-559.
Yuan M, Lin Y. Model selection and estimation in the Gaussian graphical model. Biometrika. 2007; 94(1):19–35. doi:10.1093/biomet/asm018.
Friedman J, Hastie T, Tibshirani R. Sparse inverse covariance estimation with the graphical lasso. Biostatistics. 2008; 9(3):432–41. doi:10.1093/biostatistics/kxm045.
Witten DM, Friedman JH, Simon N. New insights and faster computations for the graphical lasso. J Comput Graph Stat. 2011; 20(4):892–900. doi:10.1198/jcgs.2011.11051a.
Meinshausen N, Buhlmann P. High-dimensional graphs and variable selection with the lasso. Ann Stat. 2006; 34(3):1436–62. doi:10.1214/009053606000000281.
Peng J, Wang P, Zhou N, Zhu J. Partial correlation estimation by joint sparse regression models. J Am Stat Assoc. 2009; 104(486):735–46. doi:10.1198/jasa.2009.0126.
Basso K, Margolin AA, Stolovitzky G, Klein U, Dalla-Favera R, Califano A. Reverse engineering of regulatory networks in human B cells. Nat Genet. 2005; 37(4):382–90. doi:10.1038/ng1532.
Margolin AA, Nemenman I, Basso K, Wiggins C, Stolovitzky G, Dalla Favera R, Califano A. ARACNE: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinforma. 2006; 7(Suppl 1):S7. doi:10.1186/1471-2105-7-S1-S7.
Zhang X, Zhao XM, He K, Lu L, Cao Y, Liu J, Hao JK, Liu ZP, Chen L. Inferring gene regulatory networks from gene expression data by path consistency algorithm based on conditional mutual information. Bioinformatics. 2012; 28(1):98–104. doi:10.1093/bioinformatics/btr626.
Zhang X, Zhao J, Hao JK, Zhao XM, Chen L. Conditional mutual inclusive information enables accurate quantification of associations in gene regulatory networks. Nucleic Acids Res. 2015; 43(5):e31. doi:10.1093/nar/gku1315.
Bansal M, Belcastro V, Ambesi-Impiombato A, di Bernardo D. How to infer gene networks from expression profiles. Mol Syst Biol. 2007; 3:78. doi:10.1038/msb4100120.
Allen JD, Xie Y, Chen M, Girard L, Xiao G. Comparing statistical methods for constructing large scale gene networks. PLoS ONE. 2012; 7(1):e29348. doi:10.1371/journal.pone.0029348.
Pan W. Network-based multiple locus linkage analysis of expression traits. Bioinformatics. 2009; 25(11):1390–6. doi:10.1093/bioinformatics/btp177.
Pan W, Xie BH, Shen XT. Incorporating predictor network in penalized regression with application to microarray data. Biometrics. 2010; 66(2):474–84. doi:10.1111/j.1541-0420.2009.01296.x.
Wei P, Pan W. Incorporating gene networks into statistical tests for genomic data via a spatially correlated mixture model. Bioinformatics. 2008; 24(3):404–11. doi:10.1093/bioinformatics/btm612.
Babu MM, Luscombe NM, Aravind L, Gerstein M, Teichmann SA. Structure and evolution of transcriptional regulatory networks. Curr Opin Struct Biol. 2004; 14(3):283–91. doi:10.1016/j.sbi.2004.05.004.
Li JJ, Xie D. RACK1, a versatile hub in cancer. Oncogene. 2015; 34(15):1890–8. doi:10.1038/onc.2014.127.
Selvanathan SP, Graham GT, Erkizan HV, Dirksen U, Natarajan TG, Dakic A, Yu S, Liu X, Paulsen MT, Ljungman ME, Wu CH, Lawlor ER, Uren A, Toretsky JA. Oncogenic fusion protein ews-fli1 is a network hub that regulates alternative splicing. Proc Natl Acad Sci USA. 2015; 112(11):1307–16. doi:10.1073/pnas.1500536112.
Liu Q, Ihler A. Learning scale free networks by reweighted L1 regularization. In: AISTATS: 2011. p. 40–48.
Batada NN, Reguly T, Breitkreutz A, Boucher L, Breitkreutz BJ, Hurst LD, Tyers M. Stratus not altocumulus: a new view of the yeast protein interaction network. PLoS Biol. 2006; 4(10):e317. doi:10.1371/journal.pbio.0040317.
Ekman D, Light S, Bjorklund AK, Elofsson A. What properties characterize the hub proteins of the protein-protein interaction network of Saccharomyces cerevisiae? Genome Biol. 2006; 7(6):R45. doi:10.1186/gb-2006-7-6-r45.
Schafer J, Strimmer K. A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Stat Appl Genet Mol Biol. 2005; 4:32. doi:10.2202/1544-6115.1175.
Efron B. Local false discovery rates. 2005. Available at: http://statweb.stanford.edu/~ckirby/brad/papers/2005LocalFDR.pdf. Accessed 9 Mar.
Tibshirani R. Regression shrinkage and selection via the lasso. J R Stat Soc Series B-Methodological. 1996; 58(1):267–88.
Zhao P, Yu B. On model selection consistency of lasso. J Mach Learn Res. 2006; 7:2541–63.
Wu TT, Lange K. Coordinate descent algorithms for lasso penalized regression. Ann Appl Stat. 2008; 2(1):224–44. doi:10.1214/07-Aoas147.
Banerjee O, El Ghaoui L, d'Aspremont A. Model selection through sparse maximum likelihood estimation for multivariate gaussian or binary data. J Mach Learn Res. 2008; 9:485–516.
Mazumder R, Hastie T. The graphical lasso: New insights and alternatives. Electron J Stat. 2012; 6(0):2125–149. doi:10.1214/12-ejs740.
Lange K, Hunter DR, Yang I. Optimization transfer using surrogate objective functions. J Comput Graph Stat. 2000; 9(1):1–20.
Faith JJ, Hayete B, Thaden JT, Mogno I, Wierzbowski J, Cottarel G, Kasif S, Collins JJ, Gardner TS. Large-scale mapping and validation of escherichia coli transcriptional regulation from a compendium of expression profiles. PLoS Biol. 2007; 5(1):8. doi:10.1371/journal.pbio.0050008.
Meyer PE, Lafitte F, Bontempi G. minet: an R/Bioconductor package for inferring large transcriptional networks using mutual information. BMC Bioinforma. 2008; 9:461.
Spirtes P, Glymour C, Scheines R. Causation, Prediction, and Search, 2nd ed. Boston: The MIT Press; 2000.
Lauritzen SL. Graphical Models. New York: Oxford University Press Inc.; 1996. http://books.google.com/books?id=mGQWkx4guhAC.
Yu D, Son W, Lim J, Xiao G. Statistical completion of a partially identified graph with applications for the estimation of gene regulatory networks. Biostatistics. 2015. doi:10.1093/biostatistics/kxv013.
Prasad TSK, Goel R, Kandasamy K, Keerthikumar S, Kumar S, Mathivanan S, Telikicherla D, Raju R, Shafreen B, Venugopal A, Balakrishnan L, Marimuthu A, Banerjee S, Somanathan DS, Sebastian A, Rani S, Ray S, Harrys Kishore CJ, Kanth S, Ahmed M, Kashyap MK, Mohmood R, Ramachandra YL, Krishna V, Rahiman BA, Mohan S, Ranganathan P, Ramabadran S, Chaerkady R, Pandey A. Human protein reference database 2009 update. Nucleic Acids Res. 2009; 37(suppl 1):767–72. doi:10.1093/nar/gkn892.
Faith JJ, Driscoll ME, Fusaro VA, Cosgrove EJ, Hayete B, Juhn FS, Schneider SJ, Gardner TS. Many microbe microarrays database: uniformly normalized affymetrix compendia with structured experimental metadata. Nucleic Acids Res. 2008; 36(Database issue):866–70. doi:10.1093/nar/gkm815.
Salgado H, Peralta-Gil M, Gama-Castro S, Santos-Zavaleta A, Muniz-Rascado L, Garcia-Sotelo JS, Weiss V, Solano-Lira H, Martinez-Flores I, Medina-Rivera A, Salgado-Osorio G, Alquicira-Hernandez S, Alquicira-Hernandez K, Lopez-Fuentes A, Porron-Sotelo L, Huerta AM, Bonavides-Martinez C, Balderas-Martinez YI, Pannier L, Olvera M, Labastida A, Jimenez-Jacinto V, Vega-Alvarado L, Del Moral-Chavez V, Hernandez-Alvarez A, Morett E, Collado-Vides J. Regulondb v8.0: omics data sets, evolutionary conservation, regulatory phrases, cross-validated gold standards and more. Nucleic Acids Res. 2013; 41(Database issue):203–13. doi:10.1093/nar/gks1201.
Jemal A, Siegel R, Xu J, Ward E. Cancer statistics, 2010. CA Cancer J Clin. 2010; 60(5):277–300. doi:10.3322/caac.20073.
Shedden K, Taylor JM, Enkemann SA, Tsao MS, Yeatman TJ, Gerald WL, Eschrich S, Jurisica I, Giordano TJ, Misek DE, Chang AC, Zhu CQ, Strumpf D, Hanash S, Shepherd FA, Ding K, Seymour L, Naoki K, Pennell N, Weir B, Verhaak R, Ladd-Acosta C, Golub T, Gruidl M, Sharma A, Szoke J, Zakowski M, Rusch V, Kris M, Viale A, Motoi N, Travis W, Conley B, Seshan VE, Meyerson M, Kuick R, Dobbin KK, Lively T, Jacobson JW, Beer DG. Gene expression-based survival prediction in lung adenocarcinoma: a multi-site, blinded validation study. Nat Med. 2008; 14(8):822–7. doi:10.1038/nm.1790.
Pounds S, Morris SW. Estimating the occurrence of false positives and false negatives in microarray studies by approximating and partitioning the empirical distribution of p-values. Bioinformatics. 2003; 19(10):1236–42. doi:10.1093/bioinformatics/btg148.
R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2015. Available at https://www.R-project.org/.
Zang X, Chen M, Zhou Y, Xiao G, Xie Y, Wang X. Identifying CDKN3 gene expression as a prognostic biomarker in lung adenocarcinoma via meta-analysis. Cancer Inform. 2015; 14(Suppl 2):183–91. doi:10.4137/CIN.S17287.
Fan C, Chen L, Huang Q, Shen T, Welsh EA, Teer JK, Cai J, Cress WD, Wu J. Overexpression of major CDKN3 transcripts is associated with poor survival in lung adenocarcinoma. Br J Cancer. 2015; 113(12):1735–43. doi:10.1038/bjc.2015.378.
Tomida S, Takeuchi T, Shimada Y, Arima C, Matsuo K, Mitsudomi T, Yatabe Y, Takahashi T. Relapse-related molecular signature in lung adenocarcinomas identifies patients with dismal prognosis. J Clin Oncol. 2009; 27(17):2793–9. doi:10.1200/JCO.2008.19.7053.
Jones MH, Virtanen C, Honjoh D, Miyoshi T, Satoh Y, Okumura S, Nakagawa K, Nomura H, Ishikawa Y. Two prognostically significant subtypes of high-grade lung neuroendocrine tumours independent of small-cell and large-cell neuroendocrine carcinomas identified by gene expression profiles. The Lancet. 2004; 363(9411):775–81. doi:10.1016/S0140-6736(04)15693-6.
Ding L, Getz G, Wheeler DA, Mardis ER, McLellan MD, Cibulskis K, Sougnez C, Greulich H, Muzny DM, Morgan MB, Fulton L, Fulton RS, Zhang Q, Wendl MC, Lawrence MS, Larson DE, Chen K, Dooling DJ, Sabo A, Hawes AC, Shen H, Jhangiani SN, Lewis LR, Hall O, Zhu Y, Mathew T, Ren Y, Yao J, Scherer SE, Clerc K, Metcalf GA, Ng B, Milosavljevic A, Gonzalez-Garay ML, Osborne JR, Meyer R, Shi X, Tang Y, Koboldt DC, Lin L, Abbott R, Miner TL, Pohl C, Fewell G, Haipek C, Schmidt H, Dunford-Shore BH, Kraja A, Crosby SD, Sawyer CS, Vickery T, Sander S, Robinson J, Winckler W, Baldwin J, Chirieac LR, Dutt A, Fennell T, Hanna M, Johnson BE, Onofrio RC, Thomas RK, Tonon G, Weir BA, Zhao X, Ziaugra L, Zody MC, Giordano T, Orringer MB, Roth JA, Spitz MR, Wistuba II, Ozenberger B, Good PJ, Chang AC, Beer DG, Watson MA, Ladanyi M, Broderick S, Yoshizawa A, Travis WD, Pao W, Province MA, Weinstock GM, Varmus HE, Gabriel SB, Lander ES, Gibbs RA, Meyerson M, Wilson RK. Somatic mutations affect key pathways in lung adenocarcinoma. Nature. 2008; 455(7216):1069–75.
Kollareddy M, Zheleva D, Dzubak P, Brahmkshatriya PS, Lepsik M, Hajduch M. Aurora kinase inhibitors: progress towards the clinic. Invest New Drugs. 2012; 30(6):2411–32. doi:10.1007/s10637-012-9798-6.
We gratefully thank Jessie Norris for language editing of the manuscript.
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science, ICT and Future Planning (NRF-2015R1C1A1A02036312 and NRF-2011-0030810), and the National Institutes of Health grants (1R01CA17221, 1R01GM117597, R15GM113157).
The Escherichia coli dataset analyzed during the current study is available in the Many Microbe Microarrays database (M3D) [48], http://m3d.mssm.edu. The RegulonDB [49] dataset used for validation of the result is available at http://regulondb.ccg.unam.mx. The Lung Cancer Consortium study dataset analyzed during this study is included in the published article [51]. The information on cancer genes used during this study is available from FoundationOneTM, http://www.foundationone.com. The proposed method ESPACE is implemented in the R package "espace", which is available from https://sites.google.com/site/dhyeonyu/software.
DY, JL and GX drafted the manuscript. DY and JL formulated the proposed model and performed the simulation studies. GX performed the interpretation of the results in the application to the lung adenocarcinoma dataset. GX designed the preprocessing procedure in the real-data applications. FL and XW helped in the verification of the proposed model and revised the manuscript. All authors read and approved the final manuscript.
We have no financial or personal relationships with other people or organizations that could cause a conflict of interest. The authors declare that they have no competing interests.
Department of Statistics, Inha University, Incheon, Korea
Donghyeon Yu
Department of Statistics, Seoul National University, Seoul, Korea
Johan Lim
Department of Statistical Science, Southern Methodist University, 6425 Boaz Lane, Dallas, TX 75205, USA
Xinlei Wang
Department of Biostatistics, University of Florida, 2004 Mowry Road, Gainesville, FL 32611, USA
Faming Liang
Department of Clinical Sciences, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX 75390, USA
Guanghua Xiao
Correspondence to Guanghua Xiao.
Yu, D., Lim, J., Wang, X. et al. Enhanced construction of gene regulatory networks using hub gene information. BMC Bioinformatics 18, 186 (2017). https://doi.org/10.1186/s12859-017-1576-1
Hub gene
Partial correlation
Network analysis
$69,157 in 1932 → 1991
$69,157 in 1932 is worth $687,531.64 in 1991
$69,157 in 1932 has the same purchasing power as $687,531.64 in 1991. Over the 59 years this is a change of $618,374.64.
The average inflation rate of the dollar between 1932 and 1991 was 3.81% per year. The cumulative price increase of the dollar over this time was 894.16%.
The value of $69,157 from 1932 to 1991
So what does this data mean? It means that prices in 1991 are 894.16% higher than the average prices in 1932. A dollar in 1991 can buy 10.06% of what it could buy in 1932.
These inflation figures use the Bureau of Labor Statistics (BLS) consumer price index to calculate the value of $69,157 between 1932 and 1991.
The inflation rate for 1932 was -9.87%, while the inflation rate for 1991 was 4.21%. The 1991 inflation rate is higher than the average inflation rate of 3.34% per year between 1991 and 2021.
USD Inflation Since 1913
The chart below shows the inflation rate from 1913 when the Bureau of Labor Statistics' Consumer Price Index (CPI) was first established.
The Buying Power of $69,157 in 1932
We can look at the buying power equivalent for $69,157 in 1932 to see how much you would need to adjust for in order to beat inflation. For 1932 to 1991, if you started with $69,157 in 1932, you would need to have $687,531.64 by 1991 to keep up with inflation rates.
So if we are saying that $69,157 is equivalent to $687,531.64 over time, you can see the core concept of inflation in action. The "real value" of a single dollar decreases over time. It will pay for fewer items at the store than it did previously.
In the chart below you can see how the value of the dollar is worth less over 59 years.
Value of $69,157 Over Time
In the table below we can see the value of the US Dollar over selected years. According to the BLS, each of these amounts is equivalent in terms of what that amount could purchase at the time.

Year | Dollar Value | Inflation Rate
1934 | $67,642.61 | 3.08%
1942 | $82,281.69 | 10.88%
1947 | $112,569.42 | 14.36%
1948 | $121,655.74 | 8.07%
1949 | $120,141.36 | -1.24%
US Dollar Inflation Conversion
If you're interested to see the effect of inflation on various 1932 amounts, the table below shows how much each amount would be worth in 1991 based on the price increase of 894.16%.
Initial Value | Equivalent Value
$1.00 in 1932 $9.94 in 1991
$5.00 in 1932 $49.71 in 1991
$10.00 in 1932 $99.42 in 1991
$50.00 in 1932 $497.08 in 1991
$100.00 in 1932 $994.16 in 1991
$500.00 in 1932 $4,970.80 in 1991
$1,000.00 in 1932 $9,941.61 in 1991
$5,000.00 in 1932 $49,708.03 in 1991
$10,000.00 in 1932 $99,416.06 in 1991
$50,000.00 in 1932 $497,080.29 in 1991
$100,000.00 in 1932 $994,160.58 in 1991
$500,000.00 in 1932 $4,970,802.92 in 1991
$1,000,000.00 in 1932 $9,941,605.84 in 1991
Calculate Inflation Rate for $69,157 from 1932 to 1991
To calculate the inflation rate of $69,157 from 1932 to 1991, we use the following formula:
$$\dfrac{ 1932\; USD\; value \times CPI\; in\; 1991 }{ CPI\; in\; 1932 } = 1991\; USD\; value $$
We then replace the variables with the historical CPI values. The CPI in 1932 was 13.7 and 136.2 in 1991.
$$\dfrac{ \$69,157 \times 136.2 }{ 13.7 } = \text{ \$687,531.64 } $$
$69,157 in 1932 has the same purchasing power as $687,531.64 in 1991.
To work out the total inflation rate for the 59 years between 1932 and 1991, we can use a different formula:
$$ \dfrac{\text{CPI in 1991 } - \text{ CPI in 1932 } }{\text{CPI in 1932 }} \times 100 = \text{Cumulative rate for 59 years} $$
Again, we can replace those variables with the correct Consumer Price Index values to work out the cumulative rate:
$$ \dfrac{\text{ 136.2 } - \text{ 13.7 } }{\text{ 13.7 }} \times 100 = \text{ 894.16\% } $$
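Both formulas are easy to verify in a few lines of code. The sketch below (Python; the function names are ours, not part of any BLS tooling) reproduces the two results above from the quoted CPI values.

```python
# Verifies the two calculations above from the CPI values quoted in the text.
CPI_1932 = 13.7
CPI_1991 = 136.2

def equivalent_value(amount: float, cpi_start: float, cpi_end: float) -> float:
    """End-year value with the same purchasing power as `amount` in the start year."""
    return amount * cpi_end / cpi_start

def cumulative_inflation(cpi_start: float, cpi_end: float) -> float:
    """Total percentage price increase between the two years."""
    return (cpi_end - cpi_start) / cpi_start * 100

print(round(equivalent_value(69_157, CPI_1932, CPI_1991), 2))  # 687531.64
print(round(cumulative_inflation(CPI_1932, CPI_1991), 2))      # 894.16
```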
Inflation Rate Definition
The inflation rate is the percentage increase in the average level of prices of a basket of selected goods over time. It indicates a decrease in the purchasing power of currency and results in an increased consumer price index (CPI). Put simply, the inflation rate is the rate at which the general prices of consumer goods increase as the currency's purchasing power falls.
The most common cause of inflation is an increase in the money supply, though it can be caused by many different circumstances and events. The value of the floating currency starts to decline when it becomes abundant. What this means is that the currency is not as scarce and, as a result, not as valuable.
By comparing a list of standard products (the CPI), the change in price over time is measured by the inflation rate. The prices of products such as milk, bread, and gas are tracked over time after they are grouped together. When these products' prices increase over time, inflation indicates that the money used to buy them is not worth as much as it used to be.
The inflation rate is basically the rate at which money loses its value when compared to the basket of selected goods – which is a fixed set of consumer products and services that are valued on an annual basis.
Microformal geometry and homotopy algebras
Theodore Voronov
We extend the category of (super)manifolds and their smooth mappings by introducing a notion of microformal or ``thick'' morphisms. They are formal canonical relations of a special form, constructed with the help of formal power expansions in cotangent directions. The result is a formal category so that its composition law is also specified by a formal power series. A microformal morphism acts on functions by an operation of pullback, which is in general a nonlinear transformation. More precisely, it is a formal mapping of formal manifolds of even functions (bosonic fields), which has the property that its derivative for every function is a ring homomorphism. This suggests an abstract notion of a ``nonlinear algebra homomorphism'' and the corresponding extension of the classical ``algebraic-functional'' duality. There is a parallel fermionic version.
The obtained formalism provides a general construction of $L_{\infty}$-morphisms for functions on homotopy Poisson ($P_{\infty}$-) or homotopy Schouten ($S_{\infty}$-) manifolds as pullbacks by Poisson microformal morphisms. We also show that the notion of the adjoint can be generalized to nonlinear operators as a microformal morphism. By applying this to $L_{\infty}$-algebroids, we show that an $L_{\infty}$-morphism of $L_{\infty}$-algebroids induces an $L_{\infty}$-morphism of the "homotopy Lie--Poisson" brackets for functions on the dual vector bundles. We apply this construction to higher Koszul brackets on differential forms and to triangular $L_{\infty}$-bialgebroids. We also develop a quantum version (for the bosonic case), whose relation with the classical version is like that of the Schr\"odinger equation with the Hamilton--Jacobi equation. We show that the nonlinear pullbacks by microformal morphisms are the limits at $\hbar\to 0$ of certain ``quantum pullbacks'', which are defined as special form Fourier integral operators.
Proceedings of the Steklov Institute of Mathematics
Activities (2018)
Thick morphisms and homotopy bracket structures
Theodore Voronov (Invited speaker)
Thick morphisms of supermanifolds and homotopy algebras
Voronov, T. (2019). Microformal geometry and homotopy algebras. Proceedings of the Steklov Institute of Mathematics, 302(1), 88-129. https://doi.org/10.1134/S0081543818060056
Research | Open Access | Published: 14 August 2017
A prospective study to evaluate the risk malignancy index and its diagnostic implication in patients with suspected ovarian mass
Santosh Kumar Dora1,
Atal Bihari Dandapat1,
Benudhar Pande1 &
Jatindra Prasad Hota1
There is as yet no universal screening method for discriminating between benign and malignant adnexal masses. Various authors have tried tumor markers, imaging studies, and cytology, but none is a definitive screening method for ovarian cancer; a combined diagnostic modality, the risk of malignancy index (RMI), has therefore come into practice. With this background, we conducted our study "Evaluation of the risk of malignancy index and its diagnostic value in patients with adnexal masses".
The aim of the study was to determine the effectiveness of the risk of malignancy index (RMI 3) in the preoperative discrimination between benign and malignant masses and to identify the most suitable cut-off value. We conducted a prospective study from November 2014 to October 2016. We used parameters including menopausal status, ultrasound features, and the serum level of the tumor marker CA-125 to calculate RMI 3. The RMI was then compared with the histopathological report, which was taken as the gold standard.
In the present study, malignant tumors constituted 54.76% (69/126) and benign tumors 45.24% (57/126) of cases. Bilaterality and multilocularity were more frequent in malignant than in benign tumors, but with P-values > 0.05 these differences were not statistically significant. Solid areas were seen in 81 masses, of which 24.69% (20/81) were benign and 75.30% (61/81) malignant. Similarly, ascites was found in 38.09% (48/126) of cases, of which 18.75% (9/48) were benign and 81.25% (39/48) were confirmed malignant. Ascites and solid areas on ultrasound were significantly associated with malignancy (p = 0.000). Compared with the individual parameters (ultrasound score, CA-125, and menopausal score), the risk of malignancy index at a cut-off point of 236 showed a high sensitivity (72.5%), specificity (98.2%), positive predictive value (98.1%), negative predictive value (74.7%), and diagnostic accuracy (84.13%) for discriminating malignant from benign pelvic masses.
The simplicity and applicability of the method in the primary evaluation of patients with pelvic masses make it a good option in daily clinical practice in non-specialized gynecologic departments, and also in developing countries where access to a gynecologic oncologist is limited.
The presence of an adnexal mass is a frequent reason for a woman to be referred to a gynecologist. An adnexal mass may be benign or malignant. It is the risk of malignancy that propels us toward early, accurate, and prompt diagnosis to lessen mortality and morbidity. In India, ovarian cancer has emerged as the fourth most common malignancy among females, with incidence varying between 5.4 and 8 per 100,000 population in different parts of the country [1]. As the symptoms of ovarian cancer are very vague, such as bloating, pelvic or abdominal pain, poor appetite, feeling full quickly, and urinary urgency, it is also known as a "silent killer". Its silent occurrence and slow progression, added to the fact that few effective methods for early diagnosis exist and there is no universal screening method for malignant ovarian tumors, have made its mortality rate the highest among gynecologic malignancies [2]. The main challenge is to identify patients with high-risk adnexal masses preoperatively, and this is compounded by the lack of a definitive noninvasive diagnostic test. The discrimination between benign and malignant adnexal masses is central to decisions regarding clinical management and surgical planning in such patients. A standardized method for preoperative identification of probably malignant masses would allow optimization of first-line treatment for women with ovarian cancer. Early identification of ovarian carcinomas and referral to a gyneco-oncologist can facilitate accurate staging of the disease and optimal cytoreductive treatment, enhancing patient survival [3, 4]. Currently, clinical examination, ultrasound assessment, and assay of tumor markers are part of the standard work-up for an adnexal mass, but none of these indicators alone is very sensitive or specific for detecting malignancy in ovarian masses.
To reduce the diagnostic dilemma between benign and malignant ovarian masses, a formula-based scoring system known as the risk of malignancy index (RMI) was introduced by Jacobs et al. [5] in 1990, termed RMI 1. It is a product of the ultrasound findings (U), the menopausal status (M), and the serum CA-125 level (RMI = U x M x CA-125). The original RMI (RMI 1) was modified in 1996 by Tingulstad et al. [6] (RMI 2) and again in 1999 (RMI 3) [7]. The difference between the newer indices lies in the different scoring of the ultrasound characteristics and menopausal status. The objective of our study was to assess the sensitivity and specificity of RMI 3 prospectively so that women with an ovarian mass can be referred to an appropriate specialist.
Type of study
It was a prospective diagnostic study. The study period was from November 2014 to October 2016. All patients with an ovarian mass admitted to the gynecology department of VIMSAR, Burla, India were included in the study. A total of 126 patients were selected by using a purposive sampling technique.
Sampling unit
Each patient with an adnexal mass admitted to the department of obstetrics & gynecology, VIMSAR, Burla for treatment.
Ethical statement
The study was approved by the VIREC ethical committee of the hospital. The ethical committee approval number is 2014/P-I-RP/14 M–O-OBG036/032. The aim of the study was explained appropriately and informed written consent was obtained from all the patients.
Clinical samples
Women with an already diagnosed ovarian malignancy receiving chemotherapy, masses arising from the urinary tract or gastrointestinal tract, and pregnancy with its complications (ectopic pregnancy, molar pregnancy, and post-abortive masses) were excluded from the study.
Information abstracted included age, parity, menstrual status, family history of cancer, personal history of previous malignancies, symptoms, and duration of symptoms. Leading symptoms such as abdominal mass, swelling/discomfort, abdominal pain, gastrointestinal symptoms, urinary symptoms, and generalized malaise and fatigue were scrutinized.
All patients underwent routine physical examination. Particular attention was paid to breast examination, lymphadenopathy, abdominal examination and pelvic examination.
Besides the routine investigations, CA-125 serum levels, abdominal ultrasounds findings, and menopausal status of all the cases were recorded preoperatively.
The modified RMI (RMI 3) for each woman was calculated using the product of the ultrasound score (U), the menopausal score (M), and the absolute value of serum CA-125 inserted in the following formula:
$$ \mathrm{RMI} = U \times M \times \text{serum CA-125} $$
Five ultrasound features suggestive of malignancy were sought to derive U, including multilocularity (more than bilocular), presence of solid areas, bilaterality, presence of ascites, and extraovarian tumors or evidence of metastases. A U of 1 was given if none or one of these findings was detected and a score of 3 if two or more of these features were present. Postmenopausal status was defined as more than one year of amenorrhea, or age older than 50 years for women who had undergone hysterectomy; these patients scored M = 3. All other patients, who did not meet these criteria, were defined as premenopausal and scored M = 1. The absolute value of serum CA-125 (U/ml) was entered directly into the mentioned equation. The histopathological diagnosis was considered the gold standard for defining the outcomes. Hence, the RMI was evaluated for sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and diagnostic accuracy, with reference to the actual presence of a malignant or benign pelvic tumor.
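As a concrete illustration of the scoring rules just described, the sketch below encodes the RMI 3 calculation in Python. The function and argument names are ours; the five ultrasound features are passed as a simple count.

```python
# Minimal sketch of the RMI 3 score as defined above (names are ours).
def rmi3(n_ultrasound_features: int, postmenopausal: bool, ca125: float) -> float:
    """RMI = U x M x serum CA-125 (U/ml).

    n_ultrasound_features counts how many of the five features are present:
    multilocularity, solid areas, bilaterality, ascites, metastases.
    """
    u = 1 if n_ultrasound_features <= 1 else 3  # U: 1 for 0-1 features, 3 for 2+
    m = 3 if postmenopausal else 1              # M: 3 postmenopausal, 1 otherwise
    return u * m * ca125

# Example: two ultrasound features, postmenopausal, CA-125 = 60 U/ml
print(rmi3(2, True, 60.0))  # 3 * 3 * 60 = 540.0 -> above the 236 cut-off
```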
Laparotomy was done in all cases. The surgical procedures performed were unilateral salpingo-oophorectomy, unilateral salpingo-oophorectomy with biopsy of the contralateral ovary, total abdominal hysterectomy with unilateral salpingo-oophorectomy, and total abdominal hysterectomy with bilateral salpingo-oophorectomy, with omentectomy, with bilateral pelvic lymph node dissection, and debulking surgery. Surgical staging was carried out in suspected malignant ovarian tumors. The pelvic and para-aortic lymph nodes were evaluated and all enlarged lymph nodes resected. Infracolic omentectomy was performed. The other operative findings recorded were gross appearance and cut surface, ascites, site of extraovarian involvement, and tumor size. The specimen was sent for histopathological study in the department of pathology, VIMSAR, Burla. Tumors were classified according to World Health Organization definitions, and malignant tumors were staged according to the criteria of the International Federation of Gynecology and Obstetrics (2014).
All statistical analyses were done using SPSS version 24 (IBM) and Microsoft Excel 2016 for Windows. A univariate statistical analysis was performed for all sonographic parameters and patient age. The Kolmogorov-Smirnov test was used to evaluate the normal distribution of continuous data. According to their distribution, they were compared with the use of Student's t-test. The proportions of malignant and benign cases with different sonographic parameters were compared with chi-square and Fisher's exact tests. To determine the best cut-off value to discriminate between benign and malignant adnexal masses, a receiver operating characteristic (ROC) curve was plotted and the odds ratio with 95% confidence interval was calculated. The best cut-off value was chosen according to the highest sensitivity with the lowest false-positive rate. A P-value <0.05 was considered significant.
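The cut-off selection just described ("highest sensitivity with the lowest false-positive rate") can be formalized, for example, as maximizing Youden's J statistic along the ROC curve. The sketch below shows one way to do this with scikit-learn; the variable names are placeholders, and Youden's J is our choice of criterion, not necessarily the exact rule the authors applied.

```python
# Illustrative ROC-based cut-off selection (assumes `scores` holds RMI values
# and `labels` the 0/1 histopathology outcome; Youden's J is our criterion).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def best_cutoff(scores: np.ndarray, labels: np.ndarray) -> float:
    fpr, tpr, thresholds = roc_curve(labels, scores)
    j = tpr - fpr  # Youden's J = sensitivity - false-positive rate
    return float(thresholds[np.argmax(j)])

# auc = roc_auc_score(labels, scores)   # area under the ROC curve
# cutoff = best_cutoff(scores, labels)  # candidate diagnostic threshold
```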
During the period from November 2014 to October 2016, 126 patients presented with ovarian masses; these were diagnosed and operated on at VSS Institute of Medical Sciences and Research (VIMSAR), Burla, India. Table 1 shows that, of the 126 cases studied, the most commonly encountered tumor was papillary serous cystadenocarcinoma, 25.39% (32/126), followed by mucinous cystadenocarcinoma, 11.9% (15/126), mucinous cystadenoma, 11.9% (15/126), and dermoid cyst, 10.32% (13/126). In the present study, malignant tumors constituted 54.76% (69/126) and benign tumors 45.24% (57/126) of cases. Surface epithelial tumors were the commonest, constituting 79.4% (100/126), followed by germ cell tumors, 12.7% (16/126), and sex-cord stromal tumors, 2.4% (3/126). The detailed characteristics of age, ultrasound score, menopausal status, serum CA-125 levels, and RMI are summarized in Table 2. The average age of the patients with benign tumors was 37.12 ± 13.05 years, whereas for malignant tumors it was 47.30 ± 11.43 years. Below the age of 20 years, a total of 5.5% (7/126) of ovarian tumors were found, of which 3.97% (5/126) were benign and 1.59% (2/126) malignant. Above the age of 60 years, a total of 10.32% (13/126) of ovarian tumors were found, of which 3.97% (5/126) were benign and 6.35% (8/126) malignant. There was a significant difference in mean age: 47.30 ± 11.43 years for malignant adnexal masses compared to 37.12 ± 13.05 years for benign adnexal masses (P-value = 0.000). Premenopausal patients predominated in our study with 61.1% (77/126) of cases, while 38.89% (49/126) of the affected patients were postmenopausal. 62.34% (48/77) of the premenopausal patients had benign disease, while 37.66% (29/77) had malignant disease. Among the postmenopausal patients, 18.4% (9/49) had benign disease, while 81.6% (40/49) had malignant disease. In the premenopausal age group, most of the ovarian masses were benign compared to postmenopausal patients, with a P-value of 0.000. The investigation revealed a bilateral adnexal mass in 27.78% (35/126) of cases, of which 45.72% (16/35) were found to be benign and 54.28% (19/35) malignant, as confirmed by histopathological examination. Bilaterality of adnexal masses was more frequent in malignant than in benign tumors, but a P-value of 0.947 was not significant in our study.
Table 1 Distributions ovarian tumors according to histopathology
Table 2 Distribution of cases according to Age, USG score, menopausal status, serum CA 125 levels and RMI
Multilocular lesions were found in 50% (63/126) of cases, of which 46.03% (29/63) were found to be benign and 53.97% (34/63) malignant after surgery, with no statistical significance in our study (P-value 0.858). Solid components were present in 64.28% (81/126) of cases, of which 24.69% (20/81) were found to be benign and 75.30% (61/81) malignant. Hence the presence of solid components in adnexal masses was more frequent in malignant than in benign tumors, as evidenced by a highly significant P-value of 0.000. Ascites was present in 38.09% (48/126) of cases, of which 18.75% (9/48) were found to be benign and 81.25% (39/48) were confirmed malignant. The presence of ascites in adnexal masses was more frequent in malignant than in benign tumors (P-value = 0.000). Evidence of metastasis on ultrasound was found in 14.28% (18/126) of cases, of which 5.5% (1/18) were found to be benign and 94.4% (17/18) malignant.
We assigned scores of 1 (absence of specific findings or presence of one finding) or 3 (two or more findings) to the subjects, depending on the ultrasound findings. 45.24% (57/126) of cases had an ultrasound score of 1, while 54.76% (69/126) were scored 3. An ultrasound score of 1 was more frequent among benign than malignant tumors (P-value = 0.000), and an ultrasound score of 3 was more frequent among malignant than benign tumors (P-value = 0.000). Of the 57 patients with an ultrasound score of 1, 61.4% (35/57) had benign disease, while 38.6% (22/57) had malignant disease. 54.76% (69/126) of patients in our series had an ultrasound score of 3, and among them, 31.88% (22/69) had benign and 68.12% (47/69) had malignant tumors. The mean value of CA-125 was 502.09 ± 1525.09 U/ml for malignant adnexal masses compared to 69.89 ± 44.10 U/ml for benign masses (P-value = 0.000).
The performance of CA-125 and RMI (Table 3, Fig. 1) is shown in the receiver operating characteristic (ROC) curve. The best performance was obtained for a serum CA-125 level of 143 U/ml (sensitivity 62.319%, specificity 96.491%, PPV 93.5%, NPV 67.5%, accuracy 77.78%, with the highest area under the ROC curve, i.e., 80.4%). The best performance obtained for RMI 3 was at the cut-off point of 236, with a sensitivity of 72.5%, a specificity of 98.2%, a PPV of 98.1%, an NPV of 74.7%, and an accuracy of 84.13%. There was an 85.5% increase in the odds of diagnosing malignant adnexal masses with use of the RMI compared to not using the RMI, and the relative risk of diagnosing malignant adnexal masses was 95.58% greater with use of the RMI than without it. 98% of patients with malignant adnexal masses showed a positive test result with the RMI, and 75% of patients with non-malignant adnexal masses showed a negative test result. An RMI ≥ 236 increases the probability of diagnosing a malignant adnexal mass from 54.8% to 98.15%, while an RMI < 236 decreases it from 54.8% to 22.24%. The diagnostic accuracy of the RMI was 84.13%. Taking into account the best obtained cut-off point for RMI 3, 1 case was false positive (dermoid cyst) and 50 cases were true positive (RMI ≥ 236, malignant tumors), while 56 cases were true negative and 19 cases were false negative (RMI < 236, malignant tumors); of the false negatives, 2 cases were dysgerminoma, 1 case was a Sertoli-Leydig cell tumor, 1 was a yolk sac tumor, 10 cases were serous cystadenocarcinoma, and 5 cases were mucinous cystadenocarcinoma.
Table 3 Evaluation of RMI, CA-125, USG score and menopausal status
ROC curve of CA-125 in discriminating between benign and malignant adnexal masses
We compared the diagnostic performance of an RMI 3 score ≥ 236 against a CA-125 level > 143 U/ml, an ultrasound score of 3, and a menopausal score of 3. Tables 4 and 5 and Fig. 2 summarize the findings from this analysis. Among the criteria, an RMI score ≥ 236 had the highest sensitivity, specificity, PPV, NPV, and diagnostic accuracy when compared with the individual parameters.
Table 4 Point estimates and 95% confidence interval of CA-125 at various cut off points
Table 5 Point estimates and 95% confidence interval of RMI-3 at various cut off points
ROC curve showing the relationship between specificity and sensitivity for RMI-3 in differentiating between benign and malignant pelvic masses
About 10% of women undergo exploratory surgery for evaluation of ovarian masses during their lifetime [8]. Prompt identification of ovarian malignancies and referral to a gyneco-oncologist can enhance patient survival rates [9], but a single method that can accurately predict ovarian malignancy is still unavailable. In the preoperative assessment of an adnexal mass, the major diagnostic tools are still clinical impression and ultrasound examination. However, given the limitations of clinical impression and sonographic findings in predicting ovarian malignancy, it is not surprising that gynecologists may detect an unexpected ovarian malignancy intraoperatively. Often an improper incision is made, the bowel is not adequately prepared, or the surgeon is confronted with the need to perform an unplanned cytoreductive surgery. A scoring system that predicts ovarian malignancy can improve the chances of better preoperative counseling, better preoperative preparation, and, where appropriate, referral of the patient to a specialized center. Herein we report that the multiparametric RMI score can be a useful tool in the prediction of malignant ovarian disease in low-resource settings. Subsequent to the introduction of the RMI, the same research group re-evaluated their diagnostic method in a new group of patients admitted for pelvic masses and confirmed the sensitivity and specificity of the RMI and its superiority compared to the individual criteria [10]. The mean age of the patients with ovarian masses in our study was 42.69 years (range, 10 to 78 years). This is slightly higher than that reported in a similar study by Akdeniz et al. in 2009 [11]. In our study, 54.8% of the patients with an ovarian mass had malignant disease. Fifty-eight percent of malignancies occurred in postmenopausal patients and 42% among premenopausal patients. These data agree with earlier reports of similar incidence rates and preponderance in postmenopausal patients [11, 12]. Rao (2014) recently reported higher sensitivity, specificity, and positive and negative predictive values for a postmenopausal score of 3 [13]. In our study, this parameter had a higher specificity (84.2%) and positive predictive value (81.6%), but lower sensitivity (57.9%) and negative predictive value (62.3%) in assessing malignancy risk.
Ultrasonography is widely appreciated as the best imaging method for evaluation of ovarian pathology. Several groups have reported high sensitivity, specificity, and positive predictive values for this method [9]. In our study, an ultrasound score of 3 had a sensitivity of 68.1%, specificity of 61.4%, positive predictive value of 68.12%, and negative predictive value of 61.4% among the parameters evaluated.
Several candidate biomarkers and their combinations have been employed in assessing the risk of ovarian malignancies, albeit with varying efficiency [14]. The serum CA-125 level is widely appreciated as a useful biomarker for estimating the risk of ovarian cancer, though other gynecological pathology can also increase its levels. Myers et al. [15] earlier reported sensitivity and specificity of less than 80% for this marker in the prediction of ovarian cancers. Simsek et al. (2014) [16] reported a sensitivity of 78.6% and specificity of 63.5% for a CA-125 cut-off of 35 U/ml. Another report indicated a sensitivity of 88% and specificity of 97% for CA-125 at a higher cut-off of 88 U/ml [12]. In our study, CA-125 levels ≥35 U/ml had a sensitivity of 87%, a specificity of only 19.3%, a positive predictive value of 56.6%, and a negative predictive value of 55%. The best performance of CA-125 in our study was obtained at a cut-off of 143 U/ml, with sensitivity 62.32%, specificity 96.49%, positive predictive value 93.5%, negative predictive value 67.5%, and diagnostic accuracy 77.77%. We suggest that a higher prevalence of inflammatory and nonspecific uterine and ovarian pathology, such as pelvic inflammatory disease and endometriosis, might have contributed to elevated CA-125 levels in the majority of our patients; this, along with the variability of CA-125 levels across phases of the menstrual cycle in premenopausal patients with adnexal masses and its greater specificity for non-mucinous epithelial ovarian tumors, accounts for its low diagnostic performance in the detection of malignant ovarian disease.
The RMI is calculated from the serum CA-125 antigen level, menopausal status, and ultrasonographic findings [5]. Several retrospective and prospective studies have reported it to be the best available tool for triage and referral of ovarian malignancies [10, 17]. Its utility as a diagnostic tool depends on the prevalence of malignancy in the study population [16]. We observed a high prevalence of malignancy (54.8%) in our study group, significantly higher than some of the earlier reports of 30-43% [5, 10, 17]. Jacobs et al. [5] (1990), studying 143 patients, reported a sensitivity of 85.4% and specificity of 96.9% for this method, with a cut-off of 200. Subsequently, several groups have reported its superior sensitivity and specificity in estimating the risk of ovarian malignancy compared to other parameters [7, 17,18,19]. The RMI cut-offs in many studies ranged from 25 to 250 (reviewed in Geomini et al. 2009) [18]. Most studies reported increased diagnostic accuracy and performance with an RMI cut-off of 200 [5,6,7, 13, 19,20,21,22,23]. A recent study reported a sensitivity of 89.5%, specificity of 96.2%, positive predictive value of 77.3%, and negative predictive value of 98.4% [24] when a higher RMI cut-off of 238 was used for screening. Yamamoto et al. (2009) [18] reported a sensitivity and specificity of 75% and 91%, respectively, using a cut-off of 450. In this study, the RMI at the optimal cut-off point of 236 had a sensitivity of 72.5%, a specificity of 98.2%, a PPV of 98.1%, and an NPV of 74.7%. Bailey et al. [25], on 182 women with pelvic masses, indicated that an RMI at a cut-off point of 200 had a sensitivity of 88.5% for diagnosing invasive lesions, while Enakpene et al. [26], on 302 women with pelvic masses, indicated that an RMI at a cut-off point of 250 had a sensitivity of 88.2%, a specificity of 74.3%, a PPV of 71.3%, and an NPV of 90% for diagnosing invasive lesions. In the current study, the RMI at a cut-off point of 200 had a sensitivity of 73.9%, a specificity of 96.5%, a PPV of 96.2%, and an NPV of 75.3%. According to Table 6, the results of previous studies show that many studies found the best cut-off point for the RMI to be 200 [5,6,7, 10, 19,20,21,22, 27].
Table 6 Comparison of our results with previous studies
In a systematic review by Geomini et al. [17] in 2009, 116 diagnostic studies for adnexal malignancy were reviewed. The reported results showed that the RMI at a cut-off point of 200 had a sensitivity of 78% and a specificity of 87% for the diagnosis of malignant masses, which was similar to our results.
According to the results of Ulusoy et al. in 2007, the RMI at a cut-off level of 153 showed a sensitivity of 76.4%, a specificity of 77.9%, a PPV of 65.9%, and an NPV of 85.5% for the prediction of malignancy [19]. In the present study, the RMI at a cut-off level of 150 had a sensitivity of 79.7%, a specificity of 84.2%, a PPV of 85.9%, and an NPV of 77.4% for the detection of malignancy. The best performance in the present study was seen with an RMI cut-off of 236, and the high sensitivity (72.5%) and high specificity (98.2%) observed were comparable to the majority of earlier reports that employed a similar cut-off.
Our results for the RMI were in agreement with the results from other studies, in which the RMI was suggested to be better than other single parameters, with the highest area under the curve. In our study, an RMI ≥ 236 yielded sensitivity, specificity, PPV, NPV, and accuracy of 72.5%, 98.2%, 98.1%, 74.7%, and 84.13%, respectively, which was similar to other studies.
At lower cut-off values, sensitivity increases at the expense of specificity, so more benign cases are referred as malignant; at higher cut-off values, specificity increases at the expense of sensitivity, so more malignant cases are missed. The choice of cut-off value (action line) therefore balances sensitivity and specificity on one side against local resources and the availability of specialists on the other. Where referral for specialist care is limited by distance or resources, the RMI cut-off can be raised, sacrificing some sensitivity to achieve a higher specificity. In any scoring system used to exclude malignancy, the false negative rate should ideally be zero or close to zero [28]. The present study observed nineteen false negative patients: two cases of dysgerminoma, one Sertoli-Leydig cell tumor, one yolk sac tumor, ten cases of serous cystadenocarcinoma, and five cases of mucinous cystadenocarcinoma. The ultrasound score is subjective and relies on the expertise of the examiner. Gadducci et al. [29] reported that mucinous tumors express CA125 less than non-mucinous types. Besides low ultrasound scores, the fact that CA125 is more specific for non-mucinous epithelial ovarian tumors likely explains the false negative results in this study.
There is as yet no universal screening method for discriminating between benign and malignant adnexal masses, and many authors have sought the earliest possible diagnosis of malignant ovarian tumors using various parameters: early clinical features, tumor markers, imaging studies, and cytology. None of these alone is a definitive screening method for ovarian cancer. In conclusion, the present study demonstrated that, in the absence of a definitive biomarker, the multiparametric Risk of Malignancy Index (RMI 3) was better than the individual parameters of ultrasound score, CA125, or menopausal score for identifying adnexal masses with a high risk of malignancy and for guiding patients to gynecological oncology centers for suitable and effective surgical intervention. A cut-off point of 236 showed a sensitivity of 72.5%, a specificity of 98.2%, a positive predictive value of 98.1%, a negative predictive value of 74.7%, and a diagnostic accuracy of 84.13% for discriminating malignant from benign pelvic masses. The simplicity and applicability of the method in the primary evaluation of patients with pelvic masses make it a good option for daily clinical practice in non-specialized gynecologic departments. Moreover, in low-resource settings where sophisticated radiological and biochemical tests may not be available everywhere, the RMI can be used to triage patients for referral to a higher center.
M: Menopausal score
NPV: Negative predictive value
PPV: Positive predictive value
RMI: Risk of malignancy index
ROC: Receiver operating characteristics
U: Ultrasound score
Consolidated Report of Population Based Cancer Registries 2001-2004; National Cancer Registry Program. Indian Council of Medical Research Bangalore, 2006.
Rossing MA, Wicklund KG, Cushing-Haugen KL, et al. Predictive value of symptoms for early detection of ovarian cancer. J Natl Cancer Inst. 2010;102(4):222–9.
McGowan L. Patterns of care in carcinoma of the ovary. Cancer. 1993;71:628–33. doi:10.1002/cncr.2820710221.
Bristow RE, Tomacruz RS, Armstrong DK, Trimble EL, Montz FJ. Survival effect of maximal cytoreductive surgery for advanced ovarian carcinoma during the platinum era: a meta-analysis. J Clin Oncol. 2002;20(5):1248–59.
Jacobs I, Oram D, Fairbanks J, Turner J, Frost C, Grudzinskas JG. A risk of malignancy index incorporating CA125, ultrasound and menopausal status for the accurate preoperative diagnosis of ovarian cancer. Br J Obstet Gynaecol. 1990;97(10):922–9.
Tingulstad S, Hagen B, Skjeldestad F, Onsrud M, Kiserud T, Halvorsen T, et al. Evaluation of a risk of malignancy index based on serum CA125, ultrasound findings and menopausal status in the preoperative diagnosis of pelvic masses. BJOG Int J Obstet Gynaecol. 1996;103(8):826–31.
Tingulstad S, Hagen B, Skjeldestad F, Halvorsen T, Nustad K, Onsrud M. The risk-of-malignancy index to evaluate potential ovarian cancers in local hospitals. Obstet Gynecol. 1999;93(3):448.
Royal College of Obstetricians and Gynaecologists. Management of suspected ovarian masses in premenopausal women. Green-top Guideline No. 62. Royal College of Obstetricians and Gynaecologists; 2011.
Rein BJD, Gupta S, Dada R, Safi J, Michener C, Agarwal A. Potential markers for detection and monitoring of ovarian cancer. J Oncol. 2011;2011:475983. doi:10.1155/2011/475983.
Davies AP, Jacobs I, Woolas R, Fish A, Oram D. The adnexal mass: benign or malignant? Evaluation of a risk of malignancy index. Br J Obstet Gynaecol. 1993;100(10):927–31.
Akdeniz N, Kuyumcuoğlu U, Kale A, Erdemoğlu M, Caca F. Risk of malignancy index for adnexal masses. Eur J Gynaecol Oncol. 2009;30(2):178–80.
Bouzari Z, Yazdani S, Kelagar ZS, Abbaszadeh N. Risk of malignancy index as an evaluation of preoperative pelvic mass. Caspian J Intern Med. 2011;2(4):331–5.
Rao JH. Risk of malignancy index in assessment of pelvic mass. Int J Biomed Res. 2014;5(3):184–6.
Escudero JM, Auge JM, Filella X, Torne A, Pahisa J, Molina R. Comparison of serum human epididymis protein 4 with cancer antigen 125 as a tumor marker in patients with malignant and nonmalignant diseases. Clin Chem. 2011 Nov;57(11):1534–44. doi:10.1373/clinchem.2010.157073.
Mayer AR, Chambers SK, Graves E, Home C, Tseng PC, Nelson GE, et al. Ovarian cancer staging: does it require a gynaecologic oncologist? Gynecol Oncol. 1992;47:223–7.
Simsek HS, Tokmak A, Ozgu E, et al. Role of a risk of malignancy index in clinical approaches to adnexal masses. Asian Pac J Cancer Prev. 2014;15(18):7793–7.
Geomini P, Kruitwagen R, Bremer GL, Cnossen J, Mol BWJ. The accuracy of risk scores in predicting ovarian malignancy: a systematic review. Obstet Gynecol. 2009;113(2):384–94.
Yamamoto Y, Yamada R, Oguri H, Maeda N, Fukaya T. Comparison of four malignancy risk indices in the preoperative evaluation of patients with pelvic masses. Eur J Obstet Gynecol Reprod Biol. 2009;144(2):163–7.
Ulusoy S, Akbayir O, Numanoglu C, Ulusoy N, Odabas E, Gulkijik A. The risk of malignancy index in discrimination of adnexal masses. Int J Gynaecol Obstet. 2007;96(3):186–91.
Obeidat BR, Amarin ZO, Latimer JA, Crawford RA. Risk of malignancy index in the preoperative evaluation of pelvic masses. Int J Gynaecol Obstet. 2004;85(3):255–8.
Ma S, Shen K, Lang J. A risk of malignancy index in preoperative diagnosis of ovarian cancer. Chin Med J. 2003;116(3):396–9.
Torres JCC, Derchain SFM, Faundes A, Gontijo RC, Martinez EZ, Andrade LALA. Risk-of-malignancy index in preoperative evaluation of clinically restricted ovarian cancer. Sao Paulo Medical Journal. 2002;120(3):72–6.
Terzić M, Dotlić J, Ladjević IL, Atanacković J, Ladjević N. Evaluation of the risk malignancy index diagnostic value in patients with adnexal masses. Vojnosanitetski Pregled. 2011;68(7):589–93.
Ashrafgangooei T, Rezaeezadeh M. Risk of malignancy index in preoperative evaluation of pelvic masses. Asian Pac J Cancer Prev. 2011;12:1727–30.
Bailey J, Tailor A, Naik R, et al. A risk of malignancy index for referral of ovarian cancer cases to a tertiary center: does it identify the correct cases? Int J Gynecol Cancer. 2006;16:30–4.
Enakpene CA, Omigbodun AO, Goecke TW, et al. Preoperative evaluation and triage of women with suspicious adnexal masses using risk of malignancy index. J Obstet Gynaecol Res. 2009;35:131–8.
Morgante G, la Marca A, Ditto A, De Leo V. Comparison of two malignancy risk indices based on serum CA125, ultrasound score and menopausal status in the diagnosis of ovarian masses. Br J Obstet Gynaecol. 1999;106(6):524–7.
Harry VN, Narayansingh GV, Parkin DE. The risk of malignancy index for ovarian tumors in Northeast Scotland- a population based study. Scott Med J. 2009;54(2):21–3.
Gadducci A, Cosio S, Capri A. Serum tumor markers in the management of ovarian, endometrial and cervical cancer. Biomed Pharmacother. 2004;58:24–38.
We would like to thank all the support staff of the Department of Obstetrics and Gynaecology, VIMSAR, Burla, India.
All data can be obtained on request from the principal author of the study.
Department of Obstetrics and Gynaecology, Veer Surendra Sai Institute of Medical Science and Research (VIMSAR), Burla, Sambalpur, Odisha, India
Santosh Kumar Dora, Atal Bihari Dandapat, Benudhar Pande & Jatindra Prasad Hota
Conception, Design, Development of methodology: AD & BP. Analysis and interpretation of data, Writing, review, and/or revision of the manuscript: SD & JH. All authors read and approved the final manuscript.
Correspondence to Santosh Kumar Dora.
The study was approved by the ethical committee of the hospital. The aim of the study was explained appropriately and informed written consent was obtained from all the patients.
Permission was obtained from the ethical committee and also from participants.
Risk of malignancy index
Adnexal mass
Engineering mode coupling in a hybrid plasmon-photonic cavity for dual-band infrared spectroscopic gas sensing
Thang Duy Dao,* Florian Dubois, Jasmin Spettel, Andreas Tortschanoff, Clement Fleury, Norbert Cselyuszka, Cristina Consani, Andrianov Nikolai, and Mohssen Moridi
Sensor Systems, Silicon Austria Labs (SAL), Europastraße 12, 9524 Villach, Austria
*Corresponding author: [email protected]
Thang Duy Dao https://orcid.org/0000-0001-5027-9079
Florian Dubois https://orcid.org/0000-0003-0339-1484
Jasmin Spettel https://orcid.org/0000-0001-9231-5584
Andreas Tortschanoff https://orcid.org/0000-0002-9424-7228
Clement Fleury https://orcid.org/0000-0002-3007-0566
Cristina Consani https://orcid.org/0000-0002-4244-7324
Thang Duy Dao, Florian Dubois, Jasmin Spettel, Andreas Tortschanoff, Clement Fleury, Norbert Cselyuszka, Cristina Consani, Andrianov Nikolai, and Mohssen Moridi, "Engineering mode coupling in a hybrid plasmon-photonic cavity for dual-band infrared spectroscopic gas sensing," OSA Continuum 4, 1827-1837 (2021)
Original Manuscript: February 5, 2021
Revised Manuscript: April 27, 2021
Manuscript Accepted: May 4, 2021
On-chip infrared spectroscopy has become an indispensable key technology for miniature biochemical sensors, gas sensors, food quality control, and environmental monitoring systems. The most important requirement for on-chip spectroscopic sensors is the miniaturization of spectroscopic functions so that they can be integrated into thermal emitters and infrared detectors. In this work, we propose a hybrid plasmon-photonic system consisting of a plasmonic grating coupled to a distributed Bragg reflector (DBR)-dielectric-metal cavity for on-chip dual-band spectroscopic sensing applications. The strong coupling between surface-plasmon polaritons and the cavity resonance leads to the hybridization of the photonic states; mode splitting, photonic band folding, and the formation of new eigenstates including bound states in the continuum are observed in the system. It is shown that, by engineering the photonic coupling, a dual-band resonant near-perfect absorber is achievable and easily controllable. As a proof of concept, we numerically demonstrate a set of five different dual-band absorbers for CO2, N2O, CO, NO, and NO2 gas sensing applications. The dual-band absorbers can be used for on-chip spectroscopic thermal emitters or infrared detectors in gas sensors. The hybrid plasmon-photonic system can be an attractive photonic platform for applications in emitting and sensing photonic devices.
Miniaturized on-chip spectroscopic devices have attracted considerable industrial interest over the past decades owing to their great potential for portable chemical sensing and environmental monitoring devices. For example, in nondispersive infrared (NDIR) sensors such as the carbon dioxide (CO2) sensor, much effort has been spent on the development of IR spectroscopic filters [1,2], particularly on-chip filtering photonic devices including emitters [3–5] and detectors [6,7]. Among them, the most common spectral filtering structure is the resonant perfect absorber, which can efficiently absorb light in a narrow bandwidth with unity absorptivity [8,9]. Perfect absorber structures typically consist of a confined optical resonator combined with the inherent losses of its materials, for example antenna-on-insulator-metal layered films [9,10], plasmonic gratings [11,12], Fabry-Perot cavities [13,14] and optical Tamm states [15,16]. Nevertheless, controlling the spectral characteristics of on-chip spectroscopic devices, in terms of center wavelength and spectral bandwidth (full width at half maximum, FWHM), which must resolve the vibration-rotation spectrum of the targeted chemicals, remains a veritable challenge. Furthermore, most gases feature dual-band rotational-vibrational absorption spectra; thus, a dual-band perfect absorber whose resonant branches coincide with the absorption spectrum of the sensing gas would be beneficial for NDIR sensors.
Engineering strong coupling in photonic structures has attracted growing attention over the past decade owing to its broad capabilities for designing exceptional photonic devices, with applications in enhancing light-matter interactions [17,18], quantum information [19,20], low-threshold nanolasers [21,22], and nanoscale sensors [23,24]. The strong coupling in hybrid photonic systems can also be used for engineering photonic bands [25–27], nanolasers [28,29], dual-band absorbers [30,31] and high-Q-factor photonic devices associated with quasi-bound states in the continuum (BICs) [32–34]. Azzam et al. demonstrated a hybrid plasmonic-photonic cavity in which BICs form due to symmetry incompatibility with the outgoing fields or destructive interference of the plasmonic and photonic resonances [33]. The concept of the hybrid plasmonic-photonic system can be further developed for practical applications.
In this work, we demonstrate a hybrid plasmon-photonic cavity for dual-band spectroscopic sensing applications working in the mid-infrared (MIR) region. The photonic platform utilizes the hybridization between surface-plasmon polaritons (SPPs) and the cavity resonance in a 1D plasmonic grating strongly coupled to an asymmetric distributed Bragg reflector (DBR)-dielectric-metal cavity. It is found that by tuning the geometrical parameters, for example the cavity thickness, the coupling between SPPs and the cavity resonance can be engineered, resulting in different hybridized photonic states including the vacuum Rabi splitting, the photonic band folding and BICs. In particular, a strong dual-band resonance with nearly zero reflectance dips is obtained by tuning the cavity thickness. The wavelengths of both resonant branches and their splitting are also adjustable by changing the structural geometries. As a proof of concept, we demonstrate a set of five different dual-band absorbers for CO2, N2O, CO, NO and NO2 gas sensing in which their dual-resonant branches match perfectly to the dual-band absorption spectra of the sensing gases. This hybrid system can be further extended to visible or longer wavelength regions, providing another photonic platform for engineering resonances in nano-plasmonics for applications in miniaturized spectroscopic emitting and sensing devices.
2. Simulation results and discussion
2.1 Engineering strong coupling in hybrid plasmon-photonic cavity
We first investigate three different photonic systems: a plasmonic grating made of tungsten (Fig. 1(a)), an asymmetric DBR-dielectric-metal cavity (Fig. 1(b)) and a hybrid plasmon-photonic cavity (Fig. 1(c)). The plasmonic grating can be a 1D or 2D lattice. Here we choose a 1D grating whose period p, width w and height h are set to 4.07 µm, 1.3 µm and 0.19 µm, respectively. The asymmetric DBR-dielectric-metal cavity comprises a bottom tungsten film, a dielectric cavity and a top mirror made of three DBR (BaF2/Si) pairs; this number of pairs is optimized for the maximum resonant efficiency (zero reflectance, or perfect absorptivity). The dielectric cavity can be air or another lossless dielectric with a thickness $t_c \sim \frac{m\lambda}{2n}$ (where m is an integer, n is the refractive index of the cavity and $\lambda$ is the resonant wavelength). For example, tc is set to 1.985 µm for an air cavity. A comparison with a BaF2 cavity is discussed in Section 2.3 for the gas-sensing application. The hybrid plasmon-photonic cavity is formed by replacing the bottom mirror of the asymmetric cavity with a plasmonic grating. The tungsten layer in all three systems is fixed at 0.2 µm, far larger than the penetration depth of the metal in the MIR region, to prevent light transmission. The DBR parameters, tBaF = 0.660 µm and tSi = 0.286 µm, are the same for both the asymmetric DBR-dielectric-metal and the hybrid plasmon-photonic cavities. We use rigorous coupled-wave analysis (RCWA, DiffractMOD package from Synopsys' RSoft) to calculate reflectance spectra and band diagrams. The permittivities of tungsten, silicon and BaF2 are taken from the Brendel-Bormann model by Rakic et al. [35], Palik's handbook [36] and the contractor report by Querry [37], respectively. The refractive index of the air cavity is fixed at 1.
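As a plausibility check on the layer thicknesses quoted above, the quarter-wave and half-wave estimates below reproduce them to within a few percent. The refractive indices used (n ≈ 1.467 for BaF2 and n ≈ 3.43 for Si in the MIR) are assumed representative values; the paper's final thicknesses come from numerical optimization, so small deviations from the quarter-wave estimate are expected.

```python
# Quarter-wave estimate t = lambda / (4 n) for a DBR centered near 4.07 um,
# and a half-wave estimate for the air cavity (m = 1, n = 1).
lam = 4.07                    # target wavelength [um]
n_baf2, n_si = 1.467, 3.43    # assumed MIR refractive indices

t_baf2 = lam / (4 * n_baf2)   # ~0.694 um (paper: 0.660 um after optimization)
t_si = lam / (4 * n_si)       # ~0.297 um (paper: 0.286 um after optimization)
t_cav = 1 * lam / (2 * 1.0)   # ~2.035 um (paper: 1.985 um)
print(f"t_BaF2 = {t_baf2:.3f} um, t_Si = {t_si:.3f} um, t_cavity = {t_cav:.3f} um")
```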
Fig. 1. (a) – (c) Sketches and (d) – (f) simulated angle-dependent reflectance spectra of a plasmonic grating, an asymmetric DBR-dielectric-metal cavity and a hybrid plasmon-photonic cavity, respectively. The metal for the grating, cavity and hybrid plasmon-photonic cavity systems is tungsten, the cavity is air and the DBR comprises three pairs of BaF2/Si films. The left inset in (d) represents the SPP dispersion calculated from Eq. (1) for the plasmonic grating. The right insets in (d) and (f) display zoomed-in spectra between −5° and 5°. For the SPP grating, a strong resonance (red arrow) and a BIC (blue arrow) clearly appear in the lower-band and upper-band branches of the SPP dispersion, respectively. For the hybrid plasmon-photonic cavity, strong coupling between the cavity resonance and SPPs leads to the formation of vacuum Rabi splitting (denoted by the green arrow, with splitting energy Ω1 = 14 meV), band shifting and band folding in the photonic band. A strong resonance (red arrow) and a BIC (blue arrow) are also observed at normal incidence.
The anomalous absorption of plasmonic gratings was first observed by Wood [38], and the theory of plasmon resonance from periodic metallic gratings was developed by Fano [39], Ritchie et al. [40] and Maystre [41]. The origin of the photonic gaps and the formation of the discontinued resonance at normal incidence in the SPP dispersion of a plasmonic grating have been explained in detail by Barnes et al. [11]. The anomalous perfect absorption from plasmonic gratings has been widely used for thermal emitters [42,43] and detectors [44,45]. Figure 1(d) shows the angle-dependent reflectance spectra of the plasmonic grating. For a plasmonic grating, SPPs at the metal-air interface are excited when the SPP wavevector matches that of the incident light plus a reciprocal lattice vector:
$${\vec{k}_{spp}} = {\vec{k}_\parallel } + j\vec{G} \qquad (1)$$
where $|{{{\vec{k}}_{spp}}} |= {k_0}\sqrt {\frac{{{\varepsilon _m}}}{{{\varepsilon _m} + 1}}} $ is the wavevector of the SPP at the metal-air interface. Here ${\varepsilon _m}$ is the complex permittivity of the metal and ${k_0} = \frac{{2\pi }}{\lambda }$ ($\lambda$ is the wavelength of the incident light). $|{{{\vec{k}}_\parallel }} |= {k_0}\sin \theta $ is the in-plane projection of the wavevector of light at an incident angle $\theta$. $|{\vec{G}} |= \frac{{2\pi }}{p}$ is the primitive lattice vector of the grating with period p, and j is an integer. The SPP dispersion following Eq. (1) is shown in the left inset of Fig. 1(d). Interestingly, the numerical simulation result shown in Fig. 1(d) reveals a gap between the two branches of SPPs (see the zoomed-in map in the right inset); at normal incidence, the lower-band branch has a strong and sharp resonance at 4.070 µm with nearly zero reflectance at the crossing of the SPPs(±1), whereas the resonance disappears in the upper-band branch, forming a symmetry-protected BIC (denoted by the blue arrow) [33]. For the asymmetric DBR-dielectric-metal cavity, a strong and sharp resonance with almost zero reflectance is clearly observed in the bandgap of the DBR (Fig. 1(e)). This structure was proposed by Celanovic et al. [13] and adopted by others for thermal emitters [14–16] and detectors [14]. At normal incidence, the cavity has a narrow resonance (FWHM ∼ 3 nm) with almost zero reflectance at a wavelength of 4.014 µm, nearly twice the cavity thickness and close to the resonance of the plasmonic grating. Figure 1(f) shows the angle-dependent reflectance of the hybrid plasmon-photonic cavity. The hybridization of the plasmonic grating and the asymmetric cavity induces strong coupling between the grating SPPs and the cavity resonance, leading to mode splitting and hybridized plasmon-polariton photonic states. As seen in the zoomed-in dispersion between ±5°, and within the 3.9 µm – 4.5 µm wavelength range, the main cavity resonance is perturbed by the SPPs and split into two bands (denoted by the green arrow). The strong coupling also shifts the photonic bands of the SPPs to longer wavelengths. Furthermore, they are folded near normal incidence, resulting in another resonance (red arrow) in the upper-band branch and a BIC (blue arrow) in the lower-band branch. By tuning the cavity and/or SPP resonances, the coupling strength and, therefore, the hybridized resonances (wavelength, bandwidth, intensity) of the hybrid plasmon-photonic cavity can be engineered for practical applications.
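To make Eq. (1) concrete, the sketch below evaluates the first-order (j = ±1) resonance at normal incidence, where k∥ = 0 and the condition reduces to λres ≈ p·Re[√(εm/(εm+1))], i.e., slightly above the grating period. The tungsten permittivity used is an assumed order-of-magnitude value for the 4 µm region, not the Brendel-Bormann fit [35] used in the paper.

```python
import cmath

p = 4.07            # grating period [um]
eps_w = -45 + 30j   # assumed tungsten permittivity near 4 um (illustrative only)

# Normal incidence, j = +/-1: (2*pi/lam) * sqrt(eps/(eps+1)) = 2*pi/p
lam_res = p * cmath.sqrt(eps_w / (eps_w + 1)).real
print(f"first-order SPP resonance ~ {lam_res:.3f} um")  # slightly above p = 4.07 um
```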
2.2 Engineering the dual-band resonance of the hybrid plasmon-photonic cavity
Figure 2(a) presents the dependence of the reflectance of the hybrid plasmon-photonic cavity on the cavity thickness at normal incidence, while the grating parameters and the DBR are kept unchanged. When the cavity thickness approaches 1.985 µm, strong coupling between the cavity resonance and the grating SPPs occurs (denoted by the green arrow), as discussed previously (Fig. 1(f)). In particular, when the cavity thickness increases to about 2.167 µm, a second coupling appears, revealing a strong dual-narrowband resonance with almost zero reflectance dips. Figure 2(b) plots the angle-dependent reflectance of the asymmetric DBR-dielectric-metal cavity with a cavity thickness of 2.167 µm. As the cavity thickness increases, the resonance wavelength of the cavity increases accordingly. At normal incidence, a narrow resonance (28.5-nm FWHM) with a nearly zero reflectance dip is found at 4.302 µm. The angle-dependent reflectance of the hybrid plasmon-photonic cavity with the same cavity thickness of 2.167 µm is shown in Fig. 2(c). The hybridization again introduces new hybridized plasmon-polariton photonic bands. Interestingly, around near-normal incidence (inset in Fig. 2(c)), the hybridized states reveal two Friedrich-Wintgen BICs near 1.5° incidence and a symmetry-protected BIC at normal incidence (denoted by the blue arrows) [33]. At normal incidence, a strong dual-band resonance (denoted by the red and orange arrows) with nearly zero reflectance dips is observed due to the strong coupling between the cavity resonance and the SPPs, wherein the upper-band branch is associated with a strong resonance (red arrow).
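The mode splitting and anticrossing behavior described here can be reproduced qualitatively with a standard two-coupled-oscillator model, E± = (E_cav + E_spp)/2 ± √(g² + δ²/4) with detuning δ = E_cav − E_spp. This is a generic textbook picture, not the RCWA calculation used in the paper, and the coupling strength g below is an assumed value chosen so that the zero-detuning splitting 2g is of order the quoted Ω2 ≈ 3.3 meV.

```python
import numpy as np

E_spp = 0.2897   # SPP branch energy [eV] (~4.28 um; illustrative value)
g = 0.00165      # assumed coupling strength [eV] -> 2g ~ 3.3 meV splitting

# Sweep the cavity resonance through the SPP to trace the anticrossing.
for E_cav in np.linspace(E_spp - 0.005, E_spp + 0.005, 5):
    delta = E_cav - E_spp
    mean = 0.5 * (E_cav + E_spp)
    split = np.sqrt(g**2 + 0.25 * delta**2)
    print(f"E_cav={E_cav:.4f} eV -> E+={mean + split:.4f} eV, E-={mean - split:.4f} eV")
```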
Fig. 2. (a) Simulated reflectance of the hybrid plasmon-photonic cavity with variation of the cavity thickness while keeping the grating parameters unchanged. The second coupling region appears when the cavity thickness varies between 2.1 µm – 2.2 µm (denoted by the red arrow, with Rabi splitting energy Ω2 = 3.3 meV); when the cavity thickness approaches 2.167 µm, a strong coupling with two intense resonant branches is observed. Simulated angle-dependent reflectance spectra of (b) an asymmetric DBR-metal cavity and (c) a hybrid plasmon-photonic cavity with an equal cavity thickness of 2.167 µm and the same 3-DBR layers (tBaF = 0.660 µm, tSi = 0.286 µm). The grating parameters in (a) and (c) are the same, with p = 4.07 µm, w = 1.3 µm, h = 0.19 µm. The inset in (c) represents zoomed-in spectra between −5° and 5°. The strong coupling between SPP and cavity photonic resonances induces new polaritonic bands with a strong dual-band resonance at normal incidence.
Details of the resonances at normal incidence of the plasmonic grating, the asymmetric metal-DBR cavity and the hybrid plasmon-photonic cavity with the same cavity thickness of 2.167 µm are shown in Figs. 3(a) – 3(c), respectively. The hybrid plasmon-photonic cavity exhibits a strong dual-band resonance with nearly zero reflectance dips at 4.233 µm (0.007 reflectance) and 4.283 µm (0.042 reflectance) and with narrow bandwidths (14-nm FWHM). We further calculate electric field distributions using the finite-difference time-domain method (FDTD, FullWAVE, Synopsys' RSoft) to elucidate the origin of the resonances (Figs. 3(d) – 3(f)). In the simulation, the electric field propagates along the Z-direction and oscillates along the X-direction (the electric field polarization is perpendicular to the grating grooves). For the plasmonic grating excited at its 4.070-µm resonance (Fig. 3(d)), the electric field Ex is confined in the vicinity of the metal strip corners, and the induced Ez field reveals SPPs at the metal/air interface. In contrast, for the metal-DBR cavity excited at its 4.300-µm resonance (Fig. 3(e)), the electric field is strongly confined inside the cavity and there is no induced field in the Ez component. In particular, for the hybrid plasmon-photonic cavity excited at one of the two resonant dips (here at 4.233 µm) (Fig. 3(f)), the Ex component is extremely confined and enhanced not only near the metal strip corners but also inside the cavity, while the Ez component shows strong induced nearfield SPPs at the metal/air and also the DBR/air interfaces, revealing both the cavity resonance character and the SPP origin of the hybrid resonance. With strong nearfield enhancement in the cavity and an induced nearfield at the DBR/air interface, the plasmon-photonic cavity can be a good photonic platform for enhancing light-matter interactions, including nearfield-enhanced spectroscopy and vibrational and chemical sensing applications. In multi-spectral vibrational spectroscopy it is desirable to have resonances at the exact vibrations of the molecules to enhance absorption. The hybridized resonances can be further tuned by changing the grating parameters. Figure 4(a) shows the resonance tunability and the detuning of the coupling between the SPP and the cavity resonance as the grating period is changed. When the grating period approaches 4.07 µm, strong coupling between the grating SPP and the cavity resonance arises, resulting in a mode splitting with two resonance branches at 4.233 µm and 4.383 µm. Furthermore, the coupling strength can also be tuned by adjusting the grating height, wherein the resonance shape and width remain almost unchanged.
Fig. 3. (a) – (c) Simulated reflectance spectra at normal incidence and (d) – (f) simulated electric field (Ex and Ez) distributions of a plasmonic tungsten grating, an asymmetric DBR-metal cavity and a hybrid plasmon-photonic cavity, respectively. The resonances of the grating and the cavity are 4.070 µm and 4.301 µm, respectively. The hybrid system reveals a dual-band resonance with two nearly zero dips at 4.233 µm and 4.283 µm (nearly perfect absorption). The grating parameters are the same for (a) and (c): period of 4.070 µm, width of 1.300 µm and height of 0.190 µm. The cavity thickness in (b) and (c) is the same at 2.167 µm. For all full-wave simulations of the electric field distribution, the electric field propagates along the Z-direction and oscillates along the X-direction, and the excitation wavelengths are at the resonances: 4.070 µm for the grating in (d), 4.301 µm for the cavity in (e) and 4.233 µm for the hybrid plasmon-photonic cavity in (f). The electric fields (Ex and Ez) of the plasmon-photonic cavity manifest a hybridized resonance where the electric field is highly confined in the cavity and the induced field component Ez reveals SPPs.
Fig. 4. Simulated dependence of the reflectance of the hybrid plasmon-photonic cavity on (a) the grating period (with w = 1.3 µm, h = 0.19 µm) and (b) the grating height (with p = 4.07 µm, w = 1.3 µm). The cavity parameters are the same for (a) and (b), with tc = 2.166 µm, tBaF = 0.660 µm and tSi = 0.286 µm. The splitting energy (coupling strength) of the dual-band resonance can be tuned by changing the grating height.
2.3 Dual-band perfect absorbers for vibrational spectroscopy applications
Recent developments in infrared plasmonic and photonic devices have brought great advances in vibrational spectroscopy, including surface-enhanced infrared absorption spectroscopy and miniaturized gas sensors. For example, in gas sensing, the emission spectrum of a thermal emitter can be engineered using a wavelength-selective absorber whose emissivity is optimized to nearly unity within a narrow bandwidth, so that it emits infrared light matching the absorption band of the sensing gas. Conversely, an infrared sensor can be integrated with a narrowband plasmonic absorber that sensitively detects light at the absorption band of the sensing gas. In NDIR sensing, a thermal emitter filtered at the rotational-vibrational spectrum of the targeted gas is often used as the excitation source. Most diatomic gases feature dual-branch rotation-vibration spectra corresponding to the rotational–vibrational transitions from one rotational level in the ground vibrational state to one rotational level in the excited vibrational state. The two branches of lines correspond to the two transition groups with rotational quantum number changes ΔJ = +1 and ΔJ = −1, the so-called R-branch (short wavelength) and P-branch (long wavelength), respectively [46]. Thus, a thermal emitter (or a sensor) with a dual-band resonance matching the two branches of the targeted diatomic gas is desirable for NDIR gas sensing. Here we show that the hybrid plasmon-photonic cavity proposed in this work can be a good photonic platform for on-chip filtering spectroscopic sensors and emitters for gas sensing applications.
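For reference, in the rigid-rotor approximation (equal rotational constant B in both vibrational states), the line positions of the two branches follow directly from the selection rules above [46]:

$$\tilde{\nu}_R(J) = \tilde{\nu}_0 + 2B(J+1), \quad J = 0, 1, 2, \ldots \quad (\Delta J = +1)$$
$$\tilde{\nu}_P(J) = \tilde{\nu}_0 - 2BJ, \quad J = 1, 2, 3, \ldots \quad (\Delta J = -1)$$

where $\tilde{\nu}_0$ is the band origin. For CO, for instance, with a band origin near 2143 cm⁻¹, the R- and P-branch intensity maxima fall near 4.6 µm and 4.7 µm, consistent with the 4.599 µm and 4.737 µm branches targeted by the CO absorber below.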
Here we demonstrate five different dual-band absorbers targeting carbon dioxide (CO2), nitrous oxide (N2O), carbon monoxide (CO), nitric oxide (NO) and nitrogen dioxide (NO2), based on the proposed hybrid plasmon-photonic cavity structure. The dual-band absorbers can be used for thermal emitters following Kirchhoff's law, whereby the emissivity of an absorber is equal to its absorptivity at thermal equilibrium. They can also be used as the selective absorbing layer of thermal sensors, absorbing radiation at the dual-band resonance and converting the absorbed energy into heat. It is worth noting that in this device platform, since the metal layer of the grating is rather thick (0.2 µm) and does not transmit light in the infrared region, most of the light confined in the cavity is absorbed by the metal grating, and the absorptivity is therefore calculated as 1 − reflectivity. Figures 5(a) – 5(e) present the normalized absorption coefficient spectra taken from the HITRAN database [47] (top panels) and the relative absorptivity spectra of the designed dual-band absorbers (middle panels) for CO2, N2O, CO, NO and NO2, respectively. For the CO2-sensing absorber, we use the same structure as shown in Fig. 2(c) and Fig. 3(c), which exhibits a dual-band resonance with branches located at 4.233 µm (R-branch) and 4.283 µm (P-branch) with nearly perfect absorptivity (0.993 at the R-branch and 0.958 at the P-branch) (Fig. 5(a)). Following the same procedure as for the CO2 absorber, the parameters of the other gases' absorbers are obtained. Details of the geometrical parameters of all gas-sensing absorbers and their resonances are given in Table 1. All the dual-band resonances of the designed gas-sensing absorbers match the R-branch and P-branch of the absorption spectra of the sensing gases. The bandwidth of each resonance branch can be broadened, maximizing the efficiency of the NDIR sensor, by using a smaller number of DBR periods. In this work we simply keep the same configuration for all the absorbers, with an air cavity and three DBR periods, which can also provide good thermal insulation for the design of infrared emitters or thermal sensors, particularly when combined with micro-electro-mechanical systems (MEMS) technology. The proposed hybrid plasmon-photonic cavity can also work with other dielectric cavity materials, for example BaF2, which is also used for the low-refractive-index layer of the DBR (BaF2/Si) in this work. The bottom panels in Figs. 5(a) – 5(e) present simulated spectra of the gas-sensing absorbers using BaF2 as the cavity with the same three DBR (BaF2/Si) pairs. Like the air-cavity plasmon-photonic absorbers, the designed BaF2-cavity plasmon-photonic systems also exhibit dual-band resonances at the absorption bands of the CO2, N2O, CO, NO and NO2 gases. Detailed parameters of the BaF2-cavity gas-sensing absorbers and their resonances are given in Table 2. As the refractive index of the cavity increases from 1 (air) to 1.467 (BaF2), the parameters of the plasmonic tungsten gratings are re-optimized accordingly to obtain strong coupling between the SPPs and the cavity resonances.
Fig. 5. Application of the hybrid plasmon-photonic cavity for dual-band resonant gas sensing. (a) – (e) Absorption coefficient spectra (top panels) and simulated relative dual-band absorptivity spectra of hybrid plasmon-photonic cavity devices with an air cavity (middle panels) and a BaF2 cavity (bottom panels) for CO2, N2O, CO, NO, and NO2 gases, respectively.
Table 1. Geometrical parameters of the gas-sensing absorbers with air-cavity
Table 2. Geometrical parameters of the gas-sensing absorbers with BaF2-cavity
In conclusion, we have introduced a simple hybrid plasmon-photonic system consisting of a plasmonic grating coupled to an asymmetric cavity for dual-band spectroscopic sensing applications. We have numerically demonstrated a strategy for engineering the strong coupling between the grating SPPs and the cavity resonance in the hybrid system. Resonant mode splitting, photonic band folding and the formation of new eigenstates including BICs have been observed in the system. In particular, a dual-band resonant near-perfect absorber is readily accessible, and its resonance is easily tuned by changing the device parameters. Furthermore, we have demonstrated a set of five different absorbers for CO2, N2O, CO, NO and NO2 gas-sensing applications. The gas-sensing absorbers can be applied to unidirectional thermal emitters and detectors. Although the hybrid plasmon-photonic cavity presented in this work is designed for the MIR region, the system can easily be extended to the visible or far-IR regions, providing another photonic platform for broad applications in lasers, detectors, nonlinear optics, bio-chemical sensing and molecular spectroscopy.
This work has been supported by the Federal Republic of Austria, the Styrian Business Promotion Agency (SFG), the Federal State of Carinthia, Upper Austrian Research (UAR) and the Austrian Association for the Electric and Electronics Industry (FEEI) within the "Silicon Austria" PPP research and investment initiative.
The authors declare that there is no conflict of interest.
1. D. Pergande, T. M. Geppert, R. B. Wehrspohn, S. Moretton, and A. Lambrecht, "Miniature infrared gas sensors using photonic crystals," J. Appl. Phys. 109(8), 083117 (2011). [CrossRef]
2. K. Zhou, Q. Cheng, L. Lu, B. Li, J. Song, M. Si, and Z. Luo, "Multichannel tunable narrowband mid-infrared optical filter based on phase-change material Ge 2 Sb 2 Te 5 defect layers," Appl. Opt. 59(3), 595 (2020). [CrossRef]
3. H. T. Miyazaki, T. Kasaya, M. Iwanaga, B. Choi, Y. Sugimoto, and K. Sakoda, "Dual-band infrared metasurface thermal emitter for CO2 sensing," Appl. Phys. Lett. 105(12), 121107 (2014). [CrossRef]
4. A. Lochbaum, Y. Fedoryshyn, A. Dorodnyy, U. Koch, C. Hafner, and J. Leuthold, "On-Chip Narrowband Thermal Emitter for Mid-IR Optical Gas Sensing," ACS Photonics 4(6), 1371–1380 (2017). [CrossRef]
5. S. Kang, Z. Qian, V. Rajaram, S. D. Calisgan, A. Alù, and M. Rinaldi, "Ultra-Narrowband Metamaterial Absorbers for High Spectral Resolution Infrared Spectroscopy," Adv. Opt. Mater. 7(2), 1801236 (2019). [CrossRef]
6. A. Lochbaum, A. Dorodnyy, U. Koch, S. M. Koepfli, S. Volk, Y. Fedoryshyn, V. Wood, and J. Leuthold, "Compact Mid-Infrared Gas Sensing Enabled by an All-Metamaterial Design," Nano Lett. 20(6), 4169–4176 (2020). [CrossRef]
7. X. Tan, H. Zhang, J. Li, H. Wan, Q. Guo, H. Zhu, H. Liu, and F. Yi, "Non-dispersive infrared multi-gas sensing via nanoantenna integrated narrowband detectors," Nat. Commun. 11(1), 5245 (2020). [CrossRef]
8. N. Landy, S. Sajuyigbe, J. Mock, D. Smith, and W. Padilla, "Perfect Metamaterial Absorber," Phys. Rev. Lett. 100(20), 207402 (2008). [CrossRef]
9. X. Liu, T. Tyler, T. Starr, A. F. Starr, N. M. Jokerst, and W. J. Padilla, "Taming the Blackbody with Infrared Metamaterials as Selective Thermal Emitters," Phys. Rev. Lett. 107(4), 045901 (2011). [CrossRef]
10. T. D. Dao, K. Chen, S. Ishii, A. Ohi, T. Nabatame, M. Kitajima, and T. Nagao, "Infrared Perfect Absorbers Fabricated by Colloidal Mask Etching of Al–Al2O3–Al Trilayers," ACS Photonics 2(7), 964–970 (2015). [CrossRef]
11. W. L. Barnes, T. W. Preist, S. C. Kitson, and J. R. Sambles, "Physical origin of photonic energy gaps in the propagation of surface plasmons on gratings," Phys. Rev. B 54(9), 6227–6244 (1996). [CrossRef]
12. J. Le Perchec, P. Quémerais, A. Barbara, and T. López-Ríos, "Why Metallic Surfaces with Grooves a Few Nanometers Deep and Wide May Strongly Absorb Visible Light," Phys. Rev. Lett. 100(6), 066408 (2008). [CrossRef]
13. I. Celanovic, D. Perreault, and J. Kassakian, "Resonant-cavity enhanced thermal emission," Phys. Rev. B 72(7), 075127 (2005). [CrossRef]
14. A. T. Doan, T. D. Dao, S. Ishii, and T. Nagao, "Gires-Tournois resonators as ultra-narrowband perfect absorbers for infrared spectroscopic devices," Opt. Express 27(12), A725–A737 (2019). [CrossRef]
15. Z.-Y. Yang, S. Ishii, T. Yokoyama, T. D. Dao, M.-G. Sun, P. S. Pankin, I. V. Timofeev, T. Nagao, and K.-P. Chen, "Narrowband Wavelength Selective Thermal Emitters by Confined Tamm Plasmon Polaritons," ACS Photonics 4(9), 2212–2219 (2017). [CrossRef]
16. Z. Wang, J. K. Clark, Y.-L. Ho, B. Vilquin, H. Daiguji, and J.-J. Delaunay, "Narrowband Thermal Emission Realized through the Coupling of Cavity and Tamm Plasmon Resonances," ACS Photonics 5(6), 2446–2452 (2018). [CrossRef]
17. J. P. Reithmaier, G. Se, S. Reitzenstein, L. V. Keldysh, V. D. Kulakovskii, T. L. Reinecke, and A. Forchel, "Strong coupling in a single quantum dot–semiconductor microcavity system," Nature 432(7014), 197–200 (2004). [CrossRef]
18. R. Chikkaraddy, B. de Nijs, F. Benz, S. J. Barrow, O. A. Scherman, E. Rosta, A. Demetriadou, P. Fox, O. Hess, and J. J. Baumberg, "Single-molecule strong coupling at room temperature in plasmonic nanocavities," Nature 535(7610), 127–130 (2016). [CrossRef]
19. A. Imamoglu, D. D. Awschalom, G. Burkard, D. P. DiVincenzo, D. Loss, M. Sherwin, and A. Small, "Quantum Information Processing Using Quantum Dot Spins and Cavity QED," Phys. Rev. Lett. 83(20), 4204–4207 (1999). [CrossRef]
20. T. E. Northup and R. Blatt, "Quantum information transfer using photons," Nat. Photonics 8(5), 356–363 (2014). [CrossRef]
21. A. Kodigala, T. Lepetit, Q. Gu, B. Bahari, Y. Fainman, and B. Kanté, "Lasing action from photonic bound states in continuum," Nature 541(7636), 196–199 (2017). [CrossRef]
22. C. Huang, C. Zhang, S. Xiao, Y. Wang, Y. Fan, Y. Liu, N. Zhang, G. Qu, H. Ji, J. Han, L. Ge, Y. Kivshar, and Q. Song, "Ultrafast control of vortex microlasers," Science 367(6481), 1018–1021 (2020). [CrossRef]
23. J.-H. Park, A. Ndao, W. Cai, L. Hsu, A. Kodigala, T. Lepetit, Y.-H. Lo, and B. Kanté, "Symmetry-breaking-induced plasmonic exceptional points and nanoscale sensing," Nat. Phys. 16(4), 462–468 (2020). [CrossRef]
24. J. Wiersig, "Review of exceptional point-based sensors," Photonics Res. 8(9), 1457 (2020). [CrossRef]
25. A. Christ, S. G. Tikhodeev, N. A. Gippius, J. Kuhl, and H. Giessen, "Waveguide-Plasmon Polaritons: Strong Coupling of Photonic and Electronic Resonances in a Metallic Photonic Crystal Slab," Phys. Rev. Lett. 91(18), 183901 (2003). [CrossRef]
26. R. Ameling and H. Giessen, "Microcavity plasmonics: strong coupling of photonic cavities and plasmons: Microcavity plasmonics," Laser Photonics Rev. 7(2), 141–169 (2013). [CrossRef]
27. L. Ferrier, H. S. Nguyen, C. Jamois, L. Berguiga, C. Symonds, J. Bellessa, and T. Benyattou, "Tamm plasmon photonic crystals: From bandgap engineering to defect cavity," APL Photonics 4(10), 106101 (2019). [CrossRef]
28. W. Zhou, M. Dridi, J. Y. Suh, C. H. Kim, D. T. Co, M. R. Wasielewski, G. C. Schatz, and T. W. Odom, "Lasing action in strongly coupled plasmonic nanocavity arrays," Nat. Nanotechnol. 8(7), 506–511 (2013). [CrossRef]
29. T. Zhang, S. Callard, C. Jamois, C. Chevalier, D. Feng, and A. Belarouci, "Plasmonic-photonic crystal coupled nanolaser," Nanotechnology 25(31), 315201 (2014). [CrossRef]
30. J. Hu, W. Liu, W. Xie, W. Zhang, E. Yao, Y. Zhang, and Q. Zhan, "Strong coupling of optical interface modes in a 1D topological photonic crystal heterostructure/Ag hybrid system," Opt. Lett. 44(22), 5642–5645 (2019). [CrossRef]
31. K. Zhou, Q. Cheng, L. Lu, B. Li, J. Song, and Z. Luo, "Dual-band tunable narrowband near-infrared light trapping control based on a hybrid grating-based Fabry–Perot structure," Opt. Express 28(2), 1647 (2020). [CrossRef]
32. D. C. Marinica, A. G. Borisov, and S. V. Shabanov, "Bound States in the Continuum in Photonics," Phys. Rev. Lett. 100(18), 183902 (2008). [CrossRef]
33. S. I. Azzam, V. M. Shalaev, A. Boltasseva, and A. V. Kildishev, "Formation of Bound States in the Continuum in Hybrid Plasmonic-Photonic Systems," Phys. Rev. Lett. 121(25), 253901 (2018). [CrossRef]
34. M. Meudt, C. Bogiadzi, K. Wrobel, and P. Görrn, "Hybrid Photonic–Plasmonic Bound States in Continuum for Enhanced Light Manipulation," 7 (2020).
35. A. D. Rakic, A. B. Djurišic, J. M. Elazar, and M. L. Majewski, "Optical properties of metallic films for vertical-cavity optoelectronic devices," Appl. Opt. 37(22), 5271–5283 (1998). [CrossRef]
36. E. D. Palik, Handbook of Optical Constants of Solids (Academic Press, 1997).
37. M. R. Querry, Optical Constants of Minerals and Other Materials from the Millimeter to the Ultraviolet (Aberdeen Proving Ground, Md.: US Army Armament, Munitions & Chemical Command, Chemical Research & Development Center, 1987).
38. R. W. Wood, "On a Remarkable Case of Uneven Distribution of Light in a Diffraction Grating Spectrum," Proc. Phys. Soc. London 18(1), 269–275 (1902). [CrossRef]
39. U. Fano, "The Theory of Anomalous Diffraction Gratings and of Quasi-Stationary Waves on Metallic Surfaces (Sommerfeld's Waves)," J. Opt. Soc. Am. 31(3), 213 (1941). [CrossRef]
40. R. H. Ritchie, E. T. Arakawa, J. J. Cowan, and R. N. Hamm, "Surface-Plasmon Resonance Effect in Grating Diffraction," Phys. Rev. Lett. 21(22), 1530–1533 (1968). [CrossRef]
41. D. Maystre, "Theory of Wood's Anomalies," in Plasmonics, S. Enoch and N. Bonod, eds., Springer Series in Optical Sciences (Springer, 2012), 167, pp. 39–83.
42. M. Laroche, C. Arnold, F. Marquier, R. Carminati, J.-J. Greffet, S. Collin, N. Bardou, and J.-L. Pelouard, "Highly directional radiation generated by a tungsten thermal source," Opt. Lett. 30(19), 2623 (2005). [CrossRef]
43. J. Liu, U. Guler, A. Lagutchev, A. Kildishev, O. Malis, A. Boltasseva, and V. M. Shalaev, "Quasi-coherent thermal emitter based on refractory plasmonic materials," Opt. Mater. Express 5(12), 2721 (2015). [CrossRef]
44. S. Ogawa, K. Okada, N. Fukushima, and M. Kimata, "Wavelength selective uncooled infrared sensor by plasmonics," Appl. Phys. Lett. 100(2), 021111 (2012). [CrossRef]
45. T. D. Dao, S. Ishii, A. T. Doan, Y. Wada, A. Ohi, T. Nabatame, and T. Nagao, "An On-Chip Quad-Wavelength Pyroelectric Sensor for Spectroscopic Infrared Sensing," Adv. Sci. 6(20), 1900579 (2019). [CrossRef]
46. S. Albert, K. K. Albert, H. Hollenstein, C. M. Tanner, and M. Quack, "Fundamentals of Rotation–Vibration Spectra," in Handbook of High-resolution Spectroscopy (American Cancer Society, 2011).
47. I. E. Gordon, L. S. Rothman, C. Hill, R. V. Kochanov, Y. Tan, P. F. Bernath, M. Birk, V. Boudon, A. Campargue, K. V. Chance, B. J. Drouin, J.-M. Flaud, R. R. Gamache, J. T. Hodges, D. Jacquemart, V. I. Perevalov, A. Perrin, K. P. Shine, M.-A. H. Smith, J. Tennyson, G. C. Toon, H. Tran, V. G. Tyuterev, A. Barbe, A. G. Császár, V. M. Devi, T. Furtenbacher, J. J. Harrison, J.-M. Hartmann, A. Jolly, T. J. Johnson, T. Karman, I. Kleiner, A. A. Kyuberis, J. Loos, O. M. Lyulin, S. T. Massie, S. N. Mikhailenko, N. Moazzen-Ahmadi, H. S. P. Müller, O. V. Naumenko, A. V. Nikitin, O. L. Polyansky, M. Rey, M. Rotger, S. W. Sharpe, K. Sung, E. Starikova, S. A. Tashkun, J. V. Auwera, G. Wagner, J. Wilzewski, P. Wcisło, S. Yu, and E. J. Zak, "The HITRAN2017 molecular spectroscopic database," J. Quant. Spectrosc. Radiat. Transfer 203, 3–69 (2017). [CrossRef]
(1) $\vec{k}_{\mathrm{spp}}=\vec{k}_{\parallel}+j\vec{G}$
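For context, Eq. (1) is the grating momentum-matching condition for exciting surface plasmon polaritons, with $\vec{k}_{\parallel}$ the in-plane wave vector of the incident light, $j$ the diffraction order, and $|\vec{G}|=2\pi/\Lambda$ for a grating period $\Lambda$. Below is a minimal Python sketch evaluating it at normal incidence ($\vec{k}_{\parallel}=0$, $j=1$), assuming the flat-interface SPP effective index $n_{\mathrm{spp}}=\sqrt{\varepsilon_m\varepsilon_d/(\varepsilon_m+\varepsilon_d)}$; the metal permittivity used is an illustrative mid-infrared value, not one taken from this article.

```python
# Momentum matching of Eq. (1) at normal incidence, first order:
# k_spp = G = 2*pi/period  =>  lambda_res = period * Re(n_spp).
# eps_metal below is an illustrative mid-IR value, not from the article.
import numpy as np

def spp_resonance_um(period_um: float, eps_metal: complex, eps_diel: float = 1.0) -> float:
    n_spp = np.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel))
    return period_um * n_spp.real

# With a large negative metal permittivity the resonance sits just above the period:
print(spp_resonance_um(4.070, eps_metal=-700 + 150j))  # ~4.07 um for these numbers
```

For highly conductive mid-infrared metals $n_{\mathrm{spp}}\approx 1$, so the first-order resonance lies just above the grating period, consistent with the periods and resonance wavelengths listed in the table below.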
Geometrical parameters of the gas-sensing absorbers with air cavity
(columns: gas-sensing absorber | dual-band resonance, R-branch / P-branch [µm] | grating parameters [µm] | DBR parameters [µm] | air-cavity thickness tc [µm])
CO2 | 4.233 / 4.283 | 4.070, 1.300, 0.190 | 0.660, 0.286 | 2.167
N2O | 4.470 / 4.526 | 4.300, 1.370, 0.215 | 0.697, 0.302 | 2.290
CO  | 4.599 / 4.737 | 4.540, 1.800, 0.320 | 0.690, 0.299 | 2.341
NO  | 5.255 / 5.405 | 5.160, 2.000, 0.370 | 0.807, 0.350 | 2.661
NO2 | 6.139 / 6.239 | 5.923, 1.950, 0.400 | 0.960, 0.416 | 3.154
Geometrical parameters of the gas-sensing absorbers with BaF2 cavity
BaF2 cavity thickness tc [µm]
Infinitely differentiable function with compact support on $\mathbb{R}^n$ with given properties
For the following parts: $f(t)=\begin{cases}e^{-1/t^2}&t\neq0\\0&t=0\end{cases}$
$\quad(a)$ Show that $f\in C^\infty(\mathbb R)$; that is, $f$ is differentiable to all orders on $\mathbb R$.
$\quad(b)$ Use $f$ to define a function $g\in C^\infty(\mathbb R)$ whose support is $[a,b]$, where $a<b$.
$\quad(c)$ Show how $g$ can be used to define a function $h\in C^\infty(\mathbb R^n)$ such that $$h(x)\begin{cases}=1&||x||\leq1\\\in[0,1]&1<||x||\leq2\\=0&2<||x||.\end{cases}$$
The part I am struggling with is part c. I'm struggling to find a function which fits the given criteria for function values while also being infinitely differentiable with compact support. I used the bump function as my answer to part b. Any help would be greatly appreciated.
real-analysis derivatives
Michael Devin Smith
$\begingroup$ I edited the question into the body. Please do this for your future questions. $\endgroup$
– Rushabh Mehta
The function $h$ in part c is a bump function, so you aren't intended to use it to do this problem. The directions also say to use $f$ to define $g$ in part b, so you should probably back up here. If you are like me, you will probably find it very helpful to graph $f$.
Graph everything (I use Desmos).
Let $g_1(x) = \frac{f(x-a)}{f(x-a)+f(x-b)}$ if $x\in[a,b]$, and 0 otherwise. Define $g_2$ to be a horizontally flipped version of $g_1$, with different constants $c$ and $d$. Stitch those together piecewise to make the function $g$ such that $g\equiv0$ if $x\not\in[a,d]$, $g\equiv1$ if $x\in[b,c]$ and $g$ is smooth everywhere.
Then just choose constants so $g$ is $h$.
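Explicitly, with $a=1$, $b=2$ and $h(x)=1-g_1(\|x\|)$, the stitched function is
$$h(x)=\begin{cases}1 & \|x\|\leq1,\\ \dfrac{f(\|x\|-2)}{f(\|x\|-1)+f(\|x\|-2)} & 1<\|x\|<2,\\ 0 & \|x\|\geq2.\end{cases}$$
Check the smoothness at $\|x\|=1$ and $\|x\|=2$ exactly as in part (b); $h$ is smooth at the origin even though $\|x\|$ is not, because $h\equiv1$ on the whole unit ball.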
Jolly Llama
$\begingroup$ I guess I don't know how to define g from f then. You say to mess around with 3 copies of f in a fraction and to define g' in such a way to get g. I'm honestly still lost. Am I adding scale/stretch factors to f? Am I adding an additional piecewise component to f? Is it a combination of the two? I'm sorry for not understanding but I just don't know what to do. The professor pretty much just reads us definitions and theorems and writes the proofs for them without any examples on how to apply them and pretty much just says "have at it". $\endgroup$
– Michael Devin Smith
$\begingroup$ That's alright. I don't think it's obvious at all if you've never seen it before. I'll give you some more to work with. $\endgroup$
– Jolly Llama
$\begingroup$ By "horizontally flipped" do you mean replacing x with -x in the definition of $g_1$? $\endgroup$
$\begingroup$ Or could I just do $1-g_1$ with different constants? That's seeming to work on desmos, at least on the surface. $\endgroup$
$\begingroup$ $g_2$ will definitely need to have different constants than $g_1$, so you can move them independently of each other. I changed the formula for $g_2$ slightly, so the other constant was on top. Subtracting from 1 simplifies to the same thing, so you're good. $\endgroup$
Aerosol and surface contamination of SARS-CoV-2 observed in quarantine and isolation care
Joshua L. Santarpia1,2,
Danielle N. Rivera2,
Vicki L. Herrera1,
M. Jane Morwitzer1,
Hannah M. Creager1,
George W. Santarpia1,
Kevin K. Crown2,
David M. Brett-Major1,
Elizabeth R. Schnaubelt1,3,
M. Jana Broadhurst1,
James V. Lawler1,2,
St. Patrick Reid1 &
John J. Lowe1,2
An Author Correction to this article was published on 22 September 2020
The novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) originated in Wuhan, China in late 2019, and its resulting coronavirus disease, COVID-19, was declared a pandemic by the World Health Organization on March 11, 2020. The rapid global spread of COVID-19 represents perhaps the most significant public health emergency in a century. As the pandemic progressed, a continued paucity of evidence on routes of SARS-CoV-2 transmission has resulted in shifting infection prevention and control guidelines between classically-defined airborne and droplet precautions. During the initial isolation of 13 individuals with COVID-19 at the University of Nebraska Medical Center, we collected air and surface samples to examine viral shedding from isolated individuals. We detected viral contamination among all samples, supporting the use of airborne isolation precautions when caring for COVID-19 patients.
Healthcare worker protection and effective public health measures for emerging infectious diseases require guidance based upon a solid understanding of modes of transmission. Scant evidence describing SARS-CoV-2 [1] transmission dynamics has led to shifting isolation guidelines from the WHO, U.S. CDC and other public health authorities. Evidence suggests that other emerging coronavirus diseases (e.g. SARS and MERS) have airborne transmission potential [2,3] in addition to more direct contact and droplet transmission. At least one study suggests that MERS-CoV has the possibility of transmission from mildly ill or asymptomatic individuals [4]. Surface samples taken in patient care areas for MERS-CoV and SARS-CoV have shown positive PCR results [3]; however, experts question the presence of viable virus and the implication for transmission through fomites contaminated by the direct contact of the infected person or the settling of virus-laden particles onto the surface [5]. Nonetheless, nosocomial outbreaks suggest transmission of coronaviruses via environmental contamination [6,7]. While nosocomial transmission of SARS-CoV-2 is reported, the role of aerosol transmission and environmental contamination remains unclear, and infection preventionists require further data to inform appropriate practices [8].
The University of Nebraska Medical Center (UNMC), with its clinical partner Nebraska Medicine, cared for 13 individuals with confirmed SARS-CoV-2 infection evacuated from the Diamond Princess cruise ship as of March 6th, 2020. Patients requiring hospital care were managed in the Nebraska Biocontainment Unit (NBU), and mildly ill individuals were isolated in the National Quarantine Unit (NQU), both located on the medical center campus. Key features of the NBU and NQU include: (1) individual rooms with private bathrooms; (2) negative-pressure rooms (> 12 ACH) and negative-pressure hallways; (3) key-card access control; (4) unit-specific infection prevention and control (IPC) protocols including hand hygiene and changing of gloves between rooms; and (5) personal protective equipment (PPE) for staff that included contact and aerosol protection [9].
We initiated an ongoing study of environmental contamination, obtaining surface and air samples in 2 NBU hospital rooms and 9 NQU residential isolation rooms housing individuals testing positive for SARS-CoV-2. Samples were obtained in the NQU on days 5–9 of occupancy and in the NBU on day 10. Additional samples were obtained in the NBU on day 18, after Patient 3 had been admitted to the unit for four days. We obtained surface samples, high-volume [50 L/min (Lpm)] air samples, and low-volume (4 Lpm) personal air samples. The surface samples came from common room surfaces, personal items, and toilets. Personal air sampling devices were worn by study personnel on two days during sampling of NBU and NQU rooms.
During the sampling, individuals in isolation were recording symptoms and oral temperatures twice a day. The maximum temperature during the three days preceding sampling was recorded, as was the presence of any symptoms. During this time, 57.9% of patients recorded a temperature greater than 37.2 °C (99.0 °F), and 15.8% had a temperature greater than 37.8 °C (100.0 °F). Independent of temperature, 57.9% of patients reported other symptoms, primarily cough.
Surface and aerosol samples were analyzed by RT-PCR targeting the E gene of SARS-CoV-2 [10]. Of the 163 samples collected in this study, 121 (72.4%) had a positive PCR result for SARS-CoV-2. Due to the need to cause minimal disruption to individuals in isolation and undergoing hospital care, the precise surface area sampled in this study was not uniform, so results are presented as concentration of gene copies present in the recovered liquid sample (copies/µL). Viral gene copy concentrations recovered from each sample type were generally low and highly variable from sample to sample, ranging from 0 to 1.75 copies/µL (Fig. 1A, and Tables S1 and S2), with the highest concentration recovered from an air handling grate in the NBU. Both the sampling time and flow rate were known for all aerosol samples collected in this study, therefore the airborne concentration was calculated for all air samples (copies/L of air; Tables S1 and S2).
Fig. 1. (A) Box-and-whisker plot demonstrating the max and min (whiskers), median (line), and 25th and 75th percentile gene copy concentrations (copies/µL) for all types of samples collected in this study. Data are presented as a concentration in recovered buffer (sterile PBS) for each sample. Surface samples were in a total of 18 mL (3 mL to pre-moisten and 15 mL to recover); bedroom air and hallway air samples were recovered in 15 mL total, while personal air samples were recovered in 10 mL of sterile PBS. (B) Percentage of positive samples recovered in each room sampling. Bar patterns from the same room and same individual on multiple dates are identical.
Overall, 70.6% of all personal items sampled were determined to be positive for SARS-CoV-2 by PCR (Fig. 1B and Table S1). Of these samples, 75.0% of the miscellaneous personal items (described in the "Methods" section) were positive by PCR, with a mean concentration of 0.22 copies/µL. Samples of cellular phones were 77.8% positive for viral RNA (0.17 copies/µL mean concentration) and remote controls for in-room televisions were 55.6% positive (mean of 0.22 copies/µL). Samples of the toilets in the room were 81.0% positive, with a mean concentration of 0.25 copies/µL. Of all room surfaces sampled (Fig. 1B and Table S1), 75.0% were positive for SARS-CoV-2 RNA. 70.8% of the bedside tables and bed rails indicated the presence of viral RNA (mean concentration 0.26 copies/µL), as did 72.7% of the window ledges (mean concentration 0.22 copies/µL) sampled in each room. The floor beneath patients' beds and the ventilation grates in the NBU were also sampled. All five floor samples, as well as 4 of the 5 ventilation grate samples, tested positive by RT-PCR, with mean concentrations of 0.45 and 0.82 copies/µL, respectively.
Air samples in the rooms and in the hallway spaces (Fig. 1B, and Tables S1 and S2) provide information about airborne viral shedding in these facilities. We found 63.2% of in-room air samples to be positive by RT-PCR (mean concentration 2.42 copies/L of air). In the NBU, for the first two sampling events performed on Day 10, the sampler was placed on the window ledge away from the patient (NBU Room A occupied by Patient 1), and was positive for viral RNA (Table S1; 2.42 copies/L of air). During the sampling event on Day 18 in NBU Room B occupied by Patient 3, one sampler was placed near the patient and one was placed near the door greater than 2 m from the patient's bed while the patient was receiving oxygen (1 L) via nasal cannula. Both samples were positive by PCR, with the one closest to the patient indicating a higher airborne concentration of RNA (4.07 as compared to 2.48 copies/L of air). Samples taken outside the rooms in the hallways were 58.3% positive (Fig. 1B and Table S2), with a mean concentration of 2.51 copies/L of air. Both personal air samplers from sampling personnel in the NQU showed positive PCR results after 122 min of sampling activity (Table S2), and both air samplers from NBU sampling indicated the presence of viral RNA after only 20 min of sampling activity (Table S2). The highest airborne concentrations were recorded by personal samplers in NBU while a patient was receiving oxygen through a nasal cannula (19.17 and 48.22 copies/L). Neither individuals in the NQU or patients in the NBU were observed to cough while sampling personnel were in the room wearing samplers during these events.
Between 5 and 16 samples were collected from each room, with a mean of 7.35 samples per room and a mode of 6 samples per room. The percentage of positive samples from each room ranged between 40 and 100% (Fig. 1B, 2A). A Spearman's Rank Order Correlation (ρ) of percent positive samples and total number of samples for each room had a value of 0.33, indicating a weak relationship between the number of samples taken and the percent of positive samples observed, which is likely due to the focus of unplanned samples on areas or objects frequently used by patients. When the percent of positive samples taken was compared to the maximum reported oral temperature of the patient for the previous three days, a ρ of 0.39 indicated only a weak relationship between elevated body temperature and shedding of virus in the environment. Further, recorded oral temperature was compared with the gene copy concentration for each in-room sample type (Table S1). These ρ values ranged from -0.02 (cell phones) to 0.36 (air samples), indicating weak to no significant relationship between body temperature and environmental contamination, with air samples having the strongest correlation, followed by windowsills (ρ = 0.25) and remotes (ρ = 0.23).
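For reference, the rank-order correlations reported above are a one-line computation in SciPy. The sketch below uses hypothetical per-room values purely to illustrate the call; the arrays are not data from this study.

```python
# Spearman rank correlation between per-room maximum oral temperature and the
# percentage of positive samples; the arrays below are hypothetical placeholders.
from scipy.stats import spearmanr

max_temp_C = [37.4, 36.9, 37.8, 37.1, 36.8, 37.6]       # hypothetical room maxima
pct_positive = [80.0, 50.0, 100.0, 60.0, 40.0, 75.0]    # hypothetical % positive

rho, p_value = spearmanr(max_temp_C, pct_positive)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.2f}")
```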
Fig. 2. Results of SARS-CoV-2 cell culture experiments. Images and graphs describe the results of cell culture of two environmental samples. The two samples are shown: an air sample from the NQU hallway on day 8 (A,C,E) and the windowsill from NQU A on day 1 (B,D,F). Cytopathic effect observed in these samples (A,B) is generally mild, compared to the control (top center), which had no environmental sample added. RT-qPCR from daily withdrawals of 100 µL of supernatant from the cell culture of each sample indicates changes in viral RNA in the supernatant throughout cultivation. The hallway air sample indicates a decrease in RNA concentration in the supernatant over the first 2 days, consistent with the withdrawal of supernatant for analysis. An increase in concentration is observed on both days 3 and 4 (C). The windowsill sample showed stable and possibly increasing viral concentrations for the first 3 days, despite the withdrawal of supernatant for analysis (D). Immunofluorescent staining of the hallway air sample indicates the presence of SARS-CoV-2 after 3 days of cell culture (E), as compared to control cells (inset), which were not exposed to any environmental sample. TEM images of the lysates from the windowsill culture (F) clearly indicate the presence of intact SARS-CoV-2 virions after 3 days of cell culture.
A subset of samples that were positive for viral RNA by RT-PCR was examined for viral propagation in Vero E6 cells. Several indicators were utilized to determine viral replication, including cytopathic effect (CPE), immunofluorescent staining, time-course PCR of cell culture supernatant, and electron microscopy. Due to the low concentrations recovered in these samples, cultivation of virus was not confirmed in these experiments. Nevertheless, in two of the samples, cell culture indicated some evidence for the presence of replication-competent virus (Fig. 2): an air sample from the NQU hallway on day 8 and the windowsill from NQU A on day 5 (Tables S1 and S2). Microscopic inspection of cell cultures indicated CPE after 3–4 days (Fig. 2A,B). Serial PCR of cell culture supernatant was unclear, but the observed changes in supernatant RNA in the hallway sample indicated that, after an initial decrease in RNA in the supernatant (consistent with the daily withdrawal of supernatant for analysis and replacement with fresh supernatant), some increase in viral RNA may have occurred (Fig. 2C). The windowsill sample had consistent viral RNA in the supernatant throughout the time course, despite the daily withdrawal of supernatant for analysis, which could indicate replication (Fig. 2D). Further, immunofluorescence images (Fig. 2E) indicate evidence of viral proteins in the hallway sample, and transmission electron microscopy (TEM) of the windowsill sample (Fig. 2F) confirmed the presence of intact SARS-CoV-2 virions after 3 days of cell culture.
Taken together, these data indicate significant environmental contamination in rooms where patients infected with SARS-CoV-2 are housed and cared for, regardless of the degree of symptoms or acuity of illness. Contamination exists in all types of samples: high- and low-volume air samples, as well as surface samples including personal items, room surfaces, and toilets. Samples of patient toilets that tested positive for viral RNA are consistent with other reports of viral shedding in stool [11]. The presence of contamination on personal items is also expected, particularly those items that are routinely handled by individuals in isolation, such as cell phones and remote controls, as well as medical equipment that is in near-constant contact with the patient. The observation of viral replication in cell culture for some of the samples confirms the potential infectious nature of the recovered virus.
We noted variability in the degree of environmental contamination (as measured by the percentage of positive samples) from room to room and day to day. In general, percent positive samples in the NQU were higher on Days 5–7 (72.5%) versus Days 8–9 (64.9%). While most NQU rooms had higher percentages of positive specimens earlier in the course of illness, three of the nine rooms (NQU B, G, and I) actually had higher percentages on later days (Days 8 and 9, respectively). On average, a higher percentage of positive samples (81.4% over 3 rooms) was detected in the NBU later in the course of illness (sampled on Days 10 and 18), suggesting that patients with higher acuity of illness or levels of care may be associated with increased levels of environmental contamination. However, the lack of a strong relationship between environmental contamination and body temperature reaffirms the fact that shedding of viral RNA is not necessarily linked to clinical signs of illness.
In the hospital NBU, where patients were generally less mobile, distribution of positive samples suggests a strong influence of airflow. Personal and high-touch items were not universally positive, yet we detected viral RNA in 100% of samples from the floor under the bed and all but one window ledge (which were not used by the patient) in the NBU. Airflow in NBU suites originates from a register at the top center of the room and exits from grates near the head of the patient's bed on either side of the room. Airflow modelling [12] has suggested that some fraction of the airflow is directed under the patient's bed, which may cause the observed contamination under the bed, while the dominant airflow likely carries particles away from the patient's bed towards the edges of the room, likely passing by the windows, resulting in some deposition there.
Although this study did not employ any size-fractionation techniques in order to determine the size range of SARS-CoV-2 droplets and particles, the data suggest that viral aerosol particles are produced by individuals with COVID-19, even in the absence of cough. First, in the few instances where the distance between individuals in isolation and air sampling could be confidently maintained at greater than 6 ft, 2 of the 3 air samples were positive for viral RNA. Second, 58.3% of hallway air samples indicate that virus-containing particles were being transported from the rooms to the hallway during sampling activities. It is likely that the positive air samples in the hallway were caused by viral aerosol particles transported or resuspended by personnel exiting the room [13,14]. Finally, personal air samplers worn by sampling personnel were all positive for SARS-CoV-2, despite the absence of cough by most patients while sampling personnel were present. Recent literature investigating human-expired aerosol suggests that a large fraction is less than 10 µm in diameter across all types of activity (e.g. breathing, talking, and coughing [15]) and that upper respiratory illness increases production of aerosol particles less than 10 µm [16]. A recent study of SARS-CoV-2 stability indicates that infectious aerosol may persist for several hours and on surfaces for as long as 2 days [17].
Our study suggests that SARS-CoV-2 environmental contamination around COVID-19 patients is extensive, and hospital IPC procedures should account for the risk of fomite, and potentially airborne, transmission of the virus. Despite wide-spread environmental contamination and limited SARS-CoV-2 aerosol contamination associated with hospitalized and mildly ill individuals, the implementation of a standard suite of infection prevention and control procedures prevented any documented cases of COVID-19 in healthcare workers, who self-monitored for 14 days after last contact with either ward and underwent two nasal swab PCR assays 24 h apart if they reported fever or any respiratory infection symptoms. The standard IPC protocols for both the NBU and NQU includes negative pressure rooms with 12–15 air-exchanges per hour, negative pressure hallways in the suite compared to outside, strict access control, highly trained staff with well-developed protocols, frequent environmental cleaning, and aerosol-protective personal protective equipment that consisted of N95 filtering facepiece respirators in the NQU and powered air purifying respirators for patient care within the NBU. Further study is necessary to fully quantify risk.
Methods

High-touch personal items sampled included cellular phones, exercise equipment, television remotes, and medical equipment. Room surfaces tested included ventilation grates, tabletops, and window ledges. Toilet samples were obtained from the rim of the bowl. Air samples were collected both in isolation rooms and in the hallways of the NBU and NQU during sampling activities, while patients were present. Personal air samplers were worn by sampling personnel on two occasions during sampling activities: once during sampling at the NQU when 6 individual rooms were sampled, and once in the hospital NBU when one room was sampled.
Surface and personal items were collected using 3 × 3 sterile gauze pads pre-wetted with 3 mL of phosphate buffered saline (PBS). Large area surface samples were collected by wiping in an "S" pattern in 2 directions to cover as much of the available surface as possible. Smaller items (e.g. cellular phones, remote controls) were wiped in one direction on every available surface. Following collection, samples were packed in 50 mL conical tubes. Hand hygiene and glove changes were performed between the collection of every sample.
Several personal items were sampled consistently across all quarantine rooms (cellular phones and television remote controls). In addition, individuals were asked which items they used or handled frequently, and several additional samples were collected based on those responses: exercise equipment, medical equipment (spirometer, pulse oximeter, nasal cannula), personal computers, iPads, reading glasses, and pots used to heat water. This last category was grouped together as "Miscellaneous Personal Samples".
Room surfaces
Several surfaces were sampled in each room. For rooms in the National Quarantine Unit, both the windowsill and the bedside table were sampled. For rooms in the Nebraska Biocontainment Unit, samples were taken on the windowsill, the bed rail or bedside table, under the patient's bed and on the air conditioning return grate nearest the door.
Air samples
Stationary air samples, both inside and outside of patient rooms, were collected using a Sartorius Airport MD8 air sampler operating at 50 Lpm for 15 min. Samples were collected onto an 80 mm gelatin filter. Samplers in patient rooms were placed on bedside tables and nightstands but at least 1 m away from the patient. No attempt was made to ensure the sampler was placed a specific distance from the individual in the room, so, while the distance between sampler and individual was neither defined nor consistent, individuals in the room did not directly interact with the sampler.
NQU subjects were ambulatory, however, and all were out of bed during sampling. Our NQU protocol advises patients in isolation to maintain a 6-foot distance from staff members who enter their room and to wear a procedure mask (e.g. surgical mask) while staff are in the room. Observed adherence with these procedures was high throughout patient stays. For this study, individuals were instructed that they could remove the mask during air sampling activities; however, many individuals did not remove it, and therefore the impact of infected individuals wearing masks cannot be assessed in this study.
Study personnel generally left the rooms for a significant portion of the air sampling period, but it appeared that not all patients removed procedure masks during those periods. Hallway air samples were obtained by placing samplers on the floor approximately 10 cm from the door frame adjacent to rooms where sampling activities were taking place. Study personnel entered and exited rooms several times during air sampling. Additional personal air samples were collected by study personnel during sampling activities wearing Personal Button Samplers (SKC, Inc.) and using AirChek pumps (SKC, Inc.), both sampling at 4 Lpm. These samples were collected onto 25 mm gelatin filters.
Sample recovery, RNA extraction, and reverse transcriptase PCR
Surface samples were recovered by adding 15 mL of sterile PBS to the conical vial containing the gauze pad and manually shaking the conical for 1 min. 25 mm gelatin filters were removed from the filter housing, placed in a 50 mL conical tube, and then dissolved by adding 10 mL of sterile PBS. 80 mm gelatin filters were removed from their filter housing, carefully folded, placed in a 50 mL conical tube, and then dissolved by adding 15 mL of sterile PBS. RNA extractions were performed using a Qiagen DSP Virus Spin Kit (QIAGEN GmbH, Hilden, Germany); 200 to 400 µL of initial sample was used for RNA extraction, and a negative extraction control was included with each set of extractions. Samples were eluted in 50 µL of Qiagen Buffer AVE. RT-PCR was performed using the Invitrogen Superscript III Platinum One-Step Quantitative RT-PCR System. Each PCR run included a positive synthetic DNA control and a negative, no-template control of nuclease-free water. In addition, blank samples of swipes and gelatin filters, both carried during sampling and kept in the laboratory, were analyzed. No amplification of blank samples was observed. Reactions were set up and run with initial conditions of 10 min at 55 °C and 4 min at 94 °C, then 45 cycles of 94 °C for 15 s and 58 °C for 30 s, on a QuantStudio 3 (Applied Biosystems, Inc.) utilizing the following reagents:
6.1 µL nuclease free water
12.5 µL Invitrogen 2X Master Mix
0.4 µL MgSO4
0.5 µL Primer/Probe Mix (IDT)* (primers 10 µM, probe 5 µM)
0.5 µL SuperScript III Platinum Taq
5.0 µL extracted sample RNA, nuclease free water or positive control
25.0 µL Total
In order to quantify the number of viral gene copies present in each sample from the measured Ct values, a standard curve was developed using synthetic DNA. A 6-log standard curve was run in duplicate beginning at a concentration of 1 × 103 copies/µL. The data was fit with the exponential function:
$$\textrm{copy concentration}\ \left(\frac{\textrm{copies}}{\mu\textrm{L}}\right) = 3\times10^{6}\,e^{-0.421\times Ct},$$
where Ct is the cycle number at which amplification is definitive. The equation was then used to convert all measured Ct values from all samples into gene copy concentrations. The minimum concentration detected by this assay was $1\times10^{-1}$ copies/µL, at between 39 and 44 cycles. Considering a 5 µL sample volume, the derived exponential function above, and the uncertainty in the efficiency of RNA extraction, amplification beyond a Ct of 39.2 is treated as undetected, which equates to 1 copy in 5 µL of recovered sample. The average and standard deviation concentrations were calculated from the triplicate PCR runs for each sample.
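For readers who want to reproduce the unit conversions, the sketch below implements the standard-curve inversion above together with the airborne-concentration step described in the Results (total recovered copies divided by the total volume of air sampled). The function names and the example Ct value are illustrative choices, not part of the published protocol.

```python
# Ct-to-concentration conversion from the standard curve above, plus the
# airborne-concentration step for air samples. Example inputs are illustrative.
import math

def ct_to_copies_per_uL(ct: float) -> float:
    """Standard-curve fit: copies/uL = 3e6 * exp(-0.421 * Ct); Ct > 39.2 -> undetected."""
    return 0.0 if ct > 39.2 else 3e6 * math.exp(-0.421 * ct)

def airborne_copies_per_L(ct: float, buffer_mL: float, flow_Lpm: float, minutes: float) -> float:
    """Total copies recovered in buffer divided by total volume of air sampled."""
    total_copies = ct_to_copies_per_uL(ct) * buffer_mL * 1e3  # 1e3 uL per mL
    return total_copies / (flow_Lpm * minutes)

# Example: a 15 min, 50 Lpm room air sample recovered in 15 mL of PBS
print(airborne_copies_per_L(ct=35.0, buffer_mL=15.0, flow_Lpm=50.0, minutes=15.0))
```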
The primers and probe used in this study (below) are based on a previously published assay [10] targeting the E gene of SARS-CoV-2, which produces the envelope small membrane protein. The gene was used as a target based on its similarity to previously identified coronaviruses, including SARS-CoV strain Frankfurt and two bat SARS-related CoVs (GenBank Acc. No. MG772933.1 and NC_014470). The Primer-BLAST [18] tool was used to examine the specificity of the assay beyond what was described in the original publication. The search used default parameters and allowed 9 mismatches before ignoring the target sequence. The search returned hits from 957 SARS-CoV-2 isolate sequences and 7 pangolin coronavirus isolate sequences, indicating that it should be sensitive to SARS-CoV-2 and have the potential to cross-react only on related non-human coronaviruses. The positive control consisted of ssDNA targeting the E and N genes (below), in a 1:1 mixture at $10^3$ copies/µL. The ssDNA was based on the 2019-nCoV genome sequence published in GenBank [19].
*E gene target primers and probe:
Probe: 5′/56-FAM/ACACTAAGCC/ZEN/ATCCTTACTGCGCTTCG/3AIBkFG/-3'
Primer 1: 5′-ATATTGCAGCAGTACGCACACA-3'
Primer 2: 5′-ACAGGTACGTTAATAGTTAATAGCGT-3'
ssDNA E Target Sequence:
5′TTCGGAAGAGACAGGTACGTTAATAGTTAATAGCGTACTTCTTTTTCTTGCTTTCGTG
GTATTCTTGCTAGTTACACTAGCCATCCTTACTGCGCTTCGATTGTGTGCGTACTGCTGC
AATATTGTTAACGTG-3'
ssDNA N Target Sequence:
5′ACCAAAAGATCACATTGGCACCCGCAATCCTGCTAACAATGCTGCAATCGTGCTACA
ACTTCCTCAAGGAACAACATTGCCAAAAGGCTTCTACGCAGAAGGGAGCAGAGGCGG
CAGTCAAGCCTCTTCTCGTTCCTCATCACGTAGT-3'
Cell culture assays
Vero E6 cells were used to culture virus from environmental samples. The cells were cultured in Dulbecco's minimal essential medium (DMEM) supplemented with heat-inactivated fetal bovine serum (10%), penicillin/streptomycin (10,000 IU/mL and 10,000 µg/mL), and amphotericin B (25 µg/mL). For propagation, 100 µL of undiluted samples were added to 24-well plates. The cells were monitored daily to detect virus-induced CPE. After 3–4 days, cell supernatants and lysates were collected. For the time-course PCR experiments, supernatant was collected on each day. Samples were evaluated for cytopathic effect. Immunofluorescence was performed using a mouse monoclonal anti-SARS-CoV antibody to determine the presence of viral antigens. The reagent was obtained through BEI Resources, NIAID, NIH: Monoclonal Anti-SARS-CoV S Protein (Similar to 540C), NR-618. Cell nuclei were labeled with Hoechst 33342. Confocal images were collected with a Zeiss LSM 800 with Airyscan. For electron microscopy, samples were fixed, processed, then sectioned and subsequently subjected to transmission electron microscopy (TEM). CPE images were acquired in the Nebraska Public Health Laboratory (NPHL) BSL3 facility using a 3D-printed ocular adapter for cell phone photography, kindly donated by Dr. Jesse Cox, Yellow Basement Designs.
This study was conducted in the National Quarantine Unit and the Nebraska Biocontainment Unit with permission from the University of Nebraska Medical Center and Nebraska Medicine as a part of a quality assurance/quality improvement study on isolation care. Sampling of individual personal items was done with the owner's permission. Patients were informed that use of a face covering was not necessary during sampling activities, but the decision to wear or not wear provided surgical masks was left to the individual. This activity was reviewed by the Office of Regulatory Affairs at the University of Nebraska Medical Center, and it was determined that this project does not constitute human subject research as defined by 45 CFR 46.102.
All data is available in the main text or the supplementary materials.
An amendment to this paper has been published and can be accessed via a link at the top of the paper.
Zhu, N. et al. A novel coronavirus from patients with pneumonia in China, 2019. New Engl. J. Med. 382, 727–733. https://doi.org/10.1056/NEJMoa2001017 (2020).
Tellier, R., Li, Y., Cowling, B. J. & Tang, J. W. Recognition of aerosol transmission of infectious agents: a commentary. BMC Infect. Dis. 19, 101. https://doi.org/10.1186/s12879-019-3707-y (2019).
Booth, T. F. et al. Detection of airborne severe acute respiratory syndrome (SARS) coronavirus and environmental contamination in SARS outbreak units. J. Infect. Dis. 191, 1472–1477. https://doi.org/10.1086/429634 (2005).
Omrani, A. S. et al. A family cluster of Middle East respiratory syndrome coronavirus infections related to a likely unrecognized asymptomatic or mild case. Int. J. Infect. Dis. 17, 668–672. https://doi.org/10.1016/j.ijid.2013.07.001 (2013).
Morawska, L. Droplet fate in indoor environments, or can we prevent the spread of infection? In Indoor Air 2005: Proceedings of the 10th International Conference on Indoor Air Quality and Climate (eds Yang, X., Zhao, B. & Zhao, R.) 9–23 (Tsinghua University Press, 2006).
Chowell, G. et al. Transmission characteristics of MERS and SARS in the healthcare setting: a comparative study. BMC Med. 13, 210. https://doi.org/10.1186/s12916-015-0450-0 (2015).
Bin, S. Y. et al. Environmental contamination and viral shedding in MERS patients during MERS-CoV outbreak in South Korea. Clin. Infect. Dis. 62(6), 755–760. https://doi.org/10.1093/cid/civ1020 (2016).
Wang, D. et al. Clinical characteristics of 138 hospitalized patients with 2019 novel coronavirus-infected pneumonia in Wuhan, China. JAMA 323(11), 1061–1069. https://doi.org/10.1001/jama.2020.1585 (2020).
Beam, E. L. et al. Personal protective equipment processes and rationale for the Nebraska Biocontainment Unit during the 2014 activations for Ebola virus disease. Am. J. Infect. Control. 44(3), 340–342. https://doi.org/10.1016/j.ajic.2015.09.031 (2016).
Corman, V. et al. Diagnostic detection of Wuhan coronavirus 2019 by real-time RT-PCR. World Health Organization. https://www.who.int/docs/default-source/coronaviruse/wuhan-virus-assay-v1991527e5122341d99287a1b17c111902.pdf?sfvrsn=d381fc88_2 (2020).
Ong, S. W. X. et al. Air, surface environmental, and personal protective equipment contamination by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) from a symptomatic patient. JAMA 323(16), 1610–1612. https://doi.org/10.1001/jama.2020.3227 (2020).
Hewlett, A. L., Whitney, S. E., Gibbs, S. G., Smith, P. W. & Viljoen, H. J. Mathematical modeling of pathogen trajectory in a patient care environment. Infect. Control Hosp. Epidemiol. 34(11), 1181–1188. https://doi.org/10.1086/673451 (2013).
Luoma, M. & Batterman, S. A. Characterization of particulate emissions from occupant activities in offices. Indoor Air 11(1), 35–48 (2001).
Wang, J. & Chow, T. T. Numerical investigation of influence of human walking on dispersion and deposition of expiratory droplets in airborne infection isolation room. Build. Environ. 46(10), 1993–2002. https://doi.org/10.1016/j.buildenv.2011.04.008 (2011).
Johnson, G. R. et al. Modality of human expired aerosol size distributions. J. Aerosol Sci. 42, 839–851. https://doi.org/10.1016/j.jaerosci.2011.07.009 (2011).
Lee, J. et al. Quantity, size distribution, and characteristics of cough generated aerosol produced by patients with an upper respiratory tract infection. Aerosol Air Qual. Res. 19, 840–853. https://doi.org/10.4209/aaqr.2018.01.0031 (2019).
van Doremalen, N. et al. Aerosol and surface stability of HCoV-19 (SARS-CoV-2) compared to SARS-CoV-1. New Engl. J. Med. 382, 1564–1567. https://doi.org/10.1056/NEJMc2004973 (2020).
Ye, J. et al. Primer-BLAST: A tool to design target-specific primers for polymerase chain reaction. BMC Bioinformatics 13, 134. https://doi.org/10.1186/1471-2105-13-134 (2012).
Wu, F. et al. Wuhan seafood market pneumonia virus isolate Wuhan-Hu-1, complete genome. GenBank. https://www.ncbi.nlm.nih.gov/nuccore/MN908947 (2020).
The authors would also like to thank all of the individuals in isolation and care at both the National Quarantine Unit and the Nebraska Biocontainment Unit for their willingness and interest in cooperating with this study. The authors would like to thank Tom Bargar and Nicholas Conoan of the Electron Microscopy Core Facility (EMCF) at the University of Nebraska Medical Center for technical assistance. The EMCF is supported by state funds from the Nebraska Research Initiative (NRI) and the University of Nebraska Foundation, and institutionally by the Office of the Vice Chancellor for Research. The authors would like to thank Janice A. Taylor and James Talaska of the Advanced Microscopy Core Facility at the University of Nebraska Medical Center for providing assistance with confocal microscopy. Funded by internal funds from the University of Nebraska Medical Center.
University of Nebraska Medical Center, Omaha, NE, USA
Joshua L. Santarpia, Vicki L. Herrera, M. Jane Morwitzer, Hannah M. Creager, George W. Santarpia, David M. Brett-Major, Elizabeth R. Schnaubelt, M. Jana Broadhurst, James V. Lawler, St. Patrick Reid & John J. Lowe
National Strategic Research Institute, Omaha, NE, USA
Joshua L. Santarpia, Danielle N. Rivera, Kevin K. Crown, James V. Lawler & John J. Lowe
United States Air Force School of Aerospace Medicine, San Antonio, TX, USA
Elizabeth R. Schnaubelt
J.S. and J.J.L. conceived of the initial study; J.S. and D.R. developed the sampling strategy; J.V.L., E.S., and D.B.-M. collected medical data; S.P.R. and M.J.M. performed cell culture assays; G.S., J.B., and H.C. developed the PCR assay and performed initial tests; J.S., J.J.L., D.R., V.H., and J.V.L. collected samples; K.K.C., D.R., and V.H. processed samples; V.H. performed all PCR; J.S. and J.J.L. wrote the manuscript with contributions from all authors.
Correspondence to Joshua L. Santarpia.
Supplementary information.
Santarpia, J.L., Rivera, D.N., Herrera, V.L. et al. Aerosol and surface contamination of SARS-CoV-2 observed in quarantine and isolation care. Sci Rep 10, 12732 (2020). https://doi.org/10.1038/s41598-020-69286-3
Multi-qubit phase gate on multiple resonators mediated by a superconducting bus
Jin-Xuan Han,1 Jin-Lei Wu,1,3 Yan Wang,1 Yong-Yuan Jiang,1 Yan Xia,2 and Jie Song1,4
1Department of Physics, Harbin Institute of Technology, Harbin 150001, China
2Department of Physics, Fuzhou University, Fuzhou 350002, China
[email protected]
[email protected]
Jin-Lei Wu https://orcid.org/0000-0002-6791-8305
https://doi.org/10.1364/OE.384352
Jin-Xuan Han, Jin-Lei Wu, Yan Wang, Yong-Yuan Jiang, Yan Xia, and Jie Song, "Multi-qubit phase gate on multiple resonators mediated by a superconducting bus," Opt. Express 28, 1954-1969 (2020)
Original Manuscript: November 26, 2019
Revised Manuscript: December 27, 2019
Manuscript Accepted: December 27, 2019
We propose a one-step scheme for implementing multi-qubit phase gates on microwave photons in multiple resonators mediated by a superconducting bus in a circuit quantum electrodynamics (QED) system. In the scheme, multiple single-mode resonators carry quantum information with their vacuum and single-photon Fock states, and a multi-level artificial atom acts as a quantum bus which induces the indirect interaction among the resonators. The method of pulse engineering is used to shape the coupling strength between the resonators and the bus so as to improve the fidelity and robustness of the scheme. We also discuss the influence of the finite coherence times of the bus and of the resonators on the gate fidelity. Finally, we consider the suppression of unwanted transitions and propose a method of optimized detuning compensation to offset them, showing the feasibility of the scheme within current experimental technology.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction

Multi-qubit phase gates are a crucial element in quantum computation and quantum information processing (QCQIP). In general, there are two types of important multi-qubit gates, which have attracted attention during the past years. One consists of multiple control qubits simultaneously controlling a single target qubit [1,2]. The other contains a single qubit simultaneously controlling multiple target qubits [3–5]. These two types of multi-qubit gates are important in QCQIP tasks such as entanglement preparation [6], error correction [7,8], quantum algorithms [9], and quantum cloning [10].
Circuit quantum electrodynamics (QED) is an analogue of cavity QED, consisting of superconducting qubits and microwave resonators or cavities. It is an especially well-suited platform for realizing QCQIP owing to its flexibility, scalability, and tunability [11–14]. It was theoretically predicted early on that strong coupling can be readily achieved with superconducting charge qubits [12] or flux qubits [15], and strong and ultrastrong couplings between a superconducting qubit and a microwave resonator have been demonstrated experimentally [16]. Circuit QED is now moving toward multiple superconducting qubits and multiple three-dimensional (3D) cavities with greatly enhanced coherence times, making them particularly appealing for large-scale quantum computing [16]. So far, lifetimes of superconducting resonators between 1 and 10 ms have been reported [17–19]. Superconducting devices including Cooper pair boxes, Josephson junctions, and superconducting quantum interference devices (SQUIDs) have been among the most promising candidates for quantum computing [20–22]. Many schemes have been proposed to achieve a multi-qubit phase gate in circuit QED by encoding quantum information in levels of the artificial atoms [23–28]. Ye et al. [28] proposed a multiplex-controlled phase gate of $n-1$ control qubits simultaneously controlling one target qubit, with $n$ qubits placed in $n$ different cavities. The multi-qubit gate implementation in Ref. [28] is complex, requiring $2n+2$ basic operations. However, given the experimentally reported high quality factors, a microwave resonator can act not only as a good quantum data bus [11,29] but also as a good quantum memory [30]. Schemes for realizing quantum phase gates by encoding quantum information in zero- and one-photon Fock states in cavity QED or circuit QED have also been presented [31–35]. Alternatively, two orthogonal cat states of a single cavity mode can represent the two logical states of a cat-state qubit, as in Refs. [34,35]. In the method proposed in Ref. [34], a multi-target-qubit controlled phase gate is realized by one cat-state qubit (cqubit) simultaneously controlling $n-1$ target cqubits, with the cqubits in the cavities acting as the quantum information carriers. Compared with Ref. [28], this gate operation is greatly simplified because only a one-step operation is needed and neither classical pulses nor measurements are required.
The focus of this work is on realizing single-step multi-qubit phase gates on multiple single-mode resonators mediated by a bus, a multi-level superconducting atom, in circuit QED. On the one hand, in our model the superconducting bus induces the indirect interaction among multiple resonators, and the quantum information is encoded in the vacuum and single-photon states of multiple single-mode resonators, unlike Refs. [25–28]. On the other hand, we use multiple single-mode resonators, in contrast to Refs. [31,32], which employed multiple modes of one resonator; distributed quantum computation can therefore be realized by employing multiple single-mode resonators. Moreover, the multi-qubit phase gate is constructed in only one step, in comparison with Ref. [28].
In addition, in order to enhance the fidelity and robustness of the scheme, the pulse engineering technique [36–40] can be used to shape the coupling strengths between the resonators and the bus, so that the realization of the multi-qubit phase gate is freed from a strict operation time requiring precise control. Tunable coupling strengths play a broad role in circuit QED systems [41–52]. In experiment, an engineered coupling strength can be tuned by using controlled voltage pulses to modulate the flux threading the SQUID loop of a qubit [44,53]. Besides, full tunability of the coupling strength can also be achieved by using a magnetic flux to periodically modulate the frequency of the qubits [46] or by changing the coupler frequency between two qubits [52]. Finally, we consider the suppression of unwanted transitions and propose a method of optimized detuning compensation to offset them.
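To illustrate the kind of pulse shaping we have in mind (one commonly used smooth envelope, chosen here only for concreteness rather than the specific shape adopted later), the coupling can be ramped as
$$g_j(t)=g_j^{\max}\sin^{2}\!\left(\frac{\pi t}{T}\right),\qquad 0\le t\le T,$$
so that the resonant gate condition of Sec. 3 becomes an area condition, $\int_0^{T} g_1(t)g_2(t)/\Omega_1\,dt=\pi$. Because the coupling vanishes smoothly at the pulse edges, small errors in the total duration $T$ change the accumulated pulse area only weakly.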
2. Description of the quantum system
2.1 Description of a superconducting bus
In this section, we introduce the physical model of the system for the protocol, whose schematic diagram is shown in Fig. 1(a). Suppose that there are $n$ single-mode resonators capacitively coupled to a SQUID, i.e., an artificial atom serving as a "bus". The Hamiltonian of this bus can be described as [15,54]
(1)$$H_{bus}=\frac{Q^2}{2C}+\frac{(\Phi-\Phi_x)^2}{2L}-E_J\cos\frac{2\pi\Phi}{\Phi_0},$$
in which $C$ is the junction capacitance, $L$ the loop inductance, $Q$ the total charge on the capacitor, $\Phi_0=h/2e$ the flux quantum, $E_J=I_c\Phi_0/2\pi$ the Josephson energy with $I_c$ being the critical current of the junction, $\Phi$ the magnetic flux threading the ring, and $\Phi_x$ the static external flux applied to the ring. The total flux is related to the external flux by $\Phi=\Phi_x+LI_c$. In Eq. (1), the first term describes the charging energy of the Josephson junction, the second term the magnetic energy stored in the loop, and the third term the Josephson coupling energy. The potential energy part of the bus Hamiltonian can be written as [15,54]
(2)$$V=V_0\left\{\frac{1}{2}\left[\frac{2\pi(\Phi-\Phi_x)}{\Phi_0}\right]^2-\frac{E_J}{V_0}\cos\left(2\pi\frac{\Phi}{\Phi_0}\right)\right\},$$
with $V_0=\Phi _0^2/(4\pi ^2L)$. The height of potential barrier in the energy level structure of the bus can be adjusted by changing $E_J$, and the symmetry of potential well can also be adjusted by changing the external magnetic flux $\Phi _x$.
Fig. 1. (a) Schematic diagram of the SQUID qubit defined as the bus coupled to $N$ resonators with capacitance. (b) Schematic diagram of the level configuration for the bus interacting with $n$ resonators.
2.2 A superconducting bus coupled with $n$ single-mode microwave resonator fields
Consider a superconducting bus coupled capacitively to $n$ single-mode microwave resonators. The superconducting bus holds a level structure, as shown in Fig. 1(b), formed by ground and excited levels, denoted by $|g_1\rangle, |g_2\rangle, |g_3\rangle \cdots |g_n\rangle$ and $|e_1\rangle, |e_2\rangle, |e_3\rangle \cdots |e_n\rangle$, respectively. The classical pulses applied to the bus resonantly drive the transitions between $|e_j\rangle$ and $|g_{j+1}\rangle$ with Rabi frequencies $\Omega_j$, while resonator $j$ is coupled resonantly to the transition $|g_j\rangle \leftrightarrow |e_j\rangle$ ($j=1,2,3 \cdots n$) with coupling strength $g_j$. $\Omega_j$ and $g_j$ are expressed by [15,54]
(3)$$\begin{aligned} \Omega_j&=\frac{1}{2L\hbar}\langle g_{j+1}|\Phi|e_j\rangle\int_S \mathbf{B}^{j}_{\mu\omega}(\vec{r},t)\cdot d\mathbf{S},\\ g_j&=\frac{1}{L}\sqrt{\frac{\omega_j}{2\mu_0\hbar}}\langle g_j|\Phi|e_j\rangle\int_S \mathbf{B}^{j}_r(\vec{r},t)\cdot d\mathbf{S}, \end{aligned}$$
where $j=1,2,3 \cdots n$, $S$ is the surface bounded by the loop of the bus, and $\omega_j$ is the frequency of the $j$th resonator. The magnetic component of the $j$th classical microwave pulse applied to the bus is given by $\mathbf{B}^{j}_{\mu\omega}(\vec{r},t)=\mathbf{\tilde{B}}^{j}_{\mu\omega}(\vec{r},t)\cos 2\pi\nu_{\mu\omega}t$, in which $\nu_{\mu\omega}=\omega_{\mu\omega}/2\pi$ and $\omega_{\mu\omega}$ is the frequency of the microwave pulse, and $\mathbf{\tilde{B}}^{j}_{\mu\omega}(\vec{r},t)$ is the maximum amplitude of the magnetic component. Accordingly, $\mathbf{B}^{j}_r(\vec{r},t)$ is the magnetic component of the $j$th resonator mode. For a standing-wave resonator, $\mathbf{B}^{j}_r(\vec{r},t)=\mu_0\sqrt{2/V_j}\cos k_j z_j$ ($k_j$ is the wave number of the $j$th resonator; $V_j$ and $z_j$ are the $j$th resonator volume and resonator axis).
3. Construction of multi-qubit phase gate
In this section, we use the theoretical model to realize the multi-qubit phase gate and show detailed derivations of the effective Hamiltonians for achieving the two-qubit, three-qubit, and $n$-qubit phase gates.
3.1 Two qubit phase gate
Firstly, let us consider the case where the bus is coupled to two single-mode resonators. The bus possesses a level structure as shown in Fig. 2(a), that is, two excited levels $|e_1\rangle$ and $|e_2\rangle$ and two lowest levels $|g_1\rangle$ and $|g_2\rangle$. A classical field resonantly drives the transition between $|e_1\rangle$ and $|g_2\rangle$ with Rabi frequency $\Omega_1$, while the two resonators are coupled resonantly to the transitions $|g_1\rangle \leftrightarrow |e_1\rangle$ and $|g_2\rangle \leftrightarrow |e_2\rangle$ with coupling strengths $g_1$ and $g_2$, respectively. The Hamiltonian of the bus interacting with the two single-mode resonators can be written as
(4)$$\begin{aligned} H&=H_0+H_i,\\ H_0&=\omega_1 a^\dagger_1 a_1+\omega_2 a^\dagger_2 a_2+\sum_{l=g,e}\sum_{j=1,2}\omega_{lj}|l_j\rangle\langle l_j|,\\ H_i&=\sum^{2}_{n=1}g_na_n|e_n\rangle\langle g_n|+\Omega_1|e_1\rangle\langle g_2|e^{{-}i\omega_{L1}t}+\textrm{H.c.}, \end{aligned}$$
where $\omega_{lj}$ is the frequency of state $|l_j\rangle$ for the bus, $a_1$ ($a_2$) and $a^\dagger_1$ ($a^\dagger_2$) are the annihilation and creation operators of resonator 1 (resonator 2), respectively, and $\omega_{L1}$ is the classical field frequency. In the interaction picture with respect to the unitary transformation $\exp(-iH_0t)$, the interaction Hamiltonian can be written as
(5)$$H_{\textrm{I},2}=\sum^{2}_{n=1}g_na_n|e_n\rangle\langle g_n|+\Omega_1|e_1\rangle\langle g_2|+{\textrm H.c.},$$
for which we have considered the resonant conditions $\omega_{ej}-\omega_{gj}=\omega_{j}$ and $\omega_{e1}-\omega_{g2}=\omega_{L1}$. To consider the time evolution under different initial states, the bus is set to be in the $|g_1\rangle$ state initially and the two single-mode resonators in the Fock-state subspace {$|0\rangle_{R1}|0\rangle_{R2}$, $|0\rangle_{R1}|1\rangle_{R2}$, $|1\rangle_{R1}|0\rangle_{R2}$, $|1\rangle_{R1}|1\rangle_{R2}$}. For $|0\rangle_{R1}|0\rangle_{R2}$ or $|0\rangle_{R1}|1\rangle_{R2}$ being the initial state of the two single-mode resonators, the whole system undergoes no evolution: the transition between the bus and resonator 1 is prohibited because resonator 1 initially contains no photon for the bus to absorb for the excitation $|g_1\rangle \rightarrow |e_1\rangle$.
Fig. 2. (a) Schematic diagram of the level configuration for the bus interacting with two resonators. (b) Schematic diagram of the level configuration for the bus interacting with three resonators.
If the two single-mode resonators are in $|1\rangle _{R1}|0\rangle _{R2}$, the evolution of the system occurs in the finite subspace {$|\phi _1\rangle =|g_1\rangle |1\rangle _{R1}|0\rangle _{R2}, |\phi _2\rangle =|e_1\rangle |0\rangle _{R1}|0\rangle _{R2}, |\phi _3\rangle =|g_2\rangle |0\rangle _{R1}|0\rangle _{R2}$}. In the representation of the dressed states (i.e., eigenstates) of $H_{\Omega }=\Omega _1|\phi _2\rangle \langle \phi _3|+\textrm {H.c.}$, the Hamiltonian of the whole system becomes
(6)$$H_{\Pi,2}=\frac{g_1}{\sqrt{2}}\left(e^{i\Omega_1t}|\Phi_+\rangle+e^{{-}i\Omega_1t}|\Phi_-\rangle\right)\langle\phi_1|+\textrm{H.c.},$$
in which $|\Phi _{\pm }\rangle =(|\phi _2\rangle \pm |\phi _3\rangle )/\sqrt 2$ are the dressed states of $H_{\Omega }$ corresponding to the eigenvalues $\pm \Omega _1$. According to the Hamiltonian $H_{\Pi ,2}$, the large energy splitting between the dressed states $|\Phi _{\pm }\rangle$ leads to very slow energy exchange between $|\phi _1\rangle$ and $|\Phi _{\pm }\rangle$ under the parameter condition $|\Omega _{1}|\gg |g_1|,|g_2|$. Moreover, the Stark shifts of $|\phi _1\rangle$ induced by the two strongly dispersive interactions between $|\phi _1\rangle$ and $|\Phi _{\pm }\rangle$ cancel each other, so $H_{\Pi ,2}$ induces no effective evolution when $|1\rangle _{R1}|0\rangle _{R2}$ is the initial state.
When $|1\rangle _{R1}|1\rangle _{R2}$ is the initial state, the system evolves in the finite subspace {$|\varphi _1\rangle =|g_1\rangle |1\rangle _{R1}|1\rangle _{R2}$, $|\varphi _2\rangle =|e_1\rangle |0\rangle _{R1}|1\rangle _{R2}$, $|\varphi _3\rangle =|g_2\rangle |0\rangle _{R1}|1\rangle _{R2}$, $|\varphi _4\rangle =|e_2\rangle |0\rangle _{R1}|0\rangle _{R2}$}. Then, in the representation with respect to the dressed states of $H^{'}_{\Omega }=\Omega _1|\varphi _2\rangle \langle \varphi _3|+\textrm {H.c.}$, the Hamiltonian of the system reads
(7)$$\begin{aligned} H^{'}_{\Pi,2}&=\frac{g_1}{\sqrt2}(e^{i\Omega_1 t}|\Psi_+\rangle +e^{{-}i\Omega_1t}|\Psi_-\rangle ) \langle\varphi_1|\\ &+\frac{g_2}{\sqrt2}(e^{i\Omega_1t}|\Psi_+\rangle -e^{{-}i\Omega_1t}|\Psi_-\rangle )\langle\varphi_4|+\textrm{H.c.}, \end{aligned}$$
in which $|\Psi _\pm \rangle =(|\varphi _2\rangle \pm |\varphi _3\rangle )/\sqrt 2$ are the dressed states of $H^{'}_{\Omega }$ corresponding to the eigenvalues $\pm \Omega _1$. To simplify $H^{'}_{\Pi ,2}$ further, we impose the condition $|\Omega _1|\gg |g_1|,|g_2|$, under which the Hamiltonian of the system reduces to [55]
(8)$$H_{{\textrm eff}}=g_{{\textrm eff}}|\varphi_4\rangle\langle\varphi_1|+\textrm{H.c.}$$
with the effective coupling constant $g_{\textrm {eff}}=g_1g_2/\Omega _1$. Therefore, with the choice of the operation time duration $t_I=\pi /g_{\textrm {eff}}$, we can easily obtain
(9)$$\begin{aligned} |g_1\rangle|0\rangle_{R1}|0\rangle_{R2}&\rightarrow|g_1\rangle|0\rangle_{R1}|0\rangle_{R2},\\ |g_1\rangle|0\rangle_{R1}|1\rangle_{R2}&\rightarrow|g_1\rangle|0\rangle_{R1}|1\rangle_{R2},\\ |g_1\rangle|1\rangle_{R1}|0\rangle_{R2}&\rightarrow|g_1\rangle|1\rangle_{R1}|0\rangle_{R2},\\ |g_1\rangle|1\rangle_{R1}|1\rangle_{R2}&\rightarrow-|g_1\rangle|1\rangle_{R1}|1\rangle_{R2}. \end{aligned}$$
It is noted that the state $|g_1\rangle |1\rangle _{R1}|1\rangle _{R2}$ acquires a $\pi$-phase flip, while the other three states remain unchanged. We thus obtain a $\pi$-phase gate between the two single-mode resonators.
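To make the reduction from Eq. (5) to Eqs. (8)–(9) concrete, the following minimal Python sketch (our own illustration, not from the paper; NumPy and SciPy assumed) propagates the exact four-state subspace Hamiltonian spanned by $|\varphi _1\rangle ,\ldots ,|\varphi _4\rangle$ and checks that at $t_I=\pi /g_\textrm {eff}=\pi \Omega _1/(g_1g_2)$ the amplitude on $|\varphi _1\rangle$ is close to $-1$:

```python
import numpy as np
from scipy.linalg import expm

# All rates are angular frequencies in rad/us.
g1 = g2 = 2 * np.pi * 10.0    # g1 = g2 = 2*pi x 10 MHz
Om1 = 2 * np.pi * 200.0       # Omega1 = 2*pi x 200 MHz

# Exact Hamiltonian of Eq. (5) restricted to the chain
# |phi1> -g1- |phi2> -Om1- |phi3> -g2- |phi4>
H = np.array([[0.0, g1,  0.0, 0.0],
              [g1,  0.0, Om1, 0.0],
              [0.0, Om1, 0.0, g2 ],
              [0.0, 0.0, g2,  0.0]])

g_eff = g1 * g2 / Om1         # effective coupling of Eq. (8)
t_I = np.pi / g_eff           # pi-phase gate time of Eq. (9), ~1 us here

psi0 = np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)  # start in |phi1>
psi = expm(-1j * H * t_I) @ psi0

print("t_I = %.2f us" % t_I)
print("<phi1|psi(t_I)> =", np.round(psi[0], 4))  # close to -1: pi-phase flip
```

The residual deviation from $-1$ is of order $(g/\Omega _1)^2$, consistent with the dispersive approximation behind Eq. (8).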
3.2 Three-qubit phase gate
Now, we consider the quantum system with the bus interacting with three single-mode resonators. The Hamiltonian of the system in the interaction picture becomes
(10)$$H_\textrm{I,3}=\sum^{3}_{m=1}g_ma_m|e_m\rangle\langle g_m|+\sum^{2}_{n=1}\Omega_n|e_n\rangle\langle g_{n+1}|+\textrm{H.c.}$$
in which $g_m$ is the coupling strength between resonator $m$ and the bus transition $|e_m\rangle \leftrightarrow |g_m\rangle$, and $\Omega _n$ is the Rabi frequency of the $n$th classical field resonantly driving the bus transition $|e_n\rangle \leftrightarrow |g_{n+1}\rangle$.
In the following, we consider the time evolution under different initial states, with the bus initially in $|g_1\rangle$.
(1) If the initial state of the resonators is $|0\rangle _{R1}|0\rangle _{R2}|0\rangle _{R3}$, $|0\rangle _{R1}|1\rangle _{R2}|0\rangle _{R3}$, $|0\rangle _{R1}|0\rangle _{R2}|1\rangle _{R3}$ or $|0\rangle _{R1}|1\rangle _{R2}|1\rangle _{R3}$, there is no interaction between the resonator modes and the bus, because resonator 1 contains no photon to be absorbed by the bus; the whole system therefore does not evolve.
(2) For the initial state of the resonators being $|1\rangle _{R1}|0\rangle _{R2}|0\rangle _{R3}$, $|1\rangle _{R1}|1\rangle _{R2}|0\rangle _{R3}$ or $|1\rangle _{R1}|0\rangle _{R2}|1\rangle _{R3}$, the dynamics is analogous to the two-resonator case with the initial state $|1\rangle _{R1}|0\rangle _{R2}$. Therefore, the system again does not evolve.
(3) When the system is initially in $|g_1\rangle |1\rangle _{R1}|1\rangle _{R2}|1\rangle _{R3}$, it evolves in the finite subspace {$|\varphi ^{'}_1\rangle =|g_1\rangle |1\rangle _{R1}|1\rangle _{R2}|1\rangle _{R3}$, $|\varphi ^{'}_2\rangle =|e_1\rangle |0\rangle _{R1}|1\rangle _{R2}|1\rangle _{R3}$, $|\varphi ^{'}_3\rangle =|g_2\rangle |0\rangle _{R1}|1\rangle _{R2}|1\rangle _{R3}$, $|\varphi ^{'}_4\rangle =|e_2\rangle |0\rangle _{R1}|0\rangle _{R2}|1\rangle _{R3}$, $|\varphi ^{'}_5\rangle =|g_3\rangle |0\rangle _{R1}|0\rangle _{R2}|1\rangle _{R3}$, $|\varphi ^{'}_6\rangle =|e_3\rangle |0\rangle _{R1}|0\rangle _{R2}|0\rangle _{R3}$}. Under the condition $|\Omega _{1}|,|\Omega _{2}|\gg |g_1|, |g_2|, |g_3|$, the interaction Hamiltonian in the representation with respect to the dressed states of $H^{''}_{\Omega }=\Omega _1|\varphi ^{'}_2\rangle \langle \varphi ^{'}_3|+\Omega _2|\varphi ^{'}_4\rangle \langle \varphi ^{'}_5|+\textrm {H.c.}$, with the dressed states $|\psi _{\pm }\rangle =(|\varphi ^{'}_2\rangle \pm |\varphi ^{'}_3\rangle )/\sqrt 2$ and $|\psi ^{'}_{\pm }\rangle =(|\varphi ^{'}_4\rangle \pm |\varphi ^{'}_5\rangle )/\sqrt 2$, becomes
(11)$$\begin{aligned} H_{\Pi,3}&=\frac{g_1}{\sqrt2}e^{i\Omega_1t}\left(|\psi_+\rangle\langle\varphi^{'}_1|+|\varphi^{'}_1\rangle\langle\psi_-|\right)+\frac{g_2}{2}e^{i(\Omega_1-\Omega_2)t}\left(|\psi_+\rangle\langle \psi^{'}_+|-|\psi^{'}_-\rangle\langle\psi_-|\right)\\ &+\frac{g_2}{2}e^{i(\Omega_1+\Omega_2)t}\left(|\psi_+\rangle\langle \psi^{'}_-|-|\psi^{'}_+\rangle\langle\psi_-|\right) +\frac{g_3}{\sqrt2}e^{i\Omega_2t}\left(|\psi^{'}_+\rangle\langle\varphi^{'}_6|-|\varphi^{'}_6\rangle\langle\psi^{'}_-|\right)\\ &+\textrm{H.c}. \end{aligned}$$
According to $H_{\Pi ,3}$, the resonant transition $|\varphi ^{'}_1\rangle \leftrightarrow |\varphi ^{'}_6\rangle$ is mediated by $|\psi _{\pm }\rangle$ and $|\psi ^{'}_{\pm }\rangle$, and an effective coupling is obtained from the following third-order perturbation calculation [32], with the effective coupling strength labeled $g_\textrm {eff,3}$:
(12)$$\begin{aligned} &\frac{\langle \varphi^{'}_6|H_{\Pi,3}|\psi^{'}_+\rangle\langle \psi^{'}_+| H_{\Pi,3}|\psi_+\rangle\langle\psi_+|H_{\Pi,3}|\varphi^{'}_1\rangle}{\Omega_1\Omega_2}-\frac{\langle \varphi^{'}_6|H_{\Pi,3}|\psi^{'}_-\rangle\langle \psi^{'}_-| H_{\Pi,3}|\psi_+\rangle\langle\psi_+|H_{\Pi,3}|\varphi^{'}_1\rangle}{\Omega_1\Omega_2}\\ &-\frac{\langle \varphi^{'}_6|H_{\Pi,3}|\psi^{'}_+\rangle\langle \psi^{'}_+| H_{\Pi,3}|\psi_-\rangle\langle\psi_-|H_{\Pi,3}|\varphi^{'}_1\rangle}{\Omega_1\Omega_2}+\frac{\langle \varphi^{'}_6|H_{\Pi,3}|\psi^{'}_-\rangle\langle \psi^{'}_-| H_{\Pi,3}|\psi_-\rangle\langle\psi_-|H_{\Pi,3}|\varphi^{'}_1\rangle}{\Omega_1\Omega_2}\\ &=\frac{g_1g_2g_3}{\Omega_1\Omega_2}. \end{aligned}$$
However, the Stark shifts of $|\varphi ^{'}_1\rangle$ and $|\varphi ^{'}_6\rangle$ vanish, because the Stark shift from $|\psi _+\rangle$ ($|\psi ^{'}_+\rangle$) balances that from $|\psi _-\rangle$ ($|\psi ^{'}_-\rangle$), as can be verified by
(13)$$\frac{\langle \varphi^{'}_1|H_{\Pi,3}|\psi_+\rangle\langle \psi_+| H_{\Pi,3}|\varphi^{'}_1\rangle}{-\Omega_1}+\frac{\langle \varphi^{'}_1|H_{\Pi,3}|\psi_-\rangle\langle \psi_-| H_{\Pi,3}|\varphi^{'}_1\rangle}{\Omega_1}=0,$$
(14)$$\frac{\langle \varphi^{'}_6|H_{\Pi,3}|\psi^{'}_+\rangle\langle \psi^{'}_+| H_{\Pi,3}|\varphi^{'}_6\rangle}{-\Omega_2}+\frac{\langle \varphi^{'}_6|H_{\Pi,3}|\psi^{'}_-\rangle\langle \psi^{'}_-| H_{\Pi,3}|\varphi^{'}_6\rangle}{\Omega_2}=0.$$
Thus, when the system is initially in $|\varphi ^{'}_1\rangle$, we obtain the effective Hamiltonian of the whole system,
(15)$$H^{'}_\textrm{eff,3}=g_\textrm{eff,3}|\varphi^{'}_6\rangle\langle\varphi^{'}_1|+\textrm{H.c}.$$
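As a numerical cross-check of Eq. (15) (a sketch of ours, not part of the original analysis), one can diagonalize the exact six-state chain Hamiltonian spanned by $|\varphi ^{'}_1\rangle ,\ldots ,|\varphi ^{'}_6\rangle$: in the regime $|\Omega _1|,|\Omega _2|\gg |g_1|,|g_2|,|g_3|$, its two eigenvalues closest to zero approach $\pm g_\textrm {eff,3}$.

```python
import numpy as np

g1 = g2 = g3 = 2 * np.pi * 10.0    # rad/us
Om1 = Om2 = 2 * np.pi * 200.0      # rad/us

# Chain |phi'1> -g1- |phi'2> -Om1- |phi'3> -g2- |phi'4> -Om2- |phi'5> -g3- |phi'6>
off = [g1, Om1, g2, Om2, g3]
H = np.diag(off, 1) + np.diag(off, -1)

evals = np.linalg.eigvalsh(H)
g_eff3 = g1 * g2 * g3 / (Om1 * Om2)
print("two smallest |eigenvalues|:", np.round(np.sort(np.abs(evals))[:2], 4))
print("g_eff,3               =", np.round(g_eff3, 4))  # should nearly coincide
```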
If the system is in $|g_1\rangle |1\rangle _{R1}|1\rangle _{R2}|1\rangle _{R3}$, it undergoes a Rabi oscillation with the effective Rabi frequency $g_\textrm {eff,3}$, while the other states remain unchanged. Therefore, by selecting the operation time $t^{'}_{II}=\pi /g_\textrm {eff,3}$, we obtain
(16)$$|g_1\rangle|x\rangle_{R1}|y\rangle_{R2}|z\rangle_{R3}\rightarrow e^{ixyz\pi}|g_1\rangle|x\rangle_{R1}|y\rangle_{R2}|z\rangle_{R3},$$
in which $x, y, z=0,1$. This is a $\pi$-phase gate between three single-mode resonators.
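As a compact summary, Eq. (16) is a diagonal unitary on the three resonator qubits; a minimal sketch of its matrix form (our illustration, ordering $|xyz\rangle$ with $x$ the most significant bit) is:

```python
import numpy as np

# Diagonal three-qubit pi-phase gate of Eq. (16): |xyz> -> exp(i*pi*x*y*z)|xyz>.
# Only |111> acquires the phase e^{i*pi} = -1.
phases = [np.exp(1j * np.pi * x * y * z)
          for x in (0, 1) for y in (0, 1) for z in (0, 1)]
U = np.diag(phases)
print(np.real(np.diag(U)))   # [1, 1, 1, 1, 1, 1, 1, -1]
```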
3.3 Multi-qubit phase gate
Multi-qubit gates are essential for large-scale quantum computing and have many applications in QCQIP, so it is meaningful to construct the multi-qubit phase gate. In the following, we use the bus with the level configuration shown in Fig. 1(b), coupled to $n$ single-mode resonators. The interaction Hamiltonian of the whole system can be written as
(17)$$H_\textrm{I,n}=\sum^{n}_{i=1}g_ia_i|e_i\rangle\langle g_i|+\sum^{n-1}_{j=1}\Omega_{j}|e_{j}\rangle\langle g_{j+1}|+\textrm{H.c.},$$
in which $g_i$ is the coupling strength between resonator $i$ and the $|e_i\rangle \leftrightarrow |g_i\rangle$ transition of the bus, and $\Omega _j$ is the Rabi frequency of the $j$th classical field driving the $|e_j\rangle \leftrightarrow |g_{j+1}\rangle$ transition of the bus.
By applying the same conditions used above from Eq. (10) to Eq. (15), the effective Hamiltonian can be expressed as
(18)$$H_\textrm{eff,n}=g_\textrm{eff,n}|\psi_n\rangle\langle \psi_1|+\textrm{H.c}$$
where $g_\textrm {eff,n}=g_1g_2 \cdots g_n/(\Omega _1\Omega _2 \cdots \Omega _{n-1})$, $|\psi _{n}\rangle =|e_{n}\rangle _{bus}|0\rangle _{R1}|0\rangle _{R2} \cdots |0\rangle _{Rn}$ and $|\psi _{1}\rangle =|g_{1}\rangle _{bus}|1\rangle _{R1}\otimes |1\rangle _{R2} \cdots |1\rangle _{Rn}$. By choosing $t_n=\pi /g_\textrm {eff,n}$, we obtain a multi-qubit phase gate: only when the system is in $|\psi _{1}\rangle$ does it acquire a $\pi$-phase flip, while the other states remain unchanged.
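As a quick sanity check on the scaling implied by $g_\textrm {eff,n}$, the sketch below (our illustration; equal couplings and drives assumed) evaluates the gate time $t_n=\pi /g_\textrm {eff,n}$, which grows by a factor $\Omega /g$ for each added qubit:

```python
import numpy as np

def gate_time_us(n, g=2 * np.pi * 10.0, Om=2 * np.pi * 200.0):
    """t_n = pi / g_eff,n with g_eff,n = g^n / Om^(n-1); all rates in rad/us."""
    g_eff = g**n / Om**(n - 1)
    return np.pi / g_eff

for n in (2, 3, 4, 5):
    print(n, "qubits:", round(gate_time_us(n), 1), "us")
# 2 qubits: 1.0 us; 3 qubits: 20.0 us; each extra qubit costs a factor Om/g = 20
```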
4. Simulation and analysis
4.1 Application of pulse engineering in shaping coupling strength
In order to verify the validity of the quantum gate constructed in the present scheme, we take the three-qubit phase gate as an example. We numerically plot the evolution of the population and phase of the state $|\varphi ^{'}_1\rangle$ under the full Hamiltonian of Eq. (10). The coupling strength between the bus and the resonators is chosen to be approximately $2\pi \times 10$ MHz, which is easily achieved in experiment [56]. For convenience, we set $g_1/2\pi =g_2/2\pi =g_3/2\pi =10~$MHz. In a realistic situation, as Eq. (3) shows, this condition can be realized by adjusting the magnetic components of the three resonators, even when the resonators have largely different mode frequencies. To satisfy the condition $\{|\Omega _1|, |\Omega _2|\}\gg \{|g_1|, |g_2|, |g_3|\}$, we set $\Omega _1/2\pi =\Omega _2/2\pi =200~$MHz. As expected, the population of $|\varphi ^{'}_1\rangle$ is close to unity at the time $t^{'}_{II}$ ($20~\mu s$), with a perfect $\pi$-phase flip on $|\varphi ^{'}_1\rangle$. The results in Figs. 3(a) and 3(b) confirm the above theoretical analysis. This implementation of the three-qubit phase gate relies on a constant, rectangular coupling strength. As shown in Fig. 3(a), the population of $|\varphi ^{'}_1\rangle$ reaches nearly $1$ with a peak exactly at $t^{'}_{II}$ ($20~\mu s$), which means that the operating-time requirement is very demanding. Moreover, oscillations of the curve become conspicuous when the range $t\in [19,21]~\mu s$ is enlarged: the population shows an obvious decline from $99.8\%$ to $97.2\%$ as the operation time changes from $20~\mu s$ to $21~\mu s$.
Fig. 3. (a) Time evolution of population for $|\varphi ^{'}_1\rangle$. (b) Time evolution of phase for $|\varphi ^{'}_1\rangle$. Parameters: $g_1/2\pi =g_2/2\pi =g_3/2\pi =10~$MHz and $\Omega _1/2\pi =\Omega _2/2\pi =200~$MHz.
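The curves of Fig. 3 can be reproduced in outline by integrating the Schrödinger equation in the six-state subspace (a sketch under the stated parameters, not the authors' code):

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 2 * np.pi * 10.0     # rad/us, g1 = g2 = g3
Om = 2 * np.pi * 200.0   # rad/us, Omega1 = Omega2

off = [g, Om, g, Om, g]
H = np.diag(off, 1) + np.diag(off, -1)   # chain |phi'1> ... |phi'6>

def rhs(t, psi):
    # Schroedinger equation d|psi>/dt = -i H |psi| (hbar = 1)
    return -1j * (H @ psi)

psi0 = np.zeros(6, dtype=complex); psi0[0] = 1.0   # start in |phi'1>
t_II = np.pi * Om**2 / g**3                        # = 20 us for these numbers
sol = solve_ivp(rhs, (0.0, t_II), psi0,
                t_eval=np.linspace(0.0, t_II, 2001), rtol=1e-8, atol=1e-10)

amp1 = sol.y[0]                                    # amplitude on |phi'1>
print("population at t'_II:", abs(amp1[-1])**2)          # close to 1
print("phase at t'_II (units of pi):", np.angle(amp1[-1]) / np.pi)  # ~ +-1
```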
Seeking robust dynamics and a steady final quantum state, many efforts have been devoted to pulse engineering [36–40]. Inspired by pulse engineering, we replace the constant coupling strength with a shaped, time-dependent coupling strength in order to strengthen the robustness against control errors. Tunable coupling strengths have been widely used in circuit QED systems [41–52]. Recently, experiments demonstrated that coupling-strength tunability is suitable for quantum simulation of many-body physics [44,46,52]. For example, an experiment was performed with two qubits and one coplanar waveguide (CPW) resonator on a microchip [44]. The coupling strength between the CPW resonator and the qubits can be tuned by applying controlled voltage pulses, generated by an arbitrary waveform generator (AWG), to the flux bias line of each qubit, thereby adjusting the flux threading its SQUID loop individually [53]. In addition, a tunable coupling strength between two qubits can also be achieved in two different ways [46,52]. In the first scheme, full tunability of the coupling strength is achieved by parametrically modulating the qubits, that is, one of the qubits is biased by an ac magnetic flux so that its frequency is periodically modulated [46]. In the second scheme, the strength of the indirect coupling between a pair of nearest-neighbor qubits is adjusted by changing the coupler frequency through an additional on-chip bias line, giving a net zero qubit-qubit coupling at a specific flux bias [52].
For the present scheme, the shape of the coupling strength can be engineered by using an AWG to tune the flux pulses threading the resonators, which correspond to $\mathbf {B}^{j}_r(\vec {r},t)$, the magnetic component of the $j$th resonator mode in Eq. (3). As long as the coupling strength satisfies $\int ^{t^{'}_{II}}_{0}g_\textrm {eff,3}\,dt=\pi$, its shape can be chosen freely; here we choose a single-period $\cos$-like function [setting $g_1=g_2=g_3=g(t)$]
(19)$$g(t)=\frac{g_m}{2}\left[1-\cos\left(\frac{2\pi t}{t^{'}_{II}}+\frac{\pi}{3}\right)\right]$$
where $g_m$ is the maximum amplitude.
Based on $g(t)$ in Eq. (19), the operation time can be derived as $t^{'}_{II}=16\pi \Omega _1\Omega _2/5g^{3}_m$. We then plot the time evolutions of the population and phase of the state $|\varphi ^{'}_1\rangle$ using the single-period $\cos$-like coupling strength with $g_m/2\pi =10~$MHz in Figs. 4(a) and 4(b). The population not only reaches unity at the time $t^{'}_{II}$ but also remains stationary during $t\in [45,65]~\mu s$; in addition, there is no oscillation once the population of $|\varphi ^{'}_1\rangle$ reaches nearly unity. By designing the coupling strength, the scheme for realizing the three-qubit phase gate thus loosens the demand on the operation time.
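The quoted gate time follows from the pulse-area condition $\int _0^{t^{'}_{II}}g(t)^3/(\Omega _1\Omega _2)\,dt=\pi$: averaging $[1-\cos u]^3$ over one period gives $5/2$, so $\int _0^{T}g^3\,dt=(5/16)g_m^3T$ and $T=16\pi \Omega _1\Omega _2/(5g_m^3)$. A quick numerical confirmation (our sketch, with the parameters used here):

```python
import numpy as np
from scipy.integrate import quad

gm = 2 * np.pi * 10.0           # rad/us
Om1 = Om2 = 2 * np.pi * 200.0   # rad/us

T = 16 * np.pi * Om1 * Om2 / (5 * gm**3)   # claimed t'_II
g = lambda t: 0.5 * gm * (1 - np.cos(2 * np.pi * t / T + np.pi / 3))

area, _ = quad(lambda t: g(t)**3 / (Om1 * Om2), 0.0, T)
print("T = %.1f us" % T)            # ~ 64 us
print("area / pi =", area / np.pi)  # ~ 1.0, confirming the area condition
```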
Fig. 4. (a) Time evolution of population for $|\varphi ^{'}_1\rangle$ with single-period $\cos$-like function $g(t)$. (b) Time evolution of phase for $|\varphi ^{'}_1\rangle$ with $g(t)$. Parameters: $g_m/2\pi =10~$MHz and $\Omega _1/2\pi =\Omega _2/2\pi =200~$MHz.
To illustrate the validity and robustness of the scheme with the engineered coupling strength, we define the fidelity of the $\pi$-phase gate as $F(t)=|\langle \Psi _t|\Psi (t)\rangle |^2$, where $|\Psi _t\rangle$ is the target state after the $\pi$-phase gate acts on a general initial state $|\Psi (0)\rangle =|g_1\rangle _{bus}\otimes (\cos \alpha |0\rangle _{R1}+\sin \alpha |1\rangle _{R1})\otimes (\cos \beta |0\rangle _{R2}+\sin \beta |1\rangle _{R2})\otimes (\cos \gamma |0\rangle _{R3}+\sin \gamma |1\rangle _{R3})$ with $\alpha , \beta , \gamma \in [0,2\pi )$, and $|\Psi (t)\rangle$ is the state of the system at time $t$, obtained by solving the Schrödinger equation with the initial state $|\Psi (0)\rangle$. We plot the time evolution of the gate fidelity in Fig. 5 for the two cases of coupling strength and five different initial states. As shown in Fig. 5(a), with the constant coupling strength the $\pi$-phase gate can be achieved with fidelity near unity at the final operating time for all five initial states. However, magnifying the range $t\in [18,20]~\mu s$ reveals obvious oscillations of the fidelity between $0.95$ and $1.00$: the required operation time is extremely rigid, and the robustness of the constant-coupling scheme needs improvement. In Fig. 5(b), by contrast, the fidelity remains above $0.999$ once the operation time satisfies $t\geq 39~\mu s$ with the engineered coupling strength. We also plot the fidelity of the five initial states with the $y$-coordinate set to $\log _{10}(1-F)$, enlarging the range $t\in [40,42]~\mu s$; the fidelity fluctuation is confined approximately between $0.999$ and $0.9999$, which proves the validity and robustness of the scheme and loosens the requirement on the operation time. Without loss of generality, we also consider unequal coupling strengths, {$g_1/2\pi =10$ MHz, $g_2/2\pi =11$ MHz, $g_3/2\pi =15$ MHz} in Fig. 5(c) and {$g_1/2\pi =10$ MHz, $g_2/2\pi =11$ MHz, $g_m/2\pi =15$ MHz, $g_3=g(t)$} in Fig. 5(d). Clearly, Figs. 5(c) and 5(d) show the same behavior as Figs. 5(a) and 5(b).
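For reference, the target state $|\Psi _t\rangle$ entering $F(t)$ is simply the initial resonator state with the sign of its $|1\rangle _{R1}|1\rangle _{R2}|1\rangle _{R3}$ component flipped; a minimal sketch of its construction (our illustration; the bus factor $|g_1\rangle$ is omitted since it is unchanged):

```python
import numpy as np

def target_state(alpha, beta, gamma):
    """|Psi_t>: apply the three-qubit pi-phase gate of Eq. (16) to the
    resonator part of |Psi(0)>."""
    q = lambda th: np.array([np.cos(th), np.sin(th)])      # cos|0> + sin|1>
    psi0 = np.kron(np.kron(q(alpha), q(beta)), q(gamma))   # 8-dim product state
    phase = np.ones(8); phase[7] = -1.0                    # flip |111> only
    return phase * psi0

# Angles from the Fig. 6 caption
psi_t = target_state(0.24 * np.pi, 0.49 * np.pi, 0.40 * np.pi)
print(np.round(psi_t, 3))
```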
Fig. 5. (a) and (c): Time evolution of the $\pi$-gate fidelity with the constant coupling strength for five different cases of the initial state $|\Psi (0)\rangle$. (b) and (d): Time evolution of the $\pi$-gate fidelity with the shaped coupling strength for five different cases of the initial state $|\Psi (0)\rangle$.
4.2 Effect of finite coherence time
We now briefly discuss the effect of a finite coherence time on the scheme, taking into account the decoherence induced by the dissipation of the system on the final gate fidelity. The decoherence mechanisms arise through two dominant channels: (i) energy relaxation of the excited states of the bus; (ii) decay of the three resonators with their individual decay rates $\kappa _1, \kappa _2$ and $\kappa _3$. The dynamics of the lossy system is governed by the Markovian master equation
(20)$$\begin{aligned} \frac{d\rho}{dt}&=-i\Big[H_{II},~\rho\Big]+\sum^{3}_{i=1}\kappa_i D[a_i]+\gamma_{g_1,e_1}D[\sigma_1]+\gamma_{g_2,e_1}D[\sigma_2]\\ &+\gamma_{g_2,e_2}D[\sigma_3]+\gamma_{g_3,e_2}D[\sigma_4]+\gamma_{g_3,e_3}D[\sigma_5], \end{aligned}$$
where $\rho$ is the density operator of the system, $\sigma _1=|g_1\rangle \langle e_1|$, $\sigma _2=|g_2\rangle \langle e_1|$, $\sigma _3=|g_2\rangle \langle e_2|$, $\sigma _4=|g_3\rangle \langle e_2|$, $\sigma _5=|g_3\rangle \langle e_3|$, and $D[A]=A\rho A^{\dagger}-A^{\dagger}A\rho /2-\rho A^{\dagger}A/2$ with $A=a_i$ or $\sigma _j$. The parameters $\gamma _{g_1,e_1}, \gamma _{g_2,e_1}, \gamma _{g_2,e_2}, \gamma _{g_3,e_2},$ and $\gamma _{g_3,e_3}$ are the energy relaxation rates of the excited states $|e_1\rangle$, $|e_2\rangle$ and $|e_3\rangle$ of the bus. For convenience, we assume $\kappa _1=\kappa _2=\kappa _3=\kappa$ and $\gamma _{g_1,e_1}=\gamma _{g_2,e_1}=\gamma _{g_2,e_2}=\gamma _{g_3,e_2}=\gamma _{g_3,e_3}=\gamma _b$.
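The master equation can be integrated with standard open-system tools. Below is a minimal QuTiP sketch of Eq. (20) (our own illustration, not the authors' code), using the interaction Hamiltonian of Eq. (10), a two-level truncation of each resonator, the bus levels ordered $|g_1\rangle ,|e_1\rangle ,|g_2\rangle ,|e_2\rangle ,|g_3\rangle ,|e_3\rangle$, and coherence times assumed to be 5 ms:

```python
import numpy as np
from qutip import basis, tensor, destroy, qeye, mesolve

g = 2 * np.pi * 10.0; Om = 2 * np.pi * 200.0   # rad/us
kappa = 1.0 / 5000.0                           # resonator decay, tau_R = 5 ms
gamma_b = 1.0 / 5000.0                         # bus relaxation, tau_b = 5 ms

Nb, Nr = 6, 2                                  # bus levels, resonator truncation
ib, ir = qeye(Nb), qeye(Nr)
def bus_op(m, n):                              # |m><n| on the bus
    return basis(Nb, m) * basis(Nb, n).dag()

a = destroy(Nr)
a1 = tensor(ib, a, ir, ir)
a2 = tensor(ib, ir, a, ir)
a3 = tensor(ib, ir, ir, a)

# bus ordering: 0=|g1>, 1=|e1>, 2=|g2>, 3=|e2>, 4=|g3>, 5=|e3>
H = (g * a1 * tensor(bus_op(1, 0), ir, ir, ir)    # g1 a1 |e1><g1|
     + g * a2 * tensor(bus_op(3, 2), ir, ir, ir)  # g2 a2 |e2><g2|
     + g * a3 * tensor(bus_op(5, 4), ir, ir, ir)  # g3 a3 |e3><g3|
     + Om * tensor(bus_op(1, 2), ir, ir, ir)      # Omega1 |e1><g2|
     + Om * tensor(bus_op(3, 4), ir, ir, ir))     # Omega2 |e2><g3|
H = H + H.dag()

c_ops = [np.sqrt(kappa) * op for op in (a1, a2, a3)]
for (m, n) in [(0, 1), (2, 1), (2, 3), (4, 3), (4, 5)]:  # sigma_1 ... sigma_5
    c_ops.append(np.sqrt(gamma_b) * tensor(bus_op(m, n), ir, ir, ir))

psi0 = tensor(basis(Nb, 0), basis(Nr, 1), basis(Nr, 1), basis(Nr, 1))  # |g1,111>
t_II = np.pi * Om**2 / g**3
result = mesolve(H, psi0, np.linspace(0.0, t_II, 400), c_ops,
                 e_ops=[psi0.proj()])
# Close to 1, reduced slightly by the finite coherence times
print("final population of |g1,111>:", result.expect[0][-1])
```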
The fidelity of the gate operation is given by $F=|\langle \Psi _{id}|\rho |\Psi _{id}\rangle |$, where $|\Psi _{id}\rangle$ is the output state of an ideal system without dissipation of the bus or decay of the resonators. We numerically simulate the gate fidelity by solving the master equation, Eq. (20). Figure 6 shows the relationship between the fidelity $F$ and the coherence times, namely the bus coherence time $\tau _b \equiv \gamma _b^{-1}$ and the resonator coherence time $\tau _R \equiv \kappa ^{-1}$. As the coherence time increases, the gate fidelity increases. Comparing the two lines, the fidelity is insensitive to the bus coherence time ($\tau _b$), remaining nearly constant and close to unity once $\tau _b \geq 1$ ms; the resonator coherence time ($\tau _R$) clearly has a greater influence on the fidelity. When $\tau _b=5$ ms ($\tau _R=5$ ms), the fidelity reaches $0.9983$ ($0.9836$). At a $10$ ms coherence time of the bus and of the resonators, the fidelities are $0.9985$ and $0.9909$, respectively. Note that we have chosen an initial state whose fidelity is close to zero at the initial time [cf. Fig. 5(b)]; for other initial states, or for the average fidelity, the values would be higher. In experiment, by designing a $\pi$-phase difference across the Josephson junction of the circuit so as to suppress the energy relaxation induced by quasiparticle dissipation, a SQUID with a coherence time over 1 ms can be obtained [57]. Regarding the resonators, the coherence time of photons in a resonator can be much longer [58]; superconducting resonator lifetimes between 1 ms and 10 ms have been reported [17–19].
Fig. 6. Effect of the finite coherence times of the system on the final gate fidelity for the initial state $|\Psi (0)\rangle$ with $\alpha =0.24\pi$, $\beta =0.49\pi$ and $\gamma =0.40\pi$.
5. Suppression of unwanted transitions
The bus coupled to $n$ resonators has a complex, multi-level structure, which may cause unwanted transitions. In this section, for simplicity, we take the two-qubit phase gate as an example to show how unwanted transitions can be suppressed.
5.1 Sufficiently large differences among transition frequencies
In the above, we proposed realizing multi-qubit gates using resonance conditions. If the $n$th resonator is resonantly coupled to the $|g_n\rangle \leftrightarrow |e_n\rangle$ transition of the bus but strongly decoupled from $|g_j\rangle \leftrightarrow |e_j\rangle$ ($\{j,n\}\in \{1,2,3,\cdots \}$, $j\neq n$), and the classical fields resonantly drive the transitions $|e_n\rangle \leftrightarrow |g_{n+1}\rangle$ while being highly detuned from transitions between any two irrelevant levels, the probability of unwanted transitions will be negligible. Note that these conditions can be satisfied by prior adjustment of the level spacings of the bus and/or the frequencies of the resonators. For a superconducting bus, the level spacings can be readily adjusted by changing the external flux applied to the SQUID loop [59–61]. In addition, the frequency of a microwave resonator can be adjusted rapidly, within a few nanoseconds [62,63].
In order to explore sufficiently large differences among the transition frequencies in the four-level system of Fig. 2(a) for the two-qubit phase gate, we set $\omega _{e_1,g_2}-\omega _{g_1,e_1}=\omega _{g_2,e_2}-\omega _{e_1,g_2}=\delta$, with $\omega _{g_1,e_1}$, $\omega _{e_1,g_2}$, and $\omega _{g_2,e_2}$ being the transition frequencies of $|g_1\rangle \leftrightarrow |e_1\rangle$, $|e_1\rangle \leftrightarrow |g_2\rangle$, and $|g_2\rangle \leftrightarrow |e_2\rangle$, respectively. To suppress the unwanted transitions, $\delta$ should be as large as possible. To investigate numerically the effect of $\delta$ on the scheme, we consider the whole Hamiltonian including the unwanted transitions, $\tilde {H}=H_\textrm {I,2}+H_\textrm {un}$, with the desired part $H_\textrm {I,2}$ of Eq. (5) and the unwanted part
(21)$$\begin{aligned} H_\textrm{un}&=\Big(\Omega_1e^{i\delta t}+g_2e^{2i\delta t}a^{{\dagger}}_2\Big)|g_1\rangle\langle e_1|+\Big(g_1e^{i\delta t}a_1+g_2e^{{-}i\delta t}a_2\Big)|e_1\rangle\langle g_2|\\ &+\Big(\Omega_1e^{{-}i\delta t}+g_1e^{{-}2i\delta t}a^{{\dagger}}_1\Big)|g_2\rangle\langle e_2|+\textrm{H.c.} \end{aligned}$$
According to the Schrödinger equation with the Hamiltonian $\tilde {H}=H_\textrm {I,2}+H_\textrm {un}$, Fig. 7(a) shows the final fidelity of the two-qubit gate as a function of $\delta$, for which we choose the initial state $|\psi (0)\rangle =|g_1\rangle _{bus}\otimes (\cos \theta |0\rangle _{R1}+\sin \theta |1\rangle _{R1})\otimes (\cos \eta |0\rangle _{R2}+\sin \eta |1\rangle _{R2})$ with $\theta =0.24\pi$ and $\eta =0.49\pi$. In Fig. 7(a), we also simulate separately the effects of the unwanted transitions induced by the quantum fields of the resonators and by the classical field. We learn from Fig. 7(a) that the unwanted transitions induced by the quantum fields have only a minor influence on the fidelity, and their effect can be neglected when $\delta /2\pi >1$ GHz. The unwanted transitions induced by the classical field play the major role in degrading the gate fidelity, since the dotted green line and the solid red line in Fig. 7(a) essentially coincide. Overall, the gate fidelity is very close to unity under the condition $\delta /2\pi >20$ GHz. Accordingly, in Fig. 7(b) we pick $\delta /2\pi =25$ GHz and plot the evolution of the gate fidelity: the two-qubit $\pi$-phase gate is finally achieved with a fidelity near unity (over $0.9975$). Therefore, suitable differences among the transition frequencies can be chosen to suppress the unwanted transitions.
Fig. 7. (a) Final fidelity of the two-qubit gate with varying $\delta$ at the time $t=\pi \Omega _1/g_1g_2$. (b) Fidelity evolution of the two-qubit gate with $\delta /2\pi =25$ GHz. $g_1/2\pi =10$ MHz, $g_2/2\pi =11$ MHz, and $\Omega _1/2\pi =200$ MHz.
5.2 Optimized detuning compensation for suppressing unwanted transitions
Now we introduce the method of detuning compensation for suppressing unwanted transitions when $\delta$ is not large enough. Since the unwanted transitions induced by the quantum fields in the resonators hardly affect the gate fidelity when $\delta /2\pi >1$ GHz, here we consider only those induced by the classical field. There exist two unwanted transitions induced by the classical field, $|g_1\rangle \leftrightarrow |e_1\rangle$ and $|g_2\rangle \leftrightarrow |e_2\rangle$. When $\delta \gg \Omega _1$, these two unwanted transitions cause second-order Stark shifts of the four levels $|g_1\rangle$, $|e_1\rangle$, $|g_2\rangle$, and $|e_2\rangle$, respectively, given by
(22)$$\delta_1=\frac{\Omega_1^2}{\delta},\quad \delta_2={-}\frac{\Omega_1^2}{\delta},\quad \delta_3={-}\frac{\Omega_1^2}{\delta},\quad \delta_4=\frac{\Omega_1^2}{\delta}.$$
These Stark shifts turn the three desired resonant transitions, $|g_1\rangle \leftrightarrow |e_1\rangle$, $|e_1\rangle \leftrightarrow |g_2\rangle$, and $|g_2\rangle \leftrightarrow |e_2\rangle$ described by Eq. (5), into off-resonant transitions with detunings $|\delta _1-\delta _2|$, $|\delta _2-\delta _3|$, and $|\delta _3-\delta _4|$, respectively. Therefore, in order to compensate for the Stark shifts, we can introduce opposite detunings and modify Eq. (5) into
(23)$$H'_{\textrm{I},2}=g_1e^{i\delta_{12}t}a^{{\dagger}}_1|g_1\rangle\langle e_1|+\Omega_1e^{i\delta_{23} t}|e_1\rangle\langle g_2|+g_2e^{i\delta_{34}t}a^{{\dagger}}_2|g_2\rangle\langle e_2|+\textrm{H.c.}$$
In order to eliminate the second-order Stark shifts in Eq. (22), one should choose
(24)$$\delta_{12}=\delta_2-\delta_1,\quad\delta_{23}=\delta_3-\delta_2,\quad\delta_{34}=\delta_4-\delta_3.$$
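With the stated parameters, the second-order shifts and the compensating detunings are easily tabulated (a sketch of ours; note that $\delta _{23}=0$ because the shifts of $|e_1\rangle$ and $|g_2\rangle$ coincide):

```python
import numpy as np

Om1 = 2 * np.pi * 0.2     # angular GHz: Omega1/2pi = 200 MHz
delta = 2 * np.pi * 10.0  # angular GHz: delta/2pi = 10 GHz

s = Om1**2 / delta        # second-order Stark shift magnitude, Eq. (22)
d1, d2, d3, d4 = s, -s, -s, s
d12, d23, d34 = d2 - d1, d3 - d2, d4 - d3   # compensating detunings, Eq. (24)
for name, val in [("delta_12", d12), ("delta_23", d23), ("delta_34", d34)]:
    print(name, "=", round(val / (2 * np.pi) * 1e3, 2), "MHz")  # value/2pi
# delta_12 = -8 MHz, delta_23 = 0 MHz, delta_34 = +8 MHz
```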
However, since the effective coupling strength $g_1g_2/\Omega _1$ is small, higher-order Stark shifts may also influence the gate fidelity, so the relations in Eq. (24) may not be sufficiently accurate. It is therefore necessary to seek an optimized detuning compensation for suppressing the unwanted transitions. Here we use a numerical search algorithm to optimize the detunings in Eq. (24). We first change the parameters in Eq. (22) into
(25)$$\delta_1=\frac{\Omega_1^2}{\delta},\quad \delta_2= D_2,\quad \delta_3= D_3,\quad \delta_4=\frac{\Omega_1^2}{\delta}.$$
Then we search numerically for suitable values of $D_2$ and $D_3$ by scanning the range around $-{\Omega _1^2}/{\delta }$, so that the detunings defined in Eq. (24) ensure a high-fidelity gate. For instance, with $\delta /2\pi =10$ GHz, which is not large enough to ensure a high-fidelity gate [see Fig. 7(a)], and the Hamiltonian $\tilde {H}^{'}=H'_\textrm {I,2}+H_\textrm {un}$, in Fig. 8(a) we simulate the final ($t=\pi \Omega _1/g_1g_2$) fidelity of the two-qubit phase gate as a function of $D_2$ and $D_3$; preliminary runs of the numerical search algorithm provided the rough ranges of $D_2$ and $D_3$ that ensure high fidelity. There exists a region that guarantees a high-fidelity (over 0.9975) two-qubit gate. Further, in Fig. 8(b) we compare the plain detuning compensation with the optimized detuning compensation by plotting the fidelity evolutions of the two-qubit phase gate. The gate fidelity for the optimized detuning compensation finally exceeds 0.9975, indicating that the unwanted transitions have been efficiently suppressed. Similar processing can be applied to the multi-qubit phase gates.
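The optimization itself can be as simple as a grid scan. The sketch below assumes a user-supplied simulator `gate_fidelity(D2, D3)` (a hypothetical helper that would integrate $\tilde {H}^{'}=H'_\textrm {I,2}+H_\textrm {un}$ and return the final fidelity); only the scanning logic is shown:

```python
import numpy as np

def gate_fidelity(D2, D3):
    """Hypothetical placeholder: integrate H'_I,2 + H_un with the detunings
    of Eqs. (24)-(25) and return the final two-qubit gate fidelity."""
    raise NotImplementedError  # replace with an actual simulator

# Scan a +-5 MHz window around the second-order estimate -Omega1^2/delta
base = -(2 * np.pi * 0.2)**2 / (2 * np.pi * 10.0)            # angular GHz
scan = base + 2 * np.pi * 1e-3 * np.linspace(-5.0, 5.0, 41)  # steps of 0.25 MHz
F = np.array([[gate_fidelity(D2, D3) for D3 in scan] for D2 in scan])
i, j = np.unravel_index(np.argmax(F), F.shape)
print("best (D2, D3)/2pi in MHz:",
      scan[i] / (2 * np.pi) * 1e3, scan[j] / (2 * np.pi) * 1e3)
```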
Fig. 8. (a) Final fidelity of the two-qubit phase gate with varying $D_2$ and $D_3$ at the time $t=\pi \Omega _1/g_1g_2$. (b) Fidelity evolutions of the two-qubit phase gate with the detuning compensation and the optimized detuning compensation with $D_2/2\pi =-2$ MHz and $D_3/2\pi =-5.4$ MHz. $\delta /2\pi =10$ GHz. $g_1/2\pi =10$ MHz, $g_2/2\pi =11$ MHz, and $\Omega _1/2\pi =200$ MHz.
In summary, we have presented a scheme to realize multi-qubit phase gates on multiple resonators mediated by a superconducting bus in a circuit QED system. Quantum information is loaded onto the multiple single-mode resonators, and the superconducting bus mediates the interaction among them. By introducing a shaped coupling strength, we improve the fidelity and robustness of the phase gate at the cost of a longer gate time. The effect of dissipation-induced decoherence on the fidelity is taken into account, and the results show that the scheme is very robust against the energy relaxation of the bus while being relatively sensitive to the decay of the resonators. In addition, we propose an optimized detuning compensation method to suppress unwanted transitions.
National Natural Science Foundation of China (11675046); Program for Innovation Research of Science in Harbin Institute of Technology (A201412); Postdoctoral Scientific Research Developmental Fund of Heilongjiang Province (LBH-Q15060).
1. A. Barenco, C. H. Bennett, R. Cleve, D. P. DiVincenzo, N. Margolus, P. Shor, T. Sleator, J. A. Smolin, and H. Weinfurter, "Elementary gates for quantum computation," Phys. Rev. A 52(5), 3457–3467 (1995). [CrossRef]
2. B. Ye, Z. F. Zheng, and C.-P. Yang, "Multiplex-controlled phase gate with qubits distributed in a multicavity system," Phys. Rev. A 97(6), 062336 (2018). [CrossRef]
3. C.-P. Yang, Y.-X. Liu, and F. Nori, "Phase gate of one qubit simultaneously controlling n qubits in a cavity," Phys. Rev. A 81(6), 062323 (2010). [CrossRef]
4. C. Jones, "Composite Toffoli gate with two-round error detection," Phys. Rev. A 87(5), 052334 (2013). [CrossRef]
5. J.-X. Han, J.-L. Wu, Y. Wang, Y. Xia, J. Song, and Y.-Y. Jiang, "Constructing multi-target controlled phase gate in circuit QED and its applications," Europhys. Lett. 127(5), 50002 (2019). [CrossRef]
6. M. Sasura and V. Buzek, "Multiparticle entanglement with quantum logic networks: Application to cold trapped ions," Phys. Rev. A 64(1), 012305 (2001). [CrossRef]
7. P. W. Shor, "Scheme for reducing decoherence in quantum computer memory," Phys. Rev. A 52(4), R2493–R2496 (1995). [CrossRef]
8. A. M. Steane, "Error correcting codes in quantum theory," Phys. Rev. Lett. 77(5), 793–797 (1996). [CrossRef]
9. L. K. Grover, "Quantum computers can search rapidly by using almost any transformation," Phys. Rev. Lett. 80(19), 4329–4332 (1998). [CrossRef]
10. S. L. Braunstein, V. Bužek, and M. Hillery, "Quantum-information distributors: Quantum network for symmetric and asymmetric cloning in arbitrary dimension and continuous limit," Phys. Rev. A 63(5), 052313 (2001). [CrossRef]
11. C.-P. Yang, S.-I. Chu, and S. Han, "Possible realization of entanglement, logical gates, and quantum-information transfer with superconducting-quantum-interference-device qubits in cavity QED," Phys. Rev. A 67(4), 042311 (2003). [CrossRef]
12. A. Blais, R.-S. Huang, A. Wallraff, S. M. Girvin, and R. J. Schoelkopf, "Cavity quantum electrodynamics for superconducting electrical circuits: An architecture for quantum computation," Phys. Rev. A 69(6), 062320 (2004). [CrossRef]
13. J. Clarke and F. K. Wilhelm, "Superconducting quantum bits," Nature 453(7198), 1031–1042 (2008). [CrossRef]
14. J. Q. You and F. Nori, "Atomic physics and quantum optics using superconducting circuits," Nature 474(7353), 589–597 (2011). [CrossRef]
15. C.-P. Yang, S.-I. Chu, and S. Han, "Quantum information transfer and entanglement with SQUID qubits in cavity QED: A dark-state scheme with tolerance for nonuniform device parameter," Phys. Rev. Lett. 92(11), 117902 (2004). [CrossRef]
16. T. Niemczyk, F. Deppe, H. Huebl, E. P. Menzel, F. Hocke, M. J. Schwarz, J. J. Garcia-Ripoll, D. Zueco, T. Hümmer, E. Solano, A. Marx, and R. Gross, "Circuit quantum electrodynamics in the ultrastrong-coupling regime," Nat. Phys. 6(10), 772–776 (2010). [CrossRef]
17. M. Reagor, H. Paik, G. Catelani, L. Sun, C. Axline, E. Holland, I. M. Pop, N. A. Masluk, T. Brecht, L. Frunzio, M. H. Devoret, L. Glazman, and R. J. Schoelkopf, "Reaching 10 ms single photon lifetimes for superconducting aluminum cavities," Appl. Phys. Lett. 102(19), 192604 (2013). [CrossRef]
18. M. Reagor, W. Pfaff, C. Axline, R. W. Heeres, N. Ofek, K. Sliwa, E. Holland, C. Wang, J. Blumoff, K. Chou, M. J. Hatridge, L. Frunzio, M. H. Devoret, L. Jiang, and R. J. Schoelkopf, "Quantum memory with millisecond coherence in circuit QED," Phys. Rev. B 94(1), 014506 (2016). [CrossRef]
19. C. Axline, M. Reagor, R. Heeres, P. Reinhold, C. Wang, K. Shain, W. Pfaff, Y. Chu, L. Frunzio, and R. J. Schoelkopf, "An architecture for integrating planar and 3D cQED devices," Appl. Phys. Lett. 109(4), 042601 (2016). [CrossRef]
20. H. Paik, D. I. Schuster, L. S. Bishop, G. Kirchmair, G. Catelani, A. P. Sears, B. R. Johnson, M. J. Reagor, L. Frunzio, L. I. Glazman, S. M. Girvin, M. H. Devoret, and R. J. Schoelkopf, "Observation of high coherence in josephson junction qubits measured in a three-dimensional circuit QED architecture," Phys. Rev. Lett. 107(24), 240501 (2011). [CrossRef]
21. M. Stern, G. Catelani, Y. Kubo, C. Grezes, A. Bienfait, D. Vion, D. Esteve, and P. Bertet, "Flux qubits with long coherence times for hybrid quantum circuits," Phys. Rev. Lett. 113(12), 123601 (2014). [CrossRef]
22. M. J. Peterer, S. J. Bader, X. Jin, F. Yan, A. Kamal, T. J. Gudmundsen, P. J. Leek, T. P. Orlando, W. D. Oliver, and S. Gustavsson, "Coherence and decay of higher energy levels of a superconducting transmon qubit," Phys. Rev. Lett. 114(1), 010501 (2015). [CrossRef]
23. C.-P. Yang and S.-Y. Han, "n-qubit-controlled phase gate with superconducting quantum-interference devices coupled to a resonator," Phys. Rev. A 72(3), 032311 (2005). [CrossRef]
24. C.-P. Yang and S.-Y. Han, "Realization of an n-qubit controlled-U gate with superconducting quantum interference devices or atoms in cavity QED," Phys. Rev. A 73(3), 032317 (2006). [CrossRef]
25. H.-F. Wang, A.-D. Zhu, and S. Zhang, "One-step implementation of a multiqubit phase gate with one control qubit and multiple target qubits in coupled cavities," Opt. Lett. 39(6), 1489–1492 (2014). [CrossRef]
26. C.-P. Yang, Q.-P. Su, F.-Y. Zhang, and S.-B. Zheng, "Single-step implementation of a multiple-target-qubit controlled phase gate without need of classical pulses," Opt. Lett. 39(11), 3312–3315 (2014). [CrossRef]
27. T. Liu, B.-Q. Guo, C.-S. Yu, and W.-N. Zhang, "One-step implementation of a hybrid fredkin gate with quantum memories and single superconducting qubit in circuit QED and its applications," Opt. Express 26(4), 4498–4511 (2018). [CrossRef]
28. B. Ye, Z.-F. Zheng, and C.-P. Yang, "Multiplex-controlled phase gate with qubits distributed in a multicavity system," Phys. Rev. A 97(6), 062336 (2018). [CrossRef]
29. J. Q. You and F. Nori, "Quantum information processing with superconducting qubits in a microwave field," Phys. Rev. B 68(6), 064509 (2003). [CrossRef]
30. H. Wang, M. Hofheinz, J. Wenner, M. Ansmann, R. C. Bialczak, M. Lenander, E. Lucero, M. Neeley, A. D. O'Connell, D. Sank, M. Weides, A. N. Cleland, and J. M. Martinis, "Improving the coherence time of superconducting coplanar resonators," Appl. Phys. Lett. 95(23), 233508 (2009). [CrossRef]
31. Y.-F. Xiao, X.-B. Zou, Z.-F. Han, and G.-C. Guo, "Quantum phase gate in an optical cavity with atomic cloud," Phys. Rev. A 74(4), 044303 (2006). [CrossRef]
32. X.-Q. Shao, H.-F. Wang, L. Chen, S. Zhang, Y.-F. Zhao, and K.-H. Yeon, "Three-qubit phase gate on three modes of a cavity," Opt. Commun. 282(23), 4643–4646 (2009). [CrossRef]
33. B. Ye, Z.-F. Zheng, Y. Zhang, and C.-P. Yang, "Circuit QED: single-step realization of a multiqubit controlled phase gate with one microwave photonic qubit simultaneously controlling n − 1 microwave photonic qubits," Opt. Express 26(23), 30689–30702 (2018). [CrossRef]
34. Y.-J. Fan, Z.-F. Zheng, Y. Zhang, D.-M. Lu, and C.-P. Yang, "One-step implementation of a multi-target-qubit controlled phase gate with cat-state qubits in circuit QED," Front. Phys. 14(2), 21602 (2018). [CrossRef]
35. C.-P. Yang and Z.-F. Zheng, "Deterministic generation of Greenberger–Horne–Zeilinger entangled states of cat-state qubits in circuit QED," Opt. Lett. 43(20), 5126–5129 (2018). [CrossRef]
36. S. J. Glaser, U. Boscain, T. Calarco, C. P. Koch, W. Köckenberger, R. Kosloff, I. Kuprov, B. Luy, S. Schirmer, T. Schulte-Herbrüggen, D. Sugny, and F. K. Wilhelm, "Training Schrödinger's cat: quantum optimal control strategic report on current status, visions and goals for research in Europe," Eur. Phys. J. D 69(12), 279 (2015). [CrossRef]
37. Y.-H. Kang, Y.-H. Chen, Z.-C. Shi, B.-H. Huang, J. Song, and Y. Xia, "Pulse design for multilevel systems by utilizing Lie transforms," Phys. Rev. A 97(3), 033407 (2018). [CrossRef]
38. J. L. Wu and S. L. Su, "Universal speeded-up adiabatic geometric quantum computation in three-level systems via counterdiabatic driving," J. Phys. A: Math. Theor. 52(33), 335301 (2019). [CrossRef]
39. I. Medina and F. L. Semião, "Pulse engineering for population control under dephasing and dissipation," Phys. Rev. A 100(1), 012103 (2019). [CrossRef]
40. B.-J. Liu, X.-K. Song, Z.-Y. Xue, X. Wang, and M.-H. Yung, "Plug-and-play approach to nonadiabatic geometric quantum gates," Phys. Rev. Lett. 123(10), 100501 (2019). [CrossRef]
41. A. A. Houck, H. E. Türeci, and J. Koch, "On-chip quantum simulation with superconducting circuits," Nat. Phys. 8(4), 292–299 (2012). [CrossRef]
42. I. M. Georgescu, S. Ashhab, and F. Nori, "Quantum simulation," Rev. Mod. Phys. 86(1), 153–185 (2014). [CrossRef]
43. Y.-X. Liu, C.-X. Yang, H.-C. Sun, and X.-B. Wang, "Coexistence of single- and multi-photon processes due to longitudinal couplings between superconducting flux qubits and external fields," New J. Phys. 16(1), 015031 (2014). [CrossRef]
44. Y. Salathé, M. Mondal, M. Oppliger, J. Heinsoo, P. Kurpiers, A. Potočnik, A. Mezzacapo, U. Las Heras, L. Lamata, E. Solano, S. Filipp, and A. Wallraff, "Digital quantum simulation of spin models with circuit quantum electrodynamics," Phys. Rev. X 5(2), 021027 (2015). [CrossRef]
45. M. Roth, M. Ganzhorn, N. Moll, S. Filipp, G. Salis, and S. Schmidt, "Analysis of a parametrically driven exchange-type gate and a two-photon excitation gate between superconducting qubits," Phys. Rev. A 96(6), 062323 (2017). [CrossRef]
46. X. Li, Y. Ma, J. Han, T. Chen, Y. Xu, W. Cai, H. Wang, Y. P. Song, Z.-Y. Xue, Z.-Q. Yin, and L. Sun, "Perfect quantum state transfer in a superconducting qubit chain with parametrically tunable couplings," Phys. Rev. Appl. 10(5), 054009 (2018). [CrossRef]
47. T. Chen and Z. Y. Xue, "Nonadiabatic geometric quantum computation with parametrically tunable coupling," Phys. Rev. Appl. 10(5), 054051 (2018). [CrossRef]
48. Y. Wu, L.-P. Yang, M. Gong, Y. Zheng, H. Deng, Z. Yan, Y. Zhao, K. Huang, A. D. Castellano, W. J. Munro, K. Nemoto, D.-N. Zheng, C. P. Sun, Y.-X. Liu, X. Zhu, and L. Lu, "An efficient and compact switch for quantum circuits," npj Quantum Inf. 4(1), 50 (2018). [CrossRef]
49. S. A. Caldwell, N. Didier, C. A. Ryan, E. A. Sete, A. Hudson, P. Karalekas, R. Manenti, M. P. da Silva, R. Sinclair, E. Acala, N. Alidoust, J. Angeles, A. Bestwick, M. Block, B. Bloom, A. Bradley, C. Bui, L. Capelluto, R. Chilcott, J. Cordova, G. Crossman, M. Curtis, S. Deshpande, T. El Bouayadi, D. Girshovich, S. Hong, K. Kuang, M. Lenihan, T. Manning, A. Marchenkov, J. Marshall, R. Maydra, Y. Mohan, W. O'Brien, C. Osborn, J. Otterbach, A. Papageorge, J.-P. Paquette, M. Pelstring, A. Polloreno, G. Prawiroatmodjo, V. Rawat, M. Reagor, R. Renzas, N. Rubin, D. Russell, M. Rust, D. Scarabelli, M. Scheer, M. Selvanayagam, R. Smith, A. Staley, M. Suska, N. Tezak, D. C. Thompson, T.-W. To, M. Vahidpour, N. Vodrahalli, T. Whyland, K. Yadav, W. Zeng, and C. Rigetti, "Parametrically activated entangling gates using transmon qubits," Phys. Rev. Appl. 10(3), 034050 (2018). [CrossRef]
50. M. Reagor, C. B. Osborn, N. Tezak, A. Staley, G. Prawiroatmodjo, M. Scheer, N. Alidoust, E. A. Sete, N. Didier, M. P. d. Silva, E. Acala, J. Angeles, A. Bestwick, M. Block, B. Bloom, A. Bradley, C. Bui, S. Caldwell, L. Capelluto, R. Chilcott, J. Cordova, G. Crossman, M. Curtis, S. Deshpande, T. E. Bouayadi, D. Girshovich, S. Hong, A. Hudson, P. Karalekas, K. Kuang, M. Lenihan, R. Manenti, T. Manning, J. Marshall, Y. Mohan, W. Brien, J. Otterbach, A. Papageorge, J.-P. Paquette, M. Pelstring, A. Polloreno, V. Rawat, C. A. Ryan, R. Renzas, N. Rubin, D. Russel, M. Rust, D. Scarabelli, M. Selvanayagam, R. Sinclair, R. Smith, M. Suska, T. -W. To, M. Vahidpour, N. Vodrahalli, T. Whyland, K. Yadav, W. Zeng, and C. T. Rigetti, "Demonstration of universal parametric entangling gates on a multi-qubit lattice," Sci. Adv. 4(2), eaao3603 (2018). [CrossRef]
51. O. Kyriienko and A. S. Sørensen, "Floquet quantum simulation with superconducting qubits," Phys. Rev. Appl. 9(6), 064029 (2018). [CrossRef]
52. F. Arute, K. Arya, R. Babbush, D. Bacon, J. C. Bardin, R. Barends, R. Biswas, S. Boixo, Fernando G. S. L. Brandao, D. A. Buell, B. Burkett, Y. Chen, Z. Chen, B. Chiaro, R. Collins, W. Courtney, A. Dunsworth, E. Farhi, B. Foxen, A. Fowler, C. Gidney, M. Giustina, R. Graff, K. Guerin, S. Habegger, M. P. Harrigan, M. J. Hartmann, A. Ho, M. Hoffmann, T. Huang, T. S. Humble, S. V. Isakov, E. Jeffrey, Z. Jiang, D. Kafri, K. Kechedzhi, J. Kelly, P. V. Klimov, S. Knysh, A. Korotkov, F. Kostritsa, D. Landhuis, M. Lindmark, E. Lucero, D. Lyakh, S. Mandrá, J. R. McClean, M. McEwen, A. Megrant, X. Mi, K. Michielsen, M. Mohseni, J. Mutus, O. Naaman, M. Neeley, C. Neill, M. Y. Niu, E. Ostby, A. Petukhov, J. C. Platt, C. Quintana, E. G. Rieffel, P. Roushan, N. C. Rubin, D. Sank, K. J. Satzinger, V. Smelyanskiy, K. J. Sung, M. D. Trevithick, A. Vainsencher, B. Villalonga, T. White, Z. J. Yao, P. Yeh, A. Zalcman, H. Neven, and J. M. Martinis, "Quantum supremacy using a programmable superconducting processor," Nature 574(7779), 505–510 (2019). [CrossRef]
53. L. DiCarlo, J. M. Chow, J. M. Gambetta, Lev S. Bishop, B. R. Johnson, D. I. Schuster, J. Majer, A. Blais, L. Frunzio, S. M. Girvin, and R. J. Schoelkopf, "Demonstration of two-qubit algorithms with a superconducting quantum processor," Nature 460(7252), 240–244 (2009). [CrossRef]
54. C.-P. Yang, S. Chu, and S. y. Han, "Possible realization of entanglement, logical gates, and quantum-information transfer with superconducting-quantum-interference-device qubits in cavity QED," Phys. Rev. A 67(4), 042311 (2003). [CrossRef]
55. D. F. V. James and J. Jerke, "Effective Hamiltonian theory and its applications in quantum information," Can. J. Phys. 85(6), 625–632 (2007). [CrossRef]
56. P. Mundada, G. Zhang, T. Hazard, and A. Houck, "Suppression of qubit crosstalk in a tunable coupling superconducting circuit," Phys. Rev. Appl. 12(5), 054023 (2019). [CrossRef]
57. I. M. Pop, K. Geerlings, G. Catelani, R. J. Schoelkopf, L. I. Glazman, and M. H. Devoret, "Coherent suppression of electromagnetic dissipation due to superconducting quasiparticles," Nature 508(7496), 369–372 (2014). [CrossRef]
58. X. Gu, A. F. Kockum, A. Miranowicz, Y. X. Liu, and F. Nori, "Microwave photonics with superconducting quantum circuits," Phys. Rep. 718-719, 1–102 (2017). [CrossRef]
59. M. Neeley, M. Ansmann, R. C. Bialczak, M. Hofheinz, N. Katz, E. Lucero, A. O'Connell, H. Wang, A. N. Cleland, and J. M. Martinis, "Process tomography of quantum memory in a Josephson-phase qubit coupled to a two-level state," Nat. Phys. 4(7), 523–526 (2008). [CrossRef]
60. J. Q. You and F. Nori, "Superconducting circuits and quantum information," Phys. Today 58(11), 42–47 (2005). [CrossRef]
61. C.-P. Yang, S.-B. Zheng, and F. Nori, "Multiqubit tunable phase gate of one qubit simultaneously controlling n qubits in a cavity," Phys. Rev. A 82(6), 062326 (2010). [CrossRef]
62. M. Sandberg, C. M. Wilson, F. Persson, T. Bauch, G. Johansson, V. Shumeiko, T. Duty, and P. Delsing, "Tuning the field in a microwave resonator faster than the photon life time," Appl. Phys. Lett. 92(20), 203501 (2008). [CrossRef]
63. Z. L. Wang, Y. P. Zhong, L. J. He, H. Wang, J. M. Martinis, A. N. Cleland, and Q. W. Xie, "Quantum state characterization of a fast tunable superconducting resonator," Appl. Phys. Lett. 102(16), 163503 (2013). [CrossRef]
(1) $H_{\mathrm{bus}}=\frac{Q^{2}}{2C}+\frac{(\Phi-\Phi_{x})^{2}}{2L}-E_{J}\cos\frac{2\pi\Phi}{\Phi_{0}}$

(2) $V=V_{0}\left\{\frac{1}{2}\left[\frac{2\pi(\Phi-\Phi_{x})}{\Phi_{0}}\right]^{2}-\frac{E_{J}}{V_{0}}\cos\left(\frac{2\pi\Phi}{\Phi_{0}}\right)\right\}$

(3) $\Omega_{j}=\frac{1}{2L\hbar}\langle g_{j+1}|\Phi|e_{j}\rangle\int_{S}\vec{B}_{\mu\omega_{j}}(\vec{r},t)\cdot d\vec{S},\quad g_{j}=\frac{1}{L}\sqrt{\frac{\omega_{j}}{2\mu_{0}\hbar}}\langle g_{j}|\Phi|e_{j}\rangle\int_{S}\vec{B}_{r_{j}}(\vec{r},t)\cdot d\vec{S}$

(4) $H=H_{0}+H_{i},\quad H_{0}=\omega_{1}a_{1}^{\dagger}a_{1}+\omega_{2}a_{2}^{\dagger}a_{2}+\sum_{l=g,e}\sum_{j=1,2}\omega_{l_{j}}|l_{j}\rangle\langle l_{j}|,\quad H_{i}=\sum_{n=1}^{2}g_{n}a_{n}|e_{n}\rangle\langle g_{n}|+\Omega_{1}|e_{1}\rangle\langle g_{2}|e^{-i\omega_{L1}t}+\mathrm{H.c.}$

(5) $H_{I,2}=\sum_{n=1}^{2}g_{n}a_{n}|e_{n}\rangle\langle g_{n}|+\Omega_{1}|e_{1}\rangle\langle g_{2}|+\mathrm{H.c.}$

(6) $H_{\Pi,2}=\frac{g_{1}}{\sqrt{2}}\left(e^{i\Omega_{1}t}|\Phi_{+}\rangle+e^{-i\Omega_{1}t}|\Phi_{-}\rangle\right)\langle\phi_{1}|+\mathrm{H.c.}$

(7) $H'_{\Pi,2}=\frac{g_{1}}{\sqrt{2}}\left(e^{i\Omega_{1}t}|\Psi_{+}\rangle+e^{-i\Omega_{1}t}|\Psi_{-}\rangle\right)\langle\varphi_{1}|+\frac{g_{2}}{\sqrt{2}}\left(e^{i\Omega_{1}t}|\Psi_{+}\rangle-e^{-i\Omega_{1}t}|\Psi_{-}\rangle\right)\langle\varphi_{4}|+\mathrm{H.c.}$

(8) $H_{\mathrm{eff}}=g_{\mathrm{eff}}|\varphi_{4}\rangle\langle\varphi_{1}|+\mathrm{H.c.}$

(9) $|g_{1}\rangle|0\rangle_{R_{1}}|0\rangle_{R_{2}}\rightarrow|g_{1}\rangle|0\rangle_{R_{1}}|0\rangle_{R_{2}},\quad|g_{1}\rangle|0\rangle_{R_{1}}|1\rangle_{R_{2}}\rightarrow|g_{1}\rangle|0\rangle_{R_{1}}|1\rangle_{R_{2}},\quad|g_{1}\rangle|1\rangle_{R_{1}}|0\rangle_{R_{2}}\rightarrow|g_{1}\rangle|1\rangle_{R_{1}}|0\rangle_{R_{2}},\quad|g_{1}\rangle|1\rangle_{R_{1}}|1\rangle_{R_{2}}\rightarrow-|g_{1}\rangle|1\rangle_{R_{1}}|1\rangle_{R_{2}}$

(10) $H_{I,3}=\sum_{m=1}^{3}g_{m}a_{m}|e_{m}\rangle\langle g_{m}|+\sum_{n=1}^{2}\Omega_{n}|e_{n}\rangle\langle g_{n+1}|+\mathrm{H.c.}$

(11) $H_{\Pi,3}=\frac{g_{1}}{\sqrt{2}}e^{i\Omega_{1}t}\left(|\psi_{+}\rangle\langle\varphi'_{1}|+|\varphi'_{1}\rangle\langle\psi_{-}|\right)+\frac{g_{2}}{2}e^{i(\Omega_{1}-\Omega_{2})t}\left(|\psi_{+}\rangle\langle\psi'_{+}|-|\psi'_{-}\rangle\langle\psi_{-}|\right)+\frac{g_{2}}{2}e^{i(\Omega_{1}+\Omega_{2})t}\left(|\psi_{+}\rangle\langle\psi'_{-}|-|\psi'_{+}\rangle\langle\psi_{-}|\right)+\frac{g_{3}}{\sqrt{2}}e^{i\Omega_{2}t}\left(|\psi'_{+}\rangle\langle\varphi'_{6}|-|\varphi'_{6}\rangle\langle\psi'_{-}|\right)+\mathrm{H.c.}$

(12) $\frac{\langle\varphi'_{6}|H_{\Pi,3}|\psi'_{+}\rangle\langle\psi'_{+}|H_{\Pi,3}|\psi_{+}\rangle\langle\psi_{+}|H_{\Pi,3}|\varphi'_{1}\rangle}{\Omega_{1}\Omega_{2}}-\frac{\langle\varphi'_{6}|H_{\Pi,3}|\psi'_{-}\rangle\langle\psi'_{-}|H_{\Pi,3}|\psi_{+}\rangle\langle\psi_{+}|H_{\Pi,3}|\varphi'_{1}\rangle}{\Omega_{1}\Omega_{2}}-\frac{\langle\varphi'_{6}|H_{\Pi,3}|\psi'_{+}\rangle\langle\psi'_{+}|H_{\Pi,3}|\psi_{-}\rangle\langle\psi_{-}|H_{\Pi,3}|\varphi'_{1}\rangle}{\Omega_{1}\Omega_{2}}+\frac{\langle\varphi'_{6}|H_{\Pi,3}|\psi'_{-}\rangle\langle\psi'_{-}|H_{\Pi,3}|\psi_{-}\rangle\langle\psi_{-}|H_{\Pi,3}|\varphi'_{1}\rangle}{\Omega_{1}\Omega_{2}}=\frac{g_{1}g_{2}g_{3}}{\Omega_{1}\Omega_{2}}$

(13) $\frac{\langle\varphi'_{1}|H_{\Pi,3}|\psi_{+}\rangle\langle\psi_{+}|H_{\Pi,3}|\varphi'_{1}\rangle}{-\Omega_{1}}+\frac{\langle\varphi'_{1}|H_{\Pi,3}|\psi_{-}\rangle\langle\psi_{-}|H_{\Pi,3}|\varphi'_{1}\rangle}{\Omega_{1}}=0$

(14) $\frac{\langle\varphi'_{6}|H_{\Pi,3}|\psi'_{+}\rangle\langle\psi'_{+}|H_{\Pi,3}|\varphi'_{6}\rangle}{-\Omega_{2}}+\frac{\langle\varphi'_{6}|H_{\Pi,3}|\psi'_{-}\rangle\langle\psi'_{-}|H_{\Pi,3}|\varphi'_{6}\rangle}{\Omega_{2}}=0$

(15) $H'_{\mathrm{eff},3}=g_{\mathrm{eff},3}|\varphi'_{6}\rangle\langle\varphi'_{1}|+\mathrm{H.c.}$

(16) $|g_{1}\rangle|x\rangle_{R_{1}}|y\rangle_{R_{2}}|z\rangle_{R_{3}}\rightarrow e^{ixyz\pi}|g_{1}\rangle|x\rangle_{R_{1}}|y\rangle_{R_{2}}|z\rangle_{R_{3}}$

(17) $H_{I,n}=\sum_{i=1}^{n}g_{i}a_{i}|e_{i}\rangle\langle g_{i}|+\sum_{j=1}^{n-1}\Omega_{j}|e_{j}\rangle\langle g_{j+1}|+\mathrm{H.c.}$

(18) $H_{\mathrm{eff},n}=g_{\mathrm{eff},n}|\psi_{n}\rangle\langle\psi_{1}|+\mathrm{H.c.}$

(19) $g(t)=\frac{g_{m}}{2}\left[1-\cos\left(\frac{2\pi t}{t_{II'}}+\frac{\pi}{3}\right)\right]$

(20) $\frac{d\rho}{dt}=-i[H_{II},\rho]+\sum_{i=1}^{3}\kappa_{i}\mathcal{D}[a_{i}]+\gamma_{g_{1},e_{1}}\mathcal{D}[\sigma_{1}]+\gamma_{g_{2},e_{1}}\mathcal{D}[\sigma_{2}]+\gamma_{g_{2},e_{2}}\mathcal{D}[\sigma_{3}]+\gamma_{g_{3},e_{2}}\mathcal{D}[\sigma_{4}]+\gamma_{g_{3},e_{3}}\mathcal{D}[\sigma_{5}]$

(21) $H_{\mathrm{un}}=\left(\Omega_{1}e^{i\delta t}+g_{2}e^{2i\delta t}a_{2}^{\dagger}\right)|g_{1}\rangle\langle e_{1}|+\left(g_{1}e^{i\delta t}a_{1}+g_{2}e^{-i\delta t}a_{2}\right)|e_{1}\rangle\langle g_{2}|+\left(\Omega_{1}e^{-i\delta t}+g_{1}e^{-2i\delta t}a_{1}^{\dagger}\right)|g_{2}\rangle\langle e_{2}|+\mathrm{H.c.}$

(22) $\delta_{1}=\frac{\Omega_{1}^{2}}{\delta},\quad\delta_{2}=-\frac{\Omega_{1}^{2}}{\delta},\quad\delta_{3}=-\frac{\Omega_{1}^{2}}{\delta},\quad\delta_{4}=\frac{\Omega_{1}^{2}}{\delta}$

(23) $H'_{I,2}=g_{1}e^{i\delta_{12}t}a_{1}^{\dagger}|g_{1}\rangle\langle e_{1}|+\Omega_{1}e^{i\delta_{23}t}|e_{1}\rangle\langle g_{2}|+g_{2}e^{i\delta_{34}t}a_{2}^{\dagger}|g_{2}\rangle\langle e_{2}|+\mathrm{H.c.}$

(24) $\delta_{12}=\delta_{2}-\delta_{1},\quad\delta_{23}=\delta_{3}-\delta_{2},\quad\delta_{34}=\delta_{4}-\delta_{3}$

(25) $\delta_{1}=\frac{\Omega_{1}^{2}}{\delta},\quad\delta_{2}=D_{2},\quad\delta_{3}=D_{3},\quad\delta_{4}=\frac{\Omega_{1}^{2}}{\delta}$
Strategies to improve the growth and homogeneity of growing-finishing pigs: feeder space and feeding management
Sergi López-Vergé ORCID: orcid.org/0000-0002-7499-60451,
Josep Gasa1,
Déborah Temple1,
Jordi Bonet2,
Jaume Coma2 &
David Solà-Oriol1
The aim was to test two strategies to improve the growth rate of slow-growing pigs and to increase the batch's homogeneity at slaughter. In Trial 1, a total of 264 weaned piglets were distributed into 24 pens (11 piglets/pen) according to sex and initial body weight (BW) for the transition period (T; 28 d to 64 d). During the T period, a commercial lidded feeder hopper was used (3.7 pigs/feeder space). When moving to the growing facilities, the 24 pens were maintained and split into two groups of 12 according to sex and feeder type: high density (HD, 5.5 pigs/feeder space) or low density (LD, 2.2 pigs/feeder space). In Trial 2, a total of 1067 piglets were classified, when leaving the nursery at 63 d of age, as Heavy (Hp, n = 524) or Light (Lp, n = 543) pigs. Along the growing period, Hp pigs and half of the Lp pigs were fed four consecutive feeds following a standard feeding program (Std). The other half of the Lp pigs were fed according to a budget approach, changing the first three feeds on the basis of an equivalent feed consumption instead of age (Sp).
In Trial 1, higher BW at d 154 (82.1 kg vs. 80.2 kg; P = 0.02) and ADG (725 g/d vs. 704 g/d; P = 0.02), as well as a lower number of lesions (P < 0.05), were observed for pigs raised in the LD treatment compared to the HD treatment. The CV of the final BW was numerically lower for the LD treatment. In Trial 2, higher BW and ADG and a lower CV were observed for the LSp pigs compared to LStd from 83 d to 163 d of age (P < 0.001). Moreover, an interaction observed for carcass weight at slaughter (P = 0.016) showed that the Sp pigs had a higher carcass weight than did the Std pigs, and the difference increased as the emptying of the barn facility advanced.
It is concluded that feeder space and feeding management may affect the growth of growing-finishing pigs and body-weight homogeneity at the end of the period.
Background

The growing-fattening phase is the most expensive period of the pig's life, accounting for 65% of the total cost of producing a pig of 109 kg body weight (BW) [1]. During growing-fattening, feed represents 50.6% of the total cost, or 66.2% of the variable cost. An important factor affecting growing-finishing profit is the variability of BW at slaughter. Market body weight variability may reduce the value of carcasses by modifying their quality classification and quotation, and it increases the occupation time of the facilities. Pigs with slow growth within a batch are thus usually responsible for an inefficient use of the growing and fattening facilities [2]. Consequently, the search for strategies to reduce body weight variability in the pig industry is an area where more research is needed, especially strategies that are easy to implement under commercial conditions; feeder space and feed management are among them. A possible way to minimize BW variability relies on feeder space and design, because feeders are the tool through which pigs access the diets formulated to meet their nutrient requirements [3]. The feeder may therefore affect the performance, growth and homogeneity of pigs. Another strategy to maximize the performance of the lightest piglets relies on feeding programs. These programs usually comprise different feeds (from one or two to more than six) throughout the growing-finishing period [4,5,6]. Moreover, standard growing-fattening feeding programs usually treat all animals of a batch as a unit and change from one feed specification to the next on a fixed day, although other approaches may be implemented, such as grouping the pigs by size and changing the first feeds on the basis of an equivalent feed consumption instead of age. Therefore, exploring different multi-phase feeding strategies may lead to differences in growth rate and variability. The objective of the present work was to assess the effect of feeder space and feeding management on the growth rate and homogeneity of pigs during the growing-finishing period.
Methods

Two trials (Trial 1 and Trial 2) were conducted on two different farms located in Catalonia (Spain). Trial 1 focused on the effect of feeder space, and Trial 2 on the effect of feeding management during the growing-finishing period.
In Trial 1, weaned piglets (28 days of age) were allocated to the nursery of a sow-nursery commercial facility (up to 64 days of age). In Trial 2, after weaning at about 21 days, the nursery period took place on another sow-nursery commercial farm until 63 days of age. In both trials, pigs were then moved to two different external growing-finishing farms until slaughter. No health problems were observed in either herd during the two trials.
Animals, housing, management and diets
In Trial 1, a total of 264 crossbred entire male and female piglets [Pietrain × (Landrace × Large White)], weaned at 28 days of age, were distributed when moving to the nursery (from 28 to 64 days of age) into 24 pens (11 piglets/pen) according to sex and initial body weight at weaning, and were individually identified by ear tags.
All animals were obtained from a commercial farm of approximately 350 Landrace × Large White sows (Hermitage, Gepork; Spain). All piglets were vaccinated against circovirus and mycoplasma before weaning and against Aujeszky's disease during the growing-fattening period. The nursery facility comprised 24 pens (11 piglets/pen) and was equipped with central heating, forced ventilation with a cooling system and fully slatted plastic floors. Each pen was equipped with a nipple drinker and a commercial feeder hopper with 3 feeder spaces, equivalent to 3.7 pigs per feeder space. Thereafter, the animals were moved to an external growing-finishing facility, where the nursery pens were maintained (11 pigs/pen) and split into two groups of 12 pens per feeder-space treatment according to sex and BW. Two commercial concrete feeder hoppers were used: one with 2 feeder spaces, allowing 5.5 pigs per space ("High Density", HD), and one with 5 feeder spaces, allowing 2.2 pigs per space ("Low Density", LD). Each pen was also equipped with a nipple drinker to guarantee free access to water. The dimensions of each pen exceeded the minimum space per piglet/pig set by European legislation based on live weight (Council Directive 2008/120/EC of December 2008). The growing-finishing facility had natural ventilation and fully slatted concrete floors.
For Trial 2, a total of 1067 entire male and female crossbred piglets [Pietrain × (Landrace × Large White)] from the same farrowing batch were used and monitored until slaughter. Piglets were individually identified by ear tags at birth. All animals were obtained from a commercial farm of approximately 500 Landrace × Large White sows (Hypor, Hendrix-Genetics; Netherlands). Immediately after weaning, pigs were transferred to a nursery site where they were distributed into four rooms of 12 pens (22 piglets/pen) according to sex and initial BW. Each pen was equipped with a nipple drinker and a commercial feeder hopper (5 feeder spaces, equivalent to 4.4 pigs per feeder space). The nursery facility was equipped with central heating, forced ventilation with a cooling system and fully slatted plastic floors. In the growing-finishing facilities, all pigs were immediately re-grouped into 80 pens (13 pigs/pen) according to sex and two BW categories: Heavy (Hp, n = 524, BW = 22.88 ± 3.48 kg) and Light (Lp, n = 543, BW = 18.43 ± 4.18 kg) pigs (40 pens per BW category). The 80 pens were distributed into four lines of 20 pens separated by two corridors in a single fattening room (40 pens/corridor). Along the growing period, Hp pigs and half of the Lp pigs were fed four consecutive feeds (Table 1) following a standard feeding program (standard, Std). Alternatively, the other half of the Lp pigs were fed "by budget" (Fig. 1), changing the first three feeds on the basis of an equivalent feed consumption instead of age (specific, Sp). Each pen was equipped with a single-space growth feeder with a nipple inside and an additional water drinker to guarantee free access to feed and water. The dimensions of each pen provided the minimum space per pig set by European legislation based on live weight (Council Directive 2008/120/EC of December 2008). The growing-finishing facility had natural ventilation and fully slatted concrete floors.
Table 1 Summary of the multi-phase diets offered to the animals for Trials 1 and 2
Diagram of the two feeding programs tested (Std vs Sp) during Trial 2
All diets were offered ad libitum, in mash (Trial 1) or pelleted (Trial 2) form, and were formulated to meet or slightly exceed the FEDNA nutrient requirements [7]. The diets offered in the two trials are summarized in Table 1.
Body weight recording
In both trials, pigs were individually weighed throughout the production cycle, starting at the exit of the nursery (64 days old) and finishing at day 154 (Trial 1) or the day before each group of animals was sent to slaughter once they reached their market BW, fixed at 105 kg (Trial 2). Thus, the data (average BW, ADG or CV) at day 154 (Trial 1) or day 163 (Trial 2) included all the pigs in both trials. BW was recorded with a Veserkal Utilcell SWIFT scale. Pigs were weighed at day 64 (36 d post-weaning) and at 92, 121 and 154 days of age in Trial 1, and at day 64 and every three weeks until the finishing barn was emptied in Trial 2 (up to 5 times). In all cases, selection for slaughter was performed by picking the animals that had reached their slaughter weight (105 kg) the day before slaughtering and fasting them overnight. The same procedure was conducted two or three more times until the finishing barn was emptied.
Lesion scoring
In Trial 1, skin lesions were evaluated individually in each pen on day 74 (10 days after entry to the fattening unit) and day 115, following the three-point scale described in the WQ® protocol for growing pigs on farm [8]. Pigs were encouraged to stand up in order to make the body more clearly visible. One side of the pig's body was inspected visually for the presence of scratches, considering five separate regions: i) ears, ii) front (head to back of shoulder), iii) middle (back of shoulder to hindquarters), iv) hindquarters, and v) legs (from the accessory digit upwards). The tail zone was not evaluated. Animals were considered moderately wounded when presenting more than four scratches in any region of the body. Animals were considered severely wounded when presenting more than ten scratches on at least two body regions, or any region with more than 15 scratches. Only scratches longer than 2 cm were considered. The percentage of moderately or severely wounded pigs was expressed over the total number of pigs housed in each pen.
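As a minimal sketch of how this scoring rule could be coded in SAS, assuming a hypothetical dataset scratch_counts with one row per pig and the scratch counts of the five body regions stored in variables n1-n5 (all names are illustrative):

data lesion_scores;
  set scratch_counts;                       /* hypothetical input: n1-n5 = scratches per body region */
  array n[5] n1-n5;
  moderate = 0; severe = 0; regions_gt10 = 0;
  do i = 1 to 5;
    if n[i] > 4  then moderate = 1;         /* more than four scratches in any region */
    if n[i] > 10 then regions_gt10 = regions_gt10 + 1;
    if n[i] > 15 then severe = 1;           /* any single region with more than 15 scratches */
  end;
  if regions_gt10 >= 2 then severe = 1;     /* more than ten scratches on at least two regions */
  if severe = 1 then moderate = 0;          /* keep the two categories mutually exclusive */
  drop i regions_gt10;
run;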
Carcass characteristics
As previously explained, in Trial 2 the pigs that reached their market BW were sent to slaughter in three batches, maintaining the traceability of the treatment group (Sp or Std). Therefore, each selection for slaughter included pigs from both treatments, until the finishing barn was emptied. Before slaughter, pigs were stunned in a CO2 chamber and then immediately exsanguinated in a vertical position. Afterwards, pigs were scalded at 65 °C, and carcass traits were obtained by ultrasound using the Autofom System (Carometec Food Technology).
Calculations and statistical analyses
Different procedures of the statistical package SAS® (SAS Inst. Inc.; Cary, NC) were used to analyze the data. The pig was the experimental unit in all calculations except for variability, which was expressed as the coefficient of variation (CV, %) with the pen as the experimental unit.
In Trial 1, the combination of feeder type (HD or LD) and sex (male or female) yielded a 2 × 2 factorial arrangement that was analyzed using the GLM procedure with the model:
$$ Y_{ijk} = \mu + \mathrm{treat}_i + \mathrm{sex}_j + (\mathrm{sex}\times\mathrm{treat})_{ij} + \varepsilon_{ijk} $$

where $Y_{ijk}$ is each observation of the outcome variable, $\mu$ is the overall mean, $\mathrm{treat}_i$ is the main effect of treatment, $\mathrm{sex}_j$ is the main effect of sex, $(\mathrm{sex}\times\mathrm{treat})_{ij}$ is the interaction between sex and treatment, and $\varepsilon_{ijk}$ is the experimental error term. The interaction term was not significant and was therefore removed from the model. The BW at day 64 (end of the nursery period) was used as a covariate because the pen distribution was defined at weaning.
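A minimal sketch of the corresponding SAS call (hypothetical dataset and variable names: trial1, adg, bw64), with the non-significant interaction already dropped and BW at d 64 entering as a covariate:

proc glm data=trial1;
  class treat sex;
  model adg = bw64 treat sex;   /* bw64 = covariate (BW at the end of the nursery) */
  lsmeans treat sex / stderr pdiff adjust=tukey;
run;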
Finally, regarding lesion scoring, the GENMOD procedure was used to assess differences between treatments. Only the percentage of moderately wounded pigs was considered in the statistical analysis.
In Trial 2, the effect of treatment (Sp or Std diets) on the BW and ADG of piglets was analyzed as a repeated-measures ANOVA using the MIXED procedure. Sex was initially included as a factor in the model but was removed because it was not significant. The pig was declared the repeated unit, with the AR(1) (autoregressive) option of SAS used to define the structure of the error (co)variance matrix. Data were grouped by treatment.
The same model was used to compare the effect of treatment considering only the light piglets (LSp vs. LStd) or on a group basis (G1 = Sp; G2 = Std).
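An illustrative sketch of the repeated-measures call (hypothetical dataset trial2 with variables pig_id, treat, time and bw):

proc mixed data=trial2;
  class treat time pig_id;
  model bw = treat time treat*time;
  repeated time / subject=pig_id type=ar(1);   /* AR(1) error (co)variance, pig as the repeated unit */
  lsmeans treat*time / pdiff adjust=tukey;
run;

The same call, restricted to the light pigs or run with group (G1/G2) in place of treat, reproduces the subset analyses described above.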
For carcass characteristics, the data were also analyzed by group, defining the ANOVA model:
$$ Y_{ijk} = \mu + \mathrm{treat}_i + \mathrm{time}_j + (\mathrm{treat}\times\mathrm{time})_{ij} + \varepsilon_{ijk} $$

where $Y_{ijk}$ is each observation of the outcome variable, $\mu$ is the overall mean, $\mathrm{treat}_i$ is the main effect of treatment, $\mathrm{time}_j$ is the main effect of time (the slaughter batch), $(\mathrm{treat}\times\mathrm{time})_{ij}$ is the treatment-by-time interaction, and $\varepsilon_{ijk}$ is the experimental error term.
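A hedged sketch of this model in SAS (hypothetical dataset carcass with variables treat, carc_wt and time, where time indexes the slaughter truck):

proc glm data=carcass;
  class treat time;
  model carc_wt = treat time treat*time;
  lsmeans treat*time / slice=time pdiff adjust=tukey;   /* compare treatments within each truck */
run;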
In Trial 2, all BW data registered for each individual pig along the whole experimental period were fitted to the following double-exponential Gompertz function, described in previous studies [9, 10], using the NLIN procedure:
$$ \mathrm{BW} = A\,\exp\left(-\exp\left(b - c\,t\right)\right) $$
where A, b and c are the parameters (constants) of the curve and t is the time (measured in d). Most of the curves (95.97%) met the convergence criteria. The predicted time to reach a market BW of 105 kg (t105) was obtained for each pig by inverting the fitted curve, $t_{105} = \left[b - \ln\left(\ln\left(A/105\right)\right)\right]/c$, and was then analyzed by ANOVA using the GLM procedure as the outcome variable, with the pig as the experimental unit.
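A minimal sketch of the per-pig fit with PROC NLIN (hypothetical long-format dataset bw_long with variables pig_id, age and bw; the starting values are illustrative):

proc sort data=bw_long;
  by pig_id;
run;

proc nlin data=bw_long;
  by pig_id;                                  /* one Gompertz curve per pig */
  parms A=150 b=2 c=0.012;                    /* illustrative starting values */
  model bw = A * exp(-exp(b - c*age));
  ods output ParameterEstimates=gomp_parms;   /* keeps A, b and c for every pig */
run;

t105 can then be computed from each pig's estimates with the inversion given above, which is valid whenever the fitted asymptote A exceeds 105 kg.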
Normality and homogeneity of variances were verified for all continuous variables in both trials using the Shapiro-Wilk and Levene's tests, respectively, with the UNIVARIATE procedure. Differences between groups were assessed using the Tukey test. In all statistical analyses, differences were declared significant at P ≤ 0.05, while 0.05 < P ≤ 0.15 was considered a near-significant trend.
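For reference, a hedged sketch of how these checks are commonly requested in SAS (the authors' exact calls are not reported; Levene's test is typically obtained through the HOVTEST option of PROC GLM rather than PROC UNIVARIATE):

proc univariate data=trial1 normal;   /* prints Shapiro-Wilk among the normality tests */
  var adg;
run;

proc glm data=trial1;
  class treat;
  model adg = treat;
  means treat / hovtest=levene;       /* Levene's test for homogeneity of variances */
run;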
Results

Throughout this section, the results for Trials 1 and 2 are presented independently, because the two strategies were implemented with different animals. The goal of the two experiments was nevertheless the same: to study the effect of two different approaches on the performance and homogeneity of pigs during the growing and finishing phases of production.
Growth performance during the growing and finishing periods
Table 2 presents the growth results, measured as body weight (BW) and average daily gain (ADG), obtained in Trial 1.
Table 2 Results of Body weight (BW) and average daily gain (ADG) by Treatment and Sex in Trial 1
It can be observed that pigs raised in the LD treatment tended to present higher BW (33.2 kg vs. 32.7 kg, P = 0.09) and ADG (586.6 g/d vs. 568.3 g/d, P = 0.09) at 92 d of age than the HD treatment. The slight difference in BW observed at d 92 in favor of the LD treatment (+0.5 kg) increased to 2.4 kg at d 121 (55.5 kg vs. 53.1 kg, P < 0.0001) and was 1.9 kg at d 154 (82.1 kg vs. 80.2 kg, P = 0.022). A similar trend was observed for ADG: for the periods from 64 d to 121 d and from 64 d to 154 d, pigs raised in the LD treatment presented higher ADG than the HD group (678.1 g/d vs. 636.0 g/d, P < 0.001; 725.2 g/d vs. 704.9 g/d, P = 0.02). Regarding sex, males presented a higher BW than females at 154 d of age (82.2 kg vs. 80.1 kg, P = 0.011). Similarly, ADG was higher for males during the period from d 64 to d 154 (726.5 g/d vs. 703.7 g/d, P = 0.015).
The growth results for Trial 2 are summarized in Tables 3 and 4. Note that the results in Table 3 refer only to the light pigs (n = 543).
Table 3 Results of body weight (BW), average daily gain (ADG) and the time to reach market BW (T105) by Treatment and Sex, for the light (L) piglets in Trial 2
Table 4 Results of body weight (BW), average daily gain (ADG) and the time to reach market BW (T105) by Treatment and Sex, on a Group (G) basis
It can be observed that the Lp pigs allotted to the Sp treatment were always heavier than the Lp pigs allotted to the Std treatment (29.6 kg vs. 28.7 kg, P = 0.001; 45.5 kg vs. 43.1 kg, P = 0.001; 63.9 kg vs. 60.3 kg, P < 0.001; 81.0 kg vs. 77.2 kg, P < 0.001; 92.7 kg vs. 89.5 kg, P = 0.001 at 83, 104, 125, 146 and 163 d of age, respectively), with a maximum difference of 3.8 kg, on average, at d 146. Similar results were observed for ADG: animals on the Sp treatment showed higher ADG than animals on the Std treatment (586.5 g/d vs. 552.3 g/d, P = 0.001; 676 g/d vs. 622 g/d, P < 0.001; 745.2 g/d vs. 690.6 g/d, P < 0.001; 762.9 g/d vs. 720.2 g/d, P < 0.001; 749.3 g/d vs. 720.5 g/d, P < 0.001) for the periods from 64 d to 83, 104, 125, 146 and 163 d of age, respectively. Finally, in contrast to Trial 1, males and females presented similar BW and ADG along the growing-finishing period.
In Table 4, the growth results are presented per group, assuming that the groups differ only in the way the Lp piglets were treated [Group 1 (G1), Sp treatment; Group 2 (G2), Std treatment]. In this case, 'group' is considered the global treatment effect, and the whole population was used (n = 1067).
Thus, pigs from the G1 and G2 treatments initially presented similar weights (32.4 kg vs. 32.2 kg, P = 0.512) at 83 d of age, but from this point onwards G1 animals were always heavier, on average, than G2 animals (48.4 kg vs. 47.4 kg, P = 0.004; 67.1 kg vs. 65.0 kg, P < 0.001; 84.0 kg vs. 82.2 kg, P < 0.001; 96.2 kg vs. 94.1 kg, P < 0.0001) at 104, 125, 146 and 163 d of age, respectively. There were no differences in ADG between the G1 and G2 treatments from 64 d to 83 d (618.7 g/d vs. 609.6 g/d, P = 0.212). However, G1 pigs presented higher ADG than G2 pigs (695.2 g/d vs. 668.9 g/d, P < 0.001; 761.8 g/d vs. 728.0 g/d, P < 0.001; 773.0 g/d vs. 750.3 g/d, P < 0.0001; 763.4 g/d vs. 741.8 g/d, P < 0.0001) for the periods from 64 d to 104, 125, 146 and 163 d of age, respectively. Males and females, again, presented similar BW and ADG along the growing-finishing period.
Figure 2 presents the growth results for the growing-finishing period (until 163 d of age). The BW of the heavy pigs (HStd) was always higher (P < 0.0001) than that of the two Lp groups (22.8 kg, 35.4 kg, 51.5 kg, 69.9 kg, 86.9 kg and 99.1 kg at 63, 83, 104, 125, 146 and 163 d, respectively). At the end of the growing period (125 d of age), LStd and LSp pigs were, respectively, 13.78% and 8.64% lighter than HStd pigs; the Sp treatment thus reduced the BW difference between HStd and LStd pigs by 37.30%. At d 163 (finishing period), LStd and LSp pigs were, respectively, 9.70% and 6.46% lighter than HStd pigs, and the Sp treatment reduced the BW difference between HStd and LStd pigs by 33.40%.
Growth as a function of age for light piglets (Sp and Std) compared to heavy standard pigs (HStd)
Regarding the time needed to reach market BW, LSp pigs took almost 5 d less than LStd pigs (181.5 d vs. 186.2 d, P = 0.005) to reach a market BW of 105 kg (Table 3), in line with the growth results. Regarding sex, no differences were observed between males and females (184.7 d vs. 183.0 d, P = 0.306). When the results are expressed by group (Table 4), animals allotted to G1 took almost 4 d less than G2 pigs (175.5 d vs. 179.1 d, P = 0.003). Finally, males tended to reach market BW earlier than females (176.3 d vs. 178.3 d, P = 0.092).
Evolution of the variability
In this section (Tables 5 and 6), the results of BW variability, expressed as CV, are presented.
Table 5 Results for the CV throughout the whole production cycle regarding the number of pigs per feeder space
Table 6 Results for the CV throughout the whole production cycle regarding the type feeding management offered to the pigs
In Trial 1 (Table 5), no differences in within-pen variability were observed, except at d 92, when animals from the LD group presented a lower CV than those from the HD group (HD: 15.02% vs. LD: 12.22%, P = 0.05).
From this point onwards, the differences were not maintained, although LD pigs always presented a numerically lower CV than HD pigs at d 121 (HD: 12.39% vs. LD: 10.31%) and d 154 (HD: 10.53% vs. LD: 8.86%). The largest percentage reduction was also observed for animals of the LD treatment (HD: 3.84% vs. LD: 17.15% reduction) during the first 28 d of the growing period (from 64 d to 92 d). For the whole period (from 64 d to 154 d), the percentage reduction was again larger for the LD pigs (HD: 32.59% vs. LD: 39.93%). In any case, a significant reduction of the CV over time was observed within each treatment.
In Trial 2, the CV decreased from d 83 to d 163 in both treatments (Table 6), as was also observed in Trial 1.
However, at day 146, animals of G1 tended to present a lower CV than G2 animals (10.0% vs. 11.1%, P = 0.095), and at day 163 G1 animals were more homogeneous than G2 animals (9.7% vs. 11.3%, P = 0.005).
Presence of wounds in Trial 1 and carcass characteristics in Trial 2
No differences in the number of wounds were observed in the first period, 10 days after entry to the fattening unit (HD: 11.11% vs. LD: 6.25%, P > 0.05). However, in the second period (d 115), pigs allotted to the LD treatment presented fewer wounds (HD: 18.86% vs. LD: 5.16%, P < 0.05).
The slaughter results from Trial 2 are summarized in Table 7. The interaction observed for carcass weight (P = 0.016) showed that the Sp treatment produced a higher carcass weight than the Std treatment, and that the difference increased as the emptying of the fattening facility advanced. The observed differences between the two treatments were 0.76 kg, 2.4 kg and 3.3 kg, on average, for trucks 1, 2 and 3, respectively. The percentage of lean tissue increased with slaughter age in both treatments, being significantly higher for pigs that left the fattening facility later.
Table 7 Effect of the Treatment on carcass weight and lean percentage of pigs slaughtered in Trial 2
Discussion

Current all-in-all-out swine production systems rely mainly on the piglet supply scheme adopted on the farm [11], while body weight variability reduces farm efficiency and increases occupation time, mainly in the growing-finishing facilities [12, 13].
Thus, pigs with a slow growth rate are expected to reach market BW later than their faster counterparts, reducing the producer's income. Maximizing the BW of the lightest pigs is therefore an issue under commercial conditions. The problem has a multifactorial origin, including genetics (sows and boars) [14,15,16], environment, herd health, management and nutrition [2, 17]. Consequently, the effects of two different strategies (feeder space and feeding management) on individual growth and BW variability were studied in the present work, from the end of the nursery phase until slaughter, in two trials performed under commercial conditions.
Feed intake is essential for correct performance, and limiting feed intake directly affects growth potential. Correct access to feed is therefore crucial to allow pigs to meet, or at least not be limited in, their nutrient requirements [3]; the feeder thus acts as the interface through which pigs can express their maximum growth. Some studies have investigated the effect of feeder design [18, 19] and of the number of feeder spaces on pig performance [20,21,22]. Several types of feeders are available on the market, and all of them attempt to maximize feed intake while minimizing feed waste in order to optimize pig performance. However, feeder design was not the aim of the present work, since the same type of commercial concrete feeder was used during the growing-finishing period, differing only in the number of feeder spaces (expressed as the number of pigs fed per feeder space). The pig:trough ratio can be altered by changing the number of pigs, the number of feeder spaces (the present study) or both [21]. Although the literature makes clear that the appropriate number of pigs per feeder space increases with the age of pigs [18, 20], it was hypothesized that 5.5 pigs/feeder space (HD) would allow less growth than 2.2 pigs/feeder space (LD), due to possible competition between pen-mates for access to feed or because some animals could spend less time eating than required. The apparent restriction of feeder spaces has yielded contradictory results in the literature: the traditional recommendation has been to provide one feeding space for every three or four pigs [23, 24] when feeding dry feed, whereas other authors showed that 12 [20], 20 [21] or even 30 [3] pigs can be fed from a single-space feeder without compromising performance, with feed given in mash, pelleted and mash form, respectively. The latter authors [3] went further and concluded that 12 pigs can be fed on a single-space feeder without affecting performance, because the limiting factor determining how many pigs can be fed on a single-space feeder is the length of the eating period, which is affected by total daily intake and feeding behavior. However, the literature shows mixed results depending on the age or BW range of the pigs examined. Our results showed better performance in terms of BW and ADG for pigs allocated to the LD treatment until d 154 of age, in agreement with another recent study [25], whose authors suggested a better feeding motivation when more feeder spaces are provided. In the present study, the better growth observed for pigs allotted to the LD treatment is probably explained by a higher feed intake (not measured in this experiment). In fact, a higher intake of the same diet results in higher growth [26], probably higher feed wastage, but also less competition between pen-mates. Competition between pen-mates usually occurs when piglets are moved to new facilities and mixed into new groups. This sudden mixing normally causes fights, especially during the first days [27]; fights are exacerbated when pigs are close in terms of dominance ability [28,29,30,31,32], producing easily observable skin lesions.
In the present study, the same pen-mates were kept from the nursery phase to avoid fights driven by hierarchy establishment, in order to isolate the feeder-space effect; indeed, no such behavior was observed in either treatment during the first days at the fattening unit, because the hierarchy was already well established. Nevertheless, at day 115 (51 d after moving to the fattening unit), an increase in the number of lesions was observed in the HD group compared to the LD group, probably due to the restriction in the number of feeding spaces, as suggested by [33], who hypothesized that more feeder spaces could reduce some agonistic behaviors such as tail biting (although tail biting was not measured, no blood or fresh crusts were observed on any pig). The skin-lesion results differ from those of [34], who did not observe differences related to feeder spaces in their study, but are in line with [22], who observed fewer aggressive interactions when the number of feeder spaces increased in groups of 20 pigs. Other authors [3] reported that the ideal number of pigs per feeder is not clear and can be very inconsistent. Regarding the CV results, no significant differences were found over the whole cycle in favor of the LD treatment. However, it is worth mentioning that during the first 28 d of the growing-finishing period, the differences observed in BW were accompanied by a larger decrease of the CV for pigs allotted to the LD treatment (see Table 5). In both treatments, the CV was highest at the beginning of the growing period and then decreased (32.59% and 39.93% reduction for the HD and LD treatments, respectively) as pigs approached market BW, in line with the results obtained by [13, 35].
The other strategy explored to increase the growth and homogeneity of a group of pigs was feeding management. In this sense, it is important to recall that the energy and nutrient requirements for optimal performance vary over time, but also between pigs of the same batch [36, 37]. This variability among individuals is not usually considered under practical conditions, since all pigs in a batch are fed in the same way [38]. In the present study, two different multi-phase feeding strategies were tested in two groups of pigs of the same batch (heavy and light). A four-diet program was planned in which the first three feeds were changed, only in the light-pig group, on the basis of an equivalent feed consumption instead of age (specific feeding management). The results showed that light pigs allotted to Sp performed better in terms of BW and ADG than those allotted to the standard feeding program normally used on the farm. Light pigs grow slowly and, under Sp, take longer to eat the same amount of feed; the better performance of these light pigs compared to Std could be explained by their nutrient requirements being better matched [39]. Some studies have discussed the existence of compensatory growth but, as compensatory growth is defined as the capacity of pigs to recover from a delay in growth caused by feed or nutritional depletion [40], it cannot be concluded from the current results, because the experimental plan was not designed to detect it. Surprisingly, no sex differences were found in Trial 2; this may be explained by the fact that entire males may not express their full potential, or by the tendency of Pietrain lines to reduce feed intake [41]. Regarding the variability between counterparts, a slight improvement was also observed for the light pigs under Sp: their BW difference relative to their bigger counterparts decreased, because the BW of the light piglets increased, leading to a decrease of the CV of the whole population. The results show that implementing the same growing-fattening feeding program separately for the heavy and light pigs of a group increases the mean slaughter live weight of the whole group and reduces its variability, compared to maintaining a single group.
Regarding the slaughter results, the interaction observed showed that the Sp pigs always presented a higher carcass weight, and that the difference with the Std pigs became larger as the emptying of the barn facility progressed. The percentage of lean tissue was similar for both treatments; nevertheless, it was higher in pigs slaughtered later, in line with the results of [42, 43]; in the latter case, the authors observed that pigs that grow faster are also fatter than pigs with a slower growth rate (lean animals).
Conclusions

Under the commercial conditions and with the genetic lines used in this work, it is concluded that a higher feeder-space availability may improve both BW and ADG along the growing and finishing periods. Pigs allotted to more feeder spaces presented fewer wounds and tended to show a lower BW variability during the growing and finishing phases of production. Regarding feeding management, our results suggest that light piglets subjected to a specific feeding strategy at the start of the growing period increase their growth rate and partially catch up with their bigger/heavier counterparts, significantly decreasing the variability of the population at slaughter.
SIP Consultors. Informe Consolidado - España 2017. http://agricultura.gencat.cat/web/.content/de_departament/de02_estadistiques_observatoris/08_observatoris_sectorials/04_observatori_porci/informes_periodics_2017/E2_informe_economic_2017/fitxer_estatic/Informe-Economic-2017.pdf. Accessed 14 June 2018.
Douglas SL, Edwards SA, Kyriazakis I. Management strategies to improve the performance of low birth weight pigs to weaning and their long-term consequences. J Anim Sci. 2014; https://doi.org/10.2527/jas2013-7388.
Gonyou HW, Lou Z. Effects of eating space and availability in feeders on productivity and eating behavior of grower/finisher pigs. J Anim Sci. 2000; https://doi.org/10.2527/2000.784865x.
Glen JJ. A dynamic programming model for pig production. J Oper Res Soc. 1983;34(6):511–9.
Boland MA, Foster KA, Preckel PV. Nutrition and the economics of swine management. J App Agr Econ. 1999;31:83–96.
Alexander DLJ, Morel PCH, Wood RD. Feeding strategies for maximising gross margin in pig production. In: Pintér JD, editor. Global Optimization: Scientific and Engineering Case Studies. Nonconve Optimization and Its Applications, vol. 85. New York: Springer Science+Business Media; 2006. p. 33–43.
De Blas C, Gasa J, Mateos GG. Necesidades nutricionales para ganado porcino. 2nd ed. Madrid: Fundación Española para el Desarrollo de la Nutrición Animal (FEDNA); 2013.
Dalmau A, Temple D, Rodrígues P, Llonch P, Velarde A. Application of the welfare quality® protocol at pig slaughterhouses. Anim Welf. 2009;18:497–505.
Schinckel AP, Ferrel J, Einstein ME, Pearce SM, Boyd RD. Analysis of pig growth from birth to sixty days of age. Prof Anim Sci. 2004; https://doi.org/10.15232/S1080-7446(15)30965-7.
Casas GA, Rodríguez D, Afanador G. Propiedades matemáticas del modelo de Gompertz y su aplicación al crecimiento de los cerdos. 2010. http://www.redalyc.org/pdf/2950/295023477010.pdf. Accessed 14 June 2018.
Leen F, Van den Broeke A, Aluwé M, Lauwers L, Millet S, Van Meensel J. Optimising finishing pig delivery weight: participatory decision problem analysis. Anim Prod Sci. 2017 https://doi.org/10.1071/AN16098.
Hardy B. Management of large units. In: Wiseman J, Varley MA, Chadwich JP, editors. Progress in Pig Science. Cambridge: University Press; 1998. p. 561–81.
Patience JF, Engele K, Beaulieu AD, Gonyou HW, Zijlstra RT. Variation: costs and consequences. Proc of the Banff Pork Seminar Adv Pork Prod. 2004;15:257–66.
Beaulieu AD, Aalhus JL, Williams NH, Patience JF. Impact of piglet birth weight, birth order, and litter size on subsequent growth performance, carcass quality, muscle composition, and eating quality of pork. J Anim Sci. 2010; https://doi.org/10.2527/jas.2009-2222.
Baxter EM, Rutherford KMD, D'Eath RB, Arnott G, Turner SP, Sandøe P, Moustsen VA, Thorup F, Edwards SA, Lawrence AB. The welfare implications of large litter size in the domestic pig II: management factors. Anim Welf. 2013; https://doi.org/10.7120/09627286.22.2.219.
Rutherford KMD, Baxter EM, D'Eath RB, Turner SP, Arnott G, Roehe R, Ask B, Sandøe P, Moustsen VA, Thorup F, Edwards SA, Berg P, Lawrence AB. The welfare implications of large litter size in the domestic pig I: biological factors. Anim Welf. 2013; https://doi.org/10.7120/09627286.22.2.199.
Whitney MH. Factors Affecting Nutrient Recommendations for Swine. 2010. http://porkgateway.org/wp-content/uploads/2015/07/factors-affecting-nutrient-recommendations-for-swine1.pdf. Accessed 14 June 2018.
Hyun Y, Ellis M, Johnson RW. Effects of feeder type, space allowance, and mixing on the growth performance and feed intake pattern of growing pigs. J Anim Sci. 1998; https://doi.org/10.2527/1998.76112771x.
Brumm MC, Ellis M, Johnston LJ, Rozeboom DW, Zimmerman DR, NCR-89 Committee on Swine Management. Interaction of swine nursery and grow-finish space allocations on performance. J Anim Sci. 2001; https://doi.org/10.2527/2001.7981967x.
Walker N. The effects on performance and behaviour of number of growing pigs per mono-place feeder. Anim Feed Sci Technol. 1991;35:3–13.
Nielsen BL, Lawrence AB, Whittemore CT. Feeding behaviour of growing pigs using single or multi-space feeders. Appl Anim Behav Sci. 1996;47:235–46.
Spoodler HAM, Edwards SA, Corning S. Effects of group size and feeder space allowance on welfare in finishing pigs. Anim Sci. 1999;69:481–9.
English PR, Fowler VR, Baxter S, Smith B. The growing and finishing pig: improving efficiency. Ipswich: Farming Press; 1988.
Albar J, Granier R. Feeding with feeders: effect of the number of pigs per eating place on performance. Ann Zootech. 1989;38:200 (Abstr.).
He Y, Cui S, Deen J, Shurson GC, Li Y. Effects of feeder space allowance on behavior of slow-growing pigs during the nursery period. J Anim Sci. 2016;94(Suppl 2):4.
Albar J, Granier R. Feeding with feeders: effect of the number of pigs per eating place on performance. Ann Zootech. 1989;38:200 (Abstr.).
Arey DS, Franklin MF. Effects of straw and unfamiliarity on fighting between newly mixed growing pigs. Appl Anim Behav Sci. 1995;45:23–30.
Parker GA. Assessment strategy and the evolution of fighting behaviour. J Theoret Biol. 1974;47:223–43.
Rushen J. A difference in weight reduces fighting when unacquainted newly weaned pigs first meet. Can J Anim Sci. 1987;67:951–60.
Rushen J. Assessment of fighting ability or simple habituation: what causes young pigs (Sus scrofa) to stop fighting? Aggress Behav. 1988;14:155–67.
Francis DA, Christison GI, Cymbaluk NF. Uniform or heterogeneous weight groups as factors in mixing weanling pigs. Can J Anim Sci. 1996;76:171–6.
Andersen IL, Andenaes H, Bøe KE, Jensen P, Bakken M. The effects of weight asymmetry and resource distribution on aggression in groups of unacquainted pigs. Appl Anim Behav Sci. 2000;68:107–20.
Moinard C, Mendl M, Nicol CJ, Green LE. A case control study of on-farm risk factors for tail biting in pigs. Appl Anim Behav Sci. 2003;81(4):333–55.
Turner SP, Dahlgren M, Arey DS, Edwards SA. Effect of social group size and initial live weight on feeder space requirement of growing pigs given food ad libitum. Anim Sci. 2002;75:75–83.
López-Vergé S, Solà-Oriol D, Gasa J. Is the lactation period the main variable responsible for reducing the efficiency of swine production? J Anim Sci. 2015;93(Suppl s3):184.
Brossard L, van Milgen J, Dourmad JY. Analyse par modélisation de la variation des performances d'un groupe de porcs en croissance en fonction de l'apport de lysine et du nombre de phases dans le programme d'alimentation. Journées de la Recherche Porcine. 2007;39:95-102.
Pomar C. Predicting responses and nutrient requirements in growing animal populations: the case of the growing-finishing pig. In: Hanigan MD, Novotny JA, Marstaller CL, editors. Mathematical modeling in nutrition and agriculture. Blacksburg: Virginia Polytechnic and State University; 2007. p. 309–33.
Andretta I, Pomar C, Kipper M, Hauschild L, Rivest J. Feeding behavior of growing-finishing pigs reared under precision feeding strategies. J Anim Sci. 2016; https://doi.org/10.2527/jas2016-0392.
Lee JH, Kim JD, Kim JH, Jin J, Han IK. Effect of phase feeding on the growth performance, nutrient utilization and carcass characteristics in finishing pigs. Asian-australas J Anim Sci. 2000;13(8):1137–46.
Cloutier L, Létourneau-Montminy MP, Bernier JF, Pomar J, Pomar C. Effect of a lysine depletion–repletion protocol on the compensatory growth of growing-finishing pigs. J Anim Sci. 2016; https://doi.org/10.2527/jas.2015-9618.
Green DM, Brotherstone S, Schofield CP, Whittemore CT. Food intake and live growth performance of pigs measured automatically and continuously from 25 to 115 kg live weight. J Sci Food Agric. 2003; https://doi.org/10.1002/jsfa.1519.
Rehfeldt C, Kuhn G. Consequences of birth weight for postnatal growth performance and carcass quality in pigs as related to myogenesis. J Anim Sci. 2006;84 Suppl:113–23.
Rauw WM, Soler J, Tibau J, Reixach J, Raya LG. Feeding time and feeding rate and its relationship with feed intake, feed efficiency, growth rate, and rate of fat deposition in growing Duroc barrows. J Anim Sci. 2006; https://doi.org/10.2527/jas.2006-209.
The authors thank the stock workers of the two farms (Granja El Plata, Granja Vaqueria Miralves) for their help and support during the development of the two trials.
This manuscript was proofread by Mr. Chuck Simmons, a native English-speaking university professor of English.
No funding was received in order to conduct the present work.
Requests for the datasets should be made to the corresponding author.
Department of Animal and Food Sciences, Animal Nutrition and Welfare Service, Universitat Autònoma de Barcelona, 08193, Bellaterra, Spain
Sergi López-Vergé, Josep Gasa, Déborah Temple & David Solà-Oriol
Vall Companys Group, 25191, Lleida, Spain
Jordi Bonet & Jaume Coma
Sergi López-Vergé
Josep Gasa
Déborah Temple
Jordi Bonet
Jaume Coma
David Solà-Oriol
All authors contributed equally to the writing of the present manuscript, and all authors read and approved the final manuscript.
Correspondence to Sergi López-Vergé.
The present work was conducted under the approval of the Animal Ethics Committee of the Universitat Autònoma de Barcelona and was in compliance with the European Union guidelines for the care and use of animals in research (European Parliament, 2010).
López-Vergé, S., Gasa, J., Temple, D. et al. Strategies to improve the growth and homogeneity of growing-finishing pigs: feeder space and feeding management. Porc Health Manag 4, 14 (2018). https://doi.org/10.1186/s40813-018-0090-9
Coefficient of variation
Feeder spaces
Feeding management
Market body weight | CommonCrawl |
May 2019, 2(2): 149-168. doi: 10.3934/mfc.2019011
Nonlinear diffusion based image segmentation using two fast algorithms
Lu Tan 1,*, Ling Li 1, Senjian An 1 and Zhenkuan Pan 2
School of Electrical Engineering, Computing and Mathematical Sciences (Computing Discipline), Curtin University, Perth 6102, Australia
College of Computer Science and Technology, Qingdao University, Qingdao 266071, China
* Corresponding author: Lu Tan
In this paper, a new variational model is proposed for image segmentation based on active contours, nonlinear diffusion and level sets. It includes a Chan-Vese-model-based data-fitting term and a regularization term that uses the potential functions (PF) of nonlinear diffusion. The former segments the image by region partition instead of relying on edge information; the latter automatically preserves image edges while smoothing noisy regions. To improve computational efficiency, the implementation of the proposed model does not solve the high-order nonlinear partial differential equations directly; instead, it exploits the efficient alternating direction method of multipliers (ADMM), which allows the use of the fast Fourier transform (FFT), an analytical generalized soft-thresholding equation, and a projection formula. In particular, we propose a new fast algorithm, the normal vector projection method (NVPM), based on alternating optimization and normal vector projection. Its stability is comparable to that of ADMM, and it converges faster. Extensive numerical experiments on grey and colour images validate the effectiveness of the proposed model and the efficiency of the algorithms.
Keywords: Active contour model, binary level sets, nonlinear diffusion, alternating direction method of multipliers (ADMM), normal vector projection method (NVPM), optimal first-order methods.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.
Citation: Lu Tan, Ling Li, Senjian An, Zhenkuan Pan. Nonlinear diffusion based image segmentation using two fast algorithms. Mathematical Foundations of Computing, 2019, 2 (2) : 149-168. doi: 10.3934/mfc.2019011
R. Acar and C. R. Vogel, Analysis of bounded variation penalty methods for ill-posed problems, Inverse Problems, 10 (1994), 1217-1229. doi: 10.1088/0266-5611/10/6/003. Google Scholar
R. Andreani, L. D. Secchin and P. J. Silva, Convergence properties of a second order augmented Lagrangian method for mathematical programs with complementarity constraints, SIAM Journal on Optimization, 28 (2018), 2574-2600. doi: 10.1137/17M1125698. Google Scholar
G. Aubert and P. Kornprobst, Mathematical Problems in Image Processing, Applied Mathematical Sciences, 2006. Google Scholar
G. Aubert and L. A. Vese, Variational method in image recovery, SIAM Journal on Numerical Analysis, 34 (1997), 1948-1979. doi: 10.1137/S003614299529230X. Google Scholar
L. Ambrosio and V. M. Tortorelli, Approximation of functional depending on jumps by elliptic functional via Gamma-convergence, Communications on Pure and Applied Mathematics, 43 (1990), 999-1036. doi: 10.1002/cpa.3160430805. Google Scholar
E. Bae, X. C. Tai and W. Zhu, Augmented Lagrangian method for an Euler's elastica based segmentation model that promotes convex contours, Inverse Problems and Imaging, 11 (2017), 1-23. doi: 10.3934/ipi.2017001. Google Scholar
X. Bresson, S. Esedoglu, P. Vandergheynst, J. P. Thiran and S. Osher, Fast global minimization of the active contour/snake model, Journal of Mathematical Imaging and Vision, 28 (2007), 151-167. doi: 10.1007/s10851-007-0002-0. Google Scholar
V. Caselles, R. Kimmel and G. Sapiro, Geodesic active contours, International Journal of Computer Vision, 22 (1997), 61-79. doi: 10.1109/ICCV.1995.466871. Google Scholar
F. Catte, P.-L. Lions, J.-M. Morel and T. Coll, Image selective smoothing and edge detection by nonlinear diffusion, SIAM Journal on Numerical Analysis, 29 (1992), 182-193. doi: 10.1137/0729012. Google Scholar
T. F. Chan, B. Y. Sandberg and L. A. Vese, Active contours without edges for vector-valued images, Journal of Visual Communication and Image Representation, 11 (2000), 130-141. doi: 10.1006/jvci.1999.0442. Google Scholar
T. F. Chan and L. A. Vese, Active Contours without Edges, IEEE Transactions on Image Processing, 10 (2001), 266-277. doi: 10.1109/83.902291. Google Scholar
P. Charbonnier, L. Blanc-Feraud, G. Aubert and M. Barlaud, Two deterministic half-quadratic regularization algorithms for computed imaging, IEEE International Conference on Image Processing, 2 (1994), 168-172. doi: 10.1109/ICIP.1994.413553. Google Scholar
P. Charbonnier, L. Blanc-Feraud, G. Aubert and M. Barlaud, Deterministic edge-preserving regularization in computed imaging, IEEE Transactions on Image Processing, 6 (1997), 298-311. doi: 10.1109/83.551699. Google Scholar
L. J. Deng, R. Glowinski and X.-C. Tai, A New Operator Splitting Method for Euler's Elastica Model, preprint, arXiv: 1811.07091. Google Scholar
S. Geman and D. E. McClure, Statistical methods for tomographic image reconstruction, Bulletin of the ISI, 52 (1987), 5-21. Google Scholar
T. Goldstein, B. O'Donoghue, S. Setzer and R. Baraniuk, Fast alternating direction optimization methods, SIAM Journal on Imaging Sciences, 7 (2014), 1588-1623. doi: 10.1137/120896219. Google Scholar
P. J. Green, Bayesian reconstructions from emission tomography data using a modified EM algorithm, IEEE Transactions on Medical Imaging, 9 (1990), 84-93. doi: 10.1109/42.52985. Google Scholar
L. Gun, L. Cuihua and Z. Yingpan, et al., An improved speckle-reduction algorithm for SAR images based on anisotropic diffusion, Multimedia Tools and Applications, 76 (2017), 17615-17632. doi: 10.1007/s11042-015-2810-3. Google Scholar
T. Hebert and R. Leahy, A generalized EM algorithm for 3-D Bayesian reconstruction from Poisson data using Gibbs priors, IEEE Transactions on Medical Imaging, 8 (1989), 194-202. doi: 10.1109/42.24868. Google Scholar
D. Mumford and J. Shah, Optimal approximations of piecewise smooth functions and associated variational problems, Communications on Pure and Applied Mathematics, 42 (1989), 577-685. doi: 10.1002/cpa.3160420503. Google Scholar
M. Nikolova, Minimizers of cost-functions involving nonsmooth data-fidelity terms. Application to the processing of outliers, SIAM Journal on Numerical Analysis, 40 (2002), 965-994. doi: 10.1137/S0036142901389165. Google Scholar
M. Nikolova, Analysis of the recovery of edges in images and signals by minimizing nonconvex regularized least-squares, Multiscale Modeling & Simulation, 4 (2005), 960-991. doi: 10.1137/040619582. Google Scholar
P. Perona and J. Malik, Scale space and edge detection using anisotropic diffusion, IEEE Transactions on Pattern Analysis and Machine Intelligence, 12 (1990), 629-639. doi: 10.1109/34.56205. Google Scholar
H. K. Rafsanjani, M. H. Sedaaghi and S. Saryazdi, An adaptive diffusion coefficient selection for image denoising, Digital Signal Processing, 64 (2017), 71-82. doi: 10.1016/j.dsp.2017.02.004. Google Scholar
L. Tan, L. Li, W. Liu, et al., A Novel Euler's Elastica based Segmentation Approach for Noisy Images via using the Progressive Hedging Algorithm, preprint, arXiv: 1902.07402. Google Scholar
L. Tan, W. Liu and L. Li et al., A fast computational approach for illusory contour reconstruction, Multimedia Tools and Applications, 78 (2019), 10449-10472. doi: 10.1007/s11042-018-6546-8. Google Scholar
L. Tan, W. Liu and Z. Pan, Color image restoration and inpainting via multi-channel total curvature, Applied Mathematical Modelling, 61 (2018), 280-299. doi: 10.1016/j.apm.2018.04.017. Google Scholar
L. Tan, Z. Pan, W. Liu, J. Duan, W. Wei and G. Wang, Image segmentation with depth information via simplified variational level set formulation, Journal of Mathematical Imaging and Vision, 60 (2018), 1-17. doi: 10.1007/s10851-017-0735-3. Google Scholar
L. Tan, W. Wei and Z. Pan, et al., A High-order Model of TV and Its Augmented Lagrangian Algorithm, Applied Mechanics and Materials, 568 (2014), 726-733. doi: 10.4028/www.scientific.net/AMM.568-570.726. Google Scholar
L. A. Vese and T. F. Chan, A multiphase level set framework for image segmentation using the Mumford and Shah model, International Journal of Computer Vision, 50 (2002), 271-293. Google Scholar
C. R. Vogel and M. E. Oman, Iterative methods for total variation denoising, SIAM Journal on Scientific Computing, 17 (1996), 227-238. doi: 10.1137/0917016. Google Scholar
B. Wang, X. Yuan and X. Gao et al., A hybrid level set with semantic shape constraint for object segmentation, IEEE Transactions on Cybernetics, 49 (2019), 1558-1569. doi: 10.1109/TCYB.2018.2799999. Google Scholar
S. Yan, X. Tai, J. Liu, et al., Convexity Shape Prior for Level Set based Image Segmentation Method, preprint, arXiv: 1805.08676. Google Scholar
M. Yashtini and S. H. Kang, A fast relaxed normal two split method and an effective weighted TV approach for Euler's elastica image inpainting, SIAM Journal on Imaging Sciences, 9 (2016), 1552-1581. doi: 10.1137/16M1063757. Google Scholar
S. Zheng, Z. Xu, H. Yang, J. Song and Z. Pan, Comparisons of different methods for balanced data classification under the discrete non-local total variational framework, Mathematical Foundations of Computing, 2 (2019), 11-28. doi: 10.3934/mfc.2019002. Google Scholar
Z. Zhou, Z. Guo and D. Zhang, et al., A nonlinear diffusion equation-based model for ultrasound speckle noise removal, Journal of Nonlinear Science, 28 (2018), 443-470. doi: 10.1007/s00332-017-9414-1. Google Scholar
W. Zhu, A numerical study of a mean curvature denoising model using a novel augmented Lagrangian method, Inverse Problems and Imaging, 11 (2017), 975-996. doi: 10.3934/ipi.2017045. Google Scholar
Figure 1. Effects of our model. The first row: initial curves. The second row: the results obtained by ADMM and NVPM. (a2) and (b2) from ADMM, (c2) and (d2) from NVPM
Figure 2. Effects of GAC and PSAC models. The first and the fourth column: initial curves. The second and the fifth column: final curves of GAC model. The third and the sixth column: final curves of PSAC model
Figure 3. Plots of parametric errors and energy curves. The first row is obtained by ADMM. The second row is obtained by NVPM
Figure 4. Effects of our model, GAC model and PSAC model. The first column: initial curves. The second column: the results of our model obtained by ADMM (top) and NVPM (bottom). The third column: the results of GAC model. The last column: the results of PSAC model
Figure 5. Non-threshold solutions of our methods. The first column: final results of $ \phi $. The second column: zoomed small sub-regions (red rectangles in (c1) and (d1)) for detail comparison
Figure 6. Effects of our model, GAC model and PSAC model on colour images. (a1), (b1) and (c1): initial curves. (a2), (b2) and (c2): results of our model via ADMM (a2), NVPM (b2) and NVPM* (c2). (a3), (b3) and (c3): GAC model results. (a4), (b4) and (c4): results of PSAC model
Figure 7. Plots of parametric errors and energy curves. The first row is obtained by our model using ADMM. The second row is obtained by our model using NVPM*
Table 1. Potential functions for the regularization term
$ \varphi(|\nabla\phi|) $ source
(ⅰ) $ |\nabla\phi|^p, 0<p\leq2 $ [21]
(ⅱ) $ \sqrt{1+|\nabla\phi|^2} $ [31]
(ⅲ) $ \sqrt{1+|\nabla\phi|^2}-1 $ [1]
(ⅳ) $ \frac{|\nabla\phi|^2}{1+|\nabla\phi|^2} $ [15]
(ⅴ) $ \log(1+|\nabla\phi|^2) $ [19]
(ⅵ) $ \log(\cosh(|\nabla\phi|)) $ [13]
(ⅶ) $ 1-\lambda^2e^{-\frac{|\nabla\phi|^2}{2\lambda^2}} $ [22]
(ⅷ) $ \lambda^2\log(1+\frac{|\nabla\phi|^2}{\lambda^2}) $ [23]
(ⅸ) $ 2\lambda^2(\sqrt{1+\frac{|\nabla\phi|^2}{\lambda^2}}-1) $ [12]
(ⅹ) $ |\nabla\phi|-\alpha\log(1+\frac{|\nabla\phi|}{\alpha}) $ [17]
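For a quick feel of how these regularizers differ, the sketch below evaluates a few of the table's potential functions and the diffusivities $\varphi'(s)/s$ they induce; the selection and the parameter value $\lambda = 1$ are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

s = np.linspace(1e-6, 5.0, 200)    # stands in for |grad phi|; avoid 0
lam = 1.0                           # contrast parameter, arbitrary here

potentials = {
    "TV (i), p=1":        s,
    "(iii)":              np.sqrt(1 + s**2) - 1,
    "Perona-Malik (vii)": 1 - lam**2 * np.exp(-s**2 / (2 * lam**2)),
    "(viii)":             lam**2 * np.log(1 + s**2 / lam**2),
}

for name, phi in potentials.items():
    diffusivity = np.gradient(phi, s) / s   # numerical phi'(s)/s
    print(f"{name:20s} phi(5)={phi[-1]:.3f}  phi'(5)/5={diffusivity[-1]:.4f}")
```

Potentials whose diffusivity decays for large gradients (e.g. (vii), (viii)) smooth flat regions while leaving strong edges nearly untouched, which is the behaviour the regularization term relies on.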
Table 2. Comparisons of iterations and time using different methods
Image Methods Iterations Time (sec)
Fig. 1 (a2)
PF (ⅰ) ADMM 3 0.062
NVPM 3 0.047
NVPM* 3 0.039
Fig. 1 (b2)
PF (ⅲ) ADMM 7 0.162
Fig. 1 (c2)
PF (ⅴ) ADMM 5 0.155
Fig. 1 (d2)
PF (ⅶ) ADMM 3 0.094
PF (ⅱ) ADMM 6 0.184
PF (ⅳ) ADMM 16 0.215
PF (ⅵ) ADMM 6 0.336
PF (ⅷ) ADMM 7 0.389
PF (ⅸ) ADMM 5 0.175
PF (ⅹ) ADMM 5 0.183
PF (ⅵ) ADMM 11 2.389
NVPM 11 2.052
NVPM* 11 2.043
PF (ⅸ) ADMM 10 1.998
December 2013, 8(4): 969-984. doi: 10.3934/nhm.2013.8.969
Convergence of vanishing capillarity approximations for scalar conservation laws with discontinuous fluxes
Giuseppe Maria Coclite 1, Lorenzo di Ruvo 2, Jan Ernest 3 and Siddhartha Mishra 3
Department of Mathematics, University of Bari, Via E. Orabona 4, I--70125 Bari
Department of Mathematics, University of Bari, via E. Orabona 4, 70125 Bari, Italy
Seminar for Applied Mathematics (SAM), ETH Zürich, HG G 57.2, Rämistrasse 101, 8092 Zürich, Switzerland
Received October 2012 Revised April 2013 Published November 2013
Flow of two phases in a heterogeneous porous medium is modeled by a scalar conservation law with a discontinuous coefficient. As solutions of conservation laws with discontinuous coefficients depend explicitly on the underlying small scale effects, we consider a model where the relevant small scale effect is dynamic capillary pressure. We prove that the limit of vanishing dynamic capillary pressure exists and is a weak solution of the corresponding scalar conservation law with discontinuous coefficient. A robust numerical scheme for approximating the resulting limit solutions is introduced. Numerical experiments show that the scheme is able to approximate interesting solution features such as propagating non-classical shock waves as well as discontinuous standing waves efficiently.
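The paper's specific scheme is not reproduced here, but the following sketch shows the generic shape of a first-order finite-volume method for $u_t + (k(x)f(u))_x = 0$ with a piecewise-constant coefficient $k$: a Lax-Friedrichs flux on a toy Buckley-Leverett-type problem. The flux choice, parameters, and initial data are assumptions for illustration only.

```python
import numpy as np

def lax_friedrichs_step(u, k, dx, dt, f, alpha):
    """One Lax-Friedrichs step for u_t + (k(x) f(u))_x = 0 on a uniform grid.
    k is sampled at cell centres; boundary cells are left frozen (outflow).
    A generic sketch, not the scheme analyzed in the paper."""
    g = k * f(u)                                  # physical flux in each cell
    # numerical flux at interfaces i+1/2, with global LF dissipation alpha
    flux = 0.5 * (g[:-1] + g[1:]) - 0.5 * alpha * (u[1:] - u[:-1])
    unew = u.copy()
    unew[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
    return unew

# toy two-phase-flow setting: Buckley-Leverett-type flux, jump in k at x = 0
x = np.linspace(-1, 1, 400); dx = x[1] - x[0]
k = np.where(x < 0, 1.0, 2.0)                     # discontinuous coefficient
f = lambda u: u**2 / (u**2 + (1 - u)**2)
u = np.where(x < 0, 0.9, 0.1)                     # Riemann-type initial data
alpha = 2.0 * 2.0                                 # bound on |k f'(u)| here
dt = 0.5 * dx / alpha                             # CFL: alpha*dt/dx <= 1/2
for _ in range(300):
    u = lax_friedrichs_step(u, k, dx, dt, f, alpha)
```

A monolithic flux like this is deliberately dissipative; capturing the non-classical shocks and standing waves the paper discusses requires the more careful interface treatment developed there.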
Keywords: conservation laws, discontinuous fluxes, capillarity approximation.
Mathematics Subject Classification: Primary: 35L65; Secondary: 35L7.
Citation: Giuseppe Maria Coclite, Lorenzo di Ruvo, Jan Ernest, Siddhartha Mishra. Convergence of vanishing capillarity approximations for scalar conservation laws with discontinuous fluxes. Networks & Heterogeneous Media, 2013, 8 (4) : 969-984. doi: 10.3934/nhm.2013.8.969
May 2015, 14(3): 941-957. doi: 10.3934/cpaa.2015.14.941
KAM Tori for generalized Benjamin-Ono equation
Dongfeng Yan 1,
School of Mathematics and Statistics, Zhengzhou University, Zhengzhou, Henan, 450001, China
Received July 2014 Revised December 2014 Published March 2015
In this paper, we investigate the one-dimensional generalized Benjamin-Ono equation, $$u_t+\mathcal{H}u_{xx}+u^{4}u_x=0,\qquad x\in\mathbb{T},$$ and prove the existence of quasi-periodic solutions with two frequencies. The proof is based on a partial Birkhoff normal form and an unbounded KAM theorem developed by Liu-Yuan [Commun. Math. Phys. 307 (2011), 629-673].
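Numerically, the dispersive term is easy to realize on the torus, since $\mathcal{H}$ acts in Fourier as multiplication by $-\mathrm{i}\,\operatorname{sgn}(k)$ (up to a sign convention), so $\mathcal{H}u_{xx}$ has symbol $\mathrm{i}\,k|k|$. A short FFT sketch, with the sign convention taken as an assumption:

```python
import numpy as np

def H_uxx(u):
    """Spectral evaluation of H u_xx on the 2*pi-periodic torus, assuming
    the convention (Hu)^(k) = -i*sgn(k)*u^(k); then H u_xx has symbol
    i*k*|k|. The convention is an assumption, not fixed by the abstract."""
    n = u.size
    k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers
    return np.fft.ifft(1j * k * np.abs(k) * np.fft.fft(u)).real

# check against the exact action on a single mode cos(m x)
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
m = 3
lhs = H_uxx(np.cos(m * x))
rhs = -m * abs(m) * np.sin(m * x)           # i*k|k| acting on cos(mx)
print(np.max(np.abs(lhs - rhs)))            # ~1e-13
```

The symbol $k|k|$ is exactly the Benjamin-Ono dispersion relation, which is why perturbation arguments for this equation need KAM machinery adapted to unbounded perturbations.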
Keywords: quasi-periodic solutions, KAM tori, unbounded KAM theorem, Birkhoff normal form.
Mathematics Subject Classification: Primary: O175.14, O175.2.
Citation: Dongfeng Yan. KAM Tori for generalized Benjamin-Ono equation. Communications on Pure & Applied Analysis, 2015, 14 (3) : 941-957. doi: 10.3934/cpaa.2015.14.941
P. Baldi, Periodic solutions of fully nonlinear autonomous equations of Benjamin-Ono type, Ann. Inst. H. Poincar Anal. Non Linaire, 30 (2013), 33-77. doi: 10.1016/j.anihpc.2012.06.001. Google Scholar
P. Baldi, M. Berti and R. Montalto, KAM for quasi-linear and fully nonlinear forced perturbations of Airy equation, Math. Ann., 359 (2014), 471-536. doi: 10.1007/s00208-013-1001-7. Google Scholar
P. Baldi, M. Berti and R. Montalto, KAM for quasi-linear KdV, C. R. Math. Acad. Sci. Paris, 352 (2014), 603-607. doi: 10.1016/j.crma.2014.04.012. Google Scholar
T. B. Benjamin, Internal waves of permanent form in fluids of great depth, J. Fluid Mech., 29 (1967), 559-592. Google Scholar
M. Berti, L. Biasco and M. Procesi, KAM theory for the Hamiltonian derivative wave equations, Arch. Ration. Mech. Anal., 212 (2014), 905-955. doi: 10.1007/s00205-014-0726-0. Google Scholar
J. Bourgain, Construction of quasi-periodic solutions for Hamiltonian perturbations of linear equations and application to nonlinear pde, Int. Math. Res. Notices, 11 (1994), 475-497. doi: 10.1155/S1073792894000516. Google Scholar
J. Bourgain, On Melnikov's persistence problem, Math. Res. Lett., 4 (1997), 445-458. doi: 10.4310/MRL.1997.v4.n4.a1. Google Scholar
J. Bourgain, Quasi-periodic solutions of Hamiltonian perturbations for 2D linear Schrödinger equation, Ann. Math., 148 (1998), 363-439. doi: 10.2307/121001. Google Scholar
J. Bourgain, Green's Function Estimates for Lattice Schrödinger Operators and Applications, Annals of mathematics studies, Princeton University Press, 2005. Google Scholar
J. Bourgain, On invariant tori of full dimension for 1D periodic NLS, J. Funct. Anal., 229 (2005), 62-94. doi: 10.1016/j.jfa.2004.10.019. Google Scholar
L. Chierchia and J. You, KAM tori for 1D nonlinear wave equation with periodic boundary conditions, Commun. Math. Phys., 211 (2000), 497-525. doi: 10.1007/s002200050824. Google Scholar
G. Iooss, P. I. Plotnikov and J. F. Toland, Standing waves on an infinitely deep perfect fluid under gravity, Arch. Ration. Mech. Anal., 177 (2005), 367-478. doi: 10.1007/s00205-005-0381-6. Google Scholar
J. R. Iorio, On the Cauchy problem for the Benjamin-Ono equation, Comm. Partial Differential equations, 11 (1986), 1031-1081. doi: 10.1080/03605308608820456. Google Scholar
T. Kappeler and J. Pöschel, KdV & KAM, Springer-Verlag, Berlin, Heidelberg, 2003. doi: 10.1007/978-3-662-08054-2. Google Scholar
C. E. Kenig, G. Ponce and L. Vega, On the generalized Benjamin-Ono equations, Trans. Amer. Math. Soc., 342 (1994), 155-172. doi: 10.2307/2154688. Google Scholar
S. B. Kuksin, Hamiltonian perturbation of infinite-dimensional linear system with an imaginary spectrum, Funkt. Anal. Prilozh., 21 (1987), 22-37. [English translation in Funct. Anal. Appl., 21(1987), 192-205.] Google Scholar
S. B. Kuksin, Perturbation of quasiperiodic solutions of infinite-dimensional Hamiltonian systems, Izv. Akad. Nauk SSSR, ser. Mat., 52 (1989), 41-63. [English translation in Math. USSR Izv., 32 (1989), 39-62.] Google Scholar
S. B. Kuksin, Nearly Integrable Infinite-dimensional Hamiltonian Systems, Springer-Verlag, Berlin, 1993. Google Scholar
S. B. Kuksin and J. Pöschel, Invariant Cantor manifolds of quasi-periodic oscillations for a nonlinear Schrödinger equation, Ann. of Math., 143 (1996), 149-179. doi: 10.2307/2118656. Google Scholar
S. B. Kuksin, On small denominators equations with large variable coefficients, J. Appl. Math. Phys., 48(1997), 262-271. doi: 10.1007/PL00001476. Google Scholar
S. B. Kuksin, A KAM theorem for equations of the Korteweg-de Vries type, Rev. Math-Math Phys., 10 (1998), 1-64. Google Scholar
S. B. Kuksin, Analysis of Hamiltonian PDEs, Oxford Univ. Press,Oxford, 2000. Google Scholar
J. Liu and X. Yuan, Spectrum for quantum Duffing oscillator and small-divisor equation with large variable coefficient, Commun. Pure Appl. Math., 63 (2010), 1145-1172. doi: 10.1002/cpa.20314. Google Scholar
J. Liu and X. Yuan, A KAM theorem for Hamiltonian partial differential equations with unbounded perturbations, Commun. Math. Phys., 307 (2011), 629-673. doi: 10.1007/s00220-011-1353-3. Google Scholar
J. Liu and X. Yuan, KAM for the derivative nonliear Schrödinger equation with periodic boundary conditions, Journal of Differential Equations, 256 (2014), 1627-1652. doi: 10.1016/j.jde.2013.11.007. Google Scholar
L. Mi, Quasi-periodic solutions of derivative nonlinear Schrödinger equations with a given potential, Journal of Mathematical Analysis and Applications, 390 (2012), 335-354. doi: 10.1016/j.jmaa.2012.01.046. Google Scholar
L. Mi and K. Zhang, Invariant tori for Benjamin-Ono equation with unbounded quasi-periodically forced perturbation, Discrete and Continuous Dynamical Systems-Series A, 34 (2014), 689-707. doi: 10.3934/dcds.2014.34.689. Google Scholar
L. Molinet and F. Ribaud, Well-posedness results for the generalized Benjamin-Ono equation with small initial data, J. Math. Pures Appl., 83 (2004), 277-311. doi: 10.1016/j.matpur.2003.11.005. Google Scholar
H. Ono, Algebraic solitary waves in stratified fluids, Journal of the Physical Society of Japan, 39 (1975), 1082-1091. Google Scholar
J. Pöschel, A KAM theorem for some nonlinear PDEs, Ann. Scuola Norm. Sup. Pisa Cl. Sci., 23 (1996), 119-148. Google Scholar
J. Pöschel, Quasi-periodic solutions for nonlinear wave equations, Comm. Math. Helv., 71 (1996), 269-296. doi: 10.1007/BF02566420. Google Scholar
T. Tao, Global well-posedness of the Benjamin-Ono equation in H1(R), J. Hyperbolic Differ. Equ., 1 (2004), 27-49. doi: 10.1142/S0219891604000032. Google Scholar
C. E. Wayne, Periodic and quasi-periodic solutions of nonlinear wave equation via KAM theory, Commun. Math. Phys., 127 (1990), 479-528. Google Scholar
X. Yuan and K. Zhang, A reduction theorem for time dependent Schrödinger operator with finite differentiable unbounded perturbation, J. Math. Phys., 54 (2013), 052701. doi: 10.1063/1.4803852. Google Scholar
J. Zhang, M. Gao and X. Yuan, KAM tori for reversible partial differential equations, Nonlinearity, 24 (2011), 1189-1228. doi: 10.1088/0951-7715/24/4/010. Google Scholar
In the case of a $3\times 3$ matrix there are four classes of numerical ranges [1]:
Numerical range with zero line segments (ovular)
Numerical range with one line segment
Numerical range with two line segments
Numerical range with three line segments (triangular)
The matrix $M=\begin{pmatrix}1&1&1\\0&\mathrm{e}^{\frac{2\mathrm{i}}{3}}&1\\0&0&\mathrm{e}^{\frac{4\mathrm{i}}{3}}\end{pmatrix}$ has an ovular numerical range
Example of a numerical range whose boundary includes zero line segments.
The matrix $M= $ has a numerical range whose boundary includes one line segment
Example of a numerical range whose boundary includes one line segment.
The matrix $M=\begin{pmatrix}1&1&0\\0&\mathrm{e}^{\frac{2\mathrm{i}}{3}}&0\\0&0&\mathrm{e}^{\frac{4\mathrm{i}}{3}}\end{pmatrix}$ has a numerical range whose boundary includes two line segments
Example of a numerical range whose boundary includes two line segments.
The matrix $M=\begin{pmatrix}1&0&0\\0&\mathrm{e}^{\frac{2\mathrm{i}}{3}}&0\\0&0&\mathrm{e}^{\frac{4\mathrm{i}}{3}}\end{pmatrix}$ has a triangular numerical range. Notice that the matrix is normal.
Example of a numerical range whose boundary includes three line segments.
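These boundaries can be traced numerically: for each angle $\theta$, a top eigenvector $x$ of the Hermitian part of $\mathrm{e}^{-\mathrm{i}\theta}M$ gives the boundary point $x^{*}Mx$ of $W(M)$. A sketch in Python, with the eigenvalues taken as cube roots of unity — a common normalization for these examples; the exact entries above may differ.

```python
import numpy as np

def numerical_range_boundary(M, n_angles=360):
    """Boundary points of W(M) = {x* M x : ||x|| = 1} via support functions:
    for each theta, take a top eigenvector of the Hermitian part of
    e^{-i theta} M."""
    pts = []
    for theta in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        R = np.exp(-1j * theta) * M
        H = (R + R.conj().T) / 2              # Hermitian part
        w, V = np.linalg.eigh(H)              # eigenvalues in ascending order
        x = V[:, -1]                          # eigenvector of largest eigenvalue
        pts.append(x.conj() @ M @ x)
    return np.array(pts)

w3 = np.exp(2j * np.pi / 3)                   # primitive cube root of unity
M = np.diag([1, w3, w3**2])                   # normal: W(M) is the triangle
boundary = numerical_range_boundary(M)        # vertices of an equilateral triangle
```

Plotting the real versus imaginary parts of `boundary` for the four matrices above reproduces the ovular, one-, two-, and three-segment shapes of the classification.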
[1]D. S. Keeler, L. Rodman, and I. M. Spitkovsky, "The numerical range of 3x3 matrices," Linear Algebra and its Applications, vol. 252, no. 1-3, pp. 115–139, 1997, [Online]. Available at: https://dx.doi.org/10.1016/0024-3795(95)00674-5.
Can hot food ever emit x-rays or gamma rays?
I was just wondering: if heating food up is the result of increasing the energy of bends and stretches in the bonds of the molecules, is it ever possible for tiny amounts of x-rays and gamma rays to be emitted?
When we give a molecule enough energy, its electrons can jump to higher orbitals and then return to the ground state, releasing EMR with a frequency proportional to the energy gap. So if we heat food, is there a chance that some electron receives enough energy to jump up to a really high energy level?
quantum-chemistry spectroscopy
$\begingroup$ Sure, but you'd be hard-pressed to call it "food" at that point, as it'd be dissociated into atomic substituents at best. $\endgroup$ – Todd Minehardt Jun 11 '16 at 15:15
$\begingroup$ Agreed! However, this is worth the "Mythbuster" treatment. If we cannot generate x-rays at reasonable food temperatures, what would it take and what would we have left? See my answer below. $\endgroup$ – Ben Norris Jun 11 '16 at 15:23
$\begingroup$ Your description in terms of bound states of an isolated molecule inside the food doesn't really work. Food is solid or liquid (condensed matter), so the molecules don't exist in isolation. If you did have, say, an isolated water molecule (in steam), it would only have bound states with energies up to some limit, which would be on the order of 1 eV. Any states with higher energies would be states in which the molecule had been ionized or dissociated. As explained in Ben Norris's answer, condensed matter emits a continuous spectrum of radiation, so it's a completely different deal. $\endgroup$ – Ben Crowell Jun 11 '16 at 21:14
$\begingroup$ Can "hot" food ever emit x-rays or gamma rays? // Seems like a Freudian slip. I'm sure vegetables from Chernobyl and Fukushima would be "hot." $\endgroup$ – MaxW Jun 11 '16 at 22:35
$\begingroup$ While not quite related to your question, if you really want to create X-rays in your kitchen, you'd have more success using scotch tape. Though your kitchen would have to be particularly well equipped to include the necessary vacuum chamber. $\endgroup$ – Johnny Jun 12 '16 at 3:40
In theory, yes, you can heat objects to a high enough temperature to emit x-rays or gamma rays. You cannot do this to food, and you certainly cannot do this in your kitchen (or probably any kitchen).
Let's take the lowest energy x-ray out there and see what it would take. X-rays range in frequency from roughly $3 \times 10^{16}$ to $3\times 10^{19}$ hertz. The energy of one photon of 30 petahertz $\left(3\times 10^{16}\ \mathrm{Hz}\right)$ radiation is:
$$E=h\nu = \left(6.626\times 10^{-34}\ \mathrm{J\cdot s}\right)\left(3\times 10^{16}\ \mathrm{s^{-1}}\right) = 1.988 \times 10^{-17}\ \mathrm{J}$$
This is not a lot of energy! However, a single photon is boring. Let's consider a mole of photons. This will also ease comparison with other phenomena, whose energies are listed per mole of events.
$$1.988 \times 10^{-17}\ \mathrm{J} \times 6.022\times 10^{23}\ \mathrm{mol^{-1}}=1.197\times 10^7\ \mathrm{J\cdot mol^{-1}}$$
In theory, if you could pump that much energy into something, you should get some high energy photons out. In practice, it does not work that way. Other stuff happens first. To simplify our example, let's just consider 1 mole of water (18.0 grams) and heat it up. The fate of basically any other matter will be the same, but the energy required will vary a bit.
First, adding energy heats the water. If we start at room temperature $\left(20\ ^\circ\mathrm{C}\right)$, it takes $80\ ^\circ\mathrm{C}\times 18\ \mathrm{g}\times 4.184\ \mathrm{J\cdot g^{-1}\cdot ^\circ C^{-1}}=6025\ \mathrm{J}$ to heat that water to boiling. It takes another 40.66 kJ to convert the water into gas. Neither of these puts a big dent in our energy budget. It takes further energy to heat the water vapor again, but let's see how far we need to take it.
Once we get enough energy into our sample of water, the molecules start to fall apart.
$$\ce{H2O(g) -> 2H(g) + O(g)} \qquad \Delta H^\circ =+920\ \mathrm{kJ\cdot mol^{-1}},\quad \Delta S^\circ =0.202\ \mathrm{kJ\cdot mol^{-1}\cdot K^{-1}}$$
By setting $\Delta G=0$ at equilibrium, we can solve for the temperature at which this reaction becomes spontaneous:
$$T=\dfrac{\Delta H}{\Delta S}=\dfrac{+920\ \mathrm{kJ\cdot mol^{-1}}}{0.202\ \mathrm{kJ\cdot mol^{-1}\cdot K^{-1}}}\approx 4554\ \mathrm{K}$$
We need to heat our water vapor up an additional 4181 K, which takes $18\ \mathrm{g}\times 1.996\ \mathrm{J\cdot g^{-1}\cdot K^{-1}}\times 4181\ \mathrm{K}\approx 1.50\times 10^5\ \mathrm{J}$.
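A quick numeric sanity check of the running budget (constant heat capacities assumed throughout, which is itself a rough approximation):

```python
# energy budget for 1 mol (18 g) of water starting at 20 C, all in joules
heat_to_boil = 18.0 * 4.184 * 80          # ~6.0e3 J, 20 C -> 100 C
vaporize     = 40660.0                    # ~4.1e4 J, liquid -> gas
heat_vapor   = 18.0 * 1.996 * 4181        # ~1.5e5 J, 373 K -> ~4554 K
subtotal = heat_to_boil + vaporize + heat_vapor
atomize  = 920e3                          # H2O(g) -> 2H(g) + O(g)
print(f"heating only:     {subtotal:.3e} J")           # ~2.0e5 J
print(f"with atomization: {subtotal + atomize:.3e} J") # ~1.1e6 J
# still an order of magnitude short of ~1.2e7 J for a mole of x-ray photons
```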
So, we now have pumped nearly 200,000 J into our water sample just in heating, plus another 920 kJ to actually atomize it, and brought it to roughly 4550 K. We are now approaching the temperature of the outer layers of the sun! Surely we have enough energy at this temperature to produce x-rays. Nope. At these temperatures, we produce essentially no x-rays. Most of the radiation is in the visible, UV, and IR (think about what we get from the sun). Below is a plot of black-body radiation as a function of temperature (image by Wikipedia user Darth Kule and released into the public domain):
Okay, so we are far beyond the reality of what can happen in a conventional oven (or almost any reasonable heat source used for food). At any given temperature, we can use the Planck law to calculate the power output of x-rays. We can also do this at some normal temperatures and for gamma rays. This model is a little goofy, since food is not a black body, but we will at least calculate the maximum x-ray and gamma-ray output.
Rather than grinding through all the maths, I'll just put in a table of some temperatures and watts. 1 watt is not a lot of power. Most lightbulbs produce light in the kilowatts.
$$\begin{array}{|c|c|c|c|}\hline \mathrm{T\ (K)} & \mathrm{P_{x-ray}\ (W)} & \mathrm{P_{gamma}\ (W)} & \mathrm{notes} \\ \hline 373 & \approx 0 & \approx 0 & \text{boiling point of water} \\ \hline 550 & \approx 0 & \approx 0 & \text{approximate common highest temperature on residential ovens}\\ \hline 700-800 & \approx 0 & \approx 0 & \text{temperature range for wood-fired ovens, tandoors, etc.}\\ \hline 5770 & 4.26\times 10^{-129} & \approx 0 & \text{temperature of the photosphere of the sun}\\ \hline 1.57\times 10^7 & 10.4 & 7.87\times 10^{-54} & \text{estimated temperature of the center of the sun} \\ \hline \end{array}$$
So, if you could heat your food to the temperature of the sun, it would produce minuscule x-ray radiation. It would also no longer resemble food.
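To put a number on "minuscule", one can integrate the Planck law over the x-ray band and compare with the total black-body emission. mpmath is used here because the exponentials underflow ordinary floats; the band edges are the rough ones quoted above, and the absolute wattages in the table depend on an emitting area the model leaves unspecified, so a dimensionless fraction is the safer quantity to compute.

```python
import mpmath as mp

h, c, kB = mp.mpf('6.626e-34'), mp.mpf('2.998e8'), mp.mpf('1.381e-23')

def planck(nu, T):
    """Spectral radiance B_nu(T) of a black body."""
    return 2 * h * nu**3 / c**2 / mp.expm1(h * nu / (kB * T))

def xray_fraction(T):
    """Fraction of total black-body emission landing in ~3e16..3e19 Hz."""
    lo, hi = mp.mpf('3e16'), mp.mpf('3e19')
    band = mp.quad(lambda nu: planck(nu, T), [lo, 10 * lo, 100 * lo, hi])
    total = 2 * mp.pi**4 * (kB * T)**4 / (15 * h**3 * c**2)  # exact integral
    return band / total

for T in ['550', '5770', '1.57e7']:
    print(T, 'K ->', mp.nstr(xray_fraction(mp.mpf(T)), 3))
```

At oven temperatures the fraction is so small that arbitrary-precision arithmetic is needed just to print it; only at stellar-core temperatures does the x-ray band carry an appreciable share.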
Mithoron
Ben NorrisBen Norris
$\begingroup$ "This model is a little goofy, since food is not a black body" It'll be pretty damned black at the kinda temperatures you're talking about! *baddum-tsh* $\endgroup$ – David Richerby Jun 11 '16 at 21:36
$\begingroup$ @DavidRicherby not anymore. Hydrogen as well as oxygen are transparent, and any hydrogen-oxygen plasma will be too dilute to significantly interact with photons as well. $\endgroup$ – John Dvorak Jun 11 '16 at 21:50
$\begingroup$ I think that, very strictly speaking, any black body above absolute zero has a chance of emitting an x-ray/gamma photon, because the Planck distribution is never zero for any finite wavelength. Of course, due to exponential suppression, the probability of emitting such a photon is ridiculously small. I believe I have read that a black body at $\mathrm{37\ ^oC}$ can be expected to emit a visible photon every 1000 years. $\endgroup$ – Nicolau Saker Neto Jun 11 '16 at 22:13
$\begingroup$ "Most lightbulbs produce light in the kilowatts." - What kind of lights do you have in your house? o_O $\endgroup$ – marcelm Jun 11 '16 at 22:26
$\begingroup$ Does the table at the end list power emitted per unit mass? Unit area? I first understood that whole center of the sun produces 10 W of x-rays, which doesn't seem right. :P $\endgroup$ – ntoskrnl Jun 12 '16 at 14:32
It has nothing to do with what you were going for, but there is a small, but non-trivial amount of x- and gamma-ray output for most food and so the answer is trivially "yes".
In particular any food containing potassium will have the usual admixture of K-40 (with its 1.3 and 1.5 MeV gamma lines).
When I worked in a lab with a low-background, high-sensitivity germanium detector I used to try to bring a banana in my lunch on days we were expecting a visitor so I could demonstrate the detector without having to do any paperwork concerning possible exposure of the visitor to radiation: just put the banana in a clean ziplock bag, put that in the detector, start the DAQ, and watch a nice peak grow on the screen. Broccoli works just as well, of course, but it's broccoli.
But of course none of this has anything to do with heating the food.
dmckee --- ex-moderator kittendmckee --- ex-moderator kitten
$\begingroup$ This is a for-sure reason to say that our food emits high-energy EMR. $\endgroup$ – user314901 Jul 3 '16 at 4:33
Not even close! Your food would likely be vaporized, and leave an awful mark on your kitchen table, before it could contain enough energy to be producing x-rays.
When I was a youth, I used to do a lot of 'microwave experiments', as in domestic microwave ovens. I was known for buying a used microwave from a thrift shop for around $10 and using it to produce 'microwave plasma' (essentially ionized carbon, analogous to the way neon gas ionizes if you stick a neon-filled tube in the microwave and turn it on).
During the course of this, I witnessed a lot of different materials arcing, melting, vaporizing, or ionizing in domestic microwave ovens. At some point, probably after cooking up filamented (incandescent) light bulbs, I started thinking and also wondered about what kind of energies produce x-rays. Could I be irradiating myself with harmful rays or high-energy particles emitted off all these arcs? Some of the lightbulbs glowed a crazy green color right before the end of their life, and I remembered reading that old-timey x-ray bulbs produced a distinct green light when producing x-rays.
So I looked into it, and it turns out... There is no way that my 1000 watt domestic microwave oven was producing x-rays, even if you consider that most lightbulbs are evacuated (under a mild vacuum).
Apparently, if I wanted to start creating x-rays in my own back yard (I didn't), I was going to need at least 2 things:
More power. A lot more, namely voltage. I don't know what kind of voltage was being induced during arcing, but x-ray tubes range from 30 to 150 kilovolts (kV)!
A much stronger vacuum. X-ray tubes have what's called a 'high vacuum', which is about 10^-4 Pa (roughly 10^-9 atm). In comparison, standard incandescent lightbulbs sit at around 70 kPa (0.7 atm), or 'low vacuum'.
Now people will love to start in with 'Well technically, it could', point out that we can calculate this, and proceed to work out a value that requires hundreds of decimal places before you see a significant digit. But just because the equations never formally hit a limit and break down does not mean they describe anything physically real at those scales.
Other quick interesting facts about microwaves:
As stated above, black carbon will ionize in the microwave, such as that from burnt wood or black toner from copy machines.
While it is true that when the magnetron tube is on, the radio waves are at 2.45 GHz, the magnetron itself turns on and off at 60 cycles per second. This explains the noise that the plasma makes: when the magnetron tube is on, the plasma expands; when it is switched off, the plasma shrinks. The plasma expanding and contracting at 60 Hz is why it makes a loud, 60 Hz noise.
Glass is not very good at absorbing microwaves... until it's molten. Once glass becomes molten, it will absorb microwaves appreciably, causing it not only to stay molten but for the molten region to grow, eventually turning all the glass molten. This is also how some glass manufacturers keep their glass molten more efficiently, once it is already melted.
Microwave-assisted chemical synthesis is the new 'green' way to synthesize, requiring little to no solvent (sometimes allowing two homogenized, dry solids to react), and giving higher yields, lower activation energies or no catalysts, and in only a fraction of the time to boot.
bwDraco
Adam WhiteAdam White
$\begingroup$ Re, "More power... xray tubes range from 30 to 150 kilovolts" Voltage is not power, and you can generate harmful levels of X radiation with equipment that uses no more power than your microwave oven. $\endgroup$ – Solomon Slow Jun 13 '16 at 18:33
$\begingroup$ Re, "Xray tubes have... In comparison, standard incandescent lightbulbs..." The operating principle of an x-ray tube is completely different from the operating principle of an incandescent light bulb. A light bulb works by heating a tiny bit of wire until it is so hot that it's black-body radiation is visible. An X-ray tube works by forming a beam of high-energy electrons, and colliding them with a metal target to produce Bremsstrahlung radiation. The high vacuum is needed for the electron gun in the X-ray tube. A light bulb doesn't need vacuum, it just needs an inert gass fill. $\endgroup$ – Solomon Slow Jun 13 '16 at 18:43
$\begingroup$ @Adam White, would you mind explaining more why the equation breaks down in describing real phenomena? "Just because the equations don't hit a limit and break down at some point, does not mean its ability to describe something physically real doesn't". $\endgroup$ – user314901 Jul 3 '16 at 4:41
Czechoslovak Mathematical Journal
Chen, Zong-Xuan; Shon, Kwang Ho
Properties of differences of meromorphic functions. (English). Czechoslovak Mathematical Journal, vol. 61 (2011), issue 1, pp. 213-224
MSC: 30C15, 30D35, 39A10, 39B32 | MR 2782770 | Zbl 1224.30156 | DOI: 10.1007/s10587-011-0008-z
meromorphic function; difference; divided difference; zero; fixed point
Let $f$ be a transcendental meromorphic function. We propose a number of results concerning zeros and fixed points of the difference $g(z)=f(z+c)-f(z)$ and the divided difference $g(z)/f(z)$.
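As a concrete instance (illustrative only; the choice $f=\Gamma$, $c=1$ and the starting point are not from the paper): here $g(z)=\Gamma(z+1)-\Gamma(z)=(z-1)\Gamma(z)$, so $z=1$ is a zero of $g$, and a real fixed point of $g$ can be located numerically.

```python
import mpmath as mp

c = 1
f = mp.gamma
g = lambda z: f(z + c) - f(z)      # the difference g(z) = f(z+c) - f(z)

print(g(1))                         # Gamma(2) - Gamma(1) = 0: a zero of g
fixed = mp.findroot(lambda z: g(z) - z, 2.5)
print(fixed, g(fixed))              # a fixed point g(z) = z, lying in (2, 3)
```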
December 2015, 8(4): 765-775. doi: 10.3934/krm.2015.8.765
Strong solutions to compressible barotropic viscoelastic flow with vacuum
Tong Tang 1 and Yongfu Wang 2
Department of Mathematics, College of Sciences, Hohai University, Nanjing 210098, China
Department of Mathematics, Sichuan University, Chengdu, 610064, China
Received November 2014 Revised May 2015 Published July 2015
We consider strong solutions to compressible barotropic viscoelastic flow in a domain $\Omega\subset\mathbb{R}^{3}$ and prove the existence of a unique local strong solution for all initial data satisfying a compatibility condition. The initial density need not be positive and may vanish in an open set. Inspired by the work of Kato and Lax, we use the contraction mapping principle to obtain the result.
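The contraction mapping principle itself is easy to demonstrate in a toy setting; this is of course a one-dimensional caricature, not the paper's function-space construction.

```python
import math

def banach_iterate(T, x0, tol=1e-12, max_iter=200):
    """Fixed-point iteration for a contraction T; converges geometrically
    at a rate given by the Lipschitz constant of T."""
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence; T may not be a contraction here")

# T(x) = cos(x) is a contraction on [0, 1] (|T'| = |sin x| <= sin 1 < 1)
print(banach_iterate(math.cos, 0.5))   # Dottie number, ~0.739085
```

In the paper's setting the same principle is run in a suitable solution space, with the compatibility condition on the initial data ensuring the iteration map is well defined near vacuum.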
Keywords: Existence, strong solutions, compressible viscoelastic flow.
Mathematics Subject Classification: Primary: 35Q35, 76A10, 76N1.
Citation: Tong Tang, Yongfu Wang. Strong solutions to compressible barotropic viscoelastic flow with vacuum. Kinetic & Related Models, 2015, 8 (4) : 765-775. doi: 10.3934/krm.2015.8.765
Y. Cho, H. J. Choe and H. Kim, Unique solvability of the initial boundary value problems for compressible viscous fluids, J. Math. Pures Appl., 83 (2004), 243. doi: 10.1016/j.matpur.2003.11.004.
Y. M. Chu, X. G. Liu and X. Liu, Strong solutions to the compressible liquid crystal system, Pacific J. Math., 257 (2012), 37. doi: 10.2140/pjm.2012.257.37.
M. Hieber, Y. Naito and Y. Shibata, Global existence results for Oldroyd-B fluids in exterior domains, J. Differential Equations, 252 (2012), 2617. doi: 10.1016/j.jde.2011.09.001.
X. P. Hu and D. H. Wang, Local strong solution to the compressible viscoelastic flow with large data, J. Differential Equations, 249 (2010), 1179. doi: 10.1016/j.jde.2010.03.027.
X. P. Hu and D. H. Wang, Global existence for the multi-dimensional compressible viscoelastic flows, J. Differential Equations, 250 (2011), 1200. doi: 10.1016/j.jde.2010.10.017.
X. P. Hu and D. H. Wang, Strong solutions to the three-dimensional compressible viscoelastic fluids, J. Differential Equations, 252 (2012), 4027. doi: 10.1016/j.jde.2011.11.021.
C. Guillopé and J. C. Saut, Existence results for the flow of viscoelastic fluids with a differential constitutive law, Nonlinear Anal., 15 (1990), 849. doi: 10.1016/0362-546X(90)90097-Z.
X. D. Huang, J. Li and Z. P. Xin, Global well-posedness of classical solutions with large oscillations and vacuum to the three-dimensional isentropic compressible Navier-Stokes equations, Comm. Pure Appl. Math., 65 (2012), 549. doi: 10.1002/cpa.21382.
T. Kato, The Cauchy problem for quasi-linear symmetric hyperbolic systems, Arch. Rational Mech. Anal., 58 (1975), 181. doi: 10.1007/BF00280740.
R. Kupferman, C. Mangoubi and E. S. Titi, A Beale-Kato-Majda breakdown criterion for an Oldroyd-B fluid in the creeping flow regime, Commun. Math. Sci., 6 (2008), 235. doi: 10.4310/CMS.2008.v6.n1.a12.
P. D. Lax, Hyperbolic Systems of Conservation Laws and the Mathematical Theory of Shock Waves, Conference Board of the Mathematical Sciences Regional Conference Series in Applied Mathematics, (1973).
Z. Lei, C. Liu and Y. Zhou, Global solutions for incompressible viscoelastic fluids, Arch. Ration. Mech. Anal., 188 (2008), 371. doi: 10.1007/s00205-007-0089-x.
Z. Lei, C. Liu and Y. Zhou, Global existence for a 2D incompressible viscoelastic model with small strain, Commun. Math. Sci., 5 (2007), 595. doi: 10.4310/CMS.2007.v5.n3.a5.
F. Lin, C. Liu and P. Zhang, On hydrodynamics of viscoelastic fluids, Comm. Pure Appl. Math., 58 (2005), 1437. doi: 10.1002/cpa.20074.
F. Lin and P. Zhang, On the initial-boundary value problem of the incompressible viscoelastic fluid system, Comm. Pure Appl. Math., 61 (2008), 539. doi: 10.1002/cpa.20219.
P. L. Lions, Mathematical Topics in Fluid Mechanics, Vol. 2: Compressible Models, (1998).
A. J. Majda, Compressible Fluid Flow and Systems of Conservation Laws in Several Space Variables, Applied Mathematical Sciences, (1984). doi: 10.1007/978-1-4612-1116-7.
A. Matsumura and T. Nishida, The initial value problem for the equations of motion of viscous and heat-conductive gases, J. Math. Kyoto Univ., 20 (1980), 67.
A. Matsumura and T. Nishida, Initial-boundary value problems for the equations of motion of compressible viscous and heat-conductive fluids, Comm. Math. Phys., 89 (1983), 445. doi: 10.1007/BF01214738.
J. G. Oldroyd, On the formation of rheological equations of state, Proc. R. Soc. Lond. Ser. A, 200 (1950), 523. doi: 10.1098/rspa.1950.0035.
J. G. Oldroyd, Non-Newtonian effects in steady motion of some idealized elastico-viscous liquids, Proc. R. Soc. Lond. Ser. A, 245 (1958), 278. doi: 10.1098/rspa.1958.0083.
J. Z. Qian and Z. F. Zhang, Global well-posedness for compressible viscoelastic fluids near equilibrium, Arch. Ration. Mech. Anal., 198 (2010), 835. doi: 10.1007/s00205-010-0351-5.
J. Z. Qian, Initial boundary value problems for the compressible viscoelastic fluid, J. Differential Equations, 250 (2011), 848. doi: 10.1016/j.jde.2010.07.026.
R. Salvi and I. Straškraba, Global existence for viscous compressible fluids and their behavior as $t\rightarrow\infty$, J. Fac. Sci. Univ. Tokyo Sect. IA Math., 40 (1993), 17.
Nanophotonic Pockels modulators on a silicon nitride platform
Koen Alexander ORCID: orcid.org/0000-0002-5048-749X1,2,
John P. George1,2,3,
Jochem Verbist1,2,4,
Kristiaan Neyts ORCID: orcid.org/0000-0001-5551-97722,3,
Bart Kuyken1,2,
Dries Van Thourhout ORCID: orcid.org/0000-0003-0111-431X1,2 &
Jeroen Beeckman ORCID: orcid.org/0000-0002-0711-24652,3
Nature Communications volume 9, Article number: 3444 (2018)
Integrated optics
Nanoscale devices
Silicon nitride (SiN) is emerging as a competitive platform for CMOS-compatible integrated photonics. However, active devices such as modulators are scarce and still lack in performance. Ideally, such a modulator should have a high bandwidth, good modulation efficiency, low loss, and cover a wide wavelength range. Here, we demonstrate the first electro-optic modulators based on ferroelectric lead zirconate titanate (PZT) films on SiN, in both the O-band and C-band. Bias-free operation, bandwidths beyond 33 GHz and data rates of 40 Gbps are shown, as well as low propagation losses (α ≈ 1 dB cm−1). A half-wave voltage-length product of 3.2 V cm is measured. Simulations indicate that further improvement is possible. This approach offers a much-anticipated route towards high-performance phase modulators on SiN.
The exponential increase in data traffic requires high-capacity optical links. A fast, compact, energy-efficient, broadband optical modulator is a vital part of such a system. Modulators integrated with silicon (Si) or silicon nitride (SiN) platforms are especially promising, as they leverage complementary-metal-oxide-semiconductor (CMOS) fabrication techniques. This enables high-yield, low-cost, and scalable photonics, and a route towards co-integration with electronics1. SiN-based integrated platforms offer some added advantages compared to silicon-on-insulator, such as a broader transparency range2, a lower propagation loss3,4, significantly lower nonlinear losses2,5, and a much smaller thermo-optic coefficient2. Therefore, phase modulators on SiN in particular would open new doors in other fields as well, such as nonlinear and quantum optics5,6,7, microwave photonics8, optical phased arrays for LIDAR or free-space communications9, and more.
State-of-the-art silicon modulators rely on phase modulation through free carrier plasma dispersion in p–n10, p–i–n11, and MOS12 junctions. Despite being relatively fast and efficient, these devices suffer from spurious amplitude modulation and high insertion losses. Alternative approaches are based on heterogeneous integration with materials such as III–V semiconductors13,14, graphene15,16, electro-optic organic layers17, germanium18, or epitaxial BaTiO3 (BTO)19,20,21.
Most of these solutions are not viable using SiN. Due to its insulating nature, plasma dispersion effects and many approaches based on co-integration with III–V semiconductors, graphene, and organics, which rely on the conductivity of doped silicon waveguides, cannot be used. The inherent nature of deposited SiN further excludes solutions using epitaxial integration. Finally, SiN is centrosymmetric, hampering Pockels-based modulation in the waveguide core itself, in contrast to aluminum nitride22, or lithium niobate23. Nonetheless, modulators on SiN exist. Using double-layer graphene, Phare et al.24 achieved high-speed electro-absorption modulation, and using piezoelectric lead zirconate titanate (PZT) thin films, phase modulators based on stress-optic effects25, and geometric deformation26, have been demonstrated, albeit with sub-MHz electrical bandwidth.
In this work, we use a novel approach for co-integration of thin-film PZT on SiN27. An intermediate low-loss lanthanide-based layer is used as a seed for the PZT deposition, as opposed to the highly absorbing Pt-based seed layers used conventionally25,26, enabling direct deposition of the layer on top of SiN waveguides fabricated using front-end-of-line CMOS processes.
We demonstrate efficient high-speed phase modulators on a SiN platform, with bias-free operation, modulation bandwidths exceeding 33 GHz in both the O-band and C-band, and data rates up to 40 Gbps. We measure propagation losses down to 1 dB cm−1 and half-wave voltage-length products VπL down to 3.2 Vcm for the PZT-on-SiN waveguides. Moreover, based on simulations we argue that the VπL can be considerably reduced by optimizing the waveguide cross-section, without significantly increasing the propagation loss. Hence, the platform provides an excellent trade-off between optical losses and modulation efficiency. According to simulations, the product VπLα can be as low as 2 VdB in optimized structures. Pure phase modulation also enables complex encoding schemes (such as QPSK), which are not easily achievable with absorption modulation. These results, especially in terms of the achieved modulation bandwidths, strongly improve upon what is currently possible in SiN25,26. In terms of VπLα, this platform can furthermore improve upon carrier dispersion modulators in silicon-on-insulator, which suffer from inherent carrier-induced losses absent in Pockels modulators28.
Device design and fabrication
The waveguides are patterned using 193 nm deep ultraviolet lithography in a 330-nm-thick layer of low pressure chemical vapor deposited SiN on a 3.3-μm-thick buried oxide layer, in a CMOS pilot line. Subsequently, plasma-enhanced chemical vapor deposited SiO2 (thickness ≈1 μm) is deposited over the devices and planarized, either using a combination of dry and wet etching, or by chemical–mechanical polishing (CMP). This step is performed so that the top surface of the SiN waveguide and the surrounding oxide are coplanar. The PZT films are deposited by chemical solution deposition (CSD), using a lanthanide-based intermediate layer (see Methods and ref. 27). Finally, Ti/Au electrical contacts are patterned in the vicinity of the waveguides using photolithography, thermal evaporation, and lift-off. For the samples planarized through CMP, propagation losses of 1 dB cm−1 are measured on PZT-covered waveguides without metallic contacts (see Supplementary Note 2).
Figure 1a, b show the top view and waveguide cross-section of a C-band ring modulator, and for images of the other fabricated modulators (O-band ring, C-band Mach–Zehnder), see Supplementary Note 1. Figure 1c shows a schematic of the cross-section. An electric field is applied through in-plane electrodes, changing the refractive index in the PZT and hence the effective index of the waveguide mode. The PZT thin films exhibit a higher refractive index (n ≈ 2.3) than SiN (n ≈ 2), so a significant portion of the optical mode is confined in the PZT. A grating coupler is used for incoupling and outcoupling, into the fundamental quasi-transverse electric (quasi-TE) optical mode. The combined loss of a grating coupler and the transition between a bare and PZT-covered waveguide section is ≈12 dB at the optimum, with a 3 dB bandwidth of ≈90 nm. However, this is currently not optimized and can still be improved by design.
Design and static response of a C-band ring modulator. a Top view of a PZT-on-SiN ring modulator. b Cross-section of a PZT-covered SiN waveguide. The image contrast was enhanced for clarity. c Schematic of the PZT-covered SiN waveguide. The fundamental TE optical mode is plotted in red. The quiver plot shows the applied electric field distribution between the electrodes. PZT thickness, waveguide width, and gap between the electrodes are, respectively, 150 nm, 1200 nm, and 4 μm. d Normalized transmission spectrum of a C-band ring modulator. e Transmission spectra for different DC voltages. f Resonance wavelength shift versus voltage applied across the PZT, including a linear fit
DC characterization and poling stability
Figure 1d shows the transmission spectrum of a C-band (1530–1565 nm) ring modulator. The ring has a loaded Q factor of 2230 and a free spectral range ΔλFSR ≈1.7 nm. The ring radius, the length of the phase shifter L, and the electrode spacing are, respectively, 100, 524, and 4.4 μm. The relatively low Q factor is caused by sub-optimal alignment of the electrodes.
After deposition, the PZT crystallites have one crystal plane parallel to the substrate, but no preferential orientation in the chip's plane. To obtain a significant electro-optic response for the quasi-TE optical mode, a poling step is performed by applying 60–80 V (≈150 kV cm−1) for 1 h at room temperature, followed by several hours of stabilization time.
The transmission spectrum is measured for different direct current (DC) voltages applied across the PZT layer (Fig. 1e). The voltage-induced index change shifts the resonance. In Fig. 1f, the resonance wavelength shift is plotted as a function of voltage, and the slope gives the tuning efficiency Δλ/ΔV ≈ −13.4 pm V−1. From this, we estimate the half-wave voltage-length product to be VπL = |LλFSRΔV/(2Δλ)| ≈ 3.3 Vcm. Through simulation of the optical mode and DC electric field, the effective electro-optic coefficient reff of the PZT layer is estimated to be 61 pm V−1 (see Methods). Measurements on other modulator structures yield consistent values for reff (67 and 70 pm V−1 for, respectively, the C-band Mach–Zehnder and O-band ring), and the smallest VπL value (≈3.2 V cm) was measured on an O-band ring (Supplementary Note 1).
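As a quick sanity check, the arithmetic behind this estimate can be reproduced in a few lines of Python (a sketch of ours, not part of the paper; the variable names are arbitrary):

# Estimate V_pi*L of the C-band ring from the measured tuning efficiency.
# V_pi*L = |L * FSR / (2 * dlambda/dV)|: shifting the resonance by half a
# free spectral range corresponds to a pi phase shift over the length L.
L_cm = 524e-4            # phase-shifter length: 524 um, expressed in cm
fsr_pm = 1.7e3           # free spectral range: 1.7 nm = 1700 pm
tuning_pm_per_V = -13.4  # measured resonance shift per volt

v_pi_L = abs(L_cm * fsr_pm / (2 * tuning_pm_per_V))  # in V*cm
print(f"V_pi*L ~ {v_pi_L:.2f} V*cm")                 # ~3.3 V*cm, as quoted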
The PZT was poled prior to the measurements, after which no bias voltage was used. To demonstrate longer term stability of the poling, the DC tuning efficiency was periodically measured (sweeping the voltage over [−2, +2] V) on a C-band ring over a total time of almost 3 days. In Fig. 2, the absolute value of the resulting tuning efficiency Δλ/ΔV is plotted as a function of time, decaying towards a stable value of about 13.5 pm V−1 over the course of several hours. The poling stabilized and there have been no indications of decay over much longer periods of time, hence modulation is possible without a constant bias, as opposed to similar materials like BTO19,20,21.
Poling stability of the electro-optic film. Tuning efficiency (C-band ring) as a function of time after poling. The axis on the right shows the estimated corresponding VπL
High-speed characterization
For many applications, high-speed operation is essential. In Fig. 3a the setup used for high-speed characterization is shown. In Fig. 3b, the \(\left| {S_{21}} \right|\) measurement for different modulators is plotted. The measured 3 dB bandwidths of both rings are around 33 GHz, and the Mach–Zehnder has a bandwidth of 27 GHz. The bandwidths are not limited by the intrinsic material response of PZT, but by device design and/or characterization equipment, as the dominating contributions to the Pockels effect are expected to have a bandwidth which is almost two orders of magnitude larger29,30. We furthermore demonstrate that our platform can be used for high-speed data transmission. In Fig. 3c, eye diagrams are plotted for different bitrates, a non-return-to-zero (NRZ) binary sequence (4.2 V peak-to-peak) is used. The eye remains open up until about 40 Gbps, limited by the arbitrary waveform generator (AWG) (25 GHz bandwidth), rather than by the modulator itself. The bit error rates were estimated from the measured eye diagrams31, and are below the hard-decision forward error coding limit of 3.8 × 10−3 32,33 for bitrates up to 40 Gbps (see Supplementary Note 3). At 10 Gbps, an extinction ratio of 3.1 dB is measured (see Supplementary Note 4).
High-speed measurements. a Sketch of the setup used for small signal measurements (solid path in the switches) and for the eye diagram measurements (dashed path). VNA: vector network analyzer, AWG: arbitrary waveform generator, OTF: optical tunable filter. b Electro-optic small signal (\(\left| {S_{21}} \right|\) parameter) measurement of several modulators. c Eye diagrams of a C-band ring modulator, measured with a non-return-to-zero scheme (2^9 − 1 pseudorandom binary sequence) and a peak-to-peak drive voltage of 4.2 V
Device optimization
The presented devices were not fully optimized in terms of electro-optic modulation parameters. Primarily the PZT thickness could be increased. Sub-optimal thicknesses were used to reduce bend losses and coupling losses into PZT-covered waveguide sections. These limitations can be alleviated by device design. In Fig. 4, simulation results of the most important figures of merit are plotted as a function of the PZT layer thickness and of the electrode spacing. Waveguide height, width, and the wavelength are, respectively, 300 nm, 1.2 μm, and 1550 nm. The waveguide propagation loss α (Fig. 4a) is calculated as the sum of a contribution caused by the electrodes, and a constant intrinsic propagation loss of 1 dB cm−1, a realistic value if the samples are planarized using CMP (see Supplementary Note 2). The half-wave voltage-length product VπL (see Methods) and the product VπLα are shown in Fig. 4b, c, respectively. VπL represents a trade-off between drive voltage and device length; VπLα also takes loss into account, and is arguably more important for many applications26. The loss increases with decreasing electrode spacing, but also with increasing PZT thickness, since the mode expands laterally. Due to the increasing overlap between the optical mode and the PZT, VπL decreases with increasing thickness. VπL also increases with increasing electrode spacing. An optimization of the waveguide width is given in Supplementary Note 5. From Fig. 4b it is clear that VπL can go well below 2 Vcm. The interplay between these different dependencies can be seen in the plot of VπLα (Fig. 4c), which has an optimum for which VπLα ≈ 2 VdB.
Numerical optimization of a PZT-on-SiN phase modulator. Simulation of the waveguide loss α (a), the half-wave voltage-length product VπL (b), and their product VπLα (c) of a PZT-covered SiN waveguide modulator of the type shown in Fig. 1c, for a wavelength of 1550 nm. Waveguide height, width, and intermediate layer thickness are, respectively, 300 nm, 1.2 μm, and 20 nm. The intrinsic waveguide loss (in the absence of electrodes) was taken to be 1 dB cm−1, and the effective electro-optic Pockels coefficient was 67 pm V−1. The circles show the approximate parameters used in this work, and the diamonds show the optimal point with respect to VπLα
To conclude, we have demonstrated a novel platform for efficient, optically broadband, high-speed, nanophotonic electro-optic modulators. Using a relatively simple chemical solution deposition procedure, we incorporated a thin film of strongly electro-optic PZT onto a SiN-based photonic chip. We demonstrated stable poling of the electro-optic material, and efficient and high-speed modulation, in the absence of a bias voltage. O-band and C-band operation was shown; however, we expect the platform to be operational into the visible wavelength range (>450 nm)2,34,35. From simulations it is clear that the devices characterized in this paper do not yet represent the limitations of the platform and VπLα ≈ 2 V dB is achievable. Moreover, our approach is unique in its versatility, as the PZT film can be deposited on any sufficiently flat surface, enabling the incorporation of the electro-optic films onto other guided-wave platforms.
PZT deposition and patterning
While the details of the lanthanide-assisted deposition procedure have been published elsewhere27, a short summary is given here. Intermediate seed layers based on lanthanides are deposited prior to the PZT deposition. The intermediate layer acts as a barrier layer to prevent the inter-diffusion of elements and as a seed layer providing the lattice match to grow highly oriented thin films. A critical thickness of the intermediate layer needs to be maintained (>5 nm) to avoid diffusion and secondary phase formations. However, on samples with considerable surface topology, thicker intermediate layers are necessary to provide good step coverage and to avoid any issues associated with the conformity in spin-coating. On our samples planarized through etching, step heights between oxide and SiN waveguides varied. We typically used an intermediate layer of thickness ≈24 nm to avoid issues. Both the intermediate layer and the PZT thin films are deposited by repeating the spin-coating and annealing procedure, which allows easy control of the film thickness. The PZT layer is deposited and annealed at 620 °C for 15 min in a tube furnace under an oxygen ambient. This CSD method, also called sol–gel, provides a cheap and flexible alternative to achieve high-quality stoichiometric PZT thin films regardless of the substrate material. A reactive ion etching procedure based on SF6 chemistry is used to pattern the PZT layer. The PZT film was removed selectively over the grating couplers used for the optical measurements.
High-speed measurements
The small-signal response measurements were performed using an Agilent PNA-X N5247A network analyzer and a high-speed photodiode (Discovery Semiconductors DSC10H Optical Receiver). For the eye diagram measurements, an arbitrary waveform generator (Keysight AWG M8195A) and radio frequency (RF) amplifier (SHF S807) are used to apply a pseudorandom NRZ binary sequence, the modulator output is measured with a Keysight 86100D oscilloscope with 50 GHz bandwidth and Discovery Semiconductors DSC-R409 PIN-TIA Optical Receiver.
Calculation of the electro-optic parameters
Using COMSOL Multiphysics®, several parameters can be calculated that strongly influence the performance of the modulators. To obtain efficient phase modulation, it is essential to maximize the overlap between the optical mode and the RF electrical signal, quantified by the electro-optic overlap integral36,
$${\Gamma} = \frac{g}{V}\frac{\varepsilon _0 c n_{\mathrm{PZT}}{\int\!\!\!\int}_{\mathrm{PZT}} E_x^{\mathrm{e}}\left| {E_x^{\mathrm{op}}} \right|^2{\mathrm{d}}x{\mathrm{d}}y}{{\int\!\!\!\int} {\mathrm{Re}}\left( {{\bf{E}}^{{\mathrm{op}}} \times {\bf{H}}^{{\mathrm{op}}^ \ast }} \right) \cdot \widehat {\bf{e}}_z{\mathrm{d}}x{\mathrm{d}}y},$$
where g is the spacing between the electrodes, V the applied voltage, ε0 the vacuum permittivity, c the speed of light in vacuum, and nPZT the refractive index of PZT. \(E_x^{\mathrm{e}}\) is the in-plane (x-)component of the RF electric field, and \(E_x^{{\mathrm{op}}}\) represents the in-plane transversal component of the optical field. When used as a phase shifter, an important figure of merit is the half-wave voltage-length product VπL. This product relates to the electro-optic coefficient reff of the PZT films and to Γ36,
$$V_\pi L = \frac{{\lambda g}}{{n_{{\mathrm{PZT}}}^3{\it{\Gamma }}r_{{\mathrm{eff}}}}},$$
where λ is the free-space wavelength. Another important parameter is the propagation loss of the optical mode, consisting of an intrinsic contribution (scattering, material loss in the PZT, intermediate layer, nitride and oxide) and a contribution caused by the vicinity of the electrical contacts. The former can be estimated based on cut-back measurements on unmetalized waveguides (see Supplementary Note 2), and the latter can be numerically calculated.
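To make the relation above concrete, a small numerical sketch (ours; the overlap value below is an assumed illustrative number, not one reported in the paper) evaluates the VπL expression with parameters quoted in the text:

# Evaluate V_pi*L = lambda * g / (n_PZT^3 * Gamma * r_eff).
lam = 1.55e-6    # free-space wavelength (m)
g = 4e-6         # electrode spacing (m)
n_pzt = 2.3      # refractive index of PZT
r_eff = 67e-12   # effective Pockels coefficient (m/V), from the text
gamma = 0.25     # ASSUMED electro-optic overlap integral (dimensionless)

v_pi_L = lam * g / (n_pzt**3 * gamma * r_eff)  # in V*m
print(f"V_pi*L ~ {100 * v_pi_L:.1f} V*cm")     # ~3.0 V*cm for Gamma = 0.25

With Γ ≈ 0.25 the result lands near the measured ≈3.3 Vcm, which gives a feel for how strongly the overlap integral drives the modulator efficiency.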
All data that support the findings of this study are available from the corresponding authors upon reasonable request.
This Article was originally published without the accompanying Peer Review File. This file is now available in the HTML version of the Article; the PDF was correct from the time of publication.
Sun, C. et al. Single-chip microprocessor that communicates directly using light. Nature 528, 534–538 (2015).
Rahim, A. et al. Expanding the silicon photonics portfolio with silicon nitride photonic integrated circuits. J. Light. Technol. 35, 639–649 (2017).
Levy, J. S. et al. CMOS-compatible multiple-wavelength oscillator for on-chip optical interconnects. Nat. Photonics 4, 37–40 (2010).
Bauters, J. F. et al. Ultra-low-loss high-aspect-ratio Si3N4 waveguides. Opt. Express 19, 3163–3174 (2011).
Moss, D. J., Morandotti, R., Gaeta, A. L. & Lipson, M. New CMOS-compatible platforms based on silicon nitride and Hydex for nonlinear optics. Nat. Photonics 7, 597–607 (2013).
Ramelow, S. et al. Silicon-nitride platform for narrowband entangled photon generation. Preprint at https://arxiv.org/abs/1508.04358 (2015).
Kahl, O. et al. Waveguide integrated superconducting single-photon detectors with high internal quantum efficiency at telecom wavelengths. Sci. Rep. 5, 10941 (2015).
Zhuang, L., Roeloffzen, C. G., Hoekman, M., Boller, K.-J. & Lowery, A. J. Programmable photonic signal processor chip for radiofrequency applications. Optica 2, 854–859 (2015).
Poulton, C. V. et al. Large-scale silicon nitride nanophotonic phased arrays at infrared and visible wavelengths. Opt. Lett. 42, 21–24 (2017).
Reed, G. T. et al. High-speed carrier-depletion silicon Mach–Zehnder optical modulators with lateral PN junctions. Front. Phys. 2, 77 (2014).
Xu, Q., Schmidt, B., Pradhan, S. & Lipson, M. Micrometre-scale silicon electro-optic modulator. Nature 435, 325–327 (2005).
Liu, A. et al. A high-speed silicon optical modulator based on a metal–oxide–semiconductor capacitor. Nature 427, 615–618 (2004).
Hiraki, T. et al. Heterogeneously integrated III–V/Si MOS capacitor Mach–Zehnder modulator. Nat. Photonics 11, 482–485 (2017).
Han, J.-H. et al. Efficient low-loss InGaAsP/Si hybrid MOS optical modulator. Nat. Photonics 11, 486–490 (2017).
Liu, M. et al. A graphene-based broadband optical modulator. Nature 474, 64–67 (2011).
Sorianello, V. et al. Graphene–silicon phase modulators with gigahertz bandwidth. Nat. Photonics 12, 40–44 (2018).
Alloatti, L. et al. 100 GHz silicon–organic hybrid modulator. Light Sci. Appl. 3, e173 (2014).
Srinivasan, A. et al. 56 Gb/s germanium waveguide electro-absorption modulator. J. Light Technol. 34, 419–424 (2016).
Abel, S. et al. A strong electro-optically active lead-free ferroelectric integrated on silicon. Nat. Commun. 4, 1671 (2013).
Xiong, C. et al. Active silicon integrated nanophotonics: ferroelectric BaTiO3-devices. Nano Lett. 14, 1419–1425 (2014).
Eltes, F. et al. A novel 25 Gbps electro-optic Pockels modulator integrated on an advanced Si photonic platform. In IEEE International Electron Devices Meeting (IEDM) 601 (IEEE, San Francisco, 2017).
Xiong, C., Pernice, W. H. & Tang, H. X. Low-loss, silicon integrated, aluminum nitride photonic circuits and their use for electro-optic signal processing. Nano Lett. 12, 3562–3568 (2012).
Wang, C., Zhang, M., Stern, B., Lipson, M. & Lončar, M. Nanophotonic lithium niobate electro-optic modulators. Opt. Express 26, 1547–1555 (2018).
Phare, C. T., Lee, Y.-H. D., Cardenas, J. & Lipson, M. Graphene electro-optic modulator with 30 GHz bandwidth. Nat. Photonics 9, 511–514 (2015).
Hosseini, N. et al. Stress-optic modulator in TriPleX platform using a piezoelectric lead zirconate titanate (PZT) thin film. Opt. Express 23, 14018–14026 (2015).
Jin, W., Polcawich, R. G., Morton, P. A. & Bowers, J. E. Piezoelectrically tuned silicon nitride ring resonator. Opt. Express 26, 3174–3187 (2018).
George, J. et al. Lanthanide-assisted deposition of strongly electro-optic PZT thin films on silicon: toward integrated active nanophotonic devices. ACS Appl. Mater. Interfaces 7, 13350–13359 (2015).
Reed, G. T. et al. Recent breakthroughs in carrier depletion based silicon optical modulators. Nanophotonics 3, 229–245 (2014).
Günter, P. in Electro-optic and Photorefractive Materials (ed. Günter, P.) 2–17 (Springer, Berlin, 1987).
Abel, S. & Fompeyrine, J. in Thin Films on Silicon: Electronic and Photonic Applications 455–501 (World Scientific, Singapore, 2017).
Agrawal, G. P. Fiber-Optic Communication Systems (Wiley, New York, 2012).
Cho, J., Xie, C. & Winzer, P. J. Analysis of soft-decision FEC on non-AWGN channels. Opt. Express 20, 7915–7928 (2012).
Asif, R. Advanced and flexible multi-carrier receiver architecture for high-count multi-core fiber based space division multiplexed applications. Sci. Rep. 6, 27465 (2016).
Pandey, S. et al. Structural, ferroelectric and optical properties of PZT thin films. Phys. B 369, 135–142 (2005).
Gu, W., Song, Y., Liu, J. & Wang, F. Lanthanum-based compounds: electronic bandgap-dependent electro-catalytic materials toward oxygen reduction reaction. Chem. Eur. J. 23, 10126–10132 (2017).
Koeber, S. et al. Femtojoule electro-optic modulation using a silicon–organic hybrid device. Light Sci. Appl. 4, e255 (2015).
We thank Stéphane Clemmen for his overseeing role in the SiN chip fabrication, Philippe F. Smet for help with processing, Joris Van Kerrebrouck for help with data analysis, Liesbet Van Landschoot for operating the electron microscope, and Yoko Ohara for help with the figures. K.A. is funded by FWO Flanders. This work was funded by the European Commission through grant agreement no. 732894 (FET-Proactive HOT).
These authors contributed equally: Koen Alexander, John P. George.
Photonics Research Group, INTEC Department, Ghent University-imec, Technologiepark-Zwijnaarde 15, 9052, Zwijnaarde, Belgium
Koen Alexander, John P. George, Jochem Verbist, Bart Kuyken & Dries Van Thourhout
Center for Nano- and Biophotonics (NB-Photonics), Ghent University, Technologiepark-Zwijnaarde 15, 9052, Zwijnaarde, Belgium
Koen Alexander, John P. George, Jochem Verbist, Kristiaan Neyts, Bart Kuyken, Dries Van Thourhout & Jeroen Beeckman
Liquid Crystals and Photonics Group, ELIS Department, Ghent University, Technologiepark-Zwijnaarde 15, 9052, Zwijnaarde, Belgium
John P. George, Kristiaan Neyts & Jeroen Beeckman
IDLab, INTEC Department, Ghent University-imec, Technologiepark-Zwijnaarde 15, 9052, Zwijnaarde, Belgium
Jochem Verbist
K.A. and J.P.G. designed the devices. K.A. performed the chip planarization. J.P.G. performed the PZT deposition, patterning, and metalization. K.A. performed the static optical measurements. J.V. and K.A. carried out the high-speed measurements. K.A. analyzed the data and performed device optimization simulations. K.N., B.K., D.V.T., and J.B. provided general advice and feedback. K.A. and J.P.G. wrote the manuscript, and all authors reviewed the manuscript and agree to its content.
Correspondence to Dries Van Thourhout or Jeroen Beeckman.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Peer Review File
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Alexander, K., George, J.P., Verbist, J. et al. Nanophotonic Pockels modulators on a silicon nitride platform. Nat Commun 9, 3444 (2018). https://doi.org/10.1038/s41467-018-05846-6
3-DNF proves the algorithm is in P class
To understand fully, please read link
After reading the link, we will take a look at how we recover our solutions to a constrained Sudoku puzzle.
If we assume that a sudoku puzzle was generated with this procedure we can now create a "semi"-solver. I say "semi" because we need the $3 \times 3$ grid $M_{2,2}$ already solved for us. Let's assume we have this. As an example I will assume we are provided:
$$\begin{bmatrix} 5 & 9 & 6\\ 1 & 2 & 4\\ 3 & 7 & 8 \end{bmatrix}$$
Now we will flatten it into: $[5,9,6,1,2,4,3,7,8]$ and permute as follows:
[8, 5, 9, 6, 1, 2, 4, 3, 7]-----list 1
Now for each list, we will turn them into a $3 \times 3$ grid using the same mapping in step 2 above. For example list 1 would get mapped to
$$\begin{bmatrix} 8 & 5 & 9 \\ 6 & 1 & 2 \\ 4 & 3 & 7 \end{bmatrix}$$
Now we position these in the game board the same way we did as step 3 above. For example our layout would be as follows:
**list1** **list4** **list7**
In the prior example this would give us the correct solution:
$$M = \begin{bmatrix} 8 & 5 & 9 & 4 & 3 & 7 & 6 & 1 & 2\\ 6 & 1 & 2 & 8 & 5 & 9 & 4 & 3 & 7\\ 4 & 3 & 7 & 6 & 1 & 2 & 8 & 5 & 9\\ 7 & 8 & 5 & 2 & 4 & 3 & 9 & 6 & 1\\ 9 & 6 & 1 & 7 & 8 & 5 & 2 & 4 & 3\\ 2 & 4 & 3 & 9 & 6 & 1 & 7 & 8 & 5\\ 3 & 7 & 8 & 1 & 2 & 4 & 5 & 9 & 6\\ 5 & 9 & 6 & 3 & 7 & 8 & 1 & 2 & 4\\ 1 & 2 & 4 & 5 & 9 & 6 & 3 & 7 & 8\\ \end{bmatrix}$$
Then list 9 (our input) will always give you the correct solution in quadratic time.
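To make the recovery step concrete, here is a short Python sketch (our own code and naming, not from the linked post) that generates the nine lists by circular shifts and assembles them into the board; running it reproduces the matrix $M$ above from the seed block:

# Rebuild the 9x9 board from the flattened seed block via circular shifts.
def rotate_right(seq, k=1):
    k %= len(seq)
    return seq[-k:] + seq[:-k]

seed = [5, 9, 6, 1, 2, 4, 3, 7, 8]                     # flattened seed
lists = [rotate_right(seed, k) for k in range(1, 10)]  # list1..list9

board = [[0] * 9 for _ in range(9)]
for idx, flat in enumerate(lists):
    br, bc = idx % 3, idx // 3   # list1..3 fill block column 1, list4..6 column 2, ...
    for i in range(3):
        for j in range(3):
            board[3 * br + i][3 * bc + j] = flat[3 * i + j]

for row in board:
    print(row)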
For further illustration, I intend to prove that the aforementioned algorithm is in the P class in two ways.
Here, we'll take a look at 3-DNF.
(L1 ∧ L2 ∧ L3) ∨ (L4 ∧ L5 ∧ L6) ∨ (L7 ∧ L8 ∧ L9)
Let L1=list1, L2 = list2,...
Therefore, the algorithm generates grids and recovers correct solutions easily.
Now, let's say I want to check the satisfiability of the algorithm's circular shifts. Here, I generate 3 more lists to show that the 3x3 shifts form a positive (negation-free) 3-satisfiable set of permutations.
l = [8, 5, 9, 6, 1, 2, 4, 3, 7]
[5, 9, 6, 1, 2, 4, 3, 7, 8]-l1
x = [5, 9, 6, 1, 2, 4, 3, 7, 8]
[9, 6, 1, 2, 4, 3, 7, 8, 5]-x1
y = [9, 6, 1, 2, 4, 3, 7, 8, 5]
[6, 1, 2, 4, 3, 7, 8, 5, 9]-y1
Here, I demonstrate that the 3x3 shift meets satisfiability for the 9! Sudoku grids generated by the algorithm. At the end of the question I show that the expression always meets satisfiability when given the correct inputs.
(l1 ∨ x9 ∨ y8) ∧ (l2 ∨ x1 ∨ y9)
l1 = [5, 9, 6, 1, 2, 4, 3, 7, 8]
x9 = [5, 9, 6, 1, 2, 4, 3, 7, 8]
y8 = [5, 9, 6, 1, 2, 4, 3, 7, 8]
All the listed elements above have their defined variables within these expressions. All the expressions hold true.
(l1 ∨ x9 ∨ y8) ∧ (l2 ∨ x1 ∨ y9) ∧ (l3 ∨ x2 ∨ y1) ∧ (l4 ∨ x3 ∨ y2) ∧ (l5 ∨ x4 ∨ y3) ∧ (l6 ∨ x5 ∨ y4) ∧ (l7 ∨ x6 ∨ y5) ∧ (l8 ∨ x7 ∨ y6) ∧ (l9 ∨ x8 ∨ y7) ∧ (l1 ∨ x9 ∨ y8) ∧ (l2 ∨ x1 ∨ y9)
Here is a chart showing the 3-satisfiability of the algorithm, proving that the 3x3 shift overlaps all 9! valid grids that the algorithm can generate.
Overall, are these proofs correct that constrained Sudoku is in P class?
satisfiability decision-problem
Travis Wells
As I already explained earlier, any algorithm on a 9x9 input takes constant time. Thus it is in P. This is not very interesting or useful.
When people talk about Sudoku being NP-hard, they don't actually mean Sudoku, they are referring to a generalization on a grid of arbitrary size. Your question doesn't prove that this generalized problem is in P.
General comment: It looks like you're immersed in details, but haven't got a solid grasp on the fundamentals/basics yet. I encourage you to spend some more time learning about the definition of languages, decision problems, P, NP, NP-complete, NP-hard, and reductions before trying to take your Sudoku "project" any further. As it stands some of your statements appear to reflect a misunderstanding of basic concepts, and so you're spending time on things that are a dead end or reflect some basic misconceptions. (For instance, an algorithm can't be in P, and a proof can't be in P; a problem can be.) I hope you'll take this as aimed to help you learn, rather than an attempt to criticize you personally or tear you down.
D.W.♦
$\begingroup$ Okay, I was just going to build upon a new questions after reading through reductibility and etc on Wikipedia. And, I'm building upon the same subject on every new question. I'll accept your answer while I formulate a new question as I'm getting more clarification step by step(both reading and asking) $\endgroup$ – Travis Wells Apr 24 '19 at 19:29
$\begingroup$ @TravisWells, Cool! I'd suggest finding a good textbook or an online course on algorithms or complexity theory. Wikipedia isn't a great resource. Don't expect it to be something you can learn in one sitting. Have fun -- it's a beautiful subject! $\endgroup$ – D.W.♦ Apr 24 '19 at 19:34
$\begingroup$ I'm in the process of formulating a question about NM-3sat. Which is basically no mixed instances. (eg. L1 or x9 or y8) and (l2 or x1 or y1) No negations unless all variables are negated. I was wondering if it would be possible to reduce DNF to NM-3sat showing that constrained puzzles can be just as hard to solve. $\endgroup$ – Travis Wells Apr 24 '19 at 19:40
Fun With Zooming Number Lines in Grade 8
By Charles Larrieu Casias
The number line is an anchor representation that threads through the entire middle school curriculum. For this blog post, I want to focus on a creative use of the number line in grade 8 to explore scientific notation and irrational numbers. Let's zoom into a lesson.
In Unit 7, Lesson 10 of grade 8, students learn that the speed of light (or electricity) has different speeds through different materials. These speeds tend to range between $2 \times 10^8$ meters per second and $3 \times 10^8$ meters per second. There are many ways to plot these values on a number line, but since this unit is chiefly concerned with powers of 10, it makes sense to look at a portion of the number line broken into 10 segments going from $0 \times 10^8$ to $10 \times 10^8=1 \times 10^9$. That gives this number line:
This level of precision isn't good enough to discern the different speeds of light very well, so we zoom in! The digital student version has a lovely magnifying glass that zooms into the region between $2 \times 10^8$ and $3 \times 10^8$ and subdivides that region into 10 congruent segments:
This zooming number line provides some intuition about what it means to look at more and more decimal places.
The next unit picks up on this idea of zooming into number lines to find more decimal places of irrational numbers and certain rational numbers like $\frac{2}{11}$. I'm probably not supposed to play favorites, but one of the cleverest, most brilliant pieces of design in the curriculum is the path towards irrational numbers in grade 8, Unit 8.
One confusing way to introduce irrational numbers is by their definition. If "Dedekind cuts*" or "infinite, non-repeating decimal expansions" are your first words to a group of eighth graders, you're going to have a bad time. Instead of starting from a confusing definition and trying to convince students that these numbers exist, it is better to start with an approach where students come across these numbers naturally and investigate their other properties later. Students already know how to find areas of squares like these by decomposition or surrounding with a larger square and subtracting areas of triangles:
You don't have to convince students that these squares have sides and that those sides have lengths. Square roots have an intuitive, geometric meaning, and analyzing their numerical properties comes later.
In later lessons, students return to zooming number lines to examine the decimal expansions of $\frac{2}{11}$ and $\sqrt{2}$ to get the sense of what it means for a number to have an infinite decimal expansion. Try them out!
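If you want to play with the idea yourself, here's a tiny Python sketch (mine, not part of the curriculum) that mimics the zooming number line for $\frac{2}{11}$: each pass splits the current interval into 10 equal pieces and keeps the one containing the target, revealing one more decimal digit.

# Zoom in on a number in [0, 1): each step finds which tenth holds it.
from fractions import Fraction

def zoom_digits(target, steps=8):
    lo, hi = Fraction(0), Fraction(1)
    digits = []
    for _ in range(steps):
        width = (hi - lo) / 10
        d = min(int((target - lo) / width), 9)  # which of the 10 pieces
        digits.append(d)
        lo, hi = lo + d * width, lo + (d + 1) * width
    return digits

print(zoom_digits(Fraction(2, 11)))  # [1, 8, 1, 8, 1, 8, 1, 8]

Running it prints the repeating digits 1, 8, 1, 8, ..., the same pattern students discover by zooming.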
It's beyond the scope of the course to prove that the square root of 2 is irrational, but zooming number lines help students get comfortable with concepts of limits and scientific notation. That's pretty impressive for eighth grade mathematics.
*Research more about the construction of the real number line. It's fascinating how something so intuitive as a continuum of numbers could be so technical to define rigorously. For example, check out this part of a video that explains Dedekind cuts or this video about how Georg Cantor proved that the rational numbers are countable while real numbers are uncountable.
Check out Unit 8 in more detail. It's fun to see how the unit progresses to give a feel for irrational numbers from a geometric point of view.
Mathematical images shown in this post are excerpted from Open Up Resources Grade 6 – 8 Math by Open Up Resources used under CC BY 4.0. Download for free at www.openupresources.org.
Charles Larrieu Casias
Chuck Larrieu Casias, California, is driven by curiosity and learning. He received his B.Sc. in Mathematics Research from University of Nebraska Lincoln. There, he used math to answer questions like, "What is the Cayley Graph of the Braid Group on Four Strands?" and "What is the stable daily growth rate of this aphid population?" He received his teaching credential and Master of Arts in Teaching from University of Southern California before teaching in LAUSD for 3 years as a Math for America Teaching Fellow. He is now a curriculum writer for Illustrative Mathematics' middle school and high school curricula. In his personal time, he likes to learn computer programming, build computers, play video and board games, and listen to podcasts.
Consistent constraint-based video-level learning for action recognition
Qinghongya Shi1,2,3,
Hong-Bo Zhang ORCID: orcid.org/0000-0001-5536-52241,2,3,
Hao-Tian Ren1,2,3,
Ji-Xiang Du1,2,3 &
Qing Lei1,2,3
This paper proposes a new neural network learning method to improve the performance of action recognition in video. Most human action recognition methods use a clip-level training strategy, which divides the video into multiple clips and trains the feature learning network by minimizing the loss function of clip classification. The video category is predicted by the voting of clips from the same video. In order to obtain more effective action features, a new video-level feature learning method is proposed to train the 3D CNN and boost action recognition performance. Different from clip-level training, which uses clips as input, the video-level learning network uses the entire video as input. A consistent constraint loss is defined to minimize the distance between clips of the same video in voting space. Further, a video-level loss function is defined to compute the video classification error. The experimental results show that the proposed video-level training is a more effective action feature learning approach than clip-level training. And this paper achieves state-of-the-art performance on the UCF101 and HMDB51 datasets without using pre-trained models from other large-scale datasets. Our code and final model are available at https://github.com/hqu-cst-mmc/VLL.
Action recognition has gradually become a research hotspot in computer vision and pattern recognition; it is widely applied in intelligent video surveillance, virtual reality, motion analysis, and video retrieval. How to improve the accuracy of human action recognition has been studied by many researchers.
Many methods have been proposed to recognize actions in video in recent years. The key to these methods is to learn effective action features from the input data. Several different neural networks are employed in these methods, such as 3D convolutional neural networks (ConvNets) [1, 2], multi-stream 2D ConvNets [3–5], and recurrent neural networks [6, 7]. The difference between video features and image features is whether they contain temporal information. To deal with the varying temporal length of videos and reduce computational complexity, the input video is divided into a clip set. Each clip has the same number of frames, and the video label is assigned to each clip. In the training stage, the parameters of the network are learned from annotated action clips. In the testing stage, each clip in a video is classified by the network and the video category is predicted by a voting strategy. This training approach is named clip-level training in this work.
Although these works have obtained significant results, there are some limitations in clip-level learning. First, to feed into the convolutional network, each clip is segmented from the video with a fixed length by dense or sparse sampling. However, a short clip cannot capture the complete temporal information of a human action in video. Therefore, the visual features extracted from a model trained on such a clip set cannot accurately represent the action. Second, during the training stage, the calculation process of each clip is independent in these clip-level methods. This ignores the correlation between clips of the same video. To solve these problems, this paper proposes a new action feature learning method, called video-level learning.
The objective of video-level learning is to train a network which can provide a more complete and effective video representation rather than a clip representation. In video-level learning, the input of the network is the initial video. In the pre-processing stage, the video is also divided into a clip set. The video label is assigned as the label of the clip set, rather than of each clip. The difference between clip-level and video-level learning is shown in Fig. 1. Video-level learning can be regarded as a problem of set learning. The clip set covers all the content of the video; therefore, features learned from the clip set can contain richer action information than features learned from a single clip.
Comparison of the video-level learning framework and the clip-level learning framework. Vi is the ith video, and Li means action label. \(C^{i}_{j}\) is the jth clip in video i, and mi is the number of clip in video i
To build the video-level learning model, the core is to train the parameters of the network through each clip set. In this work, we use 3D ConvNets as the basic network model. According to the theory of convolutional networks [8], in the training stage, we have to tell the 3D ConvNets what we wish it to minimize. Therefore, to implement video-level learning, a video-level loss function is defined in our method. Using the clip set sampled from the same video as input, the video-level loss function not only needs to consider the error rate of clip classification, but also the relationship between the clips of the same video. To solve this problem, in the proposed video-level learning method, a consistent constraint loss (CCL) function is defined for training the 3D ConvNets. The basic assumption of CCL is that the distance between clips of the same video in the voting space should be small. Therefore, in the proposed method, the video-level loss (VLL) function includes two terms: the average classification error of the clip set and the distance loss of the clip set.
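As a rough illustration, the two terms could be sketched in PyTorch as below. This is our reading of the definition, not the authors' reference implementation (which is available at the repository linked in the abstract); in particular, the Euclidean distance between softmax outputs and the weighting factor are assumptions.

# Sketch of the video-level loss: clip-set classification error plus a
# consistency term over clips of the same video (assumed form).
import torch
import torch.nn.functional as F

def video_level_loss(clip_logits, video_label, ccl_weight=1.0):
    # clip_logits: (m, num_classes) logits for the m clips of one video
    m = clip_logits.size(0)
    labels = video_label.expand(m)                   # one label for all clips
    cls_loss = F.cross_entropy(clip_logits, labels)  # average clip-set error

    probs = F.softmax(clip_logits, dim=1)            # the "voting space"
    ccl = torch.pdist(probs, p=2).mean() if m > 1 else probs.new_zeros(())
    return cls_loss + ccl_weight * ccl

# Usage with dummy values: 8 clips from one video, 101 classes (e.g., UCF101)
loss = video_level_loss(torch.randn(8, 101), torch.tensor(3))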
In summary, although most clip-level approaches also take several clips as a batch and feed them into the network during training, there are still two main differences between clip-level learning and video-level learning. First, the input data of video-level learning consists of clips sampled from the same video, instead of clips selected randomly as in clip-level learning. At the same time, the number of clips, which corresponds to the batch size in the training stage, changes dynamically in video-level learning. In other words, the batch size is dynamic in the proposed method. It depends on the video length and the sampling method. This means that during the training phase, the network can better adapt to video data with different temporal scales and learn more complete information, while in clip-level learning, the number of clips is fixed and determined by the pre-defined batch size.
Second, clip-level learning methods use clip classification error as the loss function, such as the average of cross entropy loss of input clips. In video-level learning, a new classification loss of video needs to be defined, such as VLL defined in this paper. The framework of video-level learning is shown in Fig. 1.
Finally, in the testing stage, this paper uses the same voting strategy as in clip-level learning to achieve video action classification. The contributions of this paper are threefold:
We propose a new end-to-end training method for video-level feature learning to improve the accuracy of action recognition.
For video-level training, consistent constraint loss is defined to minimize the distance between clips of the same video in voting space. A new loss function for video classification is designed to unify all clips that belong to the same video.
The experimental results demonstrate that the proposed method is effective. The network trained by the video-level method has better performance than that trained by the clip-level method. And without using pre-trained models from other large-scale datasets, the proposed method provides higher recognition rates than those of state-of-the-art action recognition methods.
The remainder of this paper is organized as follows. Section 2 introduces the related works, Section 3 describes the algorithms used to implement the proposed method, Section 4 presents and discusses the experimental results, and finally, Section 5 concludes this paper.
Many human action recognition methods have been proposed in recent years. Zhang et al. [9] summarized recent work from different data perspectives, including RGB video-based methods [1, 2, 10, 11], depth data-based methods [12, 13], and skeleton data-based methods [14–16]. Although research based on depth and skeleton data has attracted some attention, human action recognition in RGB video has always been the mainstream research direction. This paper also focuses on human action recognition methods in video.
In recent studies, deep learning methods have shown good performance in feature extraction and action recognition, and they have become the mainstream approach in computer vision research. In [17, 18], for real-time face landmark extraction, the authors proposed a new model called EMTCNN that extends the multi-task cascaded convolutional neural network. To learn action features, there are two main network structures in these methods: 3D ConvNets and multi-stream structures.
3D ConvNets for action recognition
3D ConvNets are extended from 2D ConvNets; the convolution kernel contains three dimensions: two dimensions represent spatial information and the third represents temporal information. A 3D convolution kernel can capture both temporal and spatial features, but it also has more parameters, making the computation of a 3D convolutional network larger. Tran et al. [2] first proposed a 3D convolutional network for learning spatio-temporal features. Carreira et al. [1] proposed inflated 3D ConvNets (I3D) and used pre-trained models from large datasets, such as Kinetics [19] and ImageNet, to obtain the highest accuracy of human action recognition. In [10], a pseudo-3D residual network (P3D) is proposed to learn spatial-temporal representations. Hara et al. [20] extended the 2D ResNet to a 3D structure and proposed the ResNeXt method to recognize actions. And Tran et al. [11] tried to decrease the number of parameters by decomposing the 3D convolution kernel into a 2D spatial convolution kernel and a 1D temporal convolution kernel.
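For illustration, the decomposition in [11] can be sketched in PyTorch as follows (channel sizes are arbitrary placeholders, not values from the paper):

# A full 3D convolution vs. a (2+1)D factorization into a 2D spatial
# convolution followed by a 1D temporal convolution.
import torch.nn as nn

full_3d = nn.Conv3d(64, 64, kernel_size=(3, 3, 3), padding=1)

factorized = nn.Sequential(
    nn.Conv3d(64, 64, kernel_size=(1, 3, 3), padding=(0, 1, 1)),  # spatial
    nn.Conv3d(64, 64, kernel_size=(3, 1, 1), padding=(1, 0, 0)),  # temporal
)

As written, the factorized form uses fewer weights per block, and adding an activation between the two convolutions inserts an extra nonlinearity that the single 3D kernel lacks.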
Multi-stream structure for action recognition
To model temporal information, several methods feed the motion sequences corresponding to the input video into the feature learning network, such as optical flow image sequences and motion boundary sequences. Simonyan et al. [21] used a two-stream structure to compute appearance and motion from the image and optical flow sequences, respectively; the appearance and motion features were then fused into the action feature. Wang et al. [22] proposed temporal segment networks (TSN), which use sparse temporal sampling to obtain long-range temporal information. In addition, some works applied self-supervised approaches to learn video features based on the multi-stream structure. Wang et al. [23] proposed a two-stream-based self-supervised approach to learn visual features by regressing both motion and appearance statistics without action labels; in this work, both RGB data and optical flow data were used to compute appearance and motion, respectively. Crasto et al. [24] introduced a network trained to mimic the optical flow stream, so that flow is not needed at test time, although it is still required for training. Wang et al. [25] proposed a fusion of two-stream and 3D ConvNets to recognize human actions in videos of arbitrary size and length.
In addition, some works use an attention module to improve the accuracy of action detection and recognition [26–28]. Li et al. [26] proposed an attention-based GCN for action detection in video that captures the relations among proposals and reduces redundant proposals. In [28], the authors proposed a spatio-temporal deformable ConvNet model with an attention mechanism, which considers the mutual correlations in both the temporal and spatial domains to effectively capture the long-range and long-distance dependencies in video actions.
However, all the above methods use a clip-level training strategy, and because long clips incur higher computational costs, the clip length in these works is short, generally 16 frames. In this paper, a 3D ConvNet is also used as the basic network. Different from these methods, this paper uses the video-level method instead of the clip-level method to train a more accurate feature representation network. In addition, some works use large datasets, such as Sports-1M [29] and Kinetics [19], to achieve high performance. However, large datasets imply greater computational costs. How to obtain higher recognition performance without a pre-trained model is still an issue worth studying, and it is also discussed in this paper.
The proposed method
To describe the proposed action recognition method, the problem of video-level learning is defined as follows. The proposed method uses a set of pairs D={<V1,L1>,<V2,L2>,...,<Vn,Ln>} to represent the input data, where Vi denotes the ith video in the dataset, Li is the corresponding action label, and n is the number of videos. The input video is segmented into a clip set \(V_{i}=\left \{C^{i}_{1},C^{i}_{2},...,C^{i}_{m_{i}}\right \}\), where \(C^{i}_{j}\) is the jth clip of the ith video and mi is the size of the clip set sampled from the video. The network input <Vi,Li> is transformed into a new pair of clip set and label \(<\left \{C^{i}_{1},C^{i}_{2},...,C^{i}_{m_{i}}\right \}, L_{i}>\), as illustrated in Fig. 1. This paper samples fixed-length continuous frames from the video as a clip. The process of clip generation is shown in Fig. 2. Following the clip-level learning methods [1, 2, 23], the clip length is set to 16 frames in this paper.
The process of clip generation
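To make the clip generation step concrete, the following is a minimal sketch, assuming a video is given as an array of frames; the sliding stride and the handling of leftover tail frames are our assumptions, since the paper only fixes the clip length.

```python
# A minimal sketch of fixed-length clip generation. The video is assumed
# to be an array of shape (num_frames, H, W, C); the stride and the
# dropping of tail frames are assumptions, not specified in the paper.
import numpy as np

def generate_clips(video: np.ndarray, clip_len: int = 16, stride: int = 16):
    """Split a video into consecutive fixed-length clips C_1..C_m."""
    starts = range(0, video.shape[0] - clip_len + 1, stride)
    clips = [video[s:s + clip_len] for s in starts]
    # Tail frames that do not fill a whole clip are dropped here.
    return np.stack(clips) if clips else np.empty((0, clip_len) + video.shape[1:])
```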
Suppose that the proposed model can be described by a function Li=f(Vi). The task of the training stage is to learn the parameters of this function. After parameter training, given a testing video Vt, the category of the video Lt is calculated by this function, as shown in Eq. 1. The details of the proposed model are described in the following sections.
$$\begin{array}{@{}rcl@{}} L_{t}=f\left(V_{t}\right)=f\left(\left\{C^{t}_{1},C^{t}_{2},...,C^{t}_{m_{t}}\right\}\right) \end{array} $$
Network structure and video-level learning
In this work, the proposed network uses 3D ConvNets as the feature extractor. The network structure is shown in Fig. 3. To keep the number of network parameters as small as possible, we use only 5 convolution layers and 5 pooling layers (each convolution layer is immediately followed by a pooling layer), 2 fully connected layers, and a softmax layer to predict action labels. Inspired by previous work on 3D ConvNets [1, 2, 23], all convolutional kernels are set to 3×3×3 in the proposed approach.
The structure of the proposed network
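As a rough illustration of this backbone, the PyTorch sketch below follows the stated layer counts and the 3×3×3 kernels; the channel widths, pooling shapes, and fully connected width are assumptions, since the paper does not report them.

```python
# A sketch of the described backbone: 5 Conv3d layers, each followed by a
# pooling layer, then 2 fully connected layers; the softmax is applied by
# the loss. Channel widths and pooling shapes are assumptions.
import torch.nn as nn

class Simple3DConvNet(nn.Module):
    def __init__(self, num_classes: int = 101):
        super().__init__()
        widths = [3, 64, 128, 256, 256, 256]   # assumed channel widths
        layers = []
        for i in range(5):
            layers += [
                nn.Conv3d(widths[i], widths[i + 1], kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                # Keeping the temporal size in the first pool is an
                # assumption borrowed from C3D-style networks [2].
                nn.MaxPool3d(kernel_size=(1, 2, 2) if i == 0 else 2),
            ]
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 1 * 3 * 3, 2048),  # for 3x16x112x112 inputs
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.05),                # dropout value from Section 4
            nn.Linear(2048, num_classes),
        )

    def forward(self, x):  # x: (batch, 3, 16, 112, 112)
        return self.classifier(self.features(x))
```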
To improve the action recognition performance of 3D ConvNets, a video-level learning strategy is proposed in this paper. In our method, the network uses the whole video as input and must therefore process clip sets of different sizes, so it extracts the feature of each clip independently. To achieve video-level training, the video-level loss is defined by minimizing both the classification error of each clip and the distance between each clip pair; the latter is named the consistent constraint loss in this paper.
Video-level loss function
The most common loss function for 3D ConvNets is the cross entropy function Lossce, which measures the similarity between the ground-truth distribution and the predicted label distribution, as shown in Eq. 2. In clip-level learning, the 3D ConvNets are trained by minimizing the classification error of the clips.
$$\begin{array}{@{}rcl@{}} \text{Loss}_{ce}\left(y,\hat{y}\right)=-\sum\limits_{j=1}^{N}y_{j}\log\left(\hat{y_{j}}\right) \end{array} $$
where y is the one-hot vector of the ground truth of the input video, N is the number of categories, and \(\hat {y}\) is the predicted score of the clip.
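For reference, Eq. 2 with a one-hot y reduces to the negative log probability of the ground-truth class, which is what standard library routines compute; a minimal check in PyTorch follows (the tensor shapes are our assumptions).

```python
# Eq. 2 with one-hot y equals -log(softmax(logits)[target]).
import torch
import torch.nn.functional as F

logits = torch.randn(1, 101)                # one clip, N = 101 categories
target = torch.tensor([3])                  # ground-truth class index
loss_ce = F.cross_entropy(logits, target)   # applies log-softmax internally
```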
However, this strategy ignores the relationship between clips. To address this problem, a video-level loss is proposed in this work. To calculate it, the video classification loss is computed as the average cross entropy over all clips from the same video. More importantly, the distance between these clips in the voting space is defined as the consistent constraint loss Lossccl. Finally, the video-level loss Lossvll is a combination of the average Lossce and Lossccl, as shown in Fig. 4. These functions are defined as follows.
$$\begin{array}{@{}rcl@{}} \begin{aligned} \text{Loss}_{vll}=&(1-\alpha)\frac{1}{m_{i}}\sum\limits_{j=1}^{m_{i}}{\text{Loss}_{ce}\left(y_{j},\hat{y_{j}}\right)}\\ &+\alpha \text{Loss}_{ccl} \end{aligned} \end{array} $$
where mi is the size of the clip set of the ith video and α is the weight balancing the cross entropy loss and the consistent constraint loss.
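A minimal sketch of Eq. 3 follows, assuming the clip scores of one video are stacked into a single tensor; `consistent_constraint_loss` refers to the sketch given after Eq. 6 below.

```python
# A sketch of the video-level loss of Eq. 3: average cross entropy over
# the m_i clips of one video, blended with the consistent constraint
# loss by the weight alpha (0.3 in the experiments of Section 4).
import torch
import torch.nn.functional as F

def video_level_loss(clip_logits, label, alpha=0.3):
    # clip_logits: (m_i, N) scores of all clips of one video
    # label: scalar tensor holding the video's class index
    targets = label.expand(clip_logits.shape[0])      # same label for every clip
    avg_ce = F.cross_entropy(clip_logits, targets)    # mean over the m_i clips
    ccl = consistent_constraint_loss(clip_logits, targets)  # sketched below
    return (1 - alpha) * avg_ce + alpha * ccl
```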
To realize the consistent constraint assumption, namely that the network outputs for clips from the same video should be consistent, this paper uses the output vector of the network as the input of the consistent constraint loss function, and the constraint is computed from the distance between each pair of clips in the same video. The purpose of adding this constraint to the video-level loss is to make the network produce closer classification scores for clips from the same video. Several consistent constraint loss functions are discussed in this work.
First, the average Euclidean distance over all clip pairs is used to compute the consistent constraint loss. It is defined as the Euclidean distance loss, as shown in Eq. 4.
$$\begin{array}{@{}rcl@{}} \text{Loss}^{euc}_{ccl}=\frac{2}{m_{i}*(m_{i}-1)}\sum\limits_{k=1}^{m_{i}}\sum\limits_{j>k}^{m_{i}}{\Vert \hat{y_{k}}-\hat{y_{j}}\Vert}_{2} \end{array} $$
where \(\frac {m_{i}*(m_{i}-1)}{2}\) is the number of clip pairs in the input set. Because the Euclidean distance of a clip pair is symmetric, that is, \({\Vert \hat {y_{k}}-\hat {y_{j}}\Vert }_{2} = {\Vert \hat {y_{j}}-\hat {y_{k}}\Vert }_{2}\), each pair is counted only once in Eq. 4: the index k runs from 1 to mi and the index j from k+1 to mi, giving \(\frac {m_{i}*(m_{i}-1)}{2}\) terms in total.
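A compact way to compute Eq. 4, assuming the clip scores are the rows of a matrix, is sketched below; `torch.pdist` returns exactly the mi(mi−1)/2 unordered pair distances.

```python
# A sketch of Eq. 4: mean Euclidean distance over all unordered clip pairs.
import torch

def euclidean_ccl(clip_scores):
    # clip_scores: (m_i, N) prediction scores of the clips of one video
    return torch.pdist(clip_scores, p=2).mean()   # m_i*(m_i-1)/2 distances
```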
Furthermore, during training, the prediction score distribution of incorrectly predicted samples should become increasingly consistent with that of correctly predicted samples. Therefore, a second consistent constraint loss function is defined in Eq. 5, named the error direction loss.
$$\begin{array}{@{}rcl@{}} \text{Loss}^{err}_{ccl}=\frac{1}{N_{e}}\sum\limits_{i \in E}{\Vert \hat{y_{i}}-R_{\text{mean}}\Vert}_{2} \end{array} $$
where E is the set of samples that are predicted incorrectly in each round of training, Ne is the size of E with Ne<mi, and Rmean is the mean prediction score vector of the correctly predicted samples.
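A sketch of Eq. 5 is given below; reading "predicted correctly" as the argmax of a clip's score matching the video label is our interpretation of the text.

```python
# A sketch of Eq. 5: pull each wrongly predicted clip toward R_mean, the
# mean score vector of the correctly predicted clips. Eq. 6 guarantees
# this branch is only used when both sets are non-empty.
import torch

def error_direction_ccl(clip_scores, targets):
    correct = clip_scores.argmax(dim=1) == targets
    r_mean = clip_scores[correct].mean(dim=0)          # R_mean
    errors = clip_scores[~correct]                     # the set E, size N_e
    return (errors - r_mean).norm(p=2, dim=1).mean()   # average over E
```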
Finally, the consistent constraint loss proposed in this paper combines Eqs. 4 and 5, as shown in Eq. 6. During training, we not only make the network output more consistent for clips of the same video but, more importantly, require the network to adjust the outputs of the incorrectly predicted clips to be more consistent with the outputs of the correctly predicted clips.
$$\begin{array}{@{}rcl@{}} \begin{aligned} \text{Loss}_{ccl}= \left\{\begin{array}{cl} \text{Loss}^{euc}_{ccl} & N_{e}=m_{i}\ \text{or}\ N_{e}=0\\ \text{Loss}^{err}_{ccl} & \text{otherwise} \end{array}\right. \end{aligned} \end{array} $$
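Putting the two pieces together, the switch in Eq. 6 can be sketched as follows, using the two functions sketched above.

```python
# A sketch of Eq. 6: fall back to the Euclidean loss when all clips are
# wrong (N_e = m_i) or all are right (N_e = 0), since the set E or the
# correct set would then be empty.
import torch

def consistent_constraint_loss(clip_scores, targets):
    n_e = int((clip_scores.argmax(dim=1) != targets).sum())
    if n_e == 0 or n_e == clip_scores.shape[0]:
        return euclidean_ccl(clip_scores)
    return error_direction_ccl(clip_scores, targets)
```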
Experimental results and discussion
In this section, some experiments are performed on UCF101 [30] and HMDB51 [31] datasets to verify the effectiveness of the proposed video-level learning method.
Dataset and experiment setting
UCF101 dataset. UCF101 contains 101 action categories and 13320 videos in total, collected from YouTube. It is one of the most commonly used datasets in action recognition research. Each video has been divided into several clips, for a total of 148166 clips in this experiment. UCF101 provides diversity in terms of actions, and with its large variations in camera motion, object appearance, and pose, it is also a challenging dataset.
HMDB51 dataset. HMDB51 collects videos from various sources, mostly movies, with a small proportion from public databases such as the Prelinger Archive, YouTube, and Google Videos. The dataset contains 6766 videos in 51 action categories, each containing a minimum of 101 clips.
In the experiments, this paper uses the standard training/testing splits from the official websites of these datasets. To reduce the parameters of the 3D ConvNets, this paper uses the ReLU function as the activation function and sets the dropout value to 0.05. All input images are resized to 112×112 with random cropping. The entire network is fine-tuned with SGD at a learning rate of 0.001, and every 2000 iterations the learning rate is decreased by a factor of 0.1. The balanced weight of the video-level loss function is set to 0.3. The proposed method is implemented on two NVIDIA GeForce RTX 2080Ti GPUs, which take about 8 and 2.5 h to train the model on the UCF101 and HMDB51 datasets, respectively. For a fair comparison, in testing, all methods use the accuracy of the video category, which is predicted by voting over the clip categories in the video, as the evaluation metric.
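For the voting metric, a minimal sketch follows, under the assumption that a simple majority vote over the predicted clip categories decides the video label.

```python
# A sketch of the test-time voting: the video label is the most frequent
# predicted clip category. Majority voting with argmax tie-breaking is
# our assumption; the paper does not specify tie handling.
import torch

def vote_video_label(clip_logits):
    # clip_logits: (m_i, N) scores of the clips of one testing video
    clip_preds = clip_logits.argmax(dim=1)
    return torch.bincount(clip_preds).argmax()
```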
Performance evaluation on UCF101
Comparison of video-level training and clip-level training
To compare the performance of video-level and clip-level training, experimental results are shown in Table 1. Using clip-level training with the cross entropy loss function, the action classification accuracy is 51.52%, while the accuracy of the network trained by the video-level strategy with the same loss function is 53.44%. This comparison shows that video-level training is more effective than clip-level training, improving the accuracy by 1.92%.
Table 1 Accuracy of clip-level training and video-level training with different loss functions
Comparison of different loss functions in video-level training
The proposed loss functions for video-level training are also compared in the experiment. In Table 1, \(\text {Loss}_{vll}\left (\text {Loss}^{euc}_{ccl}\right)\) indicates that the Euclidean distance loss \(\text {Loss}^{euc}_{ccl}\) is used as the consistent constraint loss in the video-level loss Lossvll; \(\text {Loss}_{vll}\left (\text {Loss}^{err}_{ccl}\right)\) indicates that the error direction loss \(\text {Loss}^{err}_{ccl}\) is used; and Lossvll(Lossccl) means that the Lossccl function defined in Eq. 6 is used. The action recognition accuracy of the network trained by the video-level loss with the Euclidean distance loss \(\text {Loss}^{euc}_{ccl}\) is 57.11%, which is 3.67% higher than that of the network trained with the cross entropy loss alone.
Compared with the loss function \(\text {Loss}_{vll}\left (\text {Loss}^{euc}_{ccl}\right)\), the network trained with \(\text {Loss}_{vll}\left (\text {Loss}^{err}_{ccl}\right)\) has better recognition performance. Finally, the proposed method, which uses the Lossvll(Lossccl) function to train the 3D ConvNets, obtains the highest accuracy of 58.76%, which is 7.24% higher than the network trained at the clip level.
Balanced weight discussion
The performance of different weight values α in the video-level loss Lossvll is shown in Fig. 5. In this experiment, the consistent constraint loss in Lossvll uses the Lossccl function defined in Eq. 6, and α is varied from 0.1 to 0.9. As Fig. 5 shows, α=0.3 achieves the best recognition accuracy.
Recognition accuracy with different weight α
Comparison with the state-of-the-art
To evaluate the effectiveness of the proposed method, its accuracy is compared with state-of-the-art methods. The comparison results are shown in Table 2. None of the methods in Table 2 uses models pre-trained on other large datasets. Table 2 indicates that the proposed method achieves better recognition accuracy.
Table 2 Comparison results of the proposed method with other state-of-the-art action recognition methods
Qualitative evaluation
To further verify the effectiveness of the proposed method, we show some action recognition results of video-level and clip-level training. We use red to represent the corresponding ground truth and other colors for the remaining categories. We select three videos for qualitative evaluation; some example frames are shown in Fig. 6. These samples are selected from three different action categories: "ApplyEyeMakeup," "ApplyLipstick," and "BabyCrawling." Each video contains 16 clips, and each clip has 16 frames. Figures 7 and 8 show the recognition results for these three samples.
Examples of the videos selected for qualitative evaluation
Visualization of action recognition result by voting
Visualization of action recognition result on top 5 probabilities
Figure 7a shows the vote distribution of the clips in each testing video, where each clip category is predicted by the clip-level training model. From Fig. 7a, we find that with the clip-level model, only one clip of sample 1 is correctly classified, while the other 15 clips are assigned to wrong categories. In sample 2, two clips are correctly classified and the other 14 are not. In sample 3, 7 clips are correctly classified. Therefore, with clip-level training, the recognition results of samples 1 and 2 are wrong, while sample 3 is predicted correctly. Similarly, Fig. 7b shows the clip vote distribution predicted by the video-level training model. With this model, the number of correctly classified clips increases significantly in these samples; for example, in sample 1, 15 clips are correctly predicted and only one is misclassified. As a result, both sample 1 and sample 2 are correctly classified by the video-level training model.
In addition, the same conclusion can be drawn from the probability of video classification. Figure 8a shows the top 5 probabilities of video classification under clip-level training, while Fig. 8b shows those under video-level training. We choose the category with the highest predicted probability as the prediction result. The comparison shows that with the video-level training model, samples 1 and 2 shift from misclassification to a high probability on the ground-truth category and are then correctly classified. For sample 3, the probability of the ground-truth category is increased by the video-level training model.
Performance evaluation on HMDB51
In this section, we evaluate the performance of the proposed method on the HMDB51 dataset. HMDB51 is smaller than UCF101 and has only 51 classes. Table 3 shows the experimental results. From Table 3, we find that compared with clip-level training, using the same cross entropy loss function, the model trained by the video-level strategy improves the recognition accuracy from 32.60% to 33.15%.
Table 3 Accuracy of clip-level training and video-level training with different loss functions on HMDB51
As introduced above, the new loss function Lossvll is used in video-level learning. Compared with the cross entropy loss alone, the model trained with the proposed consistent constraint loss in Lossvll reaches an accuracy of 35.38%, an improvement of 2.23%.
We also compare the proposed method with other state-of-the-art methods on the HMDB51 dataset; Table 4 shows the results. Compared with the methods that do not use other large-scale datasets to pre-train the model, our method achieves higher accuracy. Compared with the methods that use the Kinetics dataset for pre-training, the proposed method also performs better. These experiments further verify that the proposed video-level training method is also effective on small datasets.
Table 4 Comparison results of the proposed method with other state-of-the-art action recognition methods on HMDB51
In this paper, we proposed a new neural network training method, named video-level learning, to improve the performance of 3D ConvNets. Different from the traditional training method that uses clips as input, the proposed method uses the entire video as input. It defines a video-level loss function, combining a cross entropy loss and a consistent constraint loss, to train the 3D ConvNets, and we discussed three different consistent constraint loss functions. The experimental results show that, compared with the clip-level learning method, the proposed method achieves better action recognition performance. The effectiveness of the proposed method is further verified by comparison with state-of-the-art methods.
Although the proposed method can effectively improve the accuracy of the network, this work still has some limitations. In this paper, we only report results without using models pre-trained on other large-scale datasets, mainly for the following reasons. First, to verify the effectiveness of the proposed method, our motivation is to use the simplest 3D ConvNets as the basic network to highlight the impact of video-level learning; the backbone contains only 5 convolution layers, 5 pooling layers, 2 fully connected layers, and a softmax layer. Second, we are aware of more complex convolutional networks with better performance proposed in recent years, such as P3D [10] and 3D ResNet [20]. To use their pre-trained models, the structure of the backbone network would have to be modified to match these well-trained models. However, what kind of network structure is best for action recognition is still an open and complex issue in action recognition research.
In future work, we will try to find a more effective 3D convolutional model to replace the simple 3D ConvNets used in this work, examine the performance of these methods based on well-trained models, and apply them to other large-scale action datasets.
3D ConvNets:
Three-dimensional convolutional network
CNN:
Convolutional neural network
CCL:
Consistent constraint loss
VLL:
Video-level loss
J. Carreira, A. Zisserman, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Quo vadis, action recognition? A new model and the Kinetics dataset (IEEE, Honolulu, 2017), pp. 4724–4733. https://doi.org/10.1109/CVPR.2017.502.
D. Tran, L. Bourdev, R. Fergus, L. Torresani, M. Paluri, in Proceedings of the IEEE International Conference on Computer Vision. Learning spatiotemporal features with 3D convolutional networks (IEEE, Santiago, 2015), pp. 4489–4497.
W. Dai, Y. Chen, C. Huang, M. Gao, X. Zhang, in 2019 International Joint Conference on Neural Networks (IJCNN). Two-stream convolution neural network with video-stream for action recognition (IEEE, Budapest, 2019), pp. 1–8. https://doi.org/10.1109/IJCNN.2019.8851702.
J. Xu, K. Tasaka, H. Yanagihara, in 2018 24th International Conference on Pattern Recognition (ICPR). Beyond two-stream: skeleton-based three-stream networks for action recognition in videos (IEEE, Beijing, 2018), pp. 1567–1573. https://doi.org/10.1109/ICPR.2018.8546165.
V. A. Chenarlogh, F. Razzazi, Multi-stream 3D CNN structure for human action recognition trained by limited data. IET Comp. Vision, 13(3), 338–344 (2019). https://doi.org/10.1049/iet-cvi.2018.5088.
L. Song, L. Weng, L. Wang, X. Min, C. Pan, in 2018 25th IEEE International Conference on Image Processing (ICIP). Two-stream designed 2D/3D residual networks with LSTMs for action recognition in videos (IEEE, Athens, 2018), pp. 808–812. https://doi.org/10.1109/ICIP.2018.8451662.
T. Lin, X. Zhao, Z. Fan, in 2017 IEEE International Conference on Image Processing (ICIP). Temporal action localization with two-stream segment-based RNN (IEEE, Beijing, 2017), pp. 3400–3404. https://doi.org/10.1109/ICIP.2017.8296913.
Y. Bengio, A. Courville, P. Vincent, Representation learning: a review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell., 35(8), 1798–1828 (2013). https://doi.org/10.1109/TPAMI.2013.50.
H.-B. Zhang, Y.-X. Zhang, B. Zhong, Q. Lei, L. Yang, J.-X. Du, D.-S. Chen, A comprehensive survey of vision-based human action recognition methods. Sensors, 19(5), 1005 (2019). https://doi.org/10.3390/s19051005.
Z. Qiu, T. Yao, T. Mei, in Proceedings of the IEEE International Conference on Computer Vision. Learning spatio-temporal representation with pseudo-3D residual networks (IEEE, Venice, 2017), pp. 5533–5541.
D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun, M. Paluri, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. A closer look at spatiotemporal convolutions for action recognition (IEEE, Salt Lake City, 2018), pp. 6450–6459.
C. Zhang, Y. Tian, X. Guo, J. Liu, DAAL: deep activation-based attribute learning for action recognition in depth videos. Comp. Vision Image Underst., 167, 37–49 (2018). https://doi.org/10.1016/j.cviu.2017.11.008.
Z. Shi, T.-K. Kim, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Learning and refining of privileged information-based RNNs for action recognition from depth sequences (IEEE, Honolulu, 2017), pp. 3461–3470.
C. Si, W. Chen, W. Wang, L. Wang, T. Tan, in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). An attention enhanced graph convolutional LSTM network for skeleton-based action recognition (IEEE, Long Beach, 2019), pp. 1227–1236. https://doi.org/10.1109/CVPR.2019.00132.
S. Yan, Y. Xiong, D. Lin, in Thirty-Second AAAI Conference on Artificial Intelligence. Spatial temporal graph convolutional networks for skeleton-based action recognition (AAAI, New Orleans, 2018).
M. Li, S. Chen, X. Chen, Y. Zhang, Y. Wang, Q. Tian, in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Actional-structural graph convolutional networks for skeleton-based action recognition (IEEE, Long Beach, 2019), pp. 3590–3598. https://doi.org/10.1109/CVPR.2019.00371.
H. Kim, H. Kim, E. Hwang, in 2019 IEEE International Conference on Big Data and Smart Computing (BigComp). Real-time facial feature extraction scheme using cascaded networks (IEEE, Kyoto, 2019), pp. 1–7.
H.-W. Kim, H.-J. Kim, S. Rho, E. Hwang, Augmented EMTCNN: a fast and accurate facial landmark detection network. Appl. Sci., 10(7), 2253 (2020). https://doi.org/10.3390/app10072253.
W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al., The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950 (2017).
K. Hara, H. Kataoka, Y. Satoh, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Can spatiotemporal 3D CNNs retrace the history of 2D CNNs and ImageNet? (IEEE, Salt Lake City, 2018), pp. 6546–6555.
K. Simonyan, A. Zisserman, in Advances in Neural Information Processing Systems. Two-stream convolutional networks for action recognition in videos (Neural Information Processing Systems Foundation, Montreal, 2014), pp. 568–576.
L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, L. Van Gool, in European Conference on Computer Vision. Temporal segment networks: towards good practices for deep action recognition (Springer, Amsterdam, 2016), pp. 20–36.
J. Wang, J. Jiao, L. Bao, S. He, Y. Liu, W. Liu, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Self-supervised spatio-temporal representation learning for videos by predicting motion and appearance statistics (IEEE, Long Beach, 2019), pp. 4006–4015.
N. Crasto, P. Weinzaepfel, K. Alahari, C. Schmid, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. MARS: motion-augmented RGB stream for action recognition (IEEE, Long Beach, 2019), pp. 7882–7891.
X. Wang, L. Gao, P. Wang, X. Sun, X. Liu, Two-stream 3-D ConvNet fusion for action recognition in videos with arbitrary size and length. IEEE Trans. Multimed., 20(3), 634–644 (2018).
J. Li, X. Liu, Z. Zong, W. Zhao, M. Zhang, J. Song, in The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, February 7-12, 2020. Graph attention based proposal 3D ConvNets for action detection (AAAI Press, New York, NY, USA, 2020), pp. 4626–4633. https://aaai.org/ojs/index.php/AAAI/article/view/5893.
J. Li, X. Liu, W. Zhang, M. Zhang, J. Song, N. Sebe, Spatio-temporal attention networks for action recognition and detection. IEEE Trans. Multimed., 1–1 (2020). https://doi.org/10.1109/TMM.2020.2965434.
J. Li, X. Liu, M. Zhang, D. Wang, Spatio-temporal deformable 3D ConvNets with attention for action recognition. Pattern Recog., 98, 107037 (2020). https://doi.org/10.1016/j.patcog.2019.107037.
A. Karpathy, G. Toderici, S. Shetty, T. Leung, F. F. Li, in 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Large-scale video classification with convolutional neural networks (IEEE, Columbus, 2014).
K. Soomro, A. R. Zamir, M. Shah, UCF101: a dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402 (2012).
H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, T. Serre, in 2011 International Conference on Computer Vision. HMDB: a large video database for human motion recognition (IEEE, Barcelona, 2011), pp. 2556–2563.
C. Gan, B. Gong, K. Liu, H. Su, L. J. Guibas, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Geometry guided convolutional neural networks for self-supervised video representation learning (IEEE, Salt Lake City, 2018), pp. 5589–5597.
Y. Zhu, Y. Long, Y. Guan, S. Newsam, L. Shao, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Towards universal representation for unseen action recognition (IEEE, Salt Lake City, 2018), pp. 9436–9445.
O. Köpüklü, N. Kose, A. Gunduz, G. Rigoll, Resource efficient 3D convolutional neural networks. arXiv preprint arXiv:1904.02422 (2019).
D. Kim, D. Cho, I. S. Kweon, Self-supervised video representation learning with space-time cubic puzzles. Proc. AAAI Conf. Artif. Intell., 33, 8545–8552 (2019).
The authors would like to thank the anonymous reviewers for their valuable and insightful comments on an earlier version of this manuscript.
This work is supported by the Natural Science Foundation of China (No. 61871196 and 61673186), the National Key Research and Development Program of China (No. 2019YFC1604700), the Natural Science Foundation of Fujian Province of China (No. 2019J01082), the Promotion Program for Young and Middle-aged Teachers in Science and Technology Research of Huaqiao University (ZQN-YX601), and the Subsidized Project for Postgraduates' Innovative Fund in Scientific Research of Huaqiao University (No. 18014083014).
Department of Computer Science and Technology, Huaqiao University, Xiamen, Fujian, China
Qinghongya Shi, Hong-Bo Zhang, Hao-Tian Ren, Ji-Xiang Du & Qing Lei
Fujian Key Laboratory of Big Data Intelligence and Security, Huaqiao University, Xiamen, Fujian, China
Xiamen Key Laboratory of Computer Vision and Pattern Recognition, Huaqiao University, Xiamen, Fujian, China
Qinghongya Shi
Hong-Bo Zhang
Hao-Tian Ren
Ji-Xiang Du
Qing Lei
All authors took part in the discussion of the work described in this paper. All authors read and approved the final manuscript.
Qinghongya Shi received the B.S. degree from Huaqiao University, China, in 2017, where she is currently pursuing the M.S. degree. Her research interests include image processing and computer vision.
Hong-Bo Zhang received a Ph.D. in Computer Science from Xiamen University in 2013. Currently, he is an associate professor with the School of Computer Science and Technology of Huaqiao University. He is the member of Fujian key laboratory of big data intelligence and security. His research interests include computer vision and pattern recognition.
Hao-Tian Ren received the B.S. degree from Huaqiao University, China, in 2018, where she is currently pursuing the M.S. degree. Her research interests include computer vision and machine learning.
Ji-Xiang Du received a Ph.D. in Pattern Recognition and Intelligent System from the University of Science and Technology of China (USTC), Hefei, China, in 2005. He is currently a professor at the School of Computer Science and Technology at Huaqiao University. He is the director of Fujian key laboratory of big data intelligence and security. His current research interests mainly include pattern recognition and machine learning.
Qing Lei received a Ph.D. from the Cognitive Science Department of Xiamen University, China. She joined the faculty of Huaqiao University in 2005. Her research interests include human motion analysis and object detection/recognition.
Correspondence to Hong-Bo Zhang.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Shi, Q., Zhang, HB., Ren, HT. et al. Consistent constraint-based video-level learning for action recognition. J Image Video Proc. 2020, 35 (2020). https://doi.org/10.1186/s13640-020-00519-1
Received: 06 April 2020
Accepted: 14 July 2020
Consistent constraint
3D CNN
Action recognition
Long Baseline Interferometric Observations of Circumstellar Dust Shells at 11 Microns
W.C. Danchi, L. Greenhill, M. Bester, C.G. Degiacomi, C.H. Townes, M.G. Wolfire
Journal: Symposium - International Astronomical Union / Volume 158 / 1994
Published online by Cambridge University Press: 19 July 2016, pp. 383-386
The spatial distribution of dust around a sample of well-known late-type stars has been studied with the Infrared Spatial Interferometer (ISI) located at Mt. Wilson. Currently operating with a single baseline as a heterodyne interferometer at 11.15 μm, the ISI has obtained visibility curves of these stars. Radiative transfer modeling of the visibility curves has yielded estimates of the inner radii of the dust shells, the optical depth at 11 μm, and the temperature of the dust at the inner radii. For stars in which the dust is resolved, estimates of the stellar diameter and temperature can also be made. Broadly speaking, two classes of stars have been found. One class has inner radii of their dust shells very close to the photospheres of the stars themselves (3–5 stellar radii) and at a higher temperature (~ 1200 K) than previously measured. This class includes VY CMa, NML Tau, IRC +10216, and o Ceti. For the latter two the visibility curves change with the luminosity phase of the star and new dust appears to form at still smaller radii during minimum luminosity. The second class of stars has dust shells with substantially larger inner radii and very little dust close to the stars, and includes α Ori, α Sco, α Her, R Leo, and χ Cyg. This indicates sporadic production of dust and no dust formation within the last several decades.
Extended Carbon Emission in the Galaxy: Dark Gas along the G328 Sightline
IAU Issue 315 - From Interstellar Clouds...
M. Burton, M. Ashley, C. Braiding, M. Freeman, C. Kulesa, M. Wolfire, D. Hollenbach, G. Rowell, J. Lau
Journal: Proceedings of the International Astronomical Union / Volume 11 / Issue S315 / August 2015
Published online by Cambridge University Press: 12 September 2016, E10
Print publication: August 2015
We present spectral data cubes of the [CI] 809 GHz, 12CO 115 GHz, 13CO 110 GHz and HI 1.4 GHz line emission from a ~1° region along the l = 328° (G328) sightline in the Galactic Plane. The [CI] data comes from the High Elevation Antarctic Terahertz telescope at Ridge A on the summit of the Antarctic plateau, where the extremely low levels of precipitable water vapour open atmospheric windows for THz observations. The CO data comes from the Southern Galactic Plane Survey being conducted with the Mopra telescope. Emission arises principally from gas in three spiral arm crossings along the sight line. The distribution of the emission in the CO and [CI] lines is found to be similar, with the [CI] slightly more extended, and both are enveloped in extensive HI. Spectral line ratios are similar across the entire extent of the Galaxy. However, towards the edges of the molecular clouds the [CI]/13CO and 12CO/13CO line ratios rise by ~ 50%, and the [CI]/Hi ratio falls by ~ 10%. We attribute this to sightlines passing predominantly through the surfaces of photodissociation regions (PDRs), where the carbon is found mainly as C or C+ rather than CO, while the gas is mostly molecular. This is the signature of dark molecular gas.
The Mopra Southern Galactic Plane CO Survey
Michael G. Burton, C. Braiding, C. Glueck, P. Goldsmith, J. Hawkes, D. J. Hollenbach, C. Kulesa, C. L. Martin, J. L. Pineda, G. Rowell, R. Simon, A. A. Stark, J. Stutzki, N. J. H. Tothill, J. S. Urquhart, C. Walker, A. J. Walsh, M. Wolfire
Journal: Publications of the Astronomical Society of Australia / Volume 30 / 2013
Published online by Cambridge University Press: 14 August 2013, e044
We present the first results from a new carbon monoxide (CO) survey of the southern Galactic plane being conducted with the Mopra radio telescope in Australia. The 12CO, 13CO, and C18O J = 1–0 lines are being mapped over the $l = 305^{\circ }\text{--} 345^{\circ }, b = \pm 0.5^{\circ }$ portion of the fourth quadrant of the Galaxy, at 35 arcsec spatial and 0.1 km s−1 spectral resolution. The survey is being undertaken with two principal science objectives: (i) to determine where and how molecular clouds are forming in the Galaxy and (ii) to probe the connection between molecular clouds and the 'missing' gas inferred from gamma-ray observations. We describe the motivation for the survey, the instrumentation and observing techniques being applied, and the data reduction and analysis methodology. In this paper, we present the data from the first degree surveyed, $l = 323^{\circ } \text{--} 324^{\circ }, b = \pm 0.5^{\circ }$ . We compare the data to the previous CO survey of this region and present metrics quantifying the performance being achieved; the rms sensitivity per 0.1 km s−1 velocity channel is ~1.5 K for ${\rm ^{12}CO}$ and ~0.7 K for the other lines. We also present some results from the region surveyed, including line fluxes, column densities, molecular masses, ${\rm ^{12}CO/^{13}CO}$ line ratios, and ${\rm ^{12}CO}$ optical depths. We also examine how these quantities vary as a function of distance from the Sun when averaged over the 1 square degree survey area. Approximately 2 × 106M⊙ of molecular gas is found along the G323 sightline, with an average H2 number density of $n_{\text{H}_2} \sim 1$ cm−3 within the Solar circle. The CO data cubes will be made publicly available as they are published.
PDRs and XDRs
M. Röllig, R. Simon, V. Ossenkopf, J. Stutzki, M.G. Wolfire
Journal: European Astronomical Society Publications Series / Volume 52 / 2011
Photodissociation regions (PDRs) are gas phases in which far-ultraviolet radiation plays a significant role in the heating and/or chemistry (Tielens & Hollenbach 1985). X-ray dissociation regions (XDRs) are regions in which X-rays dominate the heating and/or chemistry (Maloney et al. 1996). Much progress has been made in modeling PDRs (e.g., Röllig et al. 2007) and XDRs (e.g., Meijerink et al. 2007), and the models predict several line intensities and line ratios of atomic and molecular species that are good diagnostics for discriminating between the two. I will discuss the theoretical models in light of the most recent observations.
First Astronomical Detection of the CF+ Ion
D. A. Neufeld, P. Schilke, K. M. Menten, M. G. Wolfire, J. H. Black, F. Schuller, H. Müller, S. Thorwirth, R. Güsten, S. Philipp
Journal: Proceedings of the International Astronomical Union / Volume 1 / Issue S231 / August 2005
We report the first astronomical detection of the CF$^+$ (fluoromethylidynium) ion, obtained by recent observations of its $J = 1-0$ (102.6 GHz), $J = 2-1$ (205.2 GHz), and $J = 3-2$ (307.7 GHz) pure rotational emissions toward the Orion Bar. Our search for CF$^+$ — carried out using the IRAM 30m and APEX 12m telescopes—was motivated by recent theoretical models that predict CF$^+$ abundances of a $\rm few \times 10^{-10}$ in UV-irradiated molecular regions where C$^+$ is present. The measurements confirm the predictions. They provide support for our current theories of interstellar fluorine chemistry, which suggest that hydrogen fluoride should be ubiquitous in interstellar gas clouds. | CommonCrawl |
2019, 14: ⅴ-xxv. doi: 10.3934/jmd.2019v
Bill Veech's contributions to dynamical systems
Giovanni Forni 1, Howard Masur 2, and John Smillie 3
Department of Mathematics, University of Maryland, College Park, MD 20742, USA
Department of Mathematics, University of Chicago, 5734 S. University Ave., Chicago, IL 60637, USA
Mathematics Institute, University of Warwick, Coventry CV4 7AL, United Kingdom
Received February 10, 2019; Published February 2019
Citation: Giovanni Forni, Howard Masur, John Smillie. Bill Veech's contributions to dynamical systems. Journal of Modern Dynamics, 2019, 14: ⅴ-xxv. doi: 10.3934/jmd.2019v
Figure 1. Rauzy diagram from Veech's personal notes, June 21, 1977
Mauricio Achigar. Extensions of expansive dynamical systems. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020399
The Editors. The 2019 Michael Brin Prize in Dynamical Systems. Journal of Modern Dynamics, 2020, 16: 349-350. doi: 10.3934/jmd.2020013
Nitha Niralda P C, Sunil Mathew. On properties of similarity boundary of attractors in product dynamical systems. Discrete & Continuous Dynamical Systems - S, 2021 doi: 10.3934/dcdss.2021004
Toshiko Ogiwara, Danielle Hilhorst, Hiroshi Matano. Convergence and structure theorems for order-preserving dynamical systems with mass conservation. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3883-3907. doi: 10.3934/dcds.2020129
Peter Giesl, Zachary Langhorne, Carlos Argáez, Sigurdur Hafstein. Computing complete Lyapunov functions for discrete-time dynamical systems. Discrete & Continuous Dynamical Systems - B, 2021, 26 (1) : 299-336. doi: 10.3934/dcdsb.2020331
Alessandro Fonda, Rodica Toader. A dynamical approach to lower and upper solutions for planar systems "To the memory of Massimo Tarallo". Discrete & Continuous Dynamical Systems - A, 2021 doi: 10.3934/dcds.2021012
Stefan Siegmund, Petr Stehlík. Time scale-induced asynchronous discrete dynamical systems. Discrete & Continuous Dynamical Systems - B, 2021, 26 (2) : 1011-1029. doi: 10.3934/dcdsb.2020151
Hong Fu, Mingwu Liu, Bo Chen. Supplier's investment in manufacturer's quality improvement with equity holding. Journal of Industrial & Management Optimization, 2021, 17 (2) : 649-668. doi: 10.3934/jimo.2019127
Skyler Simmons. Stability of Broucke's isosceles orbit. Discrete & Continuous Dynamical Systems - A, 2021 doi: 10.3934/dcds.2021015
François Ledrappier. Three problems solved by Sébastien Gouëzel. Journal of Modern Dynamics, 2020, 16: 373-387. doi: 10.3934/jmd.2020015
Ugo Bessi. Another point of view on Kusuoka's measure. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020404
Jiahao Qiu, Jianjie Zhao. Maximal factors of order $ d $ of dynamical cubespaces. Discrete & Continuous Dynamical Systems - A, 2021, 41 (2) : 601-620. doi: 10.3934/dcds.2020278
Fanni M. Sélley. A self-consistent dynamical system with multiple absolutely continuous invariant measures. Journal of Computational Dynamics, 2021, 8 (1) : 9-32. doi: 10.3934/jcd.2021002
Mikhail I. Belishev, Sergey A. Simonov. A canonical model of the one-dimensional dynamical Dirac system with boundary control. Evolution Equations & Control Theory, 2021 doi: 10.3934/eect.2021003
Dmitry Dolgopyat. The work of Sébastien Gouëzel on limit theorems and on weighted Banach spaces. Journal of Modern Dynamics, 2020, 16: 351-371. doi: 10.3934/jmd.2020014
Chao Wang, Qihuai Liu, Zhiguo Wang. Periodic bouncing solutions for Hill's type sub-linear oscillators with obstacles. Communications on Pure & Applied Analysis, 2021, 20 (1) : 281-300. doi: 10.3934/cpaa.2020266
Chiun-Chuan Chen, Yuan Lou, Hirokazu Ninomiya, Peter Polacik, Xuefeng Wang. Preface: DCDS-A special issue to honor Wei-Ming Ni's 70th birthday. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : ⅰ-ⅱ. doi: 10.3934/dcds.2020171
Wenya Qi, Padmanabhan Seshaiyer, Junping Wang. A four-field mixed finite element method for Biot's consolidation problems. Electronic Research Archive, , () : -. doi: 10.3934/era.2020127
HTML views (1355)
Giovanni Forni Howard Masur John Smillie | CommonCrawl |
Emergence of microbial diversity due to cross-feeding interactions in a spatial model of gut microbial metabolism
Milan J. A. van Hoek and Roeland M. H. Merks (ORCID: orcid.org/0000-0002-6152-687X)
BMC Systems Biology, volume 11, Article number: 56 (2017)
The human gut contains approximately \(10^{14}\) bacteria, belonging to hundreds of different species. Together, these microbial species form a complex food web that can break down nutrient sources that our own digestive enzymes cannot handle, including complex polysaccharides, producing short chain fatty acids and additional metabolites, e.g., vitamin K. Microbial diversity is important for colonic health: Changes in the composition of the microbiota have been associated with inflammatory bowel disease, diabetes, obesity and Crohn's disease, and make the microbiota more vulnerable to infestation by harmful species, e.g., Clostridium difficile. To get a grip on the controlling factors of microbial diversity in the gut, we here propose a multi-scale, spatiotemporal dynamic flux-balance analysis model to study the emergence of metabolic diversity in a spatial gut-like, tubular environment. The model features genome-scale metabolic models (GEM) of microbial populations, resource sharing via extracellular metabolites, and spatial population dynamics and evolution.
In this model, cross-feeding interactions emerge readily, despite the species' ability to metabolize sugars autonomously. Interestingly, the community requires cross-feeding for producing a realistic set of short-chain fatty acids from an input of glucose. If we let the composition of the microbial subpopulations change during invasion of adjacent space, a complex and stratified microbiota evolves, with subspecies specializing on cross-feeding interactions via a mechanism of compensated trait loss. The microbial diversity and stratification collapse if the flux through the gut is enhanced to mimic diarrhea.
In conclusion, this in silico model is a helpful tool in systems biology to predict and explain the controlling factors of microbial diversity in the gut. It can be extended to include, e.g., complex nutrient sources, and host-microbiota interactions via the intestinal wall.
The human colon is a dense and diverse microbial habitat that contains hundreds of microbial species [1]. These species together form a community that breaks down complex polysaccharides into monosaccharides, which are then fermented further into short chain fatty acids (SCFAs) that are taken up by the host [2]. The composition of the intestinal microbiota and the topology of the community-level metabolic network formed by it [3] are associated with health and disease. For example, the microbiota produces the short-chain fatty acid butyrate, which has been proposed to lower the risk for colon cancer [2]. Inflammatory bowel disease (IBD) and obesity are correlated with gain or loss of enzymes in the periphery of the network [3], suggesting that in obese persons and in IBD patients the microbiota produces a different set of metabolic end-products. Topological analysis further found indications that microbiota of obese individuals have a more diverse set of enzymes to extract energy from the diet [3]. Patients with diarrhea-predominant irritable bowel syndrome show large temporal shifts in the composition of the microbiota [4].
The most important source of bacterial diversity in the colon is probably due to metabolic interactions between bacteria [5]. The main nutrient sources entering the colon are non-degraded polysaccharides, including resistant starch and cellulose, oligosaccharides, proteins and simple sugars [6]. In addition to these exogenous sources of sugar, the colonic epithelium secretes mucins, which are an important nutrient source for the microbiota [6].
In this paper we ask what mechanisms are responsible for the diversity of the gut microbiota. The structured environment and the diversity of undigested nutrient sources (e.g., the complex polysaccharides found in food fibers) in the gut have been shown to sustain diverse microbial communities [2, 7]. Interestingly, however, diverse ecosystems can also arise in homogeneous environments with only one primary resource [8–12]. For example, glucose-limited, continuous cultures of E. coli reproducibly evolve acetate cross-feeding within about 100 generations (see Ref. [11] and references therein). In these experiments, one subpopulation enhances its glucose uptake efficiency and secretes acetate as a waste product. The acetate then provides a niche for a second strain that can grow on low concentrations of acetate.
Mathematical modeling can help understand under what conditions such cross-feeding and diversification can emerge in homogeneous environments. In their isologous diversification model, Kaneko and Yomo [13, 14] studied sets of identical, chaotically oscillating metabolic networks that exchange metabolites via a common, shared medium. Although small populations of oscillators will easily synchronize with one another, larger populations will break up into specialized, synchronized sub-populations. Mathematical modeling has also given insight into the conditions that make specialization and cross-feeding beneficial from an evolutionary point of view. For example, cross-feeding can evolve if there exists a trade-off between uptake efficiency of the primary and secondary nutrient source [15], or if a trade-off exists between growth rate and yield [16]. In the absence of such metabolic trade-offs, cross-feeding can evolve if the enzymatic machinery required to metabolize all available nutrients is so complex that distributing enzymes across a number of species or strains becomes the more probable, 'easier' evolutionary solution [17].
These initial mathematical models included simplified or conceptual models of metabolism. More recently, it has become feasible to construct models of microbial communities based on genome-scale metabolic network models (reviewed in Ref. [18]). In these models, multiple species of bacteria interact with one another by modifying a common pool of metabolites. One class of models optimizes the bacterial and community growth rates in parallel, assuming flux-balance of the whole community at once [19] or iteratively within the individual bacteria and at the community level [20]. Such approaches can also include dynamic changes of the community-level constraints, including extracellular concentrations of metabolites [21].
To also capture the emergent population dynamics of bacterial communities due to secretion and uptake of metabolites by the bacteria, (static optimization-based) dynamic flux-balance analysis (dFBA) has been introduced [22]. dFBA couples the optimization-based flux-balance analysis (FBA) approach for modeling intracellular metabolism with an ordinary-differential equation (ODE) model for the metabolite concentrations in the substrate. These community models more closely approximate microbial metabolism than the initial, more abstract models, such that the results can be compared directly to experimental observations. For example, Tzamali and coworkers [23] used multispecies dFBA to compare the performance of metabolic mutants of E. coli in batch monoculture versus its performance in co-culture with an alternative mutant. Their model predicted co-cultures that were more efficient than their constituent species. Louca and Doebeli [24] proposed methodology to calibrate the bacterial models in such dynamic multispecies FBA approaches to data from experimental monocultures. By coupling these calibrated dynamical models of isolated strains of E. coli, the framework could reproduce experimentally observed succession of an ancestral monoculture of E. coli by a cross-feeding pair of specialists. Because these models assume direct metabolic coupling of all species in the model via the culture medium, the model best applies to well-mixed batch culture systems or chemostats. The more recent coupled dynamic multi-species dFBA and mass transfer models [18, 25–27], or briefly, spatial dFBA (sdFBA) models, are more suitable for modeling the gut microbiota. These spatial extensions of the multispecies dFBA approach couple multiple dFBA models to one another via spatial mass transport models (based on numerical solutions of partial-differential equations), such that bacteria can exchange metabolites with their direct neighbors.
In order to explore whether and under which circumstances a diverse microbial community can arise from a single nutrient source in the gut, here we extended the sdFBA approach to develop a multiscale model of collective, colonic carbohydrate metabolism and bacterial population dynamics and evolution in a gut-like geometry. To this end, we combined spatial models of population dynamics with genome-scale metabolic models (GEMs) for individual bacterial species and a spatial mass transport model. In addition to the sdFBA approaches, we extended the model with an "evolutionary" component, in order to allow for unsupervised diversification of the microbial communities. We inoculate the metabolic system with a meta-population of bacteria containing a set of available metabolic pathways. When, depending on the local availability of nutrients, the bacterial population expands into its local neighborhood, the metapopulation gains or loses metabolic pathways at random. We find that spatially structured microbial diversity emerges spontaneously in our model, starting from a single resource. This diversity depends on interspecies cross-feeding interactions.
A full multiscale model of the metabolism of the human gut would need to include around \(10^{14}\) individual bacteria belonging to hundreds of bacterial species, for which in many cases curated GEMs are unavailable. We thus necessarily resorted to a more coarse-grained approach, while maintaining some level of biological realism by constructing the model based on a validated, genome-scale metabolic network model of Lactobacillus plantarum [28]. Figure 1 gives an overview of the workflow of the paper. We first (1) constructed a metabolic model representing a subset of the gut microbiota, which we used for the dFBA model (2). We then asked to what extent cross-feeding can emerge in large communities of interacting and diversified bacteria, such as those found in the colon, using a dynamic multi-species metabolic modeling (DMMM) approach [18, 23, 29], which is an extension of the dynamic flux-balance analysis (dFBA) method [22, 30]. To this end, we constructed a well-mixed model of a bacterial consortium (3), by coupling 1000 of the dFBA models via a common, external exchange medium that allowed the bacteria to exchange a subset of the metabolites in the GEM. We initiated the exchange medium with a pulse of glucose, then observed the turn-over of glucose into a series of short-chain fatty acids (4), and quantified cross-feeding (5): the extent to which the bacteria exchanged metabolites via the common medium. Next we asked to what extent spatially diversified microbial communities can emerge in a tube-like environment (6), if the microbial communities are allowed to specialize to the local availability of metabolites. In the spatial model, the GEMs inside the bacteria were allowed to evolve. After running the model for a fixed time, we quantified how much the GEMs had diversified and performed local cross-feeding (7) and to what extent they had locally changed the external concentrations of metabolites (8), leading to stratification and niche formation.
Workflow of the modeling. (1) Construction of "metabacterium" model, based on a Lactobacillus plantarum GEM [28] extended with metabolic pathways commonly found in the gut microbiota; (2) dynamic flux-balance analysis model; (3) well-mixed community of "metabacteria" exchanging metabolites via a common medium; (4) observation of metabolites in the common medium; (5) measure cross-feeding coefficient; (6) spatial modeling in a gut-like environment with evolving "metabacteria"; (7) look for speciation and cross-feeding; (8) look for stratification of metabolic environment
Construction of a metabolic model representing a subset of the gut microbiota
We first constructed a hypothetical, but biologically-realistic "supra-organism" model [3, 31], called "metabacterium" here, that represents a sample of the gut microbial community in a single metabolic network model. For this preliminary, explorative study we used a GEM of Lactobacillus plantarum [28], a resident of the colon and a strain widely used for probiotics, and extended it with four key metabolic pathways of the intestinal microbial community: (1) propionate fermentation, (2) butyrate fermentation, (3) the acrylate pathway and (4) the Wood-Ljungdahl pathway. In future versions of our framework this network could be replaced by metabolic network models derived from metagenomic data [3] as they become available. The current, simplified network contains 674 reactions (Supplementary File 1), and compares well with consensus metabolic networks of carbohydrate fermentation in the colon [32, 33]. For a schematic overview of the key pathways including in the metabolic network, see Fig. 2 a.
a. Simplified scheme of central carbon metabolism of the GEM: 1) Glycolysis. 2) Lactate fermentation. 3) Propionate fermentation. 4) Acrylate pathway. 5) Pyruvate dehydrogenase. 6) Pyruvate formate-lyase. 7) Butyrate fermentation. 8) Acetate fermentation. 9) Acetogenesis via Wood-Ljungdahl pathway. 10) Ethanol fermentation. 11) Butyryl-CoA:acetate-CoA transferase. Pathways are reversible - arrow directions indicate the most common direction; b. Metabolite dynamics over time. At time 0 only glucose is available
The uptake and excretion rates of genome-scale metabolic networks can be calculated using constraint-based modeling. To represent diauxic growth, i.e., by-product secretion as a function of extracellular metabolite concentrations, we used an extension of FBA called Flux Balance Analysis with Molecular Crowding (FBAwMC) [34]. FBAwMC correctly predicts diauxic growth and the associated secretion of by-products in micro-organisms including E. coli, Saccharomyces cerevisiae [35], and L. plantarum [36]. As an additional, physiologically plausible constraint, FBAwMC assumes that only a finite number of metabolic enzymes fits into a cell, with each enzyme having a maximum metabolic turnover, \(V_{\max}\). For each reaction, FBAwMC requires a 'crowding coefficient', defined as the enzymatic volume needed to reach unit flux through that reaction; it is a measure of the protein cost of the reaction, so enzymes with low crowding coefficients have small molecular volume or catalyse fast reactions. Given a set of maximum input fluxes, FBAwMC predicts the optimal uptake and excretion fluxes as a function of the extracellular metabolite concentrations.
As FBAwMC optimizes growth rate, not growth yield as in standard FBA, it predicts a switch to glycolytic metabolism at high glucose concentrations, at which faster metabolism is obtained at suboptimal yield. Its accurate prediction of diauxic growth, together with by-product secretion as a function of extracellular metabolite concentrations, makes FBAwMC a suitable method for a microbial community model.
Metabolic diversity causes cross-feeding in a well-mixed system
To study the extent of cross-feeding emerging already from a non-evolving metabolic community of "metabacteria", we first set up a simulation of 1000 interacting metapopulations, where each subpopulation was initiated with a set of crowding coefficients selected at random from an experimentally determined distribution of crowding coefficients of Escherichia coli [35, 36], for lack of similar data sets for L. plantarum. The simulation was initiated with pure glucose and was run under anaerobic conditions. We then performed FBAwMC on all 1000 metapopulations, optimizing for ATP production rate as a proxy for growth rate. This yielded 1000 sets of metabolic input and output fluxes, \(\vec{F}_{i}\), and growth rates, \(\mu_{i}\), for all 1000 metapopulations. These were used to update the extracellular concentrations, \(\vec {M}\), and metapopulation sizes, \(X_{i}\), by performing a single finite-difference step of [23, 29]
$$ \frac{d\vec{M}}{dt}=\sum_{i}X_{i}\vec{F_{i}} $$
$$ \frac{dX_{i}}{dt}=\mu_{i}X_{i}. $$
with a timestep \(\Delta t=0.1\) h. After updating the environment in this way, we proceeded to the next simulation step.
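To make the update scheme concrete, the following sketch implements Eqs. 1–2 with a forward-Euler step. This is an illustration only, not the authors' Matlab/COBRA code; `fbawmc_fluxes` is a hypothetical stand-in for the FBAwMC linear program described in the Methods, and the uptake cap corresponds to the uptake bound of Eq. 10 below.

```python
import numpy as np

def simulate_well_mixed(fbawmc_fluxes, c0, n_pop=1000, dt=0.1, t_end=10.0, b0=1e-5):
    """Forward-Euler dFBA loop (Eqs. 1-2) for n_pop interacting metapopulations.

    fbawmc_fluxes(i, f_up_max) must return (F_i, mu_i): the exchange-flux
    vector of metapopulation i (mmol gDW^-1 h^-1; secretion positive,
    uptake negative) and its growth rate (h^-1).
    """
    c = np.asarray(c0, dtype=float)       # extracellular concentrations (mM)
    X = np.full(n_pop, b0)                # biomasses (g DW/l)
    t = 0.0
    while t < t_end:
        # cap uptake so no metabolite is overdrawn in one step (cf. Eq. 10)
        f_up_max = c / (dt * X.sum())
        dc = np.zeros_like(c)
        for i in range(n_pop):
            F_i, mu_i = fbawmc_fluxes(i, f_up_max)
            dc += X[i] * F_i              # Eq. 1: dM/dt = sum_i X_i * F_i
            X[i] += dt * mu_i * X[i]      # Eq. 2: dX_i/dt = mu_i * X_i
        c = np.maximum(c + dt * dc, 0.0)  # update the shared medium
        t += dt
    return c, X
```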
Figure 2 b shows how, in the simulation, the metabacteria modified the environment over time. The secondary metabolites that were produced mostly are acetate, butyrate, carbon dioxide, ethanol, formate, lactate and propionate. This compares well with the metabolites that are actually found in the colon [37] or in an in vitro model of the colon [38]. In the first 30 min of the simulation, the initial pulse of glucose is consumed and turned over into acetate (red), lactate (grey), formate (brown), and ethanol (yellow). These are then consumed again, and turned over into propionate (purple) via pathways 3 and 4 (Fig. 2 a) and into butyrate (blue) via pathways 7 and 11. CO2 also increases, due to the turnover of pyruvate into acetyl-CoA via pathway 5 (pyruvate dehydrogenase). After about two hours of simulated time, propionate and CO2 levels drop again due to the production of butyrate (blue): propionate is consumed by reversal of reactions 3 and 4; CO2 is consumed in pathway 9, which produces acetate from formate. The conversion of acetate back to acetyl-CoA then drives the production of butyrate; a surplus of acetyl-CoA is turned over into acetaldehyde and ethanol in pathway 10. Interestingly, formate and CO2 are produced at the same time; this rarely occurs in any single organism but does occur in this microbial consortium.
To test to what extent these results depend on the ability of the individual FBAwMC models to represent metabolic switching and overflow metabolism [34, 36], we also simulated the model using standard flux-balance analysis [39]. In this case, all glucose was converted into ethanol, whereas lactate and propionate did not appear in the simulation (Additional file 1: Figure S1). To test to what extent the results rely on cross-feeding, we also checked whether any of the single-species simulations could produce an equally diverse set of metabolites. Out of 100 single-species simulations, none produced as many excreted metabolite species as the interacting set of species.
Quantification of cross-feeding
Most of the metabolites were only transiently present in the medium, \(\vec {M}\), suggesting that the metabolites were re-absorbed and processed further by the bacteria. To quantify the amount of such cross-feeding in the simulations, we defined a cross-feeding factor, C(i), with i a species identifier. Let
$$\begin{array}{@{}rcl@{}} F_{\mathrm{up,tot}}(i,j)&\equiv& \int_{t=0}^{t_{\text{max}}} B(i,t)F_{\text{up}}(i,j,t)dt\\ F_{\mathrm{ex,tot}}(i,j)&\equiv& \int_{t=0}^{t_{\text{max}}} B(i,t)F_{\text{ex}}(i,j,t)dt \end{array} $$
be the total amount of metabolite j that species i consumes and excretes during the simulation. B(i,t) here equals the biomass of species i at time t. The amount of carbon species i gets via cross-feeding then equals,
$$ \begin{aligned} C(i)&=\sum\limits_{j}c_{\mathrm{C}}(j)\text{max}(F_{\mathrm{up,tot}}(i,j)-F_{\mathrm{ex,tot}}(i,j),0)\\ &\quad-6F_{\mathrm{up,tot}}(i,\text{glucose}). \end{aligned} $$
Here, \(c_{\mathrm{C}}(j)\) is the molar amount of carbon atoms per mol metabolite j (e.g., \(c_{\mathrm{C}}(\text{glucose})=6\)). If species i during the fermentation consumes more of metabolite j than it has produced, species i has cross-fed on metabolite j. We subtract the amount of glucose from the sum, because glucose is the primary nutrient source that is present at the start of the simulation. Now we can calculate the total amount of carbon the population acquires via cross-feeding, relative to the total amount of carbon taken up by the population
$$ C_{\text{rel}}=\frac{\sum_{i}C(i)}{\sum_{i}\sum_{j} c_{\mathrm{C}}(j)F_{\mathrm{up,tot}}(i,j)}. $$
If \(C_{\text{rel}}=0\), there is no cross-feeding. In that case, every species either consumes only glucose as carbon source or consumes only as much carbon from other metabolites as it has secreted itself. Conversely, if \(C_{\text{rel}}=1\), all carbon a species consumes during the simulation comes from non-glucose carbon sources excreted by other species. For the whole simulation \(C_{\text{rel}}=0.39\pm0.02\), indicating that 39% of all carbon consumed by the bacteria comes from cross-feeding. Cross-feeding was largest on lactate, CO2, acetate, ethanol, formate and propionate. Many of these metabolites are known to be involved in bacterial cross-feeding in the colon or cecum (for interconversion between acetate and lactate, see Ref. [40]; for interconversion between acetate and butyrate in the murine cecum, see Ref. [41]). In the original L. plantarum model we also find cross-feeding, but only on lactate and acetaldehyde (Additional file 2: Figure S2). Taken together, in agreement with previous computational studies that showed cross-feeding in pairs of interacting E. coli [23], these simulations show that cross-feeding interactions occur in coupled dynamic FBAwMC models.
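The cross-feeding factors are straightforward to compute from the recorded fluxes. The sketch below is a minimal illustration of Eqs. 3–5, assuming the time-integrated, biomass-weighted uptake and excretion totals have been stored in two arrays; the function name and array layout are not from the original code.

```python
import numpy as np

def cross_feeding(F_up_tot, F_ex_tot, carbon, glucose_idx):
    """Cross-feeding factors C(i) (Eq. 4) and C_rel (Eq. 5).

    F_up_tot, F_ex_tot: (n_species, n_metabolites) arrays with the
    time-integrated, biomass-weighted uptake and excretion (Eq. 3);
    carbon: carbon atoms per mol of each metabolite (6 for glucose);
    glucose_idx: column index of glucose.
    """
    net = np.maximum(F_up_tot - F_ex_tot, 0.0)   # net consumption only
    C = net @ carbon - carbon[glucose_idx] * F_up_tot[:, glucose_idx]
    C_rel = C.sum() / (F_up_tot @ carbon).sum()
    return C, C_rel
```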
Spatially explicit, evolutionary model
The well-mixed simulations showed that cross-feeding appears in populations of interacting metabacterial metabolic networks. However, this does not necessarily imply microbial diversity, because it is possible that the same metabacterium secretes metabolites into the substrate and reabsorbs them, in which case there would be no true cross-feeding. Furthermore, the previous section did not make clear whether cross-feeding will be ecologically stable under conditions where subpopulations of the supra-organisms are lost. In a spatially explicit model, cross-feeding possibly arises more easily and is easier to detect, as different metabolic functions can be performed at different locations [42]. We therefore developed a spatially explicit, multiscale evolutionary model of gut microbial metabolism. We initiate the simulation with a population of metapopulations of bacteria that can perform all metabolic functions under anaerobic conditions, just as in the well-mixed simulation. We then let the system evolve and study whether meta-populations of bacteria with specific metabolic roles evolve.
Figure 3 sketches the overall structure of our model. The model approximates the colon as the cross-section of a 150 cm long tube with a diameter of 10 cm. The tube is subdivided into patches of 1 cm\(^2\), each containing a uniform concentration of metabolites, and potentially a metapopulation of gut bacteria (hereafter called "metabacterium") (Fig. 3 a). Each metabacterium represents a small subpopulation (or 'metapopulation') of gut bacteria with diverse metabolic functions, and is modeled using a metabolic network model containing the main metabolic reactions found in the gut microbiota, as described above (Fig. 2 a). Based on the local metabolite concentrations, \(\vec {c}(\vec {x},t)\), the metabolic model delivers a set of exchange fluxes \(F_{i,n}\) and a growth rate, \(\mu (\vec {x})\), which is assumed to depend on the ATP production rate (Fig. 3 b; see "Methods" for detail). The metabolites disperse to adjacent patches due to local mixing, which we approximate by a diffusion process (Fig. 3 c), yielding
$$ \frac{d\vec{c}(\vec{x},t)}{d t}=\vec{F}(\vec{x},t)B(\vec{x},t) + \frac{D}{L^{2}} \sum\limits_{\vec{i}\in\text{NB}(\vec{x})} \left(\vec{c}(\vec{i},t)-\vec{c}(\vec{x},t)\right), $$
Setup of the simulation model of a metabolizing gut microbial community. The model represents a community of growing subpopulations of genetically identical bacteria. a The metabolism of each population is modeled using a unique, modified GEM of L. plantarum [28]; b Based on extracellular metabolite concentrations, the genome-scale model predicts the growth rate (r) of the subpopulation and the influx and efflux rates of a subset of 115 metabolites. These are used as derivatives for a partial-differential equation model describing the concentrations of extracellular metabolites, \(\partial c_{i}(\vec {x},t)/\partial t=F_{i}(\vec {x})+D\nabla ^{2}c(\vec {x},t)\); c The metabolites diffuse between adjacent grid sites, \(\vec {x}\). d The population is represented on a two-dimensional, tube-like structure, with periodic inputs of glucose. e To mimic advection of metabolites through the gut, the concentrations are periodically shifted to the right, until they f exit from the end of the tube. g The bacterial populations hop at random to adjacent grid sites; to mimic adherence to the gut wall mucus, bacterial populations are not advected, unless indicated otherwise. h Once the subpopulation has grown to twice its original size, it divides into an empty spot in the same lattice site, at which time the metabolic network is mutated. i Two subpopulations can live on one grid point; yellow indicates the presence of one subpopulation, and green the presence of two subpopulations. (Structural formulas: Licensed under Public domain via Wikimedia Commons; "Alpha-D-Glucopyranose" by NEUROtiker, also licenced under public domain via Wikipedia Commons)
where \(\vec {F}(\vec {x},t)\) is the flux of metabolites between the medium and the metabacterium, and the sum runs over the four nearest neighbors \(\text {NB}(\vec {x})\); dispersion is approximated by Fick's law, where D is a diffusion coefficient and L=1 cm the interface length between two adjacent patches. The local density of metabacteria, \(B(\vec {x})\) is given by
$$ \frac{d B(\vec{x},t)}{dt}=\mu(\vec{x},t)B(\vec{x},t). $$
To mimic meals, a pulse of glucose of variable magnitude enters the tube once every eight hours (Fig. 3 d). The metabolites move through the tube via a simplified model of advection: At regular intervals, all metabolites are shifted one patch (Fig. 3 e). Metabolites continuously leave the tube at the end through an open boundary condition (Fig. 3 f). To mimic peristaltic movements that locally mix the gut contents, metabacteria randomly hop to adjacent lattice sites (Fig. 3 g) and leave the gut only via random hops over the open boundary condition. In a subset of simulations, accelerated bowel movements are simulated by advecting the metabacteria together with the metabolites. To a first approximation, the tube walls are impermeable to the metabolites, a situation reflecting many in vitro models of the gut microbiota (reviewed in Ref. [43]); later versions of the model will consider more complex boundary conditions, including absorption of metabolites [44].
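A single transport step of Eq. 6 then amounts to a finite-difference diffusion update plus a one-patch shift. The sketch below illustrates this for one metabolite field; the grid orientation (axis 0 running from proximal to distal) and the parameter values are illustrative, not taken from Table 1.

```python
import numpy as np

def diffuse(c, D, dt, L=1.0):
    """One explicit diffusion step of Eq. 6 for an (nx, ny) field of a
    single metabolite; zero-flux side walls. D in cm^2/h, patch size
    L = 1 cm, time step dt in h (stability requires dt*D/L**2 < 1/4)."""
    cp = np.pad(c, 1, mode="edge")             # edge padding = zero-flux walls
    lap = (cp[2:, 1:-1] + cp[:-2, 1:-1] +
           cp[1:-1, 2:] + cp[1:-1, :-2] - 4.0 * c)
    return c + dt * D / L**2 * lap

def advect(c):
    """Shift all concentrations one patch toward the distal end (Fig. 3 e);
    the last column leaves through the open boundary (Fig. 3 f)."""
    out = np.zeros_like(c)
    out[1:, :] = c[:-1, :]                     # axis 0: proximal -> distal
    return out
```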
When the local biomass in a patch, \(B(\vec {x},t)\), has grown to twice its original value, the metapopulation expands into the second position on the grid point (Fig. 3 h). To mimic a local carrying capacity, the metapopulation does not spread out or grow any further if both positions in the patch are occupied. In the visualizations of the simulations, full patches are shown in green, singly occupied patches are shown in yellow, and empty patches are shown in black (Figs. 3 i and 4). During expansion, changes in the relative abundance of species may enhance or reduce the rate of particular reactions, or even delete them from the metapopulation completely. Similarly, metabolic reactions can be reintroduced due to resettling of metabolic species, e.g., from the gut wall mucus [45]. To mimic such changes in species composition of the metapopulation, during each expansion step, we delete enzymes from the metabolic network at random, reactivate enzymes at random, or randomly change crowding coefficients such that the metapopulation can specialize on one particular reaction or become a generalist.
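The mutation step applied at each expansion can be sketched as follows. The rates and the step factor for the crowding coefficients are illustrative placeholders, not the values from Table 1, and the dictionary layout is not from the original C++ code.

```python
import random

def mutate(reactions, p_del=0.001, p_act=0.001, p_cc=0.01, cc_step=1.25):
    """Mutate a metapopulation's network upon expansion (illustrative rates).

    reactions: dict mapping reaction id -> {'active': bool, 'cc': float}.
    Deleting a reaction mimics loss of the species carrying the enzyme;
    reactivation mimics resettling, e.g., from the gut wall mucus; changing
    a crowding coefficient mimics shifts in relative species abundance.
    """
    for rxn in reactions.values():
        if rxn['active'] and random.random() < p_del:
            rxn['active'] = False          # enzyme lost from the metapopulation
        elif not rxn['active'] and random.random() < p_act:
            rxn['active'] = True           # enzyme regained
        elif random.random() < p_cc:
            # specialize (lower cc) or generalize (higher cc) at random
            factor = cc_step if random.random() < 0.5 else 1.0 / cc_step
            rxn['cc'] *= factor
    return reactions
```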
Screenshot of the spatially explicit model. The proximal end of the colon is on the left, the distal end on the right; thus, nutrients flow from left to right. a Cells on the grid; at maximum 2 cells can be on the same grid point. Yellow: one cell present, green: 2 cells present. b Glucose concentration. Black: low concentrations, white: high concentrations. c Formate concentrations. In total, 115 extracellular metabolites are taken into account in the model
The crowding coefficients, as they appear in the flux-balance analysis with molecular crowding (FBAwMC) method that we used for this model, give the minimum cellular volume filled with enzymes required to generate a unit metabolic flux; they are determined by the \(V_{\max}\) of the enzyme and the enzyme volume [34]. Equivalently, in our metapopulation model, the crowding coefficient of a reaction is the minimum intracellular volume, averaged over all bacteria in the patch, that must be filled with enzymes in order to generate a unit flux through the reaction. It depends on the density of the enzyme in the bacteria and also on the corresponding values of \(V_{\max}\). Because the \(V_{\max}\) of a reaction can differ by orders of magnitude between species (see for example the enzyme database BRENDA [46]), the evolutionary dynamics in our model could drive the metabacteria to reduce all crowding coefficients concurrently, producing a highly efficient generalist. To introduce a biologically more realistic trade-off between metabolic rate and cost in terms of volume, we therefore included an experimentally observed trade-off between growth rate and growth yield among micro-organisms [47, 48]: Micro-organisms that grow fast have low growth yield and vice versa. We take this trade-off into account explicitly by imposing a maximal growth rate given the carbon uptake rate of the cells. This trade-off prevents the metabacteria from growing infinitely fast by mutating their crowding coefficients.
As an initial condition, we distribute metabacteria over the grid, each containing all available metabolic reactions, i.e., each metabacterium initially contains all bacterial "species" that the complete metabacterium represents. To reflect variability in the relative abundances of the bacterial species in each metabacterium the crowding coefficients are drawn at random from an experimental distribution as described above (Fig. 3 a).
Evolution of diversity due to metabolic cross-feeding
To evaluate the behavior of our model, we performed ten independent simulations. These show largely similar phenomenology; therefore we will first describe the progression of one representative simulation in detail, and then discuss differences with the other simulations. Figure 5 a shows the average number of metabolic reactions present in the metabacteria over time in the simulation. At t=0 all metabacteria still have all 674 reactions, but over time the number of available reactions gradually drops to below 200. This reduction of the number of metabolic genes could indicate a homogeneous population that is specialized, e.g., on fermentation of glucose, where most of the metabolic network is not used. An alternative explanation is that each of the metapopulations retains a subset of the full network, an indication of cross-feeding. The amount of cross-feeding will likely change over the tube: The metabacteria in the front have direct access to glucose, whereas the metabacteria further down in the tube may rely on the waste-products of those in front. We therefore determined a temporal average of the cross-feeding factors, \(C_{\text{rel}}\) (Eq. 5), at each position in the tube over t=3500 to t=4000, a time range at which most genes have been lost. The first observation to note is that in the spatial evolutionary simulations, the average cross-feeding factor \(C_{\text{rel}}\) has a higher value than in the well-mixed simulations. In this particular simulation, the spatial average cross-feeding factor at t=4000 is \(C_{\text{rel}}=0.65\pm0.09\), compared with \(C_{\text{rel}}=0.39\pm0.02\) in the well-mixed case (n=10). The cross-feeding factor for individual cells (\(C(i)\), Eq. 4) showed large population heterogeneity. As Fig. 5 b shows, the cross-feeding factor in the tube front is close to 0, indicating the presence of primary glucose consumers, while cross-feeding slowly increases towards the distal end until it almost reaches 1, indicating complete cross-feeding. Thus in the proximal end the bacteria rely mostly on the primary nutrient source, while cells near the distal end of the tube rely on cross-feeding. This observation is consistent for all simulations (see Additional file 3: Figure S3).
Outcome of the evolutionary simulations. a Population average and standard deviation of the number of enzymatic reactions ("genome size") over time. b Population average and standard deviation of the cross-feeding factor \(C(i)\) as a function of the position in the colon. The averages and standard deviations are over the vertical dimension and are calculated over the final part of the simulation, from 3500 h until 4000 h. For the graphs of the other simulations, see Additional file 3: Figure S3
Emergence of metabolic stratification
We next investigated the mechanism by which such cross-feeding emerges in the simulation. Additional file 4: Figure S4 plots the metabolite concentrations over evolutionary time for the simulation of Fig. 5. In this particular simulation, the concentrations of formate and lactate initially rise rapidly, after which they drop gradually. The butyrate concentrations increase over evolutionary time. In all simulations, the metabolite concentrations change gradually, but not necessarily following the same temporal pattern.
Figure 6 shows the spatial distribution of a set of key metabolites averaged over 2000 h to 4000 h of the representative simulation. Interestingly, the flow of metabolites through the colon, in interaction with the bacterial population, creates a spatially structured metabolic environment. The proximal end is dominated by the primary carbon source glucose (Fig. 6 a), with the peak in the average glucose concentration due to the periodic glucose input. Further down in the tube we find fermentation products, including lactate and ethanol, whereas the distal end contains high levels of acetate and CO2, showing that the metabacteria convert the glucose into secondary metabolites. Among these secondary metabolites, the levels of acetate (Fig. 6 b), ethanol (Fig. 6 e), formate (Fig. 6 f), lactate (Fig. 6 g) and propionate (Fig. 6 h) drop towards the distal end of the tube, so they are further metabolized by the metabacteria. In this particular simulation, butyrate and CO2 are not consumed and their concentrations increase monotonically towards the end. The small drop at the very distal end is caused by the metabolite outflow. The profiles of the other simulations were consistent with this representative simulation (Additional file 5: Figure S5). In all simulations, the proximal end was dominated by glucose. Further towards the end of the tube, zones of fermentation products developed as in the representative simulation, but the precise location of each product was different and not all products were present. Most notably, in two out of ten simulations butyrate was absent, and in two other simulations propionate was absent. Also, in three out of ten simulations lactate was more confined to the front of the tube (up to around 50 sites) than in the representative simulation.
Average metabolite concentrations along the colon. Averages are taken over the second half of the simulation (2000 h–4000 h). a Glucose. b Acetate. c Butyrate. d CO2. e Ethanol. f Formate. g Lactate. h Propionate
Metabacteria specialize on local metabolic niches
These results demonstrate that the metabacteria spatially structure their metabolic environment, generating a stratified structure of metabolic "niches" along the tube, each offering a separate set of metabolites. Therefore, we next asked whether this environmental structuring gives rise to metapopulations uniquely adapted to the microenvironment. We took computational samples of all metabacteria found in the tube between 3500 h and 4000 h, to average out the variations at the short timescale. We tested the growth rate of these samples (consisting of on average n≈1100 metabacteria) in six homogeneous metabolic environments, containing uniform concentrations of pure (1) glucose, (2) acetate, (3) formate, (4) lactate, and (5) propionate, and (6) a mixture of CO2 and H2. Figure 7 shows the average and standard deviation of the growth rates of the metabacteria in each of these six environments, as a function of the position from which they were sampled from the tube. Strikingly, the metabacteria near the distal end of the tube have lost their ability to grow on glucose (Fig. 7 a), indicating that they have specialized on secondary metabolites, including acetate (Fig. 7 b) and lactate (Fig. 7 e). Interestingly, in support of the conclusion that the metabacteria specialize on the metabolic niches generated by the population as a whole, the metabacteria sampled from the distal end on average grow faster on acetate and lactate than the metabacteria sampled from the front of the tube. Acetate and lactate are produced in the proximal colon and flow to the distal part of the tube, where the metabacteria can metabolize them; in the front of the tube acetate and lactate concentrations are lower, such that neutral drift effects can safely remove the corresponding metabolic pathways from the metabacteria. Remarkably, the metabacteria also grow on CO2, because of the presence of hydrogen gas, which allows growth on CO2 via the Wood-Ljungdahl pathway [49]. To further characterize the alternative metabolic modes occurring in the model, we clustered the population present at the end of the simulation (t = 4000 h) with respect to their maximum growth rates in the six environments (Fig. 8). Clearly, different metabolic "species" can be distinguished. One "species" can metabolize glucose, a second "species" can metabolize most secondary metabolites and a third "species" has specialized on acetate. Thus in our simulation model a number of functional classes appear along the tube, each specializing on its own niche in the full metabolic network.
Average growth rates along the colon. Averages are taken over the final part of the simulations (3500–4000 h). All growth rates are calculated in the presence of unlimited hydrogen gas, water, sodium, ammonium, phosphate, sulfate and protons. a Growth rate on glucose. b Growth rate on acetate. c Growth rate on CO2. d Growth rate on formate. e Growth rate on lactate. f Growth rate on propionate
Hierarchical clustering of all cells present at the end of the simulation, with respect to the growth rates on glucose, acetate, CO2, formate, lactate and propionate. Black indicates low growth rate, red high growth rate. We used [72] to perform the cluster analysis, with average linkage and a Euclidean distance metric
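The clustering of Figs. 8 and 10 can be reproduced with any standard hierarchical-clustering implementation. A minimal sketch with SciPy, assuming the maximal growth rates have been collected in a (cells × 6) matrix:

```python
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_species(growth, n_clusters=3):
    """growth: (n_cells, 6) array of maximal growth rates on glucose,
    acetate, CO2/H2, formate, lactate and propionate. Average linkage
    with a Euclidean metric, as in Figs. 8 and 10."""
    Z = linkage(growth, method="average", metric="euclidean")
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```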
Increased flux through the tube makes diversity collapse
From the results in the previous section, we conclude that the inherent spatial structuring of the colon results in separate niches. This allows the population to diversify, such that different "species" have different metabolic tasks. A recent population-wide metagenomics study of stool samples from the Flemish and Dutch population [50] showed that, among a range of life-style related factors and medicine use, the diversity of the human gut microbiota correlates strongest with the Bristol stool scale (BSS), a self-assessed indicator of the "softness" of the stool. The analysis showed that for softer stools (higher stool index, indicative of faster transit times [51]), the diversity of the gut microbiota was reduced [52]. To investigate whether transit time could also be correlated with reduced diversity in our model, we studied the effect of increased fluxes through the tube ("diarrhea"), by assuming that the supra-bacteria flow through the tube at the same rate as the metabolites do. Strikingly, the maximal growth rate of the cells then becomes independent of the position (Fig. 9). Again, we clustered the population present at the end of the simulation with respect to their maximum growth rates on glucose, acetate, H2 and CO2, formate, lactate and propionate (Fig. 10). In contrast to the simulations without cell flow, the population hardly diversifies. All supra-bacteria can grow on glucose, acetate, and H2 and CO2. Thus, our simulations suggest that increased transit speeds may contribute to a reduction of microbial diversity, by reducing the spatial heterogeneity in the gut and, consequently, the construction of ecological niches and cross-feeding interactions.
Average growth rates along the colon, when cells flow through the colon as fast as metabolites. Averages are taken over the final part of the simulations (3500–4000 h). All growth rates are calculated in the presence of unlimited hydrogen gas, water, sodium, ammonium, phosphate, sulfate and protons. a Growth rate on glucose. b Growth rate on acetate. c Growth rate on CO2. d Growth rate on formate. e Growth rate on lactate. f Growth rate on propionate
Hierarchical clustering of all cells present at the end of the simulation with cell flow, with respect to the growth rates on formate, CO2, propionate, lactate, glucose and acetate. Black indicates low growth rate, red high growth rate. We used [72] to perform the cluster analysis, with average linkage and a Euclidean distance metric
We have presented a coupled, dynamic multi-species FBA and mass-transfer model of the gut microbiota. We first studied a non-spatial variant of the model, in order to determine to what extent cross-feeding can emerge in a non-evolving, diverse population of metabacteria. The individual metabacteria in this model contain the major carbohydrate fermentation pathways in the colon. Starting from glucose as a primary resource, the model produced acetate, butyrate, carbon dioxide, ethanol, formate, lactate and propionate. These fermentation products compared well with the short-chain fatty acids found in the colon [37] or with those found in an in vitro model of the colon [38]. Our model generated these short-chain fatty acids only if it was run with FBAwMC and not with standard FBA, indicating that the individual metabacteria must be able to exhibit diauxic shifts. In FBAwMC these are due to rate-yield metabolic trade-offs [34, 36].
It has been argued that metabolic trade-offs in combination with mutational dynamics may already explain population diversity, as they select for suboptimal phenotypes with equally fit mutational neighbors, i.e., 'survival of the flattest' [53]. This mechanism may already sufficiently explain diversity in microbial ecosystems, suggesting that cross-feeding or spatial heterogeneity is not required for diversity. However, cross-feeding interactions exist in the gut [54, 55] and are likely to be an important factor in determining microbial diversity. Indeed, our spatially explicit, sdFBA model shows that already on a single food source a stratified structure of metabolic niches is formed, with glucose consumers in front, followed by strata inhabited by secondary and tertiary consumers.
Interestingly, these secondary and tertiary consumers specialized to their metabolic niche: Metabacteria sampled from the rear end of the tube could no longer grow on the primary resource glucose (Fig. 7 a), and they grew better on the secondary metabolite lactate than bacteria from the front did (Fig. 7 e). This specialization was mostly due to "gene loss", i.e., simplification of the metabolic networks. Interestingly, metabacteria with reduced genomes did not have a growth advantage in our model, yet they lost essential pathways required for metabolizing the primary resource. Such "trait loss without loss of function due to provision of resources by ecological interactions" [56] is indicative of an evolutionary mechanism known as compensated trait loss [56]. Note, however, that because smaller metabacteria did not have a growth advantage, the gene loss in our model is due to drift. Hence it differs from the Black Queen Hypothesis [57], which proposes that the saving of resources associated with gene loss accelerates the evolution of compensated trait loss. An interesting future extension of the model would consider the metabolic costs associated with the maintenance of metabolic pathways.
The formation of metabolic niches and the observed compensated trait loss required that the metabacteria can maintain their approximate position in the gut-like tube, e.g., by adhering to the gut wall or by sufficiently fast reproduction [52]. The microbial diversification did not occur if the metabacteria moved along with the flow of the metabolites, a situation resembling diarrhea. Decreased microbial diversity is often seen as causative for diarrhea, e.g., because it facilitates colonization by pathogenic species including Clostridium difficile [58]. Our model results suggest an additional, inverse causation, where accelerated transit reduces microbial diversity. Experimental studies are consistent with the idea that transit speed is causative for reduced diversity, but with a different mechanism: Microbiota sampled from softer stools (i.e., higher BSS and faster transit time) have higher growth potential, suggesting that faster transits favor fast-growing species [52]. A second potential strategy for preventing wash-out from the gut at high transit speeds is adherence to the gut wall, e.g., by the species of the P enterotype [52]. Thus these observations suggest that the reduction of microbial diversity at fast transits is due to selection for fast-growing or adherent species. Our computational model suggests an alternative hypothesis, namely that faster transit reduces the potential for bacterial cross-feeding, thus reducing the build-up of metabolic niches in the environment.
We have presented a coupled, dynamic multi-species FBA and mass-transfer model of the gut microbiota. We first studied a non-spatial variant of the model, in order to determine to what extent cross-feeding can emerge in a non-evolving, diverse population of metabacteria. The individual metabacteria in this model contain the major carbohydrate fermentation pathways in the colon. Starting from glucose as a primary resource, the model produced acetate, butyrate, carbon dioxide, ethanol, formate, lactate and propionate. We next discussed a spatial variant of the model in a gut-like environment, a tube in which the metabolites diffuse and advect from input to output, and the bacteria attach to the gut wall. This spatially explicit, sdFBA model was extended with models of bacterial population dynamics, and 'mutation' of the metabacteria due to the gain and loss of pathways from the local population. In this model, a stratified structure of metabolic niches formed, with glucose consumers in front, followed by strata inhabited by secondary and tertiary consumers that lost the ability to grow on the primary resource. Interestingly, the stratification, and hence niche formation and specialization, was lost if we increased transit speeds through the tube, to mimic diarrhea. Thus our model results suggest that enhanced transit speeds might contribute to the observation that softer stools (i.e., faster transit) have lower diversity [52].
Of course our model is a simplification, as it lacks many key features of the gut microbiota and of the gut itself. The metabacterium only contains a minimal subset of the metabolic pathways that are found in the gut microbiota. Future versions of our model could extend the current metabacterium model with additional metabolic pathways, e.g., methanogenesis or sulfate reduction. Adding multiple pathways would increase the number of potential cross-feeding interactions and improve the biological realism of the model. An alternative route that we are currently taking is to include multiple, alternative metabacteria, each representing a functional group in the human gut microbiota [59]. This will allow us to compare the metabolic diversification observed in our computational model with metagenomics data, or use the model to compare alternative enterotypes [60].
A further simplification of this first study of our model is that we have focused exclusively on glucose metabolism. Future versions of the model will also consider lipid and amino acid metabolism, allowing us to compare the effect of alternative "diets" and consider the break-down of complex polysaccharides present in plant-derived food fibers. Further extensions include more complex interactions with the gut wall, which is currently impenetrable, as in some in vitro models of the gut microbiota [61, 62]. Additional terms in Eq. 6 will allow us to study the effects of absorption of SCFAs from the gut lumen, of oxygen supply, and of the production of mucus by the gut wall [63].
Metabolic model
We converted the GEM of L. plantarum [28] to a stoichiometric matrix, S. Reversible reactions were replaced by forward and backward irreversible reactions. Next, we added four metabolic pathways that are crucial in carbohydrate fermentation in the colon, but are not present in the network: propionate fermentation, butyrate fermentation, the acrylate pathway and the Wood-Ljungdahl pathway. We used the KEGG database (http://www.genome.jp/kegg) [64] to add the necessary reactions. For the Wood-Ljungdahl pathway, we followed the review paper [49]. Additional file 6 lists all reactions and metabolites of the GEM, in particular those that we added to the GEM of L. plantarum.
To calculate the fluxes through the metabolic network as a function of the extracellular environment, we used flux-balance analysis with molecular crowding (FBAwMC) [34, 35]. Like standard FBA, FBAwMC assumes that all metabolites are at steady state:
$$ \frac{d\vec{x}}{dt}=\mathbf{S}\cdot \vec{f}=0, $$
where \(\vec {x}\) is a vector of all metabolites, \(\vec {f}\) is a vector describing the metabolic flux through each reaction in the network, and S is the stoichiometric matrix. FBAwMC attempts to find a solution \(\vec {f}\) of Eq. 8 that optimizes for an objective function under a set of constraints \(\vec {f}_{\text {lb}}\leq \vec {f}\leq \vec {f}_{\text {ub}}\), with \(\vec {f}_{\text {lb}}\) and \(\vec {f}_{\text {ub}}\) the lower and upper bounds of the fluxes. Furthermore, FBAwMC constrains the amount of metabolic enzymes in the cell. This leads to the following constraint
$$ \sum a_{i}f_{i}\leq V_{\text{prot}}, $$
where \(a_{i}\equiv \frac {Mv_{i}}{Vb_{i}}\) is the "crowding coefficient", M the cell mass, V the cell volume, \(v_{i}\) the molar volume of the enzyme catalysing reaction i, and \(b_{i}\) a parameter describing the proportionality between enzyme concentration and flux. For a derivation of Eq. 9 see Ref. [34]. \(V_{\text{prot}}\) is a constant (\(0\leq V_{\text{prot}}\leq 1\)) representing the volume fraction of macromolecules devoted to metabolic enzymes. We use a value of \(V_{\text{prot}}=0.2\), equal to the value used in [36] for other bacteria.
The crowding coefficients are not known for every reaction in the metabolic network. Therefore, following Vazquez and coworkers [35], crowding coefficients were chosen at random from a distribution of known crowding coefficients for E. coli, based on published molar volumes (MetaCyc [65]) and turnover numbers (BRENDA [46]). Both in the well-mixed simulations and in the spatially explicit simulations, we allowed for unlimited influx of hydrogen gas, water, sodium, ammonium, phosphate, sulfate and protons. To calculate the growth rate, we find a solution of Eq. 8 that maximizes the rate of ATP production, given the crowding constraint (Eq. 9). ATP production has been shown to be a good proxy for biomass production [66] and it allows us to avoid the additional complexity of, e.g., amino acid metabolism and vitamin metabolism. The growth rate μ was then calculated by dividing the ATP production rate by a factor of 27.2, the factor that was used for ATP in the biomass equation of the original L. plantarum model [28].
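Apart from the extra row of Eq. 9, FBAwMC is an ordinary linear program. The sketch below illustrates the optimization with scipy.optimize.linprog rather than the GLPK/COBRA tooling used for the actual simulations; the function name and arguments are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def fbawmc(S, a, f_lb, f_ub, atp_idx, v_prot=0.2):
    """Solve the FBAwMC linear program for irreversible fluxes f:
    maximize f[atp_idx] subject to S f = 0 (Eq. 8),
    sum_i a_i f_i <= V_prot (Eq. 9), and f_lb <= f <= f_ub.

    S: (n_met, n_rxn) stoichiometric matrix; a: crowding coefficients.
    Returns the optimal flux vector, or None if the problem is infeasible.
    """
    c = np.zeros(S.shape[1])
    c[atp_idx] = -1.0                                  # linprog minimizes, so negate
    res = linprog(c,
                  A_ub=np.asarray(a).reshape(1, -1), b_ub=[v_prot],  # Eq. 9
                  A_eq=S, b_eq=np.zeros(S.shape[0]),                 # Eq. 8
                  bounds=list(zip(f_lb, f_ub)),
                  method="highs")
    return res.x if res.success else None
```

The growth rate then follows by dividing f[atp_idx] by 27.2, per the conversion described above.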
Well-mixed model
Simulations of the well-mixed model are performed in Matlab, using the COBRA Toolbox [67]. We use an approach similar to Ref. [23] to model a population of cells in a well-mixed environment. We initiated 1000 cells with crowding coefficients for all their reactions set according to the experimental distribution of E. coli (see Section "Metabolic model"). We start with a total biomass concentration (B) of 0.01 gram dry weight/liter (g DW/l), divided equally over all 1000 metabacteria (i.e., \(\forall i\in[1,1000]: B_{i}(0)=10^{-5}\) g DW/l). At time t=0 we initiate the environment with a glucose concentration of 1.0 mM. At every time-step, the maximal uptake rate for each metabolite j is a function of its concentration, \(c_{j}(t)\), as,
$$ F_{\mathrm{up,max}}(j)={\frac{1}{\Delta t}}\frac{c_{j}(t)}{\sum_{i=1}^{1000}B_{i}(t)}. $$
We then perform FBAwMC for all 1000 supra-bacteria and update the concentrations of all metabolites that are excreted or taken up, as,
$$ c_{j}(t+\Delta t)=c_{j}(t)+\Delta t\sum\limits_{i=1}^{1000} F_{i,j} B_{i} $$
FBAwMC yields a growth rate $\mu_i$ for each metabacterium i, which is used to update the biomass as,
$$ B_{i}(t+\Delta t)=B_{i}(t)+ \mu_{i} B_{i}(t) \Delta t. $$
This procedure is continued until the metabacteria have stopped growing.
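Put together, the well-mixed scheme is a plain forward-Euler loop. A schematic rendering follows; the published runs used Matlab/COBRA, and C++ is used here only to keep one language with the spatial model, so the fbawmc() wrapper, the time step and the stopping test are illustrative assumptions rather than the authors' code.

```cpp
#include <vector>

const int N_CELLS = 1000;

// Assumed to be provided by the metabolic-model code: solves FBAwMC for
// metabacterium i under the given per-metabolite uptake bounds, fills its
// exchange fluxes and returns its growth rate mu_i.
double fbawmc(int i, const std::vector<double>& f_up_max,
              std::vector<double>& flux);

void run_well_mixed(std::vector<double>& c,   // metabolite concentrations, mM
                    std::vector<double>& B,   // biomass per metabacterium, gDW/l
                    double dt) {              // step size in h (illustrative)
    const int n_mets = (int)c.size();
    std::vector<double> f_up_max(n_mets);
    std::vector<std::vector<double>> F(N_CELLS, std::vector<double>(n_mets));
    bool growing = true;
    while (growing) {
        double B_tot = 0.0;
        for (double b : B) B_tot += b;
        // Uptake bound: a metabolite pool cannot be overdrawn in one step.
        for (int j = 0; j < n_mets; ++j)
            f_up_max[j] = c[j] / (dt * B_tot);

        growing = false;
        for (int i = 0; i < N_CELLS; ++i) {
            double mu = fbawmc(i, f_up_max, F[i]);
            for (int j = 0; j < n_mets; ++j)   // concentration update
                c[j] += dt * F[i][j] * B[i];
            B[i] += mu * B[i] * dt;            // biomass update
            if (mu > 1e-9) growing = true;     // stop once growth has ceased
        }
    }
}
```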
For the spatially explicit simulations, we developed a C++ package to perform constraint-based modeling using the GNU Linear Programming Kit (GLPK, http://www.gnu.org/software/glpk/) as the linear-programming tool. The multiscale, computational model of the gut microbiota was also developed in C++. It describes individual metabacteria, or "cells", living on a grid, each with its own, unique GEM. Nutrients enter the grid at one end, flow through the grid, diffuse over it and are consumed by the cells. Uptake and excretion of metabolites are calculated using the GEM in each cell. The cells divide at a rate proportional to the calculated ATP production rate and mutate upon division. We simulate a total time of 4000 h (equivalent to 80,000 time steps). A model description in pseudocode is given in Fig. 11. All parameters in the model are given in Table 1.
Pseudocode of the spatially explicit computational model
Table 1 Parameters of the spatially explicit model
We initialize the grid with cells that have the same metabolic network as in the well-mixed simulations, choosing the crowding coefficients of each reaction at random. We allow at most 2 cells to be present on each grid point; thus, per grid point there are two "slots" that can be empty or filled by a cell. At time t=0, we fill every slot of every grid point with a probability of 50% with a cell with random crowding coefficients. Because of the modeled population size (on the order of 1000 cells), each cell should be viewed as a metapopulation of bacteria that is representative of the local composition of the intestinal microbiota: i.e., a metabacterium.
Nutrient dynamics
We assumed that nutrients enter the colon every eight hours. In this study we consider glucose as the primary resource, because we want to focus on the bacterial diversity that can result from a single resource. Thus we assume that polysaccharides are already broken down to glucose. To allow for variability, we pick the amount of glucose from a normal distribution with a mean of 42 mmol and a relative standard deviation of 20%. This mean value is chosen such that, on the one hand, all nutrients are consumed during passage through the gut, while on the other hand it allows for a sufficiently large population size (≈1000 metabacteria).
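A possible rendering of the pulse draw; clamping rare negative draws to zero is our own guard, not stated in the text.

```cpp
#include <algorithm>
#include <random>

// 8-hourly nutrient pulse: size drawn from N(42 mmol, (0.2 * 42 mmol)^2).
double draw_glucose_pulse(std::mt19937& rng) {
    std::normal_distribution<double> pulse(42.0, 0.2 * 42.0);
    return std::max(0.0, pulse(rng));  // mmol, added at the proximal inlet
}
```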
The glucose is consumed by the metabacteria, according to their metabolic networks. These networks take into account 115 extracellular metabolites, whose dynamics are all modeled explicitly. The majority of these metabolites are never produced. Production and consumption of each metabolite are modeled using
$$ c_{i}(t+\Delta t)=c_{i}(t)+\Delta t \sum\limits_{n=1}^{2}(F_{i,n} V_{n} \text{DENS\_MAX}/4.0) $$
Thus, the concentration $c_i(t)$ of each metabolite i is updated each timestep $\Delta t$ according to the calculated influx/efflux, $F_{i,n}$, and cell volume, $V_n$, of the cells on the grid point (maximally 2). Fluxes in the metabolic network have unit $\text{mmol}\cdot\text{g DW}^{-1}\cdot\text{h}^{-1}$, whereas external metabolite concentrations are in $\text{mmol}\cdot\text{l}^{-1}$. To convert the fluxes to extracellular concentration changes, we therefore multiply by DENS_MAX, the maximum local density of bacterial cells in $\text{g DW}\cdot\text{l}^{-1}$, estimated as explained in Table 1. The division by four is because there can be at most 2 cells of volume 2 at one grid point. If a grid point is fully occupied with two metabacteria, the cell density at that point equals DENS_MAX; a high DENS_MAX thus results in large changes in extracellular concentrations due to exchange fluxes. We estimated DENS_MAX using an estimated bacterial density of $10^{14}$ cells/l, an estimated bacterial cell size of $10^{-16}$ l/cell and a cellular density of 100 g DW/l, i.e., $\text{DENS\_MAX} = 10^{14}\,\frac{\text{cells}}{\text{l}} \times 10^{-16}\,\frac{\text{l}}{\text{cell}} \times 100\,\frac{\text{g DW}}{\text{l}} = 1\ \text{g DW}\cdot\text{l}^{-1}$ [68, 69]. To prevent negative concentrations, the uptake per time step $\Delta t$ is capped at
$$ \text{MAX\_UPTAKE}_{i}=\frac{4.0 c_{i}}{\Delta t * \text{DENS\_MAX} * (V_{1}+V_{2})}. $$
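Taken together, the concentration update and the uptake cap fit in a short routine. The sketch below uses illustrative names and data layout; in the model the cap would be imposed as a flux bound handed to FBAwMC before the solve.

```cpp
#include <vector>

// Update the local metabolite pool of one grid point from the exchange
// fluxes of the (at most two) resident metabacteria.
void update_site(std::vector<double>& c,                    // mmol/l at this site
                 const std::vector<std::vector<double>>& F, // fluxes per cell
                 const std::vector<double>& V,              // cell volumes
                 double DENS_MAX, double dt) {
    const int n_cells = (int)V.size();                      // 0, 1 or 2
    for (size_t i = 0; i < c.size(); ++i) {
        double dc = 0.0;
        for (int n = 0; n < n_cells; ++n)
            dc += F[n][i] * V[n] * DENS_MAX / 4.0;
        c[i] += dt * dc;
    }
}

// Cap so a pool cannot go negative within one step.
double max_uptake(double c_i, double V1, double V2,
                  double DENS_MAX, double dt) {
    return 4.0 * c_i / (dt * DENS_MAX * (V1 + V2));
}
```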
Each metabolite flows through the colon: every 15 min, all metabolites are shifted one grid point to the right. This results in a passage time of 37.5 h, similar to observed colonic transit times (e.g., 39 hrs in [70]). Every metabolite is also dispersed due to turbulence and peristalsis. In the absence of exact data on dispersion coefficients, we simplify these processes to a diffusion process with an effective diffusion constant of 14×103 μ m 2/s for all metabolites. This dispersion coefficient is an order of magnitude higher than the diffusion constant of glucose in water, and provides a good balance between local mixing and maintaining sufficient differentiation in our simulations.
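The transport step in sketch form, reduced to 1-D along the tube axis for brevity (the model's grid also has a lateral dimension). The grid spacing dx is a placeholder; an explicit sweep like this is only stable when $D\,\Delta t/\Delta x^2 \leq 1/2$ in 1-D.

```cpp
#include <vector>

// One transport step: advective shift one site downstream, then an
// explicit diffusion sweep with zero-flux ends.
void transport(std::vector<double>& c, double D, double dt, double dx) {
    // Advection: proximal (index 0) -> distal; the distal column is washed out.
    for (int x = (int)c.size() - 1; x > 0; --x) c[x] = c[x - 1];
    c[0] = 0.0;  // fresh medium enters at the proximal end

    // Diffusion (dispersion due to turbulence and peristalsis).
    std::vector<double> next(c);
    const double k = D * dt / (dx * dx);
    for (size_t x = 1; x + 1 < c.size(); ++x)
        next[x] = c[x] + k * (c[x + 1] - 2.0 * c[x] + c[x - 1]);
    c.swap(next);
}
```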
FBAwMC yields a growth rate, $\mu_i$, for each metabacterium i using an empirical, auxiliary reaction [71]. The volume of the metabacterium is then updated as
$$ V_{i}(t+\Delta t)=V_{i}(t)+V_{i}(t) * \mu_{i} * \Delta t. $$
Cell death is taken into account in a density-dependent way. This stabilizes the population, ensuring that it neither grows too fast when nutrients are abundant nor dies out when nutrients are scarce. The death rate of a cell is calculated as follows:
$$ \text{DEATH\_RATE}=\text{DEATH\_BASAL}+\text{DEATH\_DENS}\,\frac{\text{TOTAL\_NEIGHBOURS}}{\text{MAX\_NEIGHBOURS}}, $$
where TOTAL_NEIGHBOURS is the total number of neighbours and MAX_NEIGHBOURS the maximum number of neighbours (17 in the centre of the grid, because there are 2 slots per grid point).
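The death rule as a function; treating the resulting rate as a per-step death probability is our reading of how it is applied, not something the text states explicitly.

```cpp
// Density-dependent death rate of one metabacterium.
double death_rate(int total_neighbours, int max_neighbours,
                  double death_basal, double death_dens) {
    return death_basal
         + death_dens * (double)total_neighbours / (double)max_neighbours;
}
```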
Next the metabacteria expand into the empty patch on the same grid point when their volume exceeds a value of 2. The volume of the parent metabacterium is then equally distributed over the two daughter metabacteria. During this expansion, three types of "mutations" can occur:
the complete deletion of a reaction, i.e., extinction of the species responsible for this reaction, with probability μ_DEL;
the reintroduction of metabolic pathways, corresponding to the invasion of the bacterium previously responsible for this pathway, with probability μ_BIRTH;
the strengthening or weakening of one of the pathways, corresponding to the relative growth or suppression of a bacterial species in the metapopulation, with probability μ_POINT.
To delete a reaction (a), the maximal flux through that reaction is set to 0. To reintroduce a reaction (b), we release the constraint by setting it to a practically infinite value (999999 mmol/(g DW·h)). A point mutation (c) corresponds to a change of the crowding coefficient ($a_i$ in Eq. 9) of that specific reaction, as
$$ a_{i,\text{new}}=a_{i,\text{old}}\cdot 10^{\text{step}}, $$
where step is selected at random from a normal distribution with mean 0 and standard deviation μ_POINT_STEP. Because the mutation acts multiplicatively, a large crowding coefficient takes a proportionally large mutation step; this is necessary because the crowding coefficients are approximately log-normally distributed [35, 36]. In this way, the metabacteria can specialize on certain reactions, i.e., the patch comes to contain only one or a few bacterial species.
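The three mutation types in sketch form; the parameter names follow Table 1, and the random-number plumbing and flux-bound representation are illustrative.

```cpp
#include <cmath>
#include <random>

// (c) Multiplicative point mutation of a crowding coefficient:
//     a_new = a_old * 10^step, step ~ N(0, MU_POINT_STEP).
double mutate_crowding(double a_old, double mu_point_step, std::mt19937& rng) {
    std::normal_distribution<double> step(0.0, mu_point_step);
    return a_old * std::pow(10.0, step(rng));
}

void delete_reaction(double& flux_ub)      { flux_ub = 0.0; }      // (a) extinction
void reintroduce_reaction(double& flux_ub) { flux_ub = 999999.0; } // (b) invasion
```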
A possible non-physical side effect of this approach is that all crowding coefficients evolve to a value of $a_i = 0$, in which case the growth rates would no longer be limited by enzymatic efficiency and the volume of the patch. In reality, bacteria must trade off growth rate and growth yield (see Fig. 12 and Refs. [47, 48]). To take this trade-off into account, we first calculate the total carbon uptake rate using FBAwMC as described above. We then calculate the maximal allowed growth rate, $\mu_{\max}$, belonging to that carbon uptake rate, using the empirical formula $\mu_{\max}=1/(3.9\,G_{\text{up}}+2.8)$ (i.e., the black curve in Fig. 12), and cap the growth rate μ at $\mu_{\max}$.
Derivation of the empirical formula for maximum growth rates as a function of the glucose uptake rate. Green squares are data from yeast species [48]; blue squares represent data from bacterial species [47]. The black, dashed curve is the maximum allowed growth rate given the glucose uptake rate, $G_{\text{up}}$. The empirical function is $\frac{1}{3.9G_{up}+2.8}$ and is designed such that all data points lie below it
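Applying the trade-off is a one-line cap; the envelope constants below come from the Fig. 12 caption.

```cpp
#include <algorithm>

// Rate-yield trade-off: cap mu at the empirical envelope of Fig. 12.
double cap_growth(double mu, double G_up) {
    const double mu_max = 1.0 / (3.9 * G_up + 2.8);
    return std::min(mu, mu_max);
}
```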
Cell movement
To model the cells' random movement over the grid, we loop over all grid points in random order. Every grid point has two "slots" that may or may not be occupied. Each slot, whether occupied or not, has a probability P_MOVECELL of exchanging its position with a randomly chosen slot in a randomly chosen neighboring grid point; the exchange only succeeds if the partner slot has not already moved this turn.
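A sketch of one movement sweep. The grid is reduced to 1-D and the visit order is left sequential for brevity, where the model randomizes it; the Slot type and array layout are illustrative.

```cpp
#include <random>
#include <utility>
#include <vector>

struct Slot { int species = -1; };  // -1 = empty

void move_step(std::vector<std::vector<Slot>>& grid,   // [site][2 slots]
               std::vector<std::vector<bool>>& moved,  // same shape, reset each sweep
               double p_movecell, std::mt19937& rng) {
    std::bernoulli_distribution try_move(p_movecell);
    std::uniform_int_distribution<int> pick_slot(0, 1);
    std::uniform_int_distribution<int> step(-1, 1);
    const int n = (int)grid.size();
    for (int x = 0; x < n; ++x)
        for (int s = 0; s < 2; ++s) {
            if (moved[x][s] || !try_move(rng)) continue;
            const int nx = x + step(rng);               // 1-D neighbour for brevity
            if (nx < 0 || nx >= n || nx == x) continue;
            const int ns = pick_slot(rng);
            if (moved[nx][ns]) continue;                // partner already moved
            std::swap(grid[x][s], grid[nx][ns]);        // empty slots swap too
            moved[x][s] = moved[nx][ns] = true;
        }
}
```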
An advection algorithm is introduced to model the flow of bacteria along the tube, with the parameter P_CELL_FLOW determining the advection velocity relative to the metabolite flow (see Section Nutrient dynamics). At each metabolite flow step (once every 15 min), with probability P_CELL_FLOW all cells shift one grid point to the right synchronously. That is, for the default value P_CELL_FLOW=0 the cells do not flow at all, whereas for P_CELL_FLOW=1 the cells flow at the same rate as the metabolites. We performed simulations with P_CELL_FLOW∈{0,0.5,1}. To mimic re-entry of bacterial species from the environment, we assume periodic boundary conditions: all cells that leave the distal end of the gut enter at the proximal end.
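The advection rule with periodic re-entry fits in a few lines; Column stands for whatever per-site cell container the model uses, and the rotate trick simply shifts everything right by one with wrap-around.

```cpp
#include <algorithm>
#include <random>
#include <vector>

// With probability p_cell_flow per transport step, all cells shift one
// site downstream; the distal column wraps around to the proximal end.
template <typename Column>
void advect_cells(std::vector<Column>& tube, double p_cell_flow,
                  std::mt19937& rng) {
    std::bernoulli_distribution flow(p_cell_flow);
    if (flow(rng))
        std::rotate(tube.rbegin(), tube.rbegin() + 1, tube.rend());
}
```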
BSS:
Bristol stool scale
dFBA:
Dynamic flux-balance analysis
DMMM:
Dynamic multi-species metabolic model
FBA:
Flux-balance analysis
FBAwMC:
Flux-balance analysis with molecular crowding
GEM:
GEnome-scale metabolic model
GLPK:
GNU linear programming kit
ODE:
Ordinary-differential equation
sdFBA:
Spatial dynamic flux-balance analysis
Bäckhed F, Ley RE, Sonnenburg JL, Peterson DA, Gordon JI. Host-bacterial mutualism in the human intestine. Science. 2005; 307(5717):1915–20. doi:10.1126/science.1104816.
Blaut M, Clavel T. Metabolic diversity of the intestinal microbiota: implications for health and disease. J Nutr. 2007; 137(3 Suppl 2):751–5.
Greenblum S, Turnbaugh PJ, Borenstein E. Metagenomic systems biology of the human gut microbiome reveals topological shifts associated with obesity and inflammatory bowel disease. P Natl Acad Sci USA. 2012; 109(2):594–9.
Durbán A, Abellán JJ, Jiménez-Hernández N, Artacho A, Garrigues V, Ortiz V, Ponce J, Latorre A, Moya A. Instability of the faecal microbiota in diarrhoea-predominant irritable bowel syndrome. FEMS Microbiol Ecol. 2013; 86(3):581–9.
Rabiu BA, Gibson GR. Carbohydrates: a limit on bacterial diversity within the colon. Biol Rev Camb Philos Soc. 2002; 77(3):443–53.
Cummings JH, Macfarlane GT. The control and consequences of bacterial fermentation in the human colon. J Appl Bacteriol. 1991; 70(6):443–59.
De Filippo C, Cavalieri D, Di Paola M, Poullet JB, Massart S, Collini S, Pieraccini G, Lionetti P. Impact of diet in shaping gut microbiota revealed by a comparative study in children from Europe and rural Africa. P Natl Acad Sci USA. 2010; 107(33):14691–6.
Helling RB, Vargas CN, Adams J. Evolution of Escherichia coli during growth in a constant environment. Genetics. 1987; 116(3):349–58.
Ko EP, Yomo T, Urabe I. Dynamic clustering of bacterial population. Physica D. 1994; 75:81–8.
Rosenzweig RF, Sharp RR, Treves DS, Adams J. Microbial evolution in a simple unstructured environment: genetic differentiation in Escherichia coli. Genetics. 1994; 137(4):903–17.
Treves DS, Manning S, Adams J. Repeated evolution of an acetate-crossfeeding polymorphism in long-term populations of Escherichia coli. Mol Biol Evol. 1998; 15(7):789–97.
Maharjan R, Seeto S, Notley-McRobb L, Ferenci T. Clonal adaptive radiation in a constant environment. Science. 2006; 313(5786):514–7. doi:10.1126/science.1129865.
Kaneko K, Yomo T. Isologous diversification: a theory of cell differentiation. B Math Biol. 1997; 59:139–96.
Kaneko K, Yomo T. Isologous Diversification for Robust Development of Cell Society. J Theor Biol. 1999; 199:243–56.
Doebeli M. A model for the evolutionary dynamics of cross-feeding polymorphisms in microorganisms. Popul Ecol. 2002; 44:59–70.
Pfeiffer T, Bonhoeffer S. Evolution of cross-feeding in microbial populations. Am Nat. 2004; 163(6):126–35. doi:10.1086/383593.
Crombach A, Hogeweg P. Evolution of resource cycling in ecosystems and individuals. BMC Evol Biol. 2009; 9:122. doi:10.1186/1471-2148-9-122.
Zomorrodi AR, Segrè D. Synthetic Ecology of Microbes: Mathematical Models and Applications. J Mol Biol. 2016; 428(Part B):837–61.
Khandelwal RA, Olivier BG, Röling WFM, Teusink B, Bruggeman FJ. Community Flux Balance Analysis for Microbial Consortia at Balanced Growth. PLoS ONE. 2013; 8(5):64567. doi:10.1371/journal.pone.0064567.s004.
Shoaie S, Ghaffari P, Kovatcheva-Datchary P, Mardinoglu A, Sen P, Pujos-Guillot E, de Wouters T, Juste C, Rizkalla S, Chilloux J, Hoyles L, Nicholson JK, Doré J, Dumas ME, Clement K, Bäckhed F, Nielsen J, Consortium MO. Quantifying Diet-Induced Metabolic Changes of the Human Gut Microbiome. Cell Metab. 2015; 22(2):320–31. doi:10.1016/j.cmet.2015.07.001.
Zomorrodi AR, Islam MM, Maranas CD. d-OptCom: Dynamic Multi-level and Multi-objective Metabolic Modeling of Microbial Communities. ACS Synth Biol. 2014; 3(4):247–57. doi:10.1021/sb4001307.
Mahadevan R, Edwards JS, Doyle III FJ. Dynamic flux balance analysis of diauxic growth in Escherichia coli. Biophys J. 2002; 83(3):1331–40.
Tzamali E, Poirazi P, Tollis IG, Reczko M. A computational exploration of bacterial metabolic diversity identifying metabolic interactions and growth-efficient strain communities. BMC Syst Biol. 2011; 5:167. doi:10.1186/1752-0509-5-167.
Louca S, Doebeli M. Calibration and analysis of genome-based models for microbial ecology. eLife. 2015; 4:e08208. doi:10.7554/eLife.08208.
Harcombe WR, Riehl WJ, Dukovski I, Granger BR, Betts A, Lang AH, Bonilla G, Kar A, Leiby N, Mehta P, Marx CJ, Segrè D. Metabolic resource allocation in individual microbes determines ecosystem interactions and spatial dynamics. Cell Rep. 2014; 7(4):1104–15.
Cole JA, Kohler L, Hedhli J, Luthey-Schulten Z. Spatially-resolved metabolic cooperativity within dense bacterial colonies. BMC Syst Biol. 2015; 9(1):395.
Chen J, Gomez JA, Höffner K, Phalak P, Barton PI, Henson MA. Spatiotemporal modeling of microbial metabolism. BMC Syst Biol. 2015; 10(1):21–1.
Teusink B, Wiersma A, Molenaar D, Francke C, de Vos WM, Siezen RJ, Smid EJ. Analysis of growth of Lactobacillus plantarum wcfs1 on a complex medium using a genome-scale metabolic model. J Biol Chem. 2006; 281(52):40041–8. doi:10.1074/jbc.M606263200.
Zhuang K, Izallalen M, Mouser P, Richter H, Risso C, Mahadevan R, Lovley DR. Genome-scale dynamic modeling of the competition between Rhodoferax and Geobacter in anoxic subsurface environments. ISME J. 2011; 5(2):305–16.
Varma A, Palsson BO. Stoichiometric flux balance models quantitatively predict growth and metabolic by-product secretion in wild-type Escherichia coli W3110. Appl Environ Microb. 1994; 60(10):3724–31.
Borenstein E. Computational systems biology and in silico modeling of the human microbiome. Brief Bioinform. 2012; 13(6):769–80.
Pryde SE, Duncan SH, Hold GL, Stewart CS, Flint HJ. The microbiology of butyrate formation in the human colon. FEMS Microbiol Lett. 2002; 217(2):133–9.
Binsl TW, De Graaf AA, Venema K, Heringa J, Maathuis A, De Waard P, Van Beek JHGM. Measuring non-steady-state metabolic fluxes in starch-converting faecal microbiota in vitro. Benef Microb. 2010; 1(4):391–405. doi:10.3920/BM2010.0038.
Beg QK, Vazquez A, Ernst J, de Menezes MA, Bar-Joseph Z, Barabasi AL, Oltvai ZN. Intracellular crowding defines the mode and sequence of substrate uptake by Escherichia coli and constrains its metabolic activity. P Natl Acad Sci USA. 2007; 104(31):12663–8. doi:10.1073/pnas.0609845104.
Vazquez A, Beg QK, Demenezes MA, Ernst J, Bar-Joseph Z, Barabasi AL, Boros LG, Oltvai ZN. Impact of the solvent capacity constraint on e. coli metabolism. BMC Syst Biol. 2008; 2:7. doi:10.1186/1752-0509-2-7.
van Hoek MJA, Merks RMH. Redox balance is key to explaining full vs. partial switching to low-yield metabolism. BMC Syst Biol. 2012; 6(1):22. doi:10.1186/1752-0509-6-22.
Cummings JH, Pomare EW, Branch WJ, Naylor CP, Macfarlane GT. Short chain fatty acids in human large intestine, portal, hepatic and venous blood. Gut. 1987; 28(10):1221–7.
de Graaf AA, Maathuis A, de Waard P, Deutz NEP, Dijkema C, de Vos WM, Venema K. Profiling human gut bacterial metabolism and its kinetics using [u-13c]glucose and nmr. NMR Biomed. 2010; 23(1):2–12. doi:10.1002/nbm.1418.
Price ND, Reed JL, Palsson BO. Genome-scale models of microbial cells: evaluating the consequences of constraints. Nat Rev Microbiol. 2004; 2(11):886–97.
Moens F, Verce M, De Vuyst L. Lactate- and acetate-based cross-feeding interactions between selected strains of lactobacilli, bifidobacteria and colon bacteria in the presence of inulin-type fructans. Int J Food Microbiol. 2017; 241:225–36.
den Besten G, Lange K, Havinga R, van Dijk TH, Gerding A, van Eunen K, Muller M, Groen AK, Hooiveld GJ, Bakker BM, Reijngoud DJ. Gut-derived short-chain fatty acids are vividly assimilated into host carbohydrates and lipids. AJP Gastrointest Liver Physiol. 2013; 305(12):900–10.
Pfeiffer T, Schuster S, Bonhoeffer S. Cooperation and competition in the evolution of ATP-producing pathways. Science. 2001; 292(5516):504–7.
Venema K, van den Abbeele P. Experimental models of the gut microbiome. Best Pract Res Cl Ga. 2013; 27(1):115–26.
Minekus M, Smeets-Peeters M, Bernalier A, Marol-Bonnin S, Havenaar R, Marteau P, Alric M, Fonty G, Huis in't Veld JH. A computer-controlled system to simulate conditions of the large intestine with peristaltic mixing, water absorption and absorption of fermentation products. Appl Microbiol Biot. 1999; 53(1):108–14.
Macfarlane S, Woodmansey EJ, Macfarlane GT. Colonization of Mucin by Human Intestinal Bacteria and Establishment of Biofilm Communities in a Two-Stage Continuous Culture System. Appl Environ Microb. 2005; 71(11):7483–92.
Chang A, Scheer M, Grote A, Schomburg I, Schomburg D. Brenda, amenda and frenda the enzyme information system: new content and tools in 2009. Nucleic Acids Res. 2009; 37(Database issue):588–92. doi:10.1093/nar/gkn820.
Fuhrer T, Fischer E, Sauer U. Experimental identification and quantification of glucose metabolism in seven bacterial species. J Bacteriol. 2005; 187(5):1581–90. doi:10.1128/JB.187.5.1581-1590.2005.
Merico A, Sulo P, Piskur J, Compagno C. Fermentative lifestyle in yeasts belonging to the Saccharomyces complex. FEBS J. 2007; 274(4):976–89.
Ragsdale SW, Pierce E. Acetogenesis and the wood-ljungdahl pathway of co(2) fixation. Biochim Biophys Acta. 2008; 1784(12):1873–98. doi:10.1016/j.bbapap.2008.08.012.
Falony G, Joossens M, Vieira-Silva S, Wang J, Darzi Y, Faust K, Kurilshikov A, Bonder MJ, Valles-Volomer M, Vandeputte D, Tito RY, Chaffron S, Rymenans L, Verspecht C, De Sutter L, Lima-Mendez G, D'hoe K, Jonckheere K, Homola D, Garcia R, Tigchelaar EF, Eeckhaudt L, Fu J, Henckaerts L, Zhernakova A, Wijmenga C, Raes J. Population-level analysis of gut microbiome variation. Science. 2016; 352(6285):560–4.
Lewis SJ, Heaton KW. Stool Form Scale as a Useful Guide to Intestinal Transit Time. Scand J Gastroentero. 1997; 32(9):920–4.
Vandeputte D, Falony G, Vieira-Silva S, Tito RY, Joossens M, Raes J. Stool consistency is strongly associated with gut microbiota richness and composition, enterotypes and bacterial growth rates. Gut. 2015; 65(1):57–62.
Beardmore RE, Gudelj I, Lipson DA, Hurst LD. Metabolic trade-offs and the maintenance of the fittest and the flattest. Nature. 2011; 472(7343):342–6. doi:10.1038/nature09905.
Falony G, Vlachou A, Verbrugghe K, De Vuyst L. Cross-feeding between bifidobacterium longum bb536 and acetate-converting, butyrate-producing colon bacteria during growth on oligofructose. Appl Environ Microb. 2006; 72(12):7835–41. doi:10.1128/AEM.01296-06.
Flint HJ, Duncan SH, Scott KP, Louis P. Interactions and competition within the microbial community of the human colon: links between diet and health. Environ Microbiol. 2007; 9(5):1101–11. doi:10.1111/j.1462-2920.2007.01281.x.
Ellers J, Toby Kiers E, Currie CR, McDonald BR, Visser B. Ecological interactions drive evolutionary loss of traits. Ecol Lett. 2012; 15(10):1071–82.
Morris JJ, Lenski RE, Zinser ER. The Black Queen Hypothesis: evolution of dependencies through adaptive gene loss. MBio. 2012; 3(2):e00036–12.
Chang JY, Antonopoulos DA, Kalra A, Tonelli A, Khalife WT, Schmidt TM, Young VB. Decreased Diversity of the Fecal Microbiome in Recurrent Clostridium difficile–Associated Diarrhea. J Infect Dis. 2008; 197(3):435–8.
Gill SR, Pop M, Deboy RT, Eckburg PB, Turnbaugh PJ, Samuel BS, Gordon JI, Relman DA, Fraser-Liggett CM, Nelson KE. Metagenomic analysis of the human distal gut microbiome. Science. 2006; 312(5778):1355–9. doi:10.1126/science.1124234.
Arumugam M, Raes J, Pelletier E, Le Paslier D, Yamada T, Mende DR, Fernandes GR, Tap J, Bruls T, Batto JM, Bertalan M, Borruel N, Casellas F, Fernandez L, Gautier L, Hansen T, Hattori M, Hayashi T, Kleerebezem M, Kurokawa K, Leclerc M, Levenez F, Manichanh C, Nielsen HB, Nielsen T, Pons N, Poulain J, Qin J, Sicheritz-Ponten T, Tims S, Torrents D, Ugarte E, Zoetendal EG, Wang J, Guarner F, Pedersen O, de Vos WM, Brunak S, Doré J, Antolín M, Artiguenave F, Blottiere HM, Almeida M, Brechot C, Cara C, Chervaux C, Cultrone A, Delorme C, Denariaz G, Dervyn R, Foerstner KU, Friss C, van de Guchte M, Guedon E, Haimet F, Huber W, van Hylckama-Vlieg J, Jamet A, Juste C, Kaci G, Knol J, Lakhdari O, Layec S, Le Roux K, Maguin E, Mérieux A, Melo Minardi R, M'rini C, Muller J, Oozeer R, Parkhill J, Renault P, Rescigno M, Sanchez N, Sunagawa S, Torrejon A, Turner K, Vandemeulebrouck G, Varela E, Winogradsky Y, Zeller G, Weissenbach J, Ehrlich SD, Bork P. Enterotypes of the human gut microbiome. Nature. 2011; 473(7346):174–80.
Molly K, Vande Woestyne M, Verstraete W. Development of a 5-step multi-chamber reactor as a simulation of the human intestinal microbial ecosystem. Appl Microbiol Biot. 1993; 39(2):254–8.
Molly K, Woestyne MV, Smet ID. Validation of the simulator of the human intestinal microbial ecosystem (SHIME) reactor using microorganism-associated activities. Microb Ecol Health D. 1994; 7(4):191–200.
Kashyap PC, Marcobal A, Ursell LK. Genetically dictated change in host mucus carbohydrate landscape exerts a diet-dependent effect on the gut microbiota. P Natl Acad Sci USA. 2013; 110(42):17059–64.
Kanehisa M, Goto S, Sato Y, Furumichi M, Tanabe M. Kegg for integration and interpretation of large-scale molecular data sets. Nucleic Acids Res. 2011; 40(Database issue):D109–14.
Caspi R, Foerster H, Fulcher CA, Kaipa P, Krummenacker M, Latendresse M, Paley S, Rhee SY, Shearer AG, Tissier C, Walk TC, Zhang P, Karp PD. The metacyc database of metabolic pathways and enzymes and the biocyc collection of pathway/genome databases. Nucleic Acids Res. 2008; 36(Database issue):623–31. doi:10.1093/nar/gkm900.
Schuetz R, Kuepfer L, Sauer U. Systematic evaluation of objective functions for predicting intracellular fluxes in Escherichia coli. Mol Syst Biol. 2007; 3:119. doi:10.1038/msb4100162.
Schellenberger J, Que R, Fleming RMT, Thiele I, Orth JD, Feist AM, Zielinski DC, Bordbar A, Lewis NE, Rahmanian S, Kang J, Hyduke DR, Palsson BO. Quantitative prediction of cellular metabolism with constraint-based models: the cobra toolbox v2.0. Nat Protoc. 2011; 6(9):1290–307. doi:10.1038/nprot.2011.308.
Norland S, Heldal M, Tumyr O. On the relation between dry matter and volume of bacteria. Microbial Ecol. 1987; 13:95–101.
Ley RE, Peterson DA, Gordon JI. Ecological and evolutionary forces shaping microbial diversity in the human intestine. Cell. 2006; 124(4):837–48. doi:10.1016/j.cell.2006.02.017.
Arhan P, Devroede G, Jehannin B, Lanza M, Faverdin C, Dornic C, Persoz B, Tétreault L, Perey B, Pellerin D. Segmental colonic transit time. Dis Colon Rectum. 1981; 24(8):625–9.
Orth JD, Thiele I, Palsson BØ. What is flux balance analysis? Nat Biotechnol. 2010; 28(3):245–8.
Guffanti A. Finding the needle in the haystack. Genome Biol. 2002; 3:reports2008. doi:10.1186/gb-2002-3-2-reports2008.
The authors thank Daniël Muysken for his critical reading of the manuscript. We thank SURFsara (http://www.surfsara.nl) for the support in using the Lisa Compute Cluster.
This work was financed by the Netherlands Consortium for Systems Biology (NCSB), which is part of the Netherlands Genomics Initiative/Netherlands Organisation for Scientific Research.
Availability of data and material
The dataset supporting the conclusions of this article is included within the article and its additional files.
MvH and RM designed the study and drafted the manuscript. MvH performed the simulations and analyzed the data. Both authors have read and approved the final version of the manuscript.
Milan J.A. van Hoek is Scientific Consultant for SysbioSim B.V. in Leiden, The Netherlands.
Life Sciences Group, Centrum Wiskunde & Informatica, Science Park 123, Amsterdam, 1098 XG, The Netherlands
Milan J. A. van Hoek & Roeland M. H. Merks
Mathematical Institute, Leiden University, Niels Bohrweg 1, Leiden, 2333, CA, The Netherlands
Roeland M. H. Merks
Milan J. A. van Hoek
Correspondence to Roeland M. H. Merks.
Figure S1. Simulation of the non-spatial, extended L. plantarum model using standard flux-balance analysis (FBA). Metabolite dynamics over time. The simulation is initialized with a pulse of glucose. Note that with standard FBA all 1000 cells behave identically, because the crowding coefficients are not used. (PDF 93 kb)
Figure S2. Simulation of the non-spatial, standard L. plantarum model using flux-balance analysis with molecular crowding (FBAwMC). Metabolite dynamics over time. The simulation is initialized with a pulse of glucose. (PDF 105 kb)
Figure S3. Population average and standard deviation of the cross-feeding factor $C_i$ as a function of the position in the colon for all n=10 runs. The averages and standard deviations are over the vertical dimension and are calculated over the final part of the simulation, from 3500 h until 4000 h. (PDF 259 kb)
Figure S4. Population averages of the metabolite concentrations over evolutionary time of the simulation in Fig. 5. (PDF 422 kb)
Figure S5. Average metabolite concentrations along the tube for all n=10 simulations. The averages are taken over the second half of the simulations, from 2000 h to 4000 h. (PDF 462 kb)
Microsoft Excel file with all reactions and metabolites of the genome-scale model of Lactobacillus plantarum [28], extended with propionate fermentation, butyrate fermentation, the acrylate pathway, and the Wood-Ljungdahl pathway. (XLS 94 kb)
Hoek, M.v., Merks, R. Emergence of microbial diversity due to cross-feeding interactions in a spatial model of gut microbial metabolism. BMC Syst Biol 11, 56 (2017). https://doi.org/10.1186/s12918-017-0430-4
Dynamic multi-species metabolic modeling
Intestinal microbiota
Multiscale modeling
Compensated trait loss
Microbial communities
Networks and information flow | CommonCrawl |
Arkiv för Matematik
Ark. Mat.
Volume 51, Number 2 (2013), 223-249.
An improved Riemann mapping theorem and complexity in potential theory
Steven R. Bell
We discuss applications of an improvement on the Riemann mapping theorem which replaces the unit disc by another "double quadrature domain," i.e., a domain that is a quadrature domain with respect to both area and boundary arc length measure. Unlike the classic Riemann mapping theorem, the improved theorem allows the original domain to be finitely connected, and if the original domain has nice boundary, the biholomorphic map can be taken to be close to the identity, and consequently, the double quadrature domain is close to the original domain. We explore some of the parallels between this new theorem and the classic theorem, and some of the similarities between the unit disc and the double quadrature domains that arise here. The new results shed light on the complexity of many of the objects of potential theory in multiply connected domains.
Research supported by the NSF Analysis and Cyber-enabled Discovery and Innovation programs, grant DMS 1001701.
Received: 7 October 2011
First available in Project Euclid: 1 February 2017
https://projecteuclid.org/euclid.afm/1485907216
2012 © Institut Mittag-Leffler
Bell, Steven R. An improved Riemann mapping theorem and complexity in potential theory. Ark. Mat. 51 (2013), no. 2, 223--249. doi:10.1007/s11512-012-0168-6. https://projecteuclid.org/euclid.afm/1485907216
doi: 10.3934/dcdsb.2021272
Existence and approximation of attractors for nonlinear coupled lattice wave equations
Lianbing She 1, Mirelson M. Freitas 2, Mauricio S. Vinhote 3 and Renhai Wang 4,*
School of Mathematics and Compute Science, Liupanshui Normal University, Liupanshui, Guizhou 553004, China
Faculty of Mathematics, Federal University of Pará, Raimundo Santana Cruz Street, S/N, 68721-000, Salinópolis, Pará, Brazil
Ph.D Program in Mathematics, Federal University of Pará, Augusto Corrêa Street, 01, 66075-110, Belém, Pará, Brazil
Institute of Applied Physics and Computational Mathematics, PO Box 8009, Beijing 100088, China
* Corresponding author: Renhai Wang ([email protected])
Received June 2021 Revised August 2021 Early access November 2021
Fund Project: Lianbing She was supported by the Science and Technology Foundation of Guizhou Province ([2020]1Y007), School level Foundation of Liupanshui Normal University(LPSSYKYJJ201801, LPSSYKJTD201907). Renhai Wang was supported by China Postdoctoral Science Foundation under grant numbers 2020TQ0053 and 2020M680456
This paper is concerned with the asymptotic behavior of solutions to a class of nonlinear coupled discrete wave equations defined on the whole integer set. We first establish the well-posedness of the systems in $ E: = \ell^2\times\ell^2\times\ell^2\times\ell^2 $. We then prove that the solution semigroup has a unique global attractor in $ E $. We finally prove that this attractor can be approximated in terms of upper semicontinuity of $ E $ by a finite-dimensional global attractor of a $ 2(2n+1) $-dimensional truncation system as $ n $ goes to infinity. The idea of uniform tail-estimates developed by Wang (Phys. D, 128 (1999) 41-52) is employed to prove the asymptotic compactness of the solution semigroups in order to overcome the lack of compactness in infinite lattices.
Keywords: Coupled discrete wave equations, global attractors, upper semicontinuity, asymptotic compactness.
Mathematics Subject Classification: Primary: 35B40, 35B41; Secondary: 37L30.
Citation: Lianbing She, Mirelson M. Freitas, Mauricio S. Vinhote, Renhai Wang. Existence and approximation of attractors for nonlinear coupled lattice wave equations. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2021272
A. Y. Abdallah, Uniform exponential attractors for first order non-autonomous lattice dynamical systems, J. Differ. Equ., 251 (2011), 1489-1504. doi: 10.1016/j.jde.2011.05.030.
P. W. Bates, H. Lisei and K. Lu, Attractors for stochastic lattice dynamical systems, Stoch. Dyn., 6 (2006), 1-21. doi: 10.1142/S0219493706001621.
P. W. Bates, K. Lu and B. Wang, Attractors for lattice dynamical systems, J. Bifur. Chaos Appl. Sci. Engrg., 11 (2001), 143-153. doi: 10.1142/S0218127401002031.
H. Cui and P. E. Kloeden, Invariant forward attractors of non-autonomous random dynamical systems, J. Differential Equations, 265 (2018), 6166-6186. doi: 10.1016/j.jde.2018.07.028.
H. Cui, J. A. Langa and Y. Li, Measurability of random attractors for quasi strong-to-weak continuous random dynamical systems, J. Dynam. Differential Equations, 30 (2018), 1873-1898. doi: 10.1007/s10884-017-9617-z.
S. N. Chow, J. M. Paret and W. Shen, Traveling waves in lattice dynamical systems, J. Differ. Equ., 149 (1998), 248-291. doi: 10.1006/jdeq.1998.3478.
T. Caraballo, A. N. Carvalho, J. A. Langa and F. Rivero, Existence of pullback attractors for pullback asymptotically compact processes, Nonlinear Anal., 72 (2010), 1967-1976. doi: 10.1016/j.na.2009.09.037.
T. Caraballo, I. D. Chueshov and P. E. Kloeden, Synchronization of a stochastic reaction-diffusion system on a thin two-layer domain, SIAM J. Math. Anal., 38 (2006/07), 1489-1507. doi: 10.1137/050647281.
T. Caraballo, B. Guo, N. H. Tuan and R. Wang, Asymptotically autonomous robustness of random attractors for a class of weakly dissipative stochastic wave equations on unbounded domains, Proc. Roy. Soc. Edinburgh Sect. A, (2020), 1–31. doi: 10.1017/prm.2020.77.
T. Caraballo, G. Lukaszewicz and J. Real, Pullback attractors for non-autonomous 2D Navier-Stokes equations in unbounded domains, C. R. Math. Acad. Sci. Paris, 342 (2006), 263-268. doi: 10.1016/j.crma.2005.12.015.
T. Caraballo, A. M. Mérquez-Durén and J. Real, Pullback and forward attractors for a 3D LANS-$\alpha$ model with delay, Discrete Contin. Dyn. Syst., 15 (2006), 559-578. doi: 10.3934/dcds.2006.15.559.
T. Caraballo, P. Marín-Rubio and J. Valero, Autonomous and non-autonomous attractors for differential equations with delays, J. Differential Equations, 208 (2005), 9-41. doi: 10.1016/j.jde.2003.09.008.
T. L. Carrol and L. M. Pecora, Synchronization in chaotic systems, Phys. Rev. Lett., 64 (1990), 821-824. doi: 10.1103/PhysRevLett.64.821.
T. Caraballo and J. Real, Navier-Stokes equations with delays, Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 457 (2001), 2441-2453. doi: 10.1098/rspa.2001.0807.
T. Erneux and G. Nicolis, Propagating waves in discrete bistable reaction diffusion systems, Physica D, 67 (1993), 237-244. doi: 10.1016/0167-2789(93)90208-I.
J. Huang, X. Han and S. Zhou, Uniform attractors for non-autonomous Klein-Gordon Schrödinger lattice systems, Appl. Math. Mech., 30 (2009), 1597-1607. doi: 10.1007/s10483-009-1211-z.
X. Han, Random attractors for stochastic sine-Gordon lattice systems with multiplicative white noise, J. Math. Anal. Appl., 376 (2011), 481-493. doi: 10.1016/j.jmaa.2010.11.032.
X. Han, Exponential attractors for lattice dynamical systems in weighted spaces, Discrete Contin. Dyn. Syst., 31 (2011), 445-467. doi: 10.3934/dcds.2011.31.445.
X. Han, Asymptotic dynamics of stochastic lattice differential equations: A review, Continuous and Distributed Systems II. Stud. Syst. Decis. Control, 30 (2015), 121-136. doi: 10.1007/978-3-319-19075-4_7.
X. Han, Random attractors for second order stochastic lattice dynamical systems with multiplicative noise in weighted spaces, Stoch. Dyn., 12 (2012), 1150024. doi: 10.1142/S0219493711500249.
X. Han, Asymptotic behaviors for second order stochastic lattice dynamical systems on Zk in weighted spaces, J. Math. Anal. Appl., 397 (2013), 242-254. doi: 10.1016/j.jmaa.2012.07.015.
X. Han and P. E. Kloeden, Attractors Under Discretisation, SpringerBriefs in Mathematics. BCAM SpringerBriefs. Springer, Cham; BCAM Basque Center for Applied Mathematics, Bilbao, 2017. doi: 10.1007/978-3-319-61934-7.
X. Han, P. E. Kloeden and S. Sonner, Discretisation of global attractors for lattice dynamical systems, J. Dynam. Differential Equations, 32 (2020), 1457-1474. doi: 10.1007/s10884-019-09770-1.
X. Han, W. Shen and S. Zhou, Random attractors for stochastic lattice dynamical systems in weighted spaces, J. Differ. Equ., 250 (2011), 1235-1266. doi: 10.1016/j.jde.2010.10.018.
J. P. Keener, Propagation and its failure in coupled systems of discrete excitable cells, SIAM J. Appl. Math., 47 (1987), 556-572. doi: 10.1137/0147038.
P. E. Kloeden and J. A. Langa, Flattening, squeezing and the existence of random attractors, Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 463 (2007), 163-181. doi: 10.1098/rspa.2006.1753.
P. E. Kloeden, P. Marín-Rubio and J. Real, Pullback attractors for a semilinear heat equation in a non-cylindrical domain, J. Differential Equations, 244 (2008), 2062-2090. doi: 10.1016/j.jde.2007.10.031.
P. E. Kloeden and T. Lorenz, Construction of nonautonomous forward attractors, Proc. Amer. Math. Soc., 144 (2016), 259-268. doi: 10.1090/proc/12735.
P. E. Kloeden and M. Rasmussen, Nonautonomous Dynamical Systems, vol. 176 of Mathematical Surveys and Monographs, American Mathematical Society, 2011. doi: 10.1090/surv/176.
P. E. Kloeden, J. Real and C. Sun, Pullback attractors for a semilinear heat equation on time-varying domains, J. Differential Equations, 246 (2009), 4702-4730. doi: 10.1016/j.jde.2008.11.017.
J. C. Robinson, Dimensions, Embeddings and Attractors, Cambridge Tracts in Mathematics, Cambridge University Press, Cambridge, 2011. doi: 10.1017/CBO9780511933912.
J. C. Robinson, Infinite-Dimensional Dynamical Systems: An Introduction to Dissipative Parabolic PDEs and the Theory of Global Attractors, Cambridge University Press, Cambridge, 2001. doi: 10.1007/978-94-010-0732-0.
J. C. Robinson, Global attractors: Topology and finite-dimensional dynamics, J. Dynam. Differential Equations, 11 (1999), 557-581. doi: 10.1023/A:1021918004832.
L. Shi, R. Wang, K. Lu and B. Wang, Asymptotic behavior of stochastic FitzHugh-Nagumo systems on unbounded thin domains, J. Differential Equations, 267 (2019), 4373-4409. doi: 10.1016/j.jde.2019.05.002.
B. Wang, Dynamics of systems on infinite lattices, J. Differential Equations, 221 (2006), 224-245. doi: 10.1016/j.jde.2005.01.003.
B. Wang, Attractors for reaction-diffusion equations in unbounded domains, Phys. D, 128 (1999), 41-52. doi: 10.1016/S0167-2789(98)00304-2.
B. Wang, Random attractors for the stochastic Benjamin-Bona-Mahony equation on unbounded domains, J. Differential Equations, 246 (2009), 2506-2537. doi: 10.1016/j.jde.2008.10.012.
B. Wang, Asymptotic behavior of non-autonomous fractional stochastic reaction-diffusion equations, Nonlinear Anal., 158 (2017), 60-82. doi: 10.1016/j.na.2017.04.006.
B. Wang, Weak pullback attractors for stochastic Navier-Stokes equations with nonlinear diffusion terms, Proc. Amer. Math. Soc., 147 (2019), 1627-1638. doi: 10.1090/proc/14356.
B. Wang, Sufficient and necessary criteria for existence of pullback attractors for non-compact random dynamical systems, J. Differential Equations, 253 (2012), 1544-1583. doi: 10.1016/j.jde.2012.05.015.
B. Wang, Random attractors for non-autonomous stochastic wave equations with multiplicative noise, Discrete Contin. Dyn. Syst., 34 (2014), 269-300. doi: 10.3934/dcds.2014.34.269.
B. Wang, Weak pullback attractors for mean random dynamical systems in Bochner spaces, J. Dynam. Differential Equations, 31 (2019), 2177-2204. doi: 10.1007/s10884-018-9696-5.
B. Wang, Dynamics of fractional stochastic reaction-diffusion equations on unbounded domains driven by nonlinear noise, J. Differential Equations, 268 (2019), 1-59. doi: 10.1016/j.jde.2019.08.007.
B. Wang, Asymptotic behavior of stochastic wave equations with critical exponents on $\mathbb{R}^{3}$, Trans. Amer. Math. Soc., 363 (2011), 3639-3663. doi: 10.1090/S0002-9947-2011-05247-5.
C. Wang, G. Xue and C. Zhao, Invariant Borel probability measures for discrete long-wave-short-wave resonance equations, Appl. Math. Comp., 339 (2018), 853-865. doi: 10.1016/j.amc.2018.06.059.
R. Wang, Long-time dynamics of stochastic lattice plate equations with nonlinear noise and damping, J. Dynam. Differential Equations, 33 (2021), 767-803. doi: 10.1007/s10884-020-09830-x.
R. Wang, B. Guo and B. Wang, Well-posedness and dynamics of fractional FitzHugh-Nagumo systems on $\mathbb{R}^N$ driven by nonlinear noise, Sci. China Math., 64 (2021), 2395-2436. doi: 10.1007/s11425-019-1714-2.
R. Wang, Y. Li and B. Wang, Random dynamics of fractional nonclassical diffusion equations driven by colored noise, Discrete Contin. Dyn. Syst., 39 (2019), 4091-4126. doi: 10.3934/dcds.2019165.
R. Wang, Y. Li and B. Wang, Bi-spatial pullback attractors of fractional non-classical diffusion equations on unbounded domains with $(p, q)$-growth nonlinearities, Appl. Math. Optim., 84 (2021), 425-461. doi: 10.1007/s00245-019-09650-6.
R. Wang, L. Shi and B. Wang, Asymptotic behavior of fractional nonclassical diffusion equations driven by nonlinear colored noise on $\mathbb{R}^N$, Nonlinearity, 32 (2019), 4524-4556. doi: 10.1088/1361-6544/ab32d7.
R. Wang and B. Wang, Random dynamics of $p$-laplacian lattice systems driven by infinite-dimensional nonlinear noise, Stochastic Process. Appl., 130 (2020), 7431-7462. doi: 10.1016/j.spa.2020.08.002.
C. Zhao, H. Jiang and T. Caraballo, Statistical solutions and piecewise Liouville theorem for the impulsive reaction-diffusion equations on infinite lattices, Appl. Math. Comput., 404 (2021), 126103. doi: 10.1016/j.amc.2021.126103.
C. Zhao, G. Xue and G. Lukaszewicz, Pullback attractors and invariant measures for discrete Klein-Gordon-Schrödinger equations, Discrete Contin. Dyn. Syst. B, 23 (2018), 4021-4044. doi: 10.3934/dcdsb.2018122.
S. Zhou, Attractors for second order lattice dynamical systems, J. Differential Equations, 179 (2002), 605-624. doi: 10.1006/jdeq.2001.4032.
S. Zhou, Attractors and approximations for lattice dynamical systems, J. Differential Equations, 200 (2004), 342-368. doi: 10.1016/j.jde.2004.02.005.
S. Zhou and X. Han, Pullback exponential attractors for non-autonomous lattice systems, J. Dynam. Differential Equations, 24 (2012), 601-631. doi: 10.1007/s10884-012-9260-7.
S. Zhou, C. Zhao and Y. Wang, Finite dimensionality and upper semicontinuity of compact kernel sections of non-autonomous lattice systems, Discrete Contin. Dyn. Syst., 21 (2008), 1259-1277. doi: 10.3934/dcds.2008.21.1259.
V. Z. Zhu, Y. Sang and C. Zhao, Pullback attractor and invariant measures for the discrete Zakharov equations, J. Appl. Anal. Comput., 9 (2019), 2333-2357. doi: 10.11948/20190091.
Knights, Knaves and Normals
In the kingdom of Boolistan, every inhabitant is either a Knight, Knave or Normal. Knights can only make true statements, Knaves can only lie, and Normals must either tell the truth or lie.
Warmup: The local tavern only allows Normals (no one can relax around Knights and Knaves). What can a Normal say to prove their identity?
Challenge: Only knights can dine at King Arthur's Round Table. What can a Knight say to prove their identity?
Remarks: In conventional logic, where every statement is either true or false, the challenge is impossible (since Normals can say anything). To make this doable, we allow circular self-referential statements, like the famous example, "this statement is false". Formally, a circular self-referential statement is an equation of the form $$ s = f(x_1,\dots,x_n,s) $$ where $x_1,\dots,x_n$ are grounded logical propositions (like "I am a Knave"), $f$ is a Boolean function, and $s$ is a Boolean variable. We say that such a statement is True if setting $s=$ True makes the equation hold, and similarly say it is False if $s=$ False is a solution. This means some such statements are both True and False, while others are neither. For example, "this statement is false" would be the equation $s=\neg s$, which has no solutions, so is neither True nor False. On the other hand, "this statement is true" would be $s=s$, which is both True and False.
We then allow Knights to say any True statement and Knaves to say any False statement, while Normals can say any statement that is True or False (or both).
logical-deduction liars
vaxquis
$\begingroup$ Must any given Normal always tell the truth or always lie, or can they always do either? $\endgroup$ – evankh Jul 9 '15 at 6:06
$\begingroup$ @knave Good thing we have a Knave on the spot in case we need to know anything about them! ;-) $\endgroup$ – Rand al'Thor Jul 9 '15 at 7:13
$\begingroup$ For the challenge, I like "If you claim I am not a knight, I would have to kill you for the insult." Doesn't meet the criteria, but it might work well enough to get you a seat at the table. $\endgroup$ – user3294068 Jul 9 '15 at 14:55
$\begingroup$ @Mike my sincere thanks for fixing this puzzle. With your current setting, both your reasoning and the answers are correct. My last & only concern is the use of the word "self-referential" here. As Smullyan discussed in problem 255, it's completely OK for a statement to be self-referential - it's the circularity invoked by using a recursive statement that makes it ungrounded. Also, the formula you proposed for $s$ is unambiguously a recursion (en.wikipedia.org/wiki/Recursion) - that's why I'd call them "recursive" or "circular"/"self-dependent", not "self-referential". $\endgroup$ – vaxquis Jul 11 '15 at 17:04
For the warmup:
"I am a knave"
Should do it.
For the challenge:
"If I am not a knight, this is a lie"
This statement works if and only if the speaker is a knight, as otherwise it leads to a logical paradox, which is neither true nor false.
March Ho
frodoskywalker
$\begingroup$ I don't see how the second answer precludes a Normal (could just be that I took formal logic too long ago...)? $\endgroup$ – Emerson Jul 9 '15 at 5:25
$\begingroup$ After rereading, I realised that this answer was identical in logical form to my (now deleted) answer. Edited this answer with an explanation. $\endgroup$ – March Ho Jul 9 '15 at 7:04
$\begingroup$ It's not clear to me that the challenge answer is something a knight can say. Knights must speak in truths, I thought. (Which is different from "not lying - I took it to mean that they can't say things that are neither truth nor lie.) I'd think "truth requires" a factual, actual statement. This doesn't seem to fit that, since the second clause can't be true/false, since the "this" in the quote - the sentence, has no clear assertion. WHAT is a lie if the speaker is not a knight? (PS- I suspect I'm about to learn I suck at logic.) $\endgroup$ – Jaydles Jul 9 '15 at 16:06
$\begingroup$ @vaxquis The question says that Knaves can only lie and that Normals can only tell the truth and lie. That implies that they are not able to make ungrounded statements. $\endgroup$ – Rob Watts Jul 9 '15 at 20:10
$\begingroup$ @vaxquis I was not aware of that. That means that instead of Smullyan making "a subtle error in his logic", OP has a different requirement for knights, knaves, and normals - they can only make statements with a truth value. In other words, Smullyan's knights just can't lie, while OP's knights must tell the truth. Do you know the specific wording that Smullyan used? $\endgroup$ – Rob Watts Jul 9 '15 at 20:21
"I can lie."
For knights this is false, and for knaves this is true, so only Normals can say it.
Alternatively:
"I am a Normal." followed by "I am not a Normal."
Or any other pair of one truth and one falsehood. Normals are the only ones who can both lie and tell the truth.
"If I am not a Knight, this is false."
Simply causes a paradox if the speaker is not a Knight. Since a paradox is not a truth nor a lie, Normals can't say it. (I did come up with this before seeing frodoskywalker's answer.)
evankh
$\begingroup$ I'm not sure if we can trust this answer... $\endgroup$ – Mark N Jul 9 '15 at 18:21
$\begingroup$ @MarkN Oh, I'm definitely trustworthy... $\endgroup$ – evankh Jul 9 '15 at 20:04
He can say "(At least) Sometimes I lie." A knight cannot say this, because he never lies. And a knave cannot say it, because it would be the truth for him.
He can say "My next statement will not be a lie", since the knight knows for sure he can never lie. But the Normal cannot know with 100% certainty whether his next statement might be a lie. He can try, but there could be some thinkable scenario in which his next statement turns out to be a lie. Since there is a non-zero chance for the Normal to lie or tell the truth in his next statement, he cannot make the claim: it is neither true nor false, but a vague guess, and per the rules they can only state truths or lies, not something unknown.
Falco
$\begingroup$ It wouldn't be the truth for the knave, he always lies, not sometimes, because sometimes would imply that he would also tell the truth $\endgroup$ – Wouter Jul 9 '15 at 10:02
$\begingroup$ @Wouter Sometimes is logically equivalent to "at least once", but I can make it more explicit. $\endgroup$ – Falco Jul 9 '15 at 10:09
A knight could say: "That knight (points at a known knight) can confirm I am a knight." This should work as long as King Arthur is a knight (who only tells the truth) and started allowing/accepting other knights at his table, and as long as all the knights know all the other knights. Any Normal or Knave who tried to enter using this line would be declined by the knight being pointed at.
"I am a knave you know...sometimes, I just like to say a lie, and just see what happens. Like this one time last week, I lied to this knight, and let me tell you..."
Mark N
$\begingroup$ Then the knight has to give the King his public key. $\endgroup$ – schil227 Jul 9 '15 at 20:31
What can a Knight say to prove their identity?
Let's assume that every inhabitant of Boolistan has to obey the King (the penalty for disobedience is death, obviously), and that the name of the Knight in question is Sofa. Thus, the answer is
To prove his identity, the Knight in question has to first say to the King to give him [i.e. the Knight] an order: >>My King, order me to, literally, "Say I am Sofa, King, a Knight, but if, and only if, you're a Knight. Otherwise remain silent."<<
Then the King has to do what the Knight asked for. Obviously, if the petitioner is a Knight, he'll say it, and it will be true. If he isn't, he won't say anything, because he has been given a direct order to remain silent.
vaxquis
$\begingroup$ Sofa King.... nice $\endgroup$ – Cain Jul 9 '15 at 23:04
Okay, this is what I got: the warmup
"knights tell part of the truth." Since knaves always lie, they can only say that knights tell part of a lie, and knights cannot determine whether a part of a truth is a whole truth, and are thus unable to answer. Of course, a knave could really be saying "Knights tell part of a lie," in which case, they really say "Knights tell part of a truth." The Normal can say so, because they can lie and tell the truth: Knights tell part of the truth and part of a lie.
The only surefire way is assuming that Normals can tell lies and truths in the same sentence. In this case, the Normal says, "I Lie and I tell truths." A knight cannot lie and thus cannot admit so, a Knave cannot lie about telling the truth yet tell the truth about lying or vice versa, but a Normal can lie and tell the truth at once: They can lie about lying and tell the truth about being truthful, or lie about telling the truth and tell the truth about lying.
Assuming that at least one knight knows the person trying to enter, that person can ask said Knight, "Can I lie?" If they are a Normal or Knave, the other Knight will say "Yes", otherwise "No". We could also assume that, since the dinner is for Knights only, anyone there is a Knight, which helps solidify this answer.
However, assuming that no one knows the person trying to enter, they can prove so in a two-step process: First, the guard asks if they can say the following sentence, which is written on a scroll: "I can tell lies and truths in the same sentence." A Knight cannot say a lie in the same sentence, and will answer "No." A Knave cannot say a truth in a sentence, but will lie and say "yes" (note that it asks whether they can do BOTH, hence the knave isn't telling the truth about saying a lie and forming a contradiction). A Normal can either lie or tell the truth, and will answer either "Yes" or "No."
If they answered "No", then they are either a Knight or a Normal. From there, the guard hands them the following scroll: "I tell part-truths and part-lies." Under pain of death, they are told to read the scroll aloud. A Knight cannot say a lie, even if only a part of a statement, and thus will answer "I cannot." A Normal, however, under the threat of death, will read the script to try and save his life, thus revealing his deceit.
Nyk 232
$\begingroup$ Note: the challenge works even better if the first scroll is a secret passphrase given to them ahead of time: When asked what the password is, they will answer "I cannot say," then threatened under death to read a scroll given them by the guard, all to give a further feeling of doom. Although, a normal watching this would quickly realize that the correct answer when given the scroll is "I cannot say". $\endgroup$ – Nyk 232 Jul 9 '15 at 15:37
First post ever on here :)
For the warmup I would say:
If 1+1=2 then I am a Normal. This works because if a Knight says this statement, it ends in a contradiction; the same goes for the case when a Knave says it. The statement is only true and valid when a Normal says it.
For the challenge we can operate along the same lines:
If 1+1=2 then I am a Knight. Again the antecedent is necessarily true, so for the statement to be true and not a contradiction, only a Knight could say this.
Hope all the formatting worked out properly!
$\begingroup$ Hi, welcome to Puzzles! $\endgroup$ – Voitcus Jul 9 '15 at 6:20
$\begingroup$ I think the one does not imply this, because there is no connection between "1+1=2" and "I am a knight". You can say "If a rectangle has all borders equal, then it is a square", but you can't "If a rectangle has all borders equal, then it is red" $\endgroup$ – Voitcus Jul 9 '15 at 7:19
$\begingroup$ The problem is, "if 1+1=2 then I am a Knight" is simply true for a Knight and false for a non-Knight. As such, the Normal (or the Knave) could lie and say "if 1+1=2 then I am a Knight" without a problem. $\endgroup$ – Glen O Jul 9 '15 at 7:21
$\begingroup$ "If 1 equals 1 then I am a normal" is truth when spoken by a normal and a lie when spoken by a knave or a knight. As such it can be stated by normals and knaves. Since 1 does in fact equal 1 it reduces to "I am a normal", there is no paradox. $\endgroup$ – Taemyr Jul 10 '15 at 13:45
"You might as well let everyone in, since only Normals are allowed but there's already a Knight and a Knave in there." Since he's telling the truth and lying in the same sentence, he's capable of both truth and lies, and thus is a Normal. (Alternately, he could be a Knight truly telling that they've let in a Knight and a Knave by accident, but my answer assumes that the tavern's authentication procedures have not already been broken.)
This is more of a practical than logical solution, but I'd recommend the Knight bring along someone known to be a Knave as his squire, who can inversely vouch for him at the door - his knavish squire would say "Don't let him in, this guy's no knight." :)
recognizer
In the case that the King is King Arthur, the King of Britons:
from Monty Python and the Holy Grail
Normals' pass:
I haven't ever eaten ham before.
In reference to (real spoiler):
"We eat ham and jam and spam a lot!" (The Camelot Song)
Knights' pass:
It could be carried by an African swallow.
In reference to (real and big spoiler):
A guard answering another guard that coconuts cannot be carried by a European swallow, but maybe by an African swallow. In the end, we see that only Sir Bedevere and Sir Lancelot remain. We see Sir Bedevere testing that early in the film; and Sir Lancelot probably doesn't need a pass since he is 'carried away easily'.
The second case might be invalid because I totally forgot about
the aptly named Sir Not-appearing-in-this-film
I assume normals would have no idea about the subject.
bunyaCloven
$\begingroup$ I don't understand how your challenge response answers the question? The answer doesn't have enough context to be deemed true or false, it's more-so just random gibberish. $\endgroup$ – Mark N Jul 9 '15 at 13:02
$\begingroup$ The more-so-random-gibberish answer, if searched with google as a full sentence, gives nothing but the exact reference i was trying to give in the first page; but yeah. $\endgroup$ – bunyaCloven Jul 9 '15 at 13:13
$\begingroup$ @MarkN: It's a reference to the movie Monty Python and the Holy Grail, though I'm not quite sure what it's supposed to mean here. $\endgroup$ – supercat Jul 9 '15 at 13:16
$\begingroup$ @supercat got it..This might be more suitable as a comment then. $\endgroup$ – Mark N Jul 9 '15 at 13:20
$\begingroup$ I think the warmup is fine but I agree that the challenge answer is insufficient. I would leave it out until you have something solid for that part. $\endgroup$ – Engineer Toast Jul 9 '15 at 13:20
For the normal:
A Normal can just say something paradoxical like "I'm a liar/Knave".
And for the knight:
A Knight has to say something that would prove he's a Knight if true, and just lead to a contradiction/dead end if false. Indeed, there's no way "Either this single sentence is true and I am a Knight, or it is false and I am a Normal or Knave" would be false without causing some sort of contradiction.
Nautilus
"I am a knave." The knave saying this would be telling the truth. A knight would be lying to say this. A normal could make this claim in a lie.
I have an answer that I believe solves it without need of a paradox.
We know from the warm-up that we can identify a normal with certainty. Our Knight picks out a normal and points, then proclaims, "If she is being as truthful as I am, she would agree I'm a knight." A knave's corresponding normal would indeed agree they are a knight (because she would be lying in this case), but the knave cannot say this because it would be the truth. A knight's corresponding normal would be telling the truth, and so she would of course agree that they are a knight. If a normal were to lie, their corresponding normal would also lie, agreeing that they are a knight. They cannot say this because it would be the truth. If a normal were to tell the truth, their corresponding normal could not agree they are a knight because she must also tell the truth, thus they still cannot say this.
Shadow503
$\begingroup$ This seems quite similar to my answer....just more convoluted $\endgroup$ – Mark N Jul 9 '15 at 18:13
$\begingroup$ @MarkN I missed your answer before; just gave you an upvote! Still, I think my answer is stronger as it doesn't require a previously validated knight. However, both of our answers are suboptimal (in comparison with the currently accpeted answer) because they both require others to have knowledge of the Knight in question. $\endgroup$ – Shadow503 Jul 9 '15 at 18:35
$\begingroup$ "A knight's corresponding normal would be telling the truth" Why? "If a normal were to tell the truth, ...because she must also tell the truth," Why? $\endgroup$ – Taemyr Jul 10 '15 at 13:54
$\begingroup$ @Taemyr Good question! Because the knight says: "If she is being as truthful as I am, then she would agree I'm a knight." $\endgroup$ – Shadow503 Jul 10 '15 at 14:17
$\begingroup$ So, no reason. The normal could be lying about the knight, or a normal could lie about another normal, even if that normal was telling the truth. $\endgroup$ – Taemyr Jul 10 '15 at 14:19
"Sometimes I am older than a day before, but sometimes I am younger."
For the challenge, I can't think anything. Of course a knight can show his shield to prove, but I guess it doesn't count. Some other considerations below, within a spoiler:
The watchmen can ask a knight about a thing he doesn't know, but in such a way that would make Normals think it is common knowledge for Knights. The real Knight would answer "I don't know", while a Normal would try to lie. For example, the question from the guard could be "how much does beer in the tavern cost?". Because Knights do not enter taverns, they don't know, but a Normal knows, and because he pretends to be a Knight, he would answer truthfully. However, it requires cooperation between the guard and a knight. It will also fail once Normals find out that "I don't know" is the right answer.
Voitcus
$\begingroup$ Your answer for the warmup is just a false statement, which could be said by a Knave or a Normal. The challenge answer isn't fool-proof, since you assume that all Normals have the same knowledge. $\endgroup$ – Nuclear Wang Jul 9 '15 at 7:26
$\begingroup$ What I wrote in challenge is not an answer, these are considerations, maybe somebody else can find out something from this. $\endgroup$ – Voitcus Jul 9 '15 at 7:28
Tori can't collapse to an interval
Sergio Zamora
018 McAllister Bldg, University Park, PA 16802-6402, USA
* Corresponding author: Sergio Zamora
Received October 2020; Published January 2021
Fund Project: The author would like to thank Raquel Perales and Anton Petrunin
Here we prove that under a lower sectional curvature bound, a sequence of Riemannian manifolds diffeomorphic to the standard $ m $-dimensional torus cannot converge in the Gromov–Hausdorff sense to a closed interval.
The proof proceeds by contradiction, analyzing suitable covers of a contradicting sequence, obtained from the Burago–Gromov–Perelman generalization of the Yamaguchi fibration theorem.
Keywords: Alexandrov geometry, sectional curvature bounded below, Yamaguchi collapse fibration.
Mathematics Subject Classification: Primary: 53C23, 53C20; Secondary: 53C21.
Citation: Sergio Zamora. Tori can't collapse to an interval. Electronic Research Archive, doi: 10.3934/era.2021005
L. Auslander and M. Kuranishi, On the holonomy group of locally Euclidean spaces, Ann. of Math., 65 (1957), 411-415. doi: 10.2307/1970053.
R. L. Bishop and R. J. Crittenden, Geometry of Manifolds, Academic Press, 1964.
D. Burago, Y. Burago and S. Ivanov, A Course in Metric Geometry, Graduate Studies in Mathematics, 33, American Mathematical Society, Providence, RI, 2001. doi: 10.1090/gsm/033.
Y. Burago, M. Gromov and G. Perel'man, A. D. Alexandrov spaces with curvature bounded below, Russian Mathematical Surveys, 47 (1992), 1-58. doi: 10.1070/RM1992v047n02ABEH000877.
J. W. S. Cassels, An Introduction to the Geometry of Numbers, Springer Science & Business Media, 2012.
L. S. Charlap, Bieberbach Groups and Flat Manifolds, Springer-Verlag, New York, 1986. doi: 10.1007/978-1-4613-8687-2.
M. Gromov, Filling Riemannian manifolds, J. Differential Geom., 18 (1983), 1-147. doi: 10.4310/jdg/1214509283.
M. Gromov, Groups of polynomial growth and expanding maps, Inst. Hautes Études Sci. Publ. Math., 53 (1981), 53-73. doi: 10.1007/BF02698687.
M. Gromov, Metric Structures for Riemannian and Non-Riemannian Spaces, Birkhäuser Boston, Inc., Boston, MA, 2007.
M. Gromov and H. B. Lawson Jr., Spin and scalar curvature in the presence of a fundamental group I, Ann. of Math., 111 (1980), 209-230. doi: 10.2307/1971198.
V. Kapovitch, Perelman's stability theorem, Surveys in Differential Geometry, 11 (2006), 103-136. arXiv:math/0703002. doi: 10.4310/SDG.2006.v11.n1.a5.
V. Kapovitch, Restrictions on collapsing with a lower sectional curvature bound, Math. Z., 249 (2005), 519-539. doi: 10.1007/s00209-004-0715-3.
V. Kapovitch, A. Petrunin and W. Tuschmann, Almost nonnegative curvature, and the gradient flow on Alexandrov spaces, Ann. of Math., 171 (2010), 343-373. doi: 10.4007/annals.2010.171.343.
M. G. Katz, Torus cannot collapse to a segment, J. Geom., 111 (2020), Paper No. 13, 8 pp. doi: 10.1007/s00022-020-0525-8.
G. Ya. Perelman, Alexandrov spaces with curvature bounded from below II, Leningrad Branch of Steklov Institute, St. Petersburg, 1991.
T. Yamaguchi, Collapsing and pinching under a lower curvature bound, Ann. of Math., 133 (1991), 317-357. doi: 10.2307/2944340.
Figure 1. Flat Klein bottles can converge to an interval
Figure 2. The Fibration Theorem gives us a decomposition $ X_n = S_1 \# S_2 $
Figure 3. The configuration $ (q_n; \tilde{p}_n ,a_n, b_n) $ violates the Alexandrov condition
Vector field
A portion of the vector field (sin y, sin x)
In vector calculus, a vector field is an assignment of a vector to each point in a subset of space.[1] A vector field in the plane, for instance, can be visualized as a collection of arrows with a given magnitude and direction each attached to a point in the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout space, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from point to point.
The elements of differential and integral calculus extend to vector fields in a natural way. When a vector field represents force, the line integral of a vector field represents the work done by a force moving along a path, and under this interpretation conservation of energy is exhibited as a special case of the fundamental theorem of calculus. Vector fields can usefully be thought of as representing the velocity of a moving flow in space, and this physical intuition leads to notions such as the divergence (which represents the rate of change of volume of a flow) and curl (which represents the rotation of a flow).
In coordinates, a vector field on a domain in n-dimensional Euclidean space can be represented as a vector-valued function that associates an n-tuple of real numbers to each point of the domain. This representation of a vector field depends on the coordinate system, and there is a well-defined transformation law in passing from one coordinate system to the other. Vector fields are often discussed on open subsets of Euclidean space, but also make sense on other subsets such as surfaces, where they associate an arrow tangent to the surface at each point (a tangent vector).
More generally, vector fields are defined on differentiable manifolds, which are spaces that look like Euclidean space on small scales, but may have more complicated structure on larger scales. In this setting, a vector field gives a tangent vector at each point of the manifold (that is, a section of the tangent bundle to the manifold). Vector fields are one kind of tensor field.
Vector fields on subsets of Euclidean space
Given a subset S in Rn, a vector field is represented by a vector-valued function V: S → Rn in standard Cartesian coordinates (x1, ..., xn). If each component of V is continuous, then V is a continuous vector field, and more generally V is a Ck vector field if each component of V is k times continuously differentiable.
A vector field can be visualized as assigning a vector to individual points within an n-dimensional space.[1]
Given two Ck-vector fields V, W defined on S and a real valued Ck-function f defined on S, the two operations scalar multiplication and vector addition
$$(fV)(p) := f(p)\,V(p),$$
$$(V+W)(p) := V(p) + W(p)$$
define the module of Ck-vector fields over the ring of Ck-functions.
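A small sketch of these pointwise operations in plain Python (the representations and names are my own; vector fields are modeled as functions from points to tuples):

```python
def scale(f, V):
    """The field fV, defined pointwise by (fV)(p) = f(p) V(p)."""
    return lambda p: tuple(f(p) * v for v in V(p))

def add(V, W):
    """The field V + W, defined pointwise by (V+W)(p) = V(p) + W(p)."""
    return lambda p: tuple(v + w for v, w in zip(V(p), W(p)))

V = lambda p: (p[1], -p[0])   # a rotation field on R^2
W = lambda p: (1.0, 0.0)      # a constant field
f = lambda p: p[0] ** 2       # a scalar function

print(scale(f, V)((2.0, 3.0)))  # (12.0, -8.0)
print(add(V, W)((2.0, 3.0)))    # (4.0, -2.0)
```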
Coordinate transformation law
In physics, a vector is additionally distinguished by how its coordinates change when one measures the same vector with respect to a different background coordinate system. The transformation properties of vectors distinguish a vector as a geometrically distinct entity from a simple list of scalars, or from a covector.
Thus, suppose that (x1,...,xn) is a choice of Cartesian coordinates, in terms of which the components of the vector V are
$$V_x = (V_{1,x}, \dots, V_{n,x})$$
and suppose that (y1,...,yn) are n functions of the xi defining a different coordinate system. Then the components of the vector V in the new coordinates are required to satisfy the transformation law

$$V_{i,y} = \sum_{j=1}^{n} \frac{\partial y_i}{\partial x_j} V_{j,x}. \qquad (1)$$

Such a transformation law is called contravariant. A similar transformation law characterizes vector fields in physics: specifically, a vector field is a specification of n functions in each coordinate system subject to the transformation law (1) relating the different coordinate systems.
Vector fields are thus contrasted with scalar fields, which associate a number or scalar to every point in space, and are also contrasted with simple lists of scalar fields, which do not transform under coordinate changes.
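As a rough numerical sketch of law (1) (the change of coordinates and the field below are my own example), the Jacobian ∂y_i/∂x_j can be assembled by central differences and applied to the old components:

```python
def y_of_x(x1, x2):
    """An example change of coordinates y = (x1 + x2, x1 - x2)."""
    return x1 + x2, x1 - x2

def transform(Vx, x, h=1e-6):
    """Apply law (1): V_{i,y} = sum_j (dy_i/dx_j) V_{j,x}."""
    n = len(x)
    Vy = []
    for i in range(n):
        total = 0.0
        for j in range(n):
            xp, xm = list(x), list(x)
            xp[j] += h
            xm[j] -= h
            dyi_dxj = (y_of_x(*xp)[i] - y_of_x(*xm)[i]) / (2 * h)
            total += dyi_dxj * Vx[j]
        Vy.append(total)
    return Vy

print(transform([1.0, 0.0], [2.0, 3.0]))  # approximately [1.0, 1.0]
```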
Vector fields on manifolds
A vector field on a sphere
Given a differentiable manifold M, a vector field on M is an assignment of a tangent vector to each point in M.[2] More precisely, a vector field F is a mapping from M into the tangent bundle TM so that $p \circ F$ is the identity mapping, where p denotes the projection from TM to M. In other words, a vector field is a section of the tangent bundle.
If the manifold M is smooth or analytic—that is, the change of coordinates is smooth (analytic)—then one can make sense of the notion of smooth (analytic) vector fields. The collection of all smooth vector fields on a smooth manifold M is often denoted by Γ(TM) or C∞(M,TM) (especially when thinking of vector fields as sections); the collection of all smooth vector fields is also denoted by $\mathfrak{X}(M)$ (a fraktur "X").
The flow field around an airplane is a vector field in R3, here visualized by bubbles that follow the streamlines showing a wingtip vortex.
A vector field for the movement of air on Earth will associate for every point on the surface of the Earth a vector with the wind speed and direction for that point. This can be drawn using arrows to represent the wind; the length (magnitude) of the arrow will be an indication of the wind speed. A "high" on the usual barometric pressure map would then act as a source (arrows pointing away), and a "low" would be a sink (arrows pointing towards), since air tends to move from high pressure areas to low pressure areas.
Velocity field of a moving fluid. In this case, a velocity vector is associated to each point in the fluid.
Streamlines, streaklines and pathlines are three types of lines that can be made from vector fields. They are:
streaklines — as revealed in wind tunnels using smoke.
streamlines (or fieldlines)— as a line depicting the instantaneous field at a given time.
pathlines — showing the path that a given particle (of zero mass) would follow.
Magnetic fields. The fieldlines can be revealed using small iron filings.
Maxwell's equations allow us to use a given set of initial conditions to deduce, for every point in Euclidean space, a magnitude and direction for the force experienced by a charged test particle at that point; the resulting vector field is the electromagnetic field.
A gravitational field generated by any massive object is also a vector field. For example, the gravitational field vectors for a spherically symmetric body would all point towards the sphere's center with the magnitude of the vectors reducing as radial distance from the body increases.
Gradient field
A vector field that has circulation about a point cannot be written as the gradient of a function.
Vector fields can be constructed out of scalar fields using the gradient operator (denoted by the del: ∇).[3]
A vector field V defined on a set S is called a gradient field or a conservative field if there exists a real-valued function (a scalar field) f on S such that
$$V = \nabla f = \left( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \frac{\partial f}{\partial x_3}, \dots, \frac{\partial f}{\partial x_n} \right).$$
The associated flow is called the gradient flow, and is used in the method of gradient descent.
The path integral along any closed curve γ (γ(0) = γ(1)) in a gradient field is zero:
$$\oint_{\gamma} \langle V(x), \mathrm{d}x \rangle = \oint_{\gamma} \langle \nabla f(x), \mathrm{d}x \rangle = f(\gamma(1)) - f(\gamma(0)).$$
where the angular brackets $\langle \cdot, \cdot \rangle$ denote the inner product of two vectors (strictly speaking, the integrand V(x) is a 1-form rather than a vector in the elementary sense).[4]
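A quick numerical check of this fact in plain Python (the choice f(x, y) = x²y and the unit circle are my own): the loop integral of ∇f vanishes up to discretization error.

```python
import math

def grad_f(x, y, h=1e-6):
    """Central-difference gradient of f(x, y) = x^2 * y."""
    f = lambda x, y: x * x * y
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

def loop_integral(n=10000):
    """Line integral of grad f around the unit circle gamma(t) = (cos t, sin t)."""
    total, dt = 0.0, 2 * math.pi / n
    for i in range(n):
        t = i * dt
        gx, gy = grad_f(math.cos(t), math.sin(t))
        total += (gx * -math.sin(t) + gy * math.cos(t)) * dt
    return total

print(loop_integral())  # approximately 0, as expected for a gradient field
```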
Central field
A C∞-vector field over Rn \ {0} is called a central field if
$$V(T(p)) = T(V(p)) \qquad (T \in \mathrm{O}(n, \mathbf{R}))$$
where O(n, R) is the orthogonal group. We say central fields are invariant under orthogonal transformations around 0.
The point 0 is called the center of the field.
Since orthogonal transformations are actually rotations and reflections, the invariance conditions mean that vectors of a central field are always directed towards, or away from, 0; this is an alternate (and simpler) definition. A central field is always a gradient field, since defining it on one semiaxis and integrating gives an antigradient.
Operations on vector fields
Line integral
A common technique in physics is to integrate a vector field along a curve, i.e. to determine its line integral. Given a particle in a gravitational vector field, where each vector represents the force acting on the particle at a given point in space, the line integral is the work done on the particle when it travels along a certain path.
The line integral is constructed analogously to the Riemann integral and it exists if the curve is rectifiable (has finite length) and the vector field is continuous.
Given a vector field V and a curve γ parametrized by [a, b] (where a and b are real) the line integral is defined as
$$\int_{\gamma} \langle V(x), \mathrm{d}x \rangle = \int_a^b \langle V(\gamma(t)), \gamma'(t)\, \mathrm{d}t \rangle.$$
Divergence
The divergence of a vector field on Euclidean space is a function (or scalar field). In three dimensions, the divergence is defined by
$$\operatorname{div} \mathbf{F} = \nabla \cdot \mathbf{F} = \frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \frac{\partial F_3}{\partial z},$$
with the obvious generalization to arbitrary dimensions. The divergence at a point represents the degree to which a small volume around the point is a source or a sink for the vector flow, a result which is made precise by the divergence theorem.
The divergence can also be defined on a Riemannian manifold, that is, a manifold with a Riemannian metric that measures the length of vectors.
Curl
The curl is an operation which takes a vector field and produces another vector field. The curl is defined only in three dimensions, but some properties of the curl can be captured in higher dimensions with the exterior derivative. In three dimensions, it is defined by
$$\operatorname{curl} \mathbf{F} = \nabla \times \mathbf{F} = \left( \frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z} \right) \mathbf{e}_1 - \left( \frac{\partial F_3}{\partial x} - \frac{\partial F_1}{\partial z} \right) \mathbf{e}_2 + \left( \frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} \right) \mathbf{e}_3.$$
The curl measures the density of the angular momentum of the vector flow at a point, that is, the amount to which the flow circulates around a fixed axis. This intuitive description is made precise by Stokes' theorem.
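Both operators are easy to approximate numerically; here is a sketch using central differences (the sample field F(x, y, z) = (xy, yz, zx) is my own choice):

```python
def F(x, y, z):
    return x * y, y * z, z * x

def div_curl(x, y, z, h=1e-5):
    """Finite-difference divergence and curl of F at (x, y, z)."""
    def d(i, j):
        # Partial derivative dF_i/dx_j by central differences.
        p_plus, p_minus = [x, y, z], [x, y, z]
        p_plus[j] += h
        p_minus[j] -= h
        return (F(*p_plus)[i] - F(*p_minus)[i]) / (2 * h)
    div = d(0, 0) + d(1, 1) + d(2, 2)
    curl = (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))
    return div, curl

print(div_curl(1.0, 2.0, 3.0))
# div = y + z + x = 6; curl = (-y, -z, -x) = (-2, -3, -1)
```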
Index of a vector field
The index of a vector field is a way of describing the behaviour of a vector field around an isolated zero (i.e., an isolated singularity of the field) which can distinguish saddles from sources and sinks. Take a small sphere around the zero so that no other zeros are included. A map from this sphere to a unit sphere of dimension $n-1$ can be constructed by dividing each vector by its length to form a unit-length vector, which can then be mapped to the unit sphere. The index of the vector field at the point is the degree of this map. The index of the vector field is the sum of the indices of each zero.
The index is zero around any non-singular point; it is +1 around sources and sinks and −1 around saddles. In two dimensions the index is equivalent to the winding number. For an ordinary sphere in three-dimensional space, it can be shown that the index of any vector field on the sphere must be two; this leads to the hairy ball theorem, which shows that every such vector field must have a zero. This theorem generalises to the Poincaré–Hopf theorem, which relates the index to the Euler characteristic of the space.
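In two dimensions the index can be sketched as a winding-number computation: accumulate the change of the angle of V around a small circle about the zero. This is a rough illustration, not a robust implementation.

```python
import math

def index_2d(V, center, radius=1e-3, n=1000):
    """Winding number of V around an isolated zero at `center`."""
    cx, cy = center
    total, prev = 0.0, None
    for i in range(n + 1):
        t = 2 * math.pi * i / n
        vx, vy = V(cx + radius * math.cos(t), cy + radius * math.sin(t))
        ang = math.atan2(vy, vx)
        if prev is not None:
            d = ang - prev
            while d > math.pi:      # unwrap jumps across the branch cut
                d -= 2 * math.pi
            while d < -math.pi:
                d += 2 * math.pi
            total += d
        prev = ang
    return round(total / (2 * math.pi))

print(index_2d(lambda x, y: (x, y), (0.0, 0.0)))    # source: +1
print(index_2d(lambda x, y: (x, -y), (0.0, 0.0)))   # saddle: -1
```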
Magnetic field lines of an iron bar (magnetic dipole)
History
Vector fields arose originally in classical field theory in 19th-century physics, specifically in magnetism. They were formalized by Michael Faraday, who, with his concept of lines of force, emphasized that the field itself should be an object of study, which it has become throughout physics in the form of field theory.
In addition to the magnetic field, other phenomena that were modeled as vector fields by Faraday include the electrical field and light field.
Flow curves
Consider the flow of a fluid through a region of space. At any given time, any point of the fluid has a particular velocity associated with it; thus there is a vector field associated to any flow. The converse is also true: it is possible to associate a flow to a vector field having that vector field as its velocity.
Given a vector field V defined on S, one defines curves γ(t) on S such that for each t in an interval I
$$\gamma'(t) = V(\gamma(t)).$$
By the Picard–Lindelöf theorem, if V is Lipschitz continuous there is a unique C1-curve γx for each point x in S so that
$$\gamma_x(0) = x,$$
$$\gamma'_x(t) = V(\gamma_x(t)) \qquad (t \in (-\epsilon, +\epsilon) \subset \mathbf{R}).$$
The curves γx are called flow curves of the vector field V and partition S into equivalence classes. It is not always possible to extend the interval (−ε, +ε) to the whole real number line. The flow may for example reach the edge of S in a finite time. In two or three dimensions one can visualize the vector field as giving rise to a flow on S. If we drop a particle into this flow at a point p it will move along the curve γp in the flow depending on the initial point p. If p is a stationary point of V then the particle will remain at p.
Typical applications are streamlines in fluid dynamics, geodesic flow, and one-parameter subgroups and the exponential map in Lie groups.
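As a rough illustration, a flow curve can be traced numerically with forward Euler steps; the field V(x, y) = (sin y, sin x) below is the one from the figure at the top of the article, and the step size and count are arbitrary choices.

```python
import math

def V(x, y):
    return math.sin(y), math.sin(x)

def flow_curve(p, dt=0.01, steps=500):
    """Approximate gamma_p with gamma' = V(gamma), gamma(0) = p."""
    x, y = p
    curve = [(x, y)]
    for _ in range(steps):
        vx, vy = V(x, y)
        x, y = x + dt * vx, y + dt * vy
        curve.append((x, y))
    return curve

print(flow_curve((1.0, 0.5))[-1])  # position of the particle at t = 5
```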
Complete vector fields
A vector field is complete if its flow curves exist for all time.[5] In particular, compactly supported vector fields on a manifold are complete. If X is a complete vector field on M, then the one-parameter group of diffeomorphisms generated by the flow along X exists for all time.
Difference between scalar and vector field
The difference between a scalar and vector field is not that a scalar is just one number while a vector is several numbers. The difference is in how their coordinates respond to coordinate transformations. A scalar is a coordinate whereas a vector can be described by coordinates, but it is not the collection of its coordinates.
This example is about 2-dimensional Euclidean space (R2) where we examine Euclidean (x, y) and polar (r, θ) coordinates (which are undefined at the origin). Thus $x = r\cos\theta$ and $y = r\sin\theta$, and also $r^2 = x^2 + y^2$, $\cos\theta = x/(x^2+y^2)^{1/2}$ and $\sin\theta = y/(x^2+y^2)^{1/2}$. Suppose we have a scalar field which is given by the constant function 1, and a vector field which attaches a vector in the r-direction with length 1 to each point. More precisely, they are given by the functions
$$s_{\mathrm{polar}} : (r, \theta) \mapsto 1, \quad v_{\mathrm{polar}} : (r, \theta) \mapsto (1, 0).$$
Let us convert these fields to Euclidean coordinates. The vector of length 1 in the r-direction has the x coordinate cos θ and the y coordinate sin θ. Thus in Euclidean coordinates the same fields are described by the functions
$$s_{\mathrm{Euclidean}} : (x, y) \mapsto 1,$$
$$v_{\mathrm{Euclidean}} : (x, y) \mapsto (\cos\theta, \sin\theta) = \left( \frac{x}{\sqrt{x^2+y^2}}, \frac{y}{\sqrt{x^2+y^2}} \right).$$
We see that while the scalar field remains the same, the vector field now looks different. The same holds even in the 1-dimensional case, as illustrated by the next example.
Consider the 1-dimensional Euclidean space R with its standard Euclidean coordinate x. Suppose we have a scalar field and a vector field which are both given in the x coordinate by the constant function 1,
$$s_{\mathrm{Euclidean}} : x \mapsto 1, \quad v_{\mathrm{Euclidean}} : x \mapsto 1.$$
Thus, we have a scalar field which has the value 1 everywhere and a vector field which attaches a vector in the x-direction with magnitude 1 unit of x to each point.
Now consider the coordinate ξ := 2x. If x changes one unit then ξ changes 2 units. Thus this vector field has a magnitude of 2 in units of ξ. Therefore, in the ξ coordinate the scalar field and the vector field are described by the functions
$$s_{\mathrm{unusual}} : \xi \mapsto 1, \quad v_{\mathrm{unusual}} : \xi \mapsto 2,$$
which are different.
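A compact Python sketch of the two-dimensional example above (the function names are illustrative): the scalar field is the constant 1 in both charts, while the components of the radial unit field depend on the chart.

```python
import math

def v_polar(r, theta):
    """Components of the radial unit field in the (r, theta) chart."""
    return 1.0, 0.0

def v_euclidean(x, y):
    """The same field in the (x, y) chart: (cos theta, sin theta)."""
    r = math.hypot(x, y)
    return x / r, y / r

print(v_polar(5.0, 0.9273))   # (1.0, 0.0)
print(v_euclidean(3.0, 4.0))  # (0.6, 0.8)
```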
f-relatedness
Given a smooth function between manifolds, f: M → N, the derivative is an induced map on tangent bundles, f*: TM → TN. Given vector fields V: M → TM and W: N → TN, we say that W is f-related to V if the equation $W \circ f = f_{*} \circ V$ holds.
If Vi is f-related to Wi, i = 1, 2, then the Lie bracket [V1, V2] is f-related to [W1, W2].
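In one dimension the pushforward f_* simply scales a tangent vector by f'(x), so f-relatedness reduces to checking W(f(x)) = f'(x) V(x); a tiny sketch (my own encoding, tying in the ξ = 2x example above):

```python
f = lambda x: 2 * x     # the coordinate change xi = 2x
df = lambda x: 2.0      # f'(x)
V = lambda x: 1.0       # the unit field on the domain
W = lambda y: 2.0       # the candidate field on the codomain

# W is f-related to V iff W(f(x)) == f'(x) * V(x) for all x.
print(all(W(f(x)) == df(x) * V(x) for x in (-1.0, 0.0, 3.5)))  # True
```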
Generalizations
Replacing vectors by p-vectors (pth exterior power of vectors) yields p-vector fields; taking the dual space and exterior powers yields differential k-forms, and combining these yields general tensor fields.
Algebraically, vector fields can be characterized as derivations of the algebra of smooth functions on the manifold, which leads to defining a vector field on a commutative algebra as a derivation on the algebra, which is developed in the theory of differential calculus over commutative algebras.
See also
Eisenbud–Levine–Khimshiashvili signature formula
Field line
Lie derivative
Scalar field
Time-dependent vector field
Vector fields in cylindrical and spherical coordinates
Tensor fields
External links
Vector field — Mathworld
Vector field — PlanetMath
3D Magnetic field viewer
Vector Field Simulation Java applet illustrating vectors fields
Vector fields and field lines
Vector field simulation An interactive application to show the effects of vector fields
Vector Fields Software 2d & 3d electromagnetic design software that can be used to visualise vector fields and field lines
Feature-rich networks: going beyond complex network topologies
Roberto Interdonato ORCID: orcid.org/0000-0002-0536-62771,
Martin Atzmueller2,
Sabrina Gaito3,
Rushed Kanawati4,
Christine Largeron5 &
Alessandra Sala6
The growing availability of multirelational data gives rise to an opportunity for novel characterization of complex real-world relations, supporting the proliferation of diverse network models such as Attributed Graphs, Heterogeneous Networks, Multilayer Networks, Temporal Networks, Location-aware Networks, Knowledge Networks, Probabilistic Networks, and many other task-driven and data-driven models. In this paper, we propose an overview of these models and their main applications, described under the common denomination of Feature-rich Networks, i. e. models where the expressive power of the network topology is enhanced by exposing one or more peculiar features. The aim is also to sketch a scenario that can inspire the design of novel feature-rich network models, which in turn can support innovative methods able to exploit the full potential of mining complex network structures in domain-specific applications.
Structures built upon great quantities of networked entities, such as computer networks and social networks, have an undeniable central role in our everyday life. The need to study these complex real-world topologies, together with the growing ability to carry out these studies thanks to technological advances, recently made the use of complex network models pervasive in many disciplines such as computer science, physics, social science, as well as in interdisciplinary research environments.
Nowadays, working with complex networked data is commonplace, since collecting multirelational data from the Web is generally a simple and inexpensive task. Just think about the quantity of online social media platforms, crowdsourced data, online knowledge bases, and so on, that can be collected and studied with relatively low effort.
Nevertheless, besides the relational data that can be modeled in a network topology, it is easy to recognize a wealth of "extra" features which serve as an invaluable source of information and can be conveniently embedded in a network, thus enhancing the expressive power of the topology itself. Examples are given by temporal aspects of the data, quantitative and/or qualitative properties of the nodes, different relations between a common set of entities, and different existence probabilities.
In this paper, we use the term Feature-rich Networks to refer to all the complex network models that expose one or more features in addition to the network topology. Some examples of feature-rich networks, which will be described in the paper, are:
Attributed graphs, e. g. networks enclosing (vectors of) generic attributes on nodes and edges ("Attributed graphs" section);
Heterogeneous information networks, e. g. networks modeling heterogeneous node and edge types ("Heterogeneous information networks" section);
Multilayer networks, e. g. representing different online/offline relations between the same set of users ("Multilayer networks" section);
Temporal networks, e. g. modeling discrete/continuous time aspects in networked data ("Temporal networks" section);
Location-aware Networks, e. g. useful for the definition of recommender system (RecSys) applications like itinerary routing and points of interest (PoIs) planning ("Location-aware networks" section);
Probabilistic networks, e. g. networks modeling uncertain relations, such as sensor networks, or networks inferred from survey data ("Probabilistic networks" section).
Please note that the definition of feature-rich network has been kept intentionally wide and flexible, with the aim to gather under a common denomination a series of network models exhibiting different structures and that were introduced for different needs, but that at the same time show some common characteristics and can lead to similar problems. For the same reason, the overview is not meant to be exhaustive, and other network models may exist which can be referred to as feature-rich ones.
In this paper, we will provide an insight in the current status of research in feature-rich network analysis and mining, describing the main types of feature-rich networks and related applications. The aim is to show how embedding features in complex network models can make it possible to improve solutions to classic tasks (e. g. centrality, community detection, link prediction, information diffusion, and so on) and to focus on domains and research questions that have not been deeply investigated so far.
Attributed graphs
Together with the relational information (i.e., the graph), many data sources may also provide attributes describing the relationships or the entities of the network, leading to the notions of a node-attributed graph or an edge-attributed graph, respectively. When the attributes are associated with the relationships, the network can be represented by a weighted graph where the weights, usually used to measure the strength of the tie between the corresponding nodes, are replaced by a vector whose components correspond to attributes characterizing their relation. For instance, in a co-authorship network, the link between two coauthors can be described not only by the total number of their co-publications but also by their dates or by the number of co-publications of each subtype (e. g. conference, journal, etc.). So, a vector can be assigned to the edges to take into account these attributes. Note that in specific cases, alternative network models may be used, such as temporal networks (cf. "Temporal networks" section) for modeling interactions over time or multiplex networks (cf. "Multilayer networks" section) for modeling each attribute by a specific relationship. The concept of (node-) attributed networks refers rather to the case where attributes are assigned to the nodes for describing the corresponding entities. In a friendship network, e. g. the actors can be described by their gender and their age.
In literature, different definitions have been introduced. A first model has been defined by Zhou et al. (2009), an alternative by Yin et al. (2010):
(Attributed Network - Zhou et al. (2009)) An attributed network is defined as a graph G = (V, E) where V and E denote sets of nodes and edges; each node v∈V is associated with a vector of attributes $(v_j, j \in \{1, \dots, p\})$
(Attributed Network with bipartite graph - Yin et al. (2010)) An attributed network is represented by
a graph G = (V, E) describing the relationships between the entities, and
a bipartite graph Ga=(V∪Va,Ea) describing the relationships between the entities and the attributes in such a way that each node v from V is connected to attribute-nodes from Va.
The choice of one of these models depends on the type and the number of the features retained to describe the entities of the network: The second definition is more appropriate when few categorical attributes are considered.
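As a minimal sketch of both representations (assuming the third-party networkx library is available; node names and attributes are illustrative):

```python
import networkx as nx

# Definition 1: attribute vectors stored directly on the nodes.
G = nx.Graph()
G.add_node("alice", gender="F", age=27)
G.add_node("bob", gender="M", age=31)
G.add_node("carol", gender="F", age=25)
G.add_edges_from([("alice", "bob"), ("alice", "carol")])

# Definition 2: a bipartite graph linking entities to attribute-nodes.
Ga = nx.Graph()
Ga.add_edges_from([("alice", "gender=F"), ("bob", "gender=M"),
                   ("carol", "gender=F")])

print(G.nodes(data=True))
print(sorted(Ga.edges()))
```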
In different tasks, taking into account the attributes in addition to the relational information improves the performance of the methods. Thus, attributed networks have been used with success for link prediction, inferring attributes or community detection (Zhou et al. 2010; Yang et al. 2013; Gong et al. 2014; Combe et al. 2015; Atzmueller et al. 2016). However, it is necessary to be careful because structure and attributes may disagree (Peel et al. 2017). Nevertheless, due to the homophily effect and to social influence, they are likely to be aligned, e. g. (McPherson et al. 2001; La Fond and Neville 2010; Mitzlaff et al. 2013; Mitzlaff et al. 2014; Atzmueller and Lemmerich 2018). Consequently, one can hope to benefit from the two sources, notably when one is missing or noisy. Finally, it should be mentioned that generators have recently been designed to automatically build attributed networks (Akoglu and Faloutsos 2009; Palla et al. 2012; Kim and Leskovec 2012; Largeron et al. 2017). Such benchmarks are particularly useful for evaluating the performance of algorithms able to handle the two kinds of data.
A well known subcategory of attributed graphs includes the models used for direct organization and modeling of knowledge elements, e. g. given by concepts, their properties and (inter-)relations. Rooted in the theory on semantic networks (Sowa 2006), such models are known as knowledge networks or knowledge graphs (Bizer et al. 2009; Hoffart et al. 2013). In such network structures, data is integrated into a comprehensive knowledge model capturing the relations between concepts and their properties in an explicit way, cf. (Bizer et al. 2009; Hoffart et al. 2013; Ristoski and Paulheim 2016). For instance, entities (concepts) are usually represented as nodes, there can be categories (labels) associated to node, and conceptual relations are given by directed edges between the nodes (Pujara et al. 2013). Following Paulheim (2017), from the point of construction, a knowledge network then mainly describes real world entities and their interrelations. The possible classes and relations can then also be potentially interrelated in an arbitrary way. Knowledge networks can be exploited in many ways, for example, in order to facilitate modeling, mining, inference, and reasoning. Then, tasks that are supported by knowledge networks include, for example, advanced feature engineering, e. g. (Atzmueller and Sternberg 2017; Wilcke et al. 2017). Furthermore, the constructed knowledge graph can serve as a data integration and exploration mechanism, such that the considered relations and additional information about the contained entities can be utilized by advanced graph mining methods, that work on such feature-rich networks, e. g. by mining the respective attributed graph, e. g. (Atzmueller et al. 2016; Atzmueller et al. 2017). Knowledge graphs thus have a broad range of applications, ranging from knowledge modeling and structuring, cf. (Bizer et al. 2009; Hoffart et al. 2013) to advanced graph mining applications in diverse domains (Ristoski and Paulheim 2016; Wilcke et al. 2017; Atzmueller et al. 2016; Atzmueller and Sternberg 2017).
Heterogeneous information networks
The definition of Heterogeneous Information Network (HIN) models rises from the observation that sophisticated real-world networks can hardly be represented with standard network topologies. Most of real-world connections happen between entities that can be considered as different kinds, and describe different types of relations. A practical example is given by a bibliographic information network, containing entities of type paper, venue and author, where different relation types can connect nodes of different entity types (e. g. authorship between author and paper, publication between paper and venue, and so on) or even nodes of the same type (e. g. coauthorship between authors, citation between papers).
While HINs are a powerful tool to model real-world situations, on the other hand the modeling process should be carried out by looking for a good trade-off between homogeneous networks (i. e. all nodes of the same type) and complete heterogeneity (i. e. each node establishes a different entity type), since both extremes would result in a loss of information. For this reason, the authors in Sun and Han (2012) propose a typed, semi-structured heterogeneous network model, defined as follows:
(Heterogeneous Information Network) An information network is defined as a directed graph \(G = (\mathcal {V}, \mathcal {E})\) with an object type mapping function \(\tau : \mathcal {V} \rightarrow \mathcal {A}\) and a link type mapping function \(\phi : \mathcal {E} \rightarrow \mathcal {R}\), where each object \(v \in \mathcal {V}\) belongs to one particular object type \(\tau (v) \in \mathcal {A}\), each link \(e \in \mathcal {E}\) belongs to a particular relation \(\phi (e) \in \mathcal {R}\), and if two links belong to the same relation type, the two links share the same starting object type as well as the ending object type. When the types of objects \(|\mathcal {A}| > 1\) or the types of relations \(|\mathcal {R}| > 1\), the network is called heterogeneous information network; otherwise, it is a homogeneous information network.
Given a complex heterogeneous information network, it is necessary to provide its meta level (i. e. schema-level) description for better understanding the object types and link types in the network. Therefore, the concept of network schema is proposed, in order to describe the meta structure of a network (Sun and Han 2012):
(Network Schema) The network schema, denoted as \(T_{G} = (\mathcal {A},\mathcal {R})\), is a meta template for a heterogeneous network \(G = (\mathcal {V}, \mathcal {E})\) with the object type mapping \(\tau : \mathcal {V} \rightarrow \mathcal {A}\) and the link mapping \(\phi : \mathcal {E} \rightarrow \mathcal {R}\), which is a directed graph defined over object types \(\mathcal {A}\), with edges as relations from \(\mathcal {R}\).
The network schema of a heterogeneous information network has specified type constraints on the sets of objects and relationships between the objects. These constraints make a heterogeneous information network semi-structured, guiding the exploration of the semantics of the network (Sun and Han 2012). This HIN model has been successfully used for several mining tasks, such as ranking-based clustering combinations (Sun et al. 2009; Sun et al. 2009), transductive and ranking-based classification (Ji et al. 2010; Ji et al. 2011), similarity search (Sun et al. 2011) and relationship prediction (Sun et al. 2012; Deng et al. 2014), and, more recently, learning of object-event embeddings (Gui et al. 2017) and named entity linking (Shen et al. 2018). However, the notion of HIN is general enough to include other network models which are inherently heterogeneous in node and relation types, e. g. networks related to the Internet-of-Things (George and Thampi 2018; Misra et al. 2012; Qiu et al. 2016).
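A small sketch of the bibliographic example as a HIN (again assuming networkx; the mappings τ and φ are stored as node and edge attributes, and all names are illustrative):

```python
import networkx as nx

G = nx.DiGraph()
for node, ntype in [("a1", "author"), ("a2", "author"),
                    ("p1", "paper"), ("v1", "venue")]:
    G.add_node(node, tau=ntype)                  # object type mapping tau

G.add_edge("a1", "p1", phi="authorship")
G.add_edge("a2", "p1", phi="authorship")
G.add_edge("p1", "v1", phi="publication")
G.add_edge("a1", "a2", phi="coauthorship")       # relation type mapping phi

# |A| = 3 object types and |R| = 3 relation types: a heterogeneous network.
print({d["tau"] for _, d in G.nodes(data=True)})
print({d["phi"] for _, _, d in G.edges(data=True)})
```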
Multilayer networks
Multilayer network models provide a powerful and realistic tool for the analysis of complex real-world network systems, enabling an in-depth understanding of the characteristics and dynamics of multiple, interconnected types of node relations and interactions (Dickison et al. 2016). While they can be seen as a form of HIN (cf. "Heterogeneous information networks" section), the main idea here is to model the different relations which may occur between the same set of entities in different layers. The layers can be seen as different interaction contexts, while the participation of an entity in different layers can be seen as a set of different instances of the same entity. When the only inter-layer edges (i. e. edges linking instances in different layers) are the coupling edges (i. e. edges linking different instances of the same entity), this model is generally referred to as a Multiplex Network. As a practical example, in social computing, an individual often has multiple accounts across different social networks. Multilayer networks can be easily used to link distributed user profiles belonging to the same user from multiple platforms, thus enabling the definition of advanced mining tasks, e. g. multilayer community detection (Kim and Lee 2015; Loe and Jensen 2015). Similarly, different layers can be used to model online and offline relations of different types happening in a social network (Gaito et al. 2012; Dunbar et al. 2015), such as followship, like/comment interactions, working relationship, lunch relationship, etc. A multilayer network model which has become very popular in literature is that proposed by Kivelä et al. (2014):
(Multilayer Network) Let \(\mathcal {L} = \{L_{1}, \ldots, L_{\ell }\}\) be a set of layers and \(\mathcal {V}\) be a set of entities. We denote with \(V_{\mathcal {L}} \subseteq \mathcal {V} \times \mathcal {L}\) the set containing the entity-layer combinations in which an entity is present in the corresponding layer. The set \({E_{\mathcal {L}} \subseteq V_{\mathcal {L}} \times V_{\mathcal {L}}}\) contains the undirected links between such entity-layer pairs. We hence denote with \(G_{\mathcal {L}} = (V_{\mathcal {L}}, E_{\mathcal {L}}, \mathcal {V}, \mathcal {L})\) the multilayer network graph with set of nodes \(\mathcal {V}\).
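A minimal Python sketch of this model is given below; the layer and entity names are illustrative only. Nodes are entity-layer pairs, intra-layer edges connect pairs within the same layer, and the helper enumerates the coupling edges that make the network a multiplex.

# Entities and layers mirror the sets V and L of the definition.
entities = {"u1", "u2", "u3"}
layers = {"facebook", "twitter"}

# V_L: entity-layer combinations in which an entity is present.
V_L = {("u1", "facebook"), ("u2", "facebook"), ("u1", "twitter"), ("u3", "twitter")}

# E_L: undirected intra-layer links between entity-layer pairs.
E_L = {frozenset({("u1", "facebook"), ("u2", "facebook")}),
       frozenset({("u1", "twitter"), ("u3", "twitter")})}

def coupling_edges(V_L):
    # In a multiplex network, the only inter-layer edges couple the instances
    # of the same entity across different layers.
    return {frozenset({a, b}) for a in V_L for b in V_L
            if a[0] == b[0] and a[1] != b[1]}

print(coupling_edges(V_L))  # couples ("u1", "facebook") with ("u1", "twitter")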
Another multilayer network model, specifically conceived to represent multilayer social networks, is proposed by Magnani and Rossi in Dickison et al. (2016):
(Multilayer Social Network) Given a set of actors \(\mathcal {A}\) and a set of layers \(\mathcal {L}\), a multilayer network is defined as a quadruple \(G = (\mathcal {A}, \mathcal {L}, V, E)\) where \((V,E)\) is a graph, \(V \subseteq \mathcal {A} \times \mathcal {L}\) and \(E \subseteq V \times V\).
In this model the concept of an Actor models the physical user, while the Nodes can be seen as the "instances" of the actor/user in different contexts/layers (e. g. accounts on different online social networks, or participation in different offline social networks).
Beyond the social networks domain (Dickison et al. 2016; Perna et al. 2018), multilayer networks have been successfully used to model relations and address mining tasks in different domains, such as airline companies (Cardillo et al. 2013), protein-protein interactions (Bonchi et al. 2014), offline-online networks (Scholz et al. 2013), bibliographic networks (Boden et al. 2012), communication networks (Kim and Lee 2015; Bourqui et al. 2016), and remote sensing data (Interdonato et al. 2017).
Temporal networks
Real-world phenomena are dynamic by nature, i. e. entities participating in a phenomenon and the interactions between them evolve over time, and each interaction typically happens at a specific time and lasts for a certain duration. Temporal networks (Li et al. 2017; Zignani et al. 2014) are the models used to represent these dynamic features in network graphs. Temporal networks have been referred to by various other terms, such as evolving graphs, time-varying graphs, timestamped graphs, dynamic networks, and so on.
Holme and Saramäki (2012) identify two main classes of temporal networks, namely contact sequences and interval graphs. A contact sequence network is suitable for cases where there is a set of entities V interacting with each other at certain times, and the durations of the interactions are negligible. Typical systems suitable to be represented as a contact sequence include communication data (sets of e-mails, phone calls, text messages, etc.), and physical proximity data where the duration of the contact is less important (e. g. sexual networks) (Holme and Saramäki 2012). A contact sequence network can be defined as follows:
(Contact sequence network) A contact sequence network \(G=(V,C)\) is defined by a set of vertices \(V\) with an associated set of contacts \(C\), where each contact \(c \in C\) is a triple \((i,j,t)\), with \(i,j \in V\) and \(t\) a timestamp denoting a time of contact between \(i\) and \(j\). A contact sequence network can be equivalently defined as \(G=(V,E,T,f)\), where \(E\) is a set of edges, \(T\) is a set of non-empty timestamp lists, and \(f: E \rightarrow T\) is a function associating each edge with its timestamp list, such that for each \(e \in E\) there exists \(f(e)=T_{e}=\{t_{1},\ldots,t_{n}\}\).
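The two equivalent views can be converted into one another mechanically; the short Python sketch below (with illustrative vertex names and timestamps) groups a list of (i, j, t) contacts into the edge-to-timestamp-list mapping f of the second formulation.

from collections import defaultdict

V = {"a", "b", "c"}
C = [("a", "b", 1), ("a", "b", 5), ("b", "c", 3)]  # contacts (i, j, t)

def to_edge_timestamps(C):
    # Build f: E -> T, mapping each undirected edge e to its timestamp list T_e.
    f = defaultdict(list)
    for i, j, t in C:
        f[frozenset({i, j})].append(t)
    return {e: sorted(ts) for e, ts in f.items()}

print(to_edge_timestamps(C))
# {frozenset({'a', 'b'}): [1, 5], frozenset({'b', 'c'}): [3]}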
If the duration of the interactions is considered (i. e. each edge is active at certain time intervals), then the interval graph model is more suitable:
(Interval graph) An interval graph \(G=(V,E,T,f)\) is defined by a graph \((V,E)\), a set \(T\) of lists of time intervals, and a function \(f: E \rightarrow T\) associating a list of time intervals with each edge \(e \in E\), such that \(T_{e}=\{(t_{1},t_{1}'),\ldots,(t_{n},t_{n}')\}\), with each pair \((t_{i},t_{i}')\) denoting the beginning and ending times of a time interval.
Examples of systems that are natural to model as interval graphs include proximity networks (where a contact can represent that two individuals have been close to each other for some extent of time), seasonal food webs where a time interval represents that one species is the main food source of another at some time of the year, and infrastructural systems like the Internet (Holme and Saramäki 2012). In both cases (i. e. starting from a contact sequence network or from an interval graph), a static time-aggregated graph can be derived, where an edge between two nodes i and j exists if and only if there is at least one contact between i and j. Temporal networks have been used to address problems in different domains, such as community detection in dynamic social networks (Rossetti et al. 2017), activity pattern analysis of editors (Yasseri et al. 2012), temporal aspects of protein interaction (Han et al. 2004) and gene-regulatory networks (Lèbre et al. 2010), analysis of temporal text networks (Vega and Magnani 2018), analysis of epidemic spreading (Moinet et al. 2018; Onaga et al. 2017) and problems related to mobile devices (Tang et al. 2011; Quadri et al. 2014), just to name a few.
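The sketch below (illustrative edges and times) represents an interval graph as an edge-to-interval-list mapping, answers point-in-time activity queries, and derives the static time-aggregated graph just described.

# Interval graph: each edge carries a list of (t_start, t_end) activity intervals.
f = {("a", "b"): [(0, 4), (10, 12)],
     ("b", "c"): [(3, 7)]}

def is_active(edge, t):
    # An edge is active at time t if t falls inside one of its intervals.
    return any(t1 <= t <= t2 for t1, t2 in f[edge])

def aggregate(f):
    # Static time-aggregated graph: keep an edge iff it was active at least once.
    return {e for e, intervals in f.items() if intervals}

print(is_active(("a", "b"), 11))  # True
print(aggregate(f))               # {("a", "b"), ("b", "c")}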
Location-aware networks
As discussed for the time dimension (cf. "Temporal networks" section), in several cases modeling networks from real-world phenomena may require taking into account spatial features. The use of location-based (e. g. georeferenced) information is commonly related to specific research fields, e. g. the ones connected to geographical issues and analyses. Nevertheless, in recent years the increasing availability of GPS-equipped mobile devices gave rise to the development of location-based social networking (LBSN) services, such as Foursquare, Facebook Places, Google Latitude, Tripadvisor and Yelp. Consequently, several research approaches have been proposed which make use of geographical and spatio-temporal features in social network analysis problems.
Based on the analysis in Bao et al. (2015), different types of location-aware networks can typically be defined, depending on which information is extracted from the LBSN:
(Location-location graph) A location-location graph \(G=(V,E)\) is a graph where nodes in \(V\) represent locations and directed edges in \(E \subseteq V \times V\) represent relations between two locations. The semantics of the relation can be defined in different ways, e. g. the distance between the locations (expressed as an edge weight), their similarity, or visits by the same users.
(User-location graph) A user-location graph \(G=(U,V,E)\) is a bipartite graph where nodes in \(U\) represent users, nodes in \(V\) represent locations, and directed edges in \(E \subseteq U \times V\) represent relations between users and locations. The semantics of the relation can be flexible, e. g. an edge may indicate that a user visited or rated a certain location.
(User-user graph) A user-user graph \(G=(V,E)\) is a graph where nodes in \(V\) represent users and directed edges in \(E \subseteq V \times V\) represent relations between users. Some typical edge semantics here may be physical distances, friendship on an LBSN, or features derived from users' location histories (e. g. edges may connect users having visited a common location).
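As an illustration of the last two definitions, the following Python sketch (hypothetical users and locations) stores a user-location graph as a set of visit edges and derives from it a user-user graph under the co-visit semantics mentioned above.

from itertools import combinations

# User-location graph stored as a set of (user, location) visit edges.
visits = {("u1", "cafe"), ("u2", "cafe"), ("u2", "museum"), ("u3", "museum")}

def user_user_covisit(visits):
    # Connect two users iff they visited at least one common location.
    locations = {}
    for u, l in visits:
        locations.setdefault(u, set()).add(l)
    return {frozenset({u, v}) for u, v in combinations(locations, 2)
            if locations[u] & locations[v]}

print(user_user_covisit(visits))
# {u1, u2} share "cafe"; {u2, u3} share "museum"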
Location-aware networks built upon LBSN data are generally used for Point-of-Interest (POI) recommendation tasks (Bao et al. 2015; Zhang and Chow 2015; Liu 2018), with the aim of combining geographical and social influence in the recommendation process. A location-based Influence Maximization problem is addressed in Zhou et al. (2015), exploiting an LBSN to carry out product promotion in an Online-to-Offline (O2O) business model. A location-aware multilayer network for a POI recommendation task is proposed in Interdonato and Tagarelli (2017), which integrates location-aware features from an LBSN (Foursquare), geographical features from Google Maps and conceptual features from Wikipedia on different layers.
Networks based on geographical features can also be extracted from remote sensing data, i. e. satellite images. An approach based on evolution graphs is proposed in Guttler et al. (2017) to detect spatio-temporal dynamics in satellite image time series. Different evolution graphs are produced for particular areas within the study site, which store information about the temporal evolution of a specific geographical area. The graphs are then both studied separately and compared to each other, in order to provide a global analysis of the dynamical evolution of the site.
Probabilistic networks
When using networks to model real-world complex phenomena, it is common to encounter situations where the existence of the relationship between two entities is uncertain. The sources of this uncertainty can be manifold, e. g. links may be derived from erroneous or noisy measurements, inferred from probabilistic models (Monti and Boldi 2017), or even intentionally obfuscated for various reasons. A practical example is offered by biological networks representing protein and gene interactions: since the interactions are observed through noisy and error-prone experiments, link existence is uncertain. In social networks, a major part of uncertainty may arise for reasons related to data collection (e. g. data collected through automated sensors, inferred from anonymized communication data or from self-reporting/logging data (Adar and Ré 2007)), or because the network structure is based on prediction algorithms (e. g. approaches based on link prediction (Liben-Nowell and Kleinberg 2007)), or simply because actual interactions in online and offline social networks are difficult to measure. Similar issues may arise when coping with Temporal (cf. "Temporal networks" section) and Location-aware (cf. "Location-aware networks" section) networks, again due to data collection (von Landesberger et al. 2017; Wunderlich et al. 2017). In specific cases, uncertainty in the link structure may also be intentionally injected into a network for privacy reasons (Boldi et al. 2012).
All these situations can be handled by using probabilistic network models, often referred to as uncertain graphs, whose edges are labeled with a probability of existence. This probability represents the confidence with which one believes that the relation corresponding to the edge holds in reality (Parchas et al. 2015). A typical probabilistic network, referred to as an Uncertain Graph, is defined in Parchas et al. (2015):
(Uncertain Graph) An uncertain graph is defined as a triple \(\mathcal {G}=(V,E,p)\), where the function \(p: E \rightarrow (0,1]\) assigns a probability of existence to each edge.
Following the literature, the authors consider the edge probabilities independent (Potamias et al. 2010; Jin et al. 2011), and assume possible-world semantics (Abiteboul et al. 1987; Dalvi and Suciu 2004). Specifically, the possible-world semantics interprets \(\mathcal {G}\) as a set \(\{G=(V,E_{G})\}_{E_{G} \subseteq E}\) of \(2^{|E|}\) possible deterministic graphs (worlds), each defined by a subset of \(E\). The probability of observing any possible world \(G=(V,E_{G}) \sqsubseteq \mathcal {G}\) is:
$$ Pr(G)= \prod_{e \in E_{G}} p(e) \prod_{e \in E \backslash E_{G}} (1-p(e)) $$
Nevertheless, the expressive power enabled by a probabilistic network model naturally carries with it an explosion in complexity, e. g. the exponential number of possible worlds may even prevent exact query evaluation on the graph. More specifically, even simple queries on deterministic graphs become #P-complete problems on uncertain graphs, and even approximate approaches based on sampling may be too expensive in most cases. To overcome these issues, Parchas et al. propose to create deterministic representative instances of uncertain graphs that maintain the underlying graph properties (Parchas et al. 2015).
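The possible-world semantics and its sampling-based approximation can be made concrete in a few lines of Python; the edge probabilities below are illustrative. The first helper evaluates the product formula above for a given world, and the second draws one deterministic world, which is the basic step of Monte Carlo estimation over uncertain graphs.

import random
from math import prod

# Uncertain graph: edge -> existence probability; edges are assumed independent.
p = {("a", "b"): 0.9, ("b", "c"): 0.5, ("a", "c"): 0.2}

def world_probability(E_G, p):
    # Pr(G) for the possible world defined by the edge subset E_G.
    return prod(pe if e in E_G else 1 - pe for e, pe in p.items())

def sample_world(p):
    # Flip each edge independently; repeating this gives Monte Carlo samples.
    return {e for e, pe in p.items() if random.random() < pe}

print(world_probability({("a", "b")}, p))  # 0.9 * (1 - 0.5) * (1 - 0.2) ≈ 0.36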
Conclusions and future challenges
In this paper, we discussed the main feature-rich network models, namely Attributed Graphs, Heterogeneous Information Networks, Multilayer Networks, Temporal Networks, Location-aware Networks and Probabilistic Networks. Table 1 summarizes the main features exposed for nodes and edges by each discussed model. We introduced the term Feature-rich Network in order to refer to all the complex network models that expose one or more features in addition to the network topology. We kept the definition intentionally wide, with the aim of gathering under a common denomination a series of network models which were introduced for different needs, but which at the same time show some common characteristics and can lead to similar problems. Given the flexibility of the definition, this overview is not meant to be exhaustive, and many other feature-rich network models (e. g. data-driven ones) may exist or may be defined in different domains.

The use of Feature-rich Networks can intuitively be perceived as beneficial for most research tasks based on graph data, given the greater quantity of information carried by the network object with respect to classic ones. Nevertheless, their expressive power has not yet been fully exploited; there is therefore an emerging need to provide insights into how the study of feature-rich network models can pave the way for the definition of domain-specific problems that might not be adequately addressed by classic ones. Moreover, the research community also needs insight into how correctly handling a richer feature set can lead to the definition of network analysis and mining methods that are able to address classic tasks (e. g. community detection, link prediction, information propagation, and so on), improving upon classic models in terms of result quality, while limiting the impact on their efficiency and scalability. The use of feature-rich network models may also be beneficial for problems in interdisciplinary research fields. In fact, the interplay among researchers from different fields can help model the most interesting features, and find new semantics for well-known network analysis tasks. A (non-exhaustive) list of domains which usually cope with interdisciplinary research environments and would benefit from the use of these models includes social sciences, physics, remote sensing, health care support, and crime and crisis management.
Table 1 Table summarizing the main features exposed for nodes and edges for the discussed feature-rich network models
Abiteboul, S, Kanellakis PC, Grahne G (1987) On the representation and querying of sets of possible worlds In: Proceedings of the Association for Computing Machinery Special Interest Group on Management of Data 1987 Annual Conference, San Francisco, CA, USA, May 27-29, 1987, 34–48.
Adar, E, Ré C (2007) Managing uncertainty in social networks. IEEE Data Eng Bull 30(2):15–22.
Akoglu, L, Faloutsos C (2009) Rtg: a recursive realistic graph generator using random typing. Data Min Knowl Disc (DMKD) 19(2):194–209.
Atzmueller, M, Doerfel S, Mitzlaff F (2016) Description-Oriented Community Detection using Exhaustive Subgroup Discovery. Inf Sci 329:965–984. Publisher: Elsevier, United States.
Atzmueller, M, Kloepper B, Mawla HA, Jäschke B, Hollender M, Graube M, Arnu D, Schmidt A, Heinze S, Schorer L, Kroll A, Stumme G, Urbas L (2016) Big Data Analytics for Proactive Industrial Decision Support: Approaches & First Experiences in the Context of the FEE Project. atp edition 58(9).
Atzmueller, M, Lemmerich F (2018) Homophily at Academic Conferences In: Proc. WWW 2018 (Companion).. ACM Press, New York.
Atzmueller, M, Schmidt A, Kloepper B, Arnu D (2017) HypGraphs: An Approach for Analysis and Assessment of Graph-Based and Sequential Hypotheses In: New Frontiers in Mining Complex Patterns. Postproceedings NFMCP 2016, volume 10312 of LNAI.. Springer, Berlin/Heidelberg.
Atzmueller, M, Sternberg E (2017) Mixed-Initiative Feature Engineering Using Knowledge Graphs In: Proc. 9th International Conference on Knowledge Capture (K-CAP).. ACM Press, New York.
Bao, J, Zheng Y, Wilkie D, Mokbel MF (2015) Recommendations in location-based social networks: a survey. GeoInformatica 19(3):525–565.
Bizer, C, Lehmann J, Kobilarov G, Auer S, Becker C, Cyganiak R, Hellmann S (2009) Dbpedia-a crystallization point for the web of data. Web Semant Sci Serv Agents World Wide Web 7(3):154–165.
Boden, B, Günnemann S, Hoffmann H, Seidl T (2012) Mining coherent subgraphs in multi-layer graphs with edge labels In: Proc. ACM KDD, 1258–1266.. ACM Press, New York.
Boldi, P, Bonchi F, Gionis A, Tassa T (2012) Injecting uncertainty in graphs for identity obfuscation. PVLDB 5(11):1376–1387.
Bonchi, F, Gionis A, Gullo F, Ukkonen A (2014) Distance oracles in edge-labeled graphs In: Proc. EDBT, 547–558.
Bourqui, R, Ienco D, Sallaberry A, Poncelet P (2016) Multilayer graph edge bundling In: Proc. PacificVis, 184–188.. IEEE Computer Society, Washington, D.C.
Cardillo, A, Gomez-Gardenes J, Zanin M, Romance M, Papo D, del Pozo F, Boccaletti S (2013) Emergence of network features from multiplexity. Sci Rep 3:1344.
Combe, D, Largeron C, Géry M, Egyed-Zsigmond E (2015) I-louvain: An attributed graph clustering method In: Advances in Intelligent Data Analysis XIV - 14th International Symposium, IDA 2015, Saint Etienne, France, October 22-24, 2015, Proceedings, 181–192.. Springer, Berlin/Heidelberg.
Dalvi, NN, Suciu D (2004) Efficient query evaluation on probabilistic databases In: (e)Proceedings of the Thirtieth International Conference on Very Large Data Bases, Toronto, Canada, August 31 - September 3 2004, 864–875.. Morgan Kaufmann, Burlington.
Deng, H, Han J, Li H, Ji H, Wang H, Lu Y (2014) Exploring and inferring user-user pseudo-friendship for sentiment analysis with heterogeneous networks. Stat Anal Data Min 7(4):308–321.
Dickison, ME, Magnani M, Rossi L (2016) Multilayer social networks. Cambridge University Press, Cambridge.
Dunbar, RIM, Arnaboldi V, Conti M, Passarella A (2015) The structure of online social networks mirrors those in the offline world. Soc Networks 43:39–47.
Gaito, S, Rossi GP, Zignani M (2012) Facencounter: Bridging the gap between offline and online social networks In: Eighth International Conference on Signal Image Technology and Internet Based Systems, SITIS 2012, Sorrento, Naples, Italy, November 25-29, 2012, 768–775.. IEEE Computer Society, Washington, D.C.
George, G, Thampi SM (2018) A graph-based security framework for securing industrial iot networks from vulnerability exploitations. IEEE Access 6:43586–43601.
Gong, NZ, Talwalkar A, Mackey L, Huang L, Shin ECR, Stefanov E, (Runting) Shi E, Song D (2014) Joint link prediction and attribute inference using a social-attribute network. ACM Trans Intell Syst Technol 5(2):27:1–27:20.
Gui, H, Liu J, Tao F, Jiang M, Norick B, Kaplan LM, Han J (2017) Embedding learning with events in heterogeneous information networks. IEEE Trans Knowl Data Eng 29(11):2428–2441.
Guttler, F, Ienco D, Nin J, Teisseire M, Poncelet P (2017) A graph-based approach to detect spatiotemporal dynamics in satellite image time series. ISPRS J Photogramm Remote Sens 130:92–107.
Han, J-DJ, Bertin N, Hao T, Goldberg DS, Berriz GF, Zhang LV, Dupuy D, Walhout AJM, Cusick ME, Roth FP, Vidal M (2004) Evidence for dynamically organized modularity in the yeast protein-protein interaction network. Nature.
Hoffart, J, Suchanek FM, Berberich K, Weikum G (2013) Yago2: A spatially and temporally enhanced knowledge base from wikipedia. Artif Intell 194:28–61.
Holme, P, Saramäki J (2012) Temporal networks. Phys Rep 519(3):97–125.
Interdonato, R, Tagarelli A (2017) Personalized recommendation of points-of-interest based on multilayer local community detection In: Proc. Social Informatics - 9th International Conference, SocInfo 2017, Oxford, UK, September 13-15 2017, Proceedings, Part I, 552–571.. Springer, Berlin/Heidelberg.
Interdonato, R, Tagarelli A, Ienco D, Sallaberry A, Poncelet P (2017) Local community detection in multilayer networks. Data Min Knowl Discov 31(5):1444–1479.
Ji, M, Han J, Danilevsky M (2011) Ranking-based classification of heterogeneous information networks In: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA, August 21-24, 2011, 1298–1306.. ACM Press, New York.
Ji, M, Sun Y, Danilevsky M, Han J, Gao J (2010) Graph regularized transductive classification on heterogeneous information networks In: Machine Learning and Knowledge Discovery in Databases, European Conference, ECML PKDD 2010, Barcelona, Spain, September 20-24 2010, Proceedings, Part I, 570–586.. Springer, Berlin/Heidelberg.
Jin, R, Liu L, Aggarwal CC (2011) Discovering highly reliable subgraphs in uncertain graphs In: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA, August 21-24, 2011, 992–1000.. ACM Press, New York.
Kim, J, Lee J-G (2015) Community detection in multi-layer graphs: A survey. SIGMOD Record 44(3):37–48.
Kim, M, Leskovec J (2012) Multiplicative attribute graph model of real-world networks. Internet Math 8(1-2):113–160.
Kivela, M, Arenas A, Barthelemy M, Gleeson JP, Moreno Y, Porter MA (2014) Multilayer networks. J Complex Networks 2(3):203–271.
La Fond, T, Neville J (2010) Randomization tests for distinguishing social influence and homophily effects In: Proceedings of the 19th international conference on World wide web, 601–610.. ACM, New York.
Largeron, C, Mougel P-N, Benyahia O, Zaïane OR (2017) Dancer: dynamic attributed networks with community structure generation. Knowl Inf Syst 53(1):109–151.
Lèbre, S, Becq J, Devaux F, Stumpf MPH, Lelandais G (2010) Statistical inference of the time-varying structure of gene-regulation networks. BMC Syst Biol 4(1):130.
Li, A, Cornelius SP, Liu Y-Y, Wang L, Barabási A-L (2017) The fundamental advantages of temporal networks. Science 358(6366):1042–1046.
Liben-Nowell, D, Kleinberg JM (2007) The link-prediction problem for social networks. JASIST 58(7):1019–1031.
Liu, S (2018) User modeling for point-of-interest recommendations in location-based social networks: The state of the art. Mob Inf Syst 2018:7807461:1–7807461:13.
Loe, CW, Jensen HJ (2015) Comparison of communities detection algorithms for multiplex. Physica A 431:29–45.
McPherson, M, Smith-Lovin L, Cook JM (2001) Birds of a feather: Homophily in social networks. Annu Rev Sociol 27(1):415–444.
Misra, S, Barthwal R, Obaidat MS (2012) Community detection in an integrated internet of things and social network architecture In: 2012 IEEE Global Communications Conference (GLOBECOM), 1647–1652.. IEEE Computer Society, Washington, D.C.
Mitzlaff, F, Atzmueller M, Hotho A, Stumme G (2014) The Social Distributional Hypothesis. J Soc Netw Anal Min 4(216):1–14.
Mitzlaff, F, Atzmueller M, Stumme G, Hotho A (2013) Semantics of User Interaction in Social Media. In: Ghoshal G, Poncela-Casasnovas J, Tolksdorf R (eds)Complex Networks IV, volume 476 of Studies in Computational Intelligence.. Springer, Heidelberg.
Moinet, A, Pastor-Satorras R, Barrat A (2018) Effect of risk perception on epidemic spreading in temporal networks. Phys Rev E 97:012313.
Monti, C, Boldi P (2017) Estimating latent feature-feature interactions in large feature-rich graphs. Internet Math:2017.
Onaga, T, Gleeson JP, Masuda N (2017) Concurrency-induced transitions in epidemic dynamics on temporal networks. Phys Rev Lett 119:108301.
Palla, K, Knowles DA, Ghahramani Z (2012) An infinite latent attribute model for network data In: Proceedings of the 29th International Conference on Machine Learning (ICML), 1607–1614.. Omnipress, USA.
Parchas, P, Gullo F, Papadias D, Bonchi F (2015) Uncertain graph processing through representative instances. ACM Trans Database Syst 40(3):20:1–20:39.
Paulheim, H (2017) Knowledge graph refinement: A survey of approaches and evaluation methods. Semant web 8(3):489–508.
Peel, L, Larremore DB, Clauset A (2017) The ground truth about metadata and community detection in networks. Sci Adv 3(5). American Association for the Advancement of Science.
Perna, D, Interdonato R, Tagarelli A (2018) Identifying users with alternate behaviors of lurking and active participation in multilayer social networks. IEEE Trans Comput Soc Syst 5(1):46–63.
Potamias, M, Bonchi F, Gionis A, Kollios G (2010) k-nearest neighbors in uncertain graphs. PVLDB 3(1):997–1008.
Pujara, J, Miao H, Getoor L, Cohen W (2013) Knowledge graph identification In: International Semantic Web Conference, 542–557.. Springer, Berlin/Heidelberg.
Qiu, T, Luo D, Xia F, Deonauth N, Si W, Tolba A (2016) A greedy model with small world for improving the robustness of heterogeneous Internet of Things. Comput Netw 101:127–143.
Quadri, C, Zignani M, Capra L, Gaito S, Rossi GP (2014) Multidimensional human dynamics in mobile phone communications. PLoS ONE 9(7):1–12.
Ristoski, P, Paulheim H (2016) Semantic Web in Data Mining and Knowledge Discovery: A Comprehensive Survey. Web Semant 36:1–22.
Rossetti, G, Pappalardo L, Pedreschi D, Giannotti F (2017) Tiles: an online algorithm for community discovery in dynamic social networks. Mach Learn 106(8):1213–1241.
Scholz, C, Atzmueller M, Barrat A, Cattuto C, Stumme G (2013) New Insights and Methods For Predicting Face-To-Face Contacts. In: Kiciman E, Ellison NB, Hogan B, Resnick P, Soboroff I (eds)Proc. 7th Intl. AAAI Conference on Weblogs and Social Media.. AAAI Press, Palo Alto.
Shen, W, Han J, Wang J, Yuan X, Yang Z (2018) SHINE+: A general framework for domain-specific entity linking with heterogeneous information networks. IEEE Trans Knowl Data Eng 30(2):353–366.
Sowa, JF (2006) Semantic networks. Encycl Cogn Sci. https://doi.org/10.1002/0470018860.s00065.
Sun, Y, Han J (2012) Mining Heterogeneous Information Networks: Principles and Methodologies. Synthesis Lectures on Data Mining and Knowledge Discovery. Morgan & Claypool Publishers.
Sun, Y, Han J, Zhao P, Yin Z, Cheng H, Wu T (2009) RankClus: integrating clustering with ranking for heterogeneous information network analysis In: Proc. Int. Conf. on Extending Database Technology (EDBT), 565–576.. ACM Press, New York.
Sun, Y, Yu Y, Han J (2009) Ranking-based clustering of heterogeneous information networks with star network schema In: Proc. ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (KDD), 797–806.. ACM Press, New York.
Sun, Y, Han J, Aggarwal CC, Chawla NV (2012) When will it happen?: relationship prediction in heterogeneous information networks In: Proceedings of the Fifth International Conference on Web Search and Web Data Mining, WSDM 2012, Seattle, WA, USA, February 8-12, 2012, 663–672.. ACM, New York.
Sun, Y, Han J, Yan X, Yu PS, Wu T (2011) Pathsim: Meta path-based top-k similarity search in heterogeneous information networks. PVLDB 4(11):992–1003.
Tang, JK, Mascolo C, Musolesi M, Latora V (2011) Exploiting temporal complex network metrics in mobile malware containment In: 12th IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks, WOWMOM 2011, Lucca, Italy, 20-24 June, 2011, 1–9.. IEEE Computer Society, Washington, D.C.
Vega, D, Magnani M (2018) Foundations of temporal text networks. Appl Netw Sci 3(1):25:1–25:26.
von Landesberger, T, Bremm S, Wunderlich M (2017) Typology of uncertainty in static geolocated graphs for visualization. IEEE Comput Graph Appl 37(5):18–27.
Wilcke, X, Bloem P, de Boer V (2017) The Knowledge Graph as the Default Data Model for Learning on Heterogeneous Knowledge. Data Sci 1(1-2):39–57.
Wunderlich, M, Ballweg K, Fuchs G, von Landesberger T (2017) Visualization of delay uncertainty and its impact on train trip planning: A design study. Comput Graph Forum 36(3):317–328.
Yang, J, McAuley J, Leskovec J (2013) Community Detection in Networks with Node Attributes In: 2013 IEEE 13th International Conference on Data Mining, 1151–1156.. IEEE Computer Society, Washington, D.C.
Yasseri, T, Sumi R, Kertész J (2012) Circadian patterns of wikipedia editorial activity: A demographic analysis. PLoS ONE 7:1–8.
Yin, Z, Gupta M, Weninger T, Han J (2010) Linkrec: A unified framework for link recommendation with user attributes and graph structure In: Proceedings of the 19th International Conference on World Wide Web, WWW '10, 1211–1212.. ACM Press, New York.
Zhang, J-D, Chow C-Y (2015) Point-of-interest recommendations in location-based social networks. SIGSPATIAL Special 7(3):26–33.
Zhou, T, Cao J, Liu B, Xu S, Zhu Z, Luo J (2015) Location-based influence maximization in social networks In: Proceedings of the 24th ACM International Conference on Information and Knowledge Management, CIKM 2015, Melbourne, VIC, Australia, October 19 - 23, 2015, 1211–1220.. ACM Press, New York.
Zhou, Y, Cheng H, Yu JX (2009) Graph clustering based on structural/attribute similarities. Proc VLDB Endow 2(1):718–729.
Zhou, Y, Cheng H, Yu JX (2010) Clustering large attributed graphs: An efficient incremental approach In: 2010 IEEE International Conference on Data Mining, 689–698.. IEEE Computer Society, Washington, D.C.
Zignani, M, Gaito S, Rossi GP, Zhao X, Zheng H, Zhao BY (2014) Link and triadic closure delay: Temporal metrics for social network dynamics In: Proceedings of the Eighth International Conference on Weblogs and Social Media, ICWSM 2014, Ann Arbor, Michigan, USA, June 1-4, 2014.. The AAAI Press, USA.
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
CIRAD - UMR TETIS, Montpellier, France
Roberto Interdonato
Tilburg University, Tilburg, The Netherlands
Martin Atzmueller
Università degli Studi di Milano, Milan, Italy
Sabrina Gaito
Université Sorbonne Paris Cité, Paris, France
Rushed Kanawati
Laboratoire Hubert Curien, Université de Lyon, Université Jean Monnet, Saint-Etienne, France
Christine Largeron
Nokia Bell Labs, Dublin, Ireland
Alessandra Sala
RI contributed to the "Introduction", "Heterogeneous information networks", "Multilayer networks", "Temporal networks", "Location-aware networks", and "Conclusions and future challenges" sections, and supervised the writing of the article. MA contributed to the "Introduction", "Attributed graphs" and "Multilayer networks" sections. CL and RK contributed to the "Attributed graphs" and "Multilayer networks" sections. SG and AS contributed to the "Temporal networks", "Location-aware networks" and "Probabilistic networks" sections. All authors read and approved the final manuscript.
Correspondence to Roberto Interdonato.
Roberto Interdonato is a Research Scientist at Cirad, UMR TETIS, Montpellier, France. He was previously a post-doc researcher at University of La Rochelle (France), Uppsala University (Sweden) and at University of Calabria (Italy), where he received his Ph.D. in computer engineering in 2015. His Ph.D. work focused on novel ranking problems in information networks. His research interests include topics in data mining and machine learning applied to complex network analysis (e.g., social media networks, trust networks, semantic networks, bibliographic networks) and to remote sensing. On these topics he has coauthored journal articles and conference papers, organized workshops, presented tutorials at international conferences and developed practical software tools.
Martin Atzmueller is Associate Professor at the Department of Cognitive Science and Artificial Intelligence at Tilburg University as well as Visiting Professor at the Université Sorbonne Paris Cité. He earned his habilitation (Dr. habil.) in 2013 at the University of Kassel, where he also was appointed as adjunct professor (Privatdozent). Further, he received his Ph.D. (Dr. rer. nat.) in Computer Science from the University of Würzburg in 2006. He studied Computer Science at the University of Texas at Austin (USA) and at the University of Würzburg, where he completed his MSc in Computer Science. His research areas include data science, data mining, network analysis, wearable sensors and big data. He has published more than 200 scientific articles in top venues, e.g., the International Joint Conference on Artificial Intelligence (IJCAI), the European Conference on Machine Learning and Principles and Practice on Knowledge Discovery in Databases (ECML PKDD), the IEEE Conference on Social Computing (SocialCom), the ACM/IEEE International Conference on Advances in Social Networks Analysis and Mining (ASONAM), the ACM International Conference on Information and Knowledge Management (CIKM) and the ACM Conference on Hypertext and Social Media (HT). He is the winner of several Best Paper and Innovation Awards. He regularly acts as PC member of several top-tier conferences and as co-organizer of a number of international workshops, conferences, and tutorials on the topics of data science and network science, in particular on community detection and mining attributed networks. He can be contacted at [email protected], and his web site is at https://martin.atzmueller.net. Contact info: Tilburg University, Department of Cognitive Science and Artificial Intelligence, Warandelaan 2, 5037 AB Tilburg, Netherlands, Tel: +31-(0)13 466 4736, [email protected]
Sabrina Gaito received a degree in Physics in 1996, a Master Degree in Material Science in 1998 and a Ph.D. in Applied Mathematics in 2002 from the University of Milano, Italy. She is currently Assistant Professor at the same University, where she teaches Social Media Mining and Computer Networks. Her research activity spans both network science, with a focus on complex network theory applied to social networking, human mobility and behavior, and network technology, with a focus on ad hoc networks and mobile applications.
Rushed Kanawati is an associate professor in computer science at University Paris 13, France, and a researcher at LIPN CNRS UMR 7030. He is also head of the computer networks department at the technological institute, Villetaneuse. He received a PhD degree in computer science from INPG, France, in 1998. He then joined INRIA as an expert engineer, where he worked on designing and implementing recommender systems. His recent research work covers various topics such as link prediction and community detection in complex networks, as well as multiplex and attributed network analysis. He is the author of more than 150 papers in national and international venues. He is regularly involved in organizing conferences, workshops and tutorials, mainly in the area of complex network analysis. More information can be found at his web site: http://lipn.fr/~kanawati. Contact info: University Paris 13 - LIPN CNRS UMR 7030, 99 Av. J-B Clément, 93430 Villetaneuse, Tel: +33-(0)1 49404077, [email protected]
Christine Largeron is a full professor in computer science. She received a Ph.D in Computer Science from Claude Bernard University (Lyon, France) in 1991. She has been a Professor at Jean Monnet University (France) since 2006, and she is the head of the Data Mining and Information Retrieval group of the Hubert Curien Laboratory. Her research interests focus on machine learning, data mining, information retrieval, text mining, social mining, and network analysis. She regularly acts as PC member of several top-tier conferences and as co-organizer of a number of international workshops and conferences. More information can be found on her web site: https://perso.univ-st-etienne.fr/largeron/. Contact info: Laboratoire Hubert Curien - UMR CNRS 5516 (LHC), Jean Monnet University, 18 rue Benoit Lauras, 42000 Saint-Etienne, Tel: +33-(0)4 77 91 57 56 (57 80), [email protected]
Alessandra Sala, head of the Analytics Research Group in Bell Labs, is the global lead of the analytics research program in Bell Labs. She is responsible for delivering breakthrough research assets that create new market opportunities and technology that has the potential to change our human lives. In her prior appointment, she was the technical manager of the Data Analytics and Operations Research group in Bell Labs Ireland. Before that, she held a research associate position in the Department of Computer Science at the University of California Santa Barbara. During this appointment, she was a key contributor to several funded proposals from the National Science Foundation in the USA, and her research was awarded the Cisco Research Award in 2011. She focused her research on modeling massive graphs with an emphasis on mitigating privacy threats for Online Social Network users. Before that, she worked for two years as a post-doctoral fellow with the CurrentLab research group led by Prof. Ben Y. Zhao. Before UCSB, she completed her Ph.D in Computer Science at the University of Salerno, Italy. Her research focus lies on distributed algorithms, data analytics and complexity analysis with an emphasis on graph algorithms and, recently, AI, machine learning and deep learning. In her previous research she has developed efficient distributed routing algorithms that support robust and flexible application-level services such as scalable search, flexible data dissemination, and reliable anonymous communication. She was a Track Chair of WWW 2016, the general chair of ACM COSN 2014, and has served on the TPC of several networking and data mining conferences such as IEEE INFOCOM, WWW, SEA, ICWSM, P2P, etc. In 2015 she was awarded Distinguished Member of the IEEE INFOCOM Technical Program Committee.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Interdonato, R., Atzmueller, M., Gaito, S. et al. Feature-rich networks: going beyond complex network topologies. Appl Netw Sci 4, 4 (2019). https://doi.org/10.1007/s41109-019-0111-x
Accepted: 04 January 2019
Modeling, Analyzing and Mining Feature-Rich Networks | CommonCrawl |
What are the characteristics of a hexagonal prism?
Hexagonal prisms are three-dimensional figures formed by two parallel hexagonal bases. These bases are connected by six lateral rectangular faces. This means that, in total, a hexagonal prism has 8 faces: two hexagonal and six rectangular. Since these figures are three-dimensional, they have two important properties: volume and surface area.
Here, we will learn about some of the most important characteristics of hexagonal prisms. Also, we will learn about their most commonly used formulas, and we will use them to solve some problems.
Definition of a hexagonal prism
Fundamental characteristics of a hexagonal prism
Important hexagonal prism formulas
Examples of hexagonal prism problems
A hexagonal prism is a 3D figure that has two hexagonal bases that are parallel to each other. These hexagonal bases are joined by six lateral rectangular faces. Hexagonal prisms have a total of 8 faces, 12 vertices, and 18 edges.
Hexagonal prisms can be regular or irregular. A regular hexagonal prism is a prism in which its bases are regular hexagons. Regular hexagons have all their sides of the same length. These are the most common hexagonal prisms.
On the other hand, irregular hexagonal prisms are prisms in which their bases are irregular hexagons. Irregular hexagons have sides with different lengths and internal angles with different measurements.
The following are the main characteristics of a hexagonal prism:
Both bases are hexagons.
The bases are parallel to each other.
If the prism is regular, both hexagonal faces are congruent and have sides of the same length.
If the prism is regular, the six lateral faces are congruent rectangles.
In total, these prisms have 8 faces: 2 hexagonal and 6 rectangular.
These prisms have 12 vertices.
These prisms have 18 edges.
Since hexagonal prisms are 3D figures, their most important formulas are the volume formula and the surface area formula.
Formula for the volume of a hexagonal prism
The volume of a hexagonal prism is equal to the area of its base multiplied by the height of the prism. Therefore, we have the following:
$latex V=\frac{3\sqrt{3}}{2}{{a}^2}h$
where a represents the length of one of the sides of the hexagon and h represents the height of the prism.
Formula for the surface area of a hexagonal prism
The surface area is calculated by adding the areas of all the faces of the prism. We have two hexagonal faces and six rectangular faces. The area of the two hexagonal faces is equal to $latex 3 \sqrt{3}{{a}^2}$ and the area of the six rectangular faces is equal to 6ah. Therefore, we have:
$latex A_{s}=3\sqrt{3}{{a}^2}+6ah$
The formulas seen above are applied to solve the following exercises.
A hexagonal prism has a height of 10 m and its base has sides of length 5 m. What is its volume?
Solution: We have the lengths $latex h=10$ and $latex a=5$. We use the volume formula with these values, and we have:
$latex V=\frac{3\sqrt{3}}{2}{{(5)}^2}(10)$
$latex V=\frac{3\sqrt{3}}{2}(25)(10)$
$latex V=649.5$
The volume is 649.5 m³.
What is the volume of a prism that has a height of 12 m and a hexagonal base with sides of length 4 m?
Solution: We use the volume formula with the given values:

$latex V=\frac{3\sqrt{3}}{2}{{(4)}^2}(12)$

$latex V=\frac{3\sqrt{3}}{2}(16)(12)$

$latex V=498.8$

The volume is 498.8 m³.
What is the surface area of a hexagonal prism that has a height of 8 m and sides of length 4 m?
Solution: We use the formula for surface area with the given values:
$latex A_{s}=3\sqrt{3}{{a}^2}+6ha$
$latex A_{s}=3\sqrt{3}{{(4)}^2}+6(8)(4)$
$latex A_{s}=83.1+192$
$latex A_{s}=275.1$
The surface area is 275.1 m².
A hexagonal prism has a height of 11 m and sides of length 6 m. What is its surface area?
Solution: Using the formula for surface area, we have:
$latex A_{s}=3\sqrt{3}{{(6)}^2}+6(11)(6)$

$latex A_{s}=187.1+396$

$latex A_{s}=583.1$

The surface area is 583.1 m².
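As a quick cross-check of the formulas and the worked examples above, here is a minimal Python sketch (the function names are ours, introduced only for illustration):

import math

def hexagonal_prism_volume(a, h):
    # V = (3 * sqrt(3) / 2) * a^2 * h, for a regular hexagonal base with side a.
    return 1.5 * math.sqrt(3) * a**2 * h

def hexagonal_prism_surface_area(a, h):
    # A_s = 3 * sqrt(3) * a^2 + 6 * a * h: two hexagons plus six rectangles.
    return 3 * math.sqrt(3) * a**2 + 6 * a * h

print(round(hexagonal_prism_volume(5, 10), 1))       # 649.5, as in Example 1
print(round(hexagonal_prism_surface_area(4, 8), 1))  # 275.1, as in Example 3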
Interested in learning more about hexagonal prisms? Take a look at these pages:
Volume of a Hexagonal Prism – Formulas and Examples
Surface Area of a Hexagonal Prism – Formulas and Examples
Apothem of a Hexagonal Prism – Formulas and Examples
Faces, Vertices and Edges in a Hexagonal Prism
Elements of a Hexagonal Prism
Jefferson Huera Guzman
Jefferson is the lead author and administrator of Neurochispas.com. The interactive Mathematics and Physics content he has created has helped many students.
Physiological characterization of a new thermotolerant yeast strain isolated during Brazilian ethanol production and its application in high temperature fermentation
Cleiton Dias do Prado, Gustavo Patricio Lorca Mandrujano, Jonas Paulino de Souza, and 11 more
published 27 Oct, 2020
Read the published version in Biotechnology for Biofuels and Bioproducts.
posted 10 Jul, 2020
The use of thermotolerant yeast strains can improve the efficiency of ethanol fermentation, allowing fermentation to occur at temperatures higher than 40 °C. This increase in temperature could benefit traditional bio-ethanol production and allow simultaneous saccharification and fermentation (SSF) of starch or lignocellulosic biomass.
We identified and characterized the physiology of a new thermotolerant strain able to ferment at 40 °C while producing high yields of ethanol. Our results showed that, in comparison to the industrial yeast CAT-1, our strain was more resistant to various stressors generated during the production of first- and second-generation ethanol, and it was also able to change the expression pattern of genes involved in sucrose assimilation (SUC2 and AGT1). The formation of secondary fermentation products was different at 40ºC, with reduced expression of genes involved in the formation of glycerol (GPD2), acetate (ALD6 and ALD4), and acetyl-CoA (ACS2).
The LBGA-01 strain is a thermotolerant strain that modulates the expression of key genes, changing metabolic pathways during high-temperature fermentation, and it tolerates high concentrations of ethanol, sugar, lactic acid, acetic acid, furfural and HMF. This indicates that this strain can be used to improve first- and second-generation ethanol production in Brazil.
Subject area: Biotechnology and Bioengineering. Keywords: thermotolerant, ethanol production.
Currently, Brazil is the second largest ethanol producer worldwide, using sugar cane as a fermentative substrate, since its high sucrose concentration is suitable for sugar and ethanol production [1; 2; 3; 4]. During ethanol production, the wort, composed of molasses from sugar production and sugar cane juice, is mixed with yeasts at high cell densities (8–22% v/v). The fermentation occurs in approximately 6–8 hours with a high ethanol yield at the end of the fermentation (90–92%) [3; 4]. An important particularity of the Brazilian process is the recycling of yeast, achieved by centrifugation followed by treatment with sulfuric acid (pH 2–2.25 for 2–3 hours), used to reduce bacterial contamination during the harvest. The fermentation occurs between 28–35ºC and, during the summer, the maintenance of this temperature is only possible through the use of water coolers installed in the fermentation tanks. This process is crucial to keep the temperature within an ideal range, not exceeding 35ºC, but it is costly and requires large quantities of water [5].
In addition to temperature, several other stress conditions, such as pH variation, osmotic stress, and contamination by bacteria and wild yeasts, can affect the fermentative process [5]. Therefore, the production process should be performed using strains able to survive all of these conditions in the fermentation tank. Brazil has a consolidated process, and for a long time a significant number of ethanol producers have employed mainly two industrial yeast strains known as CAT-1 and PE-2. These yeasts were isolated from Brazilian industries in the 1990s directly from the ethanol process, and their genomes were sequenced in 2009 [2; 6; 7; 8; 9].
Although these strains have performed well for several years, the composition of Brazilian fermentation substrates has been changing in recent years, especially after sugar cane burning was prohibited by law. As a result, these strains persisted less in the fermentation tanks during the harvest, being replaced by wild yeasts in several mills in Brazil [5]. Moreover, CAT-1 and PE-2 are not able to ferment appropriately above 35ºC, which reduces their viability [10].
Aiming to maintain productivity and to improve the Brazilian fermentation process, our research group has been isolating new strains from distilleries since 2009, in order to identify new Saccharomyces cerevisiae strains that can adapt to the new conditions or that have desirable characteristics such as ethanol, sugar, and temperature tolerance [5].
Here, we describe the identification and the physiological and molecular characterization of a new thermotolerant S. cerevisiae strain that is able to grow at 40ºC and generates high ethanol yields under these conditions. This strain also showed important changes in the expression pattern of genes in pathways involved in fermentation efficiency, which can provide information about its thermotolerant phenotype and associated fermentative performance. In addition, this strain is resistant to the stressors produced during first- and second-generation ethanol production, highlighting its attributes for employment in high-temperature fermentation, with a view to improving the Brazilian ethanol production chain.
Isolation, identification and molecular genotyping of a thermotolerant yeast strain for use in high temperature fermentation for ethanol production
To obtain personalized yeast for ethanol production in Brazil, several attributes need to be considered and evaluated, such as high ethanol yields, cell viability, and tolerance to the stressors produced during ethanol production. Another important attribute is growth temperature. In Brazilian ethanol production, the strains currently used ferment at temperatures between 28 and 33ºC, and to keep this temperature constant, the mills need water coolers. The use of thermotolerant yeasts could circumvent this problem, improving ethanol production. The Laboratory of Biochemistry and Applied Genetics of the Federal University of São Carlos (LBGA-UFSCar) has been isolating yeasts from ethanol production since 2009, and these have been deposited in the LBGA strain collection. We used this collection to screen for isolates that can grow at temperatures above those used in the ethanol fermentation plants in Brazil. Among the approximately 300 strains, four isolates were identified as thermotolerant, since they are able to grow and display similar fitness at both 30 and 40ºC, unlike the industrial strain CAT-1, which is unable to grow at high temperatures [10]. These yeasts were identified as LBGA-01, LBGA-69, LBGA-157, and LBGA-175, respectively (Fig. 1).
To identify these strains, the amplification of the ITS region [11] and a genotyping test were carried out. The results showed that the four strains were genetically different from one another and also from the industrial strain CAT-1. The ITS analysis indicated that only LBGA-01 and LBGA-69 presented the amplification of a 900 bp fragment, as expected for a possible Saccharomyces strain [11; 12]. LBGA-157 and LBGA-175 showed a different amplification pattern, suggesting that these yeasts are non-Saccharomyces strains, possibly wild contaminating strains (see Additional File 1: Figure S1). To confirm these results, the ITS amplicons were sequenced, and we confirmed through BLAST analyses that the LBGA-01 and LBGA-69 strains are S. cerevisiae isolates, while the LBGA-157 and LBGA-175 strains are Kluyveromyces marxianus yeasts.
The thermotolerant LBGA strains present superior fermentation performance at 40ºC in comparison to the industrial strain at 30ºC
Although only strains LBGA-01 and LBGA-69 were identified as S. cerevisiae, LBGA-157 and LBGA-175 were also included in the cell growth and fermentation tests at 30 and 40ºC, since it is already known that Kluyveromyces strains also ferment at high temperatures [13]. The cell growth results at 30ºC show that the strains LBGA-01 and LBGA-69 have the same growth profile as the industrial strain CAT-1. However, when subjected to growth at 40ºC, these strains have higher growth rates than the industrial strain. This result was expected, since it is widely reported that CAT-1 does not perform well when subjected to high temperatures [9; 10; 14]. The strains LBGA-157 and LBGA-175 showed slower growth rates at both temperatures. In fact, the strains LBGA-01 and LBGA-69 showed a similar growth profile at both temperatures, indicating a strong thermotolerant phenotype in these strains (Fig. 2a and 2b).

As mentioned above, to be applicable in industrial use, strains have to present excellent fermentation characteristics and a good conversion of sugar to ethanol, in addition to thermotolerant growth. It is known that in the ethanol production process, industrial yeasts perform this conversion in the first hours of fermentation, thus converting all available sugar to ethanol after approximately 4–6 hours of fermentation. To evaluate the fermentative potential of the isolated yeasts, we first performed a fermentation experiment using 4% glucose with all the isolates. The results showed that LBGA-01 and LBGA-69 had a pattern similar to the industrial strain CAT-1 in fermentations conducted at 30ºC, and had superior performance at 40ºC. Strains LBGA-157 and LBGA-175 presented a low performance at both temperatures (Fig. 2c and 2d). The yeasts used during ethanol fermentation are subjected to high concentrations of sugar and are therefore constantly exposed to osmotic stress. Since LBGA-01 showed good fermentation performance and presented a slight advantage over LBGA-69 in the glucose fermentation (Fig. 2c and 2d), we also conducted fermentative tests using 8% sucrose to simulate the conditions widely used in standard ethanol production in Brazil, in which sucrose is the fermentable sugar used. In comparison to the CAT-1 industrial strain, LBGA-01 showed better performance at both temperatures, with a clear superiority at 40ºC (Fig. 2e and 2f). In the fermentation assays conducted at 30ºC, both strains had a similar pattern. However, at 40ºC, the LBGA-01 strain converted 57.5% of the initial sugar while the industrial strain converted 45% (Fig. 2f). From an economic point of view, regarding the expected yields for a sugar cane plant, these results are significant, since the LBGA-01 strain converted 12.5% more sugar under stress conditions than the CAT-1 industrial strain. It is worth mentioning that the use of a thermotolerant yeast operating under stress conditions with good fermentation rates can represent a significant increase in ethanol production in the plant. Other advantages are a decrease in contamination by wild yeasts and bacteria, and in the water used to control the temperature in the fermentation tanks, consequently reducing energy costs and mitigating environmental issues related to water usage and the use of natural resources.
LBGA-01 is resistant to stressors produced during the 1G and 2G ethanol production processes.
Yeasts are subjected to constant stress conditions during the Brazilian fermentation process for first (1G)- and second (2G)-generation ethanol, which directly decreases the final yield of the process. To evaluate the tolerance of the LBGA-01 strain, we analyzed its growth and survival under different concentrations of stressors such as ethanol, sugar, lactic acid, acetic acid, HMF and furfural (the latter three being inhibitors present in the 2G ethanol production process), and compared it to the industrial strain CAT-1 and the laboratory haploid strain Sc9721. The concentration of each stressor was established according to the literature, as described in the methodology section. The results obtained through the dropout analysis showed that the LBGA-01 strain is more resistant under all tested stressors (Fig. 3).
The overall positive results exceeded the initial expectations, especially in the presence of 2% acetic acid, where we observed that the LBGA-01 strain was more resistant than both the CAT-1 strain and the laboratory strain. When subjected to 4% acetic acid, all of the strains suffered stress and had their growth inhibited. In the lactic acid test, the LBGA-01 strain was also more resistant at the two tested concentrations (2% and 4%), revealing that, under conditions of contamination by Bacillus spp. or other lactic acid-producing bacteria [15], this yeast would be resistant and would not be affected during alcoholic fermentation. However, a wider range of concentrations needs to be tested to determine up to which lactic acid concentration the LBGA-01 strain can survive. The high ethanol stress test showed that LBGA-01 has a resistance profile similar to that of the industrial strain at all the ethanol concentrations used (12, 14 and 16%), with higher resistance of LBGA-01 in comparison to the CAT-1 strain at 16% ethanol. In this trial, the control laboratory strain was drastically affected by ethanol stress. When subjected to different concentrations of sucrose, the LBGA-01 strain was also more resistant than the CAT-1 and Sc9721 strains at the three tested sugar concentrations (20, 25 and 30%), corroborating once more the results obtained during the fermentative tests. In the HMF test, the LBGA-01 strain showed slightly higher resistance at the lowest evaluated concentration (40 mM). Growth tests with furfural showed that the LBGA-01 strain is more resistant than the industrial and the laboratory strains at both tested concentrations of 0.3 mM and 0.9 mM. Furfural is a potent inhibitor of Saccharomyces during lignocellulosic fermentation [16]. For this reason, thermotolerance combined with furfural resistance, as described for LBGA-01, could be an important feature for the production and improvement of 2G ethanol. Taken together, these results demonstrate the robustness of the LBGA-01 strain in the presence of several stressors of the fermentation process, making it a potential strain for 1G ethanol production. Industrial strains have been shown to be more robust against the main lignocellulosic inhibitors produced during biomass pre-treatment processes [17]. The present results are in accordance with these findings, in which the laboratory strain (Sc9721) was more affected by the presence of HMF, furfural and acetic acid than the industrial LBGA-01 and CAT-1 strains. Regarding the stressors of 2G ethanol production, further tests need to be carried out to confirm the resistance to these stressors under process conditions. Nonetheless, our initial results point out that the LBGA-01 strain is more resistant than the CAT-1 industrial strain when submitted to the tested HMF, furfural and acetic acid concentrations, thus calling attention to its potential use in 2G ethanol production.
Transcriptional responses of LBGA-01 under high-temperature fermentation conditions
To better understand the metabolic changes in the LBGA-01 strain during high-temperature fermentation, the expression of genes involved in fermentation efficiency, membrane biosynthesis and sucrose assimilation was evaluated by qPCR and compared with that of the industrial yeast CAT-1 at both fermentation temperatures (30 ºC and 40 ºC). The mRNA levels of genes involved in the efficiency of fermentation are summarized in Fig. 4.
During the fermentation assays, increases in glycerol production rate (in glucose-limited chemostats) and glycerol titer (in fed-batch fermentation) were observed at 40 ºC when compared to the control condition (30 ºC). Since GPD1 and GPD2 encode key enzymes of glycerol synthesis, we hypothesized that their expression would increase during the fermentations. However, our results indicate that the expression of these genes did not change (Fig. 4). GPD1 and GPD2 are paralogous genes encoding isoenzymes of NAD-dependent glycerol-3-phosphate dehydrogenase, which play important roles in osmoadaptation (GPD1) and anoxic growth (GPD2) [18; 19; 20]. Mutants lacking both GPD1 and GPD2 do not produce detectable glycerol, leading to the accumulation of dihydroxyacetone phosphate (DHAP). This DHAP can be converted to methylglyoxal, a cytotoxic compound that can inhibit yeast growth [21]. In contrast, the growth of LBGA-01 was not affected at either temperature. Interestingly, our results show that under high temperature (40 ºC) the expression of SNF1 was highly increased in fermentations using the LBGA-01 strain (Fig. 4). The kinase Snf1 has been described as a repressor of GPD2, acting via phosphorylation to halt glycerol production when nutrients are limited [22]. Therefore, we hypothesize that GPD2 transcript abundance remains unchanged because its repression is exacerbated in the LBGA-01 strain by the increase in SNF1 expression (Fig. 5d).
As expected, a decrease in SUC2 (invertase) expression was observed in all strains after 4 hours of fermentation (Fig. 4), which was due to the inhibition of SUC2 expression caused by the accumulation of glucose and fructose during the first hours of fermentation as a result of invertase activity [23]. However, a different pattern was found between CAT-1 and LBGA-01 after 8 hours of fermentation. During the CAT-1 fermentation, the SUC2 gene is reactivated as the glucose concentration decreases, and the invertase resumes the metabolization of the residual sucrose. The metabolic shift caused by the glucose concentration and by the inactivation and reactivation of SUC2 is accompanied by the expression of SNF1. The activation of this kinase is glucose-dependent and directly related to the inactivation of glucose transporters and the activation of genes involved in the utilization of alternative carbon sources [23; 24; 25]. As described above, SNF1 expression in the LBGA-01 strain is maintained at high levels during the fermentation. Meanwhile, SUC2 expression decreases, as mentioned above, yet sucrose consumption remains unchanged (Fig. 2e and 2f). In fact, the ethanol production rate of this strain is higher at 40 ºC than at 30 ºC, accompanying the SNF1 expression that is increased at this temperature. We suggest that this happens because there is an increase in the internalization of sucrose by the MAL31 and AGT1 transporters, since both proteins are able to actively transport sucrose, maltose and maltotriose, although this process naturally occurs in the absence of glucose [26; 27]. Our hypothesis is also supported by the expression of the SNF1 gene, which is highly expressed in the LBGA-01 strain during the whole fermentation process, even in the presence of glucose, thus possibly activating these transporters [28; 29; 30]. Interestingly, the expression of SNF1 was higher in the middle and at the end of the fermentations conducted with LBGA-01 at both temperatures when sucrose was used as the carbon source. Therefore, we argue that LBGA-01 can be used at higher levels of this sugar, since sucrose would be inverted by SUC2 and transported by AGT1 at the same time, and later inverted by intracellular SUC2.
When the expression of genes involved in the formation of secondary fermentation products such as glycerol (GPD2), acetate (ALD6 and ALD4), and acetyl-CoA (ACS2) was evaluated, we found repression of all of these genes at 40 ºC (Fig. 5a-c). These results suggest that the alternative pathways for glucose utilization are inhibited at high temperature in the LBGA-01 strain, which preferentially channels the available carbon source into the ethanol production pathway.
Quantitative physiological parameters of LBGA-01 in anaerobic glucose-limited chemostats at high temperature
Chemostat cultivations have been broadly applied to the quantitative study of physiological parameters in S. cerevisiae. We set out to investigate the impact of high temperature (40 ºC) on the anaerobic physiology of LBGA-01 in comparison to a control temperature (30 ºC) and to the experiment conducted by Della-Bianca et al. [31], which used the industrial strain PE-2, largely employed in Brazilian ethanol production, in glucose-limited chemostat cultures (Table 1). An advantage of studying microbial cells in continuous culture instead of batch culture is that in the former the specific growth rate can be held constant under different stressful conditions [32].
Table 1 – Physiology of S. cerevisiae strains in glucose-limited anaerobic chemostat cultures.
S. cerevisiae strain: LBGA-01 (this work) | PE-2 [31]
q glucose: -5.28 ± 0.50 | -5.06 ± 0.15
q CO2: 8.51 ± 0.28
q Ethanol: 11.50 ± 1.72
q Glycerol:
q Lactate:
q Pyruvate:
q Acetate:
X (g DW L−1):
YX/S (g DW g glucose−1):
YETH/S (g ethanol g glucose−1):
YG/S (g glycerol g glucose−1):
Residual glucose (mM): 2.4 ± 0.58
C recovery (%): 101.97 ± 1.73 | 100.9 ± 0.7
The assays were conducted in anaerobic chemostats with synthetic medium at a dilution rate of 0.1 h−1. Specific rates (q) are given in mmol g−1 h−1. Data are average values from duplicate or triplicate experiments ± deviation of the mean.
In anaerobic glucose-limited chemostat cultures of the LBGA-01 strain, carbon was mainly diverted to ethanol and CO2, with minor amounts of glycerol and lactic acid produced alongside the concomitant formation of yeast biomass. When comparing the data obtained for the LBGA-01 strain cultivated at 40 ºC and at 30 ºC (control), we observed increases in glucose consumption (38%) as well as in the production rates of CO2 (51%), glycerol (54%) and ethanol (36%). In contrast, we observed a substantial decrease in biomass yield (25%), and no effect on the glycerol yield or on the maximum specific growth rate during the batch phase (Table 1). Interestingly, we did not observe any difference in ethanol yield between 40 ºC and the control condition during steady state. We did observe, however, that the residual glucose concentration was higher at 40 ºC during steady state, suggesting a possible inhibition of glucose uptake.
Stressful conditions such as high temperature can perturb the redox balance inside the cells [33]. The increase in the rate of synthesis of by-products (such as acetate and lactate), which are involved in the reoxidation of NADH, is indicative of how cells respond to this stressful condition, as is the differential expression of the ACS2 gene reported above.
A similar experimental setup was used by Della-Bianca et al. (2014) [31] with the industrial S. cerevisiae strain PE-2, known to be highly stress-tolerant [8]. The results obtained in the present study showed that LBGA-01 presents higher ethanol and glycerol production rates than S. cerevisiae PE-2 under similar conditions, i.e., at 30 ºC. These data suggest an advantage in the industrial process. Moreover, as noted by Della-Bianca et al. (2014), the absence of acetic acid in all cultivations is a remarkable phenotypic characteristic for a strain that grows in acidic environments, such as those found in the industrial ethanol process.
A previous report analyzed the thermotolerance of industrial S. cerevisiae strains isolated from Brazilian ethanol plants, such as CAT-1, PE-2, BG-1 and JP-1, in synthetic media with glucose as the sole carbon and energy source [14]. Although the authors used conditions different from those reported here during the batch phase, i.e., oxygen-limited shake-flask cultures as opposed to anaerobic bioreactors, the growth rates of some strains (JP-1 and CAT-1) were higher at 37 ºC than at 30 ºC (0.39 and 0.38 h−1, respectively). Even so, they were lower than the growth rates obtained for the LBGA-01 strain at both 30 ºC and 40 ºC. With respect to ethanol yield, JP-1 and BG-1 presented an increase in cultivations at 37 ºC when compared to 30 ºC, differently from our results, whereas PE-2 presented only a small increase in ethanol yield at 37 ºC relative to 30 ºC.
In terms of the specific ethanol production rate, our results revealed that the LBGA-01 strain has a higher rate at 40 ºC than at 30 ºC, while reaching similar ethanol yields under both conditions. Similarly, increased specific rates of glycerol production were observed under these conditions, although the glycerol yield was not affected. The diversion of carbon away from biomass formation at 40 ºC seems to be due to pyruvate and lactate production. This result can be explained by the reduced expression of genes encoding the enzymes responsible for the production of secondary products (Fig. 5).
Fermentative performance of LBGA-01 in conditions mimicking the Brazilian industrial ethanol process under high temperature
Different aspects of S. cerevisiae strains, such as specific growth rate, ethanol and glycerol yields, and cell productivity, are commonly investigated under laboratory conditions, using batch operation and synthetic defined culture media that contain all nutrients in adequate amounts to enable the maximum growth rate. In such synthetic media, the carbon source is usually the limiting nutrient [34].
Industrial conditions are not fully reproducible and vary from batch to batch, and there are insufficient data reported for conditions that reproduce the characteristics found in industrial environments. Specifically, in Brazilian 1G ethanol production, sugarcane juice and molasses are often used as lower-cost carbon sources for fermentation [35]. Their composition and quality also vary among batches and harvesting periods; synthetic laboratory media therefore reproduce these conditions poorly and may lead to misinterpreted conclusions [36]. Other stresses are also associated with industrial production, such as product toxicity, non-aseptic conditions, substrate inhibition, cell recycling, acid treatment, bacterial contamination and temperature stress [34]. Thus, high tolerance to such a great variety of stressful conditions is a desirable feature for a yeast strain in the fuel ethanol industry [34; 37; 38].
To assess the physiological aspects and performance of LBGA-01 under highly stressful conditions, we scaled down the Brazilian 1G ethanol process with sugarcane molasses as the carbon source, using the protocol described by Raghavendran et al. [34]. Thermotolerance was investigated by submitting cells to 34 °C and 40 °C, temperatures that are unusual in the laboratory but common in Brazilian sugarcane mills and inside reactors, owing to the exothermic reactions of ethanol production [36].
Fermentation capacity was monitored by plotting the CO2 produced per gram of wet biomass as a function of fermentation time. As shown in other studies, biomass weight can increase or decrease between cycles, so a plot of the total amount of CO2 against time may not represent fermentation capacity well [34]; normalization by the specific biomass is therefore necessary. The fermentation at 34 °C showed a slightly lower fermentation capacity than the fermentation at 40 °C in the first cycle (Fig. 6a). At both temperatures, the yeast started with virtually the same viability of approximately 80%. The fermentation capacity of the first cycle at 34 °C was maintained in the subsequent cycles, together with a viability of about 100% (Fig. 6b). At 40 °C, the fermentation capacity decreased from cycle to cycle, as can be seen from the decreasing slope of the experimental data, probably associated with the reduction in viability. At 40 °C, LBGA-01 viability decreased after the first cycle (to 54.9%), reaching 28.7% after the fourth cycle.
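To make the normalization explicit, the sketch below (not from the original paper; the numbers are purely hypothetical) divides the cumulative CO2 mass loss of a tube by the wet cell mass loaded at the start of the cycle, which is the quantity plotted against time:

```python
# Minimal sketch with hypothetical data: fermentation capacity as cumulative
# CO2 loss normalized by the wet biomass loaded at the start of the cycle.
hours = [0, 1, 2, 3, 4]                  # weighing times (h)
co2_loss_g = [0.0, 0.4, 1.1, 1.9, 2.6]   # cumulative tube mass loss (g CO2)
wet_biomass_g = 4.0                      # wet cell mass at cycle start (g)

specific_co2 = [m / wet_biomass_g for m in co2_loss_g]  # g CO2 per g cells
for t, y in zip(hours, specific_co2):
    print(f"{t} h: {y:.3f} g CO2 / g wet cells")
```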
Both fermentations started with similar cell masses (4 ± 0.06 g). Corroborating the constant viability, the biomass at 34 °C showed a negligible decrease at first, followed by small increases (less than 5%) in all subsequent cycles (Fig. 6c), resulting in a total weight gain of 7%. At the higher temperature, biomass decreased along with viability, with a mean decrease of 6.2% per cycle and a total reduction of 24% (Fig. 6c).
Ethanol yield and glycerol production were also assessed during the fermentations. As mentioned above, glycerol production is part of the physiological response of cells to osmotic shock, and its formation occurs in many kinds of stressful situations. As a cellular response, intracellular glycerol is thought to decrease water activity in the cytosol, leading to higher water uptake [39]. Glycerol levels increased with each cycle at both temperatures and were higher at 40 °C than at 34 °C, reflecting a protection mechanism against the high-temperature stress to which the cells were submitted (Fig. 6d).
The ethanol yield for each cycle was calculated as described elsewhere [35]. A correction factor for high cell density was applied as previously reported, considering a specific volume of 0.7 mL g−1 (wet basis) for yeast cells. Thus, the ethanol yield accounts for the ethanol in both the centrifuged wine and the pelleted yeast biomass. A mass balance for ethanol is applied as the difference between the ethanol content at the end of the cycle and that at the beginning (returned wine plus pelleted yeast biomass from the previous cycle). The ethanol yield is expressed as a percentage of the maximum theoretical ethanol that could be produced from the total sugar content (Eq. 1):
$$\text{Ethanol yield}\,(\%)=\left(\frac{10000}{51.11\cdot V_{s}\cdot TRS}\right)\left[\left(V_{w}+0.7\,P\right)ET-\left(V_{v}+0.7\,P_{p}\right)ET_{p}\right]$$
where Vs is the total volume of substrate (mL) of concentration TRS (g 100 mL−1), and 51.11 g ethanol (100 g TRS)−1 is the maximum theoretical yield. Vw is the volume of centrifuged wine (mL), P is the pelleted yeast biomass (g), and ET is the ethanol concentration in the centrifuged wine (% w v−1). Pp is the pelleted yeast biomass from the previous cycle, ETp is the ethanol concentration in the centrifuged wine from the previous cycle, and Vv is the volume of wine from the previous cycle.
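As a worked illustration of Eq. 1 (a sketch in Python; the function name and all input values below are ours and purely hypothetical, not measurements from this study):

```python
# Hypothetical sketch of Eq. 1: ethanol yield as % of the theoretical maximum
# (51.11 g ethanol per 100 g TRS), with wine volumes corrected by the yeast
# pellet's specific volume of 0.7 mL per g (wet basis).

def ethanol_yield_percent(Vs, TRS, Vw, P, ET, Vv, Pp, ETp):
    # Vs: total substrate fed in the cycle (mL); TRS: sugar content (g/100 mL)
    # Vw, P, ET: wine volume (mL), pellet mass (g), ethanol (% w/v) at cycle end
    # Vv, Pp, ETp: the same quantities carried over from the previous cycle
    ethanol_end = (Vw + 0.7 * P) * ET      # ethanol at the end of the cycle
    ethanol_start = (Vv + 0.7 * Pp) * ETp  # ethanol returned from previous cycle
    return 10000.0 / (51.11 * Vs * TRS) * (ethanol_end - ethanol_start)

# Illustrative call with made-up numbers:
print(round(ethanol_yield_percent(Vs=27.75, TRS=19.0, Vw=25.0, P=4.2, ET=8.5,
                                  Vv=2.0, Pp=4.0, ETp=8.0), 1))
```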
The ethanol yields of the four cycles at 34 °C were similar, with a median value of 86.03 ± 1.56%. It is noteworthy that the results obtained for LBGA-01 are similar to those obtained for CEN.PK 113-7D, baker's yeast and S288c, which ranged from 86 to 92% [34]. Moreover, the results are also comparable to those obtained for the two main industrial strains PE-2 (87.2 ± 3.9%) and Ethanol Red™ (87.6 ± 5.1%), both employed in Brazilian 1G production [14; 35]. As previously discussed, fermentation capacity was reduced over the cycles at 40 °C, and the per-cycle ethanol yields were slightly lower than those obtained at 34 °C, with a mean value of 76.9 ± 2.72% (Fig. 6e). This reduction could be explained by the diversion of carbon to other fermentation products, such as glycerol, and also by a greater evaporative loss of ethanol at the higher temperature.
Collectively, these results show that LBGA-01 has a good fermentative performance. However, the viability of this yeast across cell-recycle cycles needs to be improved before it can be used in Brazilian ethanol production. Efforts using adaptive evolution will be made to fix the desired genetic characteristics and increase the viability of this yeast during high-temperature fermentation with cell recycling.
We have reported the characterization of a new resident S. cerevisiae strain with important fermentative characteristics. This isolate has biotechnologically relevant traits, such as the ability to grow under stress conditions including high concentrations of ethanol and sugar as well as high temperature. Our results showed that, although viability decreased throughout the yeast recycles, the LBGA-01 strain is a potential thermotolerant strain, producing a high ethanol yield at 40 ºC. Furthermore, this strain adjusts its metabolic pathways to resist several stressors encountered in 1G and 2G ethanol production, including high ethanol and sugar concentrations, and performed better in the presence of acetic acid, lactic acid, furfural and higher HMF concentrations. These results contribute to the development of production processes run at higher temperatures, reducing the use of cooling water during the process and favoring the persistence of this strain throughout fermentation, since few wild yeast strains can grow under these conditions. With this manuscript we also hope to encourage new discoveries for the application of LBGA-01 in sugarcane mills.
Yeast Isolation And Identification Of Thermotolerant Strains
The yeast strains used in this study were obtained from fermentation tanks after acid treatment at the São Luiz sugarcane plant located in the city of Ourinhos, SP, Brazil. Samples were collected and stored in sterile conical flasks, taken to the laboratory and centrifuged at 3,000 rpm for 5 minutes. The precipitate was washed three times with sterile water and, after the last wash, five grams of the precipitate were suspended in a final volume of 50 mL. Serial dilutions (1:50; 1:2,500; 1:12,500) were performed, and yeasts were isolated on solid YPD (1% yeast extract, 2% peptone, 2% glucose and 2% agar) incubated for approximately 48 hours at 30 °C. After this period, 10 colonies were randomly selected for analysis.
For cultivation and storage of the strains, a pre-inoculum in 2% YPD liquid medium (1% yeast extract, 2% peptone and 2% dextrose) was grown for 16 hours in a shaker at 30 ºC and 180 rpm. After growth, 500 µL of culture medium containing S. cerevisiae was transferred to flasks containing 500 µL of 30% (v/v) glycerol and kept in a freezer at -80 °C. All strains were genotyped as described below and evaluated for growth at high temperatures using the dropout analysis with serial dilutions from 10^6 to 10^3 cells mL−1.
Yeast DNA of all thermotolerant strains found in this study was extracted using a phenol-chloroform protocol. In summary, cells were grown overnight in 2% liquid YPD. After this period, cells were centrifuged and lysed with glass beads in 500 µL of extraction buffer (200 mM Tris-HCl, 25 mM EDTA and 0.5% SDS), following Malavazi and Goldman [40]. Subsequently, 400 µL of phenol-chloroform (50:50) was added, and the sample was centrifuged for 10 minutes at 13,000 rpm. DNA was precipitated with 600 µL of isopropanol and washed with 300 µL of 70% ethanol. DNA was eluted in 80 µL of water and stored at -20 °C for subsequent analyses.
Molecular identification for individual characterization of isolated strains and classification by species
In order to uniquely identify the thermotolerant strains, genotyping analyses were performed by PCR using specific primers developed for the amplification of polymorphic regions [41]. To identify the yeast species, PCR experiments were performed to amplify the DNA fragment between the ITS1 and ITS4 regions, using specific primers as described by Šuranská et al. and other authors [11; 42]. The amplification product was sent out for sequencing, and the results were compared using the BLASTN tool (http://blast.ncbi.nlm.nih.gov/Blast.cgi) as well as by matching the sizes of the amplified fragments.
Fermentation Assays Using Thermotolerant Strains
To investigate the fermentative ability of thermotolerant cells, isolated yeasts were submitted to fermentation tests using 4% or 8% glucose as the fermentative substrate at two distinct temperatures, 30 and 40 °C. Yeasts were inoculated into fermentative medium (50 mL) in 250 mL conical flasks at a final optical density at 600 nm (OD600nm) of 0.1 to ensure standardized inoculations, at both temperatures without agitation. Samples were collected at 2-hour intervals for glucose consumption analyses. Sugar was measured by the glucose oxidase colorimetric enzymatic method (Glucose GOD-PAP), following the manufacturer's recommendations.
Characterization of yeast cell stress resistance (ethanol, sugar, acetic acid, lactic acid, furfural, HMF)
For each stressor, two or three concentrations were established according to the literature. Concentrations higher than those described were used to evaluate the resistance of the thermotolerant yeast LBGA-01 in comparison to the industrial yeast CAT-1 and the laboratory haploid yeast Sc9721 (Table 2).
Table 2 – Stressors and concentrations used in this study.
Ethanol: 12, 14 and 16%
Sugar: 20, 25 and 30%*
Acetic acid: 2 and 4%
Lactic acid: 2 and 4%
HMF: 40 and 80 mM
Furfural: 0.1 and 0.9 mM
* In this case, the medium used was composed only of 2× YP (2% yeast extract and 4% peptone) plus the desired sugar concentration.
Strains were kept in logarithmic proliferation phase overnight at 30 ºC with agitation at 180 rpm, then diluted to an OD600 of 0.1 in 200 µL of YPD medium (1% yeast extract, 2% peptone and 2% dextrose) containing the evaluated stressor at the desired concentration. The assay was performed in triplicate in 96-well plates incubated at 30 ºC for 10 hours. Then, each strain was ten-fold serially diluted (10−1, 10−2 and 10−3) and spotted onto solid YPD medium (1% yeast extract, 2% peptone, 2% dextrose and 2% agar) containing the evaluated stressor. Figures were generated from plates photographed at the incubation time that best demonstrated the differences between control and experimental samples.
qPCR Analysis
Total RNA was extracted using Trizol reagent (Invitrogen, Rockville, MS, USA) according to the manufacturer's protocol. Samples were quantified using a Nano Vue ND-1000 spectrophotometer (GE Healthcare, Chicago, Illinois, USA).
RNA samples (1 µg) were subjected to DNase I treatment (Invitrogen, Rockville, MS, USA) and reverse transcribed with the High Capacity cDNA Reverse Transcription kit using an oligo dT(V) and random primers blend (Thermo Scientific, Waltham, Massachusetts, USA). Primers were designed using the PrimerExpress™ program (Applied Biosystems, Foster City, CA, USA) (see Additional file 2: Table S1). The optimal concentration of each primer was determined (150 nM for all primers used in this study), and the amplification efficiency was calculated according to the equation E = 10^(−1/slope) to confirm the accuracy and reproducibility of the reactions. Amplification specificity was verified by running a dissociation protocol. qPCRs were performed in a StepOne Plus Real-Time PCR System (Thermo Scientific, Waltham, Massachusetts, USA). The fold change in mRNA abundance was calculated using the 2^−ΔΔCt method [28], and all values were normalized to the expression of the beta-actin (ACT1) gene.
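For reference, a minimal sketch of the two calculations mentioned above (our own illustration; the Ct values and slope are hypothetical, not data from this study):

```python
# Hypothetical sketch: primer efficiency from a standard-curve slope and
# relative expression by the 2^-ddCt method, normalized to ACT1 and to the
# control sample (e.g., the 30 C condition).

def primer_efficiency(slope):
    # E = 10^(-1/slope); E = 2 (100% efficiency) for a slope near -3.32
    return 10.0 ** (-1.0 / slope)

def fold_change(ct_target, ct_act1, ct_target_ctrl, ct_act1_ctrl):
    d_ct_sample = ct_target - ct_act1             # dCt of the treated sample
    d_ct_control = ct_target_ctrl - ct_act1_ctrl  # dCt of the control sample
    return 2.0 ** (-(d_ct_sample - d_ct_control))

print(primer_efficiency(-3.32))             # ~2.0
print(fold_change(22.1, 18.0, 24.6, 18.2))  # ~4.9-fold up-regulation
```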
Chemostat Cultivations
Chemostat cultivations with the S. cerevisiae LBGA-01 strain were carried out in a 2.0 L water-jacketed Labfors 5 bioreactor (Infors AG, Switzerland) with a 1.0 L working volume kept constant by a mechanical drain controlled by a peristaltic pump. The culture medium for all cultivations was that described by Verduyn et al. [48], containing glucose as the carbon source and ammonium sulphate as the nitrogen source, supplemented with ergosterol and unsaturated fatty acids in the form of Tween 80, which were dissolved in boiling 96% (v/v) ethanol to final concentrations of 0.01 and 0.42 g L−1, respectively [49; 50]. All chemostat cultivations were carried out under anaerobic conditions, maintained by a constant flush of industrial nitrogen gas.
The agitation frequency was set to 800 rpm, temperature was controlled at 30 °C, and pH was controlled at 5.0 via the controlled addition of 2 M KOH solution. Precultures for batch bioreactor cultivations were grown overnight in an orbital shaker at 30 °C and 200 rpm in 2500 mL shake flasks containing 30 mL of the defined medium with 20 g L−1 initial glucose. After carbon source exhaustion (monitored by a sharp drop in the CO2 concentration in the off-gas), batch cultivation was switched to continuous mode with a fresh medium containing inhibitor compounds, either isolated or combined, fed at 100 mL h−1, which corresponded to a dilution rate of 0.10 h−1 assuming a working volume of 1.0 L. Chemostat cultures were run for at least five residence times prior to sampling.
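For clarity, the dilution rate follows directly from the feed rate and the working volume; this is a worked restatement of the numbers above, not an additional measurement:

$$D=\frac{F}{V}=\frac{0.1\ \mathrm{L\,h^{-1}}}{1.0\ \mathrm{L}}=0.10\ \mathrm{h^{-1}},\qquad \tau=\frac{1}{D}=10\ \mathrm{h},$$

so the five residence times required before sampling correspond to at least 50 h of continuous operation.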
Fermentation Assays Mimicking The Brazilian Industrial Ethanol Process
The bench-scale assays of the industrial fermentation were carried out in triplicate using the protocol described to scale down the Brazilian 1G ethanol production [34]. A pre-culture was prepared from one colony of LBGA-01 inoculated overnight in YPD medium (4% glucose, 1% yeast extract and 2% peptone) under sterile conditions at 30 °C and 200 rpm. Propagation medium (1 L, 10 °Brix sugarcane molasses from the São Luiz sugarcane mill) was inoculated with 100 mL of the YPD pre-culture and kept under static conditions for approximately 36 h at ambient temperature; the flask was carefully agitated from time to time to release trapped CO2. This step was performed under non-sterile conditions. After 36 hours, the cells had settled and the content of the flask was carefully transferred to a vessel so that the cells were not re-suspended. In the end, 150 mL of cell-concentrated propagation medium was retained and used in the fermentation step. The volume needed to obtain approximately 4 g of cells was centrifuged in 50-mL centrifuge tubes (2,000 g, 4 °C, 15 min), and the cell-free supernatant ("vinho", i.e., wine) was stored. Fermentation started with the addition of 6 mL of water and 2 mL of the "vinho" to the 50-mL centrifuge tubes containing 4 g of cells; the cells corresponded to 10% w v−1, and the returned "vinho" from the previous step simulated the efficiency of the industrial centrifuge [35]. Each fermentation cycle started with the addition of 9.25 mL of fermentation medium (19% TRS sugarcane molasses from usina SJ) to the tubes. The initial mass of each tube was recorded to monitor CO2 loss. Tubes were kept in an incubator at 34 °C under static conditions and were weighed hourly for 10 hours. After two and four hours, further 9.25-mL portions of fermentation medium were added, and the tubes were weighed again. On the following morning, the final weights were measured. The tubes were then centrifuged (200 g, 4 °C, 15 min), and the supernatant was transferred to a different vessel and stored. The 50-mL centrifuge tubes were weighed to account for the biomass change from cycle to cycle. Acid treatment was carried out after the tubes had been weighed and after the addition of 2 mL of "vinho" and 6 mL of water to the wet cells; the final pH was adjusted to 2-2.5 with 1 N H2SO4, and the tubes were left for one hour before the first addition of fermentation medium to start a new cycle. The simulation comprised four cycles, representing four days. The same procedure was performed at 40 °C in the static incubator.
LBGA: Laboratory of Biochemistry and Applied Genetics; UFSCar: Federal University of São Carlos; USP: University of São Paulo; SSF: Simultaneous Saccharification and Fermentation; SUC2: invertase; AGT1: alpha-glucoside permease; CAT-1: industrial yeast strain; GPD2: glycerol-3-phosphate dehydrogenase 2; ALD6: aldehyde dehydrogenase 6; ALD4: aldehyde dehydrogenase 4; ACS2: acetyl-CoA synthetase; HMF: hydroxymethylfurfural; ITS: Internal Transcribed Spacer; 1G: first generation; 2G: second generation; SNF1: non-specific serine/threonine protein kinase; MAL31: maltose permease; ACTB: beta actin; Eq: equation; YPD: yeast extract, peptone and dextrose; DNA: deoxyribonucleic acid; EDTA: ethylenediaminetetraacetic acid; SDS: sodium lauryl sulfate; PCR: polymerase chain reaction; OD: optical density; GOD-PAP: glucose oxidase-phenol aminophenazone; RNA: ribonucleic acid; cDNA: complementary DNA; qPCR: real-time quantitative polymerase chain reaction; KOH: potassium hydroxide; CO2: carbon dioxide; TRS: total reducing sugars; H2SO4: sulfuric acid.
This work was supported by FAPESP – Fundação de Amparo à Pesquisa do Estado de São Paulo (grants 2016/10130-7, 2017/19694-3, 2018/20697-0, 2018/17172-2, 2019/08393-8) and by the Brazilian National Council for Scientific and Technological Development (CNPq), process nº 140046/2019-4.
CDP and AFC participated in the design of the study. CDP and FBS performed the experimental work relating to yeast strain growth, fermentation and stressor analyses, and CDP drafted the manuscript. GLM, JPS, HRN and MHRA performed the genomic DNA extraction and viability analyses of the yeast strain library. GLM and JPMOS performed the qPCR analysis. KPE, GCGC, RG, DPP and TOB performed the chemostat cultivation and bioreactor fermentation assays and drafted the corresponding results. AFC and IM supported and supervised the study and helped discuss and analyze the results. All authors read and approved the final version of the manuscript.
The authors are grateful to São Luiz Sugarcane Mill (Ourinhos, SP, Brazil) for providing the samples for yeasts isolation.
Amorim HV, Lopes ML, de Castro Oliveira JV, Buckeridge MS, Goldman GH. Scientific challenges of bioethanol production in Brazil. Appl Microbiol Biotechnol. 2011;91(5):1267–75. https://doi.org/10.1007/s00253-011-3437-6.
Basso LC, de Amorim HV, de Oliveira AJ, Lopes ML. Yeast selection for fuel ethanol production in Brazil. FEMS Yeast Res. 2008;8(7):1155–63. https://doi.org/10.1111/j.1567-1364.2008.00428.x.
Lopes ML, Paulillo SC, Godoy A, Cherubin RA, Lorenzi MS, Giometti FH, ... Amorim HV. Ethanol production in Brazil: a bridge between science and industry. Braz J Microbiol. 2016;47(Suppl 1):64–76. https://doi.org/10.1016/j.bjm.2016.10.003.
Wheals AE, Basso LC, Alves DM, Amorim HV. Fuel ethanol after 25 years. Trends Biotechnol. 1999;17(12):482–7.
Paulino de Souza J, Dias do Prado C, Eleutherio ECA, Bonatto D, Malavazi I, Ferreira da Cunha A. Improvement of Brazilian bioethanol production - Challenges and perspectives on the identification and genetic modification of new strains of Saccharomyces cerevisiae yeasts isolated during ethanol process. Fungal Biol. 2018;122(6):583–91. https://doi.org/10.1016/j.funbio.2017.12.006.
Basso LC, Basso TO, Rocha SN. (2011). Ethanol production in Brazil: the industrial process and its impact on yeast fermentation. In: Bernardes MADS, editor. Biofuel Production - Recent Developments and Prospects. InTech.
Eliodório KP, Cunha GCG, Müller C, Lucaroni AC, Giudici R, Walker GM, ... Basso TO. (2019). Chapter Three - Advances in yeast alcoholic fermentations for the production of bioethanol, beer and wine. In: Gadd GM, Sariaslani S, editors. Advances in Applied Microbiology (pp. 61–119). Academic Press.
Argueso JL, Carazzolle MF, Mieczkowski PA, Duarte FM, Netto OV, Missawa SK, ... Pereira GA. Genome structure of a Saccharomyces cerevisiae strain widely used in bioethanol production. Genome Res. 2009;19(12):2258–70. https://doi.org/10.1101/gr.091777.109.
Babrzadeh F, Jalili R, Wang C, Shokralla S, Pierce S, Robinson-Mosher A, ... Stambuk BU. Whole-genome sequencing of the efficient industrial fuel-ethanol fermentative Saccharomyces cerevisiae strain CAT-1. Mol Genet Genomics. 2012;287(6):485–94. https://doi.org/10.1007/s00438-012-0695-7.
Costa DA, de Souza CJ, Costa PS, Rodrigues MQ, dos Santos AF, Lopes MR, ... Fietto LG. Physiological characterization of thermotolerant yeast for cellulosic ethanol production. Appl Microbiol Biotechnol. 2014;98(8):3829–40. https://doi.org/10.1007/s00253-014-5580-3.
Šuranská H, Vránová D, Omelková J. Isolation, identification and characterization of regional indigenous Saccharomyces cerevisiae strains. Brazilian Journal of Microbiology. 2016;47:181–90. http://www.scielo.br/scielo.php?script=sci_arttext&pid=S1517-83822016000100181&nrm=iso.
Guimarães TM, Moriel DG, Machado IP, Picheth CMTF, Bonfim TMB. Isolation and characterization of Saccharomyces cerevisiae strains of winery interest. Revista Brasileira de Ciências Farmacêuticas. 2006;42:119–26. http://www.scielo.br/scielo.php?script=sci_arttext&pid=S1516-93322006000100013&nrm=iso.
Fu X, Li P, Zhang L, Li S. Understanding the stress responses of Kluyveromyces marxianus after an arrest during high-temperature ethanol fermentation based on integration of RNA-Seq and metabolite data. Appl Microbiol Biotechnol. 2019;103(6):2715–29. https://doi.org/10.1007/s00253-019-09637-x.
Della-Bianca BE, Gombert AK. Stress tolerance and growth physiology of yeast strains from the Brazilian fuel ethanol industry. Antonie Van Leeuwenhoek. 2013;104(6):1083–95. https://doi.org/10.1007/s10482-013-0030-2.
Costa MAS, Cerri BC, Ceccato-Antonini SR. Ethanol addition enhances acid treatment to eliminate Lactobacillus fermentum from the fermentation process for fuel ethanol production. Lett Appl Microbiol. 2018;66(1):77–85. https://doi.org/10.1111/lam.12819.
Field SJ, Ryden P, Wilson D, James SA, Roberts IN, Richardson DJ, ... Clarke TA. Identification of furfural resistant strains of Saccharomyces cerevisiae and Saccharomyces paradoxus from a collection of environmental and industrial isolates. Biotechnology for Biofuels. 2015;8:33. https://doi.org/10.1186/s13068-015-0217-z.
Cola P, Procópio DP, Alves ATC, Carnevalli LR, Sampaio IV, da Costa BLV, Basso TO. Differential effects of major inhibitory compounds from sugarcane-based lignocellulosic hydrolysates on the physiology of yeast strains and lactic acid bacteria. Biotechnol Lett. 2020;42(4):571–82. https://doi.org/10.1007/s10529-020-02803-6.
Hubmann G, Guillouet S, Nevoigt E. Gpd1 and Gpd2 fine-tuning for sustainable reduction of glycerol formation in Saccharomyces cerevisiae. Appl Environ Microbiol. 2011;77(17):5857–67. https://doi.org/10.1128/aem.05338-11.
Nevoigt E, Pilger R, Mast-Gerlach E, Schmidt U, Freihammer S, Eschenbrenner M,.. . Stahl U. Genetic engineering of brewing yeast to reduce the content of ethanol in beer. FEMS Yeast Res. 2002;2(2):225–32. https://doi.org/10.1111/j.1567-1364.2002.tb00087.x.
André L, Hemming A, Adler L. Osmoregulation in Saccharomyces cerevisiae: studies on the osmotic induction of glycerol production and glycerol 3-phosphate dehydrogenase (NAD+). FEBS Lett. 1991;286(1–2):13–7. https://doi.org/10.1016/0014-5793(91)80930-2.
Overkamp KM, Bakker BM, Kötter P, Luttik MAH, Van Dijken JP, Pronk JT. Metabolic engineering of glycerol production in Saccharomyces cerevisiae. Appl Environ Microbiol. 2002;68(6):2814–21. https://doi.org/10.1128/aem.68.6.2814-2821.2002.
Nicastro R, Tripodi F, Guzzi C, Reghellin V, Khoomrung S, Capusoni C, ... Coccetti P. Enhanced amino acid utilization sustains growth of cells lacking Snf1/AMPK. Biochimica et Biophysica Acta (BBA) - Molecular Cell Research. 2015;1853(7):1615–25. https://doi.org/10.1016/j.bbamcr.2015.03.014.
Kayikci Ö, Nielsen J. (2015). Glucose repression in Saccharomyces cerevisiae. FEMS Yeast Res, 15(6). https://doi.org/10.1093/femsyr/fov068.
Lesage P, Yang X, Carlson M. Yeast SNF1 protein kinase interacts with SIP4, a C6 zinc cluster transcriptional activator: a new role for SNF1 in the glucose response. Mol Cell Biol. 1996;16(5):1921–8. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC231179/.
Tomas-Cobos L, Sanz P. Active Snf1 protein kinase inhibits expression of the Saccharomyces cerevisiae HXT1 glucose transporter gene. Biochem J. 2002;368(Pt 2):657–63. https://doi.org/10.1042/bj20020984.
Stambuk BU, de Araujo PS. Kinetics of active alpha-glucoside transport in Saccharomyces cerevisiae. FEMS Yeast Res. 2001;1(1):73–8.
Stambuk BU, Alves SL Jr, Hollatz C, Zastrow CR. Improvement of maltotriose fermentation by Saccharomyces cerevisiae. Lett Appl Microbiol. 2006;43(4):370–6. https://doi.org/10.1111/j.1472-765X.2006.01982.x.
Bisson LF, Fan Q, Walker GA. Sugar and Glycerol Transport in Saccharomyces cerevisiae. Adv Exp Med Biol. 2016;892:125–68. https://doi.org/10.1007/978-3-319-25304-6_6.
Hedbacker K, Carlson M. SNF1/AMPK pathways in yeast. Frontiers in Bioscience: A Journal and Virtual Library. 2008;13:2408–20. https://doi.org/10.2741/2854.
Young ET, Dombek KM, Tachibana C, Ideker T. Multiple pathways are co-regulated by the protein kinase Snf1 and the transcription factors Adr1 and Cat8. J Biol Chem. 2003;278(28):26146–58. https://doi.org/10.1074/jbc.m301981200.
Della-Bianca BE, de Hulster E, Pronk JT, van Maris AJ, Gombert AK. Physiology of the fuel ethanol strain Saccharomyces cerevisiae PE-2 at low pH indicates a context-dependent performance relevant for industrial applications. FEMS Yeast Res. 2014;14(8):1196–205. https://doi.org/10.1111/1567-1364.12217.
Regenberg B, Grotkjaer T, Winther O, Fausbøll A, Akesson M, Bro C, ... Nielsen J. Growth-rate regulated genes have profound impact on interpretation of transcriptome profiling in Saccharomyces cerevisiae. Genome Biology. 2006;7(11):R107. https://doi.org/10.1186/gb-2006-7-11-r107.
Ask M, Bettiga M, Mapelli V, Olsson L. The influence of HMF and furfural on redox-balance and energy-state of xylose-utilizing Saccharomyces cerevisiae. Biotechnology for biofuels. 2013;6(1):22. https://doi.org/10.1186/1754-6834-6-22.
Raghavendran V, Basso TP, da Silva JB, Basso LC, Gombert AK. A simple scaled down system to mimic the industrial production of first generation fuel ethanol in Brazil. Antonie Van Leeuwenhoek. 2017;110(7):971–83. https://doi.org/10.1007/s10482-017-0868-9.
Lino FSO, Basso TO, Sommer MOA. A synthetic medium to simulate sugarcane molasses. Biotechnology for Biofuels. 2018;11(1):221. https://doi.org/10.1186/s13068-018-1221-x.
Della-Bianca BE, Basso TO, Stambuk BU, Basso LC, Gombert AK. What do we know about the yeast strains from the Brazilian fuel ethanol industry? Appl Microbiol Biotechnol. 2013;97(3):979–91. https://doi.org/10.1007/s00253-012-4631-x.
Madeira-Jr JV, Gombert AK. Towards high-temperature fuel ethanol production using Kluyveromyces marxianus: on the search for plug-in strains for the Brazilian sugarcane-based biorefinery. Biomass Bioenerg. 2018;119:217–28. https://doi.org/10.1016/j.biombioe.2018.09.010.
Saini P, Beniwal A, Kokkiligadda A, Vij S. Response and tolerance of yeast to changing environmental stress during ethanol fermentation. Process Biochem. 2018;72:1–12. https://doi.org/10.1016/j.procbio.2018.07.001.
Tamas MJ, Rep M, Thevelein JM, Hohmann S. Stimulation of the yeast high osmolarity glycerol (HOG) pathway: evidence for a signal generated by a change in turgor rather than by water stress. FEBS Lett. 2000;472(1):159–65. https://doi.org/10.1016/s0014-5793(00)01445-9.
Malavazi I, Goldman GH. Gene disruption in Aspergillus fumigatus using a PCR-based strategy and in vivo recombination in yeast. Methods Mol Biol. 2012;845:99–118. https://doi.org/10.1007/978-1-61779-539-8_7.
Carvalho-Netto OV, Carazzolle MF, Rodrigues A, Braganca WO, Costa GG, Argueso JL, Pereira GA. A simple and effective set of PCR-based molecular markers for the monitoring of the Saccharomyces cerevisiae cell population during bioethanol fermentation. J Biotechnol. 2013;168(4):701–9. https://doi.org/10.1016/j.jbiotec.2013.08.025.
Josepa S, Guillamon J, Cano J. PCR differentiation of Saccharomyces cerevisiae from Saccharomyces bayanus/Saccharomyces pastorianus using specific primers. FEMS Microbiol Lett. 2000;193:255–9. https://doi.org/10.1111/j.1574-6968.2000.tb09433.x.
Pais TM, Foulquié-Moreno MR, Hubmann G, Duitama J, Swinnen S, Goovaerts A, ... Thevelein JM. Comparative polygenic analysis of maximal ethanol accumulation capacity and tolerance to high ethanol levels of cell proliferation in yeast. PLoS Genet. 2013;9(6):e1003548. https://doi.org/10.1371/journal.pgen.1003548.
Mukherjee V, Radecka D, Aerts G, Verstrepen KJ, Lievens B, Thevelein JM. Phenotypic landscape of non-conventional yeast species for different stress tolerance traits desirable in bioethanol fermentation. Biotechnology for biofuels. 2017;10:216. https://doi.org/10.1186/s13068-017-0899-5.
Giannattasio S, Guaragnella N, Corte-Real M, Passarella S, Marra E. Acid stress adaptation protects Saccharomyces cerevisiae from acetic acid-induced programmed cell death. Gene. 2005;354:93–8. https://doi.org/10.1016/j.gene.2005.03.030.
Dorta C, Oliva-Neto P, de Abreu-Neto MS, Nicolau-Junior N, Nagashima AI. Synergism among lactic acid, sulfite, pH and ethanol in alcoholic fermentation of Saccharomyces cerevisiae (PE-2 and M-26). World Journal of Microbiology & Biotechnology. 2005;22(2):177. https://doi.org/10.1007/s11274-005-9016-1.
de Mello FdSB, Coradini ALV, Tizei PAG, Carazzolle MF, Pereira GAG, Teixeira GS. Static microplate fermentation and automated growth analysis approaches identified a highly-aldehyde resistant Saccharomyces cerevisiae strain. Biomass Bioenerg. 2019;120:49–58. https://doi.org/10.1016/j.biombioe.2018.10.019.
Verduyn C, Postma E, Scheffers WA, Van Dijken JP. Effect of benzoic acid on metabolic fluxes in yeasts: a continuous-culture study on the regulation of respiration and alcoholic fermentation. Yeast. 1992;8(7):501–17. https://doi.org/10.1002/yea.320080703.
Andreasen AA, Stier TJ. Anaerobic nutrition of Saccharomyces cerevisiae. I. Ergosterol requirement for growth in a defined medium. J Cell Comp Physiol. 1953;41(1):23–36. https://doi.org/10.1002/jcp.1030410103.
Andreasen AA, Stier TJ. Anaerobic nutrition of Saccharomyces cerevisiae. II. Unsaturated fatty acid requirement for growth in a defined medium. J Cell Comp Physiol. 1954;43(3):271–81. https://doi.org/10.1002/jcp.1030430303.
Additional file 2.pdf
How did Lorentz transformations get their modern definition?
Historically, Special Relativity was motivated by apparent inconsistencies between Maxwell's Electrodynamics and Newtonian Mechanics. In Einstein's well known paper "On the electrodynamics of moving bodies" he explains quite well his motivations.
Central objects of the theory are the Lorentz transformations. If one forgets motivations, history and intuition, the Lorentz transformations are formally defined as the linear transformations $\Lambda:\mathbb{R}^4\to \mathbb{R}^4$ such that
$$\eta(\Lambda v,\Lambda w)=\eta(v,w),$$
where $\eta = \operatorname{diag}(-1,1,1,1)$. Furthermore, it seems that before this definition they were defined as the transformations which keep the speed of light the same in all frames.
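In matrix notation this condition is equivalent to requiring $\Lambda^{T}\eta\Lambda=\eta$. As a concrete example (my notation, in units with $c=1$), the standard boost along the $x$-axis with velocity $v$ and $\gamma=(1-v^{2})^{-1/2}$,

$$\Lambda=\begin{pmatrix}\gamma&-\gamma v&0&0\\-\gamma v&\gamma&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix},$$

satisfies it: for instance, the $(0,0)$ entry of $\Lambda^{T}\eta\Lambda$ is $-\gamma^{2}+\gamma^{2}v^{2}=-1$, matching $\eta_{00}$.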
My question is: how did Lorentz transformations get this modern definition?
How were they first defined, how did they relate to Einstein's paper, and how did they get the modern definition as "transformations which preserve the spacetime inner product"? Specifically, I'm interested in how physicists went from the motivations for relativity to the definition of Lorentz transformations as the transformations $\Lambda$ such that $\eta(\Lambda x,\Lambda y) = \eta(x,y)$.
physics theoretical-physics relativity-theory electromagnetism
$\begingroup$ This question doesn't really make much sense. You discuss two mathematically equivalent definitions, and ask when one gave way to the other. Since they're mathematically equivalent, there is no reason that one has to give way to the other. This is just a matter of a particular author's preferences regarding how to present the subject. $\endgroup$ – Ben Crowell May 3 '16 at 2:19
$\begingroup$ I believe the wording came out in a confusing manner. I'm not asking why one would pick the latter instead of the former; I agree that it is a matter of preference. But as far as I know, the first definition used was the one based on Einstein's postulates, which appear in his paper. The other definition, equivalent to the first, I believe appeared later. What I'm asking here is how physicists got to the second definition: how, from the first approach, which is what Einstein presented, was it discovered that this other definition could do the same? It is not a question regarding which to pick. $\endgroup$ – user1620696 May 3 '16 at 2:24
Wikipedia has a very adequate and well-sourced article on the History of Lorentz transformations. Voigt formulated not-quite-the-modern ones back in 1887, of which Lorentz was unaware, so he had to work out his own version independently. This might have been just as well, since he later said he would have used Voigt's had he known about them. He presented them partially (without the time dilation) in 1895; the first complete version is due to Larmor (1897). Lorentz was apparently unaware of that either, and supplied his own full version in 1899, see What made Einstein believe (or know) that time was affected by speed and gravity?. None of them viewed the transformations algebraically or kinematically; they were seen as describing dynamic effects on bodies moving at high speeds. Larmor even supplemented a hypothesis that molecular forces are of electromagnetic nature, which would explain the effects. But as Poincaré showed in 1905, purely electromagnetic forces could not account for the stability of the electron. He had to conjecture an additional stabilizing non-electromagnetic force that nonetheless obeyed the same transformation laws, which made it ad hoc.
The first algebraic observation, that the transformations form a group, was made by Poincaré in his 1904-1906 papers on the dynamics of the electron, but according to Weinstein, "Poincaré did not associate this quadratic form with propagation of light in order to define a null interval like Einstein or a metric like Minkowski". This is particularly surprising because the groups involved in Kleinian geometries are usually obtained by considering all transformations that preserve a quadratic form, as Poincaré well knew. As alluded to by Weinstein, it was Einstein in 1905 who first characterized them kinematically, as the transformations that preserve the speed of light in all frames (i.e., preserve the null interval), and only Minkowski, inspired by Einstein's paper, gave the modern geometric formulation of them as the transformations that preserve a (pseudo-)metric, in 1907-1909; see What was the motivation for Minkowski spacetime before special relativity?
Conifold
Transformations must preserve the structure of Maxwell's equations, which then automatically preserves the speed of light. Voigt, in 1887, was the first, but there were several independent derivations. Note that Voigt's transformations are not quite the same as those of Lorentz; the latter are consistent with the Principle of Relativity.
Peter Diehr
Search Results: 1 - 10 of 191,388 matches for "D. Loomba"
Psoas Hematoma Following Lumbar Sympathetic Block in a Patient with Renal and Liver Diseases and Recent Use of Aggrenox [PDF]
Nashaat Rizk, Zirong Zhao, Munish Loomba
Open Journal of Anesthesiology (OJAnes), 2014, DOI: 10.4236/ojanes.2014.44015
Lumbar sympathetic block is an analgesic procedure frequently performed in chronic pain clinics for ischemic lower limb pain from peripheral arterial disease. Although the lumbar sympathetic ganglia lie anatomically near major vascular and neural structures, complications such as severe hemorrhage are rarely reported. Aspirin/extended-release dipyridamole (Aggrenox) is indicated for secondary stroke prevention, and stroke is frequently a co-morbid condition in patients with peripheral vascular disease. Interventional pain physicians frequently face the difficulty of deciding whether to continue or stop antithrombotic medications in the periprocedural period because of the devastating consequences of both hemorrhagic and thrombotic complications. Due to a paucity of data, no guidelines have been written specifically for interventional procedures for chronic pain. To aid future decision making, we present a case report of a psoas hematoma that developed after lumbar sympathetic block in a patient with end-stage renal failure and hepatic dysfunction who had limb-threatening ischemia. The patient was treated with Aggrenox until three days before the procedure.
Measurement of Optical Attenuation in Acrylic Light Guides for a Dark Matter Detector
M. Bodmer, N. Phan, M. Gold, D. Loomba, J. A. J. Matthews, K. Rielage
Physics, 2013, DOI: 10.1088/1748-0221/9/02/P02002
Abstract: Acrylic is a common material used in dark matter and neutrino detectors for light guides, transparent vessels, and neutron shielding, creating an intermediate medium between the target volume and photodetectors. Acrylic has low absorption within the visible spectrum and has a high capture cross section for neutrons. The natural radioactivity in photodetectors is a major source of background neutrons for low background detectors making the use of acrylic attractive for shielding and background reduction. To test the optical properties of acrylic we measured the transmittance and attenuation length of fourteen samples of acrylic from four different manufacturers. Samples were evaluated at five different wavelengths between 375 nm and 632 nm. We found that all samples had excellent transmittance at wavelengths greater than 550 nm. Transmittance was found to decrease below 550 nm. As expected, UV-absorbing samples showed a sharp decrease in transmittance below 425 nm compared to UV-transmitting samples. We report attenuation lengths for the three shortest wavelengths for comparison and discuss how the acrylic was evaluated for use in the MiniCLEAN single-phase dark matter detector.
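As a rough illustration of how a measured transmittance relates to an attenuation length (our own sketch, not the paper's analysis; the numbers are hypothetical):

```python
# Hypothetical sketch (not the paper's method): converting internal
# transmittance of a sample of thickness t into a bulk attenuation length
# via Beer-Lambert, I = I0 * exp(-t / L). Surface (Fresnel) losses are
# ignored here; a real measurement must correct for them.
import math

def attenuation_length_cm(transmittance, thickness_cm):
    return -thickness_cm / math.log(transmittance)

print(attenuation_length_cm(0.92, 10.0))  # e.g., T = 92% over 10 cm -> ~120 cm
```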
Search for Free Fractional Electric Charge Elementary Particles
V. Halyo, P. Kim, E. R. Lee, I. T. Lee, D. Loomba, M. L. Perl
Physics, 1999, DOI: 10.1103/PhysRevLett.84.2576
Abstract: We have carried out a direct search in bulk matter for free fractional electric charge elementary particles using the largest mass single sample ever studied - about 17.4 mg of silicone oil. The search used an improved and highly automated Millikan oil drop technique. No evidence for fractional charge particles was found. The concentration of particles with fractional charge more than 0.16e (e being the magnitude of the electron charge) from the nearest integer charge is less than $4.71\times10^{-22}$ particles per nucleon with 95% confidence.
Achievement of surgically soft and safe eyes--a comparative study
Sud R, Loomba R
Indian Journal of Ophthalmology, 1991,
Abstract: With the advent of intraocular lens implantation at the time of cataract extraction, especially by the intracapsular method, it has become very important to prevent the loss of vitreous during surgery. This can be achieved by lowering the intraocular pressure by various methods. In order to find the best method to achieve a soft and safe eye before surgery, a study was conducted on 90 patients undergoing intracapsular cataract extraction. The patients were divided into 9 groups of 10 each, and different methods of lowering intraocular pressure were tried and the results compared. It was observed that intravenous mannitol given preoperatively, together with pressure applied with a mercury column, formed the best combination to achieve the maximum tension-lowering effect.
GEM-based TPC with CCD Imaging for Directional Dark Matter Detection
N. S. Phan, R. J. Lauer, E. R. Lee, D. Loomba, J. A. J. Matthews, E. H. Miller
Abstract: Directional dark matter detection will require scale-ups to large volumes if low-pressure gas Time Projection Chambers (TPCs) are the only viable technology. We discuss some of the challenges for this technology, where balancing the goal of achieving the best sensitivity with that of cost-effective scale-up requires an optimization over a large parameter space. Critical for this are precision measurements of the fundamental properties of both electron and nuclear recoil tracks down to the lowest energies. Such measurements would provide a benchmark for background discrimination and directional sensitivity that could be used in future optimization studies for directional dark matter experiments. In this paper we describe a small, high-resolution, high signal-to-noise GEM-based TPC with a 2D CCD readout designed for this goal. The performance of the detector was characterized using X-rays, gamma-rays, and neutrons, enabling detailed measurements of electron and nuclear recoil tracks. Stable effective gas gains of greater than 1×10^5 were obtained in 100 Torr of pure CF4 by a cascade of three standard CERN GEMs, each with a 140 µm pitch. The high signal-to-noise and submillimeter resolution of the GEM amplification and CCD readout, together with low diffusion, allow for excellent background discrimination down to a recoil energy of ~ 20 keVr. Even lower thresholds, necessary for low-mass WIMPs for example, might be achieved by lowering the pressure and/or with full 3D track reconstruction. These and other paths for improvement are discussed, as are possible fundamental limitations imposed by the physics of energy loss.
Methicillin and vancomycin resistant S. aureus in hospitalized patients
Loomba Poonam, Taneja Juhi, Mishra Bibhabati
Journal of Global Infectious Diseases, 2010,
Abstract: S. aureus is the major bacterial cause of skin, soft tissue and bone infections, and one of the commonest causes of healthcare-associated bacteremia. Hospital-associated methicillin-resistant S. aureus (MRSA) carriage is associated with an increased risk of infection, morbidity and mortality. Screening of high-risk patients at the time of hospital admission and decolonization has proved to be an important factor in an effort to reduce nosocomial transmission. The electronic database Pub Med was searched for all the articles on "Establishment of MRSA and the emergence of vancomycin-resistant S. aureus (VRSA)." The search included case reports, case series and reviews. All the articles were cross-referenced to search for any more available articles. A total of 88 references were obtained. The studies showed a steady increase in the number of vancomycin-intermediate and vancomycin-resistant S. aureus. Extensive use of vancomycin creates a selective pressure that favors the outgrowth of rare, vancomycin-resistant clones leading to heterogenous vancomycin intermediate S. aureus hVISA clones, and eventually, with continued exposure, to a uniform population of vancomycin-intermediate S. aureus (VISA) clones. However, the criteria for identifying hVISA strains have not been standardized, complicating any determination of their clinical significance and role in treatment failures. The spread of MRSA from the hospital to the community, coupled with the emergence of VISA and VRSA, has become major concern among healthcare providers. Infection-control measures, reliable laboratory screening for resistance, appropriate antibiotic prescribing practices and avoidance of blanket treatment can prevent long-term emergence of resistance.
Gastrointestinal histoplasmosis presenting as colonic pseudotumour
Sehgal S, Chawla R, Loomba P, Mishra B
Indian Journal of Medical Microbiology, 2008,
Abstract: We report a case of gastrointestinal histoplasmosis in a 45-year-old HIV-positive man who was misdiagnosed as a case of colonic cancer. The patient presented with low-grade fever, pain in the lower abdomen, anorexia and weight loss of six months' duration. On examination, a lump in the left iliac fossa was detected. Colonoscopy revealed a stricture and an ulcerated growth in the sigmoid colon. Radiological investigations suggested a malignant/inflammatory mass in the sigmoid colon with luminal compromise. The patient was operated on, and the ulcerated tissue was sent for histopathological examination, which revealed numerous intracellular, 2-4 μm, oval, narrow-based budding yeast cells suggestive of Histoplasma capsulatum. Subsequently, the patient developed fluffy opacities on chest X-ray. Examination of sputum revealed the presence of acid-fast bacilli and yeast forms of H. capsulatum. The patient was started on amphotericin B but died on the seventeenth postoperative day. The diagnosis of histoplasmosis was made retrospectively; the atypical presentation and rarity of the disease led to this diagnostic pitfall. To the best of our knowledge, this is the first report from India of gastrointestinal histoplasmosis presenting as a colonic pseudotumour.
Cryptococcal granulomas in an immunocompromised HIV-negative patient
Taneja Juhi,Bhargava Aradhana,Loomba Poonam,Dogra Vinita
Indian Journal of Pathology and Microbiology , 2008,
Abstract: Disseminated cryptococcosis usually occurs in immunocompromised individuals with defective cell-mediated immunity, most commonly seen with HIV infection. We present a case of disseminated cryptococcosis in an HIV-negative male patient who presented with headache, fever, altered sensorium of short duration and multiple cutaneous lesions. An emergency CT scan of the head showed multiple intracranial and intraventricular granulomas. Routine laboratory investigations were within the normal range. A CSF examination revealed capsulated yeasts on India ink and a culture yielded cryptococcus neoformans. A cryptococcal antigen test by latex agglutination kit was positive. A biopsy revealed multiple capsulated yeasts cells in the cutaneous lesions, which were consistent with cryptococcus neoformans. The patient was successfully treated with Amphotericin B and Fluconazole with regression of cranial and cutaneous lesions.
Comparison of 12-Month Outcomes with Zotarolimus- and Paclitaxel-Eluting Stents: A Meta-Analysis
Rohit S. Loomba,Suraj Chandrasekar,Neil Malhotra,Rohit R. Arora
ISRN Cardiology , 2011, DOI: 10.5402/2011/675638
The Deep Lens Survey
D. Wittman,J. A. Tyson,I. P. Dell'Antonio,A. C. Becker,V. E. Margoniner,J. Cohen,D. Norman,D. Loomba,G. Squires,G. Wilson,C. Stubbs,J. Hennawi,D. Spergel,P. Boeshaar,A. Clocchiatti,M. Hamuy,G. Bernstein,A. Gonzalez,P. Guhathakurta,W. Hu,U. Seljak,D. Zaritsky
Physics , 2002, DOI: 10.1117/12.457348
Abstract: The Deep Lens Survey (DLS) is a deep BVRz' imaging survey of seven 2x2 degree fields, with all data to be made public. The primary scientific driver is weak gravitational lensing, but the survey is also designed to enable a wide array of other astrophysical investigations. A unique feature of this survey is the search for transient phenomena. We subtract multiple exposures of a field, detect differences, classify, and release transients on the Web within about an hour of observation. Here we summarize the scientific goals of the DLS, field and filter selection, observing techniques and current status, data reduction, data products and release, and transient detections. Finally, we discuss some lessons which might apply to future large surveys such as LSST. | CommonCrawl |
The influence of solid state information and descriptor selection on statistical models of temperature dependent aqueous solubility
Richard L. Marchese Robinson (ORCID: orcid.org/0000-0001-7648-8645), Kevin J. Roberts & Elaine B. Martin
Journal of Cheminformatics, volume 10, Article number: 44 (2018)
Abstract
Predicting the equilibrium solubility of organic, crystalline materials at all relevant temperatures is crucial to the digital design of manufacturing unit operations in the chemical industries. The work reported in our current publication builds upon the limited number of recently published quantitative structure–property relationship studies which modelled the temperature dependence of aqueous solubility. One set of models was built to directly predict temperature dependent solubility, including for materials with no solubility data at any temperature. We propose that a modified cross-validation protocol is required to evaluate these models. Another set of models was built to predict the related enthalpy of solution term, which can be used to estimate solubility at one temperature based upon solubility data for the same material at another temperature. We investigated whether various kinds of solid state descriptors improved the models obtained with a variety of molecular descriptor combinations: lattice energies or 3D descriptors calculated from crystal structures or melting point data. We found that none of these greatly improved the best direct predictions of temperature dependent solubility or the related enthalpy of solution endpoint. This finding is surprising because the importance of the solid state contribution to both endpoints is clear. We suggest our findings may, in part, reflect limitations in the descriptors calculated from crystal structures and, more generally, the limited availability of polymorph specific data. We present curated temperature dependent solubility and enthalpy of solution datasets, integrated with molecular and crystal structures, for future investigations.
A plethora of computational approaches currently exist to predict the equilibrium solubility of organic chemicals, as well as related thermodynamic terms such as the free energy of solvation [1]. These approaches include data driven statistical modelling approaches, such as quantitative structure–property relationships (QSPRs), as well as various kinds of physics based models. The focus of much of this work is on the prediction of aqueous solubility at a single temperature, or a nominal single value around typical ambient temperatures, to support estimation of product performance, e.g. in terms of the bioavailability of active pharmaceutical ingredients (APIs) or the environmental fate of pollutants [1,2,3].
In contrast, we are interested in predicting the temperature dependence of equilibrium solubility. Predictions of the solubility of relevant organic crystalline materials, in all relevant solvents, across a range of temperatures are crucial for digital design of unit operations in pharmaceutical manufacturing. For example, they could support the design of cooling crystallization operations [4]. Determination of aqueous solubility at elevated temperatures may also be relevant to the design of wet granulation processes [5, 6].
It is important to note that various kinds of physics based approaches to modelling solution thermodynamics are capable of capturing temperature dependence, including in complex mixtures [1, 7,8,9]. If combined with estimations of solid state thermodynamic contributions, these might be applied to predict the temperature dependence of solubility [10,11,12,13].
However, physics based models are not necessarily more accurate and may be more computationally expensive than QSPR approaches [1, 14]. Interestingly, few QSPR models have been developed to capture the temperature dependence of solubility. Some QSPR models were reported to predict the solubilities of organic chemicals, across a range of temperatures, in supercritical carbon dioxide for small (less than 30 chemicals), non-diverse datasets [15, 16]. More recently, two QSPR studies sought to capture the temperature dependence of aqueous solubility for large, chemically diverse datasets [14, 17].
Specifically, Avdeef [17] developed QSPR models for the standard enthalpy of solution, for the unionized solute, in water. Under certain assumptions, the variation in solubility with temperature may be expressed in terms of the van't Hoff relationship in Eq. (1), where S is the solubility (in molar concentration units), T is the temperature (in Kelvin), R is the molar gas constant and \( \Delta H_{sol}^{0} \) is the standard enthalpy of solution [17,18,19,20]. If it is assumed that the standard enthalpy of solution is effectively constant over the temperature range of interest, Eq. (1) can be used to interpolate solubility values between temperatures or extrapolate solubility data from one temperature to another [17].
$$ \log_{10} S = \frac{-\Delta H_{sol}^{0}}{\ln(10)\,RT} + \mathrm{constant} \tag{1} $$
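Where Eq. (1) is used for interpolation or extrapolation, the arithmetic is straightforward. As a minimal Python sketch, assuming a constant standard enthalpy of solution over the temperature range and using purely illustrative input values:

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def extrapolate_log10_solubility(log10_s_ref, t_ref_k, t_new_k, dh_sol_j_per_mol):
    """Van't Hoff extrapolation via Eq. (1), assuming the standard enthalpy
    of solution is constant between t_ref_k and t_new_k."""
    return log10_s_ref + (dh_sol_j_per_mol / (math.log(10) * R)) * (1.0 / t_ref_k - 1.0 / t_new_k)

# Illustrative values only: log10(S) = -3.0 at 298.15 K, dH_sol = +25 kJ/mol
print(extrapolate_log10_solubility(-3.0, 298.15, 310.15, 25000.0))
```

For an endothermic dissolution (positive enthalpy of solution), as in this example, the predicted solubility increases with temperature, consistent with Eq. (1).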
The models for the enthalpy of solution developed by Avdeef [17] were based on different combinations of molecular descriptors and melting point values and built using multiple linear regression (MLR) [21], recursive partition tree [22] and random forest [23, 24]. The melting points were measured or predicted from a molecular descriptors based model [25].
In contrast, Klimenko et al. [14] built a model for directly predicting aqueous solubility at a specified temperature. Their predictions were based on molecular descriptors and a descriptor derived from experimental temperature, with random forest used to train the model.
In the work reported in our current article, we extended the work of Avdeef [17] and Klimenko et al. [14] as follows. Firstly, we investigated the effect of incorporating crystallographic information, in the form of lattice energies or 3D descriptors calculated from an experimental crystal structure, into the models. Secondly, we compared models for the enthalpy of solution based on molecular descriptors with or without melting point values and examined the effect of including melting point values into direct predictions of temperature dependent solubility. In both respects, this means our work is a contribution to the wider debate in the recent literature regarding the importance of explicitly capturing solid state contributions in QSPR models of solubility and whether the availability of crystallographic or melting point information is essential to achieve this [3, 26,27,28,29,30]. Indeed, it has recently been suggested that the major source of error in QSPR prediction of solubility is the failure of molecular descriptors to fully capture solid state contributions [28]. Thirdly, we considered a larger variety of molecular descriptor permutations, with or without the explicit solid state contribution descriptors, including the application of a feature selection algorithm to produce parsimonious models from high dimensional descriptor sets. Finally, we introduced a novel pseudo-cross-validation protocol for evaluating direct models of temperature dependent solubility. This novel validation protocol allowed us to investigate potential optimistic bias when validating those models.
Methods and data
For brevity, the essential points are provided below and further details are provided, under corresponding sub-headings, in Additional file 1.
Solubility data curation
Electronic datasets were curated for two endpoints related to temperature dependent solubility: enthalpy of solution values and temperature specific solubility values. Enthalpy of solution data (in kJ/mol) were curated from the publication of Avdeef [17] and temperature dependent solubility data (log10[molar concentration]) were curated from the publication of Klimenko et al. [14].
Avdeef [17] reported enthalpy of solution values derived from temperature dependent intrinsic solubility values via van't Hoff analysis. (Intrinsic solubility refers to the solubility of the unionized solute [1]. Avdeef [17] estimated the intrinsic solubility values from experimental values reported in various literature studies.) Avdeef also presented curated enthalpy of solution values obtained from direct calorimetric measurements, which were considered more reliable [17]. Here, it has been assumed that all curated enthalpy of solution values closely corresponded to the standard enthalpy of solution, such that they could be used via Eq. (1) to interpolate or extrapolate intrinsic solubility data between temperatures.
As well as curating the endpoint values, we curated the corresponding metadata, including chemical names (or CAS numbers) identifying the molecular species and corresponding polymorph metadata, where this was reported. This included curating the data quality assessments made by Avdeef [17]. An overview of this curation process is provided in Fig. 1.
An overview of the curation of endpoint data and associated metadata, carried out for this article, for endpoints related to temperature dependent aqueous solubility. These endpoints were the enthalpy of solution and temperature specific solubility measurements, for datasets curated starting from the work of Avdeef [17] and Klimenko et al. [14] respectively. The descriptions on the right hand side of this image refer to the curated datasets we prepared, starting from the information reported in these earlier studies and based on cross-referencing against other references where necessary, from which the datasets for QSPR modelling were derived. Full details of the curation process, including explanations of how these curated datasets differed from those reported in the literature, are provided in Additional file 1. See the sections "Solubility data curation" and "Comparison to the literature" therein
It should be noted that the "Avdeef (2015) derived dataset" and "Klimenko et al. (2016) derived dataset" labels (Fig. 1) refer to the datasets curated into the electronic template in this work, starting from the work of Avdeef [17] and Klimenko et al. [14] respectively; differences in the datasets arose during the curation process. One key difference between our versions of these datasets and those reported in these earlier studies is that we filtered out dataset entries for which there was no evidence that the enthalpy of solution or solubility data corresponded to dissolution from the solid state.
Integration with molecular structures
In the first instance, SMILES representations of molecular structures were retrieved via querying the following online resources: the Chemical Identifier Resolver service [31], ChemSpider [32] and PubChem [33, 34]. For those scenarios where no, or inconsistent, molecular structures were retrieved, other references were consulted to determine the molecular structures.
Integration with crystal structures
Where possible, Cambridge Structural Database (CSD) refcodes were obtained for each combination of molecular structure identifier and polymorph description (i.e. each material), each refcode denoting a crystal structure [35]. Only a small proportion (< 3%) of solubility or enthalpy of solution data points were associated with a description of the corresponding polymorphic form in the Klimenko et al. [14] or Avdeef [17] derived datasets, respectively; i.e., the polymorph description was typically blank. Hence, in the majority of cases, only a possible match could be determined based upon cross-referencing the molecular identifiers (names and CAS numbers) and molecular structures associated with the data points against the CSD. Nonetheless, where polymorph information was available in the dataset and CSD for provisional matches, conflicting polymorph descriptions were manually identified and the corresponding matches deleted. In keeping with literature precedence, all multiple matches remaining were filtered to only keep the putative lowest energy structure, based upon calculated lattice energy [29, 30].
Calculation of lattice energies
Lattice energies were calculated from the available crystal structures, using the COMPASS force field [36,37,38], and used as a descriptor of solid state contribution to the modelled endpoints. This is justified by the fact that solubility can be related to the standard Gibbs free energy change, comprising enthalpic and entropic contributions, upon moving from the solid state to the solution phase [1, 4]. In turn, this may be decomposed into the free energy change of sublimation (breaking of the crystal lattice to form a gaseous phase) and solvation (transfer from the gas phase to the solution phase), i.e. hydration in the case of an aqueous solution [1, 26, 29]. Hence, the enthalpy of solution may be decomposed into the sublimation enthalpy and the solvation enthalpy. The lattice energy is a contribution to the sublimation enthalpy. It is defined as the energy change upon forming the crystal lattice from infinitely separated gas phase molecules [29]. Under certain assumptions, the enthalpy of sublimation may be related to the lattice energy as per Eq. (2) [29]. In Eq. (2), \( \Delta H_{sub} \) represents the enthalpy of sublimation, \( E_{latt} \) the lattice energy, \( R \) the gas constant and \( T \) the temperature in Kelvin.
$$ \Delta H_{sub} = -E_{latt} - 2RT \tag{2} $$
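A minimal Python sketch of this conversion, using an illustrative (not measured) lattice energy value and the stated constant-temperature assumption:

```python
R = 8.314e-3  # molar gas constant, kJ/(mol K)

def sublimation_enthalpy_from_lattice_energy(e_latt_kj_per_mol, t_k=298.0):
    """Eq. (2): dH_sub = -E_latt - 2RT; E_latt is negative for a bound crystal."""
    return -e_latt_kj_per_mol - 2.0 * R * t_k

# Illustrative value only: E_latt = -120 kJ/mol gives dH_sub of roughly 115 kJ/mol
print(sublimation_enthalpy_from_lattice_energy(-120.0))
```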
Validation of lattice energies
The calculated lattice energies were compared to the experimental estimates of lattice energies, obtained from experimental sublimation enthalpies via Eq. (2) and assuming a constant temperature of 298 K, for a subset of the SUB-48 dataset from McDonagh et al. [29]. (See "Filtering of SUB-48 Dataset" in Additional file 1.) In keeping with the solubility and enthalpy of solution datasets, this dataset also comprised a set of single component crystals and a mixture of general organic and pharmaceutical API small molecules, and it was filtered according to the crystal structure selection criteria applied when integrating the QSPR datasets with crystal structures.
Preparation of molecular structures for descriptor calculations
Prior to calculating 2D molecular descriptors, all molecular structures were standardized and filtered. Prior to calculating 3D molecular descriptors from the conformer generator structures (but not from the crystal structures), similar standardization was applied to the structures retained for the QSPR ready datasets, except that stereochemistry was retained prior to conformer generation.
Calculation of 2D molecular descriptors
The choice of 2D molecular descriptors was based upon the different permutations considered by Klimenko et al. [14] and Avdeef [17]. Where possible, we sought to calculate the same subsets of descriptors as per these previous studies, and to consider the same combinations of these subsets, as well as the combined pool of all molecular descriptors. Each of the different subsets is denoted by a label explained in Additional file 1: Table S1 and the combinations of 2D molecular descriptors evaluated are enumerated, therein, using these labels. These labels are also used in the file names of the versions of the QSPR ready datasets (see Table 1) provided in Additional file 2, to denote the 2D molecular descriptors incorporated into the applicable combination of the available descriptors.
Calculation of crystal structure based 3D molecular descriptors
In addition to employing calculated lattice energies as a descriptor, the value of crystallographic information was evaluated via computing 3D molecular descriptors from the molecular structure found in the crystal. Specifically, charged partial surface area (CPSA) descriptors, representing the charge distribution at the molecular surface [39,40,41], were calculated using Mordred [42, 43]. These may partially capture intermolecular interactions in the solid state. Whereas the calculated lattice energies and experimental melting point data explicitly convey information about the solid state contribution, these descriptors may—in part—implicitly represent this information. However, if the solution state structures are not wholly different, they might partially capture molecular interactions associated with non-solid state contributions. Moreover, these descriptors may also be calculated for 3D molecular structures estimated from the available molecular information. Hence, in order to assess whether these descriptors added value due to their having been computed from the crystal structure, corresponding models were built using CPSA descriptors calculated from the 3D molecular structure derived from the originally curated SMILES using the ETKDG conformer generator algorithm [44, 45] and UFF force-field [46] geometry refinement. These descriptors were only calculated for those dataset entries which could be integrated with crystal structures.
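The following Python sketch illustrates this kind of pipeline with RDKit and Mordred; the example molecule and settings are illustrative, not necessarily the exact configuration used in this work:

```python
from rdkit import Chem
from rdkit.Chem import AllChem
from mordred import Calculator, CPSA

# Arbitrary example molecule (aspirin); in practice stereochemistry is retained
mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))

# ETKDG conformer generation followed by UFF geometry refinement
AllChem.EmbedMolecule(mol, AllChem.ETKDG())
AllChem.UFFOptimizeMolecule(mol)

# CPSA descriptors are computed from the embedded 3D coordinates
calc = Calculator(CPSA)
for name, value in calc(mol).asdict().items():
    print(name, value)
```

To compute the crystal structure based variants, the conformer generation step would be replaced by loading the 3D molecular geometry extracted from the assigned CSD entry.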
Temperature descriptor
For a given material, assuming the standard enthalpy of solution may be approximated as a constant over the relevant temperature range, as well as other assumptions, the logarithm of solubility may be linearly related to (1/T), where T is the temperature in Kelvin [cf. Eq. (1)] [17,18,19,20]. Hence, as temperature dependent solubility in log10[molar concentration] units was modelled for the Klimenko et al. [14] derived dataset, the experimental temperature values were transformed to (1/T) for use as a descriptor. N.B. Klimenko et al. [14] proposed a more complicated temperature descriptor. However, for simplicity, and because the (1/T) dependence is grounded in fundamental thermodynamics under certain assumptions [17,18,19,20], we chose to use (1/T) as the descriptor.
Melting point descriptor
Experimental melting point data were used as a descriptor for all datasets. The data retrieved do not necessarily correspond to the polymorph for which enthalpy of solution or solubility data were modelled.
QSPR ready datasets
Table 1 summarizes the QSPR ready datasets which were used for the evaluation of the modelling approaches investigated in our work. A summary of the derivation of these QSPR ready datasets is provided in Fig. 2. These datasets were derived from the curated datasets summarized in Fig. 1, following integration with structural information, standardization of molecular structures and calculation of descriptors. For the enthalpy of solution datasets, data points noted to be of low quality by Avdeef [17] were also filtered. Each derived dataset matched instance IDs to an endpoint value and a vector of descriptors.
Table 1 QSPR ready datasets
A summary of the steps taken to transform the curated experimental endpoint datasets (see Fig. 1) into the QSPR-ready datasets used for modelling studies (see Table 1). As is explained in the text of Additional file 1, some of these steps were carried out iteratively
The instances in these datasets, i.e. the unique identifiers associated with endpoint values and corresponding descriptor vectors used for modelling, represent different organic crystalline materials, typically corresponding to different molecular chemicals, and—for the Klimenko et al. [14] derived QSPR datasets—different temperatures. Where multiple endpoint data points were associated with a given instance identifier, the arithmetic mean endpoint value was assigned. Hence, each instance identifier only occurred once.
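A minimal pandas sketch of this deduplication step, using hypothetical instance IDs and endpoint values:

```python
import pandas as pd

# Hypothetical instance IDs with repeated endpoint measurements
df = pd.DataFrame({
    "instance_id": ["M1_T298", "M1_T298", "M2_T298"],
    "endpoint":    [-3.10,     -3.20,     -1.50],
})

# One row per instance ID, carrying the arithmetic mean endpoint value
deduplicated = df.groupby("instance_id", as_index=False)["endpoint"].mean()
print(deduplicated)
```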
Descriptor combinations investigated
For the QSPR ready datasets which were not integrated with crystallographic information, all previously described combinations of 2D molecular descriptors were considered, with or without the melting point descriptor and in combination with the temperature descriptor for the Klimenko et al. [14] derived datasets. For the QSPR ready datasets integrated with crystallographic information, the same combinations of descriptors were considered, with or without the calculated lattice energy. In addition, a new set of descriptor combinations was evaluated for these datasets based upon the 3D descriptors calculated from the corresponding crystal structure, or the conformer generator structure. These descriptor combinations were obtained by adding the 3D descriptors to all descriptor combinations involving the combined set of 2D molecular descriptors, or by replacing the combined set of 2D molecular descriptors with the 3D descriptors. Finally, for the high dimensional descriptor combinations containing the complete set of 2D molecular descriptors, but not the 3D descriptors, feature selection was applied to yield another set of descriptor combinations. Feature selection was not applied to the sets containing the 3D descriptors, as initial results obtained with the 2D molecular descriptors were worse when feature selection was applied.
Feature selection
The feature selection algorithm and rationale is documented in Additional file 1.
Descriptor scaling
All descriptor values were range scaled to lie between 0 and 1, using the training set ranges, prior to modelling.
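For example, this kind of range scaling can be performed with scikit-learn's MinMaxScaler, fitted on the training set only (a sketch, not the exact implementation used here):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.array([[1.0, 10.0], [3.0, 30.0], [2.0, 20.0]])  # illustrative
X_test = np.array([[2.5, 40.0]])

scaler = MinMaxScaler().fit(X_train)   # ranges learned from the training set only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)  # test values may fall outside [0, 1]
```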
Models were built using multiple linear regression (MLR) [21] and the non-linear random forest regression (RFR) [23, 24] algorithms. For all RFR models, the model was built five times using a different random number generator seed, and each tree was grown on a training set sample drawn without replacement, rather than on a bootstrap sample. All cross-validation statistics were averaged (arithmetic mean) across these seeds, as were all descriptor importance values.
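A simplified sketch of the repeated-seed protocol using scikit-learn is shown below; the hyperparameters are illustrative, and the without-replacement subsampling of the training set used in this work is not reproduced:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

def mean_rmse_over_seeds(X_train, y_train, X_test, y_test, n_seeds=5):
    """Average the test RMSE over several random seeds."""
    rmses = []
    for seed in range(n_seeds):
        model = RandomForestRegressor(n_estimators=500, random_state=seed)
        model.fit(X_train, y_train)
        rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
        rmses.append(rmse)
    return float(np.mean(rmses))
```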
Validation statistics
Model performance was assessed in terms of the coefficient of determination (R2) and the root mean squared error (RMSE) [47]. (Definitions are provided in Additional file 1.) For the comparison of models on the same test set, these statistics yield identical rankings. However, as R2 is a composite of the mean squared error and the variance of the test set endpoint values, propagation of errors necessarily makes it less robust. Hence, for comparisons on the same dataset, using the same cross-validation folds, the mean RMSE values were compared. However, RMSE estimates are not comparable across different endpoints, or across test sets where the range in endpoint values differs [47]. Hence, for comparisons across datasets, or on the same dataset using different cross-validation folds, the mean R2 values were compared.
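As a small illustration of how these two statistics relate (R2 being one minus the ratio of the mean squared error to the variance of the observed values), with illustrative numbers:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([-2.1, -3.4, -1.0, -4.2])  # illustrative observed values
y_pred = np.array([-2.0, -3.0, -1.5, -4.0])  # illustrative predictions

rmse = mean_squared_error(y_true, y_pred) ** 0.5
r2 = r2_score(y_true, y_pred)  # equals 1 - MSE / variance of y_true
print(rmse, r2)
```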
Cross-validation protocols
Initially, a "vanilla" cross-validation protocol was applied: R repetitions of stratified K-fold cross-validation (R = 5, K = 5). In addition, results for a novel "pseudo cross-validation" protocol are reported: the "remove temperature" protocol, labelled the "CV = rt" protocol for brevity.
The application of the CV = rt protocol ensured that solubility values for the same organic material, measured at different temperatures, could not be included in the corresponding training and test set. The motivation for introducing this protocol was to assess whether simply applying a "vanilla" cross-validation protocol could give optimistically biased results, when applied to temperature dependent data—where the dataset instances could correspond to the same material, yet with the endpoint value measured at a different temperature. (Hence, this protocol was only applicable to the datasets derived from Klimenko et al. [14], where the endpoint was temperature dependent solubility.) The difference between the "vanilla" cross-validation protocol (CV = v) and the novel pseudo-cross-validation protocol (CV = rt) is illustrated by Figs. 3 and 4 respectively.
The application of a standard, or "vanilla", cross-validation protocol (fivefold CV) to temperature dependent endpoint data, where the instance IDs comprise the [MATERIAL IDENTITY]_[TEMPERATURE]. As shown here, instances corresponding to the same material, yet with endpoint values measured at different temperatures, might be assigned to different folds. (For this hypothetical dataset, this means [M1]_[T = 25] and [M1]_[T = 30] were assigned to folds F1 and F2 respectively.) Since each fold is used, in turn, as the test set, with the remaining data being used as the training set, this allows the same material to appear in corresponding training and test sets, when the corresponding endpoint values were measured at different temperatures
The application of the CV = rt pseudo-cross-validation protocol to the same hypothetical dataset shown in Fig. 3. The first step entails the transformation of 1 into 2, via removing the temperature [T = x] suffix from the ID, deleting all but one occurrence of each truncated ID and assigning this truncated ID the arithmetic mean endpoint value associated with all corresponding original IDs. The transformation of 2 into 3 just entails the application of the standard cross-validation protocol. (In the current case, the nominal endpoint values were required as stratified sampling, based on the distribution of endpoint values, was employed for cross-validation.) Finally, the original dataset IDs are assigned the folds associated with their truncated IDs, in 3, to give the CV = rt folds 4. This ensures that instance IDs corresponding to the same material, yet with endpoint IDs measured at different temperatures, are always assigned to the same fold. (For this hypothetical dataset, this means [M1]_[T = 25] and [M1]_[T = 30] were both assigned to fold F1.) This ensures they can never be placed in corresponding training and test sets
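The train/test separation guarantee provided by the CV = rt protocol can be illustrated with scikit-learn's GroupKFold, using the material identity as the group; note that this sketch omits the stratification on material-level mean endpoint values shown in Fig. 4:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Instance IDs of the form [MATERIAL]_[TEMPERATURE]; the group is the material
instance_ids = ["M1_298", "M1_310", "M2_298", "M2_310", "M3_298", "M3_310"]
groups = [iid.split("_")[0] for iid in instance_ids]
X = np.zeros((len(instance_ids), 3))  # placeholder descriptor matrix
y = np.zeros(len(instance_ids))       # placeholder endpoint values

for train_idx, test_idx in GroupKFold(n_splits=3).split(X, y, groups=groups):
    train_materials = {groups[i] for i in train_idx}
    test_materials = {groups[i] for i in test_idx}
    assert not train_materials & test_materials  # no material spans both sets
```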
Statistical significance of differences in cross-validated results
Pairwise differences in arithmetic mean validation statistics, from cross-validation, were evaluated for statistical significance for the key scenarios of interest. These key scenarios were pairwise comparisons of all corresponding modelling protocols, or cross-validation protocols, differing only with respect to the following: (1) whether the lattice energy descriptor was included; (2) whether the melting point descriptor was included; (3) whether the crystal structure based 3D descriptors, as opposed to the conformer generator based 3D descriptors, were used; (4) whether feature selection was applied; (5) whether the CV = v or CV = rt cross-validation protocol was applied. For scenarios (1–4), p values were computed based on the paired RMSE values. For scenario (5), p values were computed based on the R2 values.
Statistical significance was assessed via calculating approximate p values which were then adjusted, separately for each key scenario, to account for the multiple comparisons made. All references to statistically significant results refer to adjusted p values < 0.05. However, only approximate assessments of statistical significance could be made and it is possible that the applied analysis somewhat overstated the degree to which statistically significant findings were obtained. Hence, all adjusted p values are considered apparent indicators of statistical significance.
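As an illustration only, since the exact adjustment procedure is documented in Additional file 1, a multiple-comparison correction of raw p values could be applied with statsmodels; the Holm method shown here is an assumption, not necessarily the method used in this work:

```python
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.020, 0.049, 0.300]  # illustrative raw p values
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print(list(zip(p_adjusted, reject)))
```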
Descriptor importance analysis
A final model, or set of models using five different random seeds for RFR, was built on the entirety of the relevant dataset and the corresponding descriptor importance values, or arithmetic mean values for RFR, were analyzed. For MLR, the magnitudes of the descriptor coefficients were retrieved. For RFR, the descriptor permutation based importance measure was employed [23].
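A sketch of a permutation-based importance calculation, here using scikit-learn's permutation_importance on synthetic data rather than the exact procedure of [23]:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = make_regression(n_samples=200, n_features=5, noise=0.5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Importance = mean drop in model score when one feature's values are shuffled
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)
```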
Lattice energy predictions using molecular descriptors
To get some insight into the extent to which calculating lattice energies from the crystal structures added information to the models of enthalpy of solution and temperature dependent solubility, beyond that inherent in the molecular descriptors, models for the lattice energy descriptor were built using the combined set of 2D molecular descriptors and random forest regression. For the Avdeef_ExDPs_CS_True and Avdeef_ExDPs_Cal_CS_True datasets, the same cross-validation folds were used as per the enthalpy of solution models. For the Klimenko_CS_True dataset, the CV = v cross-validation folds were used and all repeated occurrences of the same combination of lattice energy and descriptor values, due to different solubility values at different temperatures, were removed. Each set of modelling results was generated five times, using different random seeds. Descriptor importance analysis was carried out as per the models of enthalpy of solution and temperature dependent solubility.
Computational details
Further details related to the software and hardware used to generate our results are documented in Additional file 1.
Summary of cross-validated results
Ultimately, cross-validated modelling results were generated for the Avdeef [17] (enthalpy of solution endpoint) and Klimenko et al. [14] (temperature dependent solubility endpoint) derived datasets, according to a variety of different combinations of molecular (plus temperature, for the temperature dependent solubility endpoint) descriptors, with or without computed lattice energies and with or without melting point values, modelling algorithms (RFR or MLR), feature selection (yes or no) and cross-validation schemes. (N.B. For brevity, we refer to different modelling approaches—meaning a given combination of modelling algorithm, descriptor set and use, or not, of feature selection—as different models.) The predictive performance has been summarized, for each scenario, in terms of the arithmetic mean RMSE and R2 values on the validation sets. Detailed results are presented, in Excel workbooks, in Additional file 3. These detailed results include all R2 and RMSE values obtained from cross-validation, along with the mean of those values and, for key scenarios described under "Statistical significance of differences in cross-validated results", pairwise differences in those mean values and the corresponding adjusted p values. As explained under "Methods and Data", models were ranked on the same dataset using the mean RMSE and, with the exception of comparisons between results obtained using different cross-validation folds, p values were computed based on the mean RMSE values. All code and dataset files required to generate these cross-validated results are, as documented in Additional file 1, provided in Additional files 4, 5, 6, 7, 8, 9, 10.
Choosing the most suitable cross-validation protocol
For the Klimenko et al. [14] derived datasets, pairwise comparison of all corresponding performance estimates obtained with the same model evaluated via the CV = v and CV = rt cross-validation protocols clearly indicated that lower estimates of performance were almost always obtained using the CV = rt protocol. For 107 out of the relevant 116 scenarios, there was an apparent reduction in performance, in terms of the cross-validated mean R2, upon moving from the standard cross-validation (CV = v) to the pseudo-cross-validation (CV = rt) protocol. (All of the remaining nine scenarios corresponded to extreme overfitting of MLR models.) For 90 of these 107 pairwise comparisons, the mean differences appeared to be statistically significant (Additional file 3).
These results are unsurprising: allowing data for the same material to appear in corresponding training and test sets at different temperatures can be expected to inflate performance estimates. As we are most interested in predicting temperature dependent solubility profiles for untested materials, we focus on the results obtained with our novel CV = rt protocol, except when comparing our results to those obtained by Klimenko et al. [14].
Comparison to the literature
For the modelling scenarios which were most directly comparable to the work of Avdeef [17] and Klimenko et al. [14], we typically obtained similar results. Our best results, obtained using different modelling approaches, were better than or fairly similar to those reported in these studies. However, due to refinements we made to their datasets and some differences in descriptor calculations, modelling protocols and validation protocols, we do not report perfectly like-for-like comparisons. Further details are provided in Additional file 1.
Summary of best cross-validated results
In order to assess the effect of incorporating different sets of descriptors on the predictive performance for both endpoints, we focus on the top performing results. We present the relevant top ranking and second best results in Tables 2 and 3. (Additional file 1: Table S3 includes the top ranking CV = v results for the Klimenko et al. [14] derived datasets.)
Table 2 Top ranked results according to various scenarios for the enthalpy of solution datasets
Table 3 Top ranked results according to various scenarios for the temperature dependent solubility datasets
Effect of incorporating crystallographic information: lattice energy descriptor
The top performing model for enthalpy of solution, evaluated on the Avdeef_ExDPs_CS_True dataset, included calculated lattice energy as a descriptor (Table 2). However, none of the other top models for enthalpy of solution (Table 2) or direct prediction of temperature dependent solubility (Table 3) incorporated the lattice energy descriptor.
This may be partially attributed to the presence of the melting point or crystal structure based 3D descriptors acting as a partial proxy for the solid state contribution which the lattice energy descriptor is designed to capture. When only those models including neither the melting point nor the crystal structure based 3D descriptors are considered (see Additional file 3), the top model for the Avdeef_ExDPs_CS_True dataset does incorporate the lattice energy descriptor, whereas the top model for the Avdeef_ExDPs_Cal_CS_True dataset does not. (However, the results on the smaller Avdeef_ExDPs_Cal_CS_True dataset may be less robust.) For the Klimenko_CS_True dataset, in contrast, the new top ranking model does incorporate the lattice energy descriptor.
Nonetheless, for all scenarios in which the top performing model incorporated the lattice energy descriptor, the apparent performance enhancement over the corresponding model which did not incorporate the lattice energy descriptor was negligible (Fig. 5) and was never statistically significant. This remains the case when only those models not incorporating melting point or crystal structure based 3D descriptors are considered (Fig. 6). For all such scenarios, the increases in mean RMSE, upon removing the lattice energy descriptor, were around 0.03 kJ/mol and 0.00 (to 2 d.p.) log units for predictions of enthalpy of solution and temperature dependent solubility respectively. The differences in mean R2 were around 0.00 (to 2 d.p.).
Cross-validated performance (RMSE) of the top performing model where the lattice energy (LE) descriptor was incorporated (LHS), compared to the corresponding model which did not include the lattice energy descriptor (RHS): dataset = Avdeef_ExDPs_CS_True. The distributions of cross-validated results are presented as a boxplot, with whiskers extending 1.5 times the interquartile range beyond the upper and lower quartiles, with the arithmetic mean superimposed as a black circle. The presence of a star denotes an apparently statistically significant difference in cross-validated mean RMSE
Cross-validated performance (RMSE) of the top performing models (excluding models incorporating the melting point or crystal structure based 3D descriptors) where the lattice energy (LE) descriptor was incorporated (LHS), compared to the corresponding model which did not include the lattice energy descriptor (RHS): a dataset = Avdeef_ExDPs_CS_True; b dataset = Klimenko_CS_True, CV = rt. All results are presented as per Fig. 5
Pairwise comparison of all relevant corresponding models identified only a minority of paired results, for both enthalpy of solution datasets and direct predictions of temperature dependent solubility (CV = rt), for which the inclusion of the lattice energy descriptor appeared to result in a statistically significant reduction in mean RMSE. Further discussion of the trends across all datasets is presented in Additional file 1 and the details for all pairwise comparisons are presented in Additional file 3.
Effect of incorporating crystallographic information: 3D descriptors based on crystal structure
Only in the case of the Avdeef_ExDPs_CS_True dataset did the top performing model include the crystal structure based 3D descriptors (Tables 2, 3). However, upon removing the potentially confounding factors of the melting point and lattice energy descriptors, the best modelling results for Avdeef_ExDPs_CS_True and Klimenko_CS_True (CV = rt) were obtained using these descriptors (Additional file 3).
All of these results for the Avdeef_ExDPs_CS_True dataset (Figs. 7, 8) appeared statistically significantly different to the corresponding result obtained using the 3D descriptors based upon conformer generator structures. (This is in spite of the lattice energy descriptor also being incorporated into the overall top Avdeef_ExDPs_CS_True model.) This suggests the models may genuinely have benefited from the solid state information implicit in the 3D descriptors based upon the crystal structure. However, the same comparison for the top Klimenko_CS_True (CV = rt) model, after removing models with the lattice energy or melting point descriptor, found the difference in mean RMSE to the corresponding model using 3D descriptors based upon conformer generator structures appeared statistically insignificant.
Cross-validated performance (RMSE) of the top performing model where the crystal structure based 3D descriptors were incorporated (LHS), compared to the corresponding model using the conformer generator based 3D descriptors (RHS): dataset = Avdeef_ExDPs_CS_True. All results are presented as per Fig. 5
Cross-validated performance (RMSE) of the top performing models (excluding models incorporating the melting point or lattice energy descriptor) where the crystal structure based 3D descriptors were incorporated (LHS), compared to the corresponding model using the conformer generator based 3D descriptors (RHS): a dataset = Avdeef_ExDPs_CS_True; b dataset = Klimenko_CS_True, CV = rt. All results are presented as per Fig. 5
Pairwise comparison identified only a minority of paired results, for one of the enthalpy of solution datasets (Avdeef_ExDPs_CS_True), for which the inclusion of crystal structure based 3D descriptors appeared to result in a statistically significant reduction in mean RMSE compared to the corresponding model using conformer generator based 3D descriptors. For the other enthalpy of solution dataset and direct prediction of temperature dependent solubility, no apparently significantly different results were obtained. Further discussion of the trends across all datasets is presented in Additional file 1 and the details for all pairwise comparisons are presented in Additional file 3.
Effect of incorporating melting point
For all relevant scenarios, save for models developed using the Avdeef_ExDPs_CS_False or Avdeef_ExDPs_CS_True datasets, the best performing models incorporated the melting point descriptor (Tables 2, 3). When models incorporating the other solid state contribution descriptors—i.e. the lattice energy and crystal structure based 3D descriptors—were excluded, the top models for all crystal structure integrated datasets incorporated the melting point descriptor. However, it should be noted that the apparent increase in best predictive performance upon incorporating the melting point descriptor was, at most, modest (Figs. 9, 10). Moreover, only the performance increases for the Klimenko_CS_False and Klimenko_CS_True datasets appeared statistically significant.
Cross-validated performance (RMSE) of the top performing models for all scenarios where they incorporated the melting point (MP) descriptor (LHS), compared to the corresponding model which did not include the MP descriptor (RHS): a dataset = Avdeef_ExDPs_Cal_CS_True; b dataset = Avdeef_ExDPs_Cal_CS_False; c dataset = Klimenko_CS_True, CV = rt; d Klimenko_CS_False, CV = rt. All results are presented as per Fig. 5
Cross-validated performance (RMSE) of the top performing models for all crystal structure integrated datasets, excluding models involving the lattice energy or crystal structure based 3D descriptors, where they incorporated the melting point (MP) descriptor (LHS), compared to the corresponding model which did not include the MP descriptor (RHS): a dataset = Avdeef_ExDPs_CS_True; b dataset = Avdeef_ExDPs_Cal_CS_True; c dataset = Klimenko_CS_True, CV = rt. All results are presented as per Fig. 5
Pairwise comparison identified that, for a majority of scenarios, the inclusion of the melting point descriptor appeared to result in statistically significant enhancement in direct predictions of temperature dependent solubility. However, this was almost never observed from pairwise comparison of the models of enthalpy of solution. Further discussion of the trends across all datasets is presented in Additional file 1 and the details for all pairwise comparisons are presented in Additional file 3.
Effect of feature selection
The application of the feature selection algorithm never yielded one of the top models as assessed according to any of the evaluation protocols. Hence, it cannot be claimed that feature selection improved the best predictive performance.
This is in keeping with the pairwise comparison of modelling protocols which differed only in terms of whether feature selection was employed. All such comparisons where feature selection appeared to improve the mean RMSE corresponded to scenarios in which MLR was applied in combination with the high dimensional combination of all 2D molecular descriptors. Indeed, all scenarios involving RFR indicated a reduction in predictive performance upon applying feature selection.
Significance of the temperature descriptor
The importance of the (1/T) descriptor, in terms of its coefficient magnitude, was consistently among the lowest of any descriptor for the evaluated MLR models. Conversely, for the evaluated models built using the non-linear RFR algorithm, the (1/T) descriptor was consistently in the top 20% of descriptors, excluding those models for which the molecular descriptors were based solely on the Absolv, Absolv and Ind (see Additional file 1: Table S1), or 3D descriptor sets.
These results can be explained by the van't Hoff relationship (see Eq. 1), which posits that, if the standard enthalpy of solution is roughly constant over the relevant temperature (T) range, log10(solubility) should be linearly related to (1/T) for a given material, with the slope of the trend line being proportional to the standard enthalpy of solution. Hence, due to the variation in the standard enthalpy of solution across materials, a non-linear relationship will exist between (1/T) and log10(solubility) across materials. We found that the van't Hoff relationship and the assumption that the standard enthalpy of solution is temperature independent holds well for most entries in the Klimenko et al. [14] derived datasets, for which an assessment was possible, and that the standard enthalpy of solution varied considerably across materials.
Detailed results supporting these comments are provided in Additional file 1.
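As a sketch of the per-material van't Hoff check described above, the standard enthalpy of solution can be recovered from the slope of a linear fit of log10(solubility) against (1/T); the data below are illustrative, not taken from the study datasets:

```python
import numpy as np

R = 8.314  # molar gas constant, J/(mol K)

# Illustrative solubility data for a single material at several temperatures
temps_k = np.array([283.15, 298.15, 313.15, 328.15])
log10_s = np.array([-3.60, -3.20, -2.85, -2.55])

slope, intercept = np.polyfit(1.0 / temps_k, log10_s, 1)
dh_sol = -slope * np.log(10) * R  # Eq. (1): slope = -dH_sol / (ln(10) R)
print(f"Estimated standard enthalpy of solution: {dh_sol / 1000:.1f} kJ/mol")
```

A near-linear fit for a given material supports the assumption of a temperature-independent standard enthalpy of solution over the range considered.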
Significant molecular descriptors
Regarding the question of which sets of molecular descriptors yielded the most predictive models, it can be seen (Tables 2, 3) that the top models for enthalpy of solution or direct prediction of temperature dependent solubility, under different scenarios, were built using a variety of descriptor sets. Regarding the question of which individual molecular descriptors were found to be most important for the models, descriptor analysis suggested that no single molecular descriptor stood out as being consistently important, but the descriptors identified as most important could generally be rationalized in terms of the information they conveyed regarding the potential for specific kinds of solid state and/or solution state interactions.
Discussion of the main findings
The main findings from this work relate to the outcome of evaluating temperature dependent solubility models via a novel (CV = rt) cross-validation scheme and the effect of explicitly incorporating various kinds of solid state information into these models or models of the related enthalpy of solution endpoint. Here, we offer possible explanations for our findings and put them within the context of previous studies.
Standard cross-validation protocols can overestimate the performance of models of temperature dependent solubility
Our findings make clear that standard cross-validation protocols are likely to significantly overestimate the performance of models designed to estimate temperature dependent solubility for untested materials, by allowing solubility data points measured for the same material at slightly different temperatures to be placed in corresponding training and test sets. Of course, the standard scenario could still be a fair test for a model designed to interpolate solubility data, measured for a given material at a few temperatures, to other relevant temperatures. Nonetheless, we can recommend our modified (CV = rt) cross-validation scheme for scenarios in which the predictive performance of QSPR models for the temperature dependent solubility profiles of untested materials is being evaluated.
Solid state descriptors based on crystallographic information or melting point data never substantially improved the best models of temperature dependent solubility or the related enthalpy of solution endpoint
Our results suggest that incorporating the lattice energy descriptor, calculated from the assigned crystal structure, may improve predictive performance for both temperature dependent solubility related endpoints under some scenarios. However, no statistically significant findings were obtained to indicate that this descriptor improves the best predictions of enthalpy of solution or the best direct predictions of temperature dependent solubility. This remained the case when only models without either of the other solid state descriptors were considered.
The inclusion of crystallographic information in the form of crystal structure based 3D descriptors may genuinely improve predictions of enthalpy of solution. However, apparently statistically significant improvements in the best predictions, attributable to the incorporation of crystallographic information in this fashion, were only observed for one of the two enthalpy of solution datasets modelled. These descriptors never appeared to statistically significantly enhance the best direct predictions of temperature dependent solubility. This remained the case when the other solid state descriptors were removed.
We found that the inclusion of a melting point descriptor almost never appeared to yield statistically significant improvements in predictions of enthalpy of solution. Indeed, this descriptor was never observed to result in statistically significant improvement in the best enthalpy of solution predictions. This remained the case when only those models not incorporating any other solid state descriptors were considered. By contrast, we found that the inclusion of the melting point descriptor appeared to result in statistically significant performance enhancement for the best direct predictions of temperature dependent solubility. This was in keeping with the observation that the inclusion of the melting point descriptor often led to apparently statistically significant improvements in direct predictions of temperature dependent solubility.
However, even when apparently statistically significant improvements in the best predictions were observed, they were not substantial (Figs. 5, 6, 7 8, 9, 10). The failure of any of the solid state descriptors to substantially improve either the best direct predictions of temperature dependent solubility or the best predictions of the related enthalpy of solution endpoint, even when the other solid state descriptors were not included in the models, may be attributed to a variety of possible, non-mutually exclusive, explanations. (1) The variation in the endpoint data, for the investigated datasets, might be primarily dominated by variations in non-solid state contributions. (2) The solid state descriptors are insufficiently good at capturing the variation in solid state contributions. (3) The molecular descriptors, not including the crystal structure based 3D descriptors, implicitly capture the variation in solid state contributions to a considerable extent. Each of these possible explanations is considered in turn.
Do our findings reflect greater variation in non-solid state contributions for the modelled datasets?
If it were the case that non-solid state contributions to temperature dependent solubility, or the related enthalpy of solution, made a greater contribution to the variation in the solubility (or enthalpy) values for our datasets, this would suggest that being able to better capture the variations in solid state contributions would only lead to a modest improvement in predictive power for these endpoints. In practice, such a modest improvement might be sufficiently small to be deemed statistically insignificant. However, whether this could be expected to be the case for our modelled datasets is unclear. Recent experimental and computational studies have variously indicated that the variation in solubility across different public datasets was [26] and was not [48] dominated by non-solid state contributions. Moreover, we can only speculate on whether the relative importance of solid and non-solid state contributions to the variability in solubility suggested by these analyses is representative of the situation for the datasets studied in our work.
Do our findings reflect the limitations of the solid state descriptors?
It should be noted that there are two kinds of possible limitations to the ability of the solid state descriptors to capture solid state contributions to the modelled endpoints: (a) inherent limitations; (b) limitations arising from the possibility that the solid state descriptors were calculated or, in the case of melting point data, measured for the wrong polymorph. As discussed under "Integration with crystal structures", a very small proportion of endpoint data points were annotated with the assessed polymorph and, typically, the predicted most stable available crystal structure was selected. It has previously been suggested that it is not essential to calculate lattice energies from the correct polymorph in order to predict solubility [49] and computational studies suggest most polymorph energies differ by less than 7 kJ/mol [50]. Nonetheless, the experimental solubilities of polymorphs may differ by around 0.60 log units (log10[molar]) [51]. (Elsewhere, higher apparent solubility differences between polymorphs are reported, although it is suggested that these differences are typically less than 1.0 log units (log10[molar]) [52].) However, given that whether the solid form corresponding to the solid state descriptors differs from the polymorph corresponding to the solubility or enthalpy of solution data modelled in the current work is typically unknown, we are unable to assess the extent to which non-inherent limitations of the solid state descriptors affect our findings.
The lattice energy descriptor obtained a Pearson's correlation coefficient of 0.77 (one-tail p value = 10^−6) with the experimentally estimated lattice energies for the 27 SUB-48 [29] dataset entries which complied with our filtering criteria. This confirms that the force-field protocol used to compute the lattice energy descriptor was a reasonable choice. If this statistic was representative of the performance of the lattice energy descriptor for the modelled datasets, the lattice energy descriptor should significantly capture the solid state contribution to the temperature dependent solubility related endpoints. Since the SUB-48 dataset [29] was, like the datasets modelled in our work, a mixture of pharmaceutical APIs and general organic compounds, and since, as per most entries in our datasets, the calculated lowest energy crystal structure was assigned in the absence of polymorph specific information, this statistic could reasonably be expected to be representative of how the lattice energy descriptor would perform on the modelled datasets.
However, whilst experimental lattice energy estimates were not available for the modelled datasets, the correlation between the lattice energy descriptor and the available melting point data may be considered indicative of the extent to which the former captures the solid state contributions to the enthalpy of solution and temperature dependent solubility data. This can be seen from consideration of Eqs. (2), (3) [53] and (4), from which it can be expected that lattice energy and melting point are negatively correlated. (In Eqs. 3 and 4, \( T_{m} \) denotes melting point, \( \Delta H_{fus} \) and \( \Delta S_{fus} \) the enthalpy and entropy of fusion respectively, \( \Delta H_{sub} \) the sublimation enthalpy and \( \Delta H_{cond} \) the condensation enthalpy.)
$$ T_{m} = \frac{\Delta H_{fus}}{\Delta S_{fus}} \tag{3} $$
$$ \Delta H_{fus} = \Delta H_{sub} + \Delta H_{cond} \tag{4} $$
Hence, the weak negative correlations between the lattice energy descriptor and the melting point descriptor could suggest the lattice energy descriptor did not capture solid state contributions well for the entirety of these datasets (Fig. 11a–c). Moreover, the fact that these correlations were observed to increase when only the subset of entries for which crystal structure specific melting point data were available was considered (Fig. 11d–f) could further suggest that the lattice energy descriptor was not uniformly good at capturing solid state contributions for all entries in the modelled datasets. (Here, it should be noted that around 2% of the Avdeef_ExDPs_CS_True and Klimenko_CS_True crystal structures were disordered. However, this was only observed to have caused lattice energy calculation errors for one structure in the Avdeef_ExDPs_CS_True dataset. Full details are provided in Additional file 1.) The fact that those correlations were negligibly changed when the actual crystal structure specific melting points, retrieved from the CSD (version 5.38) using the CSD Python API [54], were used (Fig. 11g–i) suggests that the poor correlations observed between the lattice energy descriptor and the melting point descriptor across the entirety of the modelled datasets did not reflect the fact that the melting point data used for the latter descriptor may not have corresponded to the polymorph for which the lattice energy descriptor was calculated.
Fig. 11 Correlation, in terms of the Pearson correlation coefficients (r) and one-tail p values (p), between all N corresponding pairs of lattice energy (LE) descriptor values and melting point (MP) values, from different sources, for different datasets: a Klimenko_CS_True dataset, melting point descriptor values (N = 129, r = −0.29, p = 0.00); b Avdeef_ExDPs_CS_True dataset, melting point descriptor values (N = 169, r = −0.15, p = 0.02); c Avdeef_ExDPs_Cal_CS_True dataset, melting point descriptor values (N = 30, r = −0.24, p = 0.10); d Klimenko_CS_True subset with CSD melting point data, melting point descriptor values (N = 17, r = −0.39, p = 0.06); e Avdeef_ExDPs_CS_True subset with CSD melting point data, melting point descriptor values (N = 22, r = −0.61, p = 0.00); f Avdeef_ExDPs_Cal_CS_True subset with CSD melting point data, melting point descriptor values (N = 5, r = −0.48, p = 0.21); g Klimenko_CS_True subset with CSD melting point data, CSD melting point data (N = 17, r = −0.37, p = 0.07); h Avdeef_ExDPs_CS_True subset with CSD melting point data, CSD melting point data (N = 22, r = −0.60, p = 0.00); i Avdeef_ExDPs_Cal_CS_True subset with CSD melting point data, CSD melting point data (N = 5, r = −0.52, p = 0.19). N.B. (1) These one-tail p values denote the probability of getting as negative a correlation coefficient as observed, by chance, given the null hypothesis of zero correlation. (2) In order to make the plot legible, one outlier (CSD refcode TEPHTH13, calculated lattice energy −2735.71 kcal/mol) was excluded from plot (b). (3) Where a range of melting points was retrieved for the specific crystal structure from the CSD, the mean value was used.
Nonetheless, it should be noted that an imperfect correlation would still be expected between the lattice energy descriptor and the crystal structure specific melting point, even if the former corresponded perfectly to the true lattice energy and there were no experimental errors in the latter. Melting point will be imperfectly correlated with the enthalpy of fusion, due to the variation in the entropy of fusion across materials (Eq. 3). The enthalpy of fusion, in turn, will be imperfectly correlated with the enthalpy of sublimation, due to the variation in the condensation enthalpy across materials (Eq. 4), and the latter, in turn, is only linearly related to the lattice energy under certain assumptions [29] and at constant temperature (Eq. 2). Hence, also taking into account the possibility that the melting point descriptor may correspond to a different polymorph, the poor correlation of the lattice energy descriptor with the melting point descriptor across the modelled datasets may overstate the extent to which the lattice energy descriptor fails to capture the solid state contribution to the modelled endpoints.
The fact that the melting point descriptor did not necessarily correspond to the experimental melting point for the polymorph for which enthalpy of solution or temperature dependent solubility data were available may have contributed to this descriptor failing to fully capture the solid state contribution to the modelled endpoints. Consideration of the previously mentioned subset of dataset entries for which crystal structure specific melting point data were retrieved from the CSD indicates that melting point data for the same chemical can differ significantly in some cases. (Full details are provided in Additional file 3. Here, it should also be noted that we cannot be certain that these discrepancies in melting point data from different data sources necessarily reflected melting point differences between polymorphs.) For the Klimenko_CS_True dataset, across all 17 pairs of corresponding CSD retrieved melting points and melting point descriptor values, the median absolute deviation was 1.5 K, the 95th percentile value was 41.4 K and the maximum value was 85 K. The corresponding statistics for the 22 (5) pairs of melting point values obtained for the Avdeef_ExDPs_CS_True (Avdeef_ExDPs_Cal_CS_True) dataset were as follows: median = 1.75 (2) K, 95th percentile = 35.9 (12.5) K, maximum = 52.2 (14.2) K.
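A minimal sketch of how such deviation statistics can be computed (the paired melting points below are invented placeholders, not values from the modelled datasets):

```python
import numpy as np

# Invented paired melting points (K) for the same chemicals from two sources:
# the melting point descriptor and the CSD crystal-structure-specific value.
mp_descriptor = np.array([398.0, 455.5, 341.2, 512.0, 402.5])
mp_csd = np.array([399.5, 454.0, 382.6, 511.0, 487.5])

abs_dev = np.abs(mp_descriptor - mp_csd)
print(f"median = {np.median(abs_dev):.2f} K, "
      f"95th percentile = {np.percentile(abs_dev, 95):.1f} K, "
      f"max = {abs_dev.max():.1f} K")
```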
Finally, regarding the inherent limitations of the solid state descriptors considered in this work, the crystal structure based 3D descriptors would not be able to fully capture all information relevant to lattice interactions. These CPSA descriptors [39,40,41] capture information related to polar and non-polar intermolecular forces, based on electrostatic distributions at the molecular surface, and a CPSA descriptor, calculated from the molecular structure via a conformer generator, was amongst those making a significant contribution to the enthalpy of sublimation model of Salahinejad et al. [53]. However, these descriptors would have been unable to account for dispersion forces or to properly capture localized interactions such as hydrogen bonding. Indeed, they are reported to be only weakly dependent on molecular conformation [39, 40].
Future studies should consider computing additional 3D descriptors, based upon the crystal structure, which more fully account for those interactions. Ideally, these would explicitly take account of the lattice structure. This would arguably result in them better taking account of the actual solid state intermolecular interactions than crystal structure based 3D molecular descriptors, which can only implicitly take account of that information. However, as per our current work, if 3D molecular descriptors were computed using the crystal structure, they would need to be benchmarked against the performance of the same 3D descriptors computed without knowledge of the crystal structure, using a conformer generator.
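To illustrate the crystal-structure-free baseline mentioned above, the sketch below generates a single conformer with RDKit's ETKDG distance-geometry method [44] followed by a UFF [46] clean-up; the molecule is an arbitrary example, and this is not the descriptor pipeline used in this work:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Arbitrary example molecule (aspirin); not taken from the modelled datasets.
mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))

# Crystal-structure-free 3D geometry: ETKDG embedding plus a UFF clean-up.
AllChem.EmbedMolecule(mol, AllChem.ETKDG())
AllChem.UFFOptimizeMolecule(mol)

# 3D descriptors computed from this conformer can then be benchmarked against
# the same descriptors computed from the experimental crystal conformation.
print(Chem.MolToMolBlock(mol).splitlines()[3])  # atom/bond count line
```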
Do our findings reflect the ability of the molecular descriptors to implicitly capture solid state contributions?
It should be acknowledged that molecular descriptors cannot capture variations in solid state contributions arising from polymorphism. Hence, decent solid state descriptors, calculated or measured for the solid form for which the endpoint data were measured, would be expected to add additional information that molecular descriptors cannot capture. However, as previously discussed, the experimentally assessed polymorph was not typically known for the modelled datasets.
Under this scenario, molecular descriptors may be just as good at capturing solid state contributions to the enthalpy of solution or temperature dependent solubility as the solid state descriptors. The extent to which molecular descriptors can capture solid state structural information may be illustrated by the reasonable quality of models built for the lattice energy descriptor using the combined set of 2D molecular descriptors and random forest for the Klimenko_CS_True dataset. The cross-validated mean R2 was 0.60 ± 0.01 (standard error of the mean).
The lower performance for the Avdeef_ExDPs_Cal_CS_True dataset (mean R2 = 0.40 ± 0.04) may reflect the fewer data points available for training. However, the poor cross-validation results obtained for the Avdeef_ExDPs_CS_True dataset (mean R2 = − 172.20 ± 26.56) are surprising. This arguably reflects the presence of some problematic instances distorting the results. For three out of five cross-validation repetitions, all R2 values were negative whilst, for two repetitions, a single fold yielded R2 values between 0.70 and 0.78. Nine instances were identified with absolute predictions errors greater than 50 kcal/mol. (These instances are identified in Additional file 1. Other than the fact that one of them, CSD refcode TEPHTH13, was an extreme lattice energy outlier of − 2735.71 kcal/mol, there was no obvious reason for them being prediction outliers.) When the modelling results were generated again without these instances, a much better mean R2 (0.76 ± 0.01) was obtained.
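For orientation, repeated cross-validation statistics of the kind quoted above (a mean R2 with a standard error over repetitions) can be sketched as follows; the data, model settings and fold design here are synthetic placeholders, not those used in this work:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(129, 50))                  # synthetic descriptor matrix
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=129)  # synthetic target

rep_means = []
for rep in range(5):                            # five cross-validation repetitions
    cv = KFold(n_splits=5, shuffle=True, random_state=rep)
    scores = cross_val_score(RandomForestRegressor(random_state=0),
                             X, y, cv=cv, scoring="r2")
    rep_means.append(scores.mean())
print(f"mean R2 = {np.mean(rep_means):.2f} "
      f"+/- {np.std(rep_means) / np.sqrt(len(rep_means)):.2f} (SEM)")
```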
Descriptor importance analysis suggested partial consistency between the most important molecular descriptors, for predictions of calculated lattice energies, across all datasets. The common most important descriptors are expected to have a close link to solid state intermolecular interactions. Further details are provided in Additional file 1.
How do our findings regarding the importance of solid state descriptors compare to previous modelling studies of related endpoints?
Our findings regarding the effect of incorporating the crystal structure derived descriptors (calculated lattice energy or 3D descriptors) and/or the melting point descriptor into our models, for both temperature dependent solubility and the related enthalpy of solution endpoint, should be seen in the context of the wider debate in the recent literature regarding the importance of explicitly representing solid state contributions in models of aqueous solubility and the extent to which molecular descriptors can capture solid state contributions [3, 26,27,28,29,30]. Emami et al. [27] found that two parameter QSPR models for aqueous solubility incorporating experimental melting point as a descriptor did not perform better than two parameter models based solely on molecular descriptors. (However, their results did suggest that a two parameter model incorporating the entropy of melting outperformed molecular descriptor models.) This is consistent with our finding that the inclusion of a melting point descriptor within molecular descriptor based models was not responsible for statistically significant improvements in the best predictions of the related enthalpy of solution endpoint. However, it is possibly at odds with our finding that melting point data did result in an apparently statistically significant improvement in the best direct predictions of temperature dependent solubility. Nonetheless, it should be remembered that even these apparently statistically significant improvements were not substantial.
In keeping with our finding that incorporating a lattice energy descriptor did not lead to a statistically significant improvement in the best model for temperature dependent aqueous solubility or the related enthalpy of solution, Salahinejad et al. [3] found that the availability of lattice energy or sublimation enthalpy descriptors did not significantly improve models of aqueous solubility. However, whilst those authors [3] used sublimation enthalpies, converted to lattice energies as per Eq. (2), estimated from molecular structure [53], our own analysis was based on lattice energies estimated from crystal structures. The fact that we obtained similar findings might suggest their results were not an artefact of failing to incorporate crystallographic information into their models. On the other hand, the fact that we needed to assign a nominal crystal structure in many cases, due to the few data points associated with polymorph information in our dataset, might be a contributory factor to this finding.
In contrast to our findings, McDonagh et al. [30] suggested that random forest models of aqueous solubility were statistically significantly improved upon adding theoretical descriptors, calculated in part from crystal structures assigned using a similar protocol to our own, to molecular descriptors. However, it should be noted that the theoretical descriptors calculated by these authors [30] were a combination of solid state energetic contributions, calculated from crystal structures using the DMACRYS program [55], and non-solid state contributions, computed using Hartree–Fock or M06-2X calculations, and they only reported statistically significant improvements in their models when those theoretical descriptors were calculated using M06-2X calculations. This suggests that the theoretical descriptors capturing the non-solid state contributions may have been most important here. (It should also be noted that their assessment of statistically significant differences was not identical to the protocol employed herein.) Hence, their findings are not necessarily at odds with our own observation that incorporating lattice energy descriptors, calculated from crystal structures, does not statistically significantly improve the best QSPR models of temperature dependent aqueous solubility or the related enthalpy of solution.
As regards our suggestion that this finding reflects, in part, the ability of molecular descriptors to serve, to a considerable degree, as proxies for solid state contributions, various recent studies have considered the extent to which molecular descriptors can capture solid state contributions to solubility [26, 28, 29, 53]. Both Salahinejad et al. [53] and Docherty et al. [26] reported that molecular descriptor based QSPR models could capture most of the variation in enthalpy of sublimation data for diverse organic compounds, with test set R2 values > 0.90. However, Abramov [28] recently suggested that the failure of molecular descriptors to fully capture solid state contributions was the major limiting factor in the prediction of aqueous solubility using QSPR methods and that the good performance reported for molecular models of enthalpy of sublimation could represent their ability to capture short range molecular interactions in the solid state, as opposed to long range interactions within the crystal. Furthermore, even for enthalpy of sublimation, McDonagh et al. [29] found that QSPR models built using theoretical chemistry descriptors, calculated from crystal structures, appeared substantially more predictive than models built using molecular descriptors, albeit for the relatively small SUB-48 dataset.
Hence, these earlier studies support the hypothesis that incorporating crystallographic information should be able to capture solid state contributions to solubility and its temperature dependence better than simply using molecular descriptors. This may be reflected in the fact that our results offer some evidence that the incorporation of this information, in the form of crystal based 3D molecular descriptors, may genuinely improve the best QSPR models of the related enthalpy of solution term. However, the fact that our results do not suggest statistically significant improvement, upon incorporating calculated lattice energies, in the best predictions of either temperature dependent solubility related endpoint could well reflect, in part, the limited extent to which our lattice energy calculations capture solid state contributions above and beyond the degree to which these are captured by molecular descriptors. In part, this may reflect a greater discrepancy between the polymorphs for which endpoint data were available and for which the lattice energies were calculated, compared to some previous studies, albeit McDonagh et al. [29, 30] were obliged to handle missing polymorph data in much the same way as in our current work. It may also reflect, as suggested by our correlation analyses of melting point data and calculated lattice energies (Fig. 11), variations in the performance of the lattice energy calculations for different subsets of the modelled datasets.
In this work, we have built upon the few QSPR studies published to date which have explored the prediction of temperature dependent solubility. Specifically, we have extended previous work looking at modelling the enthalpy of solution, which can be related to temperature dependent solubility via the van't Hoff relationship, and at direct prediction of temperature dependent solubility for aqueous solutions. We built upon these earlier studies by investigating the following factors: (a) the incorporation of crystallographic information, in the form of lattice energies or 3D descriptors calculated from crystal structures, into the models; (b) the effect of adding versus excluding melting point data from the models; (c) a larger variety of molecular descriptor permutations; (d) the use of feature selection to produce parsimonious models; (e) a novel pseudo-cross-validation protocol.
All the different descriptors of solid state contributions (crystal structure calculated lattice energies, crystal structure based 3D descriptors, melting point data) were indicated to improve the models for at least one of the modelled endpoints under some scenarios. However, none of these descriptors was responsible for any substantial improvement in the best direct predictions of temperature dependent solubility or the best predictions of the related enthalpy of solution endpoint. This remained the case when the effect of one kind of solid state descriptor was considered in isolation. This finding is noteworthy and surprising, since the importance of the solid state contribution to both endpoints is clear from the underlying thermodynamics and, since a variety of solid state arrangements are possible for the same molecular structure, molecular descriptors are unlikely to fully capture this contribution. Indeed, it has recently been suggested [28] that the major source of error in QSPR prediction of solubility is the failure of molecular descriptors to fully capture solid state contributions. This finding may, in part, reflect limitations in the calculated 3D descriptors and lattice energies. In the case of the lattice energies, correlation analysis with melting point data suggests that the quality of the lattice energy calculations varies across different dataset entries. Our findings may also, in part, reflect the limited availability of polymorph metadata. Both of these factors may have led to molecular descriptors implicitly capturing solid state contributions to the modelled endpoints comparably to the solid state descriptors for at least some of the scenarios considered, limiting the value added by incorporating the solid state descriptors.
Our best modelling results were typically comparable to those previously reported in the literature, albeit we cannot claim to have performed a perfectly like-for-like comparison, partly due to refinements we made to the previously modelled datasets. We found that feature selection, as applied in our work, never improved the best modelling results.
Finally, we found that, for direct prediction of temperature dependent solubility data, standard cross-validation protocols tend to overestimate the performance of models designed to predict temperature dependent solubility for untested materials, by allowing solubility data points measured for the same material at slightly different temperatures to be placed in corresponding training and test sets. Hence, we recommend the use of our novel pseudo-cross-validation protocol, which avoids including data points measured for the same material at different temperatures in corresponding training and test sets.
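As an illustration of the grouping idea behind this protocol (a generic sketch using scikit-learn's GroupKFold, offered for orientation rather than as our exact implementation; all names and values below are invented):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Hypothetical solubility records: one row per (material, temperature) pair.
materials = np.array(["aspirin", "aspirin", "urea", "urea", "caffeine", "caffeine"])
X = np.arange(12, dtype=float).reshape(6, 2)   # placeholder descriptor matrix
y = np.linspace(-3.0, -1.0, 6)                 # placeholder log-solubility values

# Grouping by material keeps all temperature points for one compound in the
# same fold, so no material appears in both training and test sets.
for train_idx, test_idx in GroupKFold(n_splits=3).split(X, y, groups=materials):
    assert set(materials[train_idx]).isdisjoint(materials[test_idx])
    print("test materials:", sorted(set(materials[test_idx])))
```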
RFR:
random forest regression
MLR:
multiple linear regression
API:
active pharmaceutical ingredient
Python API:
Python Application Programming Interface
Skyner RE, McDonagh JL, Groom CR et al (2015) A review of methods for the calculation of solution free energies and the modelling of systems in solution. Phys Chem Chem Phys 17:6174–6191. https://doi.org/10.1039/C5CP00288E
Palmer DS, Mitchell JBO (2014) Is experimental data quality the limiting factor in predicting the aqueous solubility of druglike molecules? Mol Pharm 11:2962–2972. https://doi.org/10.1021/mp500103r
Salahinejad M, Le TC, Winkler DA (2013) Aqueous solubility prediction: do crystal lattice interactions help? Mol Pharm 10:2757–2766. https://doi.org/10.1021/mp4001958
Muller FL, Fielding M, Black S (2009) A practical approach for using solubility to design cooling crystallisations. Org Process Res Dev 13:1315–1321. https://doi.org/10.1021/op9001438
Kristensen HG, Schaefer T (1987) Granulation: a review on pharmaceutical wet-granulation. Drug Dev Ind Pharm 13:803–872. https://doi.org/10.3109/03639048709105217
Pharmaceutical Granulation|Ensuring Better Control of Granulation|Pharmaceutical Manufacturing. http://www.pharmamanufacturing.com/articles/2008/096/. Accessed 13 June 2017
Papaioannou V, Lafitte T, Avendaño C et al (2014) Group contribution methodology based on the statistical associating fluid theory for heteronuclear molecules formed from Mie segments. J Chem Phys 140:54107. https://doi.org/10.1063/1.4851455
Klamt A (2011) The COSMO and COSMO-RS solvation models. WIREs Comput Mol Sci 1:699–709. https://doi.org/10.1002/wcms.56
Klamt A, Schüürmann G (1993) COSMO: a new approach to dielectric screening in solvents with explicit expressions for the screening energy and its gradient. J Chem Soc, Perkin Trans 2:799–805. https://doi.org/10.1039/P29930000799
Palmer DS, McDonagh JL, Mitchell JBO et al (2012) First-principles calculation of the intrinsic aqueous solubility of crystalline druglike molecules. J Chem Theory Comput 8:3322–3337. https://doi.org/10.1021/ct300345m
Misin M, Fedorov MV, Palmer DS (2015) Communication: accurate hydration free energies at a wide range of temperatures from 3D-RISM. J Chem Phys 142:91105. https://doi.org/10.1063/1.4914315
PSE: Events—Webinars—1609 gSAFT Series. https://www.psenterprise.com/events/webinars/2016/16-09-gsaft-series. Accessed 13 June 2017
Kholod YA, Gryn'ova G, Gorb L et al (2011) Evaluation of the dependence of aqueous solubility of nitro compounds on temperature and salinity: a COSMO-RS simulation. Chemosphere 83:287–294. https://doi.org/10.1016/j.chemosphere.2010.12.065
Klimenko K, Kuz'min V, Ognichenko L et al (2016) Novel enhanced applications of QSPR models: temperature dependence of aqueous solubility. J Comput Chem 37:2045–2051. https://doi.org/10.1002/jcc.24424
Khayamian T, Esteki M (2004) Prediction of solubility for polycyclic aromatic hydrocarbons in supercritical carbon dioxide using wavelet neural networks in quantitative structure property relationship. J Supercrit Fluids 32:73–78. https://doi.org/10.1016/j.supflu.2004.02.003
Tabaraki R, Khayamian T, Ensafi AA (2006) Wavelet neural network modeling in QSPR for prediction of solubility of 25 anthraquinone dyes at different temperatures and pressures in supercritical carbon dioxide. J Mol Graph Model 25:46–54. https://doi.org/10.1016/j.jmgm.2005.10.012
Avdeef A (2015) Solubility temperature dependence predicted from 2D structure. ADMET & DMPK. https://doi.org/10.5599/admet.3.4.259
Rosbottom I, Ma CY, Turner TD et al (2017) Influence of solvent composition on the crystal morphology and structure of p-Aminobenzoic acid crystallized from mixed ethanol and nitromethane solutions. Cryst Growth Des 17:4151–4161. https://doi.org/10.1021/acs.cgd.7b00425
Prankerd RJ (1992) Solid-state properties of drugs. I. Estimation of heat capacities for fusion and thermodynamic functions for solution from aqueous solubility-temperature dependence measurements. Int J Pharm 84:233–244. https://doi.org/10.1016/0378-5173(92)90161-T
Prankerd RJ, McKeown RH (1990) Physico-chemical properties of barbituric acid derivatives Part I. Solubility-temperature dependence for 5,5-disubstituted barbituric acids in aqueous solutions. Int J Pharm 62:37–52. https://doi.org/10.1016/0378-5173(90)90029-4
Jensen F (2007) Chapter 17: statistics and QSAR. In: Introduction to computational chemistry, 2nd edn. Wiley, New York, pp 547–561
Strobl C, Malley J, Tutz G (2009) An introduction to recursive partitioning: rationale, application and characteristics of classification and regression trees, bagging and random forests. Psychol Methods 14:323–348. https://doi.org/10.1037/a0016973
Svetnik V, Liaw A, Tong C et al (2003) Random forest: a classification and regression tool for compound classification and QSAR modeling. J Chem Inf Comput Sci 43:1947–1958. https://doi.org/10.1021/ci034160g
Breiman L (2001) Random forests. Mach Learn 45:5–32. https://doi.org/10.1023/A:1010933404324
Lang AS, Bradley J-C QDB archive #104. QsarDB repository. http://dx.doi.org/10.15152/QDB.104. Accessed 20 July 2017
Docherty R, Pencheva K, Abramov YA (2015) Low solubility in drug development: de-convoluting the relative importance of solvation and crystal packing. J Pharm Pharmacol 67:847–856. https://doi.org/10.1111/jphp.12393
Emami S, Jouyban A, Valizadeh H, Shayanfar A (2015) Are crystallinity parameters critical for drug solubility prediction? J Solut Chem 44:2297–2315. https://doi.org/10.1007/s10953-015-0410-5
Abramov YA (2015) Major source of error in QSPR prediction of intrinsic thermodynamic solubility of drugs: solid vs nonsolid state contributions? Mol Pharm 12:2126–2141. https://doi.org/10.1021/acs.molpharmaceut.5b00119
McDonagh JL, Palmer DS, van Mourik T, Mitchell JBO (2016) Are the sublimation thermodynamics of organic molecules predictable? J Chem Inf Model 56:2162–2179. https://doi.org/10.1021/acs.jcim.6b00033
McDonagh JL, Nath N, De Ferrari L et al (2014) Uniting cheminformatics and chemical theory to predict the intrinsic aqueous solubility of crystalline druglike molecules. J Chem Inf Model 54:844–856. https://doi.org/10.1021/ci4005805
NCI/CADD Chemical Identifier Resolver. https://cactus.nci.nih.gov/chemical/structure. Accessed 21 July 2017
ChemSpider | Search and share chemistry. http://www.chemspider.com/. Accessed 21 July 2017
Kim S, Thiessen PA, Bolton EE et al (2016) PubChem substance and compound databases. Nucleic Acids Res 44:D1202–D1213. https://doi.org/10.1093/nar/gkv951
The PubChem Project. http://pubchem.ncbi.nlm.nih.gov/. Accessed 24 Nov 2011
Groom CR, Bruno IJ, Lightfoot MP, Ward SC (2016) The Cambridge structural database. Acta Crystallogr Sect B 72:171–179. https://doi.org/10.1107/S2052520616003954
Sun H (1998) COMPASS: an ab initio force-field optimized for condensed-phase applications overview with details on alkane and benzene compounds. J Phys Chem B 102:7338–7364. https://doi.org/10.1021/jp980939v
Sun H, Ren P, Fried JR (1998) The COMPASS force field: parameterization and validation for phosphazenes. Comput Theor Polym Sci 8:229–246. https://doi.org/10.1016/S1089-3156(98)00042-7
Rigby D, Sun H, Eichinger BE (1997) Computer simulations of poly(ethylene oxide): force field, PVT diagram and cyclization behaviour. Polym Int 44:311–330. https://doi.org/10.1002/(SICI)1097-0126(199711)44:3%3c311:AID-PI880%3e3.0.CO;2-H
Stanton DT, Jurs PC (1990) Development and use of charged partial surface area structural descriptors in computer-assisted quantitative structure-property relationship studies. Anal Chem 62:2323–2329. https://doi.org/10.1021/ac00220a013
Stanton DT, Dimitrov S, Grancharov V, Mekenyan OG (2002) Charged partial surface area (CPSA) descriptors QSAR applications. SAR QSAR Environ Res 13:341–351. https://doi.org/10.1080/10629360290002811
Mordred CPSA Module Documentation. http://mordred-descriptor.github.io/documentation/master/api/mordred.CPSA.html. Accessed 11 June 2018
Moriwaki H, Tian Y-S, Kawashita N, Takagi T (2018) Mordred: a molecular descriptor calculator. J Cheminform 10:4. https://doi.org/10.1186/s13321-018-0258-y
(2018) Mordred: a molecular descriptor calculator (version 1.0.0). https://github.com/mordred-descriptor/mordred
Riniker S, Landrum GA (2015) Better informed distance geometry: using what we know to improve conformation generation. J Chem Inf Model 55:2562–2574. https://doi.org/10.1021/acs.jcim.5b00654
RDKit (version 2017.03.1). http://www.rdkit.org/. Accessed 25 July 2017
Rappe AK, Casewit CJ, Colwell KS et al (1992) UFF, a full periodic table force field for molecular mechanics and molecular dynamics simulations. J Am Chem Soc 114:10024–10035. https://doi.org/10.1021/ja00051a040
Alexander DLJ, Tropsha A, Winkler DA (2015) Beware of R2: simple, unambiguous assessment of the prediction accuracy of QSAR and QSPR models. J Chem Inf Model 55:1316–1322. https://doi.org/10.1021/acs.jcim.5b00206
Perlovich GL (2016) Poorly soluble drugs: disbalance of thermodynamic characteristics of crystal lattice and solvation. RSC Adv 6:77870–77886. https://doi.org/10.1039/C6RA14333D
Palmer DS, Llinàs A, Morao I et al (2008) Predicting intrinsic aqueous solubility by a thermodynamic cycle. Mol Pharm 5:266–279. https://doi.org/10.1021/mp7000878
Nyman J, Day GM (2015) Static and lattice vibrational energy differences between polymorphs. CrystEngComm 17:5154–5165. https://doi.org/10.1039/C5CE00045A
Abramov YA, Pencheva K (2010) Thermodynamics and relative solubility prediction of polymorphic systems. In: Ende DJ (ed) Chemical engineering in the pharmaceutical industry. Wiley, New York, pp 477–490
Huang L-F, Tong W-Q (2004) Impact of solid state properties on developability assessment of drug candidates. Adv Drug Deliv Rev 56:321–334. https://doi.org/10.1016/j.addr.2003.10.007
Salahinejad M, Le TC, Winkler DA (2013) Capturing the crystal: prediction of enthalpy of sublimation, crystal lattice energy, and melting points of organic compounds. J Chem Inf Model 53:223–229. https://doi.org/10.1021/ci3005012
CSD Python API (version 1.3.0). Quick primer to using the CSD Python API. https://downloads.ccdc.cam.ac.uk/documentation/API/descriptive_docs/primer.html. Accessed 24 July 2017
Price SL, Leslie MA, Welch GW et al (2010) Modelling organic crystal structures using distributed multipole and polarizability-based model intermolecular potentials. Phys Chem Chem Phys 12:8478–8490. https://doi.org/10.1039/c004164e
The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript. K.R. proposed investigating temperature dependent solubility. R.L.M.R. prepared the curated datasets, designed the modelling and analysis workflows in consultation with E.M., wrote the "QSPR Data Processing Tools Suite", all Python and R scripts, the final version of the Perl script for lattice energies, the final version of the "Feature Selection Tool" and the first draft of the paper. All authors read and approved the final manuscript.
Dr. Alex Avdeef (in-ADME Research) and Dr. Kyrylo Klimenko (Université de Strasbourg) are thanked for very helpful correspondence regarding their published work. Dr. Anatoly Artemenko (O.V. Bogatsky Physico-Chemical Institute National Academy of Science of Ukraine) is thanked for providing a copy of the HiT-QSAR software. Dr. James McDonagh (IBM research UK) and Dr. John Mitchell (University of St. Andrews) are thanked for providing a copy of the SUB-48 dataset in electronic form. Dr. Andrew Maloney (Cambridge Crystallographic Data Centre) is thanked for providing guidance on identifying CSD entries with one molecule in the asymmetric unit and for assisting with the identification of racemic crystals in the CSD. Jakub Janowiak (University of Leeds) is thanked for providing assistance with the curation of polymorph descriptions and melting point data from the Handbook of Aqueous Solubility (2nd edition). Alex Moldovan (University of Leeds) is thanked for providing assistance with scripting for Materials Studio. Dr. Ernest Chow (Pfizer), Dr. Klimentina Pencheva (Pfizer), Dr. Jian-Jie Liang (Biovia) and Dr. Carsten Menke (Biovia) are thanked for guidance regarding the use of Materials Studio. Dr. Rebecca Mackenzie (Science and Technology Facilities Council) is thanked for writing feature selection code which was adapted for use in the current work. Chris Morris (Science and Technology Facilities Council) is thanked for his input to the design of the feature selection algorithm and for useful discussions which improved the analysis and presentation of results in this manuscript. Dr. Dawn Geatches (Science and Technology Facilities Council) is thanked for her contribution to scripting. We also gratefully acknowledge the funding provided from the 'Advanced Digital Design of Pharmaceutical Therapeutics' (ADDoPT) project, funded by the UK's Advanced Manufacturing Supply Chain Initiative (AMSCI).
The datasets supporting the conclusions of this article are included within the article and its Additional files. The curated datasets, QSPR ready datasets containing descriptor values, the "QSPR Data Processing Tools Suite" and "Feature Selection Tool", along with all relevant scripts required to reproduce our results, have been made available as ZIP archives under "Additional files" for this article. As documented in the LICENSE.txt file assigned within each archive, all datasets have been made available under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0/uk/) and all source code files are made available under the terms of the BSD Open Source license (https://opensource.org/licenses/BSD-3-Clause). In addition, an automated test suite for the "QSPR Data Processing Tools Suite" has been made available under the terms of the GPL Open Source license (https://www.gnu.org/licenses/gpl-3.0.en.html). Adaptations of SMARTS patterns taken from the Rdkit [56] online documentation are made available under the terms of the Creative Commons Attribution-ShareAlike License (http://creativecommons.org/licenses/by-sa/4.0/).
School of Chemical and Process Engineering, University of Leeds, Leeds, LS2 9JT, UK
Richard L. Marchese Robinson, Kevin J. Roberts & Elaine B. Martin
Correspondence to Elaine B. Martin.
Extensions of the Methods and Data section (section A), step-by-step instructions for reproducing our results using the datasets and source code we have made available (section B) and detailed comparisons to results reported in the literature, along with further details regarding our results (Section C).
All QSPR ready datasets, prior to feature selection, associated with all considered sets of descriptors. N.B. In the case of the temperature (T) values, these were replaced with (1/T) using additional scripts prior to both feature selection and modelling.
Additional results files, in electronic format. These additional results are (a) SUB-48 calculated lattice energies for the complete set and filtered set of 27 crystal structures, (b) results from analysis of the correspondence between our temperature dependent solubility data and (1/T), (c) Excel workbooks documenting all R2 and RMSE values obtained from cross-validation, their mean values and the p values (both raw and adjusted) obtained from pairwise comparisons of corresponding models, (d) comparison of melting point data used for the melting point descriptor and retrieved from the CSD for linked refcodes.
The "QSPR Data Processing Tools Suite", used by the scripts employed to generate all QSPR results.
An automated test suite for "QSPR Data Processing Tools Suite".
The "Feature Selection Tool".
The Perl script used for Materials Studio lattice energy calculations, along with the Python scripts used to generate CIF input files from CSD refcodes, including filtering, as well as the Python scripts used for evaluation of the lattice energy protocol on the SUB-48 dataset.
The curated temperature dependent solubility and enthalpy of solution datasets, including the calculated lattice energies, and the SUB-48 dataset.
All scripts used to generate QSPR input files, modelling results and perform analysis.
SMARTS patterns used by one of the QSPR input file scripts (see Additional file 1), which were adapted from the Rdkit documentation [56].
Marchese Robinson, R.L., Roberts, K.J. & Martin, E.B. The influence of solid state information and descriptor selection on statistical models of temperature dependent aqueous solubility. J Cheminform 10, 44 (2018). https://doi.org/10.1186/s13321-018-0298-3
Quantitative structure–property relationships
Temperature dependent solubility data
Enthalpy of solution
Lattice energy
F - Feynman
Time limit: 3 s
Memory limit: 256 MiB
Languages: C, C++, Java, Pascal, ... (details)
Richard Phillips Feynman was a well known American physicist and a recipient of the Nobel Prize in Physics. He worked in theoretical physics and also pioneered the field of quantum computing. He visited South America for ten months, giving lectures and enjoying life in the tropics. He is also known for his books "Surely You're Joking, Mr. Feynman!" and "What Do You Care What Other People Think?", which include some of his adventures below the equator.
His life-long addiction was solving and making puzzles, locks, and cyphers. Recently, an old farmer in South America, who was a host to the young physicist in $1949$, found some papers and notes that are believed to have belonged to Feynman. Among notes about mesons and electromagnetism, there was a napkin on which he wrote a simple puzzle: "how many different squares are there in a grid of $N \times N$ squares?".
On the same napkin there was a drawing showing that, for $N = 2$, the answer is $5$.
The input contains several test cases. Each test case is composed of a single line, containing only one integer $N$, representing the number of squares in each side of the grid $(1 \leq N \leq 100)$.
The end of input is indicated by a line containing only one zero.
The input must be read from standard input.
For each test case in the input, your program must print a single line, containing the number of different squares for the corresponding input.
The output must be written to standard output.
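The judge lists C, C++, Java and Pascal, but the count itself reduces to a closed form: a $k \times k$ square can be placed in $(N - k + 1)^2$ positions, and summing $(N - k + 1)^2$ over $k = 1, \dots, N$ gives $\sum_{j=1}^{N} j^2 = N(N+1)(2N+1)/6$ (for $N = 2$: $4 + 1 = 5$, matching the napkin). A short sketch of a solution, shown here in Python purely for illustration:

```python
import sys

def count_squares(n: int) -> int:
    # A k x k square fits in (n - k + 1)^2 positions, so the total is
    # sum_{j=1}^{n} j^2 = n(n + 1)(2n + 1) / 6.
    return n * (n + 1) * (2 * n + 1) // 6

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    n = int(line)
    if n == 0:          # a single zero terminates the input
        break
    print(count_squares(n))
```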
Sample test(s) | CommonCrawl |
December 2012, 7(4): 805-836. doi: 10.3934/nhm.2012.7.805
Small populations corrections for selection-mutation models
Pierre-Emmanuel Jabin
CSCAMM and Department of Mathematics, University of Maryland, College Park, MD 20742-4015, United States
Received March 2012 Revised August 2012 Published December 2012
We consider integro-differential models describing the evolution of a population structured by a quantitative trait. Individuals interact competitively, creating a strong selection pressure on the population. On the other hand, mutations are assumed to be small. Following the formalism of [20], this creates concentration phenomena, typically consisting in a sum of Dirac masses slowly evolving in time. We propose a modification to those classical models that takes the effect of small populations into account and corrects some abnormal behaviours.
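For orientation, the constrained Hamilton-Jacobi problems referred to here typically take the following schematic form in the small-mutation limit (this is only a sketch of the formalism of [20] and related works cited below, with notation varying between papers, not the precise system studied in this article):

$$\partial_t u = |\nabla_x u|^2 + R(x, I(t)), \qquad \max_{x} u(t,\cdot) = 0 \quad \text{for all } t,$$

where $R$ is the trait-dependent net growth rate, $I(t)$ the competition term, and the population concentrates on Dirac masses located at the points where the constrained maximum is attained.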
Keywords: Hamilton-Jacobi equation with constraints, adaptive dynamics, Dirac concentration, small populations.
Mathematics Subject Classification: Primary: 35B25, 35K55, 92D15.
Citation: Pierre-Emmanuel Jabin. Small populations corrections for selection-mutation models. Networks & Heterogeneous Media, 2012, 7 (4) : 805-836. doi: 10.3934/nhm.2012.7.805
M. Bardi and I. Capuzzo Dolcetta, "Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations," Birkhäuser, Boston, 1997.
G. Barles, "Solutions de Viscosité et Équations de Hamilton-Jacobi," Collec. SMAI, Springer-Verlag, 2002.
G. Barles, S. Mirrahimi and B. Perthame, Concentration in Lotka-Volterra parabolic or integral equations: a general convergence result, Methods Appl. Anal., 16 (2009), 321-340.
G. Barles and B. Perthame, Concentrations and constrained Hamilton-Jacobi equations arising in adaptive dynamics, in "Recent Developments in Nonlinear Partial Differential Equations," Contemp. Math., 439, Amer. Math. Soc., Providence, RI, (2007), 57-68. doi: 10.1090/conm/439/08463.
R. Bürger and I. M. Bomze, Stationary distributions under mutation-selection balance: structure and properties, Adv. Appl. Prob., 28 (1996), 227-251. doi: 10.2307/1427919.
A. Calsina and S. Cuadrado, Small mutation rate and evolutionarily stable strategies in infinite dimensional adaptive dynamics, J. Math. Biol., 48 (2004), 135-159. doi: 10.1007/s00285-003-0226-6.
J. A. Carrillo, S. Cuadrado and B. Perthame, Adaptive dynamics via Hamilton-Jacobi approach and entropy methods for a juvenile-adult model, Math. Biosci., 205 (2007), 137-161. doi: 10.1016/j.mbs.2006.09.012.
N. Champagnat, A microscopic interpretation for adaptive dynamics trait substitution sequence models, Stoch. Proc. Appl., 116 (2006), 1127-1160. doi: 10.1016/j.spa.2006.01.004.
N. Champagnat, R. Ferrière and G. Ben Arous, The canonical equation of adaptive dynamics: A mathematical view, Selection, 2 (2001), 71-81.
N. Champagnat, R. Ferrière and S. Méléard, From individual stochastic processes to macroscopic models in adaptive evolution, Stoch. Models, 24 (2008), 2-44. doi: 10.1080/15326340802437710.
N. Champagnat and P.-E. Jabin, The evolutionary limit for models of populations interacting competitively via several resources, J. Differential Equations, 251 (2011), 176-195. doi: 10.1016/j.jde.2011.03.007.
N. Champagnat, P.-E. Jabin and G. Raoul, Convergence to equilibrium in competitive Lotka-Volterra and chemostat systems, C. R. Math. Acad. Sci. Paris, 348 (2010), 1267-1272. doi: 10.1016/j.crma.2010.11.001.
N. Champagnat and S. Méléard, Polymorphic evolution sequence and evolutionary branching, to appear in Probab. Theory Relat. Fields (published online, 2010). doi: 10.1007/s00440-010-0292-9.
M. G. Crandall, H. Ishii and P.-L. Lions, User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc., 27 (1992), 1-67.
R. Cressman and J. Hofbauer, Measure dynamics on a one-dimensional continuous trait space: theoretical foundations for adaptive dynamics, Theor. Pop. Biol., 67 (2005), 47-59.
L. Desvillettes, P.-E. Jabin, S. Mischler and G. Raoul, On selection dynamics for continuous structured populations, Commun. Math. Sci., 6 (2008), 729-747.
U. Dieckmann and R. Law, The dynamical theory of coevolution: A derivation from stochastic ecological processes, J. Math. Biol., 34 (1996), 579-612. doi: 10.1007/s002850050022.
O. Diekmann, A beginner's guide to adaptive dynamics, in "Mathematical Modelling of Population Dynamics," Banach Center Publ., 63, Polish Acad. Sci., Warsaw, (2004), 47-86.
O. Diekmann, M. Gyllenberg, H. Huang, M. Kirkilionis, J. A. J. Metz and H. R. Thieme, On the formulation and analysis of general deterministic structured population models. II. Nonlinear theory, J. Math. Biol., 43 (2001), 157-189. doi: 10.1007/s002850170002.
O. Diekmann, P. E. Jabin, S. Mischler and B. Perthame, The dynamics of adaptation: An illuminating example and a Hamilton-Jacobi approach, Theor. Popul. Biol., 67 (2005), 257-271.
S. Genieys, N. Bessonov and V. Volpert, Mathematical model of evolutionary branching, Math. Comput. Modelling, 49 (2009), 2109-2115. doi: 10.1016/j.mcm.2008.07.018.
S. A. H. Geritz, J. A. J. Metz, E. Kisdi and G. Meszéna, Dynamics of adaptation and evolutionary branching, Phys. Rev. Lett., 78 (1997), 2024-2027.
S. A. H. Geritz, E. Kisdi, G. Meszéna and J. A. J. Metz, Evolutionary singular strategies and the adaptive growth and branching of the evolutionary tree, Evol. Ecol., 12 (1998), 35-57.
M. Gyllenberg and G. Meszéna, On the impossibility of coexistence of infinitely many strategies, J. Math. Biol., 50 (2005), 133-160. doi: 10.1007/s00285-004-0283-5.
J. Hofbauer and K. Sigmund, Adaptive dynamics and evolutionary stability, Applied Math. Letters, 3 (1990), 75-79. doi: 10.1016/0893-9659(90)90051-C.
P. E. Jabin and G. Raoul, Selection dynamics with competition, J. Math. Biol., 63 (2011), 493-517. doi: 10.1007/s00285-010-0370-8.
A. Lorz, S. Mirrahimi and B. Perthame, Dirac mass dynamics in multidimensional nonlocal parabolic equations, Comm. Partial Differential Equations, 36 (2011), 1071-1098. doi: 10.1080/03605302.2010.538784.
S. Méléard, Introduction to stochastic models for evolution, Markov Process. Related Fields, 15 (2009), 259-264.
S. Méléard and V. C. Tran, Trait substitution sequence process and canonical equation for age-structured populations, J. Math. Biol., 58 (2009), 881-921. doi: 10.1007/s00285-008-0202-2.
J. A. J. Metz, R. M. Nisbet and S. A. H. Geritz, How should we define 'fitness' for general ecological scenarios?, Trends in Ecology and Evolution, 7 (1992), 198-202.
J. A. J. Metz, S. A. H. Geritz, G. Meszéna, F. A. J. Jacobs and J. S. van Heerwaarden, Adaptive dynamics, a geometrical study of the consequences of nearly faithful reproduction, in "Stochastic and Spatial Structures of Dynamical Systems" (eds. S. J. van Strien and S. M. Verduyn Lunel), North Holland, Amsterdam, (1996), 183-231.
S. Mirrahimi, G. Barles, B. Perthame and P. E. Souganidis, Singular Hamilton-Jacobi equation for the tail problem, submitted.
B. Perthame and M. Gauduchon, Survival thresholds and mortality rates in adaptive dynamics: Conciliating deterministic and stochastic simulations, IMA Journal of Mathematical Medicine and Biology, to appear (published online, 2009). doi: 10.1093/imammb/dqp018.
B. Perthame and S. Génieys, Concentration in the nonlocal Fisher equation: the Hamilton-Jacobi limit, Math. Model. Nat. Phenom., 2 (2007), 135-151. doi: 10.1051/mmnp:2008029.
G. Raoul, Long time evolution of populations under selection and rare mutations, Acta Applicandae Mathematica, 114 (2011), 1-14. doi: 10.1007/s10440-011-9603-0.
F. Yu, Stationary distributions of a model of sympatric speciation, Ann. Appl. Probab., 17 (2007), 840-874. doi: 10.1214/105051606000000916.
Why does the degree of dissociation change when we dilute a weak acid even though the equilibrium constant $K$ is constant?
$K$ represents the ratio of concentrations of molecules in a solution at equilibrium, which means that $Q_\mathrm{r}$ (that ratio at any given point) looks to be identical to $K$. In other words, the molecules in that solution react accordingly so that they reach equilibrium and the ratio of their concentrations is equal to $K$.
If $K$ is large enough (bigger than $10^4$ in my curriculum), this means that the concentration of the reactants is almost zero. In other words, the equilibrium position of that solution looks very much like a reaction that went to completion.
The more we dilute an acidic/basic solution, the higher the degree of dissociation, even though $K$ stays the same. So, does that mean that the more we dilute a solution the harder it is for it to reach the point of equilibrium for that specific molecule/solution or what?
For instance, say you found $K$ of a solution to be $10^{-5}$. This means that when the reaction happens there are lots of reactants left, and not much product produced, which means that the degree of dissociation is low. But the more we dilute a solution, the closer it gets to a "complete reaction" (if you pour a small amount of weak-acid molecules into a large tank of water, it's certain that all of the weak-acid molecules are going to react with the water, i.e. the degree of dissociation approaches $100\%$).
So, how come $K$ can be independent of the initial reactants concentrations, and tell if a reaction was complete or not, when the "completion" of a reaction (the degree of dissociation) depends on the initial concentrations of reactants?
physical-chemistry acid-base equilibrium solutions
TheSimpliFire
Elhamer YacineElhamer Yacine
$\begingroup$ These two videos might help : khanacademy.org/science/chemistry/thermodynamics-chemistry/… ; khanacademy.org/science/chemistry/thermodynamics-chemistry/… . It's down to the chemical potential. Two systems with different concentrations of reactants and products may both be at equilibrium, there is nothing contradictory about that. The equilibrium constant tells you how to identify such systems. $\endgroup$ – user6376297 Mar 17 '19 at 11:13
If K is large enough (bigger than $10^4$ in my curriculum), this means that the the concentration of the reactants is almost zero.
This statement is not always true - it depends on the stoichiometry of the reaction.
For the reaction $$\ce{A(aq) <=> B(aq)}$$
the concentration of A is much smaller (ten thousand times) than that of B if $K = 10^4$, and this is independent of diluting the solution. It's fair to say that there is almost no A compared to B.
For the reaction $$\ce{4A(aq) <=> 4B(aq)}$$
the concentration of A is just ten times smaller than that of B if $K = 10^4$, and this is also independent of diluting the solution. However, there is quite a bit of A compared to B, and it is a bit misleading to say the the concentration of A is almost zero.
Both of these reactions had the same number of reactant species as product species (all in the same solution, so all equally affected by dilution).
The more we dilute an acidic/basic solution, the more the reaction is "complete", the more the reactants disappear, but K stays the same.
Here, "complete" would mean the absence of reactants, i.e. a reaction that goes to completion. Weak acids are defined by not completely dissociating, in contrast to strong acids. The general reaction for a weak acid is:
$$\ce{AH(aq) <=> H+(aq) + A-(aq)}$$
Notice that there are more product particles than reactant particles. For those types of reactions, dilution favors the products. In fact, if $K = 10^{-5}$ for this reaction and all concentrations are $\pu{10^-5 M}$, the reaction is at equilibrium. This means that even though the equilibrium constant is much smaller than one, you can still have reactants and products at the same concentrations.
On the other hand, if there are more reactant particles than product particles, dilution has the opposite effect. For a complex formation reaction between a metal M and a ligand L
$$\ce{M(aq) + 4 L(aq) <=> ML4(aq)}$$
dilution will cause the complex to fall apart. If $K = 10^4$ and the free ligand concentration is one molar, there will be a lot of complex and very little free metal. On the other hand, if the free ligand concentration is one millimolar (0.001 M), there is hardly any complex and most of the metal will be in the form of the free metal.
Why does the degree of dissociation change when we dilute a weak acid even though the equilibrium constant K is constant?
As the examples above illustrated, this is because for these specific reactions, the sum of exponents for the products is higher than that of the reactants in the equilibrium constant expression. As you dilute, Q will decrease, and the reaction will go forward to reach K again. At infinite dilution, the product will be favored no matter what the value of the equilibrium constant is.
If you want to have a general statement about what the value of K means, it would be something like "as K increases, the equilibrium concentration of products will increase and those of reactants will decrease".
Karsten TheisKarsten Theis
See my comment above.
As a numerical example, take acetic acid ($\ce{AcOH}$), which has $K_\mathrm{a} = 1.8 \cdot 10^{-5}$.
This means that:
$$K_\mathrm{a} = \frac{[\ce{AcO-}][\ce{H+}]}{[\ce{AcOH}]}$$
And the total nominal concentration of acid is:
$$C_\mathrm{a} = [\ce{AcOH}] + [\ce{AcO-}]$$
Combining these two equations, you can see that the % of dissociated acid is:
$$\alpha = \frac{[\ce{AcO-}]}{C_\mathrm{a}} = \frac{K_\mathrm{a}}{K_\mathrm{a} + [\ce{H+}]}$$
i.e. not a constant, but a value that depends on the $\mathrm{pH}$. So no, there is no need for the 'degree of completion' of the reaction to be a constant.
You can even calculate the explicit concentrations of all the species.
Given that (combining the mass balance above with the charge balance $[\ce{H+}] = [\ce{AcO-}] + [\ce{OH-}]$, where $[\ce{OH-}] = K_\mathrm{w}/[\ce{H+}]$):

$$C_\mathrm{a} = \left(\frac{1}{[\ce{H+}]} + \frac{1}{K_\mathrm{a}}\right) \cdot \left( [\ce{H+}]^2 - K_\mathrm{w}\right)$$
you can see that, for $\mathrm{pH} = 5$ you need:
$$C_\mathrm{a} = \left(\frac{1}{10^{-5}} + \frac{1}{1.8 \cdot 10^{-5}}\right) \cdot \left((10^{-5})^2 - 10^{-14}\right) = 1.5554 \cdot 10^{-5}$$
$$[\ce{AcOH}] = \frac{C_\mathrm{a} \cdot [\ce{H+}]}{K_\mathrm{a} + [\ce{H+}]} = 5.555 \cdot 10^{-6}$$
$$[\ce{AcO-}] = C_\mathrm{a} - [\ce{AcOH}] = 9.999 \cdot 10^{-6}$$
So the % of dissociated $\ce{AcOH}$ is:
$$\alpha = \frac{[\ce{AcO-}]}{C_\mathrm{a}} \approx 64 \%$$
Now repeat for $\mathrm{pH} = 3$:
$$C_\mathrm{a} = \left(\frac{1}{10^{-3}} + \frac{1}{1.8 \cdot 10^{-5}}\right) \cdot \left((10^{-3})^2 - 10^{-14}\right) \approx 0.05656$$
$$[\ce{AcOH}] = \frac{C_\mathrm{a} \cdot [\ce{H+}]}{K_\mathrm{a} + [\ce{H+}]} \approx 0.05556$$
$$[\ce{AcO-}] = C_\mathrm{a} - [\ce{AcOH}] \approx 0.001$$
$$\alpha = \frac{[\ce{AcO-}]}{C_\mathrm{a}} \approx 1.8 \%$$
So you see, in a more dilute mixture the $\mathrm{pH}$ is higher, and the dissociation reaction of acetic acid is 'more complete', whereas at higher concentration the pH is lower, and there is proportionally less acetate anion.
All of these systems are at equilibrium.
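As a cross-check, a short Python sketch (using the $K_\mathrm{a}$ and $K_\mathrm{w}$ values above; activity effects neglected) that reproduces these numbers:

```python
# Sketch: degree of dissociation of acetic acid at a fixed pH, using
# C_a = (1/[H+] + 1/K_a) * ([H+]^2 - K_w) from the answer above.
Ka = 1.8e-5
Kw = 1e-14

def species_at_pH(pH):
    h = 10.0 ** (-pH)
    Ca = (1.0 / h + 1.0 / Ka) * (h * h - Kw)  # total nominal acid conc.
    acoh = Ca * h / (Ka + h)                  # undissociated AcOH
    aco = Ca - acoh                           # acetate
    return Ca, acoh, aco, aco / Ca

for pH in (5, 3):
    Ca, acoh, aco, alpha = species_at_pH(pH)
    print(f"pH {pH}: Ca = {Ca:.4g} M, [AcOH] = {acoh:.4g} M, "
          f"[AcO-] = {aco:.4g} M, alpha = {alpha:.1%}")
```

Running this gives α ≈ 64% at pH 5 and α ≈ 1.8% at pH 3, matching the calculations above.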
$K$, the equilibrium constant, is independent of the composition of the system. It simply describes the preference of the system for reactants or products.
On the other hand, there is the reaction quotient, $Q$, which describes the position of the system relative to equilibrium. $Q$ is the value that is variable and that you adjust to be closer to $K$. Technically, $K$ can also be computed from the standard free energy change of the reaction, $\Delta G ^{\circ}$.
What's confusing is that $Q$ and $K$ are computed in exactly the same way; the sole difference is that for $Q$, you plug in the values you have right now, and for $K$, you plug in the values at equilibrium.
If $Q=K$, you are at equilibrium.
If $Q < K$, you need to push the reaction towards more products.
If $Q > K$, you need to push the reaction towards more reactants.
When you dilute a solution (or make other changes), you may change the value of $Q$, which may put you farther from equilibrium, but the opposite may also be true. It depends on the specific reaction and the form of the equilibrium expression.
Your confusion comes from comparing $Q$ at different points in time. At the end of the reaction ($t = \infty$), $Q = K$. At the beginning of the reaction, $Q_{0} < K$. Over time, as the reaction goes towards equilibrium, the value of $Q$ gets closer to $K$. If you dilute the solution at the beginning of the reaction, you may get a different value of $Q$. If it is smaller, then you have farther to go to reach equilibrium. But again, this depends on the stoichiometry of the reaction.
$$\ce{A + B -> C}$$
$$\ce{A -> C}$$
$$\ce{A -> C + D}$$
Here are three different reactions.
In the first reaction, at the beginning of the reaction with $Q < K$, dilution pushes you towards equilibrium, because two reactant concentrations appear in the denominator of the expression for $Q$, so dilution shrinks the denominator faster than the numerator.
$$Q = \frac{\ce{[C]}}{\ce{[A][B]}}$$
If you halve all three concentrations, the value of $Q$ doubles, moving it closer to $K$.
The second reaction, by contrast, is unaffected by dilution: halving both concentrations leaves $Q$ unchanged, so dilution preserves equilibrium.
And for reaction 3, dilution lowers $Q$, pushing you farther away from equilibrium.
For all three reactions, at the end of the reaction, you will have $Q = K$. But for reactions 1 and 3, dilution at equilibrium changes the value of $Q$ and the reaction will need to adjust again to reach equilibrium.
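A quick numerical check (Python; illustrative concentrations, all species at 0.1 M) of how halving every concentration changes $Q$ for the three stoichiometries:

```python
# Sketch: ratio Q(after 2-fold dilution) / Q(before) for each reaction.
a = b = c = d = 0.1   # illustrative concentrations, mol/L
f = 2.0               # dilution factor

# Reaction 1: A + B -> C, Q = [C]/([A][B])
print(((c/f) / ((a/f) * (b/f))) / (c / (a * b)))   # 2.0: Q increases

# Reaction 2: A -> C, Q = [C]/[A]
print(((c/f) / (a/f)) / (c / a))                   # 1.0: Q unchanged

# Reaction 3: A -> C + D, Q = [C][D]/[A]
print((((c/f) * (d/f)) / (a/f)) / ((c * d) / a))   # 0.5: Q decreases
```

The ratios (2, 1, and 1/2) depend only on the stoichiometry, not on the starting concentrations.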
Zhe
$\begingroup$ "You may change the value of Q" the Q at the end of the diluted-solution's reaction? If so, then does that mean that if you dilute a solution to a certain point, Q stops going towards K? Also, If K is small (say smaller than 10⁴), can you dilute a solution to a certain concentration so that it's reaction is complete (α=1)? If not, then how can this saying be explained: "K is only affected by temperature and not the initial concentration of the reactants, while the 'advancement' of a reaction (α) is affected by both temperature and the initial concentration of the reactants"? $\endgroup$ – Elhamer Yacine Mar 17 '19 at 13:47
$\begingroup$ No, immediately after dilution. $Q$ will be $K$ at the end. You cannot dilute the reaction to completion. But it might get closer or farther depending on coefficients of the balanced reaction. I think you are confusing $K$ and $Q$. The first is constant. The latter is not. $\endgroup$ – Zhe Mar 17 '19 at 20:59
$\begingroup$ Hmm, I think I'm having trouble wrapping my head around how a diluted reaction can be closer to completion while the Q at the end of that reaction is still the same as before $\endgroup$ – Elhamer Yacine Mar 17 '19 at 21:05
$\begingroup$ Let me add an edit. I think that will help. $\endgroup$ – Zhe Mar 17 '19 at 22:34
$\begingroup$ The question is inactive, I doubt anyone will see it, but go ahead $\endgroup$ – Elhamer Yacine Mar 18 '19 at 10:12
A really handwavy explanation would be that the system, attempting to maximize its entropy, will prefer dissociated states as the system dilutes since there will be an increasing number of microscopic configurations with atoms $A$ and $B$ far apart (i.e. molecule $AB$ dissociated) as opposed to close together (i.e. atoms $A$ and $B$ bound together into molecule $AB$).
creillyucla
Why is it necessary to consider infinitesimal changes in p,V,T for H,U and G given that they're state functions?
State functions such as $G$ depend only on the state of the system, not on the "path" that took the system to that state (unlike work, for example, which is not a state function).
$$\mathrm{d}G=V\,\mathrm{d}p-S\,\mathrm{d}T$$ So... $$\mathrm{d}G=\left(\frac{\partial G}{\partial p}\right)_T\mathrm{d}p+\left(\frac{\partial G}{\partial T}\right)_p\mathrm{d}T$$ Consequently, by comparing coefficients:
$$V=\left(\frac{\partial G}{\partial p}\right)_T$$ and $$-S=\left(\frac{\partial G}{\partial T}\right)_p$$
Just taking the equation involving $V$ now to save time and space: $$\int_{p_1}^{p_2}V\mathrm{d}p=\int_{p_1}^{p_2}\left(\frac{\partial G}{\partial p}\right)_T\mathrm{d}p$$
Using the perfect gas equation and integrating leaves the result:
$$G(p_2)-G(p_1)=n\mathcal{R}T\ln\left(\frac{p_2}{p_1}\right)$$
But if $G$ is independent of the path taken to get to the final state why shouldn't I use the equation: $$\Delta G=V(p_2-p_1)$$
RobChem
It is important to make clear any requirement that a variable be constant for each equation you write.
To go from $\mathrm{d}G=V\,\mathrm{d}p-S\,\mathrm{d}T$
to $\mathrm{d}G=V\,\mathrm{d}p$
you would need to assume temperature is constant.
And to go from $\mathrm{d}G=V\,\mathrm{d}p$
to $\Delta G=V(p_2-p_1)$
you would need to assume volume is constant.
If temperature and volume are both constant, then for a fixed amount of an ideal gas the pressure is constant as well.
So your final equation only says: when temperature, pressure, and volume are all constant, there is no change in Gibbs free energy.
DavePhD
When you write $\mathrm{d}G=V\,\mathrm{d}P-S\,\mathrm{d}T$, you are talking about the differential change in $G$ between two closely neighboring thermodynamic equilibrium states, one at $(T,P)$ and the other at $(T+\mathrm{d}T, P+\mathrm{d}P)$. So, when you integrate this equation, you must integrate over a path comprised of a continuous sequence of thermodynamic equilibrium states. Over such a path, at constant temperature, the molar volume $V$ changes with pressure $P$, and this needs to be taken into account in the integration.
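A quick numerical illustration (Python; 1 mol of an ideal gas at 298 K, with pressures chosen arbitrarily for the example) of why $\int V\,\mathrm{d}p$ differs from $V(p_2-p_1)$:

```python
import numpy as np

# Sketch: isothermal Delta G for 1 mol of an ideal gas, 1 bar -> 10 bar, 298 K.
R, T, n = 8.314, 298.0, 1.0
p1, p2 = 1.0e5, 1.0e6                     # Pa

p = np.linspace(p1, p2, 100_000)
V = n * R * T / p                         # V(p) along the isotherm
dG_numeric = np.sum(0.5 * (V[1:] + V[:-1]) * np.diff(p))   # trapezoid rule
dG_exact = n * R * T * np.log(p2 / p1)    # closed form
print(dG_numeric, dG_exact)               # both ~ 5.7e3 J

V1 = n * R * T / p1                       # volume frozen at its initial value
print(V1 * (p2 - p1))                     # ~ 2.23e4 J: about 4x too large
```

Because $V$ shrinks as $p$ rises along the isotherm, holding it at its initial value grossly overestimates $\Delta G$.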
Chet Miller
Efficient electrochemical production of glucaric acid and H2 via glucose electrolysis
Wu-Jun Liu ORCID: orcid.org/0000-0002-5696-11801,2,
Zhuoran Xu2,
Dongting Zhao2,
Xiao-Qiang Pan1,
Hong-Chao Li ORCID: orcid.org/0000-0002-8081-09801,
Xiao Hu1,
Zhi-Yong Fan ORCID: orcid.org/0000-0001-6138-179X3,
Wei-Kang Wang ORCID: orcid.org/0000-0003-0090-31223,
Guo-Hua Zhao3,
Song Jin ORCID: orcid.org/0000-0001-8693-70104,
George W. Huber ORCID: orcid.org/0000-0002-7838-68932 &
Han-Qing Yu ORCID: orcid.org/0000-0001-5247-62441
Nature Communications volume 11, Article number: 265 (2020)
Glucose electrolysis offers a prospect of value-added glucaric acid synthesis and energy-saving hydrogen production from biomass-based platform molecules. Here we report that nanostructured NiFe oxide (NiFeOx) and nitride (NiFeNx) catalysts, synthesized from NiFe layered double hydroxide nanosheet arrays on three-dimensional Ni foams, demonstrate a high activity and selectivity towards anodic glucose oxidation and cathodic hydrogen evolution, respectively. The electrolytic cell assembled with these two catalysts can deliver 100 mA cm−2 at 1.39 V. A faradaic efficiency of 87% and a glucaric acid yield of 83% are obtained from the glucose electrolysis, which takes place via a guluronic acid pathway as evidenced by in-situ infrared spectroscopy. A rigorous process model combined with a techno-economic analysis shows that the electrochemical oxidation of glucose produces glucaric acid at about 45% lower cost ($9.32 vs. $17.04 per kg) than the current chemical approach. This work suggests that glucose electrolysis is an energy-saving and cost-effective approach for H2 production and biomass valorization.
Biomass conversion into commodity chemicals is a promising strategy to reduce society's dependence on fossil fuel resources1,2,3,4,5,6. Glucose, one of the most abundant biomass-based compounds, can be converted into various commodity chemicals like 5-hydroxymethylfurfural, sorbitol, gluconic acid (GNA), and glucaric acid (GRA)2,7,8. GRA is recognized as a "top value added compound" produced from biomass9, because it is a key intermediate for the production of biodegradable polymers, biodegradable detergents, and metal complexation agents10,11. Moreover, GRA and its derivatives (e.g., GRA-Ca, GRA-1,4-lactone) can also be used for healthcare purposes such as cancer chemotherapy and cholesterol reduction12,13,14. According to a market report by Grand View Research, Inc.15, the global GRA market size in 2016 was about USD 550.4 million and is expected to reach USD 1.30 billion by 2025.
GRA is currently produced by either chemical oxidation16,17,18 or microbial fermentation, the former being the main industrial process for GRA production19,20,21. For example, Rivertop Renewables Inc. (Montana, USA) has developed a catalytic oxidation process to produce GRA from glucose with an annual output of 25,000 tons22. Chemical oxidation is performed either by stoichiometric oxidation of glucose with HNO3 in the absence of catalysts23,24, or by catalytic oxidation of glucose with O2 (air) in the presence of noble metal (e.g., Au, Pt, Pd, and Ru)-based catalysts at 45−120 °C25,26,27,28. In chemical oxidation, various byproducts such as GNA, glucuronic acid, and other organic acids are formed. For example, Qi et al.29 investigated glucose oxidation over an Au-based catalyst with O2 (0.3 MPa) at 120 °C, obtaining a GNA yield of 92% and a GRA yield of less than 5%. Jin et al.30 reported oxidation of glucose to GRA with a yield of 45% over bimetallic Pt-Cu catalysts at 45 °C and 0.1 MPa O2. Conventional catalytic oxidation of glucose to GRA has several shortcomings: (1) a large amount of toxic oxidant is required (more than double the stoichiometric ratio); (2) the selectivity to GRA is low (reported to be less than 60%); (3) various byproducts (e.g., 2,3-dihydroxysuccinic acid, 2,3-dihydroxy-4-oxobutanoic acid, and oxalic acid) with similar chemical properties are generated; and (4) high-pressure O2 (or air) or HNO3 is used. Avoiding the use of high-pressure O2 (or air) eliminates the need for high-pressure vessels and reduces safety risks. Microbial fermentation does not consume as many raw materials and operates under less severe conditions than chemical oxidation. However, fermentation also suffers from various disadvantages, such as long fermentation times (more than 2 days), low selectivity (GRA yield < 20%), and difficult product separation (large amounts of microbial biomass and hundreds of byproducts with similar properties are co-produced)31.
Electrochemical oxidation involves electron transfer in the reactions (Fig. 3) and eliminates the use of high-pressure O2 or hazardous chemical oxidants. The electrochemical oxidation process can be operated under mild conditions, and the production of other byproducts can easily be suppressed by tuning the electrode potential, leading to a high GRA selectivity. Electrochemical oxidation could be practiced commercially at small scales in distributed areas and becomes cheaper as the price of renewable electricity continues to decline32. The electrochemical oxidation process is particularly suitable for GRA production: GRA is produced on a smaller scale and therefore cannot take advantage of the economies of scale that larger bulk chemicals enjoy. Glucose electrolysis to GRA also provides H2 gas as a product stream, as the hydrogen evolution reaction (HER) occurs at the cathode while the oxidation of glucose occurs at the anode. This reaction also has a lower standard redox potential (0.05 V) than conventional water electrolysis (1.23 V)33,34.
Although the anodic oxidation of glucose can be achieved efficiently with noble metal (e.g., Pt, Ru, Rh, and Pd)-based catalysts35,36, their scarcity and high cost motivate the search for abundant and inexpensive alternative catalysts. Recent research on electrochemical oxidation has developed a series of earth-abundant transition-metal-based catalysts with high catalytic activity and stability towards the electro-oxidation of various organic compounds, such as Co3O4 nanosheets for ethanol electro-oxidation37, Ni–Mo-based nanostructures for urea electro-oxidation38, Co-Cu-based nanostructures for benzyl alcohol electro-oxidation39, and Ni-Fe LDH nanosheets for 5-hydroxymethylfurfural electro-oxidation40.
In previous studies of electrochemical oxidation of glucose, GRA yields were reported to be less than 40%41, although Ibert et al.42 reported that 2,2,6,6-tetramethyl-1-piperidinyloxy (TEMPO)-mediated electro-oxidation of glucose could deliver GRA in 85% yield. However, the high cost and the difficulty of separating and recycling TEMPO may hinder industrial application of this technology. It has remained a challenge to produce GRA with a high yield in a cost-effective way.
In this work, we synthesize the NiFe oxides (NiFeOx) and NiFe nitrides (NiFeNx) using the NiFe LDH nanosheet arrays as a precursor material, and investigate their electrocatalytic performance toward anodic glucose oxidation and cathodic HER using various electrochemical approaches. These catalysts show a high activity and selectivity towards anodic glucose oxidation and cathodic hydrogen evolution respectively.
Synthesis and characteristics of the catalyst materials
The structural and compositional characterization of the as-synthesized catalysts is presented in Supplementary Method 1. Figure 1a is a schematic for the synthesis of the Ni-Fe-OH catalyst. A commercially available nickel foam (NF) with an open fibrous structure was used to confer the 3D structure of the electrode and to provide the Ni source for the NiFe(OH)x. The NiFe(OH)x precursor grew on the NF through a hydrothermal reaction of FeCl3 and CO(NH2)2 (urea) in a mixed solution. Urea was added to the solution to increase the pH, which resulted in the controlled hydrolysis of the Ni and Fe metal ions40,43. The X-ray diffraction (XRD) pattern (Supplementary Fig. 1) of the resulting solid indicates the formation of hydrotalcite-like NiFe layered double hydroxide, Ni(OH)2, and FeOOH crystalline phases.
Fig. 1: Synthesis and structure characterization of the catalysts.
a Schematic illustration for the synthesis of NiFeNx-NF and NiFeOx-NF catalysts. b XRD patterns of the NiFeOx-NF and NiFeNx-NF catalysts in comparison with standard XRD patterns. c XPS Fe 2p spectra and d Ni 2p spectra of the NiFeOx-NF and NiFeNx-NF catalysts.
The NiFe(OH)x precursor was then heated at 300 °C for 3 h under air flow to form the Ni-Fe oxides (NiFeOx). The XRD pattern (Fig. 1b) shows that the diffraction peaks matched those of NiFe2O4 (PDF: 00-044-1485). The NiFe(OH)x precursor was also heated in an NH3 flow to obtain the NiFe nitrides (NiFeNx), and the diffraction peaks of this product matched those of Ni3FeN (PDF: 00-050-1434). Three large diffraction peaks of metallic Ni appear in both XRD patterns; these arise from the Ni foam substrate.
The surface chemistry and composition of the catalysts were probed with X-ray photoelectron spectroscopy (XPS). The Fe 2p spectra of NiFeOx and NiFeNx are shown in Fig. 1c. Two peaks located at binding energies of 725.2 and 711.7 eV in the NiFeOx spectrum are assigned to Fe 2p1/2 and Fe 2p3/2, respectively, with a separation of 13.5 eV, implying that the Fe in NiFeOx was in its trivalent oxidation state (Fe(III))44. The Fe 2p3/2 peak of NiFeNx was deconvoluted into two peaks, assigned to Fe-O species (711.9 eV) and Fe-N species (710.5 eV), respectively45,46. The Ni 2p spectra of NiFeOx and NiFeNx are presented in Fig. 1d. Two prominent peaks located at 856 and 874 eV in the NiFeOx spectrum were attributed to the Ni 2p3/2 and Ni 2p1/2 peaks of Ni-O species, respectively. The Ni 2p3/2 peak of NiFeNx was deconvoluted into two peaks attributed to Ni-O (856.1 eV) and Ni-N (854.7 eV) species, respectively. For comparison, the Fe 2p and Ni 2p XPS spectra of the NiFe(OH)x precursor in Supplementary Fig. 2 show that the Ni 2p spectrum of NiFe(OH)x was almost the same as that of NiFeOx, suggesting that the chemical state of Ni was unchanged during the heat treatment in air. The Fe 2p3/2 branch of NiFe(OH)x could be deconvoluted into two peaks, attributed to Fe(II) (710.4 eV) and Fe(III) (712.8 eV) species, respectively. No Fe(II) species were found in the NiFeOx sample, confirming that the heat treatment transformed Fe(II) into Fe(III). In comparison with NiFe(OH)x, the oxidation states of Ni and Fe in the NiFeNx sample were reduced to lower valences.
The morphology of the NiFe(OH)x, NiFeOx, and NiFeNx samples was examined with scanning electron microscopy (SEM). Typical SEM images of NiFe(OH)x (Supplementary Fig. 3) show that the as-synthesized hydroxides consisted of nanosheets aligned vertically on the NF, covering the entire substrate. Energy-dispersive X-ray spectroscopy (EDS) elemental mapping results (Supplementary Fig. 4) indicate that the Ni, Fe, and O elements were distributed evenly on the NF. After calcination, the obtained NiFeOx product retained the nanosheet morphology of the hydroxide precursor (Fig. 2a, b). The EDS results also confirm that the Ni, Fe, and O elements were distributed homogeneously (Fig. 2c). However, after the heat treatment in NH3, the produced NiFeNx material did not maintain the morphology of the hydroxide precursor, but consisted of irregularly shaped interconnected particles with micropores and mesopores (Fig. 2d, e). The EDS elemental mapping (Fig. 2f) confirms the homogeneous distribution of Fe, Ni, and N in the NiFeNx sample; some O was also found, which might be due to incomplete nitridation of the NiFe(OH)x.
Fig. 2: Morphology and elemental compositions of the electrocatalysts.
SEM images of the NiFeOx-NF catalyst in low (a) and high (b) magnification; c EDS elemental mapping of the NiFeOx-NF catalyst; SEM images of the NiFeNx-NF catalyst in low (d) and high (e) magnification; and f EDS elemental mapping of the NiFeNx-NF catalyst.
Electrocatalytic glucose oxidation
The glucose oxidation involves two steps: (1) the oxidation of glucose into GNA, which involves two electrons, and (2) the oxidation of GNA to GRA, which involves four electrons (Fig. 3). The oxygen evolution reaction (OER) is the main undesired competing reaction47,48. The linear sweep voltammetry (LSV) profiles and the corresponding parameters for glucose (100 mM) oxidation and OER with the NiFeOx-NF and NiFeNx-NF electrodes in 1 M KOH solution (pH = 13.9) are presented in Fig. 4a, Table 1, and Supplementary Table 5. NiFeOx-NF delivered a current density of 87.6 mA cm−2 at a potential of 1.30 V (vs. reversible hydrogen electrode, RHE) for glucose oxidation with a TOF value of 0.16 s−1 (entry 5, Table 1). The NiFeNx-NF electrode displayed a lower current density (22.1 mA cm−2) with a TOF value of only 0.04 s−1 (entry 6, Table 1), confirming that NiFeOx was a more active catalyst for glucose oxidation than NiFeNx. However, for the OER, NiFeNx (entry 2, Table 1) was more efficient than NiFeOx (entry 1, Table 1). Both the current densities and TOF values for glucose oxidation were much higher than those for OER. NiFeNx oxidized into high-surface-area metal oxides/hydroxides in the oxidative environment of the OER (Supplementary Fig. 5), and likely also under the glucose oxidation conditions49. The Tafel slopes (Fig. 4b) for glucose oxidation with the NiFeOx-NF and NiFeNx-NF electrodes were calculated as 19 and 23 mV dec−1, respectively, much lower than the Tafel slopes for OER with those two electrodes. The Tafel slope is closely related to the electron transfer rate: a smaller Tafel slope means a more rapid electron transfer rate and more favorable catalytic reaction kinetics50. The lower Tafel slope of the anodic glucose oxidation confirms that more rapid electron transfer occurs in the oxidation of the CHO group to COOH at the C1 position (two-electron transfer) and the oxidation of the CH2OH group to COOH at the C6 position (four-electron transfer) than in the cleavage of the O−H bonds of H2O to produce O2. The low value of the Tafel slope also implies a lower adsorption potential of glucose on the electrode51 and intimate contact between the electrode catalysts and the electrolyte52.
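For readers who want to reproduce this kind of analysis, a minimal Python sketch (synthetic data, not the measured curves) of extracting a Tafel slope from LSV data by a linear fit of potential against the logarithm of current density:

```python
import numpy as np

# Sketch: Tafel slope b from E = a + b*log10(j).
# Synthetic data generated with b = 19 mV/dec to mimic the value above.
j = np.logspace(0, 2, 20)              # current density, mA cm^-2
E = 1.20 + 0.019 * np.log10(j)         # potential, V (synthetic)

b, a = np.polyfit(np.log10(j), E, 1)   # slope and intercept
print(f"Tafel slope = {b * 1e3:.1f} mV/dec")
```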
Fig. 3: Possible reaction pathway from glucose to GRA.
Schematic illustration of the possible pathway for the electrochemical oxidation of glucose to GNA and GRA.
Fig. 4: Anodic glucose oxidation.
a LSV profiles of the NiFeOx-NF and NiFeNx-NF catalysts for glucose oxidation and OER (scan rate of 5 mV s−1; electrolyte: 1 M KOH; glucose concentration 100 mM). b Corresponding Tafel plots. c Capacitive current densities of different electrodes for glucose oxidation as a function of scan rate. d Nyquist plots (taken at 1.3 V vs. RHE) of different electrodes for glucose oxidation. e Concentration of glucose and oxidation products as a function of time for chronoamperometric tests at 1.30 V vs. RHE. f Glucose concentration changes in five successive cycles.
Table 1 Electrochemical anodic glucose oxidation and OER in 1 M KOH.
It should be noted that unlike many previous reports, in which electrode materials with a high OER performance usually also show a high catalytic performance for the oxidation of organic compounds37,38,40, the NiFeOx-NF electrode had a lower OER performance than NiFeNx-NF but a higher glucose oxidation performance in the low-potential region. To probe why, the relative electrochemically active surface areas (ECSA) of the electrodes in glucose oxidation were compared by extracting their double-layer capacitances using cyclic voltammetry (CV). The CV profiles (Supplementary Fig. 6) were collected in the non-Faradaic potential region (0.925−1.0 V), in which only the double-layer capacitance accounts for the current response. The capacitance for NiFeOx-NF was calculated to be 53.3 mF cm−2 based on the CV results, higher than those of NiFeNx-NF (42.1 mF cm−2) and bare NF (11.1 mF cm−2) (Fig. 4c). Thus, the NiFeOx-NF electrode had a larger ECSA than the NiFeNx-NF electrode. This indicates that the higher catalytic activity of NiFeOx-NF was likely due to a higher number of catalytic active sites, and the in situ generated surface Ni-Fe oxyhydroxides (FeOOH and NiOOH) could be the catalytically active sites of the NiFeOx-NF electrode for both glucose oxidation and OER53,54. For example, the formation of Ni oxyhydroxides was confirmed by the polarization curves of OER (Fig. 4a), in which a small peak located at E = 1.36 V could be attributed to the oxidation of Ni(II) into NiOOH (NiO + OH− → NiOOH + e−). The formation of FeOOH could not be confirmed by the polarization curves, as Fe(III) does not undergo a valence change in the electrochemical process, but the XPS spectra of NiFeOx-NF after glucose oxidation (Supplementary Fig. 5) confirm the formation of NiOOH and FeOOH species. We also carried out control experiments to verify that the Ni-Fe oxyhydroxides are the catalytically active sites. The NiFeOx-NF and NiFeNx-NF catalysts were treated with H2O2 in 1 M KOH solution to produce more NiOOH and FeOOH species. These treated catalysts were then used as anodic catalysts for both glucose oxidation and OER. The LSV profiles in Supplementary Fig. 7 indicate that after the H2O2 treatment, both NiFeOx-NF and NiFeNx-NF exhibited higher catalytic activities for OER (Supplementary Fig. 7a) and glucose oxidation (Supplementary Fig. 7b). Such improvements should be attributed to the higher activity of the Ni-Fe oxyhydroxide sites.
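The ECSA comparison rests on the linear relation between capacitive current and scan rate, $i_c = C_{dl}\nu$; a minimal sketch (synthetic numbers chosen to mimic the 53.3 mF cm−2 value above) of extracting $C_{dl}$ from such data:

```python
import numpy as np

# Sketch: double-layer capacitance C_dl from CV data in the
# non-Faradaic window, where j_capacitive = C_dl * scan_rate.
scan_rates = np.array([10, 20, 40, 60, 80, 100]) * 1e-3   # V s^-1
j_cap = 53.3 * scan_rates                                 # mA cm^-2 (synthetic)

C_dl, intercept = np.polyfit(scan_rates, j_cap, 1)
print(f"C_dl ~ {C_dl:.1f} mF cm^-2")
# Relative ECSA between electrodes is then compared via their C_dl ratio.
```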
Electrochemical impedance spectroscopy (EIS) was used to investigate the kinetics of the electrode materials. An equivalent circuit consisting of a series resistance (Rs), a charge-transfer resistance (Rct), and a constant phase element (CPE) was constructed. As shown in Fig. 4d, the NiFeOx-NF electrode displayed a smaller charge-transfer resistance (Rct) of about 1.7 Ohm, in contrast to the NiFeNx-NF (3.8 Ohm) and bare nickel foam (9.5 Ohm) electrodes. A small Rct means a favorable electron transport rate and fast catalytic kinetics, resulting in a small Tafel slope55. Meanwhile, the EIS profiles also show that the NiFeOx-NF and NiFeNx-NF electrodes had small Rs values (1.21 and 1.26 Ohm, respectively), revealing good electrical contact between the catalysts and the nickel foam substrate. This was attributed to the formation of the NiFeOx and NiFeNx catalysts via direct reactions of the Ni foam, leading to strong adhesion to the substrate.
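As an illustration of the fitting model, the impedance of the Rs + (Rct ∥ CPE) circuit can be evaluated directly; a sketch with the Rs and Rct values reported above and assumed (not fitted) CPE parameters:

```python
import numpy as np

# Sketch: impedance of Rs in series with (Rct parallel to a CPE).
# Z_CPE = 1 / (Q0 * (j*omega)^n); Q0 and n are assumed values.
Rs, Rct = 1.21, 1.7                   # Ohm (values reported for NiFeOx-NF)
Q0, n_cpe = 5e-3, 0.9                 # assumed CPE parameters

f = np.logspace(0, 5, 200)            # 1 Hz - 100 kHz, as in the measurements
w = 2 * np.pi * f
Z_cpe = 1.0 / (Q0 * (1j * w) ** n_cpe)
Z = Rs + 1.0 / (1.0 / Rct + 1.0 / Z_cpe)
# Nyquist plot: Re(Z) vs -Im(Z); the semicircle diameter approximates Rct.
print(Z.real.min(), Z.real.max())
```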
We also investigated the catalytic performance of other catalysts, including NiFe(OH)x-NF, NF, as well as the benchmark Pt/C and RuO2 catalysts, for glucose oxidation (Supplementary Fig. 8; also see entries 7−10, Table 1). Among all the examined catalysts, the NiFeOx-NF electrode demonstrated the highest catalytic activity, with the lowest Eonset and Ej = 100 mA cm−2 values. The highest catalytic activity of the NiFeOx-NF electrode could be attributed to the following two reasons: (1) it possessed the highest number of catalytic active sites as indicated from the ECSA (Supplementary Fig. 9); and (2) it had the lowest charge-transfer resistance (Supplementary Fig. 10), which means a high electron transport rate and rapid catalytic kinetics. The rapid catalytic kinetics of the NiFeOx-NF electrode can also be revealed by its lowest Tafel slope (Fig. 4b).
The effect of glucose concentration on glucose oxidation was studied with the NiFeOx-NF electrode (Table 1). Supplementary Fig. 11a shows the LSV profiles of the NiFeOx-NF for glucose oxidation at various concentrations. The current density at E = 1.25 V had a linear relationship with the glucose concentration over the 0−150 mM range (Supplementary Fig. 11b). The current density did not change with a further increase in glucose concentration (150−500 mM). The electrochemical glucose oxidation thus followed first-order kinetics at low glucose concentrations, but changed to zero-order kinetics once the glucose concentration exceeded 150 mM. A similar result was reported for glucose oxidation on RuO2 electrodes in 1 M NaOH solution56.
The chronoamperometric measurements of glucose oxidation were conducted at a constant potential of 1.30 V, and the concentrations of glucose and its oxidation products (GNA and GRA) were monitored with HPLC, as detailed in Supplementary Method 2 and Supplementary Fig. 12. The concentrations of the products and reactant as a function of reaction time are shown in Fig. 4e (initial concentration: 10 mM). These results indicate that GNA was the initial product, which was then converted into GRA. The glucose conversion after 120 min of reaction was 98.3%, with a combined GNA plus GRA yield of 91.9% and a Faradaic efficiency (FE) for GNA plus GRA production of 87% (entry 3, Table 1). We also studied the potential decomposition of glucose in an alkaline aqueous solution at open circuit with 1H and 13C nuclear magnetic resonance (NMR) and 2D-HSQC NMR (Supplementary Fig. 13). Glucose (10 mmol/L) was not degraded after 24 h in the alkaline solution, suggesting that no significant self-decomposition of glucose occurred under the reaction conditions. This result is consistent with other glucose oxidation studies in alkaline solutions57,58.
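The Faradaic efficiency quoted above follows from comparing the charge required to form the measured products with the total charge passed; a minimal sketch (electron counts from Fig. 3: 2 e− for glucose → GNA, 6 e− in total for glucose → GRA; the input numbers are illustrative, not the measured values):

```python
# Sketch: Faradaic efficiency for combined GNA + GRA formation.
F = 96485.0   # Faraday constant, C mol^-1

def faradaic_efficiency(mol_gna, mol_gra, total_charge_C):
    q_products = F * (2.0 * mol_gna + 6.0 * mol_gra)
    return q_products / total_charge_C

# Illustrative inputs only:
print(faradaic_efficiency(mol_gna=1e-4, mol_gra=8e-4, total_charge_C=555.0))
```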
Five successive cycles of chronoamperometric measurements were conducted to evaluate the stability and durability of the NiFeOx-NF electrode for glucose oxidation. The conversion of glucose slightly decreased from 98.3 to 91.2% after five cycles (Fig. 4f), and the reaction rate slightly decreased from 1.36 × 10−4 to 1.27 × 10−4 mmol glucose s−1 over these five successive cycles, but the FE values (Supplementary Fig. 14) of each cycle remained almost unchanged. The XRD patterns (Supplementary Fig. 15) of the catalyst after five cycles show that NiFe2O4 was still the main crystalline phase. The XPS spectra of the reused NiFeOx-NF catalyst (Supplementary Fig. 16) show a new peak at 857.4 eV in the Ni 2p spectrum, which could be attributed to Ni(III) species, suggesting that some Ni(III) species were formed in the anodic oxidation process. It is reported that the formation of Ni(III) is important for the anodic oxidation of organic compounds, because the higher-valence state of Ni facilitates the adsorption of OH− ions and reduces the energy barrier for the transition of Ni species from lower to higher valence states, promoting anodic oxidation reactions59. The SEM image (Supplementary Fig. 17) shows that the nanosheet structure of the NiFeOx remained almost unchanged after reuse. The EDS mapping results (Supplementary Fig. 18) of the reused catalyst demonstrate that the Fe and Ni elements were still evenly distributed on the NF.
To gain insights into the mechanism of the anodic glucose oxidation over the NiFeOx-NF catalyst (Fig. 5), in situ ATR-FTIR, 2D-HSQC NMR and LC-MS analyses were performed. The IR spectra collected in the potential step experiments ranging from 1.0 to 1.6 V provide useful information on the glucose oxidation pathways (Fig. 6a). Cleavage of the C−C bond did not occur because of the mild conditions of the electrochemical oxidation process. Thus, the possible intermediates and products of glucose oxidation were proposed as follows: GNA and gluconolactone (oxidation of the H−C=O group to COOH at the C1 position, two-electron transfer), glucuronic acid (oxidation of the CH2OH group to H−C=O at the C6 position, two-electron transfer), and GRA (further oxidation of the H−C=O group at the C6 position, two-electron transfer). The IR bands were assigned and the reaction intermediates identified by comparison with reference spectra (Supplementary Fig. 19 and Supplementary Table 1). The bands appearing synchronously at wavenumbers of 1573−1506 cm−1 and 1483−1431 cm−1 were attributed to the asymmetric and symmetric stretching vibrations of O−C−O, respectively, confirming the formation of glucuronic acid (pyranuronic form). No C−C bond breaking occurred in the electrochemical glucose oxidation process, as no C−C bond cleavage products (CO: 1900−2100 cm−1 and CO32−: 1390−1400 cm−1) were observed in the IR spectra. However, the complete oxidation of glucose into GNA and GRA could not be achieved in the in situ ATR-FTIR reactor due to insufficient reaction time. Note also that the formation of GNA and GRA could not be differentiated by the FTIR spectra alone, as these two compounds have very similar functional group compositions. In addition to the IR spectra, 1H, 13C and 2D HSQC NMR as well as LC-MS analyses were also performed. The 1H and 13C NMR spectra of the reactant and the 6-h products from glucose electrolysis are presented in Supplementary Fig. 20. The signals at chemical shifts of 0 and 2.50 ppm could be attributed to the reference TMS (tetramethylsilane) and DMSO-H6 (a main impurity of the deuterated DMSO solvent for NMR), while the signals at chemical shifts of 3−6 ppm could be ascribed to glucose and its oxidation products. A very small signal could be found at a chemical shift of 11.1 ppm, which could be attributed to the H atom of the COOH groups. This small signal does not mean that the COOH content in the product was very low, because the intensity of the 1H NMR signal for COOH is not proportional to the concentration of COOH in the samples; such a weak H signal is likely attributable to the mobility of the H in the COOH group60,61. Nevertheless, the presence of COOH in the reacted sample was confirmed by the 13C NMR spectra. The C1, C6 (COOH) regions in the 2D HSQC NMR spectrum (Fig. 6b) could be attributed to GNA, GRA and guluronic acid. To further identify these compounds, we examined the 13C NMR of standard glucose and its main oxidation products (e.g., GNA, GRA, and guluronic acid). As shown in Supplementary Fig. 21, the presence of glucose was confirmed by the characteristic peaks at chemical shifts of 60.9 and 92.4 ppm, while the presence of GNA could be observed via the characteristic peak at a chemical shift of 64.4 ppm and the signal of COOH (δ = 175.4 ppm).
The presence of GRA could be observed via the characteristic peak at a chemical shift of 69.7 ppm and the signal of COOH (δ = 175.5 ppm), while the presence of guluronic acid could be observed via the characteristic peak at a chemical shift of 97.2 ppm and the signal of COOH (δ = 175.7 ppm). The presence of 1,5-gluconolactone could be inferred from the C1, C6 (R−O−C=O) regions in Fig. 6b. The LC-MS analysis of the reaction solution after 6-h electrolysis (Supplementary Fig. 22) further confirms the presence of GNA, GRA and intermediates such as 1,5-gluconolactone and guluronic acid.
Fig. 5: Possible reaction mechanism.
Schematic illustration of the reactions occurring in the electrochemical oxidation of glucose to GRA.
Fig. 6: Analysis of the possible products of the glucose electrolysis process.
a In situ ATR-FTIR spectra collected at a potential ranging from 1.0 to 1.6 V vs. RHE with a step of 100 mV. b 2D HSQC NMR spectrum of the reaction solution after 6-h electrolysis (initial glucose concentration: 10 mM, potential: 1.3 V).
Cathodic HER
The cathodic HER under alkaline conditions was also investigated using the Ni-Fe-OH-based electrodes together with other electrodes, as shown in the LSV profiles in Fig. 7a. In contrast to the case of glucose oxidation, the NiFeNx-NF electrode showed a higher HER activity in 1 M KOH solution than the NiFeOx-NF and NiFe(OH)x-NF electrodes. The overpotentials needed to reach current densities of 10 and 100 mA cm−2 with the NiFeNx-NF electrode were 40.6 and 104 mV, respectively, higher than those of the 20% Pt/C catalyst (Supplementary Fig. 23). The Tafel slope of the NiFeNx-NF electrode for HER was calculated as 39 mV dec−1, much lower than those of the NiFe(OH)x-NF (142 mV dec−1) and NiFeOx-NF (97 mV dec−1) electrodes (Fig. 7b), further confirming that NiFeNx-NF is an efficient cathodic HER catalyst. For comparison, an in situ grown iron−nickel nitride nanostructure (Ni3FeN-NF) was reported to deliver 10 mA cm−2 at an overpotential of 75 mV with a Tafel slope of 98 mV dec−1 (ref. 62). The data in Supplementary Table 2 further confirm that, with its low overpotential (40.6 mV for 10 mA cm−2) and Tafel slope (39 mV dec−1), the NiFeNx-NF electrode is among the best-performing noble-metal-free HER catalysts in alkaline solution. The EIS spectra (Supplementary Fig. 24) indicate a much lower charge-transfer resistance for the NiFeNx-NF electrode than for the NiFeOx-NF and NF electrodes.
Fig. 7: Cathodic HER of the catalyst.
a LSV profiles of the different electrodes for HER in 1 M KOH electrolyte (scan rate of 5 mV s−1). b Corresponding Tafel plots. c Comparison of LSV profiles of the NiFeNx-NF catalyst for HER with and without 100 mM of glucose. d Chronoamperometric test of the NiFeNx-NF catalysts for HER with 100 mM of glucose, left inset: LSV profiles of the NiFeNx-NF catalysts before and after 24-h chronoamperometric test, right inset: amplified region of the chronoamperometric curve.
In order to integrate the anodic glucose oxidation with the cathodic HER, the cathodic electrode must retain stable catalytic activity for HER in the presence of glucose, as the glucose in the anode compartment of the electrolyzer can cross over the anion-exchange membrane into the cathode compartment. Thus, the glucose tolerance of the NiFeNx-NF electrode for HER was evaluated at the same glucose concentration as found in the anode compartment. Figure 7c shows that the LSV profiles of the NiFeNx-NF electrode with and without 100 mM of glucose were very similar: only about 20 mV of additional overpotential was required to reach the same current density when 100 mM of glucose was present in the cathode compartment. A 24-h chronoamperometry test of HER was also conducted in the presence of 100 mM glucose at a potential of −0.135 V (Fig. 7d and right inset), which shows that the electrocatalytic current density of the NiFeNx-NF electrode remained almost unchanged, confirming its strong glucose tolerance. The LSV profiles of the fresh and 24-h-used NiFeNx-NF electrodes exhibited no difference (left inset of Fig. 7d), demonstrating the stability of the electrode.
A glucose electrolyzer was constructed using NiFeOx-NF as the anode and NiFeNx-NF as the cathode, separated by an anion-exchange membrane, in an H-cell with 0.5 M glucose + 1 M KOH and 1 M KOH as the anodic and cathodic electrolytes, respectively (Supplementary Fig. 25). A water electrolyzer with the same configuration was also constructed in the absence of glucose. The LSV profiles (Fig. 8a) show that the glucose electrolysis exhibited a higher current density than the water electrolysis at the same cell potential. The low cell voltages required for the glucose electrolyzer also compare favorably to the values reported for electrolyzers for the oxidation of many other organic compounds (e.g., urea, ethanol, benzyl alcohol) (Supplementary Table 3), demonstrating the higher energy efficiency of glucose electrolyzers using the NiFeOx-NF and NiFeNx-NF electrodes. After 24 h at a cell voltage of 1.4 V, 21.3% of the glucose was converted, with GRA and GNA yields of 11.6% and 4.7%, respectively. A high glucose concentration of 0.5 M was used to ensure that the current density did not fade over the entire electrolysis process, because the electrolysis current density decreases at low glucose concentrations (less than 0.15 M) (Supplementary Fig. 11).
Fig. 8: Process evaluation of the glucose electrochemical oxidation.
a Comparison of glucose electrolysis and water electrolysis with the same anodic NiFeOx-NF and cathodic NiFeNx-NF catalysts (scan rate: 5 mV s−1, glucose concentration: 100 mM, 1 M KOH). b Long-term stability of the glucose electrolysis at a cell voltage of 1.4 V. c Comparison of the revenues and costs for electrocatalytic glucose oxidation and nonelectrocatalytic oxidation processes (HNO3 oxidation) for GRA production (1000 tons GRA per year).
The H2 production in the glucose electrolyzer was observed by the evolution of bubbles at the cathode, and further confirmed by gas chromatography analysis. The long-term stability of the glucose electrolyzer was evaluated via chronoamperometry, which shows that the electrolyzer delivered a current density of 101.2 mA cm−2 at a voltage of 1.4 V and exhibited less than 4% decrease in this value (97.8 mA cm−2 remaining) after 24-h operation (Fig. 8b). It should be mentioned that in the chronoamperometry test, no bubbles were observed at the anode, confirming that the competing OER process did not occur in the glucose electrolysis process.
The economic feasibility of the electrocatalytic and nonelectrocatalytic glucose oxidation strategies was estimated assuming a production scale of 1000 tons GRA per year63. The minimum selling price (MSP) of GRA was calculated via a discounted cash flow analysis, setting the NPV (net present value) to 0, using the economic and technological assumptions and parameters shown in Supplementary Methods 3 and 4 and a process simulated with ASPEN Plus (Aspen Engineering V8.4, AspenTech, USA) (Supplementary Figs. 26–29 and Supplementary Tables 6−15). As shown in Supplementary Table 12, the electrochemical glucose oxidation has lower capital costs ($10.5 vs. $19.3 million), lower raw material costs ($1.2 vs. $1.9 million yr−1), lower operating costs ($5.9 vs. $7.6 million yr−1) and higher revenues ($17.2 vs. $15.1 million yr−1) than the chemical glucose oxidation. The MSP of GRA for the electrocatalytic glucose oxidation approach is calculated to be $9.32 kg−1 (Fig. 8c). For comparison, the MSP of GRA for the nonelectrocatalytic oxidation process is $17.04 per kg.
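The MSP calculation can be sketched as a simple root-finding problem: pick the selling price that makes the project NPV zero over its lifetime. A highly simplified Python stand-in for the ASPEN-based analysis (no taxes, byproduct credits, or working capital; all input numbers are illustrative assumptions, not the paper's values):

```python
# Sketch: minimum selling price (MSP) via NPV = 0; grossly simplified
# relative to the full techno-economic model. Inputs are assumptions.

def npv(price, capex, opex, tons_per_yr, years=20, rate=0.10):
    cash = price * tons_per_yr * 1000.0 - opex        # $ per year
    return -capex + sum(cash / (1 + rate) ** t for t in range(1, years + 1))

def msp(capex, opex, tons_per_yr, lo=0.0, hi=100.0):
    for _ in range(60):                               # bisection on price
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if npv(mid, capex, opex, tons_per_yr) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(f"MSP ~ ${msp(10.5e6, 5.9e6, 1000):.2f} per kg")
```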
In Supplementary Table 4, the electrocatalytic glucose oxidation process proposed in this work is compared with conventional chemical oxidation23,24 and microbial fermentation19,20 processes for GRA production to highlight the sustainability of glucose electrolysis. The electrocatalytic glucose oxidation process has several advantages over the other two processes because of its higher GRA yield, shorter reaction time and lower E-factor (the mass ratio of the generated waste to target products). The electrocatalytic glucose oxidation process has lower operation and downstream separation costs and a much smaller environmental impact. One main challenge for the electrocatalytic glucose oxidation process is that it will require large amounts of KOH (370 tons KOH for production of 1000 tons GRA) and the associated equipment. Fortunately, KOH is not directly discharged into the environment after the reaction, but converted into K2SO4 (650 tons per year), which can be sold as a byproduct to produce fertilizers.
In conclusion, an electrolysis method was developed to convert glucose into GRA and H2. The NiFeOx-NF and NiFeNx-NF electrodes derived from NiFe LDH nanosheet arrays demonstrate high activity toward anodic glucose oxidation and cathodic HER, respectively. The low onset potential (1.13 V) and Tafel slope (19 mV dec−1) confirm that glucose oxidation is much more favorable than OER, leading to a high Faradaic efficiency (87%) toward GNA and GRA production. A two-electrode glucose electrolyzer constructed with NiFeOx-NF as the anode for glucose oxidation and NiFeNx-NF as the cathode for HER can deliver a current density of 200 mA cm−2 at a voltage of only 1.48 V and run stably for 24 h, placing such a glucose electrolyzer among the best organic compound electrolyzers with noble-metal-free electrodes reported so far. Because these catalyst materials are earth-abundant and the process offers cost-effective and energy-saving production of value-added chemicals like GRA together with H2, this organic compound electrolysis strategy is expected to be a promising and sustainable route for the valorization of biomass feedstocks.
Synthesis of NiFe-OH nanosheets-based electrocatalysts
The chemicals and materials used in this work are listed in Supplementary Note 1. The NiFe hydroxide nanosheets were grown on the NF via a facile hydrothermal approach as follows: FeCl3•6H2O (540.6 mg, 2 mmol) was dissolved in 10 mL of H2O, and the solution was transferred to a 50 mL Teflon-lined stainless-steel autoclave. A piece of NF was immersed in the Fe(III) solution and sonicated for 30 min. Urea (900 mg, 15 mmol) was dissolved in 10 mL of H2O and mixed with the Fe(III) solution, and then 20 mL of ethanol was added. The autoclave was subsequently sealed and heated at 160 °C for 24 h. Afterwards, the NF was washed with H2O and ethanol several times to obtain the NiFe hydroxide nanosheets (denoted as NiFe(OH)x-NF). The NiFe(OH)x-NF was then heated in air or in an NH3-Ar gas mixture (20:80) at 300 °C (heating rate 2 °C min−1) for 3 h to obtain the Ni-Fe oxides (denoted as NiFeOx-NF) or nitrides (denoted as NiFeNx-NF), respectively.
Electrochemical measurements
The glucose electrochemical oxidation was conducted with a CHI 760D electrochemical workstation (ChenHua Instrument Inc., China), in which the Ag/AgCl electrode was used as reference electrode, the as-synthesized catalysts on NF electrodes (NiFeOx-NF, NiFeNx-NF, and NiFe(OH)x-NF) were used directly as working electrodes, and a Pt wire was used as counter electrode (unless otherwise stated). The tested potential vs. Ag/AgCl can be converted into potential vs. RHE via the Nernst equation listed as follows:
$$E_{\mathrm{RHE}} = E_{\mathrm{Ag/AgCl}} + 0.059\,\mathrm{pH} + 0.197.$$
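For instance, a measured potential can be converted to the RHE scale as follows (a trivial Python sketch; the example numbers are illustrative):

```python
# Sketch: Ag/AgCl -> RHE conversion using the relation above.
def to_rhe(e_ag_agcl_V, pH):
    return e_ag_agcl_V + 0.059 * pH + 0.197

print(to_rhe(0.30, 13.9))   # 0.30 V vs Ag/AgCl in 1 M KOH -> ~1.32 V vs RHE
```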
The water electrolysis experiments were conducted in an H-type electrochemical cell, in which the anode and cathode electrolytes were both 1 M KOH (100 mL), and an anion-exchange membrane (AMI-7001, Membranes International Inc., USA) was used to separate the cathode and anode compartments. The glucose electrolysis experiments were conducted in the same manner as the water electrolysis except that the anode electrolyte was 1 M KOH solution containing glucose (0−100 mM). The polarization curves were collected with LSV at 5 mV s−1. The stability of the catalysts for glucose electrolysis was evaluated through chronoamperometry at 1.30 V in 100 mL of glucose solution (10 mM in 1 M KOH) for five successive cycles. EIS spectra were recorded at 1.3 V and −0.3 V for glucose electrolysis and HER, respectively, over a frequency range from 1 Hz to 100 kHz. The concentration profiles of glucose and its oxidation products in the electrolysis process were analyzed with an HPLC equipped with a refractive index detector.
All data are available from the corresponding authors upon reasonable request.
Besson, M., Gallezot, P. & Pinel, C. Conversion of biomass into chemicals over metal catalysts. Chem. Rev. 114, 1827–1870 (2014).
Mika, L. T., Cséfalvay, E. & Németh, Á. Catalytic conversion of carbohydrates to initial platform chemicals: chemistry and sustainability. Chem. Rev. 118, 505–613 (2017).
Liu, C., Wang, H., Karim, A. M., Sun, J. & Wang, Y. Catalytic fast pyrolysis of lignocellulosic biomass. Chem. Soc. Rev. 43, 7594–7623 (2014).
Zakzeski, J., Bruijnincx, P. C. A., Jongerius, A. L. & Weckhuysen, B. M. The catalytic valorization of lignin for the production of renewable chemicals. Chem. Rev. 110, 3552–3599 (2010).
Huber, G. W., Iborra, S. & Corma, A. Synthesis of transportation fuels from biomass: chemistry, catalysts, and engineering. Chem. Rev. 106, 4044–4098 (2006).
Bozell, J. J. Connecting biomass and petroleum processing with a chemical bridge. Science 329, 522–523 (2010).
Zhang, Z. & Huber, G. W. Catalytic oxidation of carbohydrates into organic acids and furan chemicals. Chem. Soc. Rev. 47, 1351–1390 (2018).
van Putten, R.-J. et al. Hydroxymethylfurfural, a versatile platform chemical made from renewable resources. Chem. Rev. 113, 1499–1597 (2013).
Ragauskas, A. J. The path forward for biofuels and biomaterials. Science 311, 484–489 (2006).
Mangiameli, M. F., González, J. C., Bellú, S., Bertoni, F. & Sala, L. F. Redox and complexation chemistry of the Cr VI/Cr V-d-glucaric acid system. Dalton Trans. 43, 9242–9254 (2014).
Mehtiö, T. et al. Production and applications of carbohydrate-derived sugar acids as generic biobased chemicals. Crit. Rev. Biotechnol. 36, 904–916 (2016).
Khaw, B.-A., Narula, J. & Hartner, W. Attenuation of microvascular injury in preconditioned ischemic myocardium: imaging with Tc-99m glucaric acid and In-111 human fibrinogen. Curr. Mol. Imag. 4, 20–28 (2015).
Houson, H. A., Nkepang, G. N., Hedrick, A. F. & Awasthi, V. Imaging of isoproterenol-induced myocardial injury with 18F labeled fluoroglucaric acid in a rat model. Nucl. Med. Biol. 59, 9–15 (2018).
Sinch, J. & Gupta, K. P. Calcium glucarate prevents tumor formation in mouse skin. Biomed. Environ. Sci. 16, 9–16 (2003).
Grand-View-Research. Glucaric Acid Market Size & Share. Global Industry Report (2017).
Derrien, E., Marion, P., Pinel, C. & Besson, M. Influence of residues contained in softwood hemicellulose hydrolysates on the catalytic oxidation of glucose to glucarate in alkaline aqueous solution. Org. Process Res. Dev. 20, 1265–1275 (2016).
Boussie, T. R., Dias, E. L., Fresco, Z. M. & Murphy, V. J. Production of glutaric acid and derivatives from carbohydrate-containing materials. US Patent 8,785,683 (2014).
Kunz, M., Schwarz, A. & Kowalczyk, J. Method and apparatus for producing di-and more highly oxidized carboxylic acids. US Patent 5,772,013 (1998).
Moon, T. S., Yoon, S.-H., Lanza, A. M., Roy-Mayhew, J. D. & Prather, K. L. J. Production of glucaric acid from a synthetic pathway in recombinant Escherichia coli. Appl. Environ. Microbiol. 75, 589–595 (2009).
Moon, T. S., Dueber, J. E., Shiue, E. & Prather, K. L. J. Use of modular, synthetic scaffolds for improved production of glucaric acid in engineered E. coli. Metab. Eng. 12, 298–305 (2010).
Liu, Y. et al. Production of glucaric acid from myo-inositol in engineered Pichia pastoris. Enzym. Microb. Technol. 91, 8–16 (2016).
Rivertop Renewables begins commercial production of glucaric acid. http://www.rivertop.com/products/glucarates.php (2015).
Mehltretter, C. & Rist, C. Sugar oxidation, saccharic and oxalic acids by the nitric acid oxidation of dextrose. J. Agric. Food Chem. 1, 779–783 (1953).
Mustakas, G., Slotter, R. & Zipf, R. Pilot plant potassium acid saccharate by nitric acid oxidation of dextrose. Ind. Eng. Chem. 46, 427–434 (1954).
Lee, J., Saha, B. & Vlachos, D. G. Pt catalysts for efficient aerobic oxidation of glucose to glucaric acid in water. Green Chem. 18, 3815–3822 (2016).
Solmi, S., Morreale, C., Ospitali, F., Agnoli, S. & Cavani, F. Oxidation of d-glucose to glucaric acid using Au/C catalysts. ChemCatChem 9, 2797–2806 (2017).
Derrien, E. et al. Aerobic oxidation of glucose to glucaric acid under alkaline-free conditions: Au-based bimetallic catalysts and the effect of residues in a hemicellulose hydrolysate. Ind. Eng. Chem. Res. 56, 13175–13189 (2017).
Jin, X. et al. Synergistic effects of bimetallic PtPd/TiO2 nanocatalysts in oxidation of glucose to glucaric acid: structure dependent activity and selectivity. Ind. Eng. Chem. Res. 55, 2932–2945 (2016).
Qi, P. et al. Catalysis and reactivation of ordered mesoporous carbon-supported gold nanoparticles for the base-free oxidation of glucose to gluconic acid. ACS Catal. 5, 2659–2670 (2015).
Jin, X. et al. Exceptional performance of bimetallic Pt1Cu3/TiO2 nanocatalysts for oxidation of gluconic acid and glucose with O2 to glucaric acid. J. Catal. 330, 323–329 (2015).
Jiang, Y. et al. Gluconic acid production from potato waste by gluconobacter oxidans using sequential hydrolysis and fermentation. ACS Sustain. Chem. Eng. 5, 6116–6123 (2017).
Weber, R. S. Effective use of renewable electricity for making renewable fuels and chemicals. ACS Catal. 9, 946–950 (2019).
Vogt, S., Schneider, M., Schäfer-Eberwein, H. & Nöll, G. Determination of the pH dependent redox potential of glucose oxidase by spectroelectrochemistry. Anal. Chem. 86, 7530–7535 (2014).
Komori, H. et al. An extended-gate CMOS sensor array with enzyme-immobilized microbeads for redox-potential glucose detection. In Biomedical Circuits and Systems Conference (BioCAS), 2014 IEEE 464−467 (IEEE, 2014).
Pasta, M., La Mantia, F. & Cui, Y. Mechanism of glucose electrochemical oxidation on gold surface. Electrochim. Acta 55, 5561–5568 (2010).
Cui, H.-F., Ye, J.-S., Liu, X., Zhang, W.-D. & Sheu, F.-S. Pt–Pb alloy nanoparticle/carbon nanotube nanocomposite: a strong electrocatalyst for glucose oxidation. Nanotechnology 17, 2334 (2006).
Dai, L. et al. Electrochemical partial reforming of ethanol into ethyl acetate using ultrathin Co3O4 nanosheets as a highly selective anode catalyst. ACS Cent. Sci. 2, 538–544 (2016).
Yu, Z.-Y. et al. Ni-Mo-O nanorod-derived composite catalysts for efficient alkaline water-to-hydrogen conversion via urea electrolysis. Energy Environ. Sci. 11, 1890–1897 (2018).
Jian, Z. et al. Hierarchical porous NC@CuCo nitride nanosheet networks: Highly efficient bifunctional electrocatalyst for overall water splitting and selective electrooxidation of benzyl alcohol. Adv. Func. Mater. 27, 1704169 (2017).
Liu, W.-J. et al. Electrochemical oxidation of 5-hydroxymethylfurfural with NiFe layered double hydroxide (LDH) nanosheet Catalysts. ACS Catal. 8, 5533–5541 (2018).
Bin, D. et al. Controllable oxidation of glucose to gluconic acid and glucaric acid using an electrocatalytic reactor. Electrochim. Acta 130, 170–178 (2014).
Ibert, M. et al. Improved preparative electrochemical oxidation of d-glucose to d-glucaric acid. Electrochim. Acta 55, 3589–3594 (2010).
Dang, L. et al. Direct synthesis and anion exchange of noncarbonate-intercalated NiFe-layered double hydroxides and the influence on electrocatalysis. Chem. Mater. 30, 4321–4330 (2018).
Zboril, R., Mashlan, M. & Petridis, D. Iron (III) oxides from thermal processes synthesis, structural and magnetic properties, Mössbauer spectroscopy characterization, and applications. Chem. Mater. 14, 969–982 (2002).
Grosvenor, A., Kobe, B., Biesinger, M. & McIntyre, N. Investigation of multiplet splitting of Fe 2p XPS spectra and bonding in iron compounds. Surf. Interf. Anal. 36, 1564–1574 (2004).
Yamashita, T. & Hayes, P. Analysis of XPS spectra of Fe2+ and Fe3+ ions in oxide materials. Appl. Surf. Sci. 254, 2441–2449 (2008).
Xie, S. et al. Hydrogen production from solar driven glucose oxidation over Ni (OH)2 functionalized electroreduced-TiO2 nanowire arrays. Green Chem. 15, 2434–2440 (2013).
Martínez-Huitle, C. A. & Ferro, S. Electrochemical oxidation of organic pollutants for the wastewater treatment: direct and indirect processes. Chem. Soc. Rev. 35, 1324–1340 (2006).
Jin, S. Are metal chalcogenides, nitrides, and phosphides oxygen evolution catalysts or bifunctional catalysts? ACS Energy Lett. 2, 1937–1938 (2017).
Lv, L., Yang, Z., Chen, K., Wang, C. & Xiong, Y. 2D layered double hydroxides for oxygen evolution reaction: from fundamental design to application. Adv. Energy Mater. 9, 1803358 (2019).
Exner, K. S. Is thermodynamics a good descriptor for the activity? Re-investigation of Sabatier's principle by the free energy diagram in electrocatalysis. ACS Catal. 9, 5320–5329 (2019).
Yu, L. et al. Cu nanowires shelled with NiFe layered double hydroxide nanosheets as bifunctional electrocatalysts for overall water splitting. Energy Environ. Sci. 10, 1820–1827 (2017).
Zhang, J. et al. Single-atom Au/NiFe layered double hydroxide electrocatalyst: probing the origin of activity for oxygen evolution reaction. J. Am. Chem. Soc. 140, 3876–3879 (2018).
Wang, T. et al. NiFe (Oxy) Hydroxides derived from NiFe disulfides as an efficient oxygen evolution catalyst for rechargeable Zn–Air batteries: the effect of surface S residues. Adv. Mater. 30, 1800757 (2018).
Zhou, H. et al. Highly efficient hydrogen evolution from edge-oriented WS2(1–x)Se2x particles on three-dimensional porous NiSe2 foam. Nano Lett. 16, 7604–7609 (2016).
Lyons, M. E., Fitzgerald, C. A. & Smyth, M. R. Glucose oxidation at ruthenium dioxide based electrodes. Analyst 119, 855–861 (1994).
Pasta, M., Ruffo, R., Falletta, E., Mari, C. & Della Pina, C. Alkaline glucose oxidation on nanostructured gold electrodes. Gold. Bull. 43, 57–64 (2010).
Aoun, S. B. et al. Effect of metal ad-layers on Au (1 1 1) electrodes on electrocatalytic oxidation of glucose in an alkaline solution. J. Electroanal. Chem. 567, 175–183 (2004).
Barwe, S. et al. Electrocatalytic oxidation of 5-(Hydroxymethyl)furfural using high-durface-area nickel boride. Angew. Chem. Int. Ed. 57, 11460–11464 (2018).
Iida, T. et al. 1H and 13C NMR signal assignments of carboxyl-linked glucosides of bile acids. Magn. Reson. Chem. 41, 260–264 (2003).
Dong, W., Zhou, Y., Yan, D., Li, H. & Liu, Y. pH-responsive self-assembly of carboxyl-terminated hyperbranched polymers. Phys. Chem. Chem. Phys. 9, 1255–1262 (2007).
Zhang, B. et al. Iron–nickel nitride nanostructures in situ grown on surface-redox-etching nickel foam: efficient and ultrasustainable electrocatalysts for overall water splitting. Chem. Mater. 28, 6934–6941 (2016).
Kim, H. J. et al. Coproducing value-added chemicals and hydrogen with electrocatalytic glycerol oxidation technology: experimental and techno-economic investigations. ACS Sustain. Chem. Eng. 5, 6626–6634 (2017).
Female genital mutilation/cutting in Italy: an enhanced estimation for first generation migrant women based on 2016 survey data
Livia Elisa Ortensi (ORCID: 0000-0003-1163-8440),
Patrizia Farina &
Els Leye
Migration flows of women from Female Genital Mutilation/Cutting practicing countries have generated a need for data on women potentially affected by Female Genital Mutilation/Cutting. This paper presents enhanced estimates for foreign-born women and asylum seekers in Italy in 2016, with the aim of supporting resource planning and policy making, and advancing the methodological debate on estimation methods.
The estimates build on the most recent methodological developments in Female Genital Mutilation/Cutting direct and indirect estimation for Female Genital Mutilation/Cutting non-practicing countries. Direct estimation of prevalence was performed for 9 communities using the results of the survey FGM-Prev, held in Italy in 2016. Prevalence for communities not involved in the FGM-Prev survey was estimated using the 'extrapolation-of-FGM/C countries prevalence data method' with corrections according to the selection hypothesis.
It is estimated that 60 to 80 thousand foreign-born women aged 15 and over with Female Genital Mutilation/Cutting are present in Italy in 2016. We also estimated the presence of around 11 to 13 thousand cut women aged 15 and over among asylum seekers to Italy in 2014–2016. Due to the long-established presence of female migrants from some practicing communities, Female Genital Mutilation/Cutting is also emerging as an issue among women aged 60 and over from selected communities. Female Genital Mutilation/Cutting is an additional source of concern for slightly more than 60% of women seeking asylum.
Reliable estimates on Female Genital Mutilation/Cutting at country level are important for evidence-based policy making and service planning. This study suggests that indirect estimations cannot fully replace direct estimations, even if corrections for migrant socioeconomic selection can be implemented to reduce the bias.
Female genital mutilation/cutting (FGM/C) is an umbrella term for any procedure of modification, partial or total removal or other injury to the female genital organs for non-medical reasons [1]. In 1990 the Inter-African Committee on Traditional Practices Affecting the Health of Women and Children adopted the term 'female genital mutilation'. However, as objections have been raised to this terminology, the more culturally sensitive term 'female genital cutting' or the more complete term 'female genital mutilation/cutting (FGM/C)' has become widely used among researchers and international development agencies. FGM/C is recognized internationally as an 'irreparable, irreversible abuse', a violation of human rights and an extreme form of discrimination against women [2]. Although it occurs differently across communities, regions and countries, research has underlined some recurrent factors underpinning FGM/C, such as cultural tradition, sexual morals, marriageability, religion, perceived health benefits and male sexual enjoyment [3, 4].
According to the last available estimates for the 31 FGM/C practicing countries in Africa, the Middle East and Asia with available data from national household surveys (30 plus the new country of South Sudan), more than 200 million girls and women alive today have been cut [5]. This estimate accounts neither for other known FGM/C practicing countries (e.g., Malaysia) nor for women living in western countries as a consequence of female emigration flows from practicing countries to areas where FGM/C was previously unknown, such as Europe, Australia or North America [6]. These migration flows have generated a need for data on the prevalence of women potentially affected by FGM/C, whose importance has been reaffirmed by the European Parliament in 2014 [7] and the Istanbul Convention of the Council of Europe [8]. Data on FGM/C are a fundamental tool for targeted and evidence-based policy making in western countries [9]. Building on the most recent methodological developments in FGM/C direct and indirect estimation for non-FGM/C practicing countries, this paper presents detailed estimates for foreign-born women and asylum seekers aged 15 and over with FGM/C in Italy in 2016, with the aim of supporting resource planning and policy making.
Even though detailed information is needed for the planning and commissioning of health services, as well as to calibrate policies towards the discontinuation of the practice, data on FGM/C are less reliable in the countries of emigration because data based on surveys are usually unavailable. Researchers aiming at estimating the number of women affected by FGM/C must overcome two major challenges: determining a reliable number of women living in emigration (including hypothetically irregular stayers, naturalized women and second generations) and estimating the prevalence among different national groups.
As for the first issue mentioned, examples of the data used as a basis for estimates include labor force surveys [10], population census or survey data on smaller census samples [11, 12], residence permits [13, 14], population's or foreigners' registers [15, 16] and data on school attendance [17]. In some studies, data on women requesting political asylum and on unaccompanied female minors who were not asylum seekers are also included [18], as citizens from FGM/C practicing countries are usually well represented among this particular subpopulation. Omission of undocumented migrants, second generations and naturalized citizens causes an underestimation of women with FGM/C. Despite this awareness, data covering all women potentially affected by or at risk of FGM/C are rarely available.
The second issue is related to prevalence estimation. Most studies build on the application of prevalence data observed in FGM/C practicing countries to women with a practicing country background living abroad [11, 19, 20]. This technique, known as 'indirect estimation' or 'extrapolation-of-FGM/C countries prevalence data method', is the most systematic, least complex and least costly way of estimating the number of women with FGM/C in Western country settings [21]. However, despite the multiple advantages, the method does not provide a real picture of the phenomenon. Indirect estimation is, in fact, only a combination of FGM/C trends observed in practicing countries and of trends in female migration flows in countries of emigration. The technique has strong methodological limitations as it fails to consider the process of social, geographical and age selection of migrants [22]. Evidence from FGM/C practicing countries indicates that some individual characteristics, such as belonging to younger age cohorts, having higher levels of wealth and education or urban residence, are usually correlated with a lower occurrence of FGM/C [23]. At the same time, the recent surge in studies on contemporary African migration has confirmed the existence of mechanisms of positive selection in international flows from Africa, not least because of the relatively high costs of the journey to Europe [24,25,26,27]. The same correlations between migration and good levels of education, middle class status and a young age have also been observed for the subgroup of African female migrants, suggesting a direct impact on the occurrence of FGM/C among immigrants [28,29,30,31]. The estimation of FGM/C occurrence among the second generation, usually considered less at risk than first generations, is also a challenge [32] because the effect of migration on the risk is difficult to assess and can vary according to contexts and communities. For this reason, second generations have not been included in this study.
In the field of indirect estimation, recent efforts have been aimed at developing corrections to reduce the bias derived from the application of national estimations to immigrant communities. The work of Exterkate [33] on Dutch data underlines the role of age- and region-specific FGM/C prevalence data to obtain the most realistic approximations of prevalence in immigrant communities. Ortensi and colleagues [22] aimed at obtaining some coefficients in order to correct indirect estimation on the basis of the expected socioeconomic composition of migrants' flows (the selection hypothesis method). Finally Andro and colleagues [12] corrected indirect estimation on the basis of the women's ages at arrival and their places of birth.
At the same time, to overcome limitations related to indirect FGM/C prevalence estimation, researchers are increasingly trying to develop methodologies aimed at the direct estimation of FGM/C. The European Directorate-General for Justice has recently funded the Daphne Project FGM-Prev (Grant just/2013/dap/ag/5636) in order to promote a pilot study to test a replicable methodology to estimate FGM/C in Europe [34]. Results from two fieldwork-based studies in Italy and Belgium and the lessons thereby learned have been discussed extensively among experts in order to enhance the possibility of repeating a direct study on FGM/C in a growing number of countries [34].
The current study builds on both direct and indirect methodology, aiming to produce an updated and enhanced estimation for Italy in 2016 according to the suggestions of Leye and colleagues [20].
Data on the presence of women in Italy were extracted from the Eurostat database:
Foreign-born women from practicing countries by five year age group (migr_pop3ctb) as of 1 January 2016
First time asylum applicants by citizenship, age and sex, annual data (migr_asyappctza) years 2014–2016
These data are available for most EU member states.
Data on the prevalence of FGM/C for women born in Nigeria, Egypt, Eritrea, Senegal, Burkina Faso, Somalia and the Ivory Coast were obtained from the survey conducted in Italy as a part of the Daphne project FGM-Prev. In order to estimate the prevalence of FGM/C in the main communities from FGM/C practicing countries in Italy, a survey was conducted from June to December 2016 covering 1378 women aged 18 and over living in Italy. The methodology developed in the FGM-Prev project is a combination of facility-based and respondent-driven sampling. The survey was conducted in many Italian cities, also covering suburban and mountain areas. The FGM/C status was self-reported by the women interviewed and no physical examination was performed in relation to the survey. The interviews were carried out by a team of female foreign interviewers well acquainted with the issues, and belonging to the communities selected in the sample, who were thus able to translate and formulate questions appropriately. This has been a key factor in facilitating intimate conversation among women, helping to reduce voluntary underreporting. We are however aware that these data share most of the limitations expected of surveys on hard-to-reach populations [34] and of surveys based on self-reported data on FGM/C status [35].
Prevalence data on FGM/C by 5 year age group were obtained from the latest available DHS [36], MICS (Multiple Indicators Cluster Surveys) [37], PHS (Population and Health Surveys) [38]; or HHS data (Household and Health Survey) [39]. These surveys are the main sources of information about FGM/C in practicing countries [40].
Exceptions are the data for Indonesia, which were taken from UNICEF [41], and the data for South Sudan, which were retrieved from Oxfam [42], which estimated prevalence using unpublished data from the Southern Sudan Household Survey of 2010. For Indonesia, the prevalence is available only for girls aged 0–11 and should therefore be considered a minimum value, while for South Sudan, the prevalence is available for women aged 15–49 without detail by 5-year age group. Detailed information on the sources used can be found in column (c) of Table 1.
Table 1 Estimated prevalence of FGM/C among foreign-born women from FGM/C practicing countries. Italy 2016
Prevalence for communities i included in the FGM-Prev survey (Nigeria, Egypt, Eritrea, Senegal, Burkina Faso, Somalia and Ivory Coast) was obtained directly. The subsample for each community was not large enough to allow the calculation of a 5-year age-group prevalence, so, for each community, we calculated the proportion of cut women (\( p_j^i \)) aged j = 18 − 34 and j = 35+ . This step was implemented in order to account for broader age differences in FGM/C prevalence and to obtain a more accurate estimation than one based on the overall prevalence of women aged 18 and over. As women aged 15–17 were not included in the survey for ethical reasons (minors), we applied the 18–34 age prevalence to this group.
For countries i included in the survey, the number of women aged 15 and above with FGM/C was calculated as
$$ \overline{W^i}=\sum_{j\in\{15\text{–}34,\;35+\}} p_j^i\,W_j $$
where \( W_j \) is the number of women aged j born in country i and living in Italy as of 1 January 2016 according to Eurostat data.
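As a concrete illustration, the direct estimate for a surveyed community can be computed in a few lines. This is a minimal sketch assuming the age-band prevalences and Eurostat counts are supplied as plain dictionaries; all figures are placeholders, not the FGM-Prev or Eurostat values:

```python
# Direct estimation for a surveyed community i:
# expected cut women = sum over age bands j of p_j^i * W_j.

def direct_estimate(prevalence_by_band, women_by_band):
    """Sum p_j^i * W_j over the age bands j (15-34 and 35+)."""
    return sum(prevalence_by_band[band] * women_by_band[band]
               for band in prevalence_by_band)

# Hypothetical community (illustrative numbers only)
prevalence = {"15-34": 0.60, "35+": 0.75}  # p_j^i; 15-17 reuse the 18-34 value
women = {"15-34": 4000, "35+": 2500}       # W_j from Eurostat migr_pop3ctb
print(direct_estimate(prevalence, women))  # -> 4275.0 expected cut women
```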
Indirect estimation was calculated starting from the last available prevalence data by 5-year age group for each community k lacking a direct estimation based on the FGM-Prev survey. Before applying DHS/MICS prevalence data to the female population from practicing countries in Italy, we applied the procedure of FGM/C prevalence correction for immigrant communities according to the selection hypothesis (the detailed procedure is explained in [22, 43]). The method is based on the theoretical assumption that migration is a selective process and is aimed at reducing the bias arising from the correlation, observed in practicing countries, of FGM/C occurrence with wealth, education and urban residence [23].
The selection hypothesis was implemented excluding the correction for age, as the real 5-year age structure of each community is known in this study.
For each practicing country k, we computed the correction
$$ s_k=\operatorname{mean}\left(\frac{m_{urb,k}}{m_k},\ \frac{m_{hedu,k}}{m_k},\ \frac{m_{hw,k}}{m_k}\right) $$
according to the most recent DHS/MICS/PHS/HHS data available.
\( m_{urb,k} \) is the prevalence of FGM/C among women settled in urban areas in country k,
\( m_{hedu,k} \) is the prevalence of FGM/C among women with a higher level of education in country k,
\( m_{hw,k} \) is the prevalence of FGM/C among women belonging to the highest wealth quintile in country k, and
\( m_k \) is the prevalence of FGM/C among all women in country k.
The use of an unweighted mean is due to the fact that we lack detailed information about the composition of past migrant flows by education level, wealth quintile of the family of origin or place of birth (urban/rural). The correction is expected to capture the order of magnitude and the direction of the difference between national prevalence and overseas community prevalence for communities where other factors correlated with FGM/C prevalence (e.g., a strong geographical or a strong ethnic selection) are not preponderant. The coefficients applied for each community k are reported in column (b) of Table 1. The corrected prevalence \( p_j^k \) under the selection hypothesis is obtained by simply applying the coefficient \( s_k \) to the baseline age-specific prevalence \( P_{fgm/c,j}^k \) for each practicing country k:
$$ p_j^k = P_{fgm/c,j}^k\, s_k $$
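A sketch of the correction step under the selection hypothesis; the DHS/MICS-style prevalence figures below are placeholders chosen only to show the arithmetic:

```python
# Selection-hypothesis correction: s_k is the unweighted mean of the
# prevalence ratios for urban residence, higher education and the highest
# wealth quintile, each relative to the national prevalence m_k.

def selection_coefficient(m_urb, m_hedu, m_hw, m_all):
    return (m_urb / m_all + m_hedu / m_all + m_hw / m_all) / 3.0

s_k = selection_coefficient(m_urb=0.70, m_hedu=0.55, m_hw=0.60, m_all=0.80)

# Corrected 5-year age-group prevalences: p_j^k = P_j^k * s_k
baseline = {"15-19": 0.85, "20-24": 0.88, "25-29": 0.90}  # national P_j^k
corrected = {age: p * s_k for age, p in baseline.items()}
print(round(s_k, 3), corrected)
```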
For communities k not included in the survey, the number of women aged 15 and above with FGM/C was calculated as:
$$ \overline{W^k}=\sum_{j} p_j^k\,W_j,\qquad j=15\text{–}19,\ 20\text{–}24,\ \ldots,\ 65+\ \text{(5-year age groups)} $$
and \( W_j \) is the number of women in the 5-year age group j born in country k and living in Italy as of 1 January 2016 according to Eurostat migr_pop3ctb data.
The final number of estimated foreign-born women with FGM/C is the simple sum of the direct and indirect estimations:
$$ \overline{W}=\sum_i \overline{W^i}+\sum_k \overline{W^k} $$
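Putting the two branches together, the overall total is just a sum over communities. Again a minimal self-contained sketch with illustrative figures only:

```python
# Indirect estimation for one community k, then the overall total.
# All numbers are placeholders, not the paper's estimates.

def indirect_estimate(corrected_prevalence, women_by_age):
    """Sum p_j^k * W_j over the 5-year age groups j = 15-19, ..., 65+."""
    return sum(corrected_prevalence[age] * women_by_age[age]
               for age in corrected_prevalence)

corrected = {"15-19": 0.66, "20-24": 0.68, "25-29": 0.69}  # p_j^k after correction
women_by_age = {"15-19": 300, "20-24": 450, "25-29": 500}  # W_j (Eurostat stock)
w_k = indirect_estimate(corrected, women_by_age)

# Final estimate: direct estimates for surveyed communities i plus
# indirect estimates for all remaining communities k.
direct_totals = {"Nigeria": 15000.0, "Egypt": 14000.0}
indirect_totals = {"Mali": 1200.0, "Sudan": 900.0, "other": w_k}
total = sum(direct_totals.values()) + sum(indirect_totals.values())
print(w_k, total)
```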
Each estimated prevalence was provided with a confidence interval.
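The paper does not state how its confidence intervals were computed; for a survey-based proportion, a Wilson score interval is one standard choice. The sketch below is offered under that assumption only:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a proportion (an assumed method;
    the original interval construction is not specified in the text)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

print(wilson_ci(120, 200))  # e.g. 120 cut women in a subsample of 200
```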
We repeated the same procedure for data on first time asylum applicants in the period 2014–2016. In the application of indirect estimation to first time asylum application data, prevalence based on two age groups (15–34, 35+) was applied due to the structure of Eurostat data.
According to the latest population data, the communities selected in the FGM-Prev survey account for 66% of the foreign-born women from practicing countries in Italy in 2016.
For countries with small differences at the national level in FGM/C prevalence in terms of education, wealth index and urban setting [23], the prevalence estimated by applying the extrapolation-of-FGM/C countries prevalence data method with corrections is substantially unchanged for Italy compared to the national level. This is the case for Mali, Uganda, Sudan and Djibouti. By contrast, for other communities such as Benin, Tanzania, Togo and Cameroon, the expected prevalence in emigration was substantially reduced compared to the country estimation.
The proportion of women with FGM/C among communities varies significantly, ranging from a group of very high prevalence countries (>80%) such as Somalia, Sudan, Mali and Djibouti to a group characterized by a very low prevalence (<2%) such as Uganda, Ghana, Niger, Cameroon and South Sudan (Table 1).
As a consequence of the estimated prevalence rates, 60 to 80 thousand foreign-born women aged 15 and over with FGM/C are present in Italy in 2016 (Table 2).
Table 2 Number of foreign-born women and estimated cut women from FGM/C practicing countries. Italy 2016
Given the combination of large communities and high FGM/C prevalence rates, Nigerian and Egyptian women made up more than half of the foreign-born women with FGM/C. Another 14% of cut women were born in Ethiopia and 7% in Senegal.
The composition of cut women by age is also the result of historic female flows from Africa to Italy. Women from Eritrea, Somalia and Ethiopia were among the first to migrate to Italy, preceding the mass immigration that started at the beginning of the 1990s [44]. The age structures of foreign-born women from Eritrea, Somalia and Ethiopia therefore differ from those of other practicing countries, showing a high proportion of women aged 65 and over (respectively 47.6% among Eritreans, 45.3% among Ethiopians and 23.6% among Somalis, compared to an overall proportion of 8.7%), most of them cut (Table 3). The presence of around 18,000 women aged 60 and above with FGM/C is a new issue for health services dedicated to the elderly in Italy.
Table 3 Number of foreign-born women and estimated cut women from FGM/C practicing countries by 5 year age groups. Italy 2016a
We also estimate the presence of around 11 to 13 thousand cut women among asylum seekers aged 15 and over to Italy during 2014–2016 (Table 4). The presence of around 60% of cut women among such a vulnerable population requires further attention in terms of assistance at their reception in the country. Of course, we are aware that some of these women, especially rejected asylum applicants, may have left Italy. Nigerian women are largely predominant among cut asylum seekers (78.6%). Other groups with an expected large number of cut women are those from Eritrea and Somalia (respectively 7.1% and 6.5% of all expected cut asylum seekers).
Table 4 Estimated number of women with FGM/C and prevalence of FGM/C among asylum applicants. Italy 2014–2016
Reliable data on women with FGM/C are needed to guide effective policies and interventions on health care and prevention. Studies on this topic are key to estimating and allocating the resources needed to meet the actual needs of women who are in potential need of health care for related physical and psychological complications [19]. The use of dedicated surveys instead of indirect estimations is of particular importance because the prevalence found among immigrants may be different from that estimated in the country of origin. Our study shows that in the case of Burkina Faso, Eritrea, Senegal and Somalia, the indirect estimations with corrections according to the selection hypothesis fall within the confidence interval of the direct estimation, although they are sometimes close to the extreme bound (Fig. 1). In the other cases, the correction based on the selection hypothesis (a reduction for Egypt and Ivory Coast and an increase in the case of Nigeria) correctly predicts the direction of the expected variation as compared with the country of origin. However, the intensity of the variation is underestimated, confirming previous results [22].
Fig. 1 Comparison between direct and indirect estimation of prevalence for women aged 15 and over for selected FGM/C practicing countries. Italy 2016. Source: Authors' elaboration from FGM-Prev Survey and DHS/MICS/PHS/HHS surveys
The underestimation of the phenomenon is particularly problematic in the case of Nigeria, one of the main communities affected by FGM/C. The high prevalence observed among Nigerian immigrants is due to the strong geographic selection of flows to Italy. Most flows from Nigeria to Italy are from Edo State, but some women are also from the nearby areas of Delta State, Lagos State, Ogun State and Anambra State [45]. All these areas are characterized by a higher FGM/C prevalence rate than the country overall [46]. The high FGM/C prevalence among Nigerian women is also a consequence of selection: in Nigeria an association between FGM/C occurrence and positive socioeconomic selection, uncommon in most other FGM/C practicing countries, is in fact observed [46]. When we strongly underestimate the occurrence of FGM/C in one of the main communities settled in a country, we also underestimate the magnitude of resources needed for care and prevention. The high occurrence of FGM/C among Nigerian women in Italy is also of particular concern because cases of trafficking and forced prostitution have been frequently reported by social workers for migrants in this community. The high occurrence of FGM/C is therefore an additional concern in a community characterized by a high degree of vulnerability [45].
We also underline that second generation girls and women are not included in this study, because we are aware that different techniques of estimation are required to address this particular subpopulation and detailed data for Italy are unavailable [32]. Readers and policy makers should therefore be aware that our estimation lacks detail for girls aged 0–14 at risk of or with FGM/C, who are an additional source of concern.
Given the role of Italy as a major receiver of asylum applications, the high number of expected women with FGM/C is an additional source of concern. We know that migration along the central Mediterranean route is particularly risky for women: the rates of trafficking for sexual exploitation are high and increasing, and torture, slavery and sexual violence are often experienced by asylum seekers before they reach the Italian shores [47]. FGM/C is an additional source of concern for women seeking asylum.
It is not possible to compare our estimates on legally present foreign-born women aged 15 and over directly with previous data for Italy [11, 48]. The work of Farina and colleagues [48], who estimate the presence of 57,000 foreign girls with FGM/C in 2010, builds on a methodological approach to the estimation of FGM/C prevalence similar to our study, but the prevalence is applied to foreign women aged 15–49, also including undocumented migrants. The work of Van Baelen and colleagues [11], who estimate the presence of 59,700 legally present foreign-born women in 2011, is based on an extrapolation from age-specific FGM/C prevalence rates without corrections, applied to census data on girls and women aged 10 and over. This work is therefore based on a different method for the estimation of prevalence and on a different data source and age span of girls and women included in the study.
The difference in the number of estimated women with FGM/C is due to the overall growth in the number of women from FGM/C practicing countries between 2010/2011 and 2016, to different age spans considered, different classification and legal status of the women involved (foreign born vs. foreign, only legal migrants vs. undocumented migrants) and different methods of prevalence estimation.
Despite the differences in data and methods compared with previous studies, we can assume that the number of women affected by FGM/C in Italy is rising; projections for Italy suggest that around 65,000 women with FGM/C will migrate to Italy between 2016 and 2030 due to economically driven factors [43].
Reliable estimates on FGM/C at country level are important for evidence-based policy making and service planning. This study presents an example of enhanced estimation of women with FGM/C born in practicing countries, based on the results of a dedicated survey covering the most important communities in Italy. In this study, the bias arising from the application of the extrapolation-of-FGM/C countries prevalence data method is limited to smaller communities, and corrections according to the selection hypothesis have been implemented. We aimed to estimate the number of FGM/C cases in two groups with different policy implications: foreign-born women and asylum applicants. Our estimate suggests that around 60 to 80 thousand foreign-born women aged 15 and over with FGM/C are present in Italy in 2016. We also estimated the presence of around 11 to 13 thousand cut women aged 15 and over among asylum seekers to Italy in 2014–2016 who may be in particular need of assistance. Second generation girls who may be at risk of undergoing FGM/C are not included in this estimation; further studies are needed to assess the risk in this particular subgroup of women and children.
DHS: Demographic and Health Surveys
FGM/C: Female genital mutilation/cutting
HHS: Household and Health Survey
MICS: Multiple Indicators Cluster Surveys
PHS: Population and Health Surveys
WHO - World Health Organization Eliminating Female Genital Mutilation. An interagency statement. Geneva, Switzerland: World Health Organization; 2008.
Un General Assembly. "Intensifying global efforts for the elimination of female genital mutilations". Online resource. 2012. http://www.unfpa.org/sites/default/files/resource-pdf/67th_UNGA-Resolution_adopted_on_FGM_0.pdf. Accessed Dec 2017.
Berg R, Denison EA. Tradition in transition: factors perpetuating and hindering the continuance of female genital mutilation/cutting (FGM/C) summarized in a systematic review. Health Care for Women International. 2013;34(10):837–59.
Andro A, Lesclingand M. Les mutilations génitales féminines. État des lieux et des connaissances. Population. 2016;71:217–96.
UNICEF. Female Genital Mutilation/Cutting: A Global Concern New York: UNICEF; 2016.
UNFPA. Demographic perspectives on female genital mutilation. New York: UNFPA; 2015.
European Parliament. European Parliament resolution of 6 February 2014 on the Commission communication entitled 'Towards the elimination of female genital mutilation'. Resource document. European Parliament. 2014. Online resource http://www.europarl.europa.eu/sides/getDoc.do?type=TA&reference=P7-TA-2014-0105&language=EN#def_1_10. Accessed Sept 2017.
COE – Council Of Europe. Istanbul convention action against violence against women and domestic violence. Online Resource 2011 http://www.coe.int/en/web/istanbul-convention/text-of-the-convention. Accessed Sept 2017.
EIGE - European Institute for Gender Equality. Female genital mutilation in the European Union and Croatia Vilnius: EIGE; 2013.
Kwateng-Kluvitse A. Legislation in Europe regarding female genital mutilation and the implementation of the law in Belgium. France: Spain, Sweden and the UK. International Centre for Reproductive Health, Ghent University; 2004.
Van Baelen L, Ortensi LE, Leye E. Estimates of first-generation women and girls with female genital mutilation in the European Union, Norway and Switzerland. Eur J Contracept Reprod Health Care. 2016; https://doi.org/10.1080/13625187.2016.1234597.
Andro A, Lesclingand M, Cambois E, Cirbeau C. Excision et Handicap (ExH): Mesure des lésions et traumatismes et évaluation des besoins en chirurgie réparatrice. Paris: INED; 2009.
Italian Ministry of Health/Ministero della Salute.Linee guida destinate alle figure professionali sanitarie nonché ad altre figure professionali che operano con le comunità di immigrati provenienti da Paesi dove sono effettuate le pratiche di mutilazione genitale femminile (MGF), per realizzare una attività di prevenzione, assistenza e riabilitazione delle donne e delle bambine già sottoposte a tali pratiche, decreto del Ministero della Salute del 17.12.2008 (G.U. 25.03.08, n.71 SO 70). 2008.
Gallard C. Female genital mutilation in France. British Medical Journal. 1995;310(6994):1592–1593.
Dubourg D, Richard F, Leye E, Ndame S, Rommens T, Maes S. Estimating the number of women with female genital mutilation in Belgium. European Journal of Contraception and reproductive health care. 2001;16(4):248–57.
Leye E, Deblonde J. Legislation in Europe regarding female genital mutilation and the implementation of the law in Belgium. France: Spain, Sweden and the UK. International Centre for Reproductive Health Ghent University; 2004.
L'albero della vita. Il diritto di essere bambine. Dossier sulle Mutilazioni Genitali Femminili, 2011 Online Resource. https://www.alberodellavita.org/wp-content/uploads/2014/10/Il-diritto-di-essere-bambine-.pdf. Accessed Dec 2017.
Dubourg D, Richard F. Studie over de prevalentie van en het risico op vrouwelijke genitale verminking in Belgie. Brussels: FOD Volksgezondheid; 2014.
Ziyada MM, Norberg-Schulz M, Johansen RE. Estimating the magnitude of female genital mutilation/cutting in Norway: an extrapolation model. BMC Public Health. 2016;16:110. https://doi.org/10.1186/s12889-016-2794-6.
Leye E, Mergaert L, Arnaut C, O'Brien Green S. Towards a better estimation of prevalence of female genital mutilation in the European Union: interpreting existing evidence in all EU Member States. Genus. 2014; LXX(1), 99–121.
Equality Now, City University London Institute for Women's Health, and FORWARD. Research methodological workshop report: estimating the prevalence of FGM in England and Wales. London: Equality Now; 2012.
Ortensi LE, Farina P, Menonna A. Improving estimates of the prevalence of female genital mutilation/cutting among migrants in western countries. Demogr Res. 2015;32:543–62.
UNICEF. Female genital mutilation/cutting: a statistical overview and exploration of the dynamics of change. New York: UNICEF; 2013.
Wouterse F, Van Den Berg M. Heterogeneous migration flows from the central plateau of Burkina Faso: the role of natural and social capital. Geogr J. 2011;177(4):357–66.
Schoumaker B, Flahaux ML, Schans D, Beauchemin C, Mazzucato V, Sakho P. Changing patterns of African migration: a comparative analysis. In: Beauchemin C, editor. Migration between Africa and Europe: trends, factors and effects. New-York: Springer-Verlag & INED Population Studies series; 2015.
De Haas H. The myth of invasion: the inconvenient realities of African migration to Europe. Third World Q. 2008;29(7):1305–22.
Flahaux ML, De Haas H. African migration: trends, patterns, drivers. Comparative Migration Studies. 2016;4:1. https://doi.org/10.1186/s40878.
Jamie FOM. Gender and migration in Africa: female Ethiopian migration in Post-2008 Sudan. Journal of Politics and Law. 2013;6(1):186–92.
IMI [International Migration Institute], RMMS [Regional Mixed Migration Secretariat]. Global Migration Futures. Using scenarios to explore future migration in the Horn of Africa & Yemen. Project report. November 2012. Oxford & Nairobi: IMI & RMMS. Online resource https://www.imi.ox.ac.uk/publications/global-migration-futures-using-scenarios-to-explore-future-migration-in-the-horn-of-africa-yemen. Accessed Sept 2017.
Thomas KJA, Logan I. African female immigration to the United States and its policy implications. Can J Afr Stud. 2016;46(1):87–107.
Reynolds RR. Professional Nigerian women, household economy, and immigration decisions. International Migration. 2006;44(5):167–88.
Eige - European Institute for Gender Equality. Estimation of girls at risk of female genital mutilation in the European Union: Report. Vilnius: EIGE. 2015.
Exterkate M. Female genital mutilation in the Netherlands. Prevalence, incidence and determinants. Pharos: Utrecht; 2013.
Leye E, De Schrijver L, Van Baelen L, Andro A, Lesclingand M, Ortensi LE, Farina P. Estimating FGM prevalence in Europe: findings of a pilot study. Research report. Ghent: ICRH; 2017.
Yoder PS, Abderrahim N, Zhuzhuni A. Female genital cutting in the demographic and health surveys: a critical and comparative analysis. DHS comparative reports no 7. ORC Macro: Calverton, Maryland; 2004.
DHS. The DHS Program. Tool and resources. Online Resource. http://www.dhsprogram.com/. Accessed Sept 2017.
UNICEF. Multiple Indicator Cluster Survey (MICS). Statistics and Monitoring. Online Resource. UNICEF. http://www.unicef.org/statistics/index_24302.html. Accessed 24 September 2015. Accessed Sept 2017.
NSO - National Statistics Office Eritrea and Fafo AIS. Eritrea Population and Health Survey 2010. Asmara, Eritrea: National Statistics Office and Fafo Institute for Applied International Studies; 2013.
Sudan Federal Ministry of Health and Central Bureau of Statistics. Sudan Household and Health Survey: national report. Khartoum: Federal Ministry of Health and Central Bureau of Statistics; 2012.
Yoder PS, Shanxiao W. Female genital cutting: the interpretation of recent DHS data. DHS comparative reports no. 33. Calverton: ICF International; 2013.
UNICEF. Indonesia. Statistical profile on Female Genital Mutilation/Cutting. New York. 2016.
OXFAM. Country profile: South Sudan. Ottawa: Oxfam. Online resource. 2013. https://www.oxfam.ca/sites/default/files/imce/country-profile-south-sudan.pdf. Accessed Dec 2017.
Ortensi LE, Menonna A. Migrating with special needs? Projections of flows of migrant women with female genital mutilation/cutting towards Europe 2016-2030. Eur J Popul. 2017; https://doi.org/10.1007/s10680-017-9426-4.
Campani G. Gender and migration in Italy: State of art. Florence: University of Florence Working paper N.6 – WP4, 2007.
Open Migration. From Nigeria to Catania, the path of victims of sex trafficking. Online Resource. 2017. http://openmigration.org/en/analyses/from-nigeria-to-catania-the-path-of-victims-of-sex-trafficking/.
National Population Commission (NPC) [Nigeria] and ICF International. Nigeria Demographic and Health Survey 2013. Abuja, Nigeria, and Rockville. Maryland, USA: NPC and ICF International; 2014.
IOM. Assessing the risks of migration along the central and eastern Mediterranean routes: Iraq and Nigeria as Case Study Countries. Geneva: IOM. 2016.
Farina P, Ortensi LE, Menonna A. Estimating the number of foreign women with female genital mutilation/cutting in Italy. The European Journal of Public Health. 2016 Aug;26(4):656–61. https://doi.org/10.1093/eurpub/ckw015.
The authors wish to thank all researchers involved in the FGM-Prev studies, the evaluator and the members of the steering committee for the fruitful results achieved together.
The study was funded by the Directorate-General for Justice, https://doi.org/10.13039/501100000897 [Grant just/2013/dap/ag/5636].
Additional details on the FGM-Prev project are published in the research report: Leye E, De Schrijver L, Van Baelen L, Andro A, Lesclingand M, Ortensi LE, Farina P. Estimating FGM prevalence in Europe: findings of a pilot study. Research report. Ghent: ICRH; 2017. The data generated during the current study are available from the corresponding author on reasonable request.
Department of Sociology and Social Research – University of Milan Bicocca, Milan, Italy
Livia Elisa Ortensi & Patrizia Farina
International Centre for Reproductive Health – Ghent University, Ghent, Belgium
Els Leye
Livia Elisa Ortensi
Patrizia Farina
LEO contributed substantially to the study design and performed the estimations; she also contributed substantially to data interpretation and wrote the manuscript. PF is the project leader of the Italian unit of FGM-Prev; she contributed substantially to the study design and to data interpretation and revised the final draft. EL is the project supervisor; she contributed substantially to the study design and revised the final draft. All authors read and approved the final manuscript.
Correspondence to Livia Elisa Ortensi.
The study was approved by the ethical committee of the University of Milan – Bicocca.
Written informed consent to participate in this study was obtained from the women interviewed. No physical examination was performed in relation to the survey. No women below the age of 18 were interviewed.
Ortensi, L.E., Farina, P. & Leye, E. Female genital mutilation/cutting in Italy: an enhanced estimation for first generation migrant women based on 2016 survey data. BMC Public Health 18, 129 (2018). https://doi.org/10.1186/s12889-017-5000-6
Keywords: Foreign-born women; selection hypothesis; indirect estimation; recent methodological developments
January 2020, 25(1): 473-487. doi: 10.3934/dcdsb.2019190
On the forward dynamical behavior of nonautonomous systems
Chunqiu Li 1, Desheng Li 2 and Xuewei Ju 3,*
College of Mathematical Physics and Electronic Information Engineering, Wenzhou University, Wenzhou 325035, China
School of Mathematics, Tianjin University, Tianjin 300072, China
Department of Mathematics, School of Science, Civil Aviation University of China, Tianjin 300300, China
* Corresponding author: Xuewei Ju
Dedicated to Professor Peter E. Kloeden on the occasion of his 70th birthday
Received: September 2018; Revised: April 2019; Published: September 2019
This paper is concerned with the forward dynamical behavior of nonautonomous systems. Under some general conditions, it is shown that in an arbitrarily small neighborhood of a pullback attractor of a nonautonomous system, there exists a forward invariant family of sets $\{\mathcal{A}_\varepsilon(p)\}_{p\in P}$ in the phase space $X$ such that $\{\mathcal{A}_\varepsilon(p)\}_{p\in P}$ uniformly forward attracts each bounded subset of $X$. Furthermore, we also prove that $\{\mathcal{A}_\varepsilon(p)\}_{p\in P}$ forward attracts each bounded set at an exponential rate.
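For orientation, the two modes of attraction contrasted in the abstract can be stated compactly. This is a sketch in the standard two-parameter process notation, which may differ from the cocycle notation used in the paper's body (not reproduced here):

```latex
% Pullback vs. forward attraction of a family A(.) under a process U(t,s) on X,
% with dist the Hausdorff semidistance and B any bounded subset of X.
\[
  \text{pullback attraction:}\qquad
  \lim_{s\to-\infty}\operatorname{dist}\bigl(U(t,s)B,\;\mathcal{A}(t)\bigr)=0
  \quad\text{for each fixed }t,
\]
\[
  \text{forward attraction:}\qquad
  \lim_{t\to+\infty}\operatorname{dist}\bigl(U(t,s)B,\;\mathcal{A}(t)\bigr)=0
  \quad\text{for each fixed }s.
\]
```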
Keywords: Nonautonomous dynamical systems, forward attraction, pullback attractors, exponential attraction.
Mathematics Subject Classification: 35B41, 37B25, 37B55, 37L30.
Citation: Chunqiu Li, Desheng Li, Xuewei Ju. On the forward dynamical behavior of nonautonomous systems. Discrete & Continuous Dynamical Systems - B, 2020, 25 (1) : 473-487. doi: 10.3934/dcdsb.2019190
EJNMMI Research
Original research | Open | Published: 26 January 2018
Renal sympathetic nerve activity after catheter-based renal denervation
Linn C. Dobrowolski, Daan W. Eeftinck Schattenkerk, C. T. Paul Krediet, Peter M. Van Brussel, Liffert Vogt, Frederike J. Bemelman, Jim A. Reekers, Bert-Jan H. Van Den Born & Hein J. Verberne
EJNMMI Research, volume 8, Article number: 8 (2018)
Catheter-based renal sympathetic denervation (RDN) has been considered a potential treatment for therapy resistant hypertension (RHT). However, in a randomized placebo-controlled trial, RDN did not lead to a substantial blood pressure (BP) reduction. We hypothesized that variation in the reported RDN efficacy might be explained by incomplete nerve disruption as assessed by renal 123I–meta-iodobenzylguanidine (123I–mIBG) scintigraphy.
In 21 RHT patients (median age 60 years), we performed 123I–mIBG scintigraphy before and 6 weeks after RDN. Additionally, we assessed changes in BP (24 h day, night, and average), plasma and urinary catecholamines and plasma renin activity (PRA) before and after RDN. Planar scintigraphy was performed at 15 min and 4 h after 123I–mIBG administration. The ratio of the mean renal (specific) counts to the mean muscle (non-specific) counts represented 123I–mIBG uptake. Renal 123I–mIBG washout was calculated between 15 min and 4 h.
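A minimal sketch of how such uptake and washout figures can be computed from region-of-interest counts; the washout formula shown is the conventional early-minus-late percentage, and the paper's exact formula (including any decay correction) is not given in this excerpt:

```python
# Renal 123I-mIBG uptake ratio and washout from region-of-interest counts.
# Numbers are illustrative only.

def uptake_ratio(renal_counts, muscle_counts):
    """Renal uptake: mean renal (specific) counts / mean muscle counts."""
    return renal_counts / muscle_counts

def washout_percent(early, late):
    """Washout between 15 min and 4 h as a percentage of the early value
    (a conventional definition, assumed here)."""
    return 100.0 * (early - late) / early

early = uptake_ratio(renal_counts=350.0, muscle_counts=100.0)  # 15-min image
late = uptake_ratio(renal_counts=280.0, muscle_counts=100.0)   # 4-h image
print(early, washout_percent(early, late))  # -> 3.5, 20.0
```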
After RDN, office-based systolic BP decreased from 172 to 153 mmHg (p = 0.036), while diastolic office BP (p = 0.531), mean 24 h systolic and diastolic BP (p = 0.602 and p = 0.369, respectively), PRA (p = 0.409) and plasma catecholamines (p = 0.324) did not change significantly. Following RDN, 123I–mIBG renal uptake at 15 min was 3.47 (IQR 2.26–5.53) compared to 3.08 (IQR 2.79–4.95) before RDN (p = 0.289). Renal 123I–mIBG washout did not change post-RDN (p = 0.230). In addition, there was no significant correlation between the number of denervations and the renal 123I–mIBG parameters.
No changes were observed in renal 123I–mIBG uptake or washout at 6 weeks post-RDN. These observations support incomplete renal denervation as a possible explanation for the lack of RDN efficacy.
Reduction of sympathetic nerve activity by catheter-based renal sympathetic denervation (RDN) has attracted considerable attention as a new treatment modality for resistant hypertension (RHT). This interest was fueled by the promising results of RDN in the initial open-label studies Symplicity HTN-1 and HTN-2 [1,2,3]. However, the recent randomized sham-controlled Symplicity HTN-3 trial did not show a difference in blood pressure (BP) lowering efficacy between RDN and sham treatment [4]. One of the potential causes for the lack of efficacy might be the failure of the RDN procedure to sufficiently ablate the renal sympathetic nerves. Yet, a routine technique to measure the extent of renal denervation is lacking and potential causes of insufficient denervation remain hypothetical.
123I–meta-iodobenzylguanidine (123I–mIBG) scintigraphy offers the possibility to evaluate organ-specific presynaptic sympathetic nerve activity. mIBG is an analogue of the "false" neurotransmitter guanethidine, a potent neuron-blocking agent that acts selectively on sympathetic nerves. mIBG follows uptake mechanisms similar to those of norepinephrine, and radiolabelling of mIBG with iodine-123 enables scintigraphic assessment. Uptake of 123I–mIBG reflects the density and functional intactness of the neural tissue within the organ, whereas washout is thought to reflect sympathetic activity [5, 6]. Previously, we tested this technique for visualizing renal sympathetic innervation by showing its ability to detect changes in sympathetic innervation during kidney allograft re-innervation [7].
Based on the inter-individual variation in BP response after RDN, we hypothesized that there is a wide variability in kidney sympathetic denervation following RDN. Secondly, we hypothesized that changes in renal sympathetic activity would relate to changes in BP and neurohormonal activity following RDN. Against this background, we examined changes in renal 123I–mIBG uptake and washout in RHT patients before and after RDN treatment.
From July 2011 to December 2013, we performed a prospective observational study using 123I–mIBG scintigraphy as a parameter of renal sympathetic activity in patients with RHT undergoing RDN. Objectives were to compare measures of renal 123I–mIBG uptake (uptake at 15 min and washout between 15 min and 4 h) on planar and single photon emission computed tomography-CT (SPECT-CT) images, changes in office based BP and ambulatory BP measurements (ABPM) and neurohormonal activation before and 6 weeks after RDN.
In the present study, we enrolled 21 consecutive patients aged 40–70 years with a clinical indication for RDN because of therapy resistant hypertension, defined as a mean daytime BP ≥ 150/100 mmHg despite the use of three or more antihypertensive drugs including a diuretic, or with intolerance to a diuretic [8]. Secondary causes of hypertension (e.g., renal artery stenosis, pheochromocytoma, primary aldosteronism, and hyper- or hypothyroidism) and abnormal renal artery anatomy, including the presence of accessory renal arteries, were ruled out prior to the intervention. Patients with renal insufficiency (estimated glomerular filtration rate (eGFR) < 45 mL/min/1.73 m2), proteinuria (> 1 g/24 h), a pacemaker, an implantable cardioverter-defibrillator (ICD), atrial fibrillation, or type 1 diabetes mellitus were excluded.
Antihypertensive treatment was performed according to international guidelines and included instructions on dietary sodium restriction, physical activity, and instructions to remain compliant to antihypertensive medication [8, 9]. Six weeks prior to the first measurements, patients were screened to assess eligibility for study participation. Patients were deemed eligible for study participation if they were at least 3 weeks on stable BP lowering medication prior to the first study visit. BP lowering medication was kept unchanged throughout the study until the final visit 6 weeks after RDN.
When fully informed and willing to participate, patients were asked to provide written informed consent. Six weeks thereafter, office BP and ABPM were measured. Patients were required to maintain the same antihypertensive drug regimen throughout study participation. This study was part of a larger effort to assess the sympathicolytic potential of RDN, with the predefined aim of assessing the effects of RDN on renal 123I–mIBG uptake and washout.
For reference, we used data of five patients (aged 39–66 years) in whom 123I–mIBG scintigraphy of the kidney allograft was performed after recent kidney transplantation (0.1 to 1.5 years after transplantation), whose detailed characteristics are described elsewhere [7]. In summary, all these surgically denervated kidneys functioned well, with creatinine clearance rates (calculated from 24 h urine collections) ranging from 54 to 128 ml/min. As a negative control, we also included 123I–mIBG data from a patient with complete renal denervation after autologous kidney transplantation for renal artery stenosis [10]. Although 123I–mIBG is primarily cleared via the kidneys, we have shown that both the cardiac and the renal 123I–mIBG parameters (i.e., late heart-to-mediastinal ratio, renal uptake, and renal washout) are not influenced by kidney function [7, 11].
The study protocol met the ethical guidelines of the Declaration of Helsinki (originally adopted by the 18th WMA General Assembly, Helsinki, Finland, June 1964 and last amended in Fortaleza, Brazil 2013) and was approved by the local ethics committee of the Academic Medical Center at the University of Amsterdam (number NL.36755.018.11). All patients gave oral and written informed consent before study inclusion.
Renal sympathetic denervation procedure
The renal denervation procedure was performed via the femoral artery approach by a single highly experienced interventional radiologist (JAR) with > 5 RDN procedures before this study was initiated. RDN was performed using radiofrequency energy delivered by the Symplicity renal-denervation catheter (Medtronic Inc., Santa Rosa, California, USA). Prior to the procedure, midazolam 1.0 mg and metoclopramide 10 mg were given intravenously. After inserting a 6 F introducer sheath in the right femoral artery, the guiding catheter was introduced in the aorta and an aortogram was made. The guiding catheter was advanced into the right and left renal artery in no pre-specified order. The denervation catheter was introduced into the renal artery via the guiding catheter. After nitroglycerine 0.2 mg and fentanyl 0.02 mg intravenously, catheter ablations were performed in a helical pattern with the goal of at least 4–6 ablations per renal artery to cover each short-axis transaxial quadrant, according to the user's instructions of the device. No peri-procedural complications occurred.
At baseline and 6 weeks after RDN, 24 h ABPM was performed using the Spacelabs 90217 ABPM monitoring device (Spacelabs Healthcare, Issaquah, Washington, USA). During daytime, between 06:00 and 23:00, measurements were performed every 15 min, and at night-time (i.e., between 23:00 and 06:00) every 30 min. BP readings were accepted when the success rate of the measurements was at least 70% per 24 h. Patients were blinded to their BP readings. Instructions were given to continue usual daily activities during the 24 h of BP recording, but to avoid strenuous exercise. Office brachial BP, using appropriate cuff sizes, was measured with a validated semi-automated oscillometric device (Omron 705it, Omron Healthcare Europe BV, Hoofddorp, The Netherlands), while seated and after 5 min rest in a quiet room, three times at 1 min intervals by a trained research assistant or physician. The mean of the last two measurements was recorded as representative of office brachial BP. No BP measurements were performed in the kidney transplant recipient group.
Blood and urine analysis
Plasma renin activity (PRA) (μgA1/L/h) was analyzed using radioimmunoassays. Urine and plasma epinephrine, norepinephrine (NE), metanephrine, and normetanephrine were analyzed using liquid chromatography-mass spectrometry. Epinephrine and NE were obtained in supine position as well as after 5 min in standing position. The delta of supine minus standing position was calculated. Urinary sodium excretion (mmol/24 h) and urine creatinine (μmol/L) were calculated from 24 h urine collections obtained before and 6 weeks post-RDN.
123I–mIBG scintigraphy
The protocol of the renal 123I–mIBG scintigraphy has been previously described [7]. In summary, 2 h prior to the administration of 185 MBq (5 mCi ± 10%) 123I–mIBG (AdreView™, GE Healthcare, Eindhoven, the Netherlands), patients received 100 mg potassium-iodide to block thyroid uptake of "free" 123I. In addition, subjects were given a single oral dose of furosemide retard 60 mg to promote the urinary excretion of 123I–mIBG. No specific instructions on fluid intake were given to enhance excretion of 123I–mIBG. Anterior and posterior planar semi-whole body images were acquired at 15 min and 4 h after administration of 123I–mIBG. A vial with a reference amount of radioactivity of 123I was included in the planar images. Additionally, at 4 h post-injection (p.i.), low-dose SPECT-CT was performed. The CT images were used for adequate anatomical registration of 123I–mIBG uptake.
Since we recently showed that uptake at 15 min p.i. of 123I–mIBG and washout between 15 min and 4 h can detect renal sympathetic reinnervation over time after transplantation, we report in this study the 123I–mIBG uptake on the 15 min p.i. images and analyzed the mean counts/pixel for calculation of washout between 15 min and 4 h [7].
123I–mIBG imaging procedures
The planar images were acquired with a 20% energy window centered at 159 keV, using medium-energy collimators. Anterior and posterior planar semi-whole body acquisitions were used to create geometrical mean images.
123I–mIBG image analysis
An investigator (LCD) analyzed the geometric mean (GM) planar images (Hybrid Viewer™, Hermes Medical Solutions, Stockholm, Sweden) by manually drawing regions of interest (ROI) for kidneys, muscle (M. quadriceps femoris), and the 123I vial. A predefined and fixed ROI for the muscle (50 pixels) was used for all patients.
We analyzed the counts of the left kidney only, since scatter or overlay of the liver, with its high uptake of 123I–mIBG, resulted in poor delineation of the right kidney. Mean counts per pixel per ROI were used to calculate 123I–mIBG uptake: the relative uptake of kidney (specific) versus muscle (non-specific) quantifies neural uptake of 123I–mIBG and reflects neuron function resulting from 123I–mIBG uptake, storage, and release. These can be derived using mean counts from the 15 min and 4 h p.i. GM images and the 4 h p.i. 123I–mIBG SPECT-CT images. Washout (WO) between 15 min and 4 h p.i., based on GM images, reflects sympathetic activity and was calculated from the kidney-to-muscle ratio between 15 min and 4 h p.i. The formulas to calculate uptake and washout were
$$ \text{Relative uptake} = \frac{\text{kidney (specific)} - \text{muscle (non-specific)}}{\text{muscle (non-specific)}} $$
$$ \text{Washout} = \frac{\left(\frac{\text{uptake kidney 15 min}}{\text{uptake muscle 15 min}}\right) - \left(\frac{\text{uptake kidney 4 h}}{\text{uptake muscle 4 h}}\right)}{\left(\frac{\text{uptake kidney 15 min}}{\text{uptake muscle 15 min}}\right)} \times 100\% $$
The percentage uptake of the injected dosage of 123I–mIBG was calculated using the actual injected dose and mean counts per pixel in relation to the activity in the 123I–vial. Washout (WO) in the left kidney was calculated from the 15 min and 4 h images using skeletal muscle as a reference.
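To make the uptake and washout computations concrete, the following minimal Python sketch implements the two formulas above; the count values are hypothetical placeholders, not data from this study.

```python
def relative_uptake(kidney_counts, muscle_counts):
    # (specific - non-specific) / non-specific, per the relative uptake formula
    return (kidney_counts - muscle_counts) / muscle_counts

def washout(kidney_15, muscle_15, kidney_4h, muscle_4h):
    # Washout between 15 min and 4 h p.i., from the kidney-to-muscle ratios
    early = kidney_15 / muscle_15
    late = kidney_4h / muscle_4h
    return (early - late) / early * 100.0  # percent

# Hypothetical mean counts/pixel, for illustration only
print(relative_uptake(450.0, 100.0))        # 3.5, within the range reported above
print(washout(450.0, 100.0, 260.0, 100.0))  # ~42%, comparable to the reported washout rate
```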
A secondary analysis was focused on the SPECT-CT images. In this method, the transverse CT images were used to optimize anatomical delineation of the kidney contours. The main advantage of this method is the availability of anatomical information obtained from the low dose CT, allowing for a superior delineation of kidneys and subsequently a potential better estimation of the renal 123I–mIBG uptake. ROIs were drawn on the CT-images along the contours of kidney cortices, excluding the calyces. ROIs were then fused into volumes of interest (VOIs) and copied to the co-registered SPECT. Mean counts/voxel expressed 123I–mIBG uptake. VOIs in muscle served as background activity.
Based on the difference in 123I–mIBG uptake, we divided patients into those with a positive change in 123I–mIBG uptake, i.e., indicating an increase in 123I–mIBG uptake or washout, and those with a negative change, i.e., a decrease in 123I–mIBG uptake or washout after RDN.
This study was part of a larger effort to study sympatholytic effects of RDN. The sample size has been described elsewhere [12]. Data are presented as medians and interquartile ranges (IQR, 25th–75th percentiles), and comparisons were performed with non-parametric tests (the Wilcoxon signed-rank test and the Mann–Whitney U test). P values below 0.05 were considered statistically significant. All analyses were performed using IBM SPSS Statistics software for Windows version 21.0 (IBM Corp., Armonk, New York, USA).
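As an illustration of the statistical workflow described above, here is a minimal Python sketch using SciPy; the arrays are hypothetical placeholders rather than study data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired pre/post-RDN values (placeholders, n = 8)
pre = np.array([3.1, 2.8, 4.9, 3.0, 5.1, 2.6, 3.4, 4.2])
post = np.array([3.5, 2.3, 5.5, 2.8, 4.7, 3.1, 3.6, 3.9])

# Medians with interquartile ranges (25th and 75th percentiles)
q25, q50, q75 = np.percentile(pre, [25, 50, 75])
print(f"pre: median {q50:.2f} (IQR {q25:.2f}-{q75:.2f})")

# Wilcoxon signed-rank test for the paired pre/post comparison
res_paired = stats.wilcoxon(pre, post)
print(f"Wilcoxon signed-rank p = {res_paired.pvalue:.3f}")

# Mann-Whitney U test for comparing two independent subgroups
res_groups = stats.mannwhitneyu(pre[:4], pre[4:])
print(f"Mann-Whitney U p = {res_groups.pvalue:.3f}")
```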
Baseline characteristics
We studied 21 patients with therapy resistant hypertension (Table 1). The majority of patients were male (71%) and Caucasian (76%); median age was 60 years. Median body mass index was 28.0 kg/m2 (24.8–30.5 kg/m2). Type 2 diabetes mellitus was present in 33%, and left ventricular hypertrophy, according to electrocardiography voltage criteria, was present in 29% of the patients. A history of cardiovascular disease (coronary artery disease, angina pectoris, heart failure, stroke, peripheral arterial disease) was present in 48% of the study participants.
Table 1 Characteristics of patients treated with RDN (n = 21)
Renal 123I–mIBG uptake and washout in the left kidney
Renal 123I–mIBG uptake was evident in all patients (Fig. 1). The planar derived mean relative uptake of 123I–mIBG of the left kidney at 15 min p.i. did not change significantly from pre RDN 3.08 (2.79–4.95) to post RDN 3.47 (2.26–5.53), p = 0.289 (Table 2). Figure 2 represents pre vs. post RDN 123I–mIBG uptake at 15 min p.i. including recently transplanted kidneys as controls.
Anterior planar and SPECT-CT 123I–mIBG scintigraphy. The planar image (a) shows clear uptake of 123I–mIBG in various organs: the liver, the urinary bladder, and both kidneys. b Shows the ROI on the planar image of the left kidney, drawn to exclude any pelvic activity. c Shows a coronal slice of the SPECT-CT illustrating the proximity of the liver to the right kidney. This proximity can also be appreciated on the planar image (a). Both planar and SPECT images thereby illustrate the possible impact of liver activity on parameters of 123I–mIBG uptake in the right kidney
Table 2 Pre and post RDN differences in quantifications of 123I–mIBG uptake (n = 21)
Change in renal uptake of 123I–mIBG after RDN. The planar derived mean relative uptake of 123I–mIBG of the left kidney at 15 min p.i. did not change significantly from pre RDN 3.08 (2.79–4.95) to post RDN 3.47 (2.26–5.53), p = 0.289. Depicted on the right side of the figure is the relative kidney uptake of 123I–mIBG in a group of patients with kidney transplantations, serving as a reference
The percentage uptake of the injected dosage of 123I–mIBG in the left kidneys showed a non-significant decrease after RDN from 17.8 to 15.4% (delta − 13%, p = 0.881). Washout rate between 15 min and 4 h p.i. was 41.5% before and 42.7% after RDN, p = 0.230. The SPECT derived uptake at 4 h decreased non-significantly after RDN (1.41 to 1.07, p = 0.526). None of the renal uptake or washout parameters were correlated with kidney function (data not shown).
Number of denervations and renal 123I–mIBG uptake and washout
No significant correlation was found between the number of denervations (left renal artery 4.3 ± 0.6, right renal artery 4.2 ± 0.5) and renal uptake of 123I–mIBG in the left kidney at either 15 min (R = − 0.27, p = 0.243) or 4 h p.i. (R = − 0.37, p = 0.103), or 123I–mIBG washout (R = 0.05, p = 0.837).
Effect of RDN on blood pressure, PRA, and catecholamines
Table 3 shows the effect of RDN on blood pressure and catecholamines. RDN resulted in a significant decrease in systolic office BP (p = 0.036), without reducing diastolic BP (p = 0.531). Systolic and diastolic daytime ABPM were not significantly different after denervation. Neither antihypertensive medication nor sodium intake, as inferred from urinary sodium excretion, was significantly different pre vs. post-RDN (Table 2).
Table 3 Blood pressure, kidney function and catecholamines
At baseline, plasma and urine catecholamine levels were within reference values. Plasma epinephrine and NE did not change (p = 0.780 and p = 0.324, respectively) nor did the 24 h urinary excretion of metanephrine (p = 0.506) and normetanephrine (p = 0.911) following RDN (Table 3).
Renal 123I–mIBG uptake and washout and blood pressure, PRA, and catecholamines
Except for the correlation between renal 123I–mIBG uptake and office systolic BP (p = 0.018), no correlations were found between any of the renal 123I–mIBG uptake and washout parameters and blood pressure, PRA or catecholamines (Fig. 3).
Renal 123I–mIBG uptake in relation to blood pressure and biochemistry data. There was only a significant correlation between renal 123I–mIBG uptake and office systolic BP (p = 0.018). No other correlations were found between any of the renal 123I–mIBG uptake and washout parameters and blood pressure, PRA or catecholamines
Subgroup analyses
Patients with the largest decrease in 123I–mIBG uptake at 15 min (i.e., delta ≤ − 1.0) (n = 5) and patients with the largest increase in 123I–mIBG uptake at 15 min (i.e., delta of ≥1.0) (n = 5) did not differ in ABPM, kidney function, or catecholamine levels after RDN (Fig. 4 and Table 4), nor did the patients with the largest change (i.e., both increased and decreased) in 123I–mIBG uptake differ in baseline characteristics from the patients without these changes in 123I–mIBG uptake (data not shown).
a Pre- and post-RDN office systolic BP change in patients with the largest decrease in 123I–mIBG uptake. b Pre- and post-RDN mean 24 h systolic BP in patients with the largest decrease in 123I–mIBG uptake. c Pre- and post-RDN office systolic BP change in patients with the largest decrease in 123I–mIBG washout. d Pre- and post-RDN mean 24 h systolic BP change in patients with the largest decrease in 123I–mIBG washout
Table 4 Parameters pre-RDN in patients with a positive delta (i.e., increase in 123I–mIBG uptake at 15 min) versus patients with a negative delta (i.e., decrease in 123I–mIBG uptake at 15 min) after RDN
In patients with the largest decrease in washout (i.e., delta ≤ − 5.0) (n = 5), there were no changes in BP measurements, catecholamines, or kidney function (data not shown). In patients with the largest decrease in 123I–mIBG washout, only the 24 h urine metanephrine was significantly higher at baseline compared to patients with the largest increase in washout after RDN (p = 0.045). In patients with the largest increase in 123I–mIBG washout (i.e., delta ≥ 5.0) (n = 10), there was a difference in office systolic BP only (pre vs. post RDN median 181.5 vs. 158.0 mmHg, p = 0.05), while diastolic BP did not change. In addition, this subgroup did not show significant changes in ABPM, kidney function, or catecholamines after RDN (data not shown).
No correlations were found between any of the renal 123I–mIBG uptake parameters and BP measurements (data not shown).
In the present study we were unable to demonstrate that treatment with RDN results in significant changes in renal 123I–mIBG uptake and washout. These data suggest that RDN does not significantly alter renal sympathetic tone and does not sufficiently denervate renal sympathetic nerves. This is further supported by the finding that ABPM and biochemical markers of sympathetic nerve activity remained unchanged after RDN, while the reduction in office BP was similar to that in Symplicity HTN-1 and HTN-2 [1, 2]. The absence of consistent changes in 123I–mIBG uptake and washout, together with the lack of a sustained BP decrease after RDN, suggests that the present RDN technique fails to achieve adequate denervation of the kidneys. The degree of renal sympathetic nerve disruption required to induce a sustained BP response remains unclear, but the current RDN technique likely falls short of it. The lack of efficacy may be related to the number of ablations, since in a subset of patients of Symplicity HTN-3 a more profound BP decrease was observed in patients with more ablations, suggesting a relation between the quantity of ablations and the BP lowering effects [4]. This effect, however, was also observed in patients receiving sham treatment. We found no association between the number of ablations and renal 123I–mIBG uptake or washout, while the number of denervations in our study was similar to the Symplicity HTN-1 and HTN-2 trials that demonstrated a significant decrease in office BP [1, 2].
In a post-mortem study of a patient who received RDN, it was shown that nerves in the (peri-)adventitial parts of the renal artery were unaffected, indicating that interruption of the nerve fiber continuity had not been successful [13]. This suggests that the ablation pulse may not be sufficient to generate adequate denervation of renal sympathetic nerves [14]. A previous study using NE spill-over to assess the effect of RDN on renal sympathetic activity in 10 patients with resistant hypertension showed that RDN reduced NE spill-over by 47% (95% CI 28–65%) [15]. In the present study, we could not replicate these findings.
Besides a lack of procedural effectiveness, this discrepancy could also be explained by differences in population characteristics or by technical shortcomings of 123I–mIBG scintigraphy. The patients in our study were, however, fully comparable to the populations studied in Symplicity HTN-1 and Symplicity HTN-2.
Although we used ABPM instead of office BP to include patients with resistant hypertension, baseline office BP and the number of BP lowering drugs in our study were comparable to those observed in Symplicity HTN-1 and Symplicity HTN-2. In addition, office BP was reduced to a similar extent, with a decrease of 29 mmHg for systolic office BP following RDN. All other baseline parameters of our study population were similar to those of previous studies [1, 2, 4]. In kidney transplant recipients we recently showed that uptake at 15 min p.i. of 123I–mIBG and washout are correlated with time after transplantation, independent of kidney graft function [7]. This suggests that renal 123I–mIBG scintigraphy can be used to assess differences in renal innervation.
We previously showed that cardiac sympathetic activity did not change after RDN [12]. This is also supported by the lack of change in neurohormonal activation following RDN in the present and in previous studies [16, 17]. Whether this is caused by insufficient denervation or results from a limited overall contribution of renal nerves in determining efferent sympathetic activity could not be assessed because quality parameters for successful RDN are lacking. In the present study we show that the lack of change in renal sympathetic activity may be caused by an inability of RDN to cause a sufficient decrease in afferent sympathetic nerve activity as 123I–mIBG-uptake did not change significantly.
The amount of published data on renal 123I–mIBG imaging for the assessment of renal sympathetic innervation is very limited. In addition to our own data, Takamura et al. showed that renal 123I–mIBG scintigraphy was associated with measurements of muscle sympathetic nerve activity (as a measure of generalized sympathetic outflow) in patients with primary hypertension [18]. In line with our findings, these authors concluded that renal 123I–mIBG scintigraphy could be a non-invasive clinical tool for assessing renal sympathetic nerve function.
A few limitations of our study merit discussion. Firstly, it remains possible that the modulation of sympathetic nerve activity (SNA) induced by RDN lies below the detection level of 123I–mIBG. However, it may well be that sympathicolysis is achieved by RDN but that this influences neither BP nor the activity of the renin-angiotensin system and the 123I–mIBG parameters. Radiotracer dilution NE spill-over for organ-specific assessment of sympathetic nerve activity is an alternative to 123I–mIBG scintigraphy. Although this technique is considered the gold standard, its application is limited by its invasive nature. Moreover, widespread use of the technique is restricted by the poor availability of the required compounds. Furthermore, 123I–mIBG is primarily cleared via the kidneys, and therefore kidney function may have influenced our data. However, we have shown that both cardiac and renal 123I–mIBG parameters are not influenced by kidney function [7, 11]. Finally, we were aware of the potential influence of antihypertensive medication (calcium blocking agents, beta blocking agents) that may alter sympathetic drive and thereby uptake of 123I–mIBG. In two patients, BP lowering medication had to be tapered because of hypotension post RDN. In the remaining patients, however, BP lowering medication and sodium excretion were unchanged during the study period. We therefore feel that changes in antihypertensive medication do not explain the lack of change in the 123I–mIBG parameters.
In conclusion, we could not observe significant changes in functional kidney denervation as assessed with 123I–mIBG scintigraphy following RDN with the Symplicity Catheter System. Our data suggest that the lack of BP lowering efficacy in the sham-controlled Symplicity HTN-3 study may be related to a lack of procedural effectiveness. In comparison to available clinical tools, renal 123I–mIBG scintigraphy is minimally invasive and more widely available for clinical use. For future studies, renal 123I–mIBG scintigraphy may be used as a parameter to assess RDN effectiveness.
123I–mIBG:
123I–meta-Iodobenzylguanidine
ABPM:
Ambulatory blood pressure measurement
eGFR:
Estimated glomerular filtration rate
HTN:
Hypertension
MDRD:
Modification of diet in renal disease
NE:
Norepinephrine
p.i.:
Post-injection
PRA:
Plasma renin activity
RDN:
Catheter based renal sympathetic denervation
RHT:
Therapy resistant hypertension
SNA:
Sympathetic nerve activity
SPECT:
Single photon emission computed tomography
Esler MD, Krum H, Sobotka PA, Schlaich MP, Schmieder RE, Bohm M. Renal sympathetic denervation in patients with treatment-resistant hypertension (the Symplicity HTN-2 trial): a randomised controlled trial. Lancet. 2010;376(9756):1903–9. https://doi.org/10.1016/s0140-6736(10)62039-9.
Krum H, Schlaich MP, Sobotka PA, Bohm M, Mahfoud F, Rocha-Singh K, et al. Percutaneous renal denervation in patients with treatment-resistant hypertension: final 3-year report of the Symplicity HTN-1 study. Lancet. 2014;383(9917):622–9. https://doi.org/10.1016/s0140-6736(13)62192-3.
Schlaich MP, Sobotka PA, Krum H, Whitbourn R, Walton A, Esler MD. Renal denervation as a therapeutic approach for hypertension: novel implications for an old concept. Hypertension. 2009;54(6):1195–201. https://doi.org/10.1161/hypertensionaha.109.138610.
Bhatt DL, Kandzari DE, O'Neill WW, D'Agostino R, Flack JM, Katzen BT, et al. A controlled trial of renal denervation for resistant hypertension. N Engl J Med. 2014;370(15):1393–401. https://doi.org/10.1056/NEJMoa1402670.
Patel AD, Iskandrian AE. MIBG imaging. J Nucl Cardiol. 2002;9(1):75–94.
Somsen GA, Verberne HJ, Fleury E, Righetti A. Normal values and within-subject variability of cardiac I-123 MIBG scintigraphy in healthy individuals: implications for clinical studies. J Nucl Cardiol. 2004;11(2):126–33. https://doi.org/10.1016/j.nuclcard.2003.10.010.
Dobrowolski LC, Verberne HJ, van den Born BJ, ten Berge IJ, Bemelman FJ, Krediet CT. Kidney transplant (123)I-mIBG Scintigraphy and functional sympathetic reinnervation. Am J Kidney Dis. 2015;66(3):543–4. https://doi.org/10.1053/j.ajkd.2015.04.049.
Mancia G, De Backer G, Dominiczak A, Cifkova R, Fagard R, Germano G, et al. 2007 guidelines for the Management of Arterial Hypertension: the task force for the Management of Arterial Hypertension of the European Society of Hypertension (ESH) and of the European Society of Cardiology (ESC). J Hypertens. 2007;25(6):1105–87. https://doi.org/10.1097/HJH.0b013e3281fc975a.
Chobanian AV, Bakris GL, Black HR, Cushman WC, Green LA, Izzo JL Jr, et al. The seventh report of the joint National Committee on prevention, detection, evaluation, and treatment of high blood pressure: the JNC 7 report. JAMA. 2003;289(19):2560–72. https://doi.org/10.1001/jama.289.19.2560.
Dobrowolski LC, Eeftinck Schattenkerk DW, Idu MM, van den Born BJ, Verberne HJ. Renal 123I-MIBG Scintigraphy before and after kidney autotransplantation. Clin Nucl Med. 2015;40(10):810–1. https://doi.org/10.1097/rlu.0000000000000901.
Verberne HJ, Verschure DO, Somsen GA, van Eck-Smit BL, Jacobson AF. Vascular time-activity variation in patients undergoing (1)(2)(3)I-MIBG myocardial scintigraphy: implications for quantification of cardiac and mediastinal uptake. Eur J Nucl Med Mol Imaging. 2011;38(6):1132–8. https://doi.org/10.1007/s00259-011-1783-3.
van Brussel PM, Eeftinck Schattenkerk DW, Dobrowolski LC, de Winter RJ, Reekers JA, Verberne HJ, et al. Effects of renal sympathetic denervation on cardiac sympathetic activity and function in patients with therapy resistant hypertension. Int J Cardiol. 2016;202:609–14. https://doi.org/10.1016/j.ijcard.2015.09.025.
Vink EE, Goldschmeding R, Vink A, Weggemans C, Bleijs RL, Blankestijn PJ. Limited destruction of renal nerves after catheter-based renal denervation: results of a human case study. Nephrol Dial Transplant. 2014;29(8):1608–10. https://doi.org/10.1093/ndt/gfu192.
Sakakura K, Ladich E, Cheng Q, Otsuka F, Yahagi K, Fowler DR, et al. Anatomic assessment of sympathetic peri-arterial renal nerves in man. J Am Coll Cardiol. 2014;64(7):635–43. https://doi.org/10.1016/j.jacc.2014.03.059.
Krum H, Schlaich M, Whitbourn R, Sobotka PA, Sadowski J, Bartus K, et al. Catheter-based renal sympathetic denervation for resistant hypertension: a multicentre safety and proof-of-principle cohort study. Lancet. 2009;373(9671):1275–81. https://doi.org/10.1016/s0140-6736(09)60566-3.
Ewen S, Cremers B, Meyer MR, Donazzan L, Kindermann I, Ukena C, et al. Blood pressure changes after catheter-based renal denervation are related to reductions in total peripheral resistance. J Hypertens. 2015;33(12):2519–25. https://doi.org/10.1097/hjh.0000000000000752.
Ezzahti M, Moelker A, Friesema EC, van der Linde NA, Krestin GP, van den Meiracker AH. Blood pressure and neurohormonal responses to renal nerve ablation in treatment-resistant hypertension. J Hypertens. 2014;32(1):135–41. https://doi.org/10.1097/HJH.0b013e3283658ef7.
Takamura M, Murai H, Okabe Y, Okuyama Y, Hamaoka T, Mukai Y, et al. Significant correlation between renal 123I-metaiodobenzylguanidine scintigraphy and muscle sympathetic nerve activity in patients with primary hypertension. J Nucl Cardiol. 2017;24(2):363–71. https://doi.org/10.1007/s12350-016-0760-4.
We gratefully acknowledge Edwin Poel for his help in acquiring the 123I-mIBG images.
CTPK received grants from the Dutch Kidney Foundation (IP-11.40 and KJPB12.29, Bussum, The Netherlands) and from ZonMW Clinical Fellowship (40007039712461), Zorg Onderzoek Nederland/Medische Wetenschappen (ZonMW, Den Haag, The Netherlands). This support is gratefully acknowledged.
Linn C. Dobrowolski and Daan W. Eeftinck Schattenkerk contributed equally to this work.
Department of Internal Medicine - Nephrology and Kidney Transplantation, Academic Medical Center at the University of Amsterdam, Amsterdam, the Netherlands
Linn C. Dobrowolski
, C. T. Paul Krediet
, Liffert Vogt
& Frederike J. Bemelman
Department of Internal Medicine - Vascular Medicine, Academic Medical Center at the University of Amsterdam, Amsterdam, the Netherlands
Daan W. Eeftinck Schattenkerk
& Bert-Jan H. Van Den Born
Department of Cardiology, Academic Medical Center at the University of Amsterdam, Amsterdam, the Netherlands
Peter M. Van Brussel
Department of Radiology and Nuclear Medicine, F2-238 Academic Medical Center, University of Amsterdam, Meibergdreef 9, 1105, AZ, Amsterdam, The Netherlands
Jim A. Reekers
& Hein J. Verberne
LD carried out the patient recruitment, performed the image analysis and statistical analysis, and drafted the manuscript. DE carried out the patient recruitment and helped to draft the manuscript. CK participated in the design of the study. PB carried out the patient recruitment. LV participated in the design of the study. FB participated in the design of the study. JR conducted the renal denervation procedures. BB conceived of the study, participated in its design and coordination, and helped to draft the manuscript. HV conceived of the study, participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript.
Correspondence to Hein J. Verberne.
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed consent was obtained from all individual participants included in the study.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Renal catheter ablation
Radionuclide imaging
Renal nerves
[Figure: a two-dimensional perspective projection of a sphere; r – radius of the sphere]
A sphere (from Greek σφαῖρα — sphaira, "globe, ball"[1]) is a perfectly round geometrical object in three-dimensional space that resembles the shape of a completely round ball. Like a circle, which in geometric contexts is an object in two dimensions, a sphere is defined mathematically as the set of points that are all the same distance r from a given point in three-dimensional space. This distance r is the radius of the sphere, and the given point is the center of the sphere. The maximum straight distance through the sphere passes through the center and is thus twice the radius; it is the diameter.
In mathematics, a distinction is made between the sphere (a two-dimensional closed surface embedded in three-dimensional Euclidean space) and the ball (a three-dimensional shape that includes the interior of a sphere).
Surface area
The surface area of a sphere is:
$$ A = 4\pi r^2. $$
Archimedes first derived this formula[2] from the fact that the projection to the lateral surface of a circumscribed cylinder (i.e., the Lambert cylindrical equal-area projection) is area-preserving. The surface area also equals the derivative of the formula for the volume with respect to r, because the total volume inside a sphere of radius r can be thought of as the summation of the surface areas of an infinite number of spherical shells of infinitesimal thickness concentrically stacked inside one another from radius 0 to radius r. At infinitesimal thickness the discrepancy between the inner and outer surface area of any given shell is infinitesimal, and the elemental volume at radius r is simply the product of the surface area at radius r and the infinitesimal thickness.
At any given radius r, the incremental volume (δV) equals the product of the surface area at radius r (A(r)) and the thickness of a shell (δr):
$$ \delta V \approx A(r) \cdot \delta r. $$
The total volume is the summation of all shell volumes:
$$ V \approx \sum A(r) \cdot \delta r. $$
In the limit as δr approaches zero[3] this equation becomes:
$$ V = \int_0^r A(r)\, dr. $$
Substitute V:
$$ \frac{4}{3}\pi r^3 = \int_0^r A(r)\, dr. $$
Differentiating both sides of this equation with respect to r yields A as a function of r:
$$ 4\pi r^2 = A(r). $$
This is generally abbreviated as:
$$ A = 4\pi r^2 $$
Alternatively, the area element on the sphere is given in spherical coordinates by $dA = r^2 \sin\theta\, d\theta\, d\phi$. In Cartesian coordinates, the area element is $dS = \frac{r}{\sqrt{r^2 - \sum_{i \neq k} x_i^2}} \prod_{i \neq k} dx_i$ for any $k$. More generally, see area element.
The total area can thus be obtained by integration:
$$ A = \int_0^{2\pi} \int_0^{\pi} r^2 \sin\theta\, d\theta\, d\phi = 4\pi r^2. $$
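As a quick numerical sanity check of this integral, the following Python sketch approximates the $\theta$-integral with a midpoint rule (the grid size n is an arbitrary choice) and compares the result with $4\pi r^2$:

```python
import math

def sphere_area_numeric(r, n=2000):
    # Integrate dA = r^2 sin(theta) dtheta dphi; the phi integral contributes 2*pi
    dtheta = math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta  # midpoint rule in theta
        total += r**2 * math.sin(theta) * dtheta
    return 2 * math.pi * total

r = 1.5
print(sphere_area_numeric(r))  # ~28.2743
print(4 * math.pi * r**2)      # 28.2743..., matching A = 4*pi*r^2
```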
Enclosed volume
[Figure: circumscribed cylinder to a sphere]
In 3 dimensions, the volume inside a sphere (that is, the volume of a ball) is derived to be
$$ V = \frac{4}{3}\pi r^3 $$
where r is the radius of the sphere and π is the constant pi. Archimedes first derived this formula, which shows that the volume inside a sphere is 2/3 that of a circumscribed cylinder. (This assertion follows from Cavalieri's principle.) In modern mathematics, this formula can be derived using integral calculus, i.e. disk integration to sum the volumes of an infinite number of circular disks of infinitesimally small thickness stacked centered side by side along the x axis from x = 0 where the disk has radius r (i.e. y = r) to x = r where the disk has radius 0 (i.e. y = 0).
At any given x, the incremental volume (δV) equals the product of the cross-sectional area of the disk at x and its thickness (δx):
$$ \delta V \approx \pi y^2 \cdot \delta x. $$
The total volume is the summation of all incremental volumes:
$$ V \approx \sum \pi y^2 \cdot \delta x. $$
In the limit as δx approaches zero[3] this equation becomes:
$$ V = \int_{-r}^{r} \pi y^2\, dx. $$
At any given x, a right-angled triangle connects x, y and r to the origin; hence, applying the Pythagorean theorem yields:
$$ y^2 = r^2 - x^2. $$
Thus, substituting y with a function of x gives:
$$ V = \int_{-r}^{r} \pi (r^2 - x^2)\, dx. $$
This can now be evaluated as follows:
$$ V = \pi \left[ r^2 x - \frac{x^3}{3} \right]_{-r}^{r} = \pi \left( r^3 - \frac{r^3}{3} \right) - \pi \left( -r^3 + \frac{r^3}{3} \right) = \frac{4}{3}\pi r^3. $$
Therefore the volume of a sphere is:
$$ V = \frac{4}{3}\pi r^3. $$
Alternatively this formula is found using spherical coordinates, with volume element
$$ dV = r^2 \sin\theta\, dr\, d\theta\, d\varphi $$
$$ V = \int_0^{2\pi} \int_0^{\pi} \int_0^{r} r^2 \sin\theta\, dr\, d\theta\, d\varphi = \frac{4}{3}\pi r^3 $$
For most practical purposes, the volume inside a sphere inscribed in a cube can be approximated as 52.4% of the volume of the cube, since $\pi/6 \approx 0.5236$. For example, a sphere with diameter 1 m has 52.4% the volume of a cube with edge length 1 m, or about 0.524 m³.
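The disk-integration derivation lends itself to a direct numerical check; the following Python sketch (with an arbitrary number of slices n) approximates the integral above and also reproduces the $\pi/6 \approx 52.4\%$ figure for a sphere inscribed in a unit cube:

```python
import math

def sphere_volume_disks(r, n=100_000):
    # Disk integration along x: V = integral of pi * y^2 dx with y^2 = r^2 - x^2
    dx = 2 * r / n
    total = 0.0
    for i in range(n):
        x = -r + (i + 0.5) * dx
        total += math.pi * (r**2 - x**2) * dx
    return total

print(sphere_volume_disks(1.0))  # ~4.18879
print(4 / 3 * math.pi)           # exact (4/3)*pi for r = 1
# A sphere of diameter 1 inscribed in a cube of edge 1 fills pi/6 of it:
print(sphere_volume_disks(0.5) / 1.0**3)  # ~0.5236, i.e. about 52.4%
```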
In higher dimensions, the sphere (or hypersphere) is usually called an n-ball. General recursive formulas exist for the volume of an n-ball.
Equations in $\mathbb{R}^3$
In analytic geometry, a sphere with center (x0, y0, z0) and radius r is the locus of all points (x, y, z) such that
$$ (x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 = r^2. $$
The points on the sphere with radius r can be parameterized via
$$ x = x_0 + r \cos\theta \sin\varphi $$
$$ y = y_0 + r \sin\theta \sin\varphi \qquad (0 \leq \theta \leq 2\pi \text{ and } 0 \leq \varphi \leq \pi) $$
$$ z = z_0 + r \cos\varphi $$
(see also trigonometric functions and spherical coordinates).
A sphere of any radius centered at zero is an integral surface of the following differential form:
$$ x\, dx + y\, dy + z\, dz = 0. $$
This equation reflects that the position and velocity vectors of a point traveling on the sphere are always orthogonal to each other.
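A small Python sketch can confirm numerically that the parametrization above produces points satisfying the implicit equation; the center, radius, and sample count are arbitrary choices:

```python
import math
import random

x0, y0, z0, r = 1.0, -2.0, 0.5, 3.0  # arbitrary center and radius

for _ in range(5):
    theta = random.uniform(0.0, 2.0 * math.pi)
    phi = random.uniform(0.0, math.pi)
    # Parametrization of the sphere, as in the equations above
    x = x0 + r * math.cos(theta) * math.sin(phi)
    y = y0 + r * math.sin(theta) * math.sin(phi)
    z = z0 + r * math.cos(phi)
    # Each point satisfies (x - x0)^2 + (y - y0)^2 + (z - z0)^2 = r^2
    lhs = (x - x0)**2 + (y - y0)**2 + (z - z0)**2
    print(abs(lhs - r**2) < 1e-9)  # True
```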
An image of one of the most accurate human-made spheres, as it refracts the image of Einstein in the background. This sphere was a fused quartz gyroscope for the Gravity Probe B experiment, and differs in shape from a perfect sphere by no more than 40 atoms (less than 10 nanometers) of thickness. It was announced on 1 July 2008 that Australian scientists had created even more nearly perfect spheres, accurate to 0.3 nanometers, as part of an international hunt to find a new global standard kilogram.[4]
The sphere has the smallest surface area of all surfaces that enclose a given volume, and it encloses the largest volume among all closed surfaces with a given surface area. The sphere therefore appears in nature: for example, bubbles and small water drops are roughly spherical because the surface tension locally minimizes surface area.
The surface area relative to the mass of a sphere is called the specific surface area and can be expressed from the above stated equations as
$$ \mathrm{SSA} = \frac{A}{V\rho} = \frac{3}{r\rho}, $$
where $\rho$ is the ratio of mass to volume.
A sphere can also be defined as the surface formed by rotating a circle about any diameter. Replacing the circle with an ellipse rotated about its major axis, the shape becomes a prolate spheroid; rotated about the minor axis, an oblate spheroid.
Pairs of points on a sphere that lie on a straight line through the sphere's center are called antipodal points. A great circle is a circle on the sphere that has the same center and radius as the sphere and consequently divides it into two equal parts. The shortest distance along the surface between two distinct non-antipodal points on the surface is on the unique great circle that includes the two points. Equipped with the great-circle distance, a great circle becomes the Riemannian circle.
If a particular point on a sphere is (arbitrarily) designated as its north pole, then the corresponding antipodal point is called the south pole, and the equator is the great circle that is equidistant to them. Great circles through the two poles are called lines (or meridians) of longitude, and the line connecting the two poles is called the axis of rotation. Circles on the sphere that are parallel to the equator are lines of latitude. This terminology is also used for such approximately spheroidal astronomical bodies as the planet Earth (see geoid).
Any plane that includes the center of a sphere divides it into two equal hemispheres. Any two intersecting planes that include the center of a sphere subdivide the sphere into four lunes or biangles, the vertices of which all coincide with the antipodal points lying on the line of intersection of the planes.
The antipodal quotient of the sphere is the surface called the real projective plane, which can also be thought of as the northern hemisphere with antipodal points of the equator identified.
The round hemisphere is conjectured to be the optimal (least area) filling of the Riemannian circle.
The circles of intersection of any plane not intersecting the sphere's center and the sphere's surface are called spheric sections.[5]
Generalization to other dimensions
Spheres can be generalized to spaces of any dimension. For any natural number n, an "n-sphere," often written as $S^n$, is the set of points in (n + 1)-dimensional Euclidean space that are at a fixed distance r from a central point of that space, where r is, as before, a positive real number. In particular:
$S^0$: a 0-sphere is a pair of endpoints of an interval (−r, r) of the real line
$S^1$: a 1-sphere is a circle of radius r
$S^2$: a 2-sphere is an ordinary sphere
$S^3$: a 3-sphere is a sphere in 4-dimensional Euclidean space.
Spheres for n > 2 are sometimes called hyperspheres.
The n-sphere of unit radius centered at the origin is denoted $S^n$ and is often referred to as "the" n-sphere. Note that the ordinary sphere is a 2-sphere, because it is a 2-dimensional surface (which is embedded in 3-dimensional space).
The surface area of the (n − 1)-sphere of radius 1 is
$$ \frac{2\pi^{n/2}}{\Gamma(n/2)} $$
where Γ(z) is Euler's Gamma function.
Another expression for the surface area is
$$ \begin{cases} \dfrac{(2\pi)^{n/2}\, r^{n-1}}{2 \cdot 4 \cdots (n-2)}, & \text{if } n \text{ is even}; \\[2ex] \dfrac{2\,(2\pi)^{(n-1)/2}\, r^{n-1}}{1 \cdot 3 \cdots (n-2)}, & \text{if } n \text{ is odd}. \end{cases} $$
and the volume is the surface area times $\frac{r}{n}$, or
$$ \begin{cases} \dfrac{(2\pi)^{n/2}\, r^{n}}{2 \cdot 4 \cdots n}, & \text{if } n \text{ is even}; \\[2ex] \dfrac{2\,(2\pi)^{(n-1)/2}\, r^{n}}{1 \cdot 3 \cdots n}, & \text{if } n \text{ is odd}. \end{cases} $$
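These closed-form expressions are straightforward to evaluate; the following Python sketch implements the Gamma-function formula for the unit-sphere surface area and the "surface area times r/n" rule for the ball volume, checking them against familiar low-dimensional values:

```python
import math

def unit_sphere_area(n):
    # Surface area of the (n-1)-sphere of radius 1: 2 * pi^(n/2) / Gamma(n/2)
    return 2.0 * math.pi ** (n / 2) / math.gamma(n / 2)

def ball_volume(n, r=1.0):
    # Volume of the n-ball: surface area at radius r, times r/n
    return unit_sphere_area(n) * r ** (n - 1) * r / n

print(unit_sphere_area(2))  # 2*pi: circumference of the unit circle
print(unit_sphere_area(3))  # 4*pi: area of the ordinary unit sphere
print(ball_volume(3))       # 4*pi/3: volume of the unit ball
```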
Generalization to metric spaces
More generally, in a metric space (E,d), the sphere of center x and radius r > 0 is the set of points y such that d(x,y) = r.
If the center is a distinguished point that is considered to be the origin of E, as in a normed space, it is not mentioned in the definition and notation. The same applies for the radius if it is taken to equal one, as in the case of a unit sphere.
Unlike a ball, even a large sphere may be an empty set. For example, in $\mathbb{Z}^n$ with the Euclidean metric, a sphere of radius r is nonempty only if $r^2$ can be written as a sum of n squares of integers.
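This emptiness phenomenon is easy to check by brute force; the Python sketch below enumerates the points of a sphere in $\mathbb{Z}^n$ (the search bound follows from each coordinate being at most r in absolute value):

```python
from itertools import product

def lattice_sphere(n, r_squared):
    # Points y in Z^n with |y|^2 = r_squared, i.e. r_squared as a sum of n squares
    bound = int(r_squared ** 0.5)
    return [p for p in product(range(-bound, bound + 1), repeat=n)
            if sum(c * c for c in p) == r_squared]

# In Z^2, the sphere of radius sqrt(3) is empty: 3 is not a sum of two squares
print(lattice_sphere(2, 3))  # []
# The sphere of radius sqrt(2) is not: 2 = 1 + 1
print(lattice_sphere(2, 2))  # [(-1, -1), (-1, 1), (1, -1), (1, 1)]
```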
In topology, an n-sphere is defined as a space homeomorphic to the boundary of an (n + 1)-ball; thus, it is homeomorphic to the Euclidean n-sphere, but perhaps lacking its metric.
a 0-sphere is a pair of points with the discrete topology
a 1-sphere is a circle (up to homeomorphism); thus, for example, (the image of) any knot is a 1-sphere
a 2-sphere is an ordinary sphere (up to homeomorphism); thus, for example, any spheroid is a 2-sphere
The n-sphere is denoted $S^n$. It is an example of a compact topological manifold without boundary. A sphere need not be smooth; if it is smooth, it need not be diffeomorphic to the Euclidean sphere.
The Heine–Borel theorem implies that a Euclidean n-sphere is compact. The sphere is the inverse image of a one-point set under the continuous function $\|x\|$. Therefore, the sphere is closed. $S^n$ is also bounded; therefore it is compact.
Smale's paradox shows that it is possible to turn an ordinary sphere inside out in a three-dimensional space with possible self-intersections but without creating any crease, a process more commonly and historically called sphere eversion.
[Figure: great circle on a sphere]
The basic elements of Euclidean plane geometry are points and lines. On the sphere, points are defined in the usual sense, but the analogue of "line" may not be immediately apparent. Measuring by arc length shows that the shortest path between two points lying entirely in the sphere is a segment of the great circle that includes the points; see geodesic. Many, but not all (see parallel postulate), theorems from classical geometry hold true for this spherical geometry as well. In spherical trigonometry, angles are defined between great circles. Thus spherical trigonometry differs from ordinary trigonometry in many respects. For example, the sum of the interior angles of a spherical triangle exceeds 180 degrees. Also, any two similar spherical triangles are congruent.
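For points given as vectors in three-dimensional space, the great-circle distance is the sphere's radius times the angle between the vectors; a minimal Python sketch (with hypothetical input points) is:

```python
import math

def great_circle_distance(p, q, r=1.0):
    # Shortest surface distance between points p, q on a sphere of radius r:
    # r times the angle between the position vectors
    cos_angle = sum(a * b for a, b in zip(p, q)) / r**2
    cos_angle = max(-1.0, min(1.0, cos_angle))  # clamp against rounding error
    return r * math.acos(cos_angle)

north_pole = (0.0, 0.0, 1.0)
on_equator = (1.0, 0.0, 0.0)
print(great_circle_distance(north_pole, on_equator))  # pi/2: a quarter great circle
```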
Eleven properties of the sphere
In their book Geometry and the Imagination[6] David Hilbert and Stephan Cohn-Vossen describe eleven properties of the sphere and discuss whether these properties uniquely determine the sphere. Several properties hold for the plane, which can be thought of as a sphere with infinite radius. These properties are:
The points on the sphere are all the same distance from a fixed point. Also, the ratio of the distance of its points from two fixed points is constant.
The first part is the usual definition of the sphere and determines it uniquely. The second part can be easily deduced and follows a similar result of Apollonius of Perga for the circle. This second part also holds for the plane.
The contours and plane sections of the sphere are circles.
This property defines the sphere uniquely.
The sphere has constant width and constant girth.
The width of a surface is the distance between pairs of parallel tangent planes. Numerous other closed convex surfaces have constant width, for example the Meissner body. The girth of a surface is the circumference of the boundary of its orthogonal projection on to a plane. Each of these properties implies the other.
A normal vector to a sphere, a normal plane and its normal section. The curvature of the curve of intersection is the sectional curvature. For the sphere each normal section through a given point will be a circle of the same radius, the radius of the sphere. This means that every point on the sphere will be an umbilical point.
All points of a sphere are umbilics.
At any point on a surface a normal direction can be found that is at right angles to the surface; for the sphere these are the lines radiating out from the center of the sphere. The intersection of a plane that contains the normal with the surface forms a curve called a normal section, and the curvature of this curve is the normal curvature. For most points on most surfaces, different sections will have different curvatures; the maximum and minimum values of these are called the principal curvatures. Any closed surface will have at least four points called umbilical points. At an umbilic all the sectional curvatures are equal; in particular the principal curvatures are equal. Umbilical points can be thought of as the points where the surface is closely approximated by a sphere.
For the sphere the curvatures of all normal sections are equal, so every point is an umbilic. The sphere and plane are the only surfaces with this property.
The sphere does not have a surface of centers.
For a given normal section there exists a circle of curvature whose curvature equals the sectional curvature, which is tangent to the surface, and whose center lies along the normal line. For example, the two centers corresponding to the maximum and minimum sectional curvatures are called the focal points, and the set of all such centers forms the focal surface.
For most surfaces the focal surface forms two sheets that are each a surface and meet at umbilical points. Several cases are special:
For channel surfaces one sheet forms a curve and the other sheet is a surface
For cones, cylinders, tori and cyclides both sheets form curves.
For the sphere the center of every osculating circle is at the center of the sphere and the focal surface forms a single point. This property is unique to the sphere.
All geodesics of the sphere are closed curves.
Geodesics are curves on a surface that give the shortest distance between two points. They are a generalization of the concept of a straight line in the plane. For the sphere the geodesics are great circles. Many other surfaces share this property.
Of all the solids having a given volume, the sphere is the one with the smallest surface area; of all solids having a given surface area, the sphere is the one having the greatest volume.
This follows from the isoperimetric inequality. These properties define the sphere uniquely and can be seen in soap bubbles: a soap bubble will enclose a fixed volume, and surface tension minimizes its surface area for that volume. A freely floating soap bubble therefore approximates a sphere (though such external forces as gravity will slightly distort the bubble's shape).
The sphere has the smallest total mean curvature among all convex solids with a given surface area.
The mean curvature is the average of the two principal curvatures, which is constant because the two principal curvatures are constant at all points of the sphere.
The sphere has constant mean curvature.
The sphere is the only embedded surface without boundary or singularities that has constant positive mean curvature. Other immersed surfaces, such as minimal surfaces, have constant mean curvature.
The sphere has constant positive Gaussian curvature.
Gaussian curvature is the product of the two principal curvatures. It is an intrinsic property that can be determined by measuring length and angles and is independent of how the surface is embedded in space. Hence, bending a surface will not alter the Gaussian curvature, and other surfaces with constant positive Gaussian curvature can be obtained by cutting a small slit in the sphere and bending it. All these other surfaces would have boundaries, and the sphere is the only surface that lacks a boundary with constant, positive Gaussian curvature. The pseudosphere is an example of a surface with constant negative Gaussian curvature.
The sphere is transformed into itself by a three-parameter family of rigid motions.
Rotating a unit sphere centered at the origin around any axis maps the sphere onto itself. Any rotation about a line through the origin can be expressed as a combination of rotations around the three coordinate axes (see Euler angles). Therefore a three-parameter family of rotations exists such that each rotation transforms the sphere onto itself; this family is the rotation group SO(3). The plane is the only other surface with a three-parameter family of transformations (translations along the x and y axes and rotations around the origin). Circular cylinders are the only surfaces with two-parameter families of rigid motions, and the surfaces of revolution and helicoids are the only surfaces with a one-parameter family.
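As a concrete instance of this family, the short Python sketch below applies a rotation about the z-axis and checks that a point of the unit sphere stays on the sphere (the sample point and angle are arbitrary):

```python
import math

def rotate_z(p, angle):
    # Rotation about the z-axis, one of the three one-parameter subfamilies
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

p = (0.6, 0.8, 0.0)                      # a point on the unit sphere
q = rotate_z(p, 1.234)                   # arbitrary rotation angle
print(sum(c * c for c in p))             # 1.0
print(round(sum(c * c for c in q), 12))  # still 1.0: the sphere maps onto itself
```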
Cubes in relation to spheres
For every sphere there are multiple cuboids that may be inscribed within the sphere. The largest cuboid that can be inscribed within a sphere is a cube.
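A cube inscribed in a sphere of radius r has space diagonal 2r, hence edge length $2r/\sqrt{3}$; the following Python sketch uses this standard fact to compute the cube's volume and the fraction of the ball it fills:

```python
import math

def inscribed_cube_volume(r):
    # The cube's space diagonal equals the sphere's diameter 2r,
    # so its edge is 2r / sqrt(3)
    edge = 2.0 * r / math.sqrt(3.0)
    return edge ** 3

r = 1.0
print(inscribed_cube_volume(r))                             # ~1.5396
print(inscribed_cube_volume(r) / (4 / 3 * math.pi * r**3))  # ~0.368 of the ball
```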
3-sphere
Affine sphere
Alexander horned sphere
Ball (mathematics)
Banach–Tarski paradox
Circle of a sphere
Directional statistics
Dome (mathematics)
Dyson sphere
Hoberman sphere
Homology sphere
Homotopy groups of spheres
Homotopy sphere
Hypersphere
Lenart Sphere
Metric space
Napkin ring problem
Pseudosphere
Riemann sphere
Smale's paradox
Sphere packing
Spherical cap
Spherical coordinates
Spherical Earth
Spherical helix, tangent indicatrix of a curve of constant precession
Spherical sector
Spherical segment
Spherical shell
Spherical wedge
Spherical zone
Zoll sphere
↑ σφαῖρα, Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus
↑ Weisstein, Eric W., "Sphere", MathWorld.
↑ 3.0 3.1 Pages 141, 149.
↑ New Scientist | Technology | Roundest objects in the world created
↑ Weisstein, Eric W., "Spheric section", MathWorld.
↑ Hilbert, David; Cohn-Vossen, Stephan. Geometry and the Imagination.
William Dunham. "Pages 28, 226", The Mathematical Universe: An Alphabetical Journey Through the Great Proofs, Problems and Personalities, ISBN 0-471-17661-3.
Sphere (PlanetMath.org website)
Weisstein, Eric W., "Sphere", MathWorld.
Mathematica/Uniform Spherical Distribution
Computer animation showing how the inside of a sphere can turn outside.
Program in C++ to draw a sphere using parametric equation
Surface area of sphere proof.
RG Analysis and Partial Differential Equations
Graduate seminar on Advanced topics in PDE
Prof. Dr. Herbert Koch
Prof. Dr. Christoph Thiele
Marco Fraccaroli
This seminar takes place regularly on Fridays, at 14.00 (c.t.). Because of the current regulations regarding the Corona pandemic, the seminar will take place online on the Zoom platform. Please join the pdg-l mailing list or contact Marco Fraccaroli (mfraccar at math.uni-bonn.de) for further information.
April 16 - Organisational meeting
April 23 - Andreia Chapouto, University of Edinburgh
Invariance of the Gibbs measures for the periodic generalized KdV equations
In this talk, we consider the periodic generalized Korteweg-de Vries equations (gKdV). In particular, we study gKdV with the Gibbs measure initial data. The main difficulty lies in constructing local-in-time dynamics in the support of the measure. Since gKdV is analytically ill-posed in the L2-based Sobolev support, we instead prove deterministic local well-posedness in some Fourier-Lebesgue spaces containing the support of the Gibbs measure. New key ingredients are bilinear and trilinear Strichartz estimates adapted to the Fourier-Lebesgue setting. Once we construct local-in-time dynamics, we apply Bourgain's invariant measure argument to prove almost sure global well-posedness of the defocusing gKdV and invariance of the Gibbs measure. Our result completes the program initiated by Bourgain (1994) on the invariance of the Gibbs measures for periodic gKdV equations. This talk is based on joint work with Nobu Kishimoto (RIMS, University of Kyoto).
April 30 - Itamar Oliveira, Cornell University
The Fourier Extension problem through a time-frequency perspective
An equivalent formulation of the Fourier Extension (F.E.) conjecture for a compact piece of the paraboloid states that the F.E. operator maps $ L^{2+\frac{2}{d}}([0,1]^{d}) $ to $ L^{2+\frac{2}{d}+\varepsilon}(\mathbb{R}^{d+1}) $ for every $ \varepsilon>0 $. It has been fully solved only for $ d=1 $ and there are many partial results in higher dimensions regarding the range of $ (p,q) $ for which $ L^{p}([0,1]^{d}) $ is mapped to $ L^{q}(\mathbb{R}^{d+1}) $. In this talk, we will take an alternative route to this problem: one can reduce matters to proving that a model operator satisfies the same mapping properties, and we will show that the conjecture holds in higher dimensions for tensor functions, meaning for all $ g $ of the form $ g(x_{1},\ldots,x_{d})=g_{1}(x_{1})\cdot\ldots\cdot g_{d}(x_{d})$. Time permitting, we will also address multilinear versions of the statement above and get similar results, in which we will need only one of the many functions involved in each problem to be of such kind to obtain the desired conjectured bounds. This is joint work with Camil Muscalu.
May 7 - No speaker
May 14 - Andriy Bondarenko, NTNU
Fourier interpolation with zeros of zeta and L-functions
We construct a large family of Fourier interpolation bases for functions analytic in a strip symmetric about the real line. Interesting examples involve the nontrivial zeros of the Riemann zeta function and other L-functions. We establish a duality principle for Fourier interpolation bases in terms of certain kernels of general Dirichlet series with variable coefficients. Such kernels admit meromorphic continuation, with poles at a sequence dual to the sequence of frequencies of the Dirichlet series, and they satisfy a functional equation. Our construction of concrete bases relies on strengthening of Knopp's abundance principle for Dirichlet series with functional equations and a careful analysis of the associated Dirichlet series kernel, with coefficients arising from certain modular integrals for the theta group.
May 21 - Alix Deleporte, Laboratoire de Mathématiques d'Orsay, Université Paris-Saclay
Szegő kernels and Toeplitz operators
The Berezin-Toeplitz quantization associates, to functions on Kähler (complex + symplectic) manifolds, self-adjoint operators on Hilbert spaces, depending on a semiclassical parameter. When the manifold is $ \mathbb{R}^{2n}= \mathbb{C}^n $, we retrieve (up to the FBI or Bargmann transform) the standard classes of pseudodifferential operators. Toeplitz operators also contain quantum spin systems as an important class of examples (when the manifold is the sphere $S^2$ or its powers), which describe the interaction between a solid and a magnetic field.
In this talk, I will introduce Toeplitz operators and their principal geometric ingredient, the Szegő kernel, with main motivation a concrete example of quantum spin system with exotic behaviour. I will then describe the semiclassical techniques that are developed and used in the study of eigenfunction localisation in Berezin-Toeplitz quantization.
May 28 - Pfingstferien
June 4 - Dorothee Frey, KIT
Wave equations with low regularity coefficients
In this talk we discuss fixed-time $ L^p $ estimates and Strichartz estimates for wave equations with low regularity coefficients. It was shown by Smith and Tataru that wave equations with $ C^{1,1} $ coefficients satisfy the same Strichartz estimates as the unperturbed wave equation on $ \mathbb{R}^n $, and that for less regular coefficients a loss of derivatives in the data occurs. We improve these results for Lipschitz coefficients with additional structural assumptions. We show that no loss of derivatives occurs at the level of fixed-time $ L^p $ estimates, and that existing Strichartz estimates can be improved. The permitted class in particular excludes singular focussing effects. We also discuss perturbation results, and related results on Strichartz estimates for Schrödinger equations.
June 11 - Chenmin Sun, Laboratoire de Mathématiques AGM, CY Cergy-Paris Université
On the probabilistic well-posedness for the fractional cubic NLS
Gibbs measure is an important object in the study of macroscopic behavior of solutions of dispersive equations. Motivated by testing the dispersive effect for the construction of invariant measures, we consider the fractional cubic NLS with weak dispersion. For some ranges of dispersion, we constructed global weak solutions and strong solutions (yielding the flow property) via different methods. The construction of strong solutions is based on a "good" probabilistic local well-posedness result, which is difficult as the dispersion becomes very weak. Our resolution relies on a very recent refined resolution ansatz introduced by Deng-Nahmod-Yue. To overcome the difficulties caused by the weak dispersion in our problem, we benefit from some "physical-space" properties of the random averaging operators. This talk is based on collaborations with Nikolay Tzvetkov (CY Cergy-Paris Université).
June 18 - João P.G. Ramos, ETH Zürich
Stability for geometric and functional inequalities
June 25 - Stefan Buschenhenke, Christian-Albrechts-Universität Kiel
Factorisation and near-extremisers in restriction theory
We give an alternative argument to the application of the so-called Maurey-Nikishin-Pisier factorisation in Fourier restriction theory. Based on an induction-on-scales argument, our comparatively simple method applies to any compact quadratic surface, in particular compact parts of the paraboloid and the hyperbolic paraboloid. This is achieved by constructing near-extremisers with big "mass", which might itself be of interest.
July 2 - Maria Ntekoume, Rice University
Symplectic non-squeezing for the KdV flow on the line
In this talk we prove that the KdV flow on the line cannot squeeze a ball in $\dot H^{-\frac 1 2}(\mathbb R)$ into a cylinder of lesser radius. This is a PDE analogue of Gromov's famous symplectic non-squeezing theorem for an infinite dimensional PDE in infinite volume.
July 9 - Moritz Egert, Laboratoire de Mathématiques d'Orsay, Université Paris-Saclay
Dirichlet problem for elliptic systems with block structure
I'll consider a very simple elliptic PDE in the upper half-space: divergence form, transversally independent coefficients and no mixed transversal-tangential derivatives. For L2-data the Dirichlet problem can be solved via a semigroup. For other data classes X (Lebesgue, Hardy, Sobolev, Besov,…) the question, whether the corresponding Dirichlet problem is well-posed, is inseparably tied to the question, whether there is a compatible semigroup on X.
On a "semigroup space" the infinitesimal generator has most properties that one can dream of and these can be used to prove well-posedness. However, there are genuinely more "well-posedness spaces" than "semigroup spaces". For example, up to boundary dimension n=4 there is a well-posed BMO-Dirichlet problem, whose unique solution has no reason to keep its tangential regularity in the interior of the domain.
I'll give an introduction to the general theme and discuss some new results, all based on a recent monograph jointly written with Pascal Auscher.
July 9 - Alexander Volberg, Michigan State University
Why do we not have embedding theorems in the bi-disc?
Embedding theorems for the classical Hardy space on the bidisc are not known as I write this abstract.
On the other hand, embedding theorems for the Dirichlet space of analytic functions on the bidisc were recently found. They were also found for the Dirichlet space of analytic functions on the tridisc. But not on the four-disc…
We will explain the obstacles that prevent us from having those answers.
This talk is based on joint works with N. Arcozzi, I. Holmes, P. Mozolyako, G. Psaromiligkos and P. Zorin-Kranich.
July 16 - Alexei Poltoratski, University of Wisconsin-Madison
Pointwise convergence of scattering data
Тhe scattering transform, appearing in the study of differential operators, can be viewed as an analog of the Fourier transform in non-linear settings. This connection brings up numerous questions on finding non-linear analogs of classical results of Fourier analysis. One of the fundamental results of linear analysis is a theorem by L. Carleson on pointwise convergence of the Fourier series. In this talk I will discuss convergence for the scattering data of a real Dirac system on the half-line and present an analog of Carleson's theorem for the non-linear Fourier transform.
July 23 - Dominique Maldague, MIT
A new approach to small cap decoupling for the parabola
I will discuss forthcoming work in collaboration with Yuqiu Fu and Larry Guth. Strichartz estimates for solutions to the periodic Schrodinger equation are a direct corollary of the (l^2,L^p) decoupling theorem of Bourgain and Demeter. In the setting for small cap decoupling (see the paper of Demeter, Guth, Wang), we continue to measure the L^p norm of solutions to the periodic Schrodinger equation (as well as more general functions) but over a spatial scale which does not see the full periodicity of the solutions. By further developing the approach used by Guth, Maldague, and Wang to re-prove decoupling for the parabola, we obtain sharp level set estimates for the size of the solutions on these smaller spatial domains. The level set estimates refine and recover the results of Demeter, Guth, Wang for the parabola, and lead to new (l^q,L^p) small cap decoupling inequalities. | CommonCrawl |
Mubarak Adam Ishag Mahmoud* and Honge Ren*
Forest Fire Detection and Identification Using Image Processing and SVM
Abstract: Accurate forest fire detection remains a challenging issue, because some objects share the same features as fire, which may result in a high false alarm rate. This paper presents a new video-based, image-processing forest fire detection method, which consists of four stages. First, a background-subtraction algorithm is applied to detect moving regions. Second, candidate fire regions are determined using the CIE L∗a∗b∗ color space. Third, spatial wavelet analysis is used to differentiate between actual fire and fire-like objects, because candidate regions may contain moving fire-like objects. Finally, a support vector machine is used to classify each region of interest as either real fire or non-fire. The final experimental results verify that the proposed method effectively identifies forest fires.
Keywords: Background Subtraction, CIE L∗a∗b∗ Color Space, Forest Fire, SVM, Wavelet
Forest fires are real threats to human lives, environmental systems and infrastructure. It is predicted that forest fires could destroy half of the world's forests by the year 2030 [1]. The only efficient way to minimize forest fire damage is to adopt early fire detection mechanisms. Thus, forest-fire detection systems are gaining a lot of attention in several research centers and universities around the world. Currently, there exist many commercial fire detection sensor systems, but all of them are difficult to apply in big open areas like forests, due to their delay in response, necessary maintenance, high cost and other problems.
In this study, an image-processing-based approach has been used for several reasons: digital camera technology is developing quickly, a camera can cover large areas with excellent results, the response time of image-processing methods is better than that of the existing sensor systems, and the overall cost of image-processing systems is lower than that of sensor systems.
Several forest-fire detection methods based on image processing have been proposed. The methods presented in [2,3] share the same framework. These methods detect forest fire using the YCbCr color space. In these methods, detection of the forest fire is based on four rules: the first and second rules are used to segment flame regions, while the third and fourth rules are used to segment high-temperature regions. The first one is based on the fact that, in any fire image, the red color value is larger than the green and the green is larger than the blue; this fact is represented in YCbCr as luminance Y being larger than chrominance blue (Y>Cb). In the second rule, the luminance Y value is larger than the average value of the Y component for the same image (Y>Ymean), while the Cb component is smaller than the average value of Cb (Cb<Cbmean) and Cr is larger than the average value of Cr (Cr>Crmean). The third rule depends on the fact that the center of a fire region at high temperature is white in color; this reduces the red component and increases the blue component at the fire center, which is represented as (Cb>Y>Cr). The fourth rule is that Cr is smaller than the standard deviation of Cr for the same image (Crstd) multiplied by a constant τ (Cr ≤ τ·Crstd). These methods are fast. However, they are susceptible to false positives because they are not able to differentiate between moving fire-like objects and actual fire. Wang and Ye [4] proposed a forest-fire disaster prevention method that can detect fire and smoke. For fire detection, in any fire image, the red color value is larger than the green, the green value is larger than the blue, and the R component is larger than the average of the R component for the same image; this rule is represented as (R>G>B), (R>Rmean). The RGB images are then converted to the HSV color space. Fire pixels are determined if the following conditions are met: 0≤H≤60, 0.2≤S≤1, 100≤V≤255. For smoke detection, RGB and k-means algorithms are used. Standard RGB smoke values C are taken from an image with significant smoke; the C value must be adjusted experimentally based on the results. A cluster center P is determined from the video stream after the image frames are clustered by the k-means algorithm. Smoke is detected if |P–C| < threshold. This method works well; nevertheless, smoke can spread quickly and has different colors depending on the burning materials, leading to false alarms. Chen et al. [5] designed a fire detection algorithm which combines the saturation channel of the HSV color and the RGB color. This method detects fire using three rules: R≥RT, R≥G>B, and S≥((255-R)*ST/RT). Determination of two thresholds (ST and RT) is needed; based on the experimental results, the selected range is 55–65 for ST values and 115–135 for RT. This method is fast and computationally simple compared to the other methods. However, it suffers from false-positive alarms in the case of moving fire-like objects.
In this study, a forest-fire detection method is proposed. It relies on multiple stages to identify forest fire. The final results indicate that the proposed algorithm has a good detection rate and few false alarms. The proposed algorithm is able to distinguish between fire and fire-like objects, which is a crucial problem for most of the existing methods.
The paper is organized as follows: Section 2 describes the methodology, Section 3 presents the experimental results, and Section 4 summarizes the achieved results and potential future directions.
2. Methodology
In this part, the proposed method is presented. It consists of multiple stages. First, background subtraction is applied, because the fire boundaries change continuously. Second, a color segmentation model is used to mark the candidate regions. Third, spatial wavelet analysis is carried out to distinguish between actual fire and fire-like objects. Finally, a support vector machine (SVM) is used to classify the candidate regions as either actual fire or non-fire. The stages of the proposed algorithm are described in detail in the following subsections. Fig. 1 shows a flowchart of the proposed method.
2.1 Background Subtraction
Detecting moving objects is an essential step in most video-based fire detection methods, because the fire boundaries fluctuate continuously. Eq. (1) computes the contrast between the current image and the background to determine the regions of motion. Fig. 2 shows an example of background subtraction. A pixel at (x, y) is considered to be moving if it satisfies Eq. (1) as follows.
[TeX:] $$\left| I _ { n } ( x , y ) - B _ { n } ( x , y ) \right| > t h r$$
where In(x, y) and Bn(x, y) represent the pixel values at (x, y) in the current frame and the background frame, and thr refers to a threshold value, which is set to 3 experimentally.
The background value is continuously updated using Eq. (2) as follows:
[TeX:] $$B_{n+1}(x,y) = \begin{cases} B_{n}(x,y) + 1 & \text{if } I_{n}(x,y) > B_{n}(x,y) \\ B_{n}(x,y) - 1 & \text{if } I_{n}(x,y) < B_{n}(x,y) \\ B_{n}(x,y) & \text{if } I_{n}(x,y) = B_{n}(x,y) \end{cases}$$
where Bn+1(x, y) and Bn(x, y) represent the intensity values at (x, y) in the updated and the current background, respectively [6].
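Below is a minimal Python/NumPy sketch (an assumed re-implementation, not the authors' code) of Eqs. (1) and (2): the motion mask from frame differencing and the one-grey-level background update.

```python
import numpy as np

def detect_motion(frame, background, thr=3):
    """Eq. (1): a pixel is moving if |I_n(x, y) - B_n(x, y)| > thr."""
    return np.abs(frame.astype(np.int16) - background.astype(np.int16)) > thr

def update_background(frame, background):
    """Eq. (2): nudge each background pixel one grey level toward the frame."""
    bg = background.astype(np.int16)
    bg += np.sign(frame.astype(np.int16) - bg)  # +1, -1, or 0 per pixel
    return np.clip(bg, 0, 255).astype(np.uint8)
```

The slow update keeps the background robust to briefly flickering foreground objects while still tracking gradual illumination changes.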
The proposed method flowchart.
An original frame containing fire (a) and the frame containing fire after background subtraction (b).
2.2 Color-based Segmentation
Different kinds of moving objects (e.g., trees, people, birds) as well as fire can be included after applying background subtraction. Thus, the CIE L∗a∗b∗ color space is used to select candidate regions with fire color.
2.2.1 RGB to CIE L*a*b* conversion
The conversion from RGB to CIE L∗a∗b∗ color space is performed by using Eq. (3):
[TeX:] $$\left[ \begin{array}{c} X \\ Y \\ Z \end{array} \right] = \left[ \begin{array}{ccc} 0.412673 & 0.357580 & 0.180423 \\ 0.212671 & 0.715160 & 0.072169 \\ 0.019334 & 0.119193 & 0.950227 \end{array} \right] \left[ \begin{array}{c} R \\ G \\ B \end{array} \right] \\ L^{*} = \begin{cases} 116\,(Y/Y_{n})^{1/3} - 16, & \text{if } (Y/Y_{n}) > 0.008856 \\ 903.3\,(Y/Y_{n}), & \text{otherwise} \end{cases} \\ a^{*} = 500\,\big(f(X/X_{n}) - f(Y/Y_{n})\big), \qquad b^{*} = 200\,\big(f(Y/Y_{n}) - f(Z/Z_{n})\big), \\ f(t) = \begin{cases} t^{1/3}, & \text{if } t > 0.008856 \\ 7.787\,t + 16/116, & \text{otherwise} \end{cases}$$
where Xn, Yn, and Zn represent the reference (white) color values. The range of the RGB color channels is from 0 to 255 for an 8-bit data representation, and the ranges of L*, a*, and b* are [0, 100], [–110, 110], and [–110, 110], respectively.
After calculating the values of the color channels (L*, a*, b*), the values of the average channels (L*m, a*m, b*m) are obtained using the following equations:
[TeX:] $$\begin{aligned} L _ { m } ^ { * } = \frac { 1 } { N } \sum _ { x } \sum _ { y } L ^ { * } ( x , y ) \\ a _ { m } ^ { * } = \frac { 1 } { N } \sum _ { x } \sum _ { y } a ^ { * } ( x , y ) \\ b _ { m } ^ { * } = \frac { 1 } { N } \sum _ { x } \sum _ { y } b ^ { * } ( x , y ) \end{aligned}$$
where L*m, a*m, and b*m are the average values of the CIE L*a*b* channels, and N is the total number of image pixels.
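A sketch (assumed, following Eqs. (3) and (4) as printed above) of the conversion and the channel means in Python/NumPy; the white point (Xn, Yn, Zn) is taken here as the image of RGB = (1, 1, 1) under the matrix.

```python
import numpy as np

M = np.array([[0.412673, 0.357580, 0.180423],
              [0.212671, 0.715160, 0.072169],
              [0.019334, 0.119193, 0.950227]])
Xn, Yn, Zn = M.sum(axis=1)  # reference white: the image of RGB = (1, 1, 1)

def f(t):
    return np.where(t > 0.008856, np.cbrt(t), 7.787 * t + 16.0 / 116.0)

def rgb_to_lab(rgb):  # rgb scaled to [0, 1], shape (H, W, 3)
    xyz = rgb @ M.T
    x, y, z = xyz[..., 0] / Xn, xyz[..., 1] / Yn, xyz[..., 2] / Zn
    L = np.where(y > 0.008856, 116.0 * np.cbrt(y) - 16.0, 903.3 * y)
    a = 500.0 * (f(x) - f(y))
    b = 200.0 * (f(y) - f(z))
    return L, a, b  # the Eq. (4) means are then L.mean(), a.mean(), b.mean()
```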
To detect the candidate fire regions using CIE L*a*b*, four rules are defined based on the notion that the fire region is the brightest area with near-red color in the image. The rules are as follows:
[TeX:] $$R_{1}(x,y) = \begin{cases} 1 & \text{if } L^{*}(x,y) \geq L^{*}_{m} \\ 0 & \text{otherwise} \end{cases}$$
[TeX:] $$R_{2}(x,y) = \begin{cases} 1 & \text{if } a^{*}(x,y) \geq a^{*}_{m} \\ 0 & \text{otherwise} \end{cases}$$
[TeX:] $$R_{3}(x,y) = \begin{cases} 1 & \text{if } b^{*}(x,y) \geq b^{*}_{m} \\ 0 & \text{otherwise} \end{cases}$$
[TeX:] $$R_{4}(x,y) = \begin{cases} 1 & \text{if } b^{*}(x,y) \geq a^{*}(x,y) \\ 0 & \text{otherwise} \end{cases}$$
where R1(x, y), R2(x, y), R3(x, y), and R4(x, y) are binary images. Fig. 3 shows the result of applying rules (5) through (8).
Applying the rules from (5)–(8) to the input images: (i) original RGB images, (ii) binary images using rule (5), (iii) binary images using rule (6), (iv) binary images using rule (7), (v) binary images using rule (8), and (vi) binary images using rules (5) through (8).
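The four rules combine into a single candidate mask by a logical AND; a sketch (assumed) continuing the snippet above, operating on the NumPy arrays returned by rgb_to_lab:

```python
def candidate_fire_mask(L, a, b):
    """Rules (5)-(8): bright, reddish pixels with b* >= a*."""
    R1 = L >= L.mean()
    R2 = a >= a.mean()
    R3 = b >= b.mean()
    R4 = b >= a
    return R1 & R2 & R3 & R4
```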
2.3 Spatial Wavelet Analysis for Color Variations
Genuine fire regions show higher luminance contrast than fire-like colored objects, due to the turbulent flicker of fire. Spatial wavelet analysis is a good image-processing technique for distinguishing between genuine fire regions and fire-like colored regions. Thus, a 2D wavelet filter is applied to the red channel and the spatial wavelet energy is calculated for each pixel. Fig. 4 shows the wavelet energies of two videos, one containing actual fire and the other containing fire-like objects. It is clear that the regions containing actual fire have high variations and high wavelet energy. The following formula is used to calculate the wavelet energy:
[TeX:] $$E ( x , y ) = \left( H L ( x , y ) ^ { 2 } + L H ( x , y ) ^ { 2 } + H H ( x , y ) ^ { 2 } \right)$$
where E(x, y) is the spatial wavelet energy of a given pixel, and HL, LH, and HH are the high-low, low-high, and high-high wavelet sub-images, respectively. The spatial wavelet energy of each block is calculated by averaging the energy of each pixel in the block, as follows [7].
[TeX:] $$E _ { b l o c k } = \frac { 1 } { N _ { b } } \sum _ { x , y } E ( x , y )$$
where Nb is the total number of pixels in the block. Eblock is used in the next stage as the SVM input, to classify the regions of interest as either fire or non-fire.
Wavelet energy for actual fire (a) and fire-like object (b).
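A sketch (assumed) of Eqs. (9) and (10) using the PyWavelets package: a one-level 2D Haar transform of the red channel, followed by block averaging of the detail energy. Note that the detail sub-images are at half the frame resolution, and the sub-band naming convention varies between references.

```python
import numpy as np
import pywt

def block_wavelet_energy(red, block=16):
    _, (d1, d2, d3) = pywt.dwt2(red.astype(float), 'haar')  # detail sub-images
    E = d1**2 + d2**2 + d3**2                               # Eq. (9), per pixel
    h = (E.shape[0] // block) * block
    w = (E.shape[1] // block) * block
    blocks = E[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))            # Eq. (10): mean over N_b pixels
```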
2.4 Classification using SVM
SVM is nowadays commonly used in many fields of pattern recognition, because it provides high performance and accurate classification results even with a limited training data set. The idea of SVM is to create an optimal hyperplane that divides the input dataset into two classes with maximum margin. In this study, SVM is used to classify the regions of interest as either fire or non-fire. The SVM classification function is defined by the following formula:
[TeX:] $$f ( x ) = \operatorname { sign } \left( \sum _ { i = 0 } ^ { l - 1 } w _ { i } \cdot k \left( x , x _ { i } \right) + b \right)$$
where sign() determines whether the class of x is fire or non-fire (the +1 class and the –1 class), wi are the output weights of the kernel, k() represents a kernel function, xi are the support vectors, and l is the number of support vectors. In our proposed method, a one-dimensional feature vector has been used. The data in this study are not linearly separable, so no hyperplane exists that separates the input data into two parts; therefore, the non-linear radial basis function (RBF) kernel [8] is used, as follows:
[TeX:] $$k ( x , y ) = \exp \left( - \frac { \| x - y \| ^ { 2 } } { 2 \sigma ^ { 2 } } \right) \text { for } \sigma > 0$$
where x and y represent input feature vectors, and σ is a parameter controlling the width of the effective basis function, experimentally set to 0.1, which gives good performance. To train the SVM, a dataset consisting of 500 wavelet energies from actual fire videos and 500 from fire-like and non-fire moving pixels was used.
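A sketch (assumed) of the training stage with scikit-learn; the .npy file names are hypothetical placeholders for the 500 + 500 wavelet-energy samples described above, and gamma encodes the RBF width via gamma = 1/(2σ²) with σ = 0.1 as in Eq. (13).

```python
import numpy as np
from sklearn.svm import SVC

fire = np.load('fire_energies.npy')       # hypothetical file, shape (500,)
other = np.load('nonfire_energies.npy')   # hypothetical file, shape (500,)

X = np.concatenate([fire, other]).reshape(-1, 1)   # one-dimensional feature
y = np.concatenate([np.ones(500), -np.ones(500)])  # +1 fire, -1 non-fire

clf = SVC(kernel='rbf', gamma=1.0 / (2 * 0.1**2)).fit(X, y)
label = clf.predict([[12.5]])  # classify one candidate block energy
```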
3. Experimental Results
In this section, the experimental results of the proposed method are presented. The model is implemented in MATLAB (R2017a) and tested on a PC with an Intel Core i7 2.97 GHz CPU and 8 GB of RAM.
To measure the performance of the proposed algorithm, 10 videos collected from the Internet (http://www.ultimatechase.com) were used, with dimensions of 256×256; eight of them contain fire. Table 1 shows snapshots of the tested videos. A true positive is counted if an image frame has fire pixels and is determined by the proposed algorithm to be fire; if an image frame has no fire but is determined by the proposed algorithm to be fire, it counts as a false positive. The results are shown in Table 2.
Videos used for the proposed algorithm evaluation
The final experimental results in Table 2 show that the proposed method achieves an average true-positive rate of 93.46% on the eight fire videos and a false-positive rate of 6.89% on the two videos with fire-colored moving objects. These results indicate the good performance of the proposed method.
Experimental results for testing the proposed forest-fire detection method
Video | Number of frames | Number of fire frames | TP | TP rate (%) | FP | FP rate (%)
Video No. 1 | 260 | 260 | 230 | 88.46 | - | -
Video No. 3 | 208 | 208 | 203 | 97.6 | - | -
Video No. 6 | 585 | 0 | - | - | 34 | 5.81
Video No. 10 | 251 | 0 | - | - | 20 | 7.97
3.1 Performance Evaluation
To evaluate the performance of the proposed algorithm, comparisons between the above-mentioned methods and the proposed algorithm are carried out. All of these methods are tested on a data set consisting of 300 images (200 forest-fire images and 100 non-fire images) collected from the Internet. The algorithms' performance is measured using the F-score evaluation metric.
3.1.1 F-score
The F-score [9] is used to evaluate the performance of the detection methods. For any given detection method, there are four possible outcomes. If an image has fire pixels and it is determined by the algorithm to be fire, it is a true positive; if the same image is determined by the algorithm not to contain fire pixels, it is a false negative. If an image has no fire and it is determined by the algorithm to have no fire, it is a true negative, but if it is identified as fire by the algorithm, it counts as a false positive. Fire detection methods are evaluated using the following equations:
[TeX:] $$F = 2 * \frac{ \text{precision} \cdot \text{recall} }{ \text{precision} + \text{recall} }$$
[TeX:] $$precision \ = \frac { T P } { ( T P + F P ) }$$
[TeX:] $$r e c a l l = \frac { T P } { ( T P + F N ) }$$
where F refers to the F-score; TP, TN, FP, and FN are true positives, true negatives, false positives, and false negatives, respectively. A higher F-score means a better overall performance. Table 3 shows the comparison results; a small numerical check follows the rate definitions below.
TP rate is TP divided by the overall number of fire images.
TN rate is TN divided by the overall number of non-fire images.
FN rate is FN divided by the overall number of fire images.
FP rate is FP divided by the overall number of non-fire images.
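As that small numerical check (added here, not in the original paper), the following sketch evaluates Eqs. (14)-(16) from raw counts; feeding in the proposed method's Table 3 rates as if they were counts reproduces its F-score up to rounding.

```python
def f_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(round(100 * f_score(tp=94, fp=8, fn=5), 2))  # ~93.53, cf. Table 3
```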
Evaluations of the four tested fire detection methods
Method | TP rate (%) | FN rate (%) | TN rate (%) | FP rate (%) | Recall | Precision | F-score (%)
Premal and Vinsley [2] | 91.5 | 8 | 89 | 13 | 0.920 | 0.876 | 89.74
Vipin [3] | 86 | 9.5 | 82 | 11 | 0.901 | 0.887 | 89.38
Chen et al. [5] | 83 | 16.5 | 88 | 26 | 0.834 | 0.761 | 79.58
Proposed method | 94 | 5 | 90 | 8 | 0.949 | 0.922 | 93.52
Table 3 shows the F-scores of the four methods. The F-score of the proposed method is 3.78% higher than that of the method described in Premal and Vinsley [2], which indicates the reliability of the proposed method.
4. Conclusion
This work presented an effective forest-fire detection method using image processing. Background subtraction and spatial wavelet analysis are used. In addition, SVM is used to classify the candidate regions as either real fire or non-fire. A comparison between the existing methods and the proposed method is carried out. The final results indicate that the proposed forest-fire detection method achieves a good detection rate (93.46%) and a low false-alarm rate (6.89%) on fire-like objects. These results indicate that the proposed method is accurate and can be used in automatic forest-fire alarm systems.
For future work, the method's accuracy could be improved by extracting more fire features and increasing the training data set.
The work is supported by Fundamental Research Funds for the central universities (No. 2572017PZ10).
Mubarak Adam Ishag Mahmoud
He received a B.S. in Engineering Technology from the Faculty of Engineering and Technology, University of Gezira, in 2006 and an M.S. degree in Electronics Engineering from Sudan University of Science and Technology in 2012. He is now a Ph.D. candidate in Information and Computer Engineering at Northeast Forestry University, China.
Honge Ren
She received the Ph.D. degree from Northeast Forestry University, China, in 2009. She is currently a professor in the College of Information and Computer Engineering at Northeast Forestry University, a supervisor of doctoral students, and the director of the Heilongjiang Provincial Forestry Intelligent Equipment Engineering Research Center. Her main research interests include different aspects of artificial intelligence and distributed systems.
[1] D. Stipanicev, T. Vuko, D. Krstinic, M. Stula, and L. Bodrozic, "Forest fire protection by advanced video detection system: Croatian experiences," in Proceedings of the 3rd TIEMS Workshop on Improvement of Disaster Management Systems: Local and Global Trends, Trogir, Croatia, 2006. Available: https://bib.irb.hr/prikazi-rad?rad=279548
[2] C. E. Premal and S. S. Vinsley, "Image processing based forest fire detection using YCbCr colour model," in Proceedings of the 2014 International Conference on Circuit, Power and Computing Technologies (ICCPCT), Nagercoil, India, 2014, pp. 1229-1237. doi: 10.1109/ICCPCT.2014.7054883
[3] V. Vipin, "Image processing based forest fire detection," International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 2, pp. 87-95, 2012.
[4] Y. L. Wang and J. Y. Ye, "Research on the algorithm of prevention forest fire disaster in the Poyang Lake Ecological Economic Zone," Advanced Materials Research, vol. 518-523, pp. 5257-5260, 2012. doi: 10.4028/www.scientific.net/amr.518-523.5257
[5] T. H. Chen, P. H. Wu, and Y. C. Chiou, "An early fire-detection method based on image processing," in Proceedings of the 2004 International Conference on Image Processing, Singapore, 2004, pp. 1707-1710. doi: 10.1109/ICIP.2004.1421401
[6] M. Kang, T. X. Tung, and J. M. Kim, "Efficient video-equipped fire detection approach for automatic fire alarm systems," Optical Engineering, vol. 52, no. 1, 2013. doi: 10.1117/1.oe.52.1.017002
[7] B. U. Toreyin, Y. Dedeoglu, U. Gudukbay, and A. E. Cetin, "Computer vision based method for real-time fire and flame detection," Pattern Recognition Letters, vol. 27, no. 1, pp. 49-58, 2006. doi: 10.1016/j.patrec.2005.06.015
[8] S. Theodoridis, A. Pikrakis, K. Koutroumbas, and D. Cavouras, Introduction to Pattern Recognition: A MATLAB Approach. New York, NY: Academic Press, 2010.
[9] T. Fawcett, 2004. Available: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.9777
Received: September 19, 2017
Revision received: November 30, 2017
Accepted: January 9, 2018
Corresponding Author: Honge Ren* ([email protected])
Mubarak Adam Ishag Mahmoud*, College of Information and Computer Engineering, Northeast Forestry University, Harbin, China, [email protected]
Honge Ren*, College of Information and Computer Engineering, Northeast Forestry University, Harbin, China, [email protected] | CommonCrawl |
Cellular instabilities analyzed by multi-scale Fourier series: A review
Michel Potier-Ferry 1, Foudil Mohri 1, Fan Xu 2, Noureddine Damil 3, Bouazza Braikat 3, Khadija Mhada 3, Heng Hu 4, Qun Huang 4 and Saeid Nezamabadi 5
LEM3, Laboratoire d'Etudes des Microstructures et de Mécanique des Matériaux, UMR CNRS 7239, Université de Lorraine, Ile du Saulcy, 57045 Metz Cedex 01, France
Department of Mechanics and Engineering Science, Fudan University, 220 Handan Road, 200433 Shanghai, China
Laboratoire d'Ingénierie et Matériaux LIMAT, Faculté des Sciences Ben M'Sik, Université Hassan II de Casablanca, Sidi Othman, Casablanca, Morocco
School of Civil Engineering, Wuhan University, 8 South Road of East Lake, 430072 Wuhan, China
Université de Montpellier, Laboratoire de Mécanique et Génie Civil, UMR CNRS 5508, CC048 Place Eugène Bataillon, 34095 Montpellier Cedex 05, France
Received: April 2015. Revised: October 2015. Published: March 2016.
The paper is concerned with multi-scale methods to describe instability pattern formation, especially the method of Fourier series with variable coefficients. In this respect, various numerical tools are available. For instance, in the case of membrane models, shell finite element codes can predict the details of the wrinkles, but with difficulties due to the large number of unknowns and the existence of many solutions. Macroscopic models are also available, but they account only for the effect of wrinkling on the membrane behavior. A Fourier-related method has been introduced in order to model the main features of the wrinkles, but by using partial differential equations only at a macroscopic level. Within this method, the solution is sought in the form of a few terms of Fourier series whose coefficients vary more slowly than the oscillations. The recent progress on this Fourier-related method is reviewed and discussed.
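In the notation suggested by the abstract (the precise formulation is in the paper), the unknown is expanded in a Fourier series with slowly variable coefficients,

```latex
u(x) \;\approx\; \sum_{j=-N}^{N} u_j(x)\, e^{\,\mathrm{i} j q x},
```

where $q$ is the wavenumber of the basic cellular pattern and the envelopes $u_j(x)$ vary slowly compared with the oscillation $e^{\mathrm{i}qx}$; the macroscopic partial differential equations are then written for the $u_j$.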
Keywords: wrinkling, multi-scale models, bifurcation, slowly variable Fourier coefficients.
Mathematics Subject Classification: 34E13, 34K18, 35B36, 42B05, 74G60.
Citation: Michel Potier-Ferry, Foudil Mohri, Fan Xu, Noureddine Damil, Bouazza Braikat, Khadija Mhada, Heng Hu, Qun Huang, Saeid Nezamabadi. Cellular instabilities analyzed by multi-scale Fourier series: A review. Discrete & Continuous Dynamical Systems - S, 2016, 9 (2) : 585-597. doi: 10.3934/dcdss.2016013
$88,921 in 1938 has the same purchasing power as $858,939.02 in 1991. Over the 53 years this is a change of $770,018.02.
The average inflation rate of the dollar between 1938 and 1991 was 4.31% per year. The cumulative price increase of the dollar over this time was 865.96%.
The inflation rate for 1938 was -2.08%, while the inflation rate for 1991 was 4.21%. The 1991 inflation rate is higher than the average inflation rate of 3.64% per year between 1991 and 2021.
If you're interested to see the effect of inflation on various 1938 amounts, the table below shows how much each amount would be worth today based on the price increase of 865.96%.
Amount in 1938 | Equivalent in 1991
$5.00 | $48.30
$50.00 | $482.98
$500.00 | $4,829.79
$5,000.00 | $48,297.87
$50,000.00 | $482,978.72
$500,000.00 | $4,829,787.23
We then replace the variables with the historical CPI values. The CPI was 14.1 in 1938 and 136.2 in 1991.
$$\dfrac{ \$88,921 \times 136.2 }{ 14.1 } = \text{ \$858,939.02 } $$
$$ \dfrac{\text{ 136.2 } - \text{ 14.1 } }{\text{ 14.1 }} \times 100 = \text{ 865.96\% } $$ | CommonCrawl |
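For readers who want to reproduce these numbers, here is a minimal Python sketch of the CPI adjustment (added here, not part of the original page):

```python
def adjust(amount, cpi_start, cpi_end):
    """Scale a dollar amount by the ratio of CPI values."""
    return amount * cpi_end / cpi_start

print(round(adjust(88921, 14.1, 136.2), 2))   # 858939.02
print(round((136.2 - 14.1) / 14.1 * 100, 2))  # 865.96 (cumulative % increase)
```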
June 2012, 5(2): 383-415. doi: 10.3934/krm.2012.5.383
Large-time decay of the soft potential relativistic Boltzmann equation in $\mathbb{R}^3_x$
Robert M. Strain 1 and Keya Zhu 2
University of Pennsylvania, Department of Mathematics, David Rittenhouse Lab, 209 South 33rd Street, Philadelphia, PA 19104, United States
University of Pennsylvania, Department of Mathematics, David Rittenhouse Lab, 209 South 33rd Street, Philadelphia, PA 19104-6395, United States
Received: December 2011. Revised: February 2012. Published: April 2012.
For the relativistic Boltzmann equation in $\mathbb{R}^3_x$, this work proves the global existence, uniqueness, positivity, and optimal time convergence rates to the relativistic Maxwellian for solutions which start out sufficiently close under the general physical soft potential assumption proposed in 1988 [13].
Keywords: relativistic Maxwellian, relativity, Boltzmann, stability, kinetic theory, whole space, collisional kinetic theory.
Mathematics Subject Classification: Primary: 76P05; Secondary: 83A05.
Citation: Robert M. Strain, Keya Zhu. Large-time decay of the soft potential relativistic Boltzmann equation in $\mathbb{R}^3_x$. Kinetic & Related Models, 2012, 5 (2) : 383-415. doi: 10.3934/krm.2012.5.383
[1] Håkan Andréasson, Regularity of the gain term and strong $L^1$ convergence to equilibrium for the relativistic Boltzmann equation, SIAM J. Math. Anal., 27 (1996), 1386. doi: 10.1137/0527076.
[2] Russel E. Caflisch, The Boltzmann equation with a soft potential. I. Linear, spatially-homogeneous, Comm. Math. Phys., 74 (1980), 71.
[3] Russel E. Caflisch, The Boltzmann equation with a soft potential. II. Nonlinear, spatially-periodic, Comm. Math. Phys., 74 (1980), 97.
[4] Simone Calogero, The Newtonian limit of the relativistic Boltzmann equation, J. Math. Phys., 45 (2004), 4042. doi: 10.1063/1.1793328.
[5] Carlo Cercignani and Gilberto Medeiros Kremer, "The Relativistic Boltzmann Equation: Theory and Applications," Progress in Mathematical Physics, 22, Birkhäuser Verlag, 2002.
[6] S. R. de Groot, W. A. van Leeuwen and Ch. G. van Weert, "Relativistic Kinetic Theory. Principles and Applications," North-Holland Publishing Co., 1980.
[7] L. Desvillettes and C. Villani, On the trend to global equilibrium for spatially inhomogeneous kinetic systems: The Boltzmann equation, Invent. Math., 159 (2005), 245. doi: 10.1007/s00222-004-0389-9.
[8] R. J. DiPerna and P.-L. Lions, On the Cauchy problem for Boltzmann equations: Global existence and weak stability, Ann. of Math. (2), 130 (1989), 321. doi: 10.2307/1971423.
[9] Renjun Duan and Robert M. Strain, Optimal time decay of the Vlasov-Poisson-Boltzmann system in $\mathbb{R}^3$, Arch. Ration. Mech. Anal., 199 (2011), 291. doi: 10.1007/s00205-010-0318-6.
[10] Renjun Duan and Robert M. Strain, Optimal large-time behavior of the Vlasov-Maxwell-Boltzmann system in the whole space, Commun. Pure Appl. Math., 64 (2011), 1497.
[11] Marek Dudyński, On the linearized relativistic Boltzmann equation. II. Existence of hydrodynamics, J. Statist. Phys., 57 (1989), 199. doi: 10.1007/BF01023641.
[12] Marek Dudyński and Maria L. Ekiel-Jeżewska, The relativistic Boltzmann equation-mathematical and physical aspects, J. Tech. Phys., 48 (2007), 39.
[13] Marek Dudyński and Maria L. Ekiel-Jeżewska, On the linearized relativistic Boltzmann equation. I. Existence of solutions, Comm. Math. Phys., 115 (1988), 607. doi: 10.1007/BF01224130.
[14] Marek Dudyński and Maria L. Ekiel-Jeżewska, Global existence proof for relativistic Boltzmann equation, J. Statist. Phys., 66 (1992), 991. doi: 10.1007/BF01055712.
[15] Marek Dudyński and Maria L. Ekiel-Jeżewska, Causality of the linearized relativistic Boltzmann equation, Phys. Rev. Lett., 55 (1985), 2831. doi: 10.1103/PhysRevLett.55.2831.
[16] Marek Dudyński and Maria L. Ekiel-Jeżewska, Errata: "Causality of the linearized relativistic Boltzmann equation," Investigación Oper., 6 (1985).
[17] Seung-Yeal Ha, Yong Duck Kim, Ho Lee and Se Eun Noh, Asymptotic completeness for relativistic kinetic equations with short-range interaction forces, Methods Appl. Anal., 14 (2007), 251.
[18] Seung-Yeal Ha, Ho Lee, Xiongfeng Yang and Seok-Bae Yun, Uniform $L^2$-stability estimates for the relativistic Boltzmann equation, J. Hyperbolic Differ. Equ., 6 (2009), 295.
[19] Ling Hsiao and Hongjun Yu, Asymptotic stability of the relativistic Maxwellian, Math. Methods Appl. Sci., 29 (2006), 1481. doi: 10.1002/mma.736.
[20] Ling Hsiao and Hongjun Yu, Global classical solutions to the initial value problem for the relativistic Landau equation, J. Differential Equations, 228 (2006), 641.
[21] Robert T. Glassey, "The Cauchy Problem in Kinetic Theory," Society for Industrial and Applied Mathematics (SIAM), 1996.
[22] Robert T. Glassey, Global solutions to the Cauchy problem for the relativistic Boltzmann equation with near-vacuum data, Comm. Math. Phys., 264 (2006), 705. doi: 10.1007/s00220-006-1522-y.
[23] Robert T. Glassey and Walter A. Strauss, On the derivatives of the collision map of relativistic particles, Transport Theory Statist. Phys., 20 (1991), 55. doi: 10.1080/00411459108204708.
[24] Robert T. Glassey and Walter A. Strauss, Asymptotic stability of the relativistic Maxwellian, Publ. Res. Inst. Math. Sci., 29 (1993), 301. doi: 10.2977/prims/1195167275.
[25] Robert T. Glassey and Walter A. Strauss, Asymptotic stability of the relativistic Maxwellian via fourteen moments, Transport Theory Statist. Phys., 24 (1995), 657. doi: 10.1080/00411459508206020.
[26] Philip T. Gressman and Robert M. Strain, Global classical solutions of the Boltzmann equation with long-range interactions, Proc. Nat. Acad. Sci. USA, 107 (2010), 5744. doi: 10.1073/pnas.1001185107.
[27] Philip T. Gressman and Robert M. Strain, Global classical solutions of the Boltzmann equation without angular cut-off, J. Amer. Math. Soc., 24 (2011), 771. doi: 10.1090/S0894-0347-2011-00697-8.
[28] Philip T. Gressman and Robert M. Strain, Sharp anisotropic estimates for the Boltzmann collision operator and its entropy production, Adv. Math., 227 (2011), 2349. doi: 10.1016/j.aim.2011.05.005.
[29] Yan Guo, The Vlasov-Maxwell-Boltzmann system near Maxwellians, Invent. Math., 153 (2003), 593. doi: 10.1007/s00222-003-0301-z.
[30] Yan Guo, Classical solutions to the Boltzmann equation for molecules with an angular cutoff, Arch. Ration. Mech. Anal., 169 (2003), 305. doi: 10.1007/s00205-003-0262-9.
[31] Yan Guo, Decay and continuity of the Boltzmann equation in bounded domains, Arch. Ration. Mech. Anal., 197 (2010), 713. doi: 10.1007/s00205-009-0285-y.
[32] Yan Guo and Robert M. Strain, Momentum regularity and stability of the relativistic Vlasov-Maxwell-Boltzmann system, Comm. Math. Phys., 310 (2012), 649. doi: 10.1007/s00220-012-1417-z.
[33] Yan Guo and Walter A. Strauss, Instability of periodic BGK equilibria, Comm. Pure Appl. Math., 48 (1995), 861. doi: 10.1002/cpa.3160480803.
[34] Zhenglu Jiang, On the Cauchy problem for the relativistic Boltzmann equation in a periodic box: Global existence, Transport Theory Statist. Phys., 28 (1999), 617. doi: 10.1080/00411459908214520.
[35] Zhenglu Jiang, On the relativistic Boltzmann equation, Acta Math. Sci. (English Ed.), 18 (1998), 348.
[36] Shuichi Kawashima, The Boltzmann equation and thirteen moments, Japan J. Appl. Math., 7 (1990), 301. doi: 10.1007/BF03167846.
[37] P.-L. Lions, Compactness in Boltzmann's equation via Fourier integral operators and applications. I, II, III, J. Math. Kyoto Univ., 34 (1994), 391.
[38] Tai-Ping Liu and Shih-Hsien Yu, Initial-boundary value problem for one-dimensional wave solutions of the Boltzmann equation, Comm. Pure Appl. Math., 60 (2007), 295. doi: 10.1002/cpa.20172.
[39] Tai-Ping Liu and Shih-Hsien Yu, The Green's function and large-time behavior of solutions for the one-dimensional Boltzmann equation, Comm. Pure Appl. Math., 57 (2004), 1543. doi: 10.1002/cpa.20011.
[40] Clément Mouhot and Cédric Villani, On Landau damping, Acta Math., 207 (2012), 29.
[41] Jared Speck and Robert M. Strain, Hilbert expansion from the Boltzmann equation to relativistic fluids, Comm. Math. Phys., 304 (2011), 229. doi: 10.1007/s00220-011-1207-z.
[42] Robert M. Strain and Yan Guo, Stability of the relativistic Maxwellian in a collisional plasma, Comm. Math. Phys., 251 (2004), 263. doi: 10.1007/s00220-004-1151-2.
[43] Robert M. Strain and Yan Guo, Almost exponential decay near Maxwellian, Comm. Partial Differential Equations, 31 (2006), 417.
[44] Robert M. Strain and Yan Guo, Exponential decay for soft potentials near Maxwellian, Arch. Ration. Mech. Anal., 187 (2008), 287. doi: 10.1007/s00205-007-0067-3.
[45] Robert M. Strain, Global Newtonian limit for the relativistic Boltzmann equation near vacuum, SIAM J. Math. Anal., 42 (2010), 1568. doi: 10.1137/090762695.
[46] Robert M. Strain, Asymptotic stability of the relativistic Boltzmann equation for the soft-potentials, Comm. Math. Phys., 300 (2010), 529. doi: 10.1007/s00220-010-1129-1.
[47] Robert M. Strain, Coordinates in the relativistic Boltzmann theory, Kinetic and Related Models, 4 (2011), 345. doi: 10.3934/krm.2011.4.345.
[48] Robert M. Strain, Optimal time decay of the non cut-off Boltzmann equation in the whole space, preprint, 2010.
[49] Seiji Ukai and Kiyoshi Asano, On the Cauchy problem of the Boltzmann equation with a soft potential, Publ. Res. Inst. Math. Sci., 18 (1982), 57. doi: 10.2977/prims/1195183569.
[50] Ivan Vidav, Spectra of perturbed semigroups with applications to transport theory, J. Math. Anal. Appl., 30 (1970), 264. doi: 10.1016/0022-247X(70)90160-5.
Bernt Wennberg, The geometry of binary collisions and generalized Radon transforms,, Arch. Rational Mech. Anal., 139 (1997), 291. doi: 10.1007/s002050050054. Google Scholar
Tong Yang and Hongjun Yu, Hypocoercivity of the relativistic Boltzmann and Landau equations in the whole space,, J. Differential Equations, 248 (2010), 1518. Google Scholar
Hongjun Yu, Smoothing effects for classical solutions of the relativistic Landau-Maxwell system,, J. Differential Equations, 246 (2009), 3776. Google Scholar
Problem of uniqueness of an element of the best nonsymmetric $L_1$-approximation of continuous functions with values in KB-spaces
Babenko V. F., Tkachenko M. E.
On the behavior of a simple-layer potential for a parabolic equation on a Riemannian manifold
Bernatskaya J. N.
On a Riemannian manifold of nonpositive sectional curvature (Cartan-Hadamard-type manifold), we consider a parabolic equation. The second boundary-value problem for this equation is set in a bounded domain whose surface is a smooth submanifold. We prove that the gradient of the simple-layer potential for this problem has a jump when passing across the submanifold, similarly to its behavior in a Euclidean space.
On the uniqueness of a solution of the inverse problem for a simple-layer potential
Kapanadze D. V.
We prove the uniqueness of the solution of the inverse problem of the simple-layer potential for star-shaped smooth surfaces in the case of the metaharmonic equation $\Delta v - K^2 v = 0$. For the Laplace equation, a similar statement is not true.
Solution of a second-order Poincaré-Perron-type equation and differential equations that can be reduced to it
Kruglov V. E.
The analytical solution of the second-order Poincaré–Perron difference equation is presented. This enables us to construct, in explicit form, a solution of the differential equation $$t^2(A_1t^2 + B_1t + C_1)u'' + t(A_2t^2 + B_2t + C_2)u' + (A_3t^2 + B_3t + C_3)u = 0.$$ The solution of the equation is represented in terms of two hypergeometric functions and one new special function. As a special case, the explicit solution of the Heun equation is obtained, and polynomial solutions of this equation are found.
On some properties of solutions of quasilinear degenerate equations
Amanov R. A., Mamedov F. I.
For the quasilinear equations $\operatorname{div}\, A(x, u, \nabla u) = 0$ with degeneracy $\omega(x)$ from the Muckenhaupt $A_p$-class, we prove the Harnack inequality, an estimate of the Hölder norm, and a Wiener-type sufficient test for the regularity of boundary points.
Common periodic trajectories of two mappings
Matviichuk M. Yu.
For a map $f \in C^r(I, I)$, $r > 0$, we consider the problem of the existence of a map close to $f$ that has common periodic trajectories of given periods with $f$.
Higher-order parabolic variational inequality in unbounded domains
Medvid' I. M.
We prove the existence and uniqueness of a solution of a nonlinear parabolic variational inequality in an unbounded domain without conditions at infinity. In particular, the initial data may grow without bound at infinity, and a solution of the inequality is unique without any restrictions on its behavior at infinity.
Comparison theorems for some nonsymmetric classes of functions
Motornaya O. V., Motornyi V. P.
We prove comparison theorems of the Kolmogorov type for some nonsymmetric classes of functions.
Approximation of Poisson integrals by one linear approximation method in uniform and integral metrics
Serdyuk A. S.
We obtain asymptotic equalities for the least upper bounds of approximations of classes of Poisson integrals of periodic functions by a linear approximation method of special form in the metrics of the spaces $C$ and $L_p$.
On the construction of cubature formulas invariant under dihedral groups
Shamsiev E. A.
We study cubature formulas invariant under the dihedral group of order 16p.
Classification of topologies on finite sets using graphs
Adamenko N. P., Velichko I. G.
With the use of digraphs, topologies on finite sets are studied. On this basis, a new classification of such topologies is proposed. Some properties of $T_0$-topologies on finite sets are proved. In particular, it is proved that, in $T_0$-topologies, there exist open sets containing any number of elements not exceeding the cardinality of the set itself.
Generators and relations for wreath products
Drozd Yu. A., Skuratovskii R. V.
Generators and defining relations for wreath products of groups are given. Under a certain condition (conormality of generators), they are minimal.
First eigenvalue of the Laplace operator and mean curvature
Etemad Dehkordy A.
Ukr. Mat. Zh. - 2008. - 60, № 7. - pp. 1000–1003
The main theorem of this paper states a relation between the first nonzero eigenvalue of the Laplace operator and the squared norm of the mean curvature in irreducible compact homogeneous manifolds under special conditions. This statement has some consequences, presented in the remainder of the paper.
Estimation of the product of inner radii of partially nonoverlapping domains
Podvysotskii R. V.
We present new results on the maximization of products of positive powers of inner radii of some special domain systems in the extended complex plane $\overline{{\mathbb C}}$ with respect to points of finite sets such that any two distinct points $z_1, z_2 \in {\mathbb C}\setminus \{0\}$ of such set belong to different rays emerging from the origin. | CommonCrawl |
The impact of Recovery Colleges on mental health staff, services and society
A. Crowther, A. Taylor, R. Toney, S. Meddings, T. Whale, H. Jennings, K. Pollock, P. Bates, C. Henderson, J. Waring, M. Slade
Journal: Epidemiology and Psychiatric Sciences , First View
Published online by Cambridge University Press: 23 October 2018, pp. 1-8
Recovery Colleges are opening internationally. The evaluation focus has been on outcomes for Recovery College students who use mental health services. However, benefits may also arise for: staff who attend or co-deliver courses; the mental health and social care service hosting the Recovery College; and wider society. A theory-based change model characterising how Recovery Colleges impact at these higher levels is needed for formal evaluation of their impact, and to inform future Recovery College development. The aim of this study was to develop a stratified theory identifying candidate mechanisms of action and outcomes (impact) for Recovery Colleges at staff, services and societal levels.
Inductive thematic analysis of 44 publications identified in a systematised review was supplemented by collaborative analysis involving a lived experience advisory panel to develop a preliminary theoretical framework. This was refined through semi-structured interviews with 33 Recovery College stakeholders (service user students, peer/non-peer trainers, managers, community partners, clinicians) in three sites in England.
Candidate mechanisms of action and outcomes were identified at staff, services and societal levels. At the staff level, experiencing new relationships may change attitudes and associated professional practice. Identified outcomes for staff included: experiencing and valuing co-production; changed perceptions of service users; and increased passion and job motivation. At the services level, Recovery Colleges often develop somewhat separately from their host system, reducing the reach of the college into the host organisation but allowing development of an alternative culture giving experiential learning opportunities to staff around co-production and the role of a peer workforce. At the societal level, partnering with community-based agencies gave other members of the public opportunities for learning alongside people with mental health problems and enabled community agencies to work with people they might not have otherwise. Recovery Colleges also gave opportunities to beneficially impact on community attitudes.
This study is the first to characterise the mechanisms of action and impact of Recovery Colleges on mental health staff, mental health and social care services, and wider society. The findings suggest that a certain distance is needed in the relationship between the Recovery College and its host organisation if a genuine cultural alternative is to be created. Different strategies are needed depending on what level of impact is intended, and this study can inform decision-making about mechanisms to prioritise. Future research into Recovery Colleges should include contextual evaluation of these higher level impacts, and investigate effectiveness and harms.
Assessing distress in the community: psychometric properties and crosswalk comparison of eight measures of psychological distress
P. J. Batterham, M. Sunderland, T. Slade, A. L. Calear, N. Carragher
Journal: Psychological Medicine / Volume 48 / Issue 8 / June 2018
Many measures are available for measuring psychological distress in the community. Limited research has compared these scales to identify the best performing tools. A common metric for distress measures would enable researchers and clinicians to equate scores across different measures. The current study evaluated eight psychological distress scales and developed crosswalks (tables/figures presenting multiple scales on a common metric) to enable scores on these scales to be equated.
An Australian online adult sample (N = 3620, 80% female) was administered eight psychological distress measures: Patient Health Questionnaire-4, Kessler-10/Kessler-6, Distress Questionnaire-5 (DQ5), Mental Health Inventory-5, Hopkins Symptom Checklist-25 (HSCL-25), Self-Report Questionnaire-20 (SRQ-20) and Distress Thermometer. The performance of each measure in identifying DSM-5 criteria for a range of mental disorders was tested. Scale fit to a unidimensional latent construct was assessed using Confirmatory Factor Analysis (CFA). Finally, crosswalks were developed using Item Response Theory.
The DQ5 had optimal performance in identifying individuals meeting DSM-5 criteria, with adequate fit to a unidimensional construct. The HSCL-25 and SRQ-20 also had adequate fit but poorer specificity and/or sensitivity than the DQ5 in identifying caseness. The unidimensional CFA of the combined item bank for the eight scales showed acceptable fit, enabling the creation of crosswalk tables.
The DQ5 had optimal performance in identifying risk of mental health problems. The crosswalk tables developed in this study will enable rapid conversion between distress measures, providing more efficient means of data aggregation and a resource to facilitate interpretation of scores from multiple distress scales.
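As a schematic illustration of how such crosswalks operate (a sketch of the general IRT equating idea, not the paper's exact procedure), once all items are calibrated on a common latent distress trait $\theta$, each scale $A$ has a test characteristic curve giving its expected raw score, and a score on scale $A$ is mapped to scale $B$ by composing one curve with the inverse of the other:

$$T_A(\theta)=\sum_{i\in A}E[x_i\mid\theta],\qquad x_B\approx T_B\!\left(T_A^{-1}(x_A)\right).$$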
Signatures of quantum effects on radiation reaction in laser–electron-beam collisions
Frontiers in Plasma Physics Conference
C. P. Ridgers, T. G. Blackburn, D. Del Sorbo, L. E. Bradley, C. Slade-Lowther, C. D. Baird, S. P. D. Mangles, P. McKenna, M. Marklund, C. D. Murphy, A. G. R. Thomas
Journal: Journal of Plasma Physics / Volume 83 / Issue 5 / October 2017
Published online by Cambridge University Press: 11 September 2017, 715830502
Two signatures of quantum effects on radiation reaction in the collision of a $\sim$GeV electron beam with a high-intensity ($>3\times 10^{20}~\text{W}~\text{cm}^{-2}$) laser pulse have been considered. We show that the decrease in the average energy of the electron beam may be used to measure the Gaunt factor $g$ for synchrotron emission. We derive an equation for the evolution of the variance in the energy of the electron beam in the quantum regime, i.e. quantum efficiency parameter $\eta \not\ll 1$. We show that the evolution of the variance may be used as a direct measure of the quantum stochasticity of the radiation reaction and determine the parameter regime where this is observable. For example, stochastic emission results in a 25% increase in the standard deviation of the energy spectrum of a GeV electron beam, 1 fs after it collides with a laser pulse of intensity $10^{21}~\text{W}~\text{cm}^{-2}$. This effect should therefore be measurable using current high-intensity laser systems.
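In schematic form (a paraphrase of the two signatures, not the paper's equations), quantum effects suppress the mean radiated power relative to the classical synchrotron prediction,

$$\frac{d\langle\mathcal{E}\rangle}{dt}\simeq -\,g(\eta)\,P_{\mathrm{cl}},\qquad g(\eta)\le 1,$$

so the drop in the mean beam energy constrains $g$, while the discreteness of photon emission adds a stochastic contribution that can broaden the energy spectrum where a purely deterministic radiation-reaction force would narrow it.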
Combined universal and selective prevention for adolescent alcohol use: a cluster randomized controlled trial
M. Teesson, N. C. Newton, T. Slade, N. Carragher, E. L. Barrett, K. E. Champion, E. V. Kelly, N. K. Nair, L. A. Stapinski, P. J. Conrod
Journal: Psychological Medicine / Volume 47 / Issue 10 / July 2017
No existing models of alcohol prevention concurrently adopt universal and selective approaches. This study aims to evaluate the first combined universal and selective approach to alcohol prevention.
A total of 26 Australian schools with 2190 students (mean age: 13.3 years) were randomized to receive: universal prevention (Climate Schools); selective prevention (Preventure); combined prevention (Climate Schools and Preventure; CAP); or health education as usual (control). Primary outcomes were alcohol use, binge drinking and alcohol-related harms at 6, 12 and 24 months.
Climate, Preventure and CAP students demonstrated significantly lower growth in their likelihood to drink and binge drink, relative to controls over 24 months. Preventure students displayed significantly lower growth in their likelihood to experience alcohol harms, relative to controls. While adolescents in both the CAP and Climate groups demonstrated slower growth in drinking compared with adolescents in the control group over the 2-year study period, CAP adolescents demonstrated faster growth in drinking compared with Climate adolescents.
Findings support universal, selective and combined approaches to alcohol prevention. Particularly novel are the findings of no advantage of the combined approach over universal or selective prevention alone.
Microcrack nucleation in granular ice under uniaxial compression: effect of grain size and temperature
P. Kalifa, S. J. Jones, T. D. Slade
Uniaxial compression tests were carried out on fresh-water, isotropic, granular ice at a strain rate of $6 \times 10^{-4}~\text{s}^{-1}$. We investigated the effect of temperature (between $-2$ and $-39^{\circ}$C) and grain size (1–8 mm) on the critical stress and strain at initial crack nucleation. The amount of non-elastic strain at this event was estimated. The critical stress for initial crack nucleation increased strongly with decreasing temperature, following an Arrhenius law. It also exhibited a linear increase with $d_g^{-1/2}$ (where $d_g$ is the average grain size). It is shown that the results cannot be explained by the purely brittle model of Sunder and Wu (1990). The results are interpreted in terms of grain-boundary sliding, controlled by the intrinsic viscosity of the boundary.
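The two reported trends can be summarised compactly (an illustrative restatement of the stated dependences, with $Q$ an apparent activation energy; the paper reports the fits separately):

$$\sigma_c=\sigma_0+k\,d_g^{-1/2}\ \text{ at fixed } T,\qquad \sigma_c\propto\exp\!\left(\frac{Q}{RT}\right)\ \text{ at fixed } d_g.$$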
Parental supply of alcohol and alcohol consumption in adolescence: prospective cohort study
R. P. Mattick, M. Wadolowski, A. Aiken, P. J. Clare, D. Hutchinson, J. Najman, T. Slade, R. Bruno, N. McBride, L. Degenhardt, K. Kypri
Journal: Psychological Medicine / Volume 47 / Issue 2 / January 2017
Parents are a major supplier of alcohol to adolescents, yet there is limited research examining the impact of this on adolescent alcohol use. This study investigates associations between parental supply of alcohol, supply from other sources, and adolescent drinking, adjusting for child, parent, family and peer variables.
A cohort of 1927 adolescents was surveyed annually from 2010 to 2014. Measures include: consumption of whole drinks; binge drinking (>4 standard drinks on any occasion); parental supply of alcohol; supply from other sources; child, parent, family and peer covariates.
After adjustment, adolescents supplied alcohol by parents had higher odds of drinking whole beverages [odds ratio (OR) 1.80, 95% confidence interval (CI) 1.33–2.45] than those not supplied by parents. However, parental supply was not associated with bingeing, and those supplied alcohol by parents typically consumed fewer drinks per occasion (incidence rate ratio 0.86, 95% CI 0.77–0.96) than adolescents supplied only from other sources. Adolescents obtaining alcohol from non-parental sources had increased odds of drinking whole beverages (OR 2.53, 95% CI 1.86–3.45) and bingeing (OR 3.51, 95% CI 2.53–4.87).
Parental supply of alcohol to adolescents was associated with increased risk of drinking, but not bingeing. These parentally-supplied children also consumed fewer drinks on a typical drinking occasion. Adolescents supplied alcohol from non-parental sources had greater odds of drinking and bingeing. Further follow-up is necessary to determine whether these patterns continue, and to examine alcohol-related harm trajectories. Parents should be advised that supply of alcohol may increase children's drinking.
The structure of adolescent psychopathology: a symptom-level analysis
N. Carragher, M. Teesson, M. Sunderland, N. C. Newton, R. F. Krueger, P. J. Conrod, E. L. Barrett, K. E. Champion, N. K. Nair, T. Slade
Journal: Psychological Medicine / Volume 46 / Issue 5 / April 2016
Most empirical studies into the covariance structure of psychopathology have been confined to adults. This work is not developmentally informed as the meaning, age-of-onset, persistence and expression of disorders differ across the lifespan. This study investigates the underlying structure of adolescent psychopathology and associations between the psychopathological dimensions and sex and personality risk profiles for substance misuse and mental health problems.
This study analyzed data from 2175 adolescents (mean age 13.3 years). Five dimensional models were tested using confirmatory factor analysis and the external validity was examined using a multiple-indicators multiple-causes model.
A modified bifactor model, with three correlated specific factors (internalizing, externalizing, thought disorder) and one general psychopathology factor, provided the best fit to the data. Females reported higher mean levels of internalizing, and males reported higher mean levels of externalizing. No significant sex differences emerged in liability to thought disorder or general psychopathology. Liability to internalizing, externalizing, thought disorder and general psychopathology was characterized by a number of differences in personality profiles.
This study is the first to identify a bifactor model including a specific thought disorder factor. The findings highlight the utility of transdiagnostic treatment approaches and the importance of restructuring psychopathology in an empirically based manner.
The epidemiology of traumatic event exposure worldwide: results from the World Mental Health Survey Consortium
C. Benjet, E. Bromet, E. G. Karam, R. C. Kessler, K. A. McLaughlin, A. M. Ruscio, V. Shahly, D. J. Stein, M. Petukhova, E. Hill, J. Alonso, L. Atwoli, B. Bunting, R. Bruffaerts, J. M. Caldas-de-Almeida, G. de Girolamo, S. Florescu, O. Gureje, Y. Huang, J. P. Lepine, N. Kawakami, Viviane Kovess-Masfety, M. E. Medina-Mora, F. Navarro-Mateu, M. Piazza, J. Posada-Villa, K. M. Scott, A. Shalev, T. Slade, M. ten Have, Y. Torres, M. C. Viana, Z. Zarkov, K. C. Koenen
Considerable research has documented that exposure to traumatic events has negative effects on physical and mental health. Much less research has examined the predictors of traumatic event exposure. Increased understanding of risk factors for exposure to traumatic events could be of considerable value in targeting preventive interventions and anticipating service needs.
General population surveys in 24 countries with a combined sample of 68 894 adult respondents across six continents assessed exposure to 29 traumatic event types. Differences in prevalence were examined with cross-tabulations. Exploratory factor analysis was conducted to determine whether traumatic event types clustered into interpretable factors. Survival analysis was carried out to examine associations of sociodemographic characteristics and prior traumatic events with subsequent exposure.
Over 70% of respondents reported a traumatic event; 30.5% were exposed to four or more. Five types – witnessing death or serious injury, the unexpected death of a loved one, being mugged, being in a life-threatening automobile accident, and experiencing a life-threatening illness or injury – accounted for over half of all exposures. Exposure varied by country, sociodemographics and history of prior traumatic events. Being married was the most consistent protective factor. Exposure to interpersonal violence had the strongest associations with subsequent traumatic events.
Given the near ubiquity of exposure, limited resources may best be dedicated to those that are more likely to be further exposed such as victims of interpersonal violence. Identifying mechanisms that account for the associations of prior interpersonal violence with subsequent trauma is critical to develop interventions to prevent revictimization.
Can community midwives prevent antenatal depression? An external pilot study to test the feasibility of a cluster randomized controlled universal prevention trial
T. S. Brugha, J. Smith, J. Austin, J. Bankart, M. Patterson, C. Lovett, Z. Morgan, C. J. Morrell, P. Slade
Repeated epidemiological surveys show no decline in depression although uptake of treatments has grown. Universal depression prevention interventions are effective in schools but untested rigorously in adulthood. Selective prevention programmes have poor uptake. Universal interventions may be more acceptable during routine healthcare contacts for example antenatally. One study within routine postnatal healthcare suggested risk of postnatal depression could be reduced in non-depressed women from 11% to 8% by giving health visitors psychological intervention training. Feasibility and effectiveness in other settings, most notably antenatally, is unknown.
We conducted an external pilot study using a cluster trial design consisting of recruitment and enhanced psychological training of randomly selected clusters of community midwives (CMWs), recruitment of pregnant women of all levels of risk of depression, collection of baseline and outcome data prior to childbirth, allowing time for women 'at increased risk' to complete CMW-provided psychological support sessions.
Seventy-nine percent of eligible women approached agreed to take part. Two hundred and ninety-eight women in eight clusters participated and 186 termed 'at low risk' for depression, based on an Edinburgh Perinatal Depression Scale (EPDS) score of <12 at 12 weeks gestation, provided baseline and outcome data at 34 weeks gestation. All trial protocol procedures were shown to be feasible. Antenatal effect sizes in women 'at low risk' were similar to those previously demonstrated postnatally. Qualitative work confirmed the acceptability of the approach to CMWs and intervention group women.
A fully powered trial testing universal prevention of depression in pregnancy is feasible, acceptable and worth undertaking.
Anxious and non-anxious major depressive disorder in the World Health Organization World Mental Health Surveys
R. C. Kessler, N. A. Sampson, P. Berglund, M. J. Gruber, A. Al-Hamzawi, L. Andrade, B. Bunting, K. Demyttenaere, S. Florescu, G. de Girolamo, O. Gureje, Y. He, C. Hu, Y. Huang, E. Karam, V. Kovess-Masfety, S. Lee, D. Levinson, M. E. Medina Mora, J. Moskalewicz, Y. Nakamura, F. Navarro-Mateu, M. A. Oakley Browne, M. Piazza, J. Posada-Villa, T. Slade, M. ten Have, Y. Torres, G. Vilagut, M. Xavier, Z. Zarkov, V. Shahly, M. A. Wilcox
Journal: Epidemiology and Psychiatric Sciences / Volume 24 / Issue 3 / June 2015
To examine cross-national patterns and correlates of lifetime and 12-month comorbid DSM-IV anxiety disorders among people with lifetime and 12-month DSM-IV major depressive disorder (MDD).
Method.
Nationally or regionally representative epidemiological interviews were administered to 74 045 adults in 27 surveys across 24 countries in the WHO World Mental Health (WMH) Surveys. DSM-IV MDD, a wide range of comorbid DSM-IV anxiety disorders, and a number of correlates were assessed with the WHO Composite International Diagnostic Interview (CIDI).
45.7% of respondents with lifetime MDD (32.0–46.5% inter-quartile range (IQR) across surveys) had one or more lifetime anxiety disorders. A slightly higher proportion of respondents with 12-month MDD had lifetime anxiety disorders (51.7%, 37.8–54.0% IQR) and only slightly lower proportions of respondents with 12-month MDD had 12-month anxiety disorders (41.6%, 29.9–47.2% IQR). Two-thirds (68%) of respondents with lifetime comorbid anxiety disorders and MDD reported an earlier age-of-onset (AOO) of their first anxiety disorder than their MDD, while 13.5% reported an earlier AOO of MDD and the remaining 18.5% reported the same AOO of both disorders. Women and previously married people had consistently elevated rates of lifetime and 12-month MDD as well as comorbid anxiety disorders. Consistently higher proportions of respondents with 12-month anxious than non-anxious MDD reported severe role impairment (64.4 v. 46.0%; $\chi^2_1 = 187.0$, p < 0.001) and suicide ideation (19.5 v. 8.9%; $\chi^2_1 = 71.6$, p < 0.001). Significantly more respondents with 12-month anxious than non-anxious MDD received treatment for their depression in the 12 months before interview, but this difference was more pronounced in high-income countries (68.8 v. 45.4%; $\chi^2_1 = 108.8$, p < 0.001) than low/middle-income countries (30.3 v. 20.6%; $\chi^2_1 = 11.7$, p < 0.001).
Conclusions.
Patterns and correlates of comorbid DSM-IV anxiety disorders among people with DSM-IV MDD are similar across WMH countries. The narrow IQR of the proportion of respondents with temporally prior AOO of anxiety disorders than comorbid MDD (69.6–74.7%) is especially noteworthy. However, the fact that these proportions are not higher among respondents with 12-month than lifetime comorbidity means that temporal priority between lifetime anxiety disorders and MDD is not related to MDD persistence among people with anxious MDD. This, in turn, raises complex questions about the relative importance of temporally primary anxiety disorders as risk markers v. causal risk factors for subsequent MDD onset and persistence, including the possibility that anxiety disorders might primarily be risk markers for MDD onset and causal risk factors for MDD persistence.
Clinical decision making and outcome in the routine care of people with severe mental illness across Europe (CEDAR)
B. Puschner, T. Becker, B. Mayer, H. Jordan, M. Maj, A. Fiorillo, A. Égerházi, T. Ivánka, P. Munk-Jørgensen, M. Krogsgaard Bording, W. Rössler, W. Kawohl, M. Slade, for the CEDAR study group
Journal: Epidemiology and Psychiatric Sciences / Volume 25 / Issue 1 / February 2016
Aims.
Shared decision making has been advocated as a means to improve patient-orientation and quality of health care. There is a lack of knowledge on clinical decision making and its relation to outcome in the routine treatment of people with severe mental illness. This study examined preferred and experienced clinical decision making from the perspectives of patients and staff, and how these affect treatment outcome.
Methods.
"Clinical Decision Making and Outcome in Routine Care for People with Severe Mental Illness" (CEDAR; ISRCTN75841675) is a naturalistic prospective observational study with bimonthly assessments during a 12-month observation period. Between November 2009 and December 2010, adults with severe mental illness were consecutively recruited from caseloads of community mental health services at the six study sites (Ulm, Germany; London, UK; Naples, Italy; Debrecen, Hungary; Aalborg, Denmark; and Zurich, Switzerland). Clinical decision making was assessed using two instruments which both have parallel patient and staff versions: (a) the Clinical Decision Making Style Scale (CDMS) measured preferences for decision making at baseline; and (b) the Clinical Decision Making Involvement and Satisfaction Scale (CDIS) measured involvement in and satisfaction with a specific decision at all time points. Primary outcome was patient-rated unmet needs measured with the Camberwell Assessment of Need Short Appraisal Schedule (CANSAS). Mixed-effects multinomial regression was used to examine differences and course over time in involvement in and satisfaction with actual decision making. The effect of clinical decision making on the primary outcome was examined using hierarchical linear modelling, controlling for covariates (study centre, patient age, duration of illness, and diagnosis). Analyses also controlled for nesting of patients within staff.
Of 708 individuals approached, 588 adults with severe mental illness (52% female, mean age = 41.7) gave informed consent. Paired staff participants (N = 213) were 61.8% female and 46.0 years old on average. Shared decision making was preferred by patients ($\chi^2 = 135.08$; p < 0.001) and staff ($\chi^2 = 368.17$; p < 0.001). Decision making style of staff significantly affected unmet needs over time, with unmet needs decreasing more in patients whose clinicians preferred active to passive (−0.406 unmet needs per two months, p = 0.007) or shared (−0.303 unmet needs per two months, p = 0.015) decision making.
Decision making style of staff is a prime candidate for the development of targeted intervention. If proven effective in future trials, this would pave the ground for a shift from shared to active involvement of patients including changes to professional socialization through training in principles of active decision making.
Examining the shared and unique relationships among substance use and mental disorders
M. Sunderland, T. Slade, R. F. Krueger
Co-morbidity among use of different substances can be explained by a shared underlying dimensional factor. What remains unknown is whether the relationship between substance use and various co-morbid mental disorders can be explained solely by the general factor or whether there remain unique contributions of specific substances.
Data were from the 2007 Australian National Survey of Mental Health and Wellbeing (NSMHWB). A unidimensional latent factor was constructed that represented general substance use. The shared and specific relationships between lifetime substance use indicators and internalizing disorders, suicidality and psychotic-like experiences (PLEs) were examined using Multiple Indicators Multiple Causes (MIMIC) models in the total sample. Additional analyses then examined the shared and specific relationships associated with substance dependence diagnoses as indicators of the latent trait focusing on a subsample of substance users.
General levels of latent substance use were significantly and positively related to internalizing disorders, suicidality and psychotic-like experiences. Similar results were found when examining general levels of latent substance dependence in a sample of substance users. There were several direct effects between specific substance use/dependence indicators and the mental health correlates that significantly improved the overall model fit but they were small in magnitude and had relatively little impact on the general relationship.
The majority of pairwise co-morbid relationships between substance use/dependence and mental health correlates can be explained through a general latent factor. Researchers should focus on investigating the commonalities across all substance use and dependence indicators when studying mental health co-morbidity.
Onset and temporal sequencing of lifetime anxiety, mood and substance use disorders in the general population
T. Slade, P. M. McEvoy, C. Chapman, R. Grove, M. Teesson
Published online by Cambridge University Press: 15 November 2013, pp. 45-53
To date, very few studies have examined the bi-directional associations between mood disorders (MDs), anxiety disorders (ADs) and substance use disorders (SUDs), simultaneously. The aims of the current study were to determine the rates and patterns of comorbidity of the common MDs, ADs and SUDs and describe the onset and temporal sequencing of these classes of disorder, by sex.
Data came from the 2007 Australian National Survey of Mental Health and Wellbeing, a nationally representative household survey with 8841 (60% response rate) community residents aged 16–85.
Pre-existing mental disorders increase the risk of subsequent mental disorders in males and females regardless of the class of disorder. Pre-existing SUDs increase the risk of subsequent MDs and ADs differentially for males and females. Pre-existing MDs increase the risk of subsequent ADs differentially for males and females.
Comorbidity remains a significant public health issue and current findings point to the potential need for sex-specific prevention and treatment responses.
By James Ahn, Eric L. Anderson, Annette L. Beautrais, Dennis Beedle, Jon S. Berlin, Benjamin L. Bregman, Peter Brown, Suzie Bruch, Jonathan Busko, Stuart Buttlaire, Laurie Byrne, Gerald Carroll, Valerie A. Carroll, Margaret Cashman, Joseph R. Check, Lara G. Chepenik, Robert N. Cuyler, Preeti Dalawari, Suzanne Dooley-Hash, William R. Dubin, Mila L. Felder, Avrim B. Fishkind, Reginald I. Gaylord, Rachel Lipson Glick, Travis Grace, Clare Gray, Anita Hart, Ross A. Heller, Amanda E. Horn, David S. Howes, David C. Hsu, Andy Jagoda, Margaret Judd, John Kahler, Daryl Knox, Gregory Luke Larkin, Patricia Lee, Jerrold B. Leikin, Eddie Markul, Marc L. Martel, J. D. McCourt, MaryLynn McGuire Clarke, Mark Newman, Anthony T. Ng, Barbara Nightengale, Kimberly Nordstrom, Jagoda Pasic, Jennifer Peltzer-Jones, Marcia A. Perry, Larry Phillips, Paul Porter, Seth Powsner, Michael S. Pulia, Erin Rapp, Divy Ravindranath, Janet S. Richmond, Silvana Riggio, Harvey L. Ruben, Derek J. Robinson, Douglas A. Rund, Omeed Saghafi, Alicia N. Sanders, Jeffrey Sankoff, Lorin M. Scher, Louis Scrattish, Richard D. Shih, Maureen Slade, Susan Stefan, Victor G. Stiebel, Deborah Taber, Vaishal Tolia, Gary M. Vilke, Alvin Wang, Michael A. Ward, Joseph Weber, Michael P. Wilson, James L. Young, Scott L. Zeller
Edited by Leslie S. Zun
Edited in association with Lara G. Chepenik, Mary Nan S. Mallory
Book: Behavioral Emergencies for the Emergency Physician
Remission from post-traumatic stress disorder in the general population
C. Chapman, K. Mills, T. Slade, A. C. McFarlane, R. A. Bryant, M. Creamer, D. Silove, M. Teesson
Journal: Psychological Medicine / Volume 42 / Issue 8 / August 2012
Few studies have focused on post-traumatic stress disorder (PTSD) remission in the population, none have modelled remission beyond age 54 years and none have explored in detail the correlates of remission from PTSD. This study examined trauma experience, symptom severity, co-morbidity, service use and time to PTSD remission in a large population sample.
Data came from respondents (n=8841) of the 2007 Australian National Survey of Mental Health and Wellbeing (NSMHWB). A modified version of the World Health Organization's World Mental Health Composite International Diagnostic Interview (WMH-CIDI) was used to determine the presence and age of onset of DSM-IV PTSD and other mental and substance use disorders, type, age, and number of lifetime traumas, severity of re-experiencing, avoidance and hypervigilance symptoms and presence and timing of service use.
Projected lifetime remission rate was 92% and median time to remission was 14 years. Those who experienced childhood trauma, interpersonal violence, severe symptoms or a secondary anxiety or affective disorder were less likely to remit from PTSD and reported longer median times to remission compared to those with other trauma experiences, less severe symptoms or no co-morbidity.
Although most people in the population with PTSD eventually remit, a significant minority report symptoms decades after onset. Those who experience childhood trauma or interpersonal violence should be a high priority for intervention.
By Louise Arseneault, Sagnik Bhattacharyya, Mary Cannon, Maria Grazia Cascio, David Castle, Suman Chandra, Carolyn Coffey, David Copolov, Brian Dean, Louisa Degenhardt, Marta Di Forti, Mahmoud ElSohly, Ismael Galve-Roperh, Wayne Hall, Lumir Hanus, Cécile Henquet, Leanne Hides, Leslie Iversen, Wynne James, David J. Kavanagh, Dagmar Koethe, Rebecca Kuepper, Don Linszen, Valentina Lorenzetti, Dan Lubman, Michael Lynskey, Philip McGuire, Raphael Mechoulam, Zlatko Mehmedic, Paul Morrison, Kim T. Mueser, Sir Robin M. Murray, George Patton, Roger Pertwee, Nicole Pesa, Mohini Ranganathan, Miriam Schneider, Andrew Sewell, Carol Silberberg, Patrick D. Skosnik, Desmond Slade, Nadia Solowij, Deepak Cyril D'Souza, Suresh Sundram, Thérèse van Amelsvoort, Jim van Os, Hélène Verdoux, Murat Yücel, Stanley Zammit
Edited by David Castle, University of Melbourne, Robin M. Murray, Deepak Cyril D'Souza, Yale University, Connecticut
Book: Marijuana and Madness
Print publication: 27 October 2011, pp vii-x
Density of Deep Bandgap States in Amorphous Silicon From the Temperature Dependence of Thin Film Transistor Current
T. Globus, H. C. Slade, M. Shur, M. Hack
We have measured the current-voltage characteristics of amorphous silicon thin film transistors (a-Si TFTs) over a wide range of temperatures (20 to 160°C) and determined the activation energy of the channel current as a function of gate bias, with emphasis on the leakage-current and subthreshold regimes. We propose a new method for estimating the density of localized states (DOS) from the derivative of the activation energy with respect to gate bias. This differential technique does not require knowledge of the flat-band voltage ($V_{FB}$) and does not involve integration over gate bias. Using this method, we have characterized the density of localized states with energies in the range 0.15–1.2 eV from the bottom of the conduction band and have found a wide peak in the DOS in the range of 0.8–0.95 eV below the conduction band. We have also observed that the DOS peak in the lower half of the bandgap increases in magnitude and shifts towards the conduction band as a result of thermal and bias stress. We also measured an overall increase in the DOS in the upper half of the energy gap and an additional peak, centered at 0.2 eV below the conduction band, which appears due to the applied stress. These results are in qualitative agreement with the defect-pool model [1,2].
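The differential idea can be sketched as follows (a schematic relation under standard simplifying assumptions — gate-insulator capacitance $C_i$ per unit area and an effective channel thickness $t_{ch}$, both introduced here for illustration — not the authors' full analysis): a gate-bias increment $dV_{GS}$ induces a sheet charge $C_i\,dV_{GS}$ that fills localized states over an energy window $|dE_a|$, so the density of states is inversely proportional to the slope of the activation energy,

$$g(E_a)\;\approx\;\frac{C_i}{q\,t_{ch}}\left|\frac{dE_a}{dV_{GS}}\right|^{-1}.$$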
Modeling and Scaling of a-Si:H and Poly-Si Thin Film Transistors
M. S. Shur, H. C. Slade, T. Ytterdal, L. Wang, Z. Xu, M. Hack, K. Aflatooni, Y. Byun, Y. Chen, M. Froggatt, A. Krishnan, P. Mei, H. Meiling, B.-H. Min, A. Nathan, S. Sherman, M. Stewart, S. Theiss
We have developed analytic SPICE models for hydrogenated amorphous silicon (a-Si:H) and polysilicon (poly-Si) thin film transistors (TFTs) which accurately model all regimes of operation, are temperature dependent to 150°C, and scale with device dimensions. These models have been presented in [1, 2]. In this work, we compare the current-voltage characteristics predicted by our models with the measured characteristics from TFTs fabricated at different foundries. We compare the extracted device parameters in order to evaluate the robustness of our models and to determine a suitable default parameter set. We also use the models to examine the effects of device scaling for short channel TFTs. The models can be accessed using the circuit simulator AIM-Spice [3], which is available at http://nina.ecse.rpi.edu/aimspice.
Characterization and Modeling of Frequency Dispersion in Amorphous Silicon Thin Film Transistors
H. C. Slade, M. S. Shur, T. Ytterdal
The large number of localized energy states in amorphous and polysilicon thin film transistors causes non-crystalline effects in both the DC and AC transistor characteristics. The observed frequency dispersion of the device capacitances is linked to the characteristic times of electron trapping in and emission from localized states, and is modeled analytically by introducing effective RC time constants, which are proportional to the electron transit times determined by the field-effect mobility. The small-signal gate-to-source and gate-to-drain capacitances have been derived using Meyer's approach, which takes into account the non-zero drain-source voltage to achieve a partitioning of the channel capacitance. We have verified the model for amorphous silicon thin film transistors for varying gate lengths and frequencies.
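Schematically (an illustration of the stated approach, not the authors' exact expressions), each quasi-static Meyer capacitance $C_0$ can be given a first-order roll-off through an effective RC constant proportional to the channel transit time,

$$C(\omega)\approx\frac{C_0}{1+(\omega\tau)^2},\qquad \tau\propto\frac{L^2}{\mu_{\mathrm{FET}}\,(V_{GS}-V_T)},$$

which reproduces the observed decrease of the measured capacitances with frequency and their scaling with gate length and field-effect mobility.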
Correlated Single Molecule Fluorescence and Scanning Probe Microscopies: Applications to the Study of Soft Materials
Andrea L. Slade, James E. Shaw, Guocheng Yang, Neetu Chhabra, Christopher M. Yip
Published online by Cambridge University Press: 01 February 2011, R2.1/Y2.1
We recently developed an integrated imaging platform that combines single molecule evanescent wave fluorescence imaging (and spectroscopy) with in situ scanning probe microscopy. The advantages, challenges, and potential represented by this coupled tool will be described in the context of the structure-function characteristics of nanostructured biomaterials and thin lipid films. | CommonCrawl |
Evangelos Evangelou
Department of Mathematical Sciences, University of Bath, BA2 7AY, Bath, UK
* Corresponding author: Evangelos Evangelou
The aim of this paper is to bring together recent developments in Bayesian generalised linear mixed models and geostatistics. We focus on approximate methods in both areas. A technique known as the full-scale approximation, proposed by Sang and Huang (2012) to ease the computational burden of large geostatistical data sets, is incorporated into the INLA methodology, which is used for approximate Bayesian inference. We also discuss how INLA can be used to approximate the posterior distribution of transformations of parameters, which is useful in practical applications. Issues regarding the choice of the parameters of the approximation, such as the knots and the taper range, are also addressed. Emphasis is given to applications in disease mapping, illustrated by modelling the loa loa prevalence in Cameroon and malaria in the Gambia.
Keywords: Disease mapping, full-scale approximation, generalised linear spatial model, geostatistics, integrated nested Laplace approximation.
Mathematics Subject Classification: Primary: 62H11, 62F15; Secondary: 60G15.
Citation: Evangelos Evangelou. Approximate Bayesian inference for geostatistical generalised linear models. Foundations of Data Science, 2019, 1 (1) : 39-60. doi: 10.3934/fods.2019002
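To make the full-scale approximation concrete, the following minimal NumPy sketch (an illustration under simple assumptions — exponential covariance, spherical taper, illustrative function names — not the paper's code) builds the approximate covariance matrix as a low-rank predictive-process part based on a set of knots, plus a compactly supported (tapered) correction carrying the residual short-range structure:

import numpy as np

def exp_cov(d, sigma2=1.0, phi=0.2):
    # Exponential covariance as a function of distance
    return sigma2 * np.exp(-d / phi)

def spherical_taper(d, gamma):
    # Compactly supported correlation: identically zero beyond range gamma
    t = np.clip(d / gamma, 0.0, 1.0)
    return (1.0 - 1.5 * t + 0.5 * t**3) * (d < gamma)

def dist(a, b):
    # Pairwise Euclidean distances between rows of a and rows of b
    return np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))

def full_scale_cov(s, knots, gamma):
    C = exp_cov(dist(s, s))                    # exact covariance at the data locations
    C_sk = exp_cov(dist(s, knots))             # cross-covariance, data to knots
    C_kk = exp_cov(dist(knots, knots))         # covariance among the knots
    low_rank = C_sk @ np.linalg.solve(C_kk, C_sk.T)              # predictive-process part
    resid = (C - low_rank) * spherical_taper(dist(s, s), gamma)  # tapered remainder
    return low_rank + resid                    # full-scale approximation

rng = np.random.default_rng(1)
s = rng.uniform(0.0, 1.0, size=(500, 2))       # observation locations
knots = rng.uniform(0.0, 1.0, size=(25, 2))    # knot locations
C_fs = full_scale_cov(s, knots, gamma=0.1)     # sparse-plus-low-rank covariance

The taper range $\gamma$ plays exactly the role examined in Figure 2 below: a larger $\gamma$ reduces the Frobenius-norm error of the approximation at the cost of a denser residual matrix and longer computation.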
S. Banerjee, A. E. Gelfand, A. O. Finley and H. Sang, Gaussian predictive process models for large spatial data sets, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70 (2008), 825-848. doi: 10.1111/j.1467-9868.2008.00663.x. Google Scholar
O. E. Barndorff-Nielsen and D. R. Cox, Asymptotic Techniques for Use in Statistics, Chapman & Hall Ltd, 1989. doi: 10.1007/978-1-4899-3424-6. Google Scholar
N. Breslow and D. Clayton, Approximate inference in generalized linear mixed models, Journal of the American Statistical Association, 88 (1993), 9-25. Google Scholar
M. Cameletti, F. Lindgren, D. Simpson and H. Rue, Spatio-temporal modeling of particulate matter concentration through the SPDE approach, AStA Advances in Statistical Analysis, 97 (2013), 109-131. doi: 10.1007/s10182-012-0196-3. Google Scholar
O. F. Christensen, J. Møller and R. Waagepetersen, Analysis of Spatial Data Using Generalized Linear Mixed Models and Langevin-type Markov chain Monte Carlo, Technical report, Department of Mathematical Sciences, Aalborg University, 2000.Google Scholar
O. F. Christensen and R. Waagepetersen, Bayesian prediction of spatial count data using generalized linear mixed models, Biometrics, 58 (2002), 280-286. doi: 10.1111/j.0006-341X.2002.00280.x. Google Scholar
O. F. Christensen, Monte Carlo maximum likelihood in model-based geostatistics, Journal of Computational and Graphical Statistics, 13 (2004), 702-718. doi: 10.1198/106186004X2525. Google Scholar
O. F. Christensen and P. J. Ribeiro Jr, geoRglm: A package for generalised linear spatial models, R News, 2 (2002), 26-28. Google Scholar
P. Diggle, R. Moyeed, B. Rowlingson and M. Thomson, Childhood malaria in the Gambia: A case-study in model-based geostatistics, Journal of the Royal Statistical Society: Series C (Applied Statistics), 51 (2002), 493-506. doi: 10.1111/1467-9876.00283. Google Scholar
P. J. Diggle, J. A. Tawn and R. A. Moyeed, Model-based geostatistics, Journal of the Royal Statistical Society: Series C (Applied Statistics), 47 (1998), 299-350. doi: 10.1111/1467-9876.00113. Google Scholar
P. J. Diggle, M. C. Thomson, O. F. Christensen, B. Rowlingson, V. Obsomer, J. Gardon, S. Wanji, I. Takougang, P. Enyong, J. Kamgno, J. H. Remme, M. Boussinesq and D. H. Molyneux, Spatial modelling and the prediction of loa loa risk: decision making under uncertainty, Annals of Tropical Medicine and Parasitology, 101 (2007), 499-509. Google Scholar
J. Eidsvik, S. Martino and H. Rue, Approximate Bayesian inference in spatial generalized linear mixed models, Scandinavian Journal of Statistics, 36 (2009), 1-22. doi: 10.1111/j.1467-9469.2008.00621.x. Google Scholar
J. Eidsvik, A. O. Finley, S. Banerjee and H. Rue, Approximate Bayesian inference for large spatial datasets using predictive process models, Computational Statistics & Data Analysis, 56 (2012), 1362-1380. doi: 10.1016/j.csda.2011.10.022. Google Scholar
E. Evangelou, Z. Zhu and R. L. Smith, Estimation and prediction for spatial generalized linear mixed models using high order laplace approximation, Journal of Statistical Planning and Inference, 141 (2011), 3564-3577. doi: 10.1016/j.jspi.2011.05.008. Google Scholar
E. Evangelou and V. Maroulas, Sequential empirical Bayes method for filtering dynamic spatiotemporal processes, Spatial Statistics, 21 (2017), 114-129. doi: 10.1016/j.spasta.2017.06.006. Google Scholar
E. Evangelou and Z. Zhu, Optimal predictive design augmentation for spatial generalised linear mixed models, Journal of Statistical Planning and Inference, 142 (2012), 3242-3253. doi: 10.1016/j.jspi.2012.05.008. Google Scholar
A. O. Finley, H. Sang, S. Banerjee and A. E. Gelfand, Improving the performance of predictive process modeling for large datasets, Computational Statistics & Data Analysis, 53 (2009), 2873-2884. doi: 10.1016/j.csda.2008.09.008. Google Scholar
R. Furrer, M. G. Genton and D. Nychka, Covariance tapering for interpolation of large spatial datasets, Journal of Computational and Graphical Statistics, 15 (2006), 502-523. doi: 10.1198/106186006X132178. Google Scholar
T. Gneiting, Compactly supported correlation functions, Journal of Multivariate Analysis, 83 (2002), 493-508. doi: 10.1006/jmva.2001.2056. Google Scholar
F. Hosseini, J. Eidsvik and M. Mohammadzadeh, Approximate Bayesian inference in spatial GLMM with skew normal latent variables, Computational Statistics & Data Analysis, 55 (2011), 1791-1806. doi: 10.1016/j.csda.2010.11.011. Google Scholar
J. B. Illian, S. H. Sørbye and H. Rue, A toolbox for fitting complex spatial point process models using integrated nested Laplace approximation (INLA), The Annals of Applied Statistics, 6 (2012), 1499-1530. doi: 10.1214/11-AOAS530. Google Scholar
C. G. Kaufman, M. J. Schervish and D. W. Nychka, Covariance tapering for likelihood-based estimation in large spatial data sets, Journal of the American Statistical Association, 103 (2008), 1545-1555. doi: 10.1198/016214508000000959. Google Scholar
R. Langrock, Some applications of nonlinear and non-Gaussian state–space modelling by means of hidden Markov models, Journal of Applied Statistics, 38 (2011), 2955-2970. doi: 10.1080/02664763.2011.573543. Google Scholar
F. Lindgren, H. Rue and J. Lindström, An explicit link between Gaussian fields and Gaussian Markov random fields: The stochastic partial differential equation approach, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73 (2011), 423-498. doi: 10.1111/j.1467-9868.2011.00777.x. Google Scholar
S. Martino, R. Akerkar and H. Rue, Approximate Bayesian inference for survival models, Scandinavian Journal of Statistics, 38 (2011), 514-528. doi: 10.1111/j.1467-9469.2010.00715.x. Google Scholar
P. McCullagh and J. A. Nelder, Generalized Linear Models, Chapman & Hall/CRC, 1999. doi: 10.1007/978-1-4899-3242-6. Google Scholar
W. Müller, Collecting Spatial Data: Optimum Design of Experiments for Random Fields, Springer Verlag, 2007. Google Scholar
M. Paul, A. Riebler, L. M. Bachmann, H. Rue and L. Held, Bayesian bivariate meta-analysis of diagnostic test studies using integrated nested Laplace approximations, Statistics in Medicine, 29 (2010), 1325-1339. doi: 10.1002/sim.3858. Google Scholar
R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria, 2018, URL https://www.R-project.org/.Google Scholar
P. J. Ribeiro Jr and P. J. Diggle, geoR: A package for geostatistical analysis, R News, 1 (2001), 15-18. Google Scholar
H. Rue and L. Held, Gaussian Markov Random Fields: Theory and Applications, Monographs on statistics and applied probability, Chapman & Hall/CRC, 2005. doi: 10.1201/9780203492024. Google Scholar
H. Rue, S. Martino and N. Chopin, Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71 (2009), 319-392. doi: 10.1111/j.1467-9868.2008.00700.x. Google Scholar
H. Sang and J. Z. Huang, A full scale approximation of covariance functions for large spatial data sets, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74 (2012), 111-132. doi: 10.1111/j.1467-9868.2011.01007.x. Google Scholar
H. Sang, M. Jun and J. Huang, Covariance approximation for large multivariate spatial data sets with an application to multiple climate model errors, The Annals of Applied Statistics, 5 (2011), 2519-2548. doi: 10.1214/11-AOAS478. Google Scholar
B. Schrödle and L. Held, A primer on disease mapping and ecological regression using INLA, Computational Statistics, 26 (2011), 241-258. doi: 10.1007/s00180-010-0208-2. Google Scholar
Z. Shun and P. McCullagh, Laplace approximation of high dimensional integrals, Journal of the Royal Statistical Society, Series B, Methodological, 57 (1995), 749-760. doi: 10.1111/j.2517-6161.1995.tb02060.x. Google Scholar
D. Simpson, F. Lindgren and H. Rue, In order to make spatial statistics computationally feasible, we need to forget about the covariance function, Environmetrics, 23 (2012), 65-74. doi: 10.1002/env.1137. Google Scholar
M. L. Stein, Interpolation of Spatial Data: Some Theory for Kriging, Springer-Verlag Inc, 1999. doi: 10.1007/978-1-4612-1494-6. Google Scholar
B. M. Taylor and P. J. Diggle, INLA or MCMC? A tutorial and comparative evaluation for spatial prediction in log-Gaussian Cox processes, Journal of Statistical Computation and Simulation, 84 (2014), 2266-2284. doi: 10.1080/00949655.2013.788653. Google Scholar
H. Zhang, On estimation and prediction for spatial generalized linear mixed models, Biometrics, 58 (2002), 129-136. doi: 10.1111/j.0006-341X.2002.00129.x. Google Scholar
Figure 1. Locations for the simulated example, indicated by $\cdot$, and grid for the full scale approximation, indicated by $\times$. Prediction is considered at a central site ($\circ$) and a far site ($\square$)
Figure 2. Scaled Frobenius norm ($\circ$) and computational time ($+$) against taper range $\gamma$
Figure 3. Posterior densities for (a) logarithm of sill, (b) range, and (c) intercept. The histogram shows the MCMC sample. The approximation using INLA and exact covariance matrix is shown by a solid line, the INLA with the full scale approximation is shown by a dashed line, and the INLA with the predictive process approximation is shown by a dotted line. The true parameter value is indicated by a triangle on the horizontal axis
Figure 4. Predictive distribution of the random field at a central site (left) and a far site (right). The histogram shows the MCMC sample. The approximation using INLA and exact covariance matrix is shown by a solid line, the INLA with the full scale approximation is shown by a dashed line, and the INLA with the predictive process approximation is shown by a dotted line
Figure 5. Posterior plots for the variance parameters. (a) Joint posterior of $\log(\sigma^2)$ and $\rho$ using exact INLA, (b) Marginal posterior of $\log(\sigma^2)$, (c) Marginal posterior of $\rho$. The histogram is for the MCMC sample, the exact INLA is shown by a solid line and the full-scale INLA by a dashed line
Figure 6. Posterior for the regressor coefficients. The histogram is for the MCMC sample, the exact INLA is shown by a solid line and the full-scale INLA by a dashed line
Figure 7. Predicted prevalence of the loa loa parasite (top), and prediction standard deviation (bottom)
Figure 8. Sampled locations for the Gambia data from [30]
Figure 9. Posterior densities for the parameters (a) $\tau^2$, (b) $\sigma^2$, and (c) $\rho$ of the Gambia malaria data
Figure 10. Prediction of spatial random field for the Gambia malaria data (top) and prediction standard deviation (bottom)
Table 1. Parameter estimates for the loa loa prevalence in Cameroon using exact INLA, approximate INLA, and MCMC
Exact INLA: Estimate (95% interval)
Intercept $(\beta_0)$: $-14.17$ ($-18.58$, $-9.76$)
Elevation $0 - 0.65$ Km $(\beta_1)$: $2.28$ ($1.07$, $3.49$)
Elevation $0.65 - 1$ Km $(\beta_2)$: $1.62$ ($0.90$, $2.34$)
Elevation $1 - 1.3$ Km $(\beta_3)$: $0.81$ ($0.17$, $1.45$)
Max(NDVI) $(\beta_4)$: $14.09$ ($8.00$, $20.17$)
Sd(NDVI) $(\beta_5)$: $0.71$ ($-9.68$, $11.10$)
Sill $(\sigma^2)$: $0.72$ ($0.50$, $1.02$)
Range $(\rho)$: $0.55$ ($0.25$, $1.08$)
Full-scale INLA: Estimate (95% interval)
Intercept $(\beta_0)$: $-15.03$ ($-19.28$, $-10.77$)
MCMC: Estimate (95% interval)
Table 2. Parameter estimates of the Gambia malaria data
Parameter: Estimate (95% interval)
Intercept ($\beta_0$): $-0.07309$ ($-2.95100$, $2.80483$)
Age ($\beta_1$): $0.00066$ ($0.00042$, $0.00090$)
Untreated bed net ($\beta_2$): $-0.36216$ ($-0.67639$, $-0.04793$)
Treated bed net ($\beta_3$): $-0.68297$ ($-1.07497$, $-0.29097$)
Greenness ($\beta_4$): $-0.01334$ ($-0.07507$, $0.04839$)
PHC ($\beta_5$): $-0.32790$ ($-0.77921$, $0.12340$)
Area 2 ($\beta_6$): $-0.69385$ ($-2.26728$, $0.87958$)
Area 4 ($\beta_8$): $0.65537$ ($-1.12152$, $2.43226$)
Nugget ($\tau^2$): $0.13209$ ($0.00310$, $0.26136$)
Sill ($\sigma^2$): $0.98459$ ($0.34501$, $1.82461$)
Range ($\rho$): $9.82025$ ($0.54713$, $18.63800$)
Threshold dynamics and sensitivity analysis of a stochastic semi-Markov switched SIRS epidemic model with nonlinear incidence and vaccination
Xin Zhao, Tao Feng, Liang Wang and Zhipeng Qiu
Department of Mathematics, Nanjing University of Science and Technology, Nanjing 210094, China
* Corresponding author: Liang Wang, Zhipeng Qiu
Received June 2020 Revised October 2020 Published December 2020
Fund Project: Z. Qiu is supported by the National Natural Science Foundation of China (NSFC) grants No. 12071217 and No. 11671206; L. Wang is supported by the National Science Foundation for Young Scientists of China grant No. 12001271 and Natural Science Foundation of Jiangsu Province grant No. BK20200484; X. Zhao is supported by the Scholarship Foundation of China Scholarship Council grant No. 201906840072 and the Postgraduate Research & Practice Innovation Program of Jiangsu Province grant No. KYCX20_0242; T. Feng is supported by the Scholarship Foundation of China Scholarship Council grant No. 201806840120, the Outstanding Chinese and Foreign Youth Exchange Program of China Association of Science and Technology, and the Fundamental Research Funds for the Central Universities grant No. 30918011339
In this paper, a stochastic SIRS epidemic model with nonlinear incidence and vaccination is formulated to investigate the transmission dynamics of infectious diseases. The model incorporates not only white noise but also external environmental noise, which is described by a semi-Markov process. We first derive an explicit expression for the basic reproduction number $ R_0^s $ of the model. Then the global dynamics of the system is studied in terms of the basic reproduction number and the intensity of the white noise, and sufficient conditions for both the extinction and the persistence of the disease are provided. Furthermore, we carry out a sensitivity analysis of $ R_0^s $ for each semi-Markov switching regime under different sojourn-time distribution functions. The results show that the dynamics of the entire system is not related to its switching law, but is positively correlated with the mean sojourn time in each subsystem. The basic reproduction number we obtain applies to all piecewise-stochastic semi-Markov processes, and the results of the sensitivity analysis can be regarded as preliminary work for optimal control.
Keywords: Stochastic SIRS epidemic model, Nonlinear incidence, Semi-Markov switching, Sensitivity analysis.
Mathematics Subject Classification: Primary: 37N25, 37A50, 60J20.
Citation: Xin Zhao, Tao Feng, Liang Wang, Zhipeng Qiu. Threshold dynamics and sensitivity analysis of a stochastic semi-Markov switched SIRS epidemic model with nonlinear incidence and vaccination. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2021010
F. B. Agusto and M. A. Khan, Optimal control strategies for dengue transmission in Pakistan, Mathematical Biosciences, 305 (2018), 102-121. doi: 10.1016/j.mbs.2018.09.007. Google Scholar
R. M. Anderson and R. M. May, Population biology of infectious diseases: Part I, Nature, 280 (1979), 361-367. doi: 10.1038/280361a0. Google Scholar
J. F. Andrews, A mathematical model for the continuous culture of microorganisms utilizing inhibitory substrates, Biotechnology and Bioengineering, 10 (1968), 707-723. doi: 10.1002/bit.260100602. Google Scholar
S. M. Blower and H. Dowlatabadi, Sensitivity and uncertainty analysis of complex models of disease transmission: An HIV model, as an example, International Statistical Review/Revue Internationale de Statistique, 62 (1994), 229-243. doi: 10.2307/1403510. Google Scholar
X. Cao, Semi-Markov decision problems and performance sensitivity analysis, IEEE Transactions on Automatic Control, 48 (2003), 758-769. doi: 10.1109/TAC.2003.811252. Google Scholar
V. Capasso and G. Serio, A generalization of the Kermack-McKendrick deterministic epidemic model, Mathematical Biosciences, 42 (1978), 43-61. doi: 10.1016/0025-5564(78)90006-8. Google Scholar
F. H. Chen, A susceptible-infected epidemic model with voluntary vaccinations, Journal of Mathematical Biology, 53 (2006), 253-272. doi: 10.1007/s00285-006-0006-1. Google Scholar
P. V. Driessche and J. Watmough, Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission, Mathematical Biosciences, 180 (2002), 29-48. doi: 10.1016/S0025-5564(02)00108-6. Google Scholar
N. H. Du, N. T. Dieu and N. N. Nhu, Conditions for permanence and ergodicity of certain SIR epidemic models, Acta Applicandae Mathematicae, 160 (2019), 81-99. doi: 10.1007/s10440-018-0196-8. Google Scholar
T. Feng and Z. Qiu, Global analysis of a stochastic TB model with vaccination and treatment, Discrete & Continuous Dynamical Systems-B, 24 (2019), 2923-2939. doi: 10.3934/dcdsb.2018292. Google Scholar
T. Feng and Z. Qiu, Analysis of an epidemiological model driven by multiple noises: Ergodicity and convergence rate, Journal of the Franklin Institute, 357 (2020), 2203-2216. doi: 10.1016/j.jfranklin.2019.09.004. Google Scholar
I. I. Gihman and A. V. Skorohod, The Theory of Stochastic Processes II, Springer-Verlag, New York-Heidelberg, 1975. Google Scholar
K. Hattaf, N. Yousfi and A. Tridane, Mathematical analysis of a virus dynamics model with general incidence rate and cure rate, Nonlinear Analysis: Real World Applications, 13 (2012), 1866-1872. doi: 10.1016/j.nonrwa.2011.12.015. Google Scholar
H. W. Hethcote, Qualitative analyses of communicable disease models, Mathematical Biosciences, 28 (1976), 335-356. doi: 10.1016/0025-5564(76)90132-2. Google Scholar
H. W. Hethcote and V. Driessche, Some epidemiological models with nonlinear incidence, Journal of Mathematical Biology, 29 (1991), 271-287. doi: 10.1007/BF00160539. Google Scholar
T. K. Kar and A. Batabyal, Stability analysis and optimal control of an SIR epidemic model with vaccination, Biosystems, 104 (2011), 127-135. doi: 10.1016/j.biosystems.2011.02.001. Google Scholar
W. O. Kermack and A. G. McKendrick, A contribution to the mathematical theory of epidemics, Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character, 115 (1927), 700-721. Google Scholar
A. Lahrouz, L. Omari and D. Kiouach, Complete global stability for an SIRS epidemic model with generalized non-linear incidence and vaccination, Applied Mathematics and Computation, 218 (2012), 6519-6525. doi: 10.1016/j.amc.2011.12.024. Google Scholar
A. Lahrouz and L. Omari, Extinction and stationary distribution of a stochastic SIRS epidemic model with non-linear incidence, Statistics & Probability Letters, 83 (2013), 960-968. doi: 10.1016/j.spl.2012.12.021. Google Scholar
J. Li and Z. Ma, Global analysis of SIS epidemic models with variable total population size, Mathematical and Computer Modelling, 39 (2004), 1231-1242. doi: 10.1016/j.mcm.2004.06.004. Google Scholar
D. Li, M. Liu and S. Liu, The evolutionary dynamics of stochastic epidemic model with nonlinear incidence rate, Bulletin of Mathematical Biology, 77 (2015), 1705-1743. doi: 10.1007/s11538-015-0101-9. Google Scholar
D. Li, S. Liu and J. Cui, Threshold dynamics and ergodicity of an SIRS epidemic model with Markovian switching, Journal of Differential Equations, 263 (2017), 8873-8915. doi: 10.1016/j.jde.2017.08.066. Google Scholar
D. Li, S. Liu and J. Cui, Threshold dynamics and ergodicity of an SIRS epidemic model with semi-Markov switching, Journal of Differential Equations, 266 (2019), 3973-4017. doi: 10.1016/j.jde.2018.09.026. Google Scholar
M. Li and J. S. Muldowney, Global stability for the SEIR model in epidemiology, Mathematical Biosciences, 125 (1995), 155-164. doi: 10.1016/0025-5564(95)92756-5. Google Scholar
N. Limnios and G. Oprisan, Semi-Markov Processes and Reliability, Birkhäuser Boston, Inc., Boston, MA, 2001. doi: 10.1007/978-1-4612-0161-8. Google Scholar
H. Liu, H. Xu and J. Yu, Stability on coupling SIR epidemic model with vaccination, Journal of Applied Mathematics, 2005 (2005), 301-319. doi: 10.1155/JAM.2005.301. Google Scholar
X. Mao, Stability of stochastic differential equations with Markovian switching, Stochastic Processes and their Applications, 79 (1999), 45-67. doi: 10.1016/S0304-4149(98)00070-2. Google Scholar
X. Mao, Stochastic Differential Equations and Applications, Elsevier, 2007. doi: 10.1533/9780857099402. Google Scholar
X. Mao, G. Marion and E. Renshaw, Environmental noise suppresses explosion in population dynamics, Stochastic Process and their Applications, 97 (2002), 95-110. doi: 10.1016/S0304-4149(01)00126-0. Google Scholar
H. Margolis, M. Alter and S. Hadler, Hepatitis B: Evolving epidemiology and implications for control, Seminars in Liver Disease, 11 (1991), 84-92. doi: 10.1055/s-2008-1040427. Google Scholar
S. P. Meyn and R. L. Tweedie, Stability of Markovian processes II: Continuous-time processes and sampled chains, Advances in Applied Probability, 25 (1993), 487-517. doi: 10.2307/1427521. Google Scholar
D. Mollison, Spatial contact models for ecological and epidemic spread, Journal of the Royal Statistical Society, 39 (1977), 283-326. doi: 10.1111/j.2517-6161.1977.tb01627.x. Google Scholar
X. Mu and Q. Zhang, Optimal strategy of vaccination and treatment in an SIRS model with Markovian switching, Mathematical Methods in the Applied Sciences, 42 (2019), 767-789. doi: 10.1002/mma.5378. Google Scholar
C. Serra, M. D. Martinez and X. Lana, European dry spell length distributions, years 1951-2000, Theoretical and Applied Climatology, 114 (2013), 531-551. doi: 10.1007/s00704-013-0857-5. Google Scholar
M. J. Small and D. J. Morgan, The Relationship between a continuous-time renewal model and a discrete Markov chain model of precipitation occurrence, Water Resources Research, 22 (1986), 1422-1430. doi: 10.1029/WR022i010p01422. Google Scholar
C. Sun, Y. Hsieh and P. Georgescu, A model for HIV transmission with two interacting high-risk groups, Nonlinear Analysis: Real World Applications, 40 (2018), 170-184. doi: 10.1016/j.nonrwa.2017.08.012. Google Scholar
A. Swishchuk and J. Wu, Evolution of Biological Systems in Random Media: Limit Theorems and Stability, Springer Science & Business Media, 2003. doi: 10.1007/978-94-017-1506-5. Google Scholar
E. Vergu, H. Busson and P. Ezanno, Impact of the infection period distribution on the epidemic spread in a metapopulation model, PloS One, 5 (2010), e9371. Google Scholar
K. Wang, Random Mathematical Biology Model, Science Press, Beijing, 2010. Google Scholar
Y. Wu and X. Zou, Asymptotic profiles of steady states for a diffusive sis epidemic model with mass action infection mechanism, Journal of Differential Equations, 261 (2016), 4424-4447. doi: 10.1016/j.jde.2016.06.028. Google Scholar
D. Xiao and S. Ruan, Global analysis of an epidemic model with nonmonotone incidence rate, Mathematical Biosciences, 208 (2007), 419-429. doi: 10.1016/j.mbs.2006.09.025. Google Scholar
X. Zhang, D. Jiang and A. Alsaedi, Stationary distribution of stochastic SIS epidemic model with vaccination under regime switching, Applied Mathematics Letters, 59 (2016), 87-93. doi: 10.1016/j.aml.2016.03.010. Google Scholar
Y. Zhao and D. Jiang, The threshold of a stochastic SIS epidemic model with vaccination, Applied Mathematics and Computation, 243 (2014), 718-727. doi: 10.1016/j.amc.2014.05.124. Google Scholar
Y. Zhao and D. Jiang, The threshold of a stochastic sirs epidemic model with saturated incidence, Applied Mathematics Letters, 34 (2014), 90-93. doi: 10.1016/j.aml.2013.11.002. Google Scholar
Y. Zhao, D. Jiang and X. Mao, The threshold of a stochastic SIRS epidemic model in a population with varying size, Discrete and Continuous Dynamical Systems-Series B, 20 (2015), 1277-1295. doi: 10.3934/dcdsb.2015.20.1277. Google Scholar
B. Zheng, X. Liu, M. Tang and J. Yu, Use of age-stage structural models to seek optimal Wolbachia-infected male mosquito releases for mosquito-borne disease control, Journal of Theoretical Biology, 472 (2019), 95-109. doi: 10.1016/j.jtbi.2019.04.010. Google Scholar
L. Zu, D. Jiang and D. O'Regan, Conditions for persistence and ergodicity of a stochastic Lotka-Volterra predator-prey model with regime switching, Communications in Nonlinear Science and Numerical Simulation, 29 (2015), 1-11. doi: 10.1016/j.cnsns.2015.04.008. Google Scholar
Figure 1. Simulations of $ (S(t), I(t), R(t)) $ with initial values $ (5, 1, 0) $; the distribution function of each state of the semi-Markov chain is the hyper-exponential distribution. (a) Sample of $ (S(t), I(t), R(t), r(t)) $ of system (2) with initial values $ (5, 1, 0) $, $ r(0) = 1 $ and $ \theta(0) = 0 $; the corresponding $ m_1 = 0.3750 $, $ m_2 = 0.7833 $ and $ R_0^s = 1.3181>1 $. (b) Sample of $ (S_1(t), I_1(t), R_1(t)) $ of the subsystem in system (2) under state $\textbf{1}$ with $ \beta_1 = 0.0056 $; the disease $ I_1(t) $ is persistent. (c) Sample of $ (S_2(t), I_2(t), R_2(t)) $ of the subsystem in system (2) under state $\textbf{2}$ with $ \beta_2 = 0.0013 $; the disease $ I_2(t) $ is extinct
Figure 2. Simulations of $ (S(t), I(t), R(t)) $ using the parameter values in Table 2 with initial values $ (5, 1, 0) $; the distribution function of each state of the semi-Markov chain is the Gamma distribution. (a) Sample of $ (S(t), I(t), R(t), r(t)) $ of system (2) with initial values $ (5, 1, 0) $, $ r(0) = 1 $ and $ \theta(0) = 0 $; the corresponding $ m_1 = 0.3333 $, $ m_2 = 3 $ and $ R_0^s = 0.8471<1 $. (b) Sample of $ (S_1(t), I_1(t), R_1(t)) $ of the subsystem in system (2) under state $\textbf{1}$ with $ \beta_1 = 0.0056 $. (c) Sample of $ (S_2(t), I_2(t), R_2(t)) $ of the subsystem in system (2) under state $\textbf{2}$ with $ \beta_2 = 0.0013 $
Figure 3. PRCC values for system (47), using the basic reproduction number $ R_0^s $ in (49) as response functions. By using the same parameter values in Table 2, we get almost the same picture: (a) Analysis of $ R_0^s $ in example under Hyper-exponential distribution with $ m_1 = 0.3750 $, $ m_2 = 0.7833 $; (b) Analysis of $ R_0^s $ in counterexample under Gamma distribution with $ m_1 = 0.3333 $, $ m_2 = 3 $
Figure 4. (a) The value of $ R_0^s $ when we set $ m_1 \in [0, 10] $ and $ m_2 \in [0, 10] $; (b) The relation between $ R_0^s $ and $ m_1 $ when we fixed $ m_2 = 0.1, 0.5, 1 $; (c) The relation between $ R_0^s $ and $ m_2 $ when we fixed $ m_1 = 0.1, 0.5, 1 $
Figure 5. In this example we set $ F_i(t) $ to be the hyper-exponential distribution. In order to make the picture clear, we adopted $ \Delta t = \frac{T}{10} $. One can use a smaller $ \Delta t $ to ensure accuracy
Figure 6. We use a new interval $ \Delta \tilde{t} = \Delta t/2 $, and assume $ r(t) = L(\tilde{t}_i) $ if $ t \in [\tilde{t}_i, \tilde{t}_{i+1}] $. Obviously, the smaller the $ \Delta \tilde{t} $ we choose, the more accurately this discrete semi-Markov chain is simulated
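The discretized scheme described in these captions can be illustrated directly: hold the chain at its current state until the accumulated sojourn time is exceeded, then jump according to the transition matrix. The following R sketch uses illustrative sojourn distributions and an alternating transition matrix, not the paper's exact settings.

```r
# Minimal sketch: simulate a two-state semi-Markov switching process r(t)
# on a fine time grid. The sojourn distributions and transition matrix M
# below are illustrative assumptions, not the paper's settings.
set.seed(2)
Tend <- 10                          # total simulation time
dt   <- 0.01                        # grid resolution
M    <- matrix(c(0, 1, 1, 0), 2, 2) # two states that alternate
sojourn <- list(function() rgamma(1, shape = 2, rate = 2),  # state 1
                function() rgamma(1, shape = 3, rate = 1))  # state 2

t_grid <- seq(0, Tend, by = dt)
r      <- integer(length(t_grid))
state  <- 1
t_next <- sojourn[[state]]()        # time of the first jump
for (i in seq_along(t_grid)) {
  if (t_grid[i] >= t_next) {        # sojourn over: jump to a new state
    state  <- sample(1:2, 1, prob = M[state, ])
    t_next <- t_grid[i] + sojourn[[state]]()
  }
  r[i] <- state
}
table(r) / length(r)                # empirical occupation fractions
```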
Table 1. The biological significance of each parameter for stochastic system (2)
Notation Biological meaning
$ S(t) $ Number of susceptibles at time $ t $
$ I(t) $ Number of infective individuals at time $ t $
$ R(t) $ Number of recovered individuals at time $ t $
$ B(t) $ Standard Brownian motion in one dimension
$ {\sigma} $ The intensity of $ B(t) $
$ \Lambda $ The recruitment rate of the population
$ p $ The proportion of population that is vaccinated
$ \mu $ The death rates of susceptibles, infectives, and recovered individuals
$ \beta(\cdot) $ The infection coefficient
$ \alpha $ The death rate of infected individuals from disease-related causes
$ \delta $ The recovery rate of the infective individuals
$ \lambda $ The rate at which recovered individuals lose immunity
Table 2. Parameters for each subsystem
Parameters Value Source
$ p $ 0.833 [26]
$ \Lambda $ 0.33 $ days^{-1} $ [11,25]
$ \mu $ 0.006 $ days^{-1} $ [11,25]
$ \alpha $ 0.06 $ days^{-1} $ [11,25]
$ \beta_1 $ 0.0056 $ days^{-1} $ [11,25]
$ \lambda $ 0.021 $ days^{-1} $ [11,25]
$ \delta $ 0.01 $ days^{-1} $ [11,25]
$ a_1 $ 0.001 [11,25]
$ \sigma $ 0.04 Estimated
Table 3. Symbols used in simulation
Symbols Definition Value
$ T $ Total simulation time 10
$ \tilde{t}_i (i=0, 1, \cdots) $ Every checkpoint $ \{0, 1, \cdots, 10 \} $
$ t_s $ Accumulated time per cycle - -
$ \Delta t $ Time interval between each jump 1
$ s_i(i=1, 2) $ The states of the semi-Markov switching {1, 2}
$ L(t) $ The state of the semi-Markov chain at time $ t $ 1 or 2
$ r(t) $ The state of the discrete chain at time $ t $ 1 or 2
$ M $ The transition probability matrix - -
Linear models enable powerful differential activity analysis in massively parallel reporter assays
Leslie Myint,
Dimitrios G. Avramopoulos,
Loyal A. Goff &
Kasper D. Hansen (ORCID: orcid.org/0000-0003-0086-0687)
BMC Genomics volume 20, Article number: 209 (2019)
Massively parallel reporter assays (MPRAs) have emerged as a popular means for understanding noncoding variation in a variety of conditions. While a large number of experiments have been described in the literature, analysis typically uses ad-hoc methods. There has been little attention to comparing performance of methods across datasets.
We present the mpralm method, which we show is calibrated and powerful, by analyzing its performance on multiple MPRA datasets. We show that it outperforms existing statistical methods for analysis of this data type in the first comprehensive evaluation of statistical methods on several datasets. We investigate theoretical and real-data properties of barcode summarization methods and show an unappreciated impact of the summarization method for some datasets. Finally, we use our model to conduct a power analysis for this assay and show substantial improvements in power by performing up to 6 replicates per condition, whereas sequencing depth has a smaller impact; we recommend always using at least 4 replicates. An R package is available from the Bioconductor project.
Together, these results inform recommendations for differential analysis, general group comparisons, and power analysis and will help improve design and analysis of MPRA experiments.
Noncoding regions in the human genome represent the overwhelming majority of genomic sequence, but their function remains largely uncharacterized. Better understanding of the functional consequences of these regions has the potential to greatly enrich our understanding of biology. It is well understood that some noncoding regions are regulatory in nature. It has been straightforward to experimentally test the regulatory ability of a given DNA sequence with standard reporter assays, but these assays are low throughput and do not scale to the testing of large numbers of sequences. Massively parallel reporter assays (MPRAs) have emerged as a high-throughput means of measuring the ability of sequences to drive expression [1, 2]. These assays build on the traditional reporter assay framework by coupling each putative regulatory sequence with several short DNA tags, or barcodes, that are incorporated into the RNA output. These tags are counted in the RNA reads and the input DNA, and the resulting counts are used to quantify the activity of a given putative regulatory sequence, typically involving the ratio of RNA counts to DNA counts (Fig. 1). The applications of MPRA have been diverse, and there have been correspondingly diverse and ad hoc methods used in statistical analysis.
Structure of MPRA data. Thousands of putative regulatory elements can be assayed at a time in an MPRA experiment. Each element is linked to multiple barcodes. A plasmid library containing these barcoded elements is transfected into several cell populations (samples). Cellular DNA and RNA can be isolated and sequenced. The barcodes associated with each putative regulatory element can be counted to obtain relative abundances of each element in DNA and RNA. The process of aggregation sums counts over barcodes for each element in each sample. Aggregation is one method for summarizing barcode-level data into element-level data
There are three broad categories of MPRA applications: (1) characterization studies, (2) saturation mutagenesis, and (3) differential analysis. (1) Characterization studies examine thousands of different putative regulatory elements that have a wide variety of sequence features and try to correlate these sequence features with measured activity levels [3–10]. Typical statistical analyses use regression to study the impact of multiple features simultaneously. They also compare continuous activity measures or categorized (high/low) activity measures across groups using paired and unpaired t-, rank, Fisher's exact, and chi-squared tests. (2) Saturation mutagenesis studies look at only a few established enhancers and examine the impact on activity of every possible mutation at each base as well as interactions between these mutations [11–17]. Analyses have uniformly used linear regression where each position in the enhancer sequence is a predictor. (3) Differential analysis studies look at thousands of different elements, each of which has two or more versions. Versions can correspond to allelic versions of a sequence [18–20] or different environmental contexts [21], such as different cell or tissue types [22]. These studies have compared different sequence versions using paired t-tests, rank sum tests, and Fisher's exact test (FET) (by pooling counts over biological replicates).
Despite the increasing popularity of this assay, guiding principles for statistical analysis have not been put forth. Researchers use a large variety of ad hoc methods for analysis. For example, there has been considerable diversity in the earlier stages of summarization of information over barcodes. Barcodes are viewed as technical replicates of the regulatory element sequences, and groups have considered numerous methods for summarizing barcode-level information into one activity measure per enhancer. On top of this, a large variety of statistical tests are used to make comparisons.
Recently, a method called QuASAR-MPRA was developed to identify regulatory sequences that have allele-specific activity [23]. This method uses a beta-binomial model to model RNA counts as a function of DNA counts, and it provides a means for identifying sequences that show a significant difference in regulatory activity between two alleles. While it provides a framework for two group differential analysis within MPRAs, QuASAR-MPRA is limited in this regard because experiments might have several conditions and involve arbitrary comparisons.
To our knowledge, no method has been developed that provides tools for general purpose differential analysis of activity measures from MPRA. General purpose methods are ones that can flexibly analyze data from a range of study designs. We present mpralm, a method for testing for differential activity in MPRA experiments. Our method uses linear models as opposed to count-based models to identify differential activity. This approach provides desired analytic flexibility for more complicated experimental designs that necessitate more complex models. It also builds on an established method that has a solid theoretical and computational framework [24]. We show that mpralm can be applied to a wide variety of MPRA datasets and has good statistical properties related to type I error control and power. Furthermore, we examine proper techniques for combining information over barcodes and provide guidelines for choosing sample sizes and sequencing depth when considering power. Our method is open source and freely available in the mpra package for R on the Bioconductor repository [25].
The structure of MPRA data and experiments
MPRA data consists of measuring the activity of some putative regulatory sequences, henceforth referred to as "elements". First a plasmid library of oligos is constructed, where each element is coupled with a number of short DNA tags, or barcodes. This plasmid library is then transfected into one or more cellular contexts, either as free-floating plasmids or integrated into the genome [21]. Next, RNA output is measured using RNA sequencing, and DNA output as a proxy for element copy number is measured using DNA sequencing (occasionally, element copy number is unmeasured), giving the data structure shown in Fig. 1. The log-ratio of RNA to DNA counts is commonly used as an activity outcome measure.
Since each element is measured across a number of barcodes, it is useful to summarize this data into a single activity measure $a$ for a single element in a single sample. Multiple approaches have been proposed for this summarization step. We consider two approaches. First is averaging, where a log-ratio is computed for each barcode, then averaged across barcodes. This treats the different barcodes as technical replicates. The second approach is aggregation, where RNA and DNA counts are each summed across barcodes, followed by formation of a log-ratio. This approach effectively uses the barcodes to simply increase the sequencing counts for that element.
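To make the two summarization approaches concrete, the following is a small R sketch for a single element in a single sample; the barcode counts and library sizes are made-up values, and log2 is used to match the activity measures shown in Fig. 2.

```r
# Toy barcode-level counts for one element in one sample (made-up numbers).
rna <- c(120, 95, 143, 88, 110)  # RNA counts, one per barcode
dna <- c( 60, 70,  55, 80,  65)  # DNA counts, one per barcode
Nr <- 1e6; Nd <- 2e6             # RNA and DNA library sizes
L  <- 1e6                        # common library size for scaling

# Averaging: a log-ratio per barcode, then the mean across barcodes
a_avg <- mean(log2((rna * L / Nr + 1) / (dna * L / Nd + 1)))

# Aggregation: sum counts across barcodes, then form one log-ratio
a_agg <- log2((1 + (L / Nr) * sum(rna)) / (1 + (L / Nd) * sum(dna)))

c(average = a_avg, aggregate = a_agg)
```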
In our investigation of the characteristics of MPRA data we use a number of datasets listed in Table 1. We have divided them into 3 categories. Two categories are focused on differential analysis: one on comparing different alleles and one on comparing the same element in different conditions (retina vs. cortex and episomal vs. chromosomal integration). The two allelic studies naturally involve paired comparisons in that the two elements being compared are always measured together in a single sample (which is replicated). We also use two saturation mutagenesis experiments.
Table 1 Datasets
The variability of MPRA data depends on element copy number
It is well established that count data from RNA sequencing studies exhibit a mean-variance relationship [26]. On the log scale, low counts are more variable across replicates than high counts, at least partly due to inherent Poisson variation in the sequencing process [27, 28]. This relationship has been leveraged in both count-based analysis methods [29, 30] and, more recently, linear model-based methods [24]. For count-based methods, this mean-variance relationship helps improve dispersion estimates, and for linear model-based methods, the relationship allows for estimation of weights reflecting inherent differences in variability for count observations in different samples and genes.
Because MPRAs are fundamentally sequencing assays, it is useful to know whether similar variance relationships hold in these experiments. Due to the construction of MPRA libraries, each element is present in a different (random) copy number, and this copy number ought to impact both background and signal measurements from the element. We are therefore specifically interested in the functional relationship between element copy number and the variability of the activity outcome measure. As the outcome measure we use the log-ratio of RNA counts to DNA counts (aggregate estimator), and we use aggregated DNA counts, averaged across samples, as an estimate of DNA copy number. We compute empirical standard deviations of the library size-corrected outcome measure across samples. In Fig. 2 we depict this relationship across the previously discussed publicly available datasets (Table 1). For all datasets, with one exception, there is higher variation associated with lower copy number. The functional form is reminiscent of the mean-variance relationship in RNA sequencing data [24], even though we here show the variance of a log-ratio of sequencing counts.
Variability of MPRA activity measures depends on element copy number. For multiple publicly available datasets we compute activity measures of putative regulatory element as the log2 ratio of aggregated RNA counts over aggregated DNA counts. Each panel shows the relationship between variability (across samples) of these activity measures and the average log2 DNA levels (across samples). Smoothed relationships are lowess curves representing the local average variability. The last plot shows all lowess curves on the same figure
Statistical modeling of MPRA data
To model MPRA data we propose to use a simple variant of the voom methodology [24], proposed for analysis of RNA sequencing data. This methodology is based on standard linear models, which are coupled with inverse variance weights representing the mean-variance relationship inherent in RNA sequencing data. The weights are derived from smoothing an empirical mean-variance plot. Similar to voom, we propose to use linear models to model log-ratio activity data from MPRAs, but we estimate weights by smoothing the relationship between empirical variance of the log-ratios and log-DNA copy number, as depicted in Fig. 2. We call this approach mpralm.
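The weighting idea behind mpralm can be sketched with limma's linear model machinery. This is only an illustration under the assumption that element-by-sample matrices of log-ratio activity measures (logratio) and log-DNA abundances (logdna) have already been computed; the Bioconductor mpra package implements the actual method.

```r
library(limma)

# Sketch of the mpralm weighting idea, not the package implementation.
# logratio: elements x samples matrix of log2(RNA/DNA) activity measures
# logdna:   elements x samples matrix of log2 DNA abundance
# design:   design matrix for the comparison of interest
fit_mpralm_sketch <- function(logratio, logdna, design) {
  # Empirical variability trend: per-element SD of log-ratios against
  # average log-DNA, smoothed with lowess (cf. Fig. 2)
  sd_elem  <- apply(logratio, 1, sd)
  mean_dna <- rowMeans(logdna)
  trend    <- lowess(mean_dna, sqrt(sd_elem))
  # Interpolate the fitted sqrt-SD at every observed log-DNA value and
  # convert to inverse-variance weights, as in voom
  f <- approxfun(trend$x, trend$y, rule = 2)
  w <- matrix(f(as.vector(logdna))^(-4), nrow = nrow(logdna))
  eBayes(lmFit(logratio, design, weights = w))
}
```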
The current literature on analysis of MPRA experiments contains many variant methods (see Introduction). To evaluate mpralm, we compare the method to the following variants used in the literature: QuASAR-MPRA, t-tests, and Fisher's exact test (FET). QuASAR-MPRA is a recently developed method that is targeted for the differential analysis of MPRA data [23]. It specifically addresses a two group differential analysis where the two groups are elements with two alleles and uses base-calling error rate in the model formulation. It collapses count information across samples to create three pieces of information for each element: one count for RNA reads for the reference allele, one count for RNA reads for the alternate allele, and one proportion that gives the fraction of DNA reads corresponding to the reference allele. Fisher's exact test similarly collapses count information across samples. To test for differential activity, a 2-by-G table is formed with RNA and DNA designation forming one dimension and condition designation (with G groups) in the second dimension. The t-test operates on the log-ratio outcomes directly; we use the aggregate estimator to summarize over barcodes. Either a paired or unpaired t-test is used based on experimental design.
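As an illustration of the Fisher's exact test approach just described, a two-group comparison for a single element pools counts across samples and tests a 2-by-2 table of RNA/DNA by condition; all counts below are made up.

```r
# Sketch of the pooled Fisher's exact test for one element (made-up counts).
rna_pooled <- c(group1 = 5200, group2 = 3100)  # RNA counts summed over samples
dna_pooled <- c(group1 = 2400, group2 = 2500)  # DNA counts summed over samples
tab <- rbind(RNA = rna_pooled, DNA = dna_pooled)
fisher.test(tab)$p.value
```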
Both edgeR and DESeq2 are popular methods for analysis of RNA-sequencing data represented as counts. The two methods are both built on negative binomial models, and both attempt to borrow information across genes. These methods allow for the inclusion of an offset. Because both methods use a logarithmic link function, including log-DNA as an offset allows for the modeling of log-ratios of RNA to DNA. This makes these methods readily applicable to the analysis of MPRA data, and they carry many of the same advantages as mpralm. In addition to QuASAR, t-tests, and Fisher's exact test, we examine the performance of edgeR and DESeq2 for differential activity analysis in our evaluations. We point out that although our application of edgeR and DESeq2 to MPRA data is straightforward, it has not been used in this way so far in the literature. Tewhey et al. [19] uses DESeq2 to perform differential expression analysis of RNA counts relative to DNA counts within a single condition. This assesses whether the regulatory elements have activating or repressive activity, but it does not assess whether the activity of regulatory elements differs between conditions. We remind the community of the ability to use offsets in the edgeR and DESeq2 negative binomial models, and explore a new use of these models for MPRA data.
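To show what such a use of edgeR with a log-DNA offset could look like, here is a minimal sketch; the rna and dna matrices (elements by samples) and the group factor are assumed inputs, and this is our adaptation rather than a documented MPRA workflow.

```r
library(edgeR)

# Sketch: differential activity with edgeR, using log-DNA as an offset so
# the negative binomial model effectively acts on RNA relative to DNA.
run_edger_offset <- function(rna, dna, group) {
  design   <- model.matrix(~ group)
  y        <- DGEList(counts = rna)
  y$offset <- log(dna + 1)          # element- and sample-specific offsets
  y        <- estimateDisp(y, design)
  fit      <- glmFit(y, design)
  glmLRT(fit, coef = 2)             # test the group effect
}
```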
mpralm is a powerful method for differential analysis
First, we focus on evaluating the performance of mpralm for differential analysis. We compare to QuASAR-MPRA, t-tests, Fisher's exact test, edgeR, and DESeq2. We use four of the previously discussed studies, specifically the Tewhey, Inoue, Ulirsch, and Shen studies. Two of these studies (Tewhey, Ulirsch) focus on comparing the activity of elements with two alleles, whereas the other two (Inoue, Shen) compare the activity of each element in two different conditions. For the allelic studies, we use a random effects model for mpralm and paired t-tests. In the random effects model, we estimate the correlation between the multiple alleles for a given single nucleotide polymorphism (SNP). This correlation estimate improves the estimation of element-wise variances used in testing for differences between conditions. Both Tewhey et al. [19] and Ulirsch et al. [18] compare alleles in different cellular contexts; we observe similar behavior of all evaluations in all contexts (data not shown) and have therefore chosen to depict results from one cellular context for both of these studies. For the Tewhey dataset we depict results both from a large pool of elements used for initial screening and a smaller, targeted pool.
Figure 3 shows p-value distributions that result from running all methods on the 5 real datasets. Across these datasets, all methods except for QuASAR-MPRA show a well-behaved p-value distribution; high p-values appear uniformly distributed, and there is a peak at low p-values. QuASAR-MPRA consistently shows conservative p-value distributions. This feature of the p-value distributions is not apparent from the QQ-plots (Fig. 3, first row) that the authors use to evaluate their method [23]. Fisher's exact test has a very high peak around zero, likely due to the extreme sensitivity of the test with high counts. We examine mpralm using both an average estimator and an aggregation estimator for summarizing across barcodes; this cannot be done for the Tewhey dataset where we do not have access to barcode-level data. To fully interpret these p-value distributions, we need to assess error rates. For example, the large number of rejections achieved using Fisher's exact test may be associated with a large number of errors.
Comparison of detection rates and p-value calibration over datasets. (a) QQ-plots (row 1), and (b) density plots (rows 2 and 3) for p-values for all datasets, including a zoom of the [0,0.1] interval for some datasets (row 3). Over all datasets, most methods show p-values that closely follow the classic mixture of uniformly distributed p-values with an enrichment of low p-values for differential elements. For the datasets which had barcode level counts (Inoue, Ulirsch, and Shen), we used two types of estimators of the activity measure (log-ratio of RNA/DNA) with mpralm, shown in light and dark blue
To estimate empirical dataset-specific type I error rates, we simulated count data that gave rise to null comparisons for each regulatory element (Methods). With all comparisons being null, we estimate the dataset-dependent type I error rate for each method as the fraction of rejected null hypotheses at a given nominal level. Figure 4 shows these estimated error rates (based on simulated data). We observe that Fisher's exact test has wildly inflated type I error, presumably because MPRA count data are overdispersed and because exact tests for large count data can be very sensitive. The other methods are much closer to being calibrated, although there is some variability from dataset to dataset. Generally, QuASAR-MPRA seems to be slightly liberal across datasets. The t-test is close to consistently calibrated. DESeq2 and edgeR are the most variable in their calibration; in particular, they are fairly conservative for the Ulirsch dataset and liberal for the Shen dataset. mpralm falls between the t-test and edgeR/DESeq2 for calibration. For high throughput assays, there is also interest in type I error rate calibration at very low nominal levels (e.g. Bonferroni-adjusted levels). In Table 2, we show estimated error rates for very low nominal error rates (roughly corresponding to $\alpha = 0.05/3000, 0.05/5000, 0.05/7000, 0.05/9000$). We see that at these stringent levels typical of multiple testing situations in these assays, mpralm with the aggregate estimator and the t-test tend to not make any errors.
Empirical type I error rates. Type I error rates were estimated for all methods with simulated null data (Methods). For the datasets which had barcode level counts (Inoue, Ulirsch, and Shen), we used two types of estimators of the activity measure (aggregate and average estimator) with mpralm, shown in dark and light blue
Table 2 Observed type I error rates for low nominal error rates
To investigate the trade-off between observed power (number of rejected tests) and type I error rates, we combine these quantities in two ways: (1) we look at the number of rejections as a function of observed type I error rates and (2) we look at estimated FDR as a function of the number of rejections. Specifically, we use the error rates we estimate using the simulation (Fig. 4) to adjust the comparison of the observed number of rejections (Fig. 3).
In Fig. 5 we display the number of rejections as a function of calibrated (using simulation, Fig. 4) type I error rates. For a fixed calibrated error rate, we interpret a high number of rejections to suggest high power (since the error rates are calibrated to be the same). QuASAR-MPRA shows poor performance on this metric across datasets because it generates such conservative p-values. FET shows good performance for the Tewhey and Inoue datasets, but this is caused by its extremely liberal p-value distribution. It performs more poorly relative to the other methods in the Ulirsch and Shen datasets. Across these datasets, mpralm tends to have the best performance, particularly at low nominal error rates (Fig. 4, rows 2 and 3). However, edgeR and DESeq2 are close behind.
Number of rejections as a function of observed error rate. To compare the observed detection (rejection) rates of the methods fairly, we compare them at the same observed type I error rates, estimated in Fig. 4. The bottom two rows are zoomed-in versions of the top row. We see that mpralm, edgeR, and DESeq2 consistently have the highest detection rates
If we know the proportion of true null hypotheses, $\pi_0$, and the true type I error rate, we can compute false discovery rates (FDR). The true $\pi_0$ is an unknown quantity, but we estimate it using a method developed by Phipson et al. [31]. The true type I error rate is also an unknown quantity, but we estimated it via a realistic simulation as described earlier (Fig. 4). Using these estimates, we compute an estimated FDR (Methods). In Fig. 6 the estimated FDR (for a given $\pi_0$) is displayed as a function of the number of rejections. QuASAR-MPRA, t-tests, and Fisher's exact test tend to have the highest false discovery rates. mpralm tends to have the lowest FDRs, with edgeR and DESeq2 close behind. For the Inoue dataset, all methods have very low FDR, presumably because a very high fraction of elements are expected to be differential given the extreme expected differences between the comparison groups.
Estimated FDR. For each dataset and method, the false discovery rate is estimated as a function of the number of rejections. This requires estimation of the proportion of true null hypotheses (Methods). The bottom row is a zoomed-in version of the top row
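The estimated FDR used above can be written as the expected number of false discoveries divided by the number of rejections. A minimal sketch, where alpha_cal denotes the simulation-calibrated type I error at the chosen p-value cutoff (the function name and example values are ours):

```r
# Estimated FDR at a given cutoff: expected false discoveries / rejections.
# pi0 = estimated proportion of true nulls, m = number of tests,
# alpha_cal = calibrated type I error at the cutoff, R = rejections.
estimate_fdr <- function(alpha_cal, pi0, m, R) {
  pmin(1, pi0 * m * alpha_cal / R)
}
estimate_fdr(alpha_cal = 0.001, pi0 = 0.8, m = 5000, R = 150)  # ~0.027
```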
In conclusion, we observe that Fisher's exact test has too high of an error rate and that QuASAR-MPRA is underpowered; based on these results we cannot recommend either method. T-tests perform better than these two methods but are still outperformed by mpralm, edgeR, and DESeq2. These methods have similar performance, but mpralm seems to have slightly better performance than the latter two in terms of consistency of type I error calibration and power.
Comparison of element rankings between methods
While power and error calibration are important evaluation metrics for a differential analysis method, they do not have a direct relation with element rankings, which are often of practical importance. For the top performing methods in our evaluations (mpralm, t-test, edgeR and DESeq2), we examine rankings in more detail.
We observe fairly different rankings between mpralm and the t-test and examine drivers of these differences in Fig. 7. For each dataset, we find the MPRA elements that appear in the top 200 elements with one method but not the other. We will call these uniquely top ranking elements, and they make up 24% to 64% of the top 200 depending on the dataset. For most datasets, DNA, RNA, and log-ratio activity measures are higher in uniquely top ranking mpralm elements (top three rows of Fig. 7). It is desirable for top ranking elements to have higher values for all three quantities because higher DNA levels increase confidence in the activity measure estimation, and higher RNA and log-ratio values give a stronger indication that a particular MPRA element has regulatory activity. In the last two rows of Fig. 7, we compare effect sizes and variability measures (residual standard deviations). The t-test uniformly shows lower variability but also lower effect sizes for its uniquely top ranking elements. This follows experience from gene-expression studies where standard t-tests tend to underestimate the variance and thereby exhibit t-statistics which are too large, leading to false positives. In MPRA studies, as with most other high-throughput studies, it is typically more useful to have elements with high effect sizes at the top of the list. Such elements are able to be picked out by mpralm due to its information sharing and weighting framework.
Distribution of quantities related to statistical inference in top ranked elements with mpralm and t-test. MPRA elements that appear in the top 200 elements with one method but not the other are examined here. For these uniquely top ranking elements, the DNA, RNA, and log-ratio percentiles are shown in the first three rows. The effect sizes (difference in mean log-ratios) and residual standard deviations are shown in the last two rows. Overall, uniquely top ranking elements for the t-test tend to have lower log-ratio activity measures, effect sizes, and residual standard deviations
We similarly compare mpralm rankings with edgeR and DESeq2 rankings in Figs. 8 and 9. The ranking concordance between mpralm and these two methods is much higher than with the t-test. Generally, uniquely top ranking mpralm elements have higher DNA and RNA levels, but lower log-ratio activity measures. Uniquely top ranking mpralm elements also tend to have larger effect sizes. The variability of activity measures (residual SD) is similar among the methods.
Distribution of quantities related to statistical inference in top ranked elements with mpralm and edgeR. Similar to Fig. 7
Distribution of quantities related to statistical inference in top ranked elements with mpralm and DESeq2. Similar to Fig. 7
Accuracy of activity measures and power of differential analysis depends on summarization technique over barcodes
MPRA data initially contain count information at the barcode level, but we typically desire information summarized at the element level for the analysis stage. We examine the theoretical properties of two summarization methods: averaging and aggregation. Under the assumption that DNA and RNA counts follow a count distribution with a mean-variance relationship, we first show that averaging results in activity estimates with more bias. Second, we examine real data performance of these summarization techniques.
Let $R_b$ and $D_b$ denote the RNA and DNA count, respectively, for barcode $b=1,\ldots,B$ for a putative regulatory element in a given sample. We suppress the dependency of these counts on sample and element. Typically, $B$ is approximately 10 to 15 (for examples, see Table 1). We assume that $R_b$ has mean $\mu_r$ and variance $k_r\mu_r$ and that $D_b$ has mean $\mu_d$ and variance $k_d\mu_d$. Typically the constants $k_d$ and $k_r$ are greater than 1, modeling overdispersion. Negative binomial models are a particular case with $k=1+\phi\mu$, where $\phi$ is an overdispersion parameter. Also let $N_d$ and $N_r$ indicate the library size for DNA and RNA, respectively, in a given sample. Let $p_d$ and $p_r$ indicate the fraction of reads mapping to element $e$ for DNA and RNA, respectively, in a given sample so that $\mu_r=N_rp_r$ and $\mu_d=N_dp_d$. Let $a$ be the true activity measure for element $e$ defined as $a := \log(p_r/p_d)$. When performing total count normalization, the RNA and DNA counts are typically scaled to a common library size $L$.
The average estimator of $a$ is an average of barcode-specific log activity measures:
$$\hat a^{AV} = \frac{1}{B} \sum\limits_{b = 1}^{B} \log\left(\frac{R_{b}L/N_{r} + 1}{D_{b}L/N_{d} + 1} \right) $$
Using a second order Taylor expansion (Methods), it can be shown that this estimator has bias approximately equal to
$$\text{bias}^{AV} \approx \frac{1}{2}\left(\frac{k_{d}}{\mu_{d}} - \frac{k_{r}}{\mu_{r}} \right) = \frac{1}{2}\left(\frac{k_{d}}{N_{d} p_{d}} - \frac{k_{r}}{N_{r} p_{r}} \right) $$
The aggregate estimator of a first aggregates counts over barcodes:
$$\hat a^{AGG} = \log\left(\frac{1 + (L/N_r)\sum\nolimits_{b=1}^{B} R_{b}}{1 + (L/N_d)\sum\nolimits_{b=1}^{B} D_{b}} \right) $$
Using an analogous Taylor series argument (Methods), we can show that this estimator has bias approximately equal to
$$\text{bias}^{AGG} \approx \frac{1}{B}\text{bias}^{AV} $$
The aggregate estimator has considerably less bias than the average estimator for most MPRA experiments because most experiments use at least 10 barcodes per element. Bias magnitude depends on count levels and the true activity measure a. Further, the direction of bias depends on the relative variability of RNA and DNA counts. Similar Taylor series arguments show that the variance of the two estimators is approximately the same.
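To make the two estimators concrete, the following Python sketch computes both from barcode-level counts for a single element in a single sample; the library sizes, fractions, and negative binomial parameters below are illustrative assumptions, not values from any of the datasets.

```python
import numpy as np

def average_estimator(R, D, L, Nr, Nd):
    """Average of barcode-specific log activity measures."""
    return np.mean(np.log((R * L / Nr + 1) / (D * L / Nd + 1)))

def aggregate_estimator(R, D, L, Nr, Nd):
    """Log-ratio of barcode-aggregated, library-size-scaled counts."""
    return np.log((1 + (L / Nr) * R.sum()) / (1 + (L / Nd) * D.sum()))

rng = np.random.default_rng(0)
B, Nr, Nd, L = 10, 2e7, 2e7, 2e7            # barcodes and library sizes
pr, pd_ = 4e-6, 2e-6                        # true fractions, so a = log(2)
# negative binomial draws with means Nr*pr and Nd*pd_ (shape parameter 5)
R = rng.negative_binomial(n=5, p=5 / (5 + Nr * pr), size=B)
D = rng.negative_binomial(n=5, p=5 / (5 + Nd * pd_), size=B)
print(average_estimator(R, D, L, Nr, Nd))
print(aggregate_estimator(R, D, L, Nr, Nd))
```

Averaging each estimate over repeated simulations recovers the bias ordering derived above.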
The choice of estimator can impact the estimated log fold-changes (changes in activity) in a differential analysis. In Fig. 10 we compare the log fold-changes inferred using the two different estimators. For the Inoue dataset, these effect sizes are very similar, but there are larger differences for the Ulirsch and Shen datasets.
Fig. 10
Comparison of the average and aggregate estimators. For the three datasets containing barcode-level information, we compare the effect sizes (log fold changes in activity levels) resulting from use of the aggregate and average estimators. The y=x line is shown in red
Aggregation technique affects power in a differential analysis. In the last three columns of Figs. 3, 4, 5, and 6, we compare aggregation to averaging using mpralm. The two estimators have similar type I error rates but very different detection rates between datasets. The average estimator is more favorable for the Ulirsch and Shen datasets, and the aggregate estimator is more favorable in the Inoue dataset.
Recommendations for sequencing depth and sample size
To aid in the design of future MPRA experiments, we used the above mathematical model to inform power calculations. Power curves are displayed in Fig. 11. We observe that the variance of the aggregate estimator depends minimally on the true unknown activity measure but is greatly impacted by sequencing depth. We fix one of the two true activity measures to be 0.8 as this is common in many datasets. We use a nominal type I error rate of 0.05 that has been Bonferroni adjusted for 5000 tests to obtain conservative power estimates. We also use ten barcodes per element as this is typical of many studies.
Power analysis. Variance and power calculated based on our theoretical model. (a) Variance of the aggregate estimator depends on library size and the true unknown activity level but not considerably on the latter. (b)-(f) Power curves as a function of library size for different effect sizes and sample sizes. Effect sizes are log2 fold-changes
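A minimal sketch of how such power curves can be generated from the variance approximation of the aggregate estimator derived in Methods is given below. It assumes equal RNA and DNA library sizes, a single common dispersion constant, and ignores the RNA-DNA covariance term; all numeric values are illustrative rather than those used for Fig. 11.

```python
import numpy as np
from scipy.stats import norm

def var_agg(a, p_d, L, N, B, k=2.0):
    """Approximate variance of the aggregate estimator (see Methods),
    ignoring the RNA-DNA covariance term; k is a common dispersion."""
    p_r = p_d * np.exp(a)                   # since a = log(p_r / p_d)
    mu_r, mu_d = N * p_r, N * p_d
    vr = (L / N) ** 2 * B * k * mu_r / (B * mu_r * L / N + 1) ** 2
    vd = (L / N) ** 2 * B * k * mu_d / (B * mu_d * L / N + 1) ** 2
    return vr + vd

def power(effect, a0, p_d, L, N, B, n_per_group, alpha=0.05 / 5000):
    """Two-sided z-test power for a difference in activity of `effect`
    (natural-log scale) between two groups of n_per_group samples."""
    se = np.sqrt((var_agg(a0, p_d, L, N, B)
                  + var_agg(a0 + effect, p_d, L, N, B)) / n_per_group)
    z = norm.ppf(1 - alpha / 2)
    return norm.sf(z - abs(effect) / se) + norm.cdf(-z - abs(effect) / se)

# e.g. ten barcodes, four replicates per group, Bonferroni-adjusted alpha
print(power(effect=1.0, a0=0.8, p_d=1e-5, L=3e7, N=3e7, B=10, n_per_group=4))
```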
Our model suggests a marked impact of sample size, with power increasing substantially as the number of replicates grows, especially between 2 and 6 samples per group. From Fig. 12, we can see that large effect sizes (effect sizes of 1 or greater) are typical for top ranking elements in many MPRA studies. In this situation it is advisable to do 4 or more replicates per group.
Effect size distributions across datasets. Effect sizes in MPRA differential analysis are the (precision-weighted) differences in activity scores between groups, also called log2 fold-changes. The distribution of log2 fold changes resulting from using mpralm with the aggregate estimator are shown here
The field of MPRA data analysis has been fragmented and consists of a large collection of study-specific ad hoc methods. Our objective in this work has been to provide a unified framework for the analysis of MPRA data. Our contributions can be divided into three areas. First, we have investigated techniques for summarizing information over barcodes. In the literature, these choices have always been made without justification and have varied considerably between studies. Second, we have developed a linear model framework, mpralm, for powerful and flexible differential analysis. To our knowledge, this is the second manuscript evaluating methods for statistical analysis in MPRA studies. The first proposed the QuASAR-MPRA method [23], which we show to have worse performance than mpralm. In our comparisons, we provide the largest and most comprehensive comparison of analysis methods so far; earlier work used only two datasets for comparisons. Third, we have analyzed the impact of sequencing depth and number of replicates on power. To our knowledge, this is the first mathematically-based power investigation, and we expect this information to be useful in the design of MPRA studies.
The activity of a regulatory element can be quantified with the log-ratio of RNA counts to DNA counts. In the literature, groups have generally taken two approaches to summarizing barcode information to obtain one such activity measure per element per sample. One approach is to add RNA and DNA counts from all barcodes to effectively increase sequencing depth for an element. This is termed the aggregate estimator. Another approach is to compute the log-ratio measure for each barcode and use an average of these measures as the activity score for an element. This is termed the average estimator, and we have shown that it is more biased than the aggregate estimator. Because of this bias, we caution against the use of the average estimator when comparing activity scores in enhancer groups (often defined by sequence features). However, it is unclear which of the two estimators is more appropriate for differential analysis.
In addition to barcode summarization recommendations, we have proposed a linear model framework, mpralm, for the differential analysis of MPRA data. Our evaluations show that it produces calibrated p-values and is as or more powerful than existing methods being used in the literature.
While the count-based tools edgeR and DESeq2 would seem like natural methods for the analysis of MPRA data, they have not yet been used for differential analysis of MPRA activity measures between groups. There has been some use of DESeq2 to identify (filter) elements with regulatory activity (differential expression of RNA relative to DNA) [19, 32], but not for comparisons of activity measures between groups. In this work, we propose the use of log-DNA offsets as a potentially sensible way to adapt these tools for differential analysis. In our evaluations, we see that this approach is most competitive with mpralm. For the allelic studies [18, 19], we observe that the degree of within-sample correlation affects the power of mpralm relative to comparison methods. In particular, there is little difference in the performance of the different methods for the Tewhey large pool experiment, and this experiment had overall low within-sample correlation. Both the Tewhey targeted pool experiment and the Ulirsch experiment had larger within-sample correlations, and we observe that mpralm has increased power over the comparison methods for these datasets. We expect that mpralm will generally be more powerful for paired designs with high within-pair correlations.
In terms of element rankings, mpralm, edgeR, and DESeq2 are similar. However, we observe a substantial difference in ranking between t-tests and mpralm and believe top ranked mpralm elements exhibit better properties compared to those from t-tests.
Linear models come with analytic flexibility that is necessary to handle diverse MPRA designs. Paired designs involving alleles, for example, are easily handled with linear mixed effects models due to computational tractability. The studies we have analyzed here only consider two alleles per locus. It is possible to have more than two alleles at a locus, and such a situation cannot be addressed with paired t-tests, but is easily analyzed using mpralm. This is important because we believe such studies will eventually become routine for understanding results from genome-wide association studies. We note that for allelic studies, it is often of interest to filter detections of significant differences to those cases where at least one allele appears to show regulatory activity. This is not inherent in the mpralm method, but it is possible to screen for regulatory activity with a conventional count-based differential analysis of RNA counts versus DNA counts using methods such as voom, edgeR, or DESeq2.
While we have focused on characterizing the mpralm linear model framework for differential analysis, it is possible to include variance weights in the multivariate models used in saturation mutagenesis and characterization studies. We expect that modeling the copy number-variance relationship will improve the performance of these models.
For power, we find a substantial impact of even small increases in sample size. This is an important observation because many MPRA studies use 2 or 3 replicates per group, and our results suggest that power can be substantially increased with even a modest increase in sample size. We caution that using less than 4 replicates can be quite underpowered.
In short, the tools and ideas set forth here will aid in making rigorous conclusions from a large variety of future MPRA studies.
We have observed differences in the MPRA activity estimates resulting from the averaging and aggregation methods of summarizing counts across barcodes. For this reason, we recommend that practitioners perform sensitivity analyses for their results by using both of these estimation procedures. The mpralm linear model framework appears to have calibrated type I error rates and to be as or more powerful than the t-tests and Fisher's exact type tests that have been primarily used in the literature. mpralm has similar performance to variations of the edgeR and DESeq2 methods that we introduce here. These variations involve including DNA counts as offsets in the RNA differential analysis procedures. We recommend either of these 3 methods for unpaired differential analysis settings (such as tissue comparison studies), but we recommend mpralm for allelic studies due to its ability to better model the paired nature of the alleles with mixed models. Finally, we recommend that practitioners use at least 4 samples per condition for reasonable power to detect differences for top ranking elements.
See Table 1. Dataset labels used in figures are accompanied by short descriptions below.
Melnikov: Study of the base-level impact of mutations in two inducible enhancers in humans [12]: a synthetic cAMP-regulated enhancer (CRE) and a virus-inducible interferon-beta enhancer (IFNB). We do not look at the IFNB data because it contains only one sample. We consider 3 datasets:
Melnikov: CRE, single-hit, induced state: Synthetic cAMP-regulated enhancer, single-hit scanning, induced state.
Melnikov: CRE, multi-hit, uninduced state: Synthetic cAMP-regulated enhancer, multi-hit sampling, uninduced state.
Melnikov: CRE, multi-hit, induced state: Synthetic cAMP-regulated enhancer, multi-hit sampling, induced state.
Kheradpour: Study of the base-level impact of mutations in various motifs [15]. Transfection into HepG2 and K562 cells.
Tewhey: Study of allelic effects in eQTLs [19]. Transfection into two lymphoblastoid cell lines (NA12878 and NA19239) as well as HepG2. In addition two pools of plasmids are considered: a large screening pool and a smaller, targeted pool, designed based on the results of the large pool. We use data from both the large and the targeted pool in NA12878.
Inoue: chromosomal vs. episomal: Comparison of episomal and chromosomally-integrated constructs [21]. This study uses a wild-type and mutant integrase to study the activity of a fixed set of putative regulatory elements in an episomal and a chromosomally-integrated setting, respectively.
Ulirsch: Study of allelic effects in GWAS to understand red blood cell traits [18]. Transfection into K562 cells as well as K562 with GATA1 overexpressed. We use the data from K562.
Shen: mouse retina vs. cortex: Comparison of cis-regulatory elements in-vivo in mouse retina and cerebral cortex [22]. Candidate CREs that tile targeted regions are assayed in-vivo in these two mouse tissues with adeno-associated virus delivery.
Simulation of data for type I error rate estimation
Here we describe the realistic simulation of data to closely match properties of the real datasets. The starting point is a given dataset with two comparison groups; a code sketch of the full procedure follows the list below:
Compute log-ratio activity measures from the original RNA and DNA counts.
Calculate and save the element-wise residual standard deviations of the log-ratios after mean-centering them in each group. Calculate and save the mean of the original log-ratios in group 1. These element-wise means will become the mean for both groups in the new null data.
Standardize the log-ratios in each group to have mean zero and unit variance.
The standardized log-ratios from both groups all have mean zero and unit variance. Resample these standardized residuals without replacement for each element in each sample. For paired (allelic) studies, resample each allele-allele pair without replacement.
Multiply the resampled residuals by the element-wise residual standard deviations, and add the original group 1 element-specific means. This creates identically distributed log-ratios in both comparison groups.
Retain the original DNA counts from all samples.
Compute RNA counts using the original DNA counts and the resampled log-ratio activity measures.
The original DNA counts and the new RNA counts form the new synthetic dataset.
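A minimal Python sketch of this null-data construction is given below, assuming elements-by-samples count matrices and an unpaired design; the paired (allelic) variant and the library-size scaling used in the actual pipeline are omitted for brevity.

```python
import numpy as np

def simulate_null(rna, dna, group, rng):
    """Create a null dataset by resampling log-ratios (unpaired sketch).

    `rna`, `dna`: elements x samples count arrays; `group`: boolean
    vector marking group-1 samples. Columns of the result are reordered
    with group-1 samples first."""
    logratio = np.log((rna + 1) / (dna + 1))
    g1, g2 = logratio[:, group], logratio[:, ~group]
    mu1 = g1.mean(axis=1, keepdims=True)        # group-1 means, reused for both
    centered = np.hstack([g1 - mu1, g2 - g2.mean(axis=1, keepdims=True)])
    sd = centered.std(axis=1, ddof=1, keepdims=True)
    std_resid = centered / sd                   # mean zero, unit variance
    # resample standardized residuals without replacement within each element
    idx = np.argsort(rng.random(std_resid.shape), axis=1)
    resampled = np.take_along_axis(std_resid, idx, axis=1)
    new_logratio = resampled * sd + mu1         # same mean in both groups
    dna_sorted = np.hstack([dna[:, group], dna[:, ~group]])
    new_rna = np.maximum((dna_sorted + 1) * np.exp(new_logratio) - 1, 0)
    return np.round(new_rna), dna_sorted
```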
Count preprocessing
DNA and RNA counts are scaled to have the same library size before running any methods. We perform minimal filtering on the counts to remove elements from the analysis that have low counts across all samples. Specifically, we require that DNA counts must be at least 10 in all samples to avoid instability of the log-ratio activity measures. We also remove elements in which these log-ratios are identical across all samples; in practice this only happens when the RNA counts are zero across all samples. Both filtering steps remove clear outliers in the copy number-variance plot (Fig. 2).
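A minimal sketch of this filtering step, assuming count matrices with elements as rows and samples as columns:

```python
import numpy as np

def filter_elements(rna, dna, min_dna=10):
    """Keep elements whose DNA count is at least min_dna in every sample
    and whose log-ratios are not identical across all samples."""
    logratio = np.log((rna + 1) / (dna + 1))
    enough_dna = (dna >= min_dna).all(axis=1)
    varying = logratio.std(axis=1) > 0
    keep = enough_dna & varying
    return rna[keep], dna[keep]
```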
The square root of the standard deviation of the log-ratios over samples is taken as a function of average log DNA levels over samples, and this relationship is fit with a lowess curve. Predicted variances are inverted to form observation-level precision weights. Log-ratio activity measures and weights are used in the voom analysis pipeline. For the allelic studies, a mixed model is fit for each element using the duplicateCorrelation module in the limma Bioconductor package [33].
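The weight construction can be sketched as follows in Python; this mirrors the voom-style idea described above rather than the exact limma implementation, and the smoothing fraction is an illustrative choice.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def precision_weights(logratio, avg_log_dna, frac=0.5):
    """Fit the sqrt-standard-deviation vs. log-DNA trend with lowess and
    invert the predicted variances into precision weights.

    `logratio`: elements x samples activity measures;
    `avg_log_dna`: average log DNA level per element."""
    sqrt_sd = np.sqrt(logratio.std(axis=1, ddof=1))
    fitted = lowess(sqrt_sd, avg_log_dna, frac=frac, return_sorted=False)
    pred_var = np.maximum(fitted, 1e-4) ** 4    # back-transform sqrt(sd)
    return 1.0 / pred_var                       # one weight per element
```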
This linear model approach has a number of advantages. (1) It is flexible to different functional forms of the variance-copy number relationship. (2) It allows for a unified approach to modeling many different types of MPRA design using the power of design matrices. (3) It allows for borrowing of information across elements using empirical Bayes techniques. (4) It allows for different levels of correlation between elements using random effects.
mpralm enables modeling for complex comparisons
While many comparisons of interest in MPRA studies can be posed as a two group comparison (e.g. major allele vs. minor allele), more complicated experimental designs are also of interest. For example, in the allelic study conducted by Ulirsch [18], putative biallelic enhancer sequences are compared in two cellular contexts. The first is a standard culture of K562 cells, and the second is a K562 culture that induces over-expression of GATA1 for a more terminally-differentiated phenotype. A straightforward question is whether an allele's effect on enhancer activity differs between cellular contexts. Let yeia be the enhancer activity measure (log ratio of RNA over DNA counts) for element e, in sample i for allele a. Let x1eia be a binary indicator of the mutant allele. Let x2eia be a binary indicator of the GATA1 over-expression condition. Then the following model
$$\begin{array}{*{20}l} Y_{eia} =& \beta_{0e} + \beta_{1e}x_{1eia} + \beta_{2e}x_{2eia} + \\ & \beta_{3e}x_{1eia}x_{2eia} + b_{i} + \epsilon_{eia} \end{array} $$
is a linear mixed effects model for activity measures, where bi is a random effect that induces correlation between the two alleles measured within the same sample. We can perform inference on the β3e parameters to determine differential allelic effects. Such a model is easy to fit within the mpralm framework, since our framework supports model specifications by general design matrices. In contrast, this question cannot be formulated in the QuASAR, t-test, and Fisher's exact test frameworks. Neither edgeR nor DESeq2 support the fitting of mixed effects models.
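For illustration, a per-element fit of this interaction model can be sketched in Python with statsmodels; the column names below are hypothetical, and mpralm itself fits the model within the limma framework rather than as shown here.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_allelic_interaction(df: pd.DataFrame):
    """Fit the interaction model for one element.

    `df` is hypothetical long-format data: one row per sample x allele x
    condition, with columns y (log-ratio activity), x1 (mutant allele
    indicator), x2 (GATA1 over-expression indicator) and sample (pairing
    identifier for the random intercept b_i)."""
    model = smf.mixedlm("y ~ x1 * x2", data=df, groups=df["sample"])
    result = model.fit()
    # inference on beta_3, the differential allelic effect
    return result.params["x1:x2"], result.pvalues["x1:x2"]
```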
Running mpralm, QuASAR, t-test and Fisher's exact test
For all methods, DNA and RNA counts were first corrected for library size with total count normalization. For edgeR and DESeq2, DNA counts were included as offset terms on the log scale before standard analysis. For the t-test we computed the aggregate estimator of the log-ratio as the outcome measure. For Fisher's exact test, we summed DNA and RNA counts in the two conditions to form a 2-by-2 table as input to the procedure. For QuASAR-MPRA, we summed RNA counts in each condition to get one reference condition count and one alternative condition count per element. We also summed DNA counts in all samples and in the reference condition to get one DNA proportion for each element. These were direct inputs to the method.
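As an example of the input preparation described above, the following sketch forms the 2-by-2 table for Fisher's exact test from one element's counts; the array shapes and grouping vector are assumptions.

```python
import numpy as np
from scipy.stats import fisher_exact

def fisher_element(rna, dna, group):
    """Fisher's exact test for one element: sum the RNA and DNA counts
    within each condition to form the 2-by-2 table described above.
    `rna`, `dna`: per-sample count vectors; `group`: boolean condition."""
    table = np.array([[rna[group].sum(), rna[~group].sum()],
                      [dna[group].sum(), dna[~group].sum()]])
    _, p_value = fisher_exact(table)
    return p_value
```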
Metrics used for method comparison
We use a number of metrics to compare methods and describe them in detail here.
Shape of p-value distributions. Calibrated differential analysis methods have a characteristic shape for the p-value distribution. Normally the majority of p-values are uniformly distributed, corresponding to null comparisons, and there is a peak at low p-values for the truly differential comparisons. We compare the methods with regards to this expected shape.
Type I error rates. For all methods and datasets, we estimate via realistic simulation (described above) the proportion of truly null comparisons in which we reject the null hypothesis.
Number of rejections as a function of type I error rate. To some degree, this is a comparison of power between methods. We cannot compare power of methods by comparing the height of the peaks around zero in the p-value distributions because those plots tell us nothing about type I error rates. We wished to fix type I error rate and compare the number of rejections made by the different methods. For a given nominal type I error rate, we computed (1) the number of rejections made by a method and (2) the estimated true type I error rate. For example, a conservative method would have a true type I error rate of 0.03, say, at a nominal level of 0.05. Quantity (2) is plotted on the x-axis of Fig. 5, and quantity (1) is plotted on the y axis. Curves that are above others indicate higher detections for fixed type I error rate.
False discovery rates. These are defined as the fraction of rejections that are false positives. The estimation of these FDRs is described below.
Metrics that describe top ranking elements. The metrics above focus on type I error rates and power. Comparisons of the element rankings produced by the different methods were performed by taking elements that were ranked in the top 200 for one method but not the other. (Comparisons were done pairwise, with mpralm always being one of the methods compared.) Metrics measured the magnitude of the RNA counts, DNA counts, estimated log-ratio activity measures, effect sizes (difference in activity between groups), and residual standard deviations of the activity measures. It is desirable for the RNA, DNA, log-ratio, and effect size values to be higher in top ranking elements. It is also desirable for top ranking elements to have residual standard deviations that are low, but not so low as to have been underestimated. It is common for variability to be underestimated when there are uniformly low counts across samples.
Estimation of FDR
The proportion of truly null hypotheses π0 for each dataset was estimated using the "lfdr" method in the propTrueNull function within limma [31]. As is common with π0 estimation procedures, the p-values resulting from a statistical analysis are used in the estimation process. To this end, the π0 proportion was estimated with the p-values resulting from mpralm, t-test, QuASAR, edgeR, and DESeq2, and the median of these estimates was used as the estimate for π0 for a given dataset. Fisher's exact test was excluded from this estimate because it gave an estimate of π0 that was considerably smaller than the other methods, and which was dubious in light of its uncontrolled type I error rate. We multiply the number of tests by these π0 estimates to obtain an estimate of the number of truly null hypotheses. We then multiply this by our estimate of the true dataset-specific type I error rate (as shown in Fig. 4) to obtain an estimate of the number of false positives. Dividing by the number of rejected hypotheses at a given nominal significance level gives the estimated FDRs in Fig. 6.
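The resulting FDR arithmetic is simple enough to state directly; the following sketch reproduces it, with all numeric inputs being illustrative placeholders rather than values from the study.

```python
def estimated_fdr(pi0, n_tests, true_type1, n_rejected):
    """FDR estimate as described above: the expected number of false
    positives among the truly null tests divided by the rejections."""
    n_null = pi0 * n_tests                # estimated truly null hypotheses
    false_positives = n_null * true_type1
    return false_positives / max(n_rejected, 1)

# purely illustrative numbers: pi0 = 0.9, 5000 tests,
# estimated true type I error 0.03, 300 rejections
print(estimated_fdr(0.9, 5000, 0.03, 300))    # -> 0.45
```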
Bias and variance of estimators
We use Taylor series arguments to approximate the bias and variance of the aggregate and average estimators. The following summarizes our parametric assumptions:
$$\begin{array}{*{20}l} \mathrm{E}[R_{b}] &= \mu_{r} = N_{r} p_{r} & \text{Var}(R_{b}) &= k_{r}\mu_{r} \\ \mathrm{E}[D_{b}] &= \mu_{d} = N_{d} p_{d} & \text{Var}(D_{b}) &= k_{d}\mu_{d} \end{array} $$
We suppress the dependency of these parameters on sample and element. Library sizes are given by $N$. The fraction of reads coming from a given element is given by $p$. Dispersion parameters are given by $k$. The common library size resulting from total count normalization is given by $L$. The true activity measure of a given element is given by $a := \log(p_r/p_d)$.
Average estimator: The "average estimator" of a is an average of barcode-specific log activity measures and is written as:
The second-order Taylor expansion of the function
$$f(R_{b},D_b) = \log(R_{b} L/N_{r} + 1) - \log(D_{b} L/N_{d} + 1) $$
around the point $(\mathrm{E}[R_b], \mathrm{E}[D_b]) = (\mu_r, \mu_d)$ is:
$$\begin{aligned} \log& \left(\frac{R_{b} L/N_{r} +1}{D_{b} L/N_{d} +1} \right) \approx \\ & \log\left(\mu_{r} L/N_{r} +1 \right) - \log\left(\mu_{d} L/N_{d} +1 \right) \\ &+ \frac{L/N_{r}}{\mu_{r}L/N_{r}+1}(R_{b} - \mu_{r}) \\ &- \frac{L/N_{d}}{\mu_{d}L/N_{d}+1}(D_{b} - \mu_{d}) \\ &- \frac{(L/N_{r})^{2}}{2(\mu_{r}L/N_{r}+1)^{2}}(R_{b} - \mu_{r})^{2} \\ &+ \frac{(L/N_{d})^{2}}{2(\mu_{d}L/N_{d}+1)^{2}}(D_{b}-\mu_{d})^{2} \end{aligned} $$
We use the expansion above to approximate the expectation of the average estimator:
$$\begin{array}{*{20}l} \mathrm{E}\left[ \hat a^{AV} \right] & \approx \log\left(\frac{\mu_{r}L/N_{r}+1}{\mu_{d}L/N_{d}+1} \right) \\ &\quad + \frac{(L/N_{d})^{2} k_{d}\mu_{d}}{2(\mu_{d}L/N_{d}+1)^{2}} - \frac{(L/N_{r})^{2} k_{r}\mu_{r}}{2(\mu_{r}L/N_{r}+1)^{2}} \\ &\approx \log\left(\frac{p_{r}}{p_{d}} \right) + \frac{k_{d}}{2\mu_{d}} - \frac{k_{r}}{2\mu_{r}} \\ &= a + \frac{k_{d}}{2\mu_{d}} - \frac{k_{r}}{2\mu_{r}} \end{array} $$
We can also approximate the variance under the assumption that the barcode-specific log-ratios are uncorrelated:
$$\begin{array}{*{20}l} \text{Var}(\hat a^{AV}) &= \frac{1}{B} \text{Var} \left(\log\left(\frac{R_{b}L/N_{r}+1}{D_{b}L/N_{d}+1} \right) \right) \\ &\approx \frac{(L/N_{r})^{2} k_{r}\mu_{r}}{B(\mu_{r} L/N_{r} + 1)^{2}} + \frac{(L/N_{d})^{2} k_{d}\mu_{d}}{B(\mu_{d} L/N_{d} + 1)^{2}} \\ &\quad - \frac{2 (L/N_{r}) (L/N_{d}) \text{Cov}(R_{b}, D_{b})}{B(\mu_{r} L/N_{r} + 1)(\mu_{d} L/N_{d} + 1)} \end{array} $$
Aggregate estimator: The "aggregate estimator" of a first aggregates counts over barcodes and is written as:
$$\begin{array}{*{20}l} \hat{a}^{AGG} &= \log\left(\frac{1 + (L/N_{r})\sum\nolimits_{b=1}^{B} R_{b}}{1 + (L/N_{d})\sum\nolimits_{b=1}^{B} D_{b}} \right) \\ &= \log \left(\frac{1 + (L/N_{r})R^{AGG}}{1 + (L/N_{d})D^{AGG}} \right) \end{array} $$
The second-order Taylor expansion of the function
$$f(R^{AGG},D^{AGG}) = \log((L/N_{r})R^{AGG}+1) - \log((L/N_{d})D^{AGG}+1) $$
around the point $(\mathrm{E}[R^{AGG}], \mathrm{E}[D^{AGG}]) = (B\mu_r, B\mu_d)$ is:
$$\begin{aligned} \log & \left(\frac{1 + (L/N_{r})R^{AGG}}{1 + (L/N_{d})D^{AGG}} \right) \approx \\ & \log\left(B\mu_{r}L/N_{r} +1 \right) - \log\left(B\mu_{d}L/N_{d} +1 \right) \\ & + \frac{L/N_{r}}{B\mu_{r}L/N_{r} +1}(R^{AGG} - B\mu_{r}) \\ & - \frac{L/N_{d}}{B\mu_{d}L/N_{d} +1}(D^{AGG} - B\mu_{d}) \\ & - \frac{(L/N_{r})^{2}}{2(B\mu_{r}L/N_{r} +1)^{2}}(R^{AGG}- B\mu_{r})^{2} \\ & + \frac{(L/N_{d})^{2}}{2(B\mu_{d}L/N_{d} +1)^{2}}(D^{AGG}-B\mu_{d})^{2} \end{aligned} $$
We use the expansion above to approximate the expectation:
$$\begin{array}{*{20}l} \mathrm{E}\left[ \hat a^{AGG} \right] &\approx \log\left(\frac{B\mu_{r}L/N_{r}+1}{B\mu_{d}L/N_{d}+1} \right) \\ &\quad + \frac{B k_{d}\mu_{d} (L/N_{d})^{2}}{2(B\mu_{d}L/N_{d} +1)^{2}} \\ &\quad - \frac{B k_{r}\mu_{r} (L/N_{r})^{2}}{2(B\mu_{r}L/N_{r} +1)^{2}} \\ &\approx \log\left(\frac{p_{r}}{p_{d}} \right) + \frac{k_{d}}{2B\mu_{d}} - \frac{k_{r}}{2B\mu_{r}} \\ &= a + \frac{k_{d}}{2B\mu_{d}} - \frac{k_{r}}{2B\mu_{r}} \end{array} $$
We can also approximate the variance:
$$\begin{array}{*{20}l} \text{Var} & (\hat a^{AGG}) \approx \\ &\frac{(L/N_{r})^{2} B k_{r}\mu_{r}}{(B\mu_{r} L/N_{r} + 1)^{2}} + \frac{(L/N_{d})^{2} B k_{d}\mu_{d}}{(B\mu_{d} L/N_{d} + 1)^{2}} \\ &- \frac{2 (L/N_{r}) (L/N_{d}) \text{Cov}(R^{AGG}, D^{AGG})}{(B\mu_{r} L/N_{r} + 1)(B\mu_{d} L/N_{d} + 1)} \end{array} $$
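The quality of these second-order approximations can be checked numerically; the following Monte Carlo sketch simulates negative binomial counts under the stated mean-variance assumptions (all parameter values are illustrative) and compares the empirical biases of the two estimators, which should differ by roughly a factor of $B$.

```python
import numpy as np

rng = np.random.default_rng(1)
B, N, L = 10, 2e7, 2e7
pr, pd_ = 4e-6, 2e-6                         # so mu_r = 80, mu_d = 40
kr = kd = 2.0                                # dispersion constants k
a_true = np.log(pr / pd_)

def nb_draws(mu, k, size):
    """Negative binomial with mean mu and variance k*mu."""
    n = mu / (k - 1)                         # from var = mu + mu^2 / n
    return rng.negative_binomial(n=n, p=n / (n + mu), size=size)

reps = 100_000
R = nb_draws(N * pr, kr, (reps, B))
D = nb_draws(N * pd_, kd, (reps, B))
bias_av = np.mean(np.log((R * L / N + 1) / (D * L / N + 1))) - a_true
bias_agg = np.mean(np.log((1 + (L / N) * R.sum(axis=1))
                          / (1 + (L / N) * D.sum(axis=1)))) - a_true
# expect bias_agg close to bias_av / B, as derived above
print(bias_av, bias_agg, bias_av / B)
```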
Abbreviations
FET: Fisher's exact test
LCL: Lymphoblastoid cell line
MPRA: Massively parallel reporter assay
White MA. Understanding how cis-regulatory function is encoded in DNA sequence using massively parallel reporter assays and designed sequences. Genomics. 2015; 106:165–70. https://doi.org/10.1016/j.ygeno.2015.06.003.
Melnikov A, Zhang X, Rogov P, Wang L, Mikkelsen TS. Massively parallel reporter assays in cultured mammalian cells. J Vis Exp. 2014. https://doi.org/10.3791/51719.
Grossman SR, Zhang X, Wang L, Engreitz J, Melnikov A, Rogov P, Tewhey R, Isakova A, Deplancke B, Bernstein BE, Mikkelsen TS, Lander ES. Systematic dissection of genomic features determining transcription factor binding and enhancer function. PNAS. 2017; 114:1291–300. https://doi.org/10.1073/pnas.1621150114.
Maricque BB, Dougherty J, Cohen BA. A genome-integrated massively parallel reporter assay reveals DNA sequence determinants of cis-regulatory activity in neural cells. Nucleic Acids Res. 2017; 45:16. https://doi.org/10.1093/nar/gkw942.
Ernst J, Melnikov A, Zhang X, Wang L, Rogov P, Mikkelsen TS, Kellis M. Genome-scale high-resolution mapping of activating and repressive nucleotides in regulatory regions. Nat Biotechnol. 2016; 34:1180–90. https://doi.org/10.1038/nbt.3678.
White MA, Kwasnieski JC, Myers CA, Shen SQ, Corbo JC, Cohen BA. A simple grammar defines activating and repressing cis-regulatory elements in photoreceptors. Cell Rep. 2016; 5:1247–54. https://doi.org/10.1016/j.celrep.2016.09.066.
Farley EK, Olson KM, Zhang W, Brandt AJ, Rokhsar DS, Levine MS. Suboptimization of developmental enhancers. Science. 2015; 350:325–8. https://doi.org/10.1126/science.aac6948.
Kamps-Hughes N, Preston JL, Randel MA, Johnson EA. Genome-wide identification of hypoxia-induced enhancer regions. PeerJ. 2015; 3:1527. https://doi.org/10.7717/peerj.1527.
Mogno I, Kwasnieski JC, Cohen BA. Massively parallel synthetic promoter assays reveal the in vivo effects of binding site variants. Genome Res. 2013; 23:1908–15. https://doi.org/10.1101/gr.157891.113.
White MA, Myers CA, Corbo JC, Cohen BA. Massively parallel in vivo enhancer assay reveals that highly local features determine the cis-regulatory function of ChIP-seq peaks. PNAS. 2013; 110(29):11952–7. https://doi.org/10.1073/pnas.1307449110.
Patwardhan RP, Lee C, Litvin O, Young DL, Pe'er D, Shendure J. High-resolution analysis of DNA regulatory elements by synthetic saturation mutagenesis. Nat Biotechnol. 2009; 27:1173–5. https://doi.org/10.1038/nbt.1589.
Melnikov A, Murugan A, Zhang X, Tesileanu T, Wang L, Rogov P, Feizi S, Gnirke A, Callan CG Jr, Kinney JB, Kellis M, Lander ES, Mikkelsen TS. Systematic dissection and optimization of inducible enhancers in human cells using a massively parallel reporter assay. Nat Biotechnol. 2012; 30:271–7. https://doi.org/10.1038/nbt.2137.
Patwardhan RP, Hiatt JB, Witten DM, Kim MJ, Smith RP, May D, Lee C, Andrie JM, Lee S-I, Cooper GM, Ahituv N, Pennacchio LA, Shendure J. Massively parallel functional dissection of mammalian enhancers in vivo. Nat Biotechnol. 2012; 30:265–70. https://doi.org/10.1038/nbt.2136.
Kwasnieski JC, Mogno I, Myers CA, Corbo JC, Cohen BA. Complex effects of nucleotide variants in a mammalian cis-regulatory element. PNAS. 2012; 109:19498–503. https://doi.org/10.1073/pnas.1210678109.
Kheradpour P, Ernst J, Melnikov A, Rogov P, Wang L, Zhang X, Alston J, Mikkelsen TS, Kellis M. Systematic dissection of regulatory motifs in 2000 predicted human enhancers using a massively parallel reporter assay. Genome Res. 2013; 23:800–11. https://doi.org/10.1101/gr.144899.112.
Birnbaum RY, Patwardhan RP, Kim MJ, Findlay GM, Martin B, Zhao J, Bell RJA, Smith RP, Ku AA, Shendure J, Ahituv N. Systematic dissection of coding exons at single nucleotide resolution supports an additional role in cell-specific transcriptional regulation. PLoS Genet. 2014; 10:1004592. https://doi.org/10.1371/journal.pgen.1004592.
Zhao W, Pollack JL, Blagev DP, Zaitlen N, McManus MT, Erle DJ. Massively parallel functional annotation of 3' untranslated regions. Nat Biotechnol. 2014; 32:387–91. https://doi.org/10.1038/nbt.2851.
Ulirsch JC, Nandakumar SK, Wang L, Giani FC, Zhang X, Rogov P, Melnikov A, McDonel P, Do R, Mikkelsen TS, Sankaran VG. Systematic functional dissection of common genetic variation affecting red blood cell traits. Cell. 2016; 165:1530–45. https://doi.org/10.1016/j.cell.2016.04.048.
Tewhey R, Kotliar D, Park DS, Liu B, Winnicki S, Reilly SK, Andersen KG, Mikkelsen TS, Lander ES, Schaffner SF, Sabeti PC. Direct identification of hundreds of Expression-Modulating variants using a multiplexed reporter assay. Cell. 2016; 165(6):1519–29. https://doi.org/10.1016/j.cell.2016.04.027.
Vockley CM, Guo C, Majoros WH, Nodzenski M, Scholtens DM, Hayes MG, Lowe WL Jr, Reddy TE. Massively parallel quantification of the regulatory effects of noncoding genetic variation in a human cohort. Genome Res. 2015; 25:1206–14. https://doi.org/10.1101/gr.190090.115.
Inoue F, Kircher M, Martin B, Cooper GM, Witten DM, McManus MT, Ahituv N, Shendure J. A systematic comparison reveals substantial differences in chromosomal versus episomal encoding of enhancer activity. Genome Res. 2017; 27:38–52. https://doi.org/10.1101/gr.212092.116.
Shen SQ, Myers CA, Hughes AEO, Byrne LC, Flannery JG, Corbo JC. Massively parallel cis-regulatory analysis in the mammalian central nervous system. Genome Res. 2016; 26:238–55. https://doi.org/10.1101/gr.193789.115.
Kalita CA, Moyerbrailean GA, Brown C, Wen X, Luca F, Pique-Regi R. QuASAR-MPRA: Accurate allele-specific analysis for massively parallel reporter assays. Bioinformatics. 2017. https://doi.org/10.1093/bioinformatics/btx598.
Law CW, Chen Y, Shi W, Smyth GK. voom: Precision weights unlock linear model analysis tools for RNA-seq read counts. Genome Biol. 2014; 15:29. https://doi.org/10.1186/gb-2014-15-2-r29.
The mpra package. https://bioconductor.org/packages/mpra.
McCarthy DJ, Chen Y, Smyth GK. Differential expression analysis of multifactor RNA-Seq experiments with respect to biological variation. Nucleic Acids Res. 2012; 40:4288–97. https://doi.org/10.1093/nar/gks042.
Marioni JC, Mason CE, Mane SM, Stephens M, Gilad Y. RNA-seq: an assessment of technical reproducibility and comparison with gene expression arrays. Genome Res. 2008; 18:1509–17. https://doi.org/10.1101/gr.079558.108.
Bullard JH, Purdom E, Hansen KD, Dudoit S. Evaluation of statistical methods for normalization and differential expression in mRNA-Seq experiments. BMC Bioinformatics. 2010; 11:94. https://doi.org/10.1186/1471-2105-11-94.
Robinson MD, McCarthy DJ, Smyth GK. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010; 26:139–40. https://doi.org/10.1093/bioinformatics/btp616.
Love MI, Huber W, Anders S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 2014; 15:550. https://doi.org/10.1186/s13059-014-0550-8.
Phipson B. Empirical bayes modelling of expression profiles and their associations. 2013. PhD thesis, University of Melbourne.
Gisselbrecht SS, Barrera LA, Porsch M, Aboukhalil A, Estep PW 3rd, Vedenko A, Palagi A, Kim Y, Zhu X, Busser BW, Gamble CE, Iagovitina A, Singhania A, Michelson AM, Bulyk ML. Highly parallel assays of tissue-specific enhancers in whole drosophila embryos. Nat Methods. 2013; 10:774–80. https://doi.org/10.1038/nmeth.2558.
Smyth GK, Michaud J, Scott HS. Use of within-array replicate spots for assessing differential expression in microarray experiments. Bioinformatics. 2005; 21:2067–75. https://doi.org/10.1093/bioinformatics/bti270.
The authors would like to thank Michael Ziller for sharing his insights.
Research reported in this publication was supported by the National Cancer Institute and the National Institute of General Medical Sciences of the National Institutes of Health under award numbers U24CA180996 and R01GM121459. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Availability of data and material
The mpralm method described here is implemented in the mpra package for R on the Bioconductor repository [25].
All of the data used in this study are available on GEO. The Tewhey data is available under GSE75661, Inoue data under GSE83894, Melnikov data under GSE31982, Kheradpour data under GSE33367, Shen data under GSE68247, and Ulirsch data under GSE87711.
Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, 615 N. Wolfe St, E3527, Baltimore, MD 21212, USA
Kasper D. Hansen
McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins School of Medicine, Baltimore, USA
Dimitrios G. Avramopoulos, Loyal A. Goff & Kasper D. Hansen
Department of Neuroscience, Johns Hopkins School of Medicine, Baltimore, USA
Loyal A. Goff
Department of Mathematics, Statistics, and Computer Science, Macalester College, 1600 Grand Ave, Saint Paul, MN 55105, USA
Leslie Myint
Dimitrios G. Avramopoulos
LM and KDH developed the mpralm method and conceived the evaluations. LM carried out the analyses and evaluations. DA and LG provided guidance for handling allelic studies. All authors contributed to writing the manuscript. All authors read and approved the final manuscript.
Correspondence to Kasper D. Hansen.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Myint, L., Avramopoulos, D.G., Goff, L.A. et al. Linear models enable powerful differential activity analysis in massively parallel reporter assays. BMC Genomics 20, 209 (2019). https://doi.org/10.1186/s12864-019-5556-x
Extreme precipitation indices trend assessment over Thrace region, Turkey
Research Article - Hydrology
Sertac Oruc (ORCID: orcid.org/0000-0003-2906-0771) & Emrah Yalcin (ORCID: orcid.org/0000-0002-3742-8866)
Acta Geophysica (2021)
The frequency and the severity of extreme weather events are increasing globally and will continue to do so in the coming decades as a consequence of our changing climate. Understanding the characteristics of these events is crucial due to their significant negative impacts on social, physical and economic environments. In this study, 14 extreme rainfall indices are determined and examined in terms of trends and statistical characteristics for the four meteorological stations located in the Thrace region of Turkey, namely Edirne, Tekirdag, Kirklareli and Sariyer (Istanbul). The results indicate that annual total precipitation has an increasing trend for the Kirklareli and Sariyer stations (z = 1.730 and z = 2.127) and a decreasing trend for the Edirne and Tekirdag stations (z = − 0.368 and z = − 0.401). However, the precipitation intensity indices (SDII) of all stations show increasing trends that are statistically significant for the Edirne and Kirklareli stations. The Kirklareli station tends to have more days with heavy, very heavy and extremely heavy rainfall events (z = 2.241, z = 2.076 and z = 1.684, respectively). It is also anticipated that the maximum rainfall amounts at daily and consecutive five- and ten-day time scales will probably increase at all stations. Moreover, the indices for rainfall from very wet and extremely wet days, and for the fraction of total wet-day rainfall that comes from very wet and extremely wet days, also show increasing trend tendencies for all stations. The remarkable point is the decreasing total precipitation trend at the Edirne and Tekirdag stations, contrary to the Kirklareli and Sariyer stations, which indicates that annual total precipitation does not necessarily track extreme precipitation over the analyzed period.
Climate change projections show that extreme weather events will likely occur more frequently and intensely under future climates due to global warming (IPCC 2012, 2013, 2014a, 2014b). Although describing the processes behind extreme events is not easy, studies agree that climate change dominates and will dominate the severity and intensity of extreme events such as extreme precipitation (Keggenhoff et al. 2014; Myhre et al. 2019). Myhre et al. (2019) also showed that surface warming might double and even triple extreme precipitation events. Changes in the frequency and intensity of extreme rainfall can cause remarkable problems for human lives, agricultural production, economy and infrastructure design and management (Toros 2012; Yazid and Humphries 2015). It is expected that many sectors will be affected by the extreme precipitation events (Shiferaw et al. 2018). Hence, it is crucial to investigate the changes in extreme events and to quantify these changes coherently.
In this context, extreme precipitation has become the subject of interest in many studies. On the other hand, there was no consensus in terms of definition, calculation and methodology for the extreme events, particularly extreme indices (Abbasnia and Toros 2020). To ensure a uniform perspective in analyzing weather and climate extremes, expert team on climate change detection and indices (ETCCDI) has defined a core set of extreme indices (Alexander et al. 2006; Klein Tank et al. 2009; Zhang et al. 2011). In addition, the expert team on sector-specific climate indices (ET-SCI) has developed a number of climate indices for use in various sector applications such as water, agriculture or health (Alexander et al. 2013; Alexander and Harold 2016). The main purpose of all these indices is to measure and monitor the climate change and its variability in a uniform way.
Extreme precipitation indices have been the subject of interest for historical and future periods by several researchers in the last decade to gain a better understanding of their behavior. Attogouinon et al. (2017) studied 11 precipitation indices for the upper Oueme River valley in Benin and did not find any significant trend. Harpa et al. (2019) analyzed future changes for five extreme precipitation indices in Romania and detected increases in heavy precipitation days, very heavy precipitation days and annual total wet day precipitation compared with the reference period. Keggenhoff et al. (2014) presented an increase in extreme precipitation over Georgia, but they stated that all trends manifest a low spatial coherence. Türkeş (2012) showed that there is an increasing trend in annual total precipitation over the Tekirdag and Istanbul provinces in the Thrace region together with northern and eastern parts of the Black Sea region and the Central and Eastern Anatolia regions of Turkey. Yilmaz (2015) examined the Antalya province of Turkey and indicated that the obtained results show that the region has the potential to face more intense rainfall events in the future. Sensoy et al. (2019) analyzed the sectoral climate indices for the Istanbul province of Turkey and noted the increasing risk on agriculture and water resources based on the indices and anthropogenic activities. Nigussie and Altunkaynak (2019) used both observed and projected rainfall series to analyze the effect of climate change on extreme rainfall events for the Olimpiyat meteorological station near the Ayamama watershed in Istanbul. Their results present significant differences between observed and future data-driven indices and highlight an increase in the flood risk for the region under the RCP8.5 scenario. Oruç (2020) studied the potential impacts of climate change on extreme precipitation for the Tekirdağ station and found an increasing extreme daily rainfall magnitude for the Tekirdağ province. Abbasnia and Toros (2020) examined extreme temperature and precipitation indices for 71 meteorological stations across the coastal and non-coastal areas of Turkey. Their results reveal decreasing trends in the number of precipitation days and the volume of precipitation for the large majority of the stations. Tokgöz and Partal (2020) analyzed the annual precipitation and temperature data of the Black Sea region and indicated a generally increasing trend for annual precipitation and temperature in the region. Köyceğiz and Büyükyıldız (2019) investigated the temporal variability of extreme precipitation in the Konya closed basin and found insignificant increasing and decreasing trends. In another study of Abbasnia and Toros (2018) covering the Marmara region of Turkey, while decreasing trends were observed in consecutive wet days for the Canakkale, Kocaeli and Sariyer meteorological stations, increasing trends were detected for the Yalova, Edirne, Bursa and Tekirdag stations. Moreover, maximum 1 day and 5 day precipitation amount indices presented significant increases for the Sariyer, Tekirdag and Edirne stations.
Knowledge about hydro-climatologic trends and diagnosis of extreme precipitation patterns are particularly important because underestimating these events can cause remarkable social and physical damages; on the other hand, overestimated extremes may lead to increases in infrastructure investment costs. Moreover, it is crucial for decisionmakers to have all the necessary knowledge in order to implement appropriate climate change strategies and policies. However, spatial coherence is rarely achieved when extreme precipitation is the subject; therefore, regional- and local-scale investigations of extreme precipitation characteristics have gained importance.
The selected area for this study is the Thrace region of Turkey, representing the European side of the country (Fig. 1). This region plays an important role in the Turkish economy due to its dense population, industrial activities and agricultural production. Because of these economic activities, the region has been exposed to anthropogenic impacts in addition to natural effects. Moreover, with increasing pollution due to the intense industrial activities, managing the domestic and irrigational water demand has become more crucial for the region (Şaylan et al. 2011; Bagdatli and Belliturk 2016).
Map of Thrace region and location of the meteorological stations
The objectives of this study are to compute the values of 14 specific ET-SCI extreme rainfall indices using ClimPACT2 software for the Edirne, Kirklareli, Tekirdag and Sariyer meteorological stations located in the Thrace region and to analyze trend presence and relations for these indices in the historical period on an annual and monthly basis. Although there are studies in the literature regarding extreme indices of the Thrace region (e.g., Sırdaş and Şen 2003; Acar et al. 2018; Abbasnia and Toros 2018), this study differs from the earlier studies by dealing with more sectoral precipitation indices, analyzing these indices not only annually but also at a monthly temporal scale and investigating the correlations of indices among the stations.
Study area and data
The Thrace region lies in the northwestern part of Turkey (Fig. 1). In the north and west, Thrace has borders with Greece and Bulgaria. Kirklareli, Edirne, Tekirdag and part of Istanbul and Canakkale provinces are located in the Thrace region. The Black Sea and Balkan effects over the Mediterranean climate lead to cold winters and warm summers in Thrace (Sırdaş and Şen 2003).
The daily historical precipitation records of several meteorological stations in the region are obtained from the Turkish State Meteorological Service. The Edirne (17050), Kirklareli (17052), Tekirdag (17056) and Sariyer (17061) stations are selected to be used for the historical period calculations due to their sufficiently long observation periods that allow the trend assessment of extreme precipitation indices over the region (MGM 2020). These stations are shown in Fig. 1 and are detailed in Table 1.
Table 1 Characteristics of the meteorological stations (MGM 2020)
Trend tests
The Mann–Kendall (MK) trend test (Mann 1945; Kendall 1975) is a nonparametric rank-based test for identifying statistically significant trends in time series data (Gilbert 1987). It is among the most widely used nonparametric trend tests (e.g., Keggenhoff et al. 2014; Nigussie and Altunkaynak 2019; Wang et al. 2019; Militino et al. 2020). In this study, the MK trend test is used to detect if there is a monotonic upward or downward trend over time for the indices.
In the MK test, the null hypothesis (H0) assumes that there is no trend and the alternate hypothesis (H1) implies an increasing or decreasing trend over time. The mathematical equations for calculating MK test statistics S, variance of statistics V (S) and standardized test statistics (Z) are given as follows:
$$S = \sum\limits_{i = 1}^{n - 1} \sum\limits_{j = i + 1}^{n} \text{sgn}(x_{j} - x_{i})$$

$$\text{sgn}\left(x_{j} - x_{i}\right) = \begin{cases} +1 & \text{if } (x_{j} - x_{i}) > 0 \\ 0 & \text{if } (x_{j} - x_{i}) = 0 \\ -1 & \text{if } (x_{j} - x_{i}) < 0 \end{cases}$$

$$V(S) = \frac{1}{18}\left[n(n - 1)(2n + 5) - \sum\limits_{p = 1}^{m} t_{p}(t_{p} - 1)(2t_{p} + 5)\right]$$

$$Z = \begin{cases} \dfrac{S - 1}{\sqrt{V(S)}} & \text{if } S > 0 \\ 0 & \text{if } S = 0 \\ \dfrac{S + 1}{\sqrt{V(S)}} & \text{if } S < 0 \end{cases}$$
In these equations, $x_i$ and $x_j$ are the values of the sequence at positions $i$ and $j$; $n$ is the length of the time series; $t_p$ is the number of ties for the $p$th value; and $m$ is the number of tied values. A positive $Z$ value indicates an upward trend, while a negative $Z$ value indicates a downward trend in the time series (Mann 1945; Kendall 1975; Ahmad et al. 2015; Chen et al. 2016).
It is known that serial correlation can affect the time series. Modified versions of the Mann–Kendall (M-MK) test are often used to justify original MK test results or to eliminate the influence of autocorrelation. One of these alternatives is the variance correction approach proposed by Hamed and Rao (1998). While this approach uses only the significant lags of the autocorrelation coefficients, Rao et al. (2003) suggested using the first three autocorrelation coefficients. In this study, the variance correction approach is considered with the first three lags as an alternative to the MK test. The details of this approach can be found in Hamed and Rao (1998), Rao et al. (2003), Patakamuri and O'Brien (2020) and Patakamuri et al. (2020).
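For illustration, a simplified Python sketch of the MK statistic and a first-three-lags variance correction in the spirit of Hamed and Rao (1998) and Rao et al. (2003) is given below; it omits the tie correction in $V(S)$ and is not a substitute for the cited implementations.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall_z(x):
    """Standardized Mann-Kendall statistic (tie correction omitted)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1)
            for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / np.sqrt(var_s)
    if s < 0:
        return (s + 1) / np.sqrt(var_s)
    return 0.0

def corrected_z(x, lags=3):
    """Variance-corrected z using the first `lags` rank autocorrelations."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1.0
    rc = ranks - ranks.mean()
    rho = [np.sum(rc[:-k] * rc[k:]) / np.sum(rc * rc)
           for k in range(1, lags + 1)]
    cf = 1 + (2.0 / (n * (n - 1) * (n - 2))) * sum(
        (n - k) * (n - k - 1) * (n - k - 2) * rho[k - 1]
        for k in range(1, lags + 1))
    return mann_kendall_z(x) / np.sqrt(max(cf, 1e-8))

# two-sided p-value at the 0.05 significance level used in this study:
# p = 2 * norm.sf(abs(corrected_z(series)))
```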
Expert team on sector-specific climate indices (ET-SCI)
The sector-specific climate indices defined by the WMO expert team (ET-SCI) have been widely used to identify variability and trends in climate and the sensitivity of various sectors to these potential variations (Herold et al. 2018; Junk et al. 2019). In this study, ClimPACT2, a downloadable R-software package based on the RClimDex software developed by ET-CCDI, is used to calculate the sector-specific climate indices (Zhang and Yang 2004; Alexander and Herold 2016).
The period from 1961 to 1990 is suggested by WMO as a standard reference period for long-term climate change assessments (WMO 2017). Hence, while threshold-based indices are computed over the baseline period 1961–1990 for the Edirne, Tekirdag and Sariyer stations, the baseline period for the Kirklareli station is taken as 1981–1990 due to the lack of rainfall measurements.
The quality control of the rainfall time series is performed using the quality control functions of ClimPACT2 software and the R-based RHtests_dlyPrcp software (Wang and Feng 2013). According to the RHtests_dlyPrcp results, while the Edirne, Kirklareli and Sariyer stations show no change points, a change point is detected for the Tekirdag station. This problem is solved by revising the start of the observation period to 1955 for the Tekirdag station. After that, negative values, missing values and outliers are checked manually.
After completing the required checks, ClimPACT2 is run to calculate extreme precipitation indices, listed in Table 2, using the daily rainfall time series. Trend analysis is conducted at annual scale on each of the 14 rainfall-based indices. In addition, the trends in PRCPTOT, CWD, R10mm, R20mm, R30mm, Rx1day, Rx5day and Rx10day indices are analyzed at monthly timescale.
Table 2 Explanations of the expert team on sector-specific climate indices (ET-SCI) (Alexander and Herold 2016)
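To indicate how such indices are derived from daily records, the following Python sketch computes a handful of the Table 2 indices per year; the definitions paraphrase the ET-SCI descriptions, and ClimPACT2 remains the authoritative implementation.

```python
import pandas as pd

def annual_indices(daily: pd.Series, base_q95: float) -> pd.DataFrame:
    """Compute a few Table 2 indices per year from a daily rainfall
    series in mm with a DatetimeIndex. `base_q95` is the 95th percentile
    of wet days (>= 1 mm) over the chosen baseline period."""
    out = {}
    for year, x in daily.groupby(daily.index.year):
        wet = x[x >= 1.0]
        out[year] = {
            "PRCPTOT": wet.sum(),                      # total wet-day rainfall
            "SDII": wet.mean() if len(wet) else 0.0,   # intensity index
            "R10mm": int((x >= 10.0).sum()),           # heavy rainfall days
            "Rx1day": x.max(),                         # max 1-day rainfall
            "Rx5day": x.rolling(5).sum().max(),        # max 5-day rainfall
            "R95p": wet[wet > base_q95].sum(),         # very wet day rainfall
        }
    return pd.DataFrame(out).T

# baseline 95th percentile, e.g. over 1961-1990:
# base = daily.loc["1961":"1990"]
# q95 = base[base >= 1.0].quantile(0.95)
```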
Index statistical inference and trends on an annual basis
Fourteen extreme rainfall indices are calculated using the daily rainfall series recorded in the concerned observation periods at the Edirne, Kirklareli, Tekirdag and Sariyer meteorological stations. Basic statistical characteristics of these indices are presented in Tables 3, 4, 5 and 6 for the Edirne, Kirklareli, Tekirdag and Sariyer stations, respectively. The Sariyer station has the highest mean and standard deviation values of PRCPTOT, and the mean value of CDD for this station is the smallest one among the four stations. The Sariyer station has the highest mean and standard deviation values of the maximum 1-, 5- and 10-day precipitation indices (i.e., Rx1day, Rx5day and Rx10day). Considering the percentile indices (i.e., R95p, R99p, R95pTot and R99pTot), while the Kirklareli station has the smallest mean values of R95p and R99p, the Sariyer station has the smallest mean values of R95ptot and R99ptot. Moreover, the Rx1day, Rx5day and R99p indices of the Edirne station, Rx1day index of the Tekirdag station and CWD, Rx5day and Rx10day indices of the Sariyer station indicate the presence of extreme values.
Table 3 Basic statistical parameters for the Edirne station
Table 4 Basic statistical parameters for the Kirklareli station
Table 5 Basic statistical parameters for the Tekirdag station
Table 6 Basic statistical parameters for the Sariyer station
Furthermore, the CDD, CWD, Rx1day, Rx5day, Rx10day and R99p indices are positively, and in some cases highly positively, skewed at all stations. Considering the kurtosis values, the presence of outliers is more remarkable for Rx1day, Rx5day and R99p for the Edirne station, Rx1day for the Tekirdag station and CWD, Rx5day and Rx10day for the Sariyer station, which indicates heavy tails for these indices.
MK and M-MK tests are used for the trend detection of extreme rainfall indices considered in this study. Significance level is accepted as 0.05 in these analyses and the results of the MK and M-MK tests are presented in Tables 7 and 8, respectively. In these tables, the numbers presented in bold show significant trends at 0.05 significance level.
Table 7 Mann–Kendall (MK) test: z-statistics and p values of annual rainfall indices
Table 8 Modified Mann–Kendall (M-MK) test: corrected z-statistics and new p values of annual rainfall indices
The results in Table 7 show that, in general, there are increasing trends in most of the indices. On the other hand, the PRCPTOT index of the Edirne station, the PRCPTOT, CDD and R10mm indices of the Tekirdag station and the CDD index of the Sariyer station show decreasing trends. Toros (2012) found a general decrease in the annual total precipitation during the last decades for individual stations of Turkey and stated that while 34% of the stations have negative trends with only 12% of them at a significant level, 22% of the stations have positive trends with only 4% of them at a significant level. There is no negative trend detected for the Kirklareli station, although Gönençgıl (2012) found that the meteorological stations of the Thrace region are dominated by a decreasing trend in annual precipitation amounts which started in the year 1975 and gradually became evident during the 1990s. The detected trends for the Edirne and Kirklareli stations are not statistically significant, except for SDII, R95p and R95ptot for the Edirne station and SDII, R10mm and R20mm for the Kirklareli station. Considering the Sariyer station, 50% of the indices (i.e., 7 of 14) show statistically significant increasing trend tendencies. While all significant trends are positive for the Edirne, Kirklareli and Sariyer stations, there is no significant trend (neither negative nor positive) at the 0.05 significance level for the Tekirdag station on an annual basis.
Rainfall is one of the most important components of water resource management for decisionmaking and planning in a region with an agriculture-based economy, and the increasing trend of CDD for the Edirne and Kirklareli stations needs attention in this context. Especially the increasing trend and magnitude of CDD combined with decreasing total precipitation at the Edirne station is worth focusing on as a signal of possible water scarcity in the near future. On the other hand, CWD indices also show increasing trends for all stations, but none of these trends are significant.
While annual total precipitation (i.e., PRCPTOT) shows a decreasing trend for the Edirne and Tekirdag stations, increasing trends are detected for the Kirklareli and Sariyer stations. Although negative trends are detected for the total precipitation indices of the Edirne and Tekirdag stations, the SDII indices of these stations show positive trends, with a significant one for the Edirne station. Sensoy et al. (2013) investigated the extreme climate indices in Turkey for 109 stations over the period from 1960 to 2010 and found that heavy precipitation days increase for most of the stations, except the ones in the Aegean and southeastern Anatolia regions. Furthermore, in most of the stations, maximum 1-day precipitation followed an increasing trend, apart from southeastern Anatolia. Regarding the R10mm, R20mm and R30mm indices, a positive trend is found for most of the stations, supporting the results of other studies, while the R10mm index has a negative trend for the Tekirdag station. In addition, the Rx1day, Rx5day and Rx10day indices show an increasing trend during the study period for all stations. Abbasnia and Toros (2018) also revealed significant increasing trends of the Rx1day and Rx5day indices for the Edirne and Tekirdag stations, which support the results of this study.
Regarding the percentile indices, while the R95p, R99p, R95pTot and R99pTot values of all four stations show an increasing trend, the R95p and R95pTot indices of the Edirne station and all percentile indices of the Sariyer station have significant positive trends. R95pTot and R99pTot represent the percentage of annual precipitation that comes from very wet days and extremely wet days, respectively. Hence, these indices can be used to investigate the possibility that there may have been a greater change in extreme precipitation events than in the total amount. Accordingly, for the Kirklareli and Sariyer stations, it can be expected that extremes are going to change more rapidly when the increasing total precipitation and very wet days/extremely wet days indices are considered. However, for the Edirne and Tekirdag stations, the decreasing trend of total precipitation indicates that the very wet days and extremely wet days are less affected by the trend in the total precipitation. Abbasnia and Toros (2020) examined the extreme temperature and precipitation indices for 71 stations across Turkey from 1961 to 2016, covering the Thrace region and also the R95p and R99p indices. When their findings are compared with the present results, differences are seen in terms of trend tendency and significance, which could be due to the duration of data, the chosen baseline period and the chosen significance level.
Furthermore, for the Kirklareli station, both the consecutive dry days (i.e., CDD) and annual total precipitation (i.e., PRCPTOT) indices show positive trends, with a higher magnitude for total precipitation. This can be interpreted as follows: since CWD remains relatively stable while annual precipitation increases, daily rainfall events may exhibit an increase in the frequency and/or intensity of heavy rain.
The M-MK tests are utilized to justify the results of the MK tests, as stated before. The values in Table 8 show that the results of the M-MK tests are, in general, consistent with those of the MK tests in the direction and significance of trend. However, the R20mm value of the Edirne station, the R30mm, Rx1day and R99p values of the Tekirdag station and the R10mm value of the Sariyer station are corrected by the results of the M-MK tests. Considering these modifications, the corrected R20mm value of the Edirne station indicates a significant increasing trend, while the MK test results yield an insignificant trend for this index. The rest of the results show consistent z-statistics and p values for the indices.
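The paper cites the modifiedmk R package (Patakamuri and O'Brien 2020) for these tests; purely as an illustration, the Python sketch below applies the classical MK test and the Hamed–Rao modified MK test to a synthetic annual index series using the pymannkendall package.

import numpy as np
import pymannkendall as mk  # pip install pymannkendall

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.1, 1.0, 50))  # toy annual index series with a drift

original = mk.original_test(series, alpha=0.05)                # classical MK test
modified = mk.hamed_rao_modification_test(series, alpha=0.05)  # M-MK (Hamed and Rao 1998)

# Compare direction, z-statistics and p values, as in Tables 7 and 8
print(original.trend, round(original.z, 2), round(original.p, 4))
print(modified.trend, round(modified.z, 2), round(modified.p, 4))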
The relationships between the annual rainfall indices are calculated in a 14 × 14 matrix for the Edirne, Kirklareli, Tekirdag and Sariyer meteorological stations, as presented in Tables 9, 10, 11 and 12, respectively. The results show that the maximum 1-, 5- and 10-day precipitation indices (i.e., Rx1day, Rx5day and Rx10day) have better relationships with the percentile indices (i.e., R95p, R99p, R95pTot and R99pTot). The results also reveal that the threshold indices (i.e., R10mm, R20mm and R30mm) have better relationships with the SDII and PRCPTOT indices, with PRCPTOT showing a stronger relationship than SDII. Moreover, the CDD and CWD indices have the worst correlations with the other indices for all stations. In addition, fairly good correlations are observed between the R30mm, R95p and R95pTot indices and between the Rx1day, R99p and R99pTot indices. Although both positive and negative correlations are detected among the annual values of the rainfall indices, the positive relationships exhibit stronger correlations than the negative ones.
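The paper does not state the correlation estimator; assuming Pearson correlations between the annual series of each index pair, a matrix like Tables 9–12 can be produced with pandas. The DataFrame below is a synthetic stand-in for one station's annual index series.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
years = range(1981, 2017)  # 36 years
indices = pd.DataFrame(
    {name: rng.gamma(2.0, 10.0, 36) for name in
     ["PRCPTOT", "SDII", "CDD", "CWD", "R10mm", "R95p"]},
    index=years,
)  # in the study this would hold all 14 indices per year

corr_matrix = indices.corr(method="pearson")  # analogous to a 14 x 14 matrix
print(corr_matrix.round(2))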
Table 9 Correlation matrix of annual rainfall indices for the Edirne station
Table 10 Correlation matrix of annual rainfall indices for the Kirklareli station
Table 11 Correlation matrix of annual rainfall indices for the Tekirdag station
Table 12 Correlation matrix of annual rainfall indices for the Sariyer station
Index statistical inference and trends on a monthly basis
In order to detect the changes in extreme precipitation at a shorter time scale, the monthly trends of the PRCPTOT, CWD, R10mm, R20mm, R30mm, Rx1day, Rx5day and Rx10day indices are computed using the MK and M-MK tests for all stations. The results of the MK and M-MK tests are presented in Tables 13, 14, 15 and 16 and Tables 17, 18, 19 and 20 for the Edirne, Kirklareli, Tekirdag and Sariyer stations, respectively. In these tables, the numbers presented in bold show significant trends at the 0.05 significance level.
Table 13 Monthly Mann–Kendall (MK) test z-statistics for the Edirne station
Table 14 Monthly Mann–Kendall (MK) test z-statistics for the Kirklareli station
Table 15 Monthly Mann–Kendall (MK) test z-statistics for the Tekirdag station
Table 16 Monthly Mann–Kendall (MK) test z-statistics for the Sariyer station
Table 17 Monthly modified Mann–Kendall (M-MK) test z-statistics for the Edirne station
Table 18 Monthly modified Mann–Kendall (M-MK) test z-statistics for the Kirklareli station
Table 19 Monthly modified Mann–Kendall (M-MK) test z-statistics for the Tekirdag station
Table 20 Monthly modified Mann–Kendall (M-MK) test z-statistics for the Sariyer station
At the monthly scale, the PRCPTOT index of the Kirklareli station shows a significant increasing trend in January, September and October. In addition, the CWD index of the Edirne station significantly decreases in February. While there is no significant negative or positive trend in the R10mm, R20mm and R30mm indices of the Edirne station, these indices show significant increasing trends for the Kirklareli station (e.g., R10mm and R20mm in September and R30mm in February). On the other hand, the R20mm and R30mm indices of the Tekirdag station indicate significant negative trends in November. The maximum 1-, 5- and 10-day precipitation indices (i.e., Rx1day, Rx5day and Rx10day) of the Kirklareli station increase significantly in January, September and October. For the Sariyer station, the monthly MK test z-statistics show no significant decreasing trend for any of the indices, while significantly increasing trends are observed for PRCPTOT in October, CWD in November, R30mm in February and Rx5day and Rx10day in October.
While both increasing and decreasing trends are detected at the monthly scale, some months are dominated by solely negative or positive trends. For the Edirne station, the indices in April, May, June and November show mostly decreasing trends, and increasing trends are observed for the rest of the months. It is worth noting that these trends are nonsignificant except for the CWD index in February, which has a significant decreasing trend. Considering the Kirklareli station, mostly negative trends are observed in November and December. The negative trends of the monthly indices are not significant; however, positive trends with significantly high values exist, especially in January, September and October. For the Tekirdag station, decreasing trends dominate in January, March, June, July, November and December, and positive trends are observed in February, September and October. The only month with indices having significant trends, namely R20mm, R30mm and Rx1day, is November for the Tekirdag station. For the Sariyer station, negative trends are generally dominant for the indices in January, April, July and December, while increasing trend tendencies are observed in the rest of the months for most of the indices.
When the results of the monthly M-MK and MK tests are compared, it is observed that the z-statistics of the M-MK tests show variations in terms of significance for some months and indices. Considering the M-MK test results, the R20mm index of the Edirne station has a significant increasing trend in September, while the MK test results reveal an insignificant trend for this index. The Rx10day and Rx1day indices of the Kirklareli station approach significance in September and November, respectively, according to the M-MK test results. Moreover, the magnitude of the Rx10day index of this station increases considerably in October. Although there are some differences between the MK and M-MK test results of the Tekirdag station for the PRCPTOT, R10mm and Rx1day indices of January and the Rx1day and Rx5day indices of October and December, these differences do not change the significance or direction of trends. The monthly M-MK test results also exhibit differences from those of the MK tests for the Sariyer station, but the only significant difference is detected in the R30mm index of September.
The relationships between the four stations in terms of the monthly PRCPTOT, CWD, R10mm, R20mm, R30mm, Rx1day, Rx5day and Rx10day indices are analyzed, as presented in Table 21. To obtain consistent comparisons, the period from 1981 to 2016 is considered for all stations. The results show that the indices of the Edirne and Kirklareli stations are better correlated than the indices of the other stations on a monthly basis. January, February, September and October are the months in which the indices of the Edirne and Kirklareli stations exhibit fair correlations. The PRCPTOT indices of these stations also show better correlations in most of the months, except July and August. The indices of the Kirklareli and Tekirdag stations, except CWD and R20mm, show better correlations in September than in other months. Regarding the Tekirdag and Edirne stations, September, October and December are the months in which the PRCPTOT, CWD and R10mm indices show closer relationships. The Sariyer station has a better relationship with the Tekirdag station, which may be due to their similar coastal locations.
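A minimal sketch of such an inter-station comparison, assuming each station's monthly index values are held in a pandas Series with a DatetimeIndex; the Pearson estimator and the argument names are assumptions, not taken from the paper.

import pandas as pd

def monthly_station_correlation(a, b, start="1981", end="2016"):
    # Restrict both stations to the common 1981-2016 period
    a, b = a.loc[start:end], b.loc[start:end]
    df = pd.concat({"a": a, "b": b}, axis=1).dropna()
    # One correlation per calendar month (1-12), analogous to Table 21
    return df.groupby(df.index.month).apply(lambda g: g["a"].corr(g["b"]))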
Table 21 Monthly correlations of indices for the Edirne (E)-Kirklareli (K), Edirne (E)-Tekirdag (T), Kirklareli (K)-Tekirdag (T), Edirne (E)-Sariyer (S), Kirklareli (K)-Sariyer (S) and Tekirdag (T)-Sariyer (S) stations
This study assesses the 14 sector-specific ET-SCI precipitation indices in terms of statistical characteristics, trends using the nonparametric MK and M-MK tests and correlations between each other at both annual and monthly temporal scales for the four meteorological stations located in the Thrace region of Turkey, namely Edirne, Tekirdag, Kirklareli and Sariyer. Within this scope, the basic statistical characteristics of the extreme rainfall indices are determined at first, and the calculated values indicate significant differences between most of the extreme indices. The trend tendencies of the extreme rainfall indices are detected using the nonparametric MK and M-MK tests on annual and monthly bases. Most of the indices do not have significant trends at most of the stations, and the insignificant trends are mostly positive.
At the annual scale, total precipitation shows an increasing trend for the Kirklareli and Sariyer stations and a decreasing trend for the Edirne and Tekirdag stations. However, all of the precipitation intensity indices (SDII) show increasing trends, which are significant for the Edirne and Kirklareli stations. Decreasing total precipitation and increasing daily intensity trends might be a signal of a decrease in the wet days for the Edirne station. This situation is also supported by the increase in the consecutive dry days and, hence, could be considered as a warning both for increasing intense precipitation and for drought conditions. The Kirklareli station tends to have more days with heavy, very heavy and extremely heavy rainfall events. It is also anticipated that maximum rainfall amounts at daily and consecutive five- and ten-day time scales will probably increase at all stations. Moreover, the rainfall from very wet days and extremely wet days and the fraction of total wet day rainfall that comes from very wet days and extremely wet days also show increasing trend tendencies for all stations, especially Sariyer. Above all, the remarkable point that has to be emphasized is the decreasing total precipitation trend at the Edirne and Tekirdag stations, indicating that the annual total precipitation does not necessarily depend on extreme precipitation for the analyzed period. When the relationships between the annual rainfall indices are analyzed for each station separately, it is observed that the maximum 1-, 5- and 10-day precipitation indices seem to be correlated with the percentile indices. The correlation analyses also reveal that the threshold indices have better relationships with the SDII and PRCPTOT indices, which is more evident at the Kirklareli and Sariyer stations, which have the same trend direction for the total precipitation and threshold indices. In addition, while the CDD and CWD indices have the worst correlations with the other indices for all stations, fairly good correlations are observed between the R30mm, R95p and R95pTot indices and between the Rx1day, R99p and R99pTot indices. Moreover, mixed positive and negative trends are seen only for the total precipitation and consecutive dry days indices, while the rest of the indices show positive trends, whether significant or not.
At the monthly scale, the PRCPTOT index shows a significant increasing trend for the Kirklareli station in January, September and October and for the Sariyer station in October. The CWD index of the Edirne station significantly decreases in February, while a significant increase is observed for this index of the Sariyer station in November. The threshold indices of the Kirklareli station have significant increasing trends, as in September for R10mm and R20mm and in February for R30mm. In addition, while the R30mm index of the Sariyer station shows an increasing trend in February and September, the R20mm and R30mm indices of the Tekirdag station indicate significant negative trends in November. The Rx1day, Rx5day and Rx10day indices of the Kirklareli station increase significantly in September and October, and the Rx5day and Rx10day indices of the Sariyer station have significant increasing trends in October. Moreover, according to the correlation analyses performed among the four stations at the monthly time scale, there are considerable correlations between the indices of the Edirne and Kirklareli stations, especially for total precipitation.
Annual and monthly extreme precipitation trends are important indicators of what might happen in the near future for sectors such as agriculture, infrastructure and water supply. Potential changes in precipitation characteristics have a direct impact on water availability and surface runoff and, hence, on the above-mentioned sectors. Regarding the agriculture sector, crop type, irrigation pattern and even insurance rates are determined according to the characteristics of extreme events. The Thrace region is one of the agricultural basins of Turkey, so impacts on product variety and the growing season directly affect the regional and national economy. This study provides a quantitative basis with the detection of annual and monthly trends of extreme precipitation indices and demonstrates the basic relationship patterns of these indices for the Thrace region. The results of this study highlight not only the extreme precipitation but also the significant differences between the extreme precipitation indices within the region.
Abbasnia M, Toros H (2018) Analysis of long-term changes in extreme climatic indices: a case study of the Mediterranean climate, Marmara Region. Turkey Pure Appl Geophys 175(11):3861–3873. https://doi.org/10.1007/s00024-018-1888-8
Abbasnia M, Toros H (2020) Trend analysis of weather extremes across the coastal and non-coastal areas (case study: Turkey). J Earth Syst Sci 129:95. https://doi.org/10.1007/s12040-020-1359-3
Acar Z, Gonencgil B, Korucu Gümüşoğlu N (2018) Long-term changes in hot and cold extremes in Turkey. Cografya Dergisi 37:57–67
Ahmad I, Tang D, Wang T, Wang M, Wagan B (2015) Precipitation trends over time using Mann-Kendall and Spearman's rho tests in Swat River basin. Pakistan Adv Meteorol 2015:431860. https://doi.org/10.1155/2015/431860
Alexander L, Herold N (2016) ClimPACT2: indices and software. A document prepared on behalf of the Commission for Climatology (CCl) Expert Team on Sector-specific Climate Indices (ET-SCI)
Alexander LV, Zhang X, Peterson TC, Caesar J, Gleason B, Klein Tank AMG, Haylock M, Collins D, Trewin B, Rahimzadeh F, Tagipour A, Rupa Kumar K, Revadekar J, Griffiths G, Vincent L, Stephenson DB, Burn J, Aguilar E, Brunet M, Taylor M, New M, Zhai P, Rusticucci P, Vazquez-Aguirre JL (2006) Global observed changes in daily climate extremes of temperature and precipitation. J Geophys Res 111:D05109. https://doi.org/10.1029/2005JD006290
Alexander L, Yang H, Perkins S (2013) ClimPACT: indices and software. A document prepared on behalf of the Commission for Climatology (CCl) Expert Team on Climate Risk and Sector-specific Climate Indices (ET-CRSCI)
Attogouinon A, Lawin AE, M'Po YN, Houngue R (2017) Extreme precipitation indices trend assessment over the upper Oueme River valley-(Benin). Hydrology 4(3):36. https://doi.org/10.3390/hydrology4030036
Bagdatli MC, Belliturk K (2016) Water resources have been threatened in Thrace region of Turkey. Adv Plants Agric Res 4(1):227–228
Chen Y, Guan Y, Shao G, Zhang D (2016) Investigating trends in streamflow and precipitation in Huangfuchuan basin with wavelet analysis and the Mann-Kendall test. Water 8(3):77. https://doi.org/10.3390/w8030077
Gilbert RO (1987) Statistical methods for environmental pollution monitoring. Wiley, New York
Gönençgıl B (2012) Climate characteristics of Thrace and observed temperature-precipitation trends. In: Proceedings of the 2nd international Balkan annual conference (IBAC 2012), Tiran
Hamed KH, Rao AR (1998) A modified Mann-Kendall trend test for autocorrelated data. J Hydrol 204(1–4):182–196. https://doi.org/10.1016/S0022-1694(97)00125-X
Harpa G-V, Croitoru A-E, Djurdjevic V, Horvath C (2019) Future changes in five extreme precipitation indices in the lowlands of Romania. Int J Climatol 39(15):5720–5740. https://doi.org/10.1002/joc.6183
Herold N, Ekström M, Kala J, Goldie J, Evans JP (2018) Australian climate extremes in the 21st century according to a regional climate model ensemble: Implications for health and agriculture. Weather Clim Extremes 20:54–68. https://doi.org/10.1016/j.wace.2018.01.001
IPCC (Intergovernmental Panel on Climate Change) (2012) Managing the risks of extreme events and disasters to advance climate change adaptation special report of Working Groups I and II of the Intergovernmental Panel on Climate Change [Field CB, Barros V, Stocker TF, Dahe Q, Dokken DJ, Ebi KL, Mastrandrea MD, Mach KJ, Plattner G-K, Allen SK, Tignor M, Midgley PM (eds)]. Cambridge University Press, Cambridge and New York
IPCC (Intergovernmental Panel on Climate Change) (2013) Climate change 2013: The physical science basis-contribution of Working Group I to the fifth assessment report of the intergovernmental panel on climate change [Stocker TF, Qin D, Plattner G-K, Tignor MMB, Allen SK, Boschung J, Nauels A, Xia Y, Bex V, Midgley PM (eds)]. Cambridge University Press, Cambridge and New York
IPCC (Intergovernmental Panel on Climate Change) (2014a) Climate change 2014: Impacts, adaptation and vulnerability, Part A: global and sectoral aspects-contribution of working group II to the fifth assessment report of the intergovernmental panel on climate change [Field CB, Barros VR, Dokken DJ, Mach KJ, Mastrandrea MD, Bilir TE, Chatterjee M, Ebi KL, Estrada YO, Genova RC, Girma B, Kissel ES, Levy AN, MacCracken S, Mastrandrea PR, White LL (eds)]. Cambridge University Press, Cambridge and New York
IPCC (Intergovernmental Panel on Climate Change) (2014b) Climate change 2014: synthesis report-contribution of working groups I, II and III to the fifth assessment report of the Intergovernmental Panel on Climate Change [Core Writing Team, Pachauri RK, Meyer L (eds)]. IPCC, Geneva
Junk J, Goergen K, Krein A (2019) Future heat waves in different European capitals based on climate change indicators. Int J Environ Res Public Health 16:3959
Keggenhoff I, Elizbarashvili M, Amiri-Farahani A, King L (2014) Trends in daily temperature and precipitation extremes over Georgia, 1971–2010. Weather Clim Extrem 4:75–85. https://doi.org/10.1016/j.wace.2014.05.001
Kendall MG (1975) Rank correlation methods, 4th edn. Charles Griffin, London
Klein Tank AMG, Peterson TC, Quadir DA, Dorji S, Zou X, Tang H, Santhosh K, Joshi UR, Jaswal AK, Kolli RK, Sikder AB, Deshpande NR, Revadekar JV, Yeleuova K, Vandasheva S, Faleyeva M, Gomboluudev P, Budhathoki KP, Hussain A, Afzaal M, Chandrapala L, Anvar H, Amanmurad D, Asanova VS, Jones PD, New MG, Spektorman T (2006) Changes in daily temperature and precipitation extremes in central and south Asia. J Geophys Res 111(D16):D16105. https://doi.org/10.1029/2005JD006316
Klein Tank AMG, Zwiers FW, Zhang X (2009) Guidelines on analysis of extremes in a changing climate in support of informed decisions for adaptation. WCDMP-72, WMO-TD/No. 1500, 56 pp
Köyceğiz C, Büyükyıldız M (2019) Temporal trend analysis of extreme precipitation: a case study of Konya Closed Basin. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi 25(8):956–961. https://dergipark.org.tr/en/pub/pajes/issue/51127/668093
Mann HB (1945) Nonparametric tests against trend. Econometrica 13(3):163–171. https://doi.org/10.2307/1907187
MGM (Turkish State Meteorological Service) (2020) Daily precipitation records of the Edirne (1952–2016), Kirklareli (1981–2016), Tekirdag (1955–2016) and Sariyer (1960–2016) meteorological stations. Turkish state meteorological service, Ankara
Militino AF, Moradi M, Ugarte MD (2020) On the performances of trend and change-point detection methods for remote sensing data. Remote Sens 12(6):1008. https://doi.org/10.3390/rs12061008
Myhre G, Alterskjær K, Stjern CW, Hodnebrog Ø, Marelle L, Samset BH, Sillmann J, Schaller N, Fischer E, Schulz M, Stohl A (2019) Frequency of extreme precipitation increases extensively with event rareness under global warming. Sci Rep 9:16063. https://doi.org/10.1038/s41598-019-52277-4
Nigussie TA, Altunkaynak A (2019) Impacts of climate change on the trends of extreme rainfall indices and values of maximum precipitation at Olimpiyat Station, Istanbul, Turkey. Theor Appl Climatol 135:1501–1515. https://doi.org/10.1007/s00704-018-2449-x
Oruc S (2020) Investigation of the effect of climate change on extreme precipitation: Tekirdağ case. Turkish J Water Sci Manage 4(2):136–161. https://doi.org/10.31807/tjwsm.746134
Patakamuri SK, O'Brien N (2020) Modified versions of Mann Kendall and Spearman's Rho trend tests (R software package, version 1.5.0) https://cran.r-project.org/web/packages/modifiedmk/index.html. Accessed 18 June 2020
Patakamuri SK, Muthiah K, Sridhar V (2020) Long-term homogeneity, trend and change-point analysis of rainfall in the arid district of Ananthapuramu Andhra Pradesh state. India Water 12(1):211. https://doi.org/10.3390/w12010211
Rao AR, Hamed KH, Chen H-L (2003) Nonstationarities in hydrologic and environmental time series. Kluwer Academic Publishers, Dordrecht
Şaylan L, Çaldağ B, Bakanoğulları F, Toros H, Yazgan M, Şen O, Özkoca Y (2011) Spatial variation of the precipitation chemistry in the Thrace region of Turkey. Clean 39(5):491–501. https://doi.org/10.1002/clen.201000065
Sensoy S, Türkoğlu N, Akçakaya A, Ulupınar Y, Ekici M, Demircan M, Atay H, Tüvan A, Demirbaş H (2013) Trends in Turkey climate indices from 1960 to 2010, 6th Atmospheric Science Symposium, 24–26 April 2013. ITU, Istanbul
Şensoy S, Kömüşçü AÜ, Türkoğlu N, Çiçek İ, Esentürk H (2019) Trends in sectoral climate indices for Istanbul. 9th International Symposium on Atmospheric Sciences (ATMOS 2019)
Shiferaw A, Tadesse T, Rowe C, Oglesby R (2018) Precipitation extremes in dynamically downscaled climate scenarios over the greater horn of Africa. Atmosphere 9(3):112. https://doi.org/10.3390/atmos9030112
Sırdaş S, Sen Z (2003) Spatio-temporal drought analysis in the Trakya region. Turkey Hydrol Sci J 48(5):809–820. https://doi.org/10.1623/hysj.48.5.809.51458
Tokgöz S, Partal T (2020) Karadeniz Bölgesinde Yıllık Yağış ve Sıcaklık Verilerinin Yenilikçi Şen ve Mann-Kendall Yöntemleri ile Trend Analizi. J Inst Sci Technol 10(2):1107–1118. https://doi.org/10.21597/jist.633368
Toros H (2012) Spatio-temporal precipitation change assessments over Turkey. Int J Climatol 32(9):1310–1325. https://doi.org/10.1002/joc.2353
Türkes M (2012) Türkiye'de gözlenen ve öngörülen iklim değişikliği, kuraklık ve çölleşme. Ankara Üniversitesi Çevrebilimleri Dergisi 4(2):1–32. https://doi.org/10.1501/Csaum_0000000063
Wang XL, Feng Y (2013) RHtests_dlyPrcp user manual. Climate research division atmospheric science and technology directorate science and technology branch environment Canada, Toronto
Wang Y, Liu G, Guo E (2019) Spatial distribution and temporal variation of drought in Inner Mongolia during 1901–2014 using standardized precipitation evapotranspiration index. Sci Total Environ 654:850–862. https://doi.org/10.1016/j.scitotenv.2018.10.425
WMO (2017) Guidelines on the calculation of climate normals. World Meteorological Organization, Geneva, Switzerland
Yazid M, Humphries U (2015) Regional observed trends in daily rainfall indices of extremes over the Indochina Peninsula from 1960 to 2007. Climate 3(1):168–192. https://doi.org/10.3390/cli3010168
Yilmaz AG (2015) The effects of climate change on historical and future extreme rainfall in Antalya. Turkey Hydrol Sci J 60(12):2148–2162. https://doi.org/10.1080/02626667.2014.945455
Zhang X, Yang F (2004) RClimDex user manual. Climate research branch, Ontario
Zhang X, Alexander L, Hegerl GC, Jones P, Tank AK, Peterson TC, Trewin B, Zwiers FW (2011) Indices for monitoring changes in extremes based on daily temperature and precipitation data. WIREs Clim Change 2(6):851–870. https://doi.org/10.1002/wcc.147
This research received no external funding.
Department of Civil Engineering, Kirsehir Ahi Evran University, Kirsehir, 40100, Turkey
Sertac Oruc & Emrah Yalcin
Sertac Oruc
Emrah Yalcin
Correspondence to Sertac Oruc.
The authors declare no conflict of interest.
Communicated by Mohammad Valipour (Associate Editor) and Michael Nones, Ph.D. (Co-Editor-in-Chief).
Oruc, S., Yalcin, E. Extreme precipitation indices trend assessment over Thrace region, Turkey. Acta Geophys. (2021). https://doi.org/10.1007/s11600-020-00531-z
Keywords: Extreme indices, ET-SCI, ClimPACT2
The KMOS Cluster Survey (KCS) I: The fundamental plane and the formation ages of cluster galaxies at redshift $1.4 < z < 1.6$
Beifiori, Alessandra and Mendel, J. Trevor and Chan, Jeffrey C. C. and Saglia, Roberto P. and Bender, Ralf and Cappellari, Michele and Davies, Roger L. and Galametz, Audrey and Houghton, Ryan C. W. and Prichard, Laura J. and Smith, Russel and Stott, John P. and Wilman, David J. and Lewis, Ian J. and Sharples, Ray and Wegner, Michael (2017) The KMOS Cluster Survey (KCS) I: The fundamental plane and the formation ages of cluster galaxies at redshift $1.4 < z < 1.6$. The Astrophysical Journal, 846 (2). ISSN 0004-637X
PDF (1708.00454v1)
1708.00454v1.pdf - Accepted Version
Official URL: https://doi.org/10.3847/1538-4357/aa8368
We present the analysis of the fundamental plane (FP) for a sample of 19 massive red-sequence galaxies ($M_{\star} >4\times10^{10} M_{\odot}$) in 3 known overdensities at $1.39 < z < 1.61$. For the most massive galaxies ($\log M_{\star}/M_{\odot} > 11$) in our sample, we translate the FP zero-point evolution into a mass-to-light-ratio $M/L$ evolution, finding $\Delta \log M/L_{B}=(-0.46\pm0.10)z$, $\Delta \log M/L_{B}=(-0.52\pm0.07)z$, and $\Delta \log M/L_{B}=(-0.55\pm0.10)z$, respectively. We assess the potential contribution of the galaxies' structural and stellar velocity dispersion evolution to the evolution of the FP zero-point and find it to be $\sim$6–35% of the FP zero-point evolution. The rate of $M/L$ evolution is consistent with galaxies evolving passively. By using single stellar population models, we find an average age of $2.33^{+0.86}_{-0.51}$ Gyr for the $\log M_{\star}/M_{\odot}>11$ galaxies in our massive and virialized cluster at $z=1.39$, $1.59^{+1.40}_{-0.62}$ Gyr in a massive but not virialized cluster at $z=1.46$, and $1.20^{+1.03}_{-0.47}$ Gyr in a protocluster at $z=1.61$. After accounting for the difference in the age of the Universe between redshifts, the ages of the galaxies in the three overdensities are consistent within the errors, with possibly a weak suggestion that galaxies in the most evolved structure are older.
The Astrophysical Journal
This is an author-created, un-copyedited version of an article accepted for publication/published in The Astrophysical Journal. IOP Publishing Ltd is not responsible for any errors or omissions in this version of the manuscript or any version derived from it. The Version of Record is available online at doi: 10.3847/1538-4357/aa8368
Prostaglandin E Synthase, a Terminal Enzyme for Prostaglandin E2 Biosynthesis
Kudo, Ichiro;Murakami, Makoto 633
Biosynthesis of prostanoids is regulated by three sequential enzymatic steps, namely phospholipase $A_2$ enzymes, cyclooxygenase (COX) enzymes, and various lineage-specific terminal prostanoid synthases. Prostaglandin E synthase (PGES), which isomerizes COX-derived $PGH_2$ specifically to $PGE_2$, occurs in multiple forms with distinct enzymatic properties, expressions, localizations and functions. Two of them are membrane-bound enzymes and have been designated as mPGES-1 and mPGES-2. mPGES-1 is a perinuclear protein that is markedly induced by proinflammatory stimuli, is down-regulated by anti-inflammatory glucocorticoids, and is functionally coupled with COX-2 in marked preference to COX-1. Recent gene targeting studies of mPGES-1 have revealed that this enzyme represents a novel target for anti-inflammatory and anti-cancer drugs. mPGES-2 is synthesized as a Golgi membrane-associated protein, and the proteolytic removal of the N-terminal hydrophobic domain leads to the formation of a mature cytosolic enzyme. This enzyme is rather constitutively expressed in various cells and tissues and is functionally coupled with both COX-1 and COX-2. Cytosolic PGES (cPGES) is constitutively expressed in a wide variety of cells and is functionally linked to COX-1 to promote immediate $PGE_2$ production. This review highlights the latest understanding of the expression, regulation and functions of these three PGES enzymes.
Molecular Mechanisms of Protein Kinase C-induced Apoptosis in Prostate Cancer Cells
Gonzalez-Guerrico, Anatilde M.;Meshki, John;Xiao, Liqing;Benavides, Fernando;Conti, Claudio J.;Kazanietz, Marcelo G. 639
Protein kinase C (PKC) isozymes, a family of serine-threonine kinases, are important regulators of cell proliferation and malignant transformation. Phorbol esters, the prototype PKC activators, cause PKC translocation to the plasma membrane in prostate cancer cells, and trigger an apoptotic response. Studies in recent years have determined that each member of the PKC family exerts different effects on apoptotic or survival pathways. $PKC{\delta}$, one of the novel PKCs, is a key player of the apoptotic response via the activation of the p38 MAPK pathway. Studies using RNAi revealed that depletion of $PKC{\delta}$ totally abolishes the apoptotic effect of the phorbol ester PMA. Activation of the classical $PKC{\alpha}$ promotes the dephosphorylation and inactivation of the survival kinase Akt. Studies have assigned a pro-survival role to $PKC{\varepsilon}$, but the function of this PKC isozyme remains controversial. Recently, it has been determined that the PKC apoptotic effect in androgen-dependent prostate cancer cells is mediated by the autocrine secretion of death factors. $PKC{\delta}$ stimulates the release of $TNF{\alpha}$ from the plasma membrane, and blockade of $TNF{\alpha}$ secretion or $TNF{\alpha}$ receptors abrogates the apoptotic response of PMA. Molecular analysis indicates the requirement of the extrinsic apoptotic cascade via the activation of death receptors and caspase-8. Dissecting the pathways downstream of PKC isozymes represents a major challenge to understanding the molecular basis of phorbol ester-induced apoptosis.
Studies on the Epitope of Neuronal Growth Inhibitory Factor (GIF) with Using of the Specific Antibody
Pang, Li-Yan;Ru, Bing-Gen 646
Human neuronal growth inhibitory factor (GIF), a metalloprotein classified as metallothionein-3, is specifically expressed in the mammalian central nervous system (CNS). In this study, a specific antibody to human GIF was prepared and used to search for the epitope of human GIF by enzyme-linked immunosorbent assay (ELISA) and sequence comparison. The ELISA results showed that the epitope of human GIF may be located on an octapeptide (EAAEAEAE) in the $\alpha$-domain of human GIF, and the results of nerve cell culture indicated that the biological activity of GIF may be affected by the specific antibody.
Proteomic Analysis and Extensive Protein Identification from Dry, Germinating Arabidopsis Seeds and Young Seedlings
Fu, Qiang;Wang, Bai-Chen;Jin, Xiang;Li, Hong-Bing;Han, Pei;Wei, Kai-Hua;Zhang, Xue-Min;Zhu, Yu-Xian 650
Proteins accumulated in dry, stratified Arabidopsis seeds or young seedlings, totaling 1100 to 1300 depending on the time of sampling, were analyzed by using immobilized pH gradient 2-DE gel electrophoresis. The molecular identities of 437 polypeptides, encoded by 355 independent genes, were determined by MALDI-TOF or TOF-TOF mass spectrometry. In sum, 293 were present at all stages, 95 accumulated during the time of radicle protrusion, and another 18 appeared in later stages. Further analysis showed that 226 of the identified polypeptides could be located in different metabolic pathways. Proteins involved in carbohydrate, energy and amino acid metabolism constituted about one quarter, and those involved in the metabolism of vitamins and cofactors about 3%, of the total signal intensity in gels prepared from 72 h seedlings. Enzymes related to genetic information processing increased very quickly during early imbibition and reached their highest level around 30 h of germination.
Polymorphonuclear Neutrophil Dysfunctions in Streptozotocin-induced Type 1 Diabetic Rats
Nabi, A.H.M. Nurun;Islam, Laila N.;Rahman, Mohanmmad Mahfuzur;Biswas, Kazal Boron 661
Since conflicting results have been reported on non-specific immune response in type 1 diabetes, this study evaluates polymorphonuclear neutrophil (PMN) functions in infection-free Long Evans diabetic rats (type 1) by using tests that include: polarization assay, phagocytosis of baker's yeast (Saccharomyces cerevisiae) and nitroblue tetrazolium (NBT) dye reduction. The polarization assay showed that neutrophils from diabetic rats were significantly activated at the basal level compared to those from the controls (p < 0.001). After PMN activation with N-formyl-methionyl-leucyl-phenylalanine (FMLP), control neutrophils were found to be more polarized than the diabetic neutrophils, and the highest proportions of polarization were found to be 67% and 57% at $10^{-7}\;M$ FMLP, respectively. In the resting state, neutrophils from the diabetic rats reduced significantly more NBT dye than those of the controls (p < 0.001). The percentages of phagocytosis of opsonized yeast cells by the neutrophils from control and diabetic rats were 87% and 61%, respectively, and the difference was statistically significant (p < 0.001). Evaluation of the phagocytic efficiency of PMNs revealed that control neutrophils could phagocytose $381{\pm}17$ whereas those from the diabetic rats phagocytosed $282{\pm}16$ yeast cells, and the efficiency of phagocytosis varied significantly (p < 0.001). Further, both the percentages of phagocytosis and the efficiency of phagocytosis by the diabetic neutrophils were inversely related with the levels of their corresponding plasma glucose (p = 0.02; r = -0.498 and p < 0.05; r = -0.43, respectively), which indicated that increased plasma glucose reduced the phagocytic ability of neutrophils. Such a relationship was not observed with the control neutrophils. These data clearly indicate that PMN functions are altered in the streptozotocin (STZ)-induced diabetic rats, and hyperglycemia may be the cause of the impairment of their functions leading to many infectious episodes.
Characterization and Expression Profile Analysis of a New cDNA Encoding Taxadiene Synthase from Taxus media
Kai, Guoyin;Zhao, Lingxia;Zhang, Lei;Li, Zhugang;Guo, Binhui;Zhao, Dongli;Sun, Xiaofen;Miao, Zhiqi;Tang, Kexuan 668
A full-length cDNA encoding taxadiene synthase (designated TmTXS), which catalyzes the first committed step in the Taxol biosynthetic pathway, was isolated from young leaves of Taxus media by rapid amplification of cDNA ends (RACE). The full-length cDNA of TmTXS had a 2586 bp open reading frame (ORF) encoding a protein of 862 amino acid residues. The deduced protein had an isoelectric point (pI) of 5.32 and a calculated molecular weight of about 98 kDa, similar to previously cloned diterpene cyclases from other Taxus species such as T. brevifolia and T. chinenisis. Sequence comparison analysis showed that TmTXS had high similarity with other members of the terpene synthase family of plant origin. Tissue expression pattern analysis revealed that TmTXS was expressed strongly in leaves and weakly in stems, while no expression could be detected in fruits. This is the first report on the mRNA expression profile of genes encoding key enzymes involved in the Taxol biosynthetic pathway in different tissues of Taxus plants. Phylogenetic tree analysis showed that TmTXS had the closest relationship with taxadiene synthase from T. baccata, followed by those from T. chinenisis and T. brevifolia. Expression profiles revealed by RT-PCR under different chemical elicitor treatments such as methyl jasmonate (MJ), silver nitrate (SN) and ammonium ceric sulphate (ACS) were also compared for the first time, and the results revealed that expression of TmTXS was induced by all three treatments and the induction effect of MJ was the strongest, implying that TmTXS was highly elicitor-responsive.
Functional Identification of an 8-Oxoguanine Specific Endonuclease from Thermotoga maritima
Im, Eun-Kyoung;Hong, Chang-Hyung;Back, Jung-Ho;Han, Ye-Sun;Chung, Ji-Hyung 676
To date, no 8-oxoguanine-specific endonuclease-coding gene has been identified in Thermotoga maritima of the order Thermotogales, although its entire genome has been deciphered. However, the hypothetical protein Tm1821 from T. maritima, has a helix-hairpin-helix motif that is considered to be important for DNA binding and catalytic activity. Here, Tm1821 was overexpressed in Escherichia coli and purified using Ni-NTA affinity chromatography, protease digestion, and gel filtration. Tm1821 protein was found to efficiently cleave an oligonucleotide duplex containing 8-oxoguanine, but Tm1821 had little effect on other substrates containing modified bases. Moreover, Tm1821 strongly preferred DNA duplexes containing an 8-oxoguanine:C pair among oligonucleotide duplexes containing 8-oxoguanine paired with four different bases (A, C, G, or T). Furthermore, Tm1821 showed AP lyase activity and Schiff base formation with 8-oxoguanine in the presence of $NaBH_4$, which suggests that it is a bifunctional DNA glycosylase. Tm1821 protein shares unique conserved amino acids and substrate specificity with an 8-oxoguanine DNA glycosylase from the hyperthermophilic archaeon. Thus, the DNA recognition and catalytic mechanisms of Tm1821 protein are likely to be similar to archaeal repair protein, although T. maritima is an eubacterium.
Expression of Hepatitis B Virus S Gene in Pichia pastoris and Application of the Product for Detection of Anti-HBs Antibody
Hu, Bo;Liang, Minjian;Hong, Guoqiang;Li, Zhaoxia;Zhu, Zhenyu;Li, Lin 683
Antibody to hepatitis B surface antigen (HBsAb) is an important serological marker of hepatitis B virus (HBV) infection. Conventionally, the hepatitis B surface antigen (HBsAg) obtained from the plasma of HBV carriers is used as the diagnostic antigen for detection of HBsAb. This blood-origin antigen has some disadvantages, such as high cost, elaborate preparation and risk of infection. In an attempt to explore a suitable recombinant HBsAg for the diagnostic purpose, the HBV S gene was expressed in Pichia pastoris and the product was applied for detection of HBsAb. The hepatitis B virus S gene was inserted into the yeast vector and the expressed product was analyzed by sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE), immunoblot, electron microscopy and enzyme-linked immunosorbent assay (ELISA). Preparations of the synthesized S protein were applied to detect HBsAb by sandwich ELISA. The S gene encoding the 226 amino acids of HBsAg carrying a hexa-histidine tag at the C terminus was successfully expressed in Pichia pastoris. The His-tagged S protein in this strain was expressed at a level of about 14.5% of total cell protein. Immunoblot showed that the recombinant HBsAg was recognized by monoclonal HBsAb and there was no cross reaction between proteins from the host and normal sera. HBsAb detection indicated that the sensitivity reached 10 mIu (micro international unit)/ml and the specificity was 100% with the HBsAb standard of the National Center for Clinical Laboratories. A total of 293 random sera were assayed using the recombinant S protein and a commercial HBsAb ELISA kit (produced with blood-origin HBsAg); 35 HBsAb-positive sera and 258 HBsAb-negative sera were examined. The same results were obtained with the two different reagents and there was no significant difference in the value of S/CO between them. The recombinant HBV S protein with good immunoreactivity and specificity was successfully expressed in Pichia pastoris. The reagent for HBsAb detection prepared with the Pichia pastoris-derived S protein showed high sensitivity and specificity for detection of the HBsAb standard. A good correlation was obtained between the reagent produced with the recombinant S protein and the commercial kit produced with blood-origin HBsAg in random samples.
DNA Light-strand Preferential Recognition of Human Mitochondria Transcription Termination Factor mTERF
Nam, Sang-Chul;Kang, Chang-Won 690
Transcription termination of the human mitochondrial genome requires specific binding to termination factor mTERF. In this study, mTERF was produced in E. coli and purified by two-step chromatography. mTERF-binding DNA sequences were isolated from a pool of randomized sequences by the repeated selection of bound sequences by gel-mobility shift assay and polymerase chain reaction. Sequencing and comparison of the 23 isolated clones revealed a 16-bp consensus sequence of 5'-GTG$\b{TGGC}$AGANCCNGG-3' in the light-strand (underlined residues were absolutely conserved), which nicely matched the genomic 13-bp terminator sequence 5'-$\b{TGGC}$AGAGCCCGG-3'. Moreover, mTERF binding assays of heteroduplex and single-stranded DNAs showed mTERF recognized the light strand in preference to the heavy strand. The preferential binding of mTERF with the light-strand may explain its distinct orientation-dependent termination activity.
Cloning and Molecular Characterization of groESL Heat-Shock Operon in Methylotrophic Bacterium Methylovorus Sp. Strain SS1 DSM 11726
Eom, Chi-Yong;Kim, Eung-Bin;Ro, Young-Tae;Kim, Si-Wouk;Kim, Young-Min 695
The groESL bicistronic operon of a restricted facultative methylotrophic bacterium Methylovorus sp. strain SS1 DSM 11726 was cloned and characterized. It was found to consist of two ORFs encoding proteins with molecular masses of 11,395 and 57,396 daltons, which showed a high degree of homology to other bacterial GroES and GroEL proteins. The genes were clustered in the transcription order groES-groEL. Northern blot analyses suggested that the groESL operon is transcribed as a bicistronic 2.2-kb mRNA, the steady-state level of which was markedly increased by temperature elevation. Primer extension analysis demonstrated one potential transcription start site preceding the groESL operon, which is located 100bp upstream of the groES start codon. The transcription start site was preceded by a putative promoter region highly homologous to the consensus sequences of Escherichia coli ${\sigma}^{32}$-type heat shock promoter, which functioned under both normal and heat shock conditions in E. coli. Heat shock mRNA was maximally produced by Methylovorus sp. strain SS1 approximately 10min after increasing the temperature from 30 to $42^{\circ}C$. The groESL operon was also induced by hydrogen peroxide or salt shock.
Human Brain Pyridoxal-5'-phosphate Phosphatase: Production and Characterization of Monoclonal Antibodies
Kim, Dae-Won;Eum, Won-Sik;Choi, Hee-Soon;Kim, So-Young;An, Jae-Jin;Lee, Sun-Hwa;Sohn, Eun-Joung;Hwang, Seok-Il;Kwon, Oh-Shin;Kang, Tae-Cheon;Won, Moo-Ho;Cho, Sung-Woo;Lee, Kil-Soo;Park, Jin-Seu;Choi, Soo-Young 703
We cloned and expressed human pyridoxal-5'-phosphate (PLP; the coenzymatically active form of vitamin $B_6$) phosphatase in Escherichia coli using the pET15b vector. Monoclonal antibodies (mAb) were generated against purified human brain PLP phosphatase in mice, and four antibodies recognizing different epitopes were obtained, one of which inhibited PLP phosphatase. The binding affinities of these four mAbs to PLP phosphatase, as determined using biosensor technology, showed that they had similar binding affinities. Using the anti-PLP phosphatase antibodies as probes, we investigated their cross-reactivities in various mammalian and human tissues and cell lines. The immunoreactive bands obtained on Western blots had molecular masses of ca. 33 kDa. Similarly fractionated extracts of several mammalian cell lines all produced a single band of molecular mass 33 kDa. We believe that these PLP phosphatase mAbs could be used as valuable immunodiagnostic reagents for the detection, identification, and characterization of various neurological diseases related to vitamin $B_6$ abnormalities.
Endoplasmic Reticulum Mediated Necrosis-like Apoptosis of HeLa Cells Induced by Ca2+ Oscillation
Hu, Qingliu;Chang, Junlei;Tao, Litao;Yan, Guoliang;Xie, Mingchao;Wang, Zhao 709
Apoptosis and necrosis are primarily distinguished by modality. Here we show an apoptosis that occurred rapidly, induced by $300\;{\mu}M$ W-7 (N-(6-aminohexyl)-5-chloro-1-naphthalenesulfonamide hydrochloride, an inhibitor of calmodulin), which demonstrated a necrotic modality. As early as 30 min after W-7 addition, an apoptotic (sub-diploid) peak could be detected by fluorescence-activated cell sorting (FACS), "DNA ladders" began to emerge at this time point, and the activity of caspase-3 was obviously elevated within this period. The absence of mitochondrial membrane potential (MMP) reduction and of cytochrome c and AIF (apoptosis-inducing factor) release verified that this rapid apoptosis did not proceed through the mitochondrial pathway. Activation of caspase-12 and changes in other endoplasmic reticulum (ER)-located proteins ascertained that the ER pathway mediated this necrosis-like apoptosis. Our findings suggest that it is not reliable to judge apoptosis by modality alone. Elucidation of the ER pathway is helpful for comprehending the pathology of diseases associated with ER stress, and may offer a new approach to the therapy of cancer and neurodegenerative diseases.
Expression of Cholera Toxin B Subunit and Assembly as Functional Oligomers in Silkworm
Gong, Zhao-Hui;Jin, Hui-Qing;Jin, Yong-Feng;Zhang, Yao-Zhou 717
The nontoxic B subunit of cholera toxin (CTB) can significantly increase the ability of proteins to induce immunological tolerance after oral administration when it is conjugated to various proteins. Recombinant CTB offers great potential for the treatment of autoimmune disease. Here we first investigated the feasibility of the silkworm baculovirus expression vector system for the cost-effective production of CTB under the control of a strong polyhedrin promoter. Higher expression was achieved by introducing the partial non-coding and coding sequences (ATAAAT and ATGCCGAAT) of polyhedrin at the 5' end of the native CTB gene, with the maximal accumulation being approximately 54.4 mg/L of hemolymph. The silkworm bioreactor produced this protein vaccine as the glycosylated pentameric form, which retained the GM1-ganglioside binding affinity and the native antigenicity of CTB. Further studies revealed that mixing with silkworm-derived CTB increases the tolerogenic potential of insulin. In the nonconjugated form, an insulin : CTB ratio of 100 : 1 was optimal for a prominent reduction in pancreatic islet inflammation. The data presented here demonstrate that the silkworm bioreactor is an ideal production and delivery system for an oral protein vaccine designed to develop immunological tolerance against autoimmune diabetes, and that CTB functions as an effective mucosal adjuvant for oral tolerance induction.
Identification of Differentially Expressed Proteins in Imatinib Mesylate-resistant Chronic Myelogenous Cells
Park, Jung-Eun;Kim, Sang-Mi;Oh, Jong-K.;Kim, Jin-Y.;Yoon, Sung-Soo;Lee, Dong-Soon;Kim, Young-Soo 725
Resistance to imatinib mesylate (also known as Gleevec, Glivec, and STI571) often becomes a barrier to the treatment of chronic myelogenous leukemia (CML). In order to identify markers of the action of imatinib mesylate, we used a mass spectrometry approach to compare protein expression profiles in human leukemia cells (K562) and in imatinib mesylate-resistant human leukemia cells (K562-R) in the presence and absence of imatinib mesylate. We identified 118 differentially regulated proteins in these two leukemia cell lines, with and without a $1\;{\mu}M$ imatinib mesylate challenge. Nine proteins of unknown function were discovered. This is the first comprehensive report regarding differential protein expression in imatinib mesylate-treated CML cells.
Molecular Cloning and Bioinformatic Analysis of SPATA4 Gene
Liu, Shang-Feng;Ai, Chao;Ge, Zhong-Qi;Liu, Hai-Luo;Liu, Bo-Wen;He, Shan;Wang, Zhao 739
Full-length cDNA sequences of four novel SPATA4 genes in chimpanzee, cow, chicken and ascidian were identified by bioinformatic analysis using a mouse or human SPATA4 cDNA fragment as an electronic probe. All these genes have 6 exons, have similar protein molecular weights and do not localize to a sex chromosome. The mouse SPATA4 sequence is identified as significantly changed in cryptorchidism, and it shares no significant homology with any known protein in the SwissProt databases except for the homologous genes in various vertebrates. Our search results showed that all SPATA4 proteins have a putative conserved domain, DUF1042. The percentages of putative SPATA4 protein sequence identity range from 30% to 99%. High similarity was also found in the 1 kb promoter regions of the human, mouse and rat SPATA4 genes. The sequences upstream of the SPATA4 promoter also show a high proportion of similarity. The results of searching SymAtlas (http://symatlas.gnf.org/SymAtlas/) showed that human SPATA4 is highly expressed in testis, especially in the testis interstitium, Leydig cells, seminiferous tubules and germ cells. Mouse SPATA4 was observed exclusively in adult mouse testis, and almost no signal was detected in other tissues. The pI values of the proteins range from 9.44 to 10.15. The subcellular location of the protein is usually the nucleus, and the signal peptide probabilities for SPATA4 are always zero. Using the SNP data in NCBI, we found 33 SNPs in the human SPATA4 genomic DNA region, with 29 SNPs distributed in the introns. CpG island searching shows that the regions of the CpG islands have a high similarity with each other, though the lengths of the CpG islands differ. This research is fundamental work in the field of bioinformatic analysis and also puts forward a new approach for the bioinformatic analysis of other genes.
Activation of Defense Responses in Chinese Cabbage by a Nonhost Pathogen, Pseudomonas syringae pv. tomato
Park, Yong-Soon;Jeon, Myeong-Hoon;Lee, Sung-Hee;Moon, Jee-Sook;Cha, Jae-Soon;Kim, Hak-Yong;Cho, Tae-Ju 748
Pseudomonas syringae pv. tomato (Pst) causes a bacterial speck disease in tomato and Arabidopsis. In Chinese cabbage, in which host-pathogen interactions are not well understood, Pst does not cause disease but rather elicits a hypersensitive response. Pst induces localized cell death and $H_2O_2$ accumulation, a typical hypersensitive response, in infiltrated cabbage leaves. Pre-inoculation with Pst was found to induce resistance to Erwinia carotovora subsp. carotovora, a pathogen that causes soft rot disease in Chinese cabbage. An examination of the expression profiles of 12 previously identified Pst-inducible genes revealed that the majority of these genes were activated by salicylic acid or BTH; however, expressions of the genes encoding PR4 and a class IV chitinase were induced by ethephon, an ethylene-releasing compound, but not by salicylic acid, BTH, or methyl jasmonate. This implies that Pst activates both salicylate-dependent and salicylate-independent defense responses in Chinese cabbage.
Epstein-Barr Virus-infected Akata Cells Are Sensitive to Histone Deacetylase Inhibitor TSA-provoked Apoptosis
Kook, Sung-Ho;Son, Young-Ok;Han, Seong-Kyu;Lee, Hyung-Soon;Kim, Beom-Tae;Jang, Yong-Suk;Choi, Ki-Choon;Lee, Keun-Soo;Kim, So-Soon;Lim, Ji-Young;Jeon, Young-Mi;Kim, Jong-Ghee;Lee, Jeong-Chae 755
Epstein-Barr virus (EBV) infects more than 90% of the world's population and has a potential oncogenic nature. A histone deacetylase (HDAC) inhibitor, trichostatin A (TSA), has shown potential ability in cancer chemoprevention and treatment, but its effect on EBV-infected Akata cells has not been examined. This study investigated the effect of TSA on the proliferation and apoptosis of the cells. TSA inhibited cell growth and induced cytotoxicity in the EBV infected Akata cells. TSA treatment sensitively induced apoptosis in the cell, which was demonstrated by the increased number of positively stained cells in the TUNEL assay, the migration of many cells to the sub-$G_0/G_1$ phase in flow cytometric analysis, and the ladder formation of genomic DNA. Western blot analysis showed that caspase-dependent pathways are involved in the TSA-induced apoptosis of EBV-infected Akata cells. Overall, this study shows that EBV-infected B lymphomas are quite sensitive to TSA-provoked apoptosis.
A Method for Direct Application of Human Plasmin on a Dithiothreitol-containing Agarose Stacking Gel System
Choi, Nack-Shick;Chung, Dong-Min;Yoon, Kab-Seog;Maeng, Pil-Jae;Kim, Seung-Ho 763
A new simplified procedure for identifying human plasmin was developed using a DTT-copolymerized agarose stacking gel (ASG) system. Agarose (1%) was used for the stacking gel because DTT inhibits the polymerization of acrylamide. Human plasmin showed the lowest activity at pH 9.0. A catalytic activity pattern similar to that observed under alkaline conditions (pH 10.0 or 11.0) was also observed under acidic conditions (pH 3.0). Using the ASG system, the primary structure of the heavy chain could be established at pH 3.0. This protein was found to consist of three fragments of 45 kDa, 23 kDa, and 13 kDa. These results showed that the heavy chain has a structure similar to that of autolysed plasmin (Wu et al., 1987b) but with a different N-terminal start sequence.
IIT JAM MS 2021 Question Paper | Set C | Problems & Solutions
This post discusses the solutions to the problems from IIT JAM Mathematical Statistics (MS) 2021 Question Paper - Set C. You can find solutions in video or written form.
Note: This post is getting updated. Stay tuned for solutions, videos, and more.
IIT JAM Mathematical Statistics (MS) 2021 Problems & Solutions (Set C)
Problem 1
Let $f_{0}$ and $f_{1}$ be the probability mass functions given by
Consider the problem of testing the null hypothesis $H_{0}: X \sim f_{0}$ against $H_{1}: X \sim f_{1}$ based on a single
sample $X$. If $\alpha$ and $\beta$, respectively, denote the size and power of the test with critical region
$\{x \in \mathbb{R}: x>3\}$, then $10(\alpha+\beta)$ is equal to ______________________
Answer: $13$
Problem 2
Let
\alpha=\lim _{n \rightarrow \infty} \sum_{m=n^{2}}^{2 n^{2}} \frac{1}{\sqrt{5 n^{4}+n^{3}+m}}
Then, $10 \sqrt{5} \alpha$ is equal to _________
Answer: 10
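A quick check of the answer: the $n^{2}+1$ terms of the sum can be squeezed between bounds with a common limit.

\frac{n^{2}+1}{\sqrt{5 n^{4}+n^{3}+2 n^{2}}} \leq \sum_{m=n^{2}}^{2 n^{2}} \frac{1}{\sqrt{5 n^{4}+n^{3}+m}} \leq \frac{n^{2}+1}{\sqrt{5 n^{4}+n^{3}+n^{2}}}

Both bounds tend to $\frac{1}{\sqrt{5}}$, so $\alpha=\frac{1}{\sqrt{5}}$ and $10 \sqrt{5}\, \alpha=10$.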
Problem 3
Let $\alpha, \beta$ and $\gamma$ be the eigenvalues of $M=\left[\begin{array}{ccc}0 & 1 & 0 \\ 1 & 3 & 3 \\ -1 & 2 & 2\end{array}\right]$. If $\gamma=1$ and $\alpha>\beta$, then the value of $2 \alpha+3 \beta$ is ___________________________________
Answer: $7$
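One way to see this: the trace and determinant of $M$ give two symmetric functions of the eigenvalues.

\operatorname{tr}(M)=\alpha+\beta+\gamma=5, \qquad \det(M)=\alpha \beta \gamma=-5

With $\gamma=1$, we get $\alpha+\beta=4$ and $\alpha \beta=-5$, so $\alpha, \beta$ solve $t^{2}-4t-5=(t-5)(t+1)=0$. Since $\alpha>\beta$, $\alpha=5$, $\beta=-1$ and $2\alpha+3\beta=7$.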
Problem 4
Let $S=\{(x, y) \in \mathbb{R}^{2}: 2 \leq x \leq y \leq 4\}$. Then, the value of the integral
\iint_{S} \frac{1}{4-x} d x d y
is _______
Answer: 2
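A short derivation: integrating over $y$ first cancels the $4-x$ factor.

\iint_{S} \frac{1}{4-x}\, d x\, d y=\int_{2}^{4} \int_{x}^{4} \frac{1}{4-x}\, d y\, d x=\int_{2}^{4} \frac{4-x}{4-x}\, d x=\int_{2}^{4} d x=2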
Problem 5
Let $M=\left(\begin{array}{cc}5 & -6 \\ 3 & -4\end{array}\right)$ be a $2 \times 2$ matrix. If $\alpha=\det\left(M^{4}-6 I_{2}\right)$, then the value of $\alpha^{2}$ is ________
Answer: 2500
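A brief verification via eigenvalues: the characteristic polynomial of $M$ is $t^{2}-t-2=(t-2)(t+1)$, so the eigenvalues of $M^{4}-6 I_{2}$ are $2^{4}-6=10$ and $(-1)^{4}-6=-5$. Hence $\alpha=\det\left(M^{4}-6 I_{2}\right)=10 \times(-5)=-50$ and $\alpha^{2}=2500$.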
Problem 6
Let $X$ be a random variable with moment generating function
M_{X}(t)=\frac{1}{12}+\frac{1}{6} e^{t}+\frac{1}{3} e^{2 t}+\frac{1}{4} e^{-t}+\frac{1}{6} e^{-2 t}, t \in \mathbb{R}
Then, $8 E(X)$ is equal to _______
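Since the blank above is not filled in, here is a worked value: differentiating the MGF at $t=0$,

E(X)=M_{X}^{\prime}(0)=\frac{1}{6}(1)+\frac{1}{3}(2)+\frac{1}{4}(-1)+\frac{1}{6}(-2)=\frac{1}{4}

so $8 E(X)=2$.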
Problem 7
Let $5,10,4,15,6$ be an observed random sample of size 5 from a distribution with probability density function
f(x ; \theta)=\begin{cases} e^{-(x-\theta)}, & x \geq \theta \\ 0, & \text{otherwise,} \end{cases}
where $\theta \in(-\infty, 3]$ is unknown. Then, the maximum likelihood estimate of $\theta$ based on the observed sample is equal to ________
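A sketch of the solution: the likelihood is increasing in $\theta$ on its support, so it is maximized at the largest admissible value of $\theta$.

L(\theta)=\prod_{i=1}^{5} e^{-\left(x_{i}-\theta\right)} \mathbf{1}\left\{\theta \leq x_{(1)}\right\}=e^{5 \theta} e^{-\sum x_{i}} \mathbf{1}\{\theta \leq 4\}

Since $L(\theta)$ increases in $\theta$ and the parameter space restricts $\theta \leq 3 < x_{(1)}=4$, the maximum likelihood estimate is $\hat{\theta}=3$.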
Problem 8
Let $X$ be a random variable having the probability density function
f(x)=\frac{1}{8 \sqrt{2 \pi}}\left(2 e^{-\frac{x^{2}}{2}}+3 e^{-\frac{x^{2}}{8}}\right), \quad-\infty<x<\infty .
Then, $4 E\left(X^{4}\right)$ is equal to _____
Answer: 147
Problem 9
Let $\beta$ denote the length of the curve $y=\ln (\sec x)$ from $x=0$ to $x=\frac{\pi}{4}$. Then, the value of $3 \sqrt{2}\left(e^{\beta}-1\right)$ is equal to _____
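A worked derivation: with $y^{\prime}=\tan x$, the arc length element is $\sqrt{1+\tan ^{2} x}=\sec x$, so

\beta=\int_{0}^{\pi / 4} \sec x\, d x=\ln (1+\sqrt{2})

Hence $e^{\beta}-1=\sqrt{2}$ and $3 \sqrt{2}\left(e^{\beta}-1\right)=6$.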
Problem 10
Let $A=\{(x, y, z) \in \mathbb{R}^{3}: 0 \leq x \leq y \leq z \leq 1\}$. Let $\alpha$ be the value of the integral
\iiint_{A} x y z d x d y d z
Then, $384 \alpha$ is equal to _______
Let
a_{n}=\sum_{k=2}^{n}\binom{n}{k} \frac{2^{k}(n-2)^{n-k}}{n^{n}}, \quad n=2,3, \ldots
Then, $e^{2} \lim_{n \rightarrow \infty}\left(1-a_{n}\right)$ is equal to ____
Let $E_{1}, E_{2}, E_{3}$ and $E_{4}$ be four independent events such that $P\left(E_{1}\right)=\frac{1}{2}, P\left(E_{2}\right)=\frac{1}{3}, P\left(E_{3}\right)=\frac{1}{4}$ and $P\left(E_{4}\right)=\frac{1}{5} .$ Let $p$ be the probability that at most two events among $E_{1}, E_{2}, E_{3}$ and $E_{4}$ occur. Then, $240 p$ is equal to ____
The number of real roots of the polynomial
f(x)=x^{11}-13 x+5
is ____
Answer: $3$
Let $S \subseteq \mathbb{R}^{2}$ be the region bounded by the parallelogram with vertices at the points $(1,0)$, $(3,2)$,
$(3,5)$ and $(1,3)$. Then, the value of the integral $\iint_{S}(x+2 y)\, d x\, d y$ is equal to ___
Let $\alpha=\lim _{n \rightarrow \infty}\left(1+n \sin \frac{3}{n^{2}}\right)^{2 n}$. Then, $\ln \alpha$ is equal to ____
Let $A=\{(x, y) \in \mathbb{R}^{2}: x^{2}-\frac{1}{2 \sqrt{\pi}}<y<x^{2}+\frac{1}{2 \sqrt{\pi}}\}$ and let the joint probability density function
of $(X, Y)$ be
f(x, y)=\begin{cases}
e^{-(x-1)^{2}}, & (x, y) \in A \\
0, & \text{otherwise}
\end{cases}
Then, the covariance between the random variables $X$ and $Y$ is equal to ____
Let $\phi:(-1,1) \rightarrow \mathbb{R}$ be defined by
\phi(x)=\int_{x^{7}}^{x^{4}} \frac{1}{1+t^{3}} d t
If $\alpha=\lim_{x \rightarrow 0} \frac{\phi(x)}{e^{2 x^{4}}-1},$ then $42 \alpha$ is equal to ____
Let $S=\{(x, y) \in \mathbb{R}^{2}: 0 \leq x \leq \pi, \min\{\sin x, \cos x\} \leq y \leq \max\{\sin x, \cos x\}\}$.
If $\alpha$ is the area of $S$, then the value of $2 \sqrt{2} \alpha$ is equal to ____
Let the random vector $(X, Y)$ have the joint probability mass function
$f(x, y)=\begin{cases}{10 \choose x}{5 \choose y}(\frac{1}{4})^{x-y+5}(\frac{3}{4})^{y-x+10}, x=0,1, \ldots, 10 ; y=0,1, \ldots, 5 \\ 0, \text { otherwise }\end{cases}$.
Let $Z=Y-X+10 .$ If $\alpha=E(Z)$ and $\beta=Var(Z),$ then $8 \alpha+48 \beta$ is equal to ____
Let $X_{1}$ and $X_{2}$ be independent $N(0,1)$ random variables. Define
\operatorname{sgn}(u)=\begin{cases}
-1, & \text{if } u<0 \\ 0, & \text{if } u=0 \\ 1, & \text{if } u>0
\end{cases}
Let $Y_{1}=X_{1} sgn\left(X_{2}\right)$ and $Y_{2}=X_{2} sgn\left(X_{1}\right)$. If the correlation coefficient between $Y_{1}$ and $Y_{2}$ is $\alpha$,
then $\pi \alpha$ is equal to ____
Some Useful Links:
IIT JAM MS (Set B) 2021 Question Paper - Problems and Solutions
IIT JAM MS (Set A) 2021 Question Paper - Problems and Solutions
How to Prepare for IIT JAM Statistics?
Know about the learning Paths
Our Statistics Program | CommonCrawl |
Plant Methods
December 2019, 15:76
Automatic estimation of heading date of paddy rice using deep learning
Sai Vikas Desai
Vineeth N. Balasubramanian
Tokihiro Fukatsu
Seishi Ninomiya
Wei Guo
Accurate estimation of heading date of paddy rice greatly helps the breeders to understand the adaptability of different crop varieties in a given location. The heading date also plays a vital role in determining grain yield for research experiments. Visual examination of the crop is laborious and time consuming. Therefore, quick and precise estimation of heading date of paddy rice is highly essential.
In this work, we propose a simple pipeline to detect regions containing flowering panicles from ground-level RGB images of paddy rice. Given a fixed region size for an image, the number of regions containing flowering panicles is directly proportional to the number of flowering panicles present. Consequently, we use the flowering panicle region counts to estimate the heading date of the crop. The method is based on image classification using Convolutional Neural Networks. We evaluated the performance of our algorithm on five time-series image sequences of three different varieties of rice crops. When compared to the previous work on this dataset, the accuracy and general versatility of the method have been improved, and the heading date has been estimated with a mean absolute error of less than 1 day.
An efficient heading date estimation method has been described for rice crops using time series RGB images of crop under natural field conditions. This study demonstrated that our method can reliably be used as a replacement of manual observation to detect the heading date of rice crops.
Keywords: Heading date · Panicle detection · Convolutional neural networks · Sliding window
Vineeth N. Balasubramanian and Wei Guo contributed equally to this manuscript.
The online version of this article ( https://doi.org/10.1186/s13007-019-0457-1) contains supplementary material, which is available to authorized users.
It is an established fact that rice is one of the most important crops in the world. It feeds more than half of the world's population. Thus, a good understanding of the growth stages in rice crops would enable one to use the right amount of water, fertilizers and pesticides to ensure maximum yield. This has great economic consequences since a timely and high yield of rice can potentially address the food shortage problem prevailing in many parts of the world.
When rice paddies grow from their seeds to mature plants, they go through a variety of transformations. They develop tillers, begin to grow leaves and gradually increase in height. Then their leaf stems start bulging, which conceals the developing panicle. The panicle then starts to grow and fully emerges outside. Flowering is characterized by the exsertion of the first rice panicle in the crop [1]. Heading date is characterized together by the vegetative growth phase i.e., the time period from germination to panicle initiation and the reproductive phase, meaning the time period from panicle initiation to heading [2]. Heading date is primarily used to measure the response of the rice plant to various environmental and genetic conditions. This makes it an indispensable parameter useful to breeders and researchers. By estimating the heading date and thereby observing the heading stage, a farmer can make informed crop management decisions such as: (1) deciding the optimum amount of fertilizers and pesticides for application in the field and (2) deciding the variety of crop to be grown in the field in subsequent seasons. Meanwhile, researchers can effectively leverage the knowledge of heading stage in their experiments to understand the response of the rice plant to various genetic and environmental alterations so that they can pick the best crop variety for a particular set of environmental conditions. For instance, growth stage information has been used to determine the genetic locus which affects the regional adaptation in rice [3]. Genetic modifications have been proposed to artificially control the heading date in rice crops [4]. Flowering time has been controlled experimentally to enable production of crops suitable for different climates [5, 6]. The effect of gene interactions on traits like flowering time and panicle number has been studied [7].
For the past decade, computer vision and machine learning together have witnessed a spike in multiple research domains producing state-of-the-art results in various tasks which were previously assumed to be difficult for computers to solve. Tasks such as image classification, scene understanding and image captioning have been addressed using deep neural networks with exceptional results [8]. Deep learning is an area of machine learning which uses high-capacity function approximators (neural networks) to recognize patterns in high dimensional data such as images. Deep learning has been successfully applied in the area of plant phenotyping in extracting traits such as plant density [9] and plant stress [10]. It has also been applied in species classification [11] and detecting objects of interest such as fruits [12], flowering panicles [13], rice spikes [14] and wheat spikes [15, 16]. For a detailed treatment of the uses of deep learning in agriculture, we encourage the readers to refer to the survey by Kamilaris and Prenafeta-Boldú [17].
Related to our task, Zhu et al. [18] have proposed a method to observe heading stage of wheat using a two-step coarse to fine wheat ear detection method based on support vector machines (SVM). Hasan et al. [15] more recently used an R-CNN based object detection network to accurately detect and count wheat spikes from high definition crop images; however, this approach typically requires large image datasets with object level annotations, which is very laborious. Xiong et al. [13] proposed Panicle-SEG, which uses a combination of CNN and entropy rate superpixel (ERS) optimization to segment rice panicles from crop images. Since our task requires us to get an estimate of the number of flowering panicles, pixel-wise segmentation of crop images such as in [13] is not necessary. In the context of sliding window methods, Bai et al. [14] used a three-stage cascade method to detect rice spikes in crop images and thereby observe the heading stage. For each patch extracted from the sliding window method, an SVM classifier is applied pixelwise to detect if the patch is a spike patch. Later, a gradient histogram method and a CNN are used to refine the classification. On the other hand, our method just requires a single pass through a CNN to detect a flowering region. This saves the computation time required to train an SVM and to apply it around each pixel in a given patch. Guo et al. [19] proposed a robust approach to detect rice flowering panicles from high definition RGB images of field taken under natural conditions. They use a sliding window method in conjunction with an SVM classifier trained on SIFT [20] features. When compared to the above studies, our approach uses a much simpler algorithm to detect flowering regions in images. Instead of using multi-step classification methods, a sliding window based mechanism is used in conjunction with a CNN to detect flowering regions in a high definition image. The number of flowering regions in an image gives a statistical estimate of the number of flowering panicles exserted. The heading date is determined by observing the date at which 50% of the flowering panicles have been exserted. One important advantage of using a CNN is that, instead of using hand-crafted image features like SIFT, the features are automatically learnt from the data. In order to demonstrate the reliability and trustworthiness of the proposed system, GradCAM [21], an existing method in the literature is used to provide visual explanations for the decisions made by the CNN model used in the panicle detection algorithm. For a real-world deployable intelligent system, we believe that the explainability and transparency of the system is vital.
Our aim is to estimate the heading date in a rice crop using a fast automatic system based on computer vision and deep learning. This should eliminate the need for manual visual inspection of crops, which is both tedious and time-consuming. The contributions of our work are: (1) using a deep neural network to detect flowering regions from ground-level images of paddy rice, and (2) counting the detected flowering regions to estimate the heading date of the crop. An overview of the proposed method can be seen in Fig. 1. We evaluate the performance of our method on our dataset of five time-series RGB image sequences of three different crop varieties of paddy rice, namely Kinmaze, Kamenoo and Koshihikari. We compare our method with the manual approach to heading date measurement and observe that our automatic method estimates the heading stage with a mean absolute error of 1 day. From the results, it can be concluded that our method has the potential to be used for estimating the yield of the crop as well as to aid in making informative crop management decisions.
Figure showing various stages of our proposed method. Given a (1) time-series sequence of crop images, our sliding window + CNN method is applied on each image to perform (2) flowering region detection. Then, (3) the number of detected flowering regions are counted after which the (4) heading stage graphs are plotted
Training data. Examples of patches from the training dataset
An overview of the proposed method can be seen in Fig. 1. The input to our system is a time-series sequence of images (across different days and times) of a given crop variety taken at a particular location. For each image in the sequence, we use a sliding window mechanism to detect flowering regions. At each position of the sliding window, a CNN classifier predicts if the current window contains a flowering panicle. In this way, we detect and count the number of flowering regions in each image. For a sequence of images taken with a single camera at a specific aspect ratio, it is easy to see that the number of flowering regions (windows) in an image is directly proportional to the number of flowering panicles present in the image. Therefore, we use the number of detected flowering regions in an image as a proxy for the number of flowering panicles present. We use these region counts to draw flowering graphs and observe the heading stage.
The field server system was set up in our fields at the Institute for Sustainable Agro-ecosystem Services, University of Tokyo. The setup used for image acquisition is as follows. A Canon EOS Kiss X5, a digital single-lens reflex (DSLR) camera, was used as part of a field server system to acquire the experimental images. The captured images were then automatically uploaded to Flickr, a free cloud service, via a 3G internet connection. The uploaded images were automatically obtained by an agent system [22] and saved into a database of the National Agriculture and Food Research Organization. For the acquisition of the Kinmaze and Kamenoo datasets, the cameras were set up at a height of around 1.5 m from the ground. The field of view of the cameras was approximately \(138\,{\mathrm{cm}}\times 96\,{\mathrm{cm}}\,\) (focal length 24 mm), corresponding to an image resolution of \(5184 \times 3456\) pixels. Using this setup, time-series images were acquired every 5 min from 08:00 to 16:00 between and including days 84 and 91.
Table 1 Details of image acquisition. For each dataset (Kinmaze, Kamenoo, Koshihikari-1, Koshihikari-2 and Koshihikari-3), the table lists the days from transplanting, the planting density (plants/m²), the field of view (138 cm × 96 cm for Kinmaze and Kamenoo; 180 cm × 120 cm for the Koshihikari datasets), and the number of images acquired.
For the three Koshihikari datasets, the field of view of the cameras was approximately \(180\,{\mathrm{cm}}\times 120\,{\mathrm{cm}}\) (focal length 18 mm). Using this setup, the images were acquired between and including days 66 and 74. The captured images have a resolution of \(5184 \times 3456\) pixels. Table 1 shows further details regarding image acquisition.
Training dataset
The CNN model needs to differentiate between a flowering and a non-flowering patch. To gather the training data required to train our CNN model, we chose to annotate 500 images from the Koshihikari-3 dataset. Specifically, we manually drew tight bounding boxes around the flowering regions in those images. From the annotated images, we extracted 3000 patches which correspond to the annotated flowering regions. These patches are labeled with class flower which is a positive class. Similarly, we extracted background patches randomly from the non-annotated parts of the said 500 images to obtain 3000 patches which are labeled with class non-flower which we consider a negative class. In summary, we have a training dataset of 3000 images of positive class and 3000 images of negative class. Before training, we resize the patches to a fixed size of \(224 \times 224\) pixels. Using these images, our CNN model is trained to classify a patch into one of the two classes. Figure 2 shows examples of the patches present in the training dataset.
Detection evaluation on validation set. Examples of flowering region detection on the validation set. The ground truth boxes are shown in red and the predicted flowering regions are shown in blue
Daily flowering counts. Predicted flowering regions vs actual daily flowering panicle counts in Kinmaze (left) and Kamenoo (right) datasets
Generalization is an important characteristic of a machine learning model. Simply put, a model trained on one dataset should be able to perform well on similar datasets on which it wasn't trained. To assess the generalization capability of our model, we gathered training patches only from one dataset i.e., Koshihikari-3 and tested our model on all the five datasets.
Validation and test datasets
We evaluate (1) the detection performance and (2) the heading date estimation accuracy of our model separately. To evaluate the detection performance of our method, we create a validation set of images as follows. We choose 15 images from each of the five datasets mentioned in Table 1. We pick three different time slots for choosing images: 8 a.m.–9 a.m., 11 a.m.–12 p.m. and 3 p.m.–4 p.m. We ensure that the timestamps of the chosen 15 images in any given dataset are equally distributed among these three time slots. We do this to test the robustness of our model in detecting flowering regions under various lighting conditions. From each of the 15 images, we randomly crop out a \(1000 \times 1000\) portion of the image and draw tight bounding boxes around the flowering panicles present in the image. In summary, the validation set contains 75 annotated images of size \(1000 \times 1000\). Note that the validation set is not used to evaluate the heading stage estimation performance, which requires counting the flowering regions. Thus, randomly cropping out a portion of the full image does not affect the evaluation method because the validation set is solely used to evaluate the detection performance of the model.
To assess the heading stage estimation accuracy of our method, we apply our method to all the five sequences of images given in Table 1 and report the predicted heading date. In other words, we consider those five sequences as our test set.
Training a CNN end to end
We train a Convolutional Neural Network (CNN) to learn the mapping between the image patches and their labels in the training dataset. A CNN is a specially designed Artificial Neural Network (ANN) generally used to learn patterns and solve computer vision tasks from large amounts of image data. It allows for automatic feature extraction and pattern classification within its architecture. Basically, ANNs are function approximators which are generally used to learn the relationship between high dimensional input and output data. ANNs consist of several computational points called nodes connected together in the form of a directed acyclic graph. The nodes in the ANN are grouped into layers. Generally, the input data passes through one or more hidden layers sequentially before passing through the final layer to obtain the output. The choice of the number of nodes, type of nodes and number of layers constitutes the architecture of the ANN. Stacking multiple hidden layers together to form a 'deep' network is commonly done in order to get better representations of data.
In the current study, we use the ResNet-50 [23] architecture which is a CNN model having state-of-the-art results in image classification. For a 50-layer deep network, it is evident that we need massive amounts of data to train the network. But it is generally difficult and time-consuming to obtain massive annotated datasets especially in the agricultural domain. Therefore, we apply the widely used technique of transfer learning. We use a pretrained ResNet-50 model trained on the ImageNet [24] dataset which is the source domain. Now, we remove the last layer in ResNet-50 i.e., the 1000-way softmax layer and replace it with a single node sigmoid layer which gives the probability of the class being positive (flower). The weights in the model are now finetuned with data from our target domain i.e., the training dataset of patches. The process of feature extraction and classification is not separated in this case. The model is just trained end-to-end with our training data. The convolutional layers are responsible for generating the feature descriptors for the images. The sigmoid layer at the end takes these features as input and outputs the probability of the input image belonging to a positive class. The model is trained for 3 epochs using Stochastic Gradient Descent with a learning rate of 0.001 and momentum of 0.9.
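To make the setup above concrete, here is a minimal fine-tuning sketch (ours, in PyTorch; the paper's training code is not published here, so the data pipeline below is a dummy stand-in for the real 224 × 224 flower/non-flower patches):

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-50 (the source domain for transfer learning).
model = models.resnet50(pretrained=True)
# Replace the 1000-way softmax head with a single-logit head; a sigmoid on
# this logit gives the probability that a patch contains a flowering panicle.
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()  # sigmoid + binary cross-entropy
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

# Dummy stand-in for the real patch dataset (assumption, for runnability).
X = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,)).float()
train_loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(X, y), batch_size=4)

for epoch in range(3):  # the paper fine-tunes for 3 epochs
    for patches, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(patches).squeeze(1), labels)
        loss.backward()
        optimizer.step()
```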
To test our model on the full images, we run a sliding window over each image. At every position of the sliding window, the model classifies the patch of the image beneath the sliding window into one of the two classes. If the model classifies the patch as a flower, then a bounding box is drawn over that sliding window as shown in Fig. 1.
Change in flowering counts. Change in number of predicted flowering regions versus change in number of actual daily flowering panicle counts in Kinmaze (left) and Kamenoo (right) datasets
Sliding window parameter selection
In a sliding window mechanism, there are two important parameters to decide: (1) the dimensions of the window and (2) stride (step length) of the window. We have manually performed experiments on the validation dataset and empirically decided the sliding window dimensions and stride length for each dataset as shown in Table 2. The reason for having different parameters for different datasets is the fact that, despite having the camera at a fixed location above the ground, the plant height may vary for different crops. Due to the variation in plant height, the average size of flowering panicles as observed by the camera might not be consistent across different datasets. Therefore, we empirically choose the sliding window parameters separately for each dataset.
Flowering region detection
Since the images in each of the five datasets are in chronological order, the first step to determine the heading date is to detect the flowering regions in the images and get an estimate of the flowering panicle count. We use a sliding window mechanism to detect flowering regions in each image. The procedure of flowering region detection is described in Fig. 1. At each position of the sliding window, the patch of the image under the window is extracted and passed through a Convolutional Neural Network (CNN). We define a flowering patch to be an image patch containing a flowering panicle. If the patch is classified by the CNN as a flowering patch, then a bounding box is automatically drawn on the boundaries of the sliding window. Once the model is trained, the model is evaluated on the test images. We use the previously mentioned five datasets as the test datasets. The procedure of testing the model on a dataset is as follows. For each image in the dataset, a sliding window is applied on the image. For each position of the sliding window, the CNN classifier detects if there is a flowering panicle in that patch. Using this process, we count the number of patches classified as flowering regions in each image.
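A sketch of this detection pass (ours; the window size and stride are illustrative, the per-dataset values come from Table 2, and `model` is the fine-tuned classifier from the previous sketch):

```python
import torch
import torch.nn.functional as F

def detect_flowering_regions(image, model, win=140, stride=70, thresh=0.5):
    """Slide a window over a 3 x H x W float image tensor and return
    (x, y, win, win) boxes whose patch the CNN classifies as flowering."""
    model.eval()
    boxes = []
    _, H, W = image.shape
    with torch.no_grad():
        for top in range(0, H - win + 1, stride):
            for left in range(0, W - win + 1, stride):
                patch = image[:, top:top + win, left:left + win].unsqueeze(0)
                # Resize the window to the CNN input size of 224 x 224.
                patch = F.interpolate(patch, size=(224, 224), mode="bilinear")
                prob = torch.sigmoid(model(patch)).item()
                if prob >= thresh:
                    boxes.append((left, top, win, win))
    return boxes
```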
Flowering region detection. Examples of flowering region detection in datasets Kinmaze (left), Kamenoo (right)
Flowering stage graphs for Kinmaze and Kamenoo crops. The estimated 50% flowering day for Kinmaze is day 88. Similarly, the estimated 50% flowering day for Kamenoo is day 86
Heading date estimation
Once we have the flowering panicle counts for each image, we can estimate the day when \(50\%\) flowering is reached which is a highly useful metric to determine the heading date of the crop. The heading stages are generally identified by percentages. Since heading stage is characterized by the exsertion of the rice panicle, the heading date can be marked as the date when 50% of the panicles have exserted [1]. For each dataset, we plot the cumulative distribution of detected flowering panicles against the time at which each image is captured. This allows us to find the day where 50% of the flowering has taken place.
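A sketch of the 50%-flowering-day computation (ours; `days` and `counts` are assumed to hold the day index and the flowering-region count of each image in chronological order):

```python
import numpy as np

def estimate_heading_date(days, counts):
    """Return the first day at which the cumulative number of detected
    flowering regions reaches 50% of the sequence total."""
    days = np.asarray(days)
    cum = np.cumsum(counts)
    half = 0.5 * cum[-1]
    idx = int(np.argmax(cum >= half))  # first index reaching 50%
    return days[idx]

# Example: counts per day for a toy 6-day sequence.
print(estimate_heading_date([84, 85, 86, 87, 88, 89],
                            [2, 5, 20, 60, 40, 10]))  # -> 87
```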
Design decisions
Feature extraction versus feature learning
The Scale Invariant Feature Transform (SIFT) algorithm, as used in [19], is a feature extraction algorithm. It tries to create a scale invariant representation of an image. As mentioned in the seminal paper [20] by Lowe, the SIFT algorithm extracts image features that can be used for matching different images of an object. But the features extracted using the SIFT algorithm are human-engineered, in the sense that the algorithm looks for specific things like corners and edges in the image to decide its features.
On the other hand, a deep CNN performs a series of non-linear transformations on each image to extract denser and more abstract features. The parameters of these non-linear transformations are learned by training the network with labeled data. This allows the CNN to learn distinctive features by looking at the data instead of applying some fixed mathematical transformations. Training a deep neural network end-to-end is more efficient because the learned features adapt to the task at hand i.e., classification in this case. Also, the feature extraction and classification steps are fused together in a single network.
SVM versus sigmoid classification
In the SIFT based method [19], an SVM classifier is used to classify the patches based on the SIFT features. The ResNet-50 network used in this work instead uses a one node sigmoid layer to perform binary classification i.e., it gives the probability of the input image belonging to the positive class. This layer can be seen as a logistic regression classifier. The SVM and logistic regression classifier are known to show similar performance in classification. The characteristic that makes them different is the objective function that is optimized. SVMs use a hinge loss function which tries to find the maximum margin separation between two classes of data. Logistic regression generally uses a cross-entropy loss as the cost function. The outputs of the logistic regression classifier can be directly interpreted as the positive class probability.
Generating visual explanations
After training and testing the CNN model, we generate visual explanations to observe the part of the image that the model looks at before detecting the presence of a flowering panicle in an image patch. For this, we take a random image from the Kinmaze dataset and run our panicle detection algorithm, which draws bounding boxes around flowering panicles in the image. Now, we randomly select a few bounding boxes and extract the patches of the image inside the bounding boxes. GradCAM [21] is used to generate visual explanations for each image patch. In the GradCAM algorithm, we first pass the image through the CNN to get class probabilities. Since the model detected a flowering panicle in this patch, the probability of the 'flower' class would be the highest. Now, the gradient of the 'flower' class logit is taken with respect to each of the output feature maps of the final convolutional layer in the model. Then, global average pooling is used to calculate the weight of each feature map, i.e., the importance of each feature map in causing the model to detect the presence of a flowering panicle. Finally, a heatmap is generated by taking a weighted combination of the feature maps in the final convolutional layer and applying the ReLU activation function at the end.
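The procedure above can be sketched in a few lines (ours, using PyTorch hooks on the last convolutional block of torchvision's ResNet-50; a full implementation would also normalize and overlay the heatmap):

```python
import torch
import torch.nn.functional as F

def grad_cam(model, patch, target_layer):
    """Grad-CAM heatmap for the positive ('flower') logit of a binary CNN.
    `patch` is a 1x3x224x224 tensor; `target_layer` is e.g. model.layer4."""
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(a=go[0]))
    model.eval()
    logit = model(patch).squeeze()   # the single 'flower' logit
    model.zero_grad()
    logit.backward()                 # d(logit) / d(feature maps)
    h1.remove(); h2.remove()
    # Global-average-pool the gradients to get per-map weights, then take
    # a weighted sum of the feature maps and apply ReLU, as described above.
    w = grads["a"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))
    return F.interpolate(cam, size=patch.shape[2:], mode="bilinear")
```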
Grad CAM. Grad CAM outputs of flowering panicle patches with respect to the final convolutional layer of the ResNet-50 CNN are shown here. The red regions are on the part of the patch depicting the anthesis of flowering panicle, thus supporting our claim that the model has actually learnt specific features of the flowering panicle
We evaluate the flowering region detection performance of our method on the validation set described in the Methods section. Using the proposed method, we get the predicted flowering regions for each of the 75 images in the dataset. Note that, as shown in Fig. 3, the ground truth annotations for the images are tight bounding boxes around the flowering panicles whereas the predicted bounding boxes are fixed size boxes detecting the flowering regions. Therefore, the standard detection evaluation metric of Intersection over Union (IoU) cannot be used to evaluate the performance of this model. Instead, we propose the following metric to evaluate the correctness of a predicted bounding box.
$$\text{Intersection Ratio} = \frac{\text{Area of Overlap}}{\text{Area of Predicted Box}}$$
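A sketch of this metric in code (ours; boxes are `(x, y, w, h)` tuples):

```python
def intersection_ratio(pred, gt):
    """Fraction of the predicted box's area covered by the ground-truth box."""
    px, py, pw, ph = pred
    gx, gy, gw, gh = gt
    ix = max(0, min(px + pw, gx + gw) - max(px, gx))  # overlap width
    iy = max(0, min(py + ph, gy + gh) - max(py, gy))  # overlap height
    return (ix * iy) / float(pw * ph)

# A prediction counts as positive if IR >= 0.5 for some ground-truth box;
# precision, recall, and F1-score then follow as usual.
```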
Comparison of detection performance of our model (CNN) with our previous model [19] on the validation set
(Table columns: validation dataset, number of images, sliding window size such as 140 × 140, and F1-score for each method.)
Simply put, intersection ratio (IR) is the portion of the predicted bounding box which overlaps the ground truth bounding box. A predicted bounding box is considered positive if its \(IR \ge 0.5\), else it is considered negative. Using this metric, we calculate the standard binary classification metrics such as Precision, Recall and F1-Score for each dataset. Table 2 shows the detection results of our model on the validation set. It can be seen that our current method outperforms our previous method which used SIFT to extract features and an SVM to classify patches. From the results, it can be concluded that our current method generalizes well and has a good detection performance on images from all the five sequences.
Heading stage estimation
We assess the heading stage estimation performance of our method on the five image sequences mentioned in Table 1. For each image sequence, we use our detection pipeline to detect and count the number of flowering regions in each image. Given a fixed window size, it is easy to see that the number of detected flowering regions is directly proportional to the number of flowering panicles present in an image. In other words, the more flowering panicles there are, the more flowering regions are detected, and vice versa. To evaluate this hypothesis, we have manually counted the number of flowering panicles present in each image in the Kinmaze and Kamenoo sequences. Figure 4 shows the comparison between the actual flowering panicle counts and the number of detected flowering regions. The Pearson Correlation Coefficient (PCC) between the ground truth panicle counts and the number of detected flowering regions was found to be 0.844 for Kinmaze and 0.711 for Kamenoo. These results support our hypothesis that the number of detected flowering regions is indeed a good estimate of the number of flowering panicles present. To further strengthen this hypothesis, we have plotted in Fig. 5 the change in number of flowering regions detected and the change in number of flowering panicles present. It can be seen that, in general, if the number of flowering panicles decreases at a given point, the number of detected flowering regions also decreases. Examples of images in Kinmaze and Kamenoo datasets and their flowering region detection outputs are shown in Fig. 6. A similar set of images for the three Koshihikari datasets can be found in an additional file (see Additional file 1). To evaluate this method of estimating the heading stage, we need to manually find the heading date of the crop by visual inspection. Since the recording of the date of 50% flowering stage is subjective and strongly depends on the experience and intuition of the observers, we also add the dates of 1st panicle appearance in the corresponding crop as reference since normally more than 70% of the ears will come out within the first 3 days after the 1st panicle appearance has been observed [2]. Note that for paddy rice, flowering begins with panicle exsertion [1]. Figure 7 shows the flowering plots of Kinmaze and Kamenoo datasets. An additional file shows the flowering plots for the Koshihikari-1, Koshihikari-2 and Koshihikari-3 datasets (see Additional file 2). Table 3 shows the comparison of 50% flowering stage between the field check and our proposed method.
Comparison of proposed method and manual observation for estimation of heading stage
(Table columns: transplanting dates, 1st panicle appearance (field observed), 50% flowering dates (field observed), 50% flowering dates (estimated), estimation error in days, and mean absolute error in days. Estimation error = (estimated) − (field observed) 50% flowering date.)
It can be concluded from the results in Table 3 that our proposed method is fairly accurate in identifying the heading stage and estimating the heading date in paddy rice. With the definition of heading date [1] that we used, it has become quite simple to evaluate the performance of the CNN model. We have proposed a simple automatic method to observe the heading stage of rice crops. Since the observation of heading date requires an estimate of the number of flowering panicles exserted, we are not interested in accurately localizing flowering panicles in the images. Accurate localization of objects is generally done using object detection networks such as Faster R-CNN [25], which require bounding box level annotated data for training. In other words, the images in the training data need to be annotated by drawing tight bounding boxes around the objects of interest, which are flowering panicles in our case. Getting a large number of bounding box level annotated images is both time-consuming and expensive when compared to labeling an image for classification. In our work, we completely avoid this expense by using a sliding window mechanism in conjunction with a CNN classifier. The boxes predicted by our method may not always tightly localize the flowering panicles, but these errors can be tolerated in our application because our end goal is not to accurately localize flowering panicles, but to observe the heading stage, for which an estimate of the flowering panicle count is sufficient.
GradCAM, a visual explanation method has been used to visualize what part of the image patch the CNN model "looks at" before detecting a flowering panicle in a given patch. This visualisation would enable the model to reason its detections. Ideally, the detection of a flowering panicle in a patch should be based on the presence of flower-specific parts in the patch. The GradCAM outputs in Fig. 8 support our proposition that this is indeed the case with the proposed CNN model. The red regions in the output heatmaps represent the pixels in the patch which influenced the detection the most. It can be seen that the red regions are on the part of the patch depicting the anthesis of flowering panicle, thus supporting our claim that the model has actually learnt specific features of the flowering panicle.
The proposed method, however, has some limitations. The current method requires high-resolution static and ground-level images of rice crop to be able to efficiently detect flowering panicles and estimate the heading date. A possible next step in this research could be to study the performance of CNNs on images taken from fully automatic Unmanned Air Vehicles (UAVs). This is because image acquisition is much simpler and faster when UAVs are used. Assessing various phenotypic traits from UAV-based images would be immensely helpful to the agricultural community owing to the simplicity of deploying drones and the ability to collect and analyze data in real time.
Acknowledgements
We thank the anonymous reviewers for their valuable comments and suggestions.
Authors' contributions
SVD analyzed the data and interpreted results with the input of WG and VNB, SN supervised the entire study, WG conceived, designed, and coordinated the field experiments, TF developed the field sever and image acquisition modules for the field monitoring system. SVD wrote the paper with input from all authors. All authors read and approved the final manuscript.
This study was partially funded by Indo-Japan DST-JST SICORP program "Data Science-based Farming Support System for Sustainable Crop Production under Climatic Change" and CREST Program "Knowledge Discovery by Constructing AgriBigData" (JPMJCR1512) from Japan Science and Technology Agency.
13007_2019_457_MOESM1_ESM.pdf (3 mb)
Additional file 1. Flowering Region Detection in Koshihikari. It is a figure depicting flowering region detection in crop images of Koshihikari-1, Koshihikari-2 and Koshihikari-3.
13007_2019_457_MOESM2_ESM.pdf (24 kb)
Additional file 2. Flowering Stage Graphs for Koshihikari. It contains graphs depicting observed 50% flowering stage using crop images in Koshihikari-1, Koshihikari-2 and Koshihikari-3.
References
1. Yoshida S. Fundamentals of rice crop science. Los Baños: International Rice Research Institute; 1981. p. 56.
2. Takeoka Y, Shimizu M, Wada T. Morphology and development of reproductive organs. In: Matsuo T, Hoshikawa K, editors. Science of the rice plant. Volume 1: morphology. Tokyo: Food and Agriculture Policy Research Center; 1993. p. 293–412.
3. Gao H, Jin M, Zheng XM, et al. Days to heading 7, a major quantitative locus determining photoperiod sensitivity and regional adaptation in rice. Proc Natl Acad Sci. 2014;111(46):16337–42. https://doi.org/10.1073/pnas.1418204111
4. Hu Y, Li S, Xing Y. Lessons from natural variations: artificially induced heading date variations for improvement of regional adaptation in rice. Theor Appl Genet. 2019;132(2):383–94. https://doi.org/10.1007/s00122-018-3225-0
5. Okada R, Nemoto Y, Endo-Higashi N, Izawa T. Synthetic control of flowering in rice independent of the cultivation environment. Nat Plants. 2017;3:17039.
6. Yano M, Kojima S, Takahashi Y, Lin H, Sasaki T. Genetic control of flowering time in rice, a short-day plant. Plant Physiol. 2001;127(4):1425–9.
7. Zhang Z-H, Zhu Y-J, Wang S-L, Fan Y, Zhuang J-Y. Importance of the interaction between heading date genes hd1 and ghd7 for controlling yield traits in rice. Int J Mol Sci. 2019;20:516. https://doi.org/10.3390/ijms20030516
8. Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw. 2015;61:85–117.
9. Ribera J, Chen Y, Boomsma C, Delp EJ. Counting plants using deep learning. In: 2017 IEEE global conference on signal and information processing; 2017. p. 1344–8. https://doi.org/10.1109/GlobalSIP.2017.8309180
10. Ghosal S, Blystone D, Singh AK, Ganapathysubramanian B, Singh A, Sarkar S. An explainable deep machine vision framework for plant stress phenotyping. Proc Natl Acad Sci. 2018;115(18):4613–8.
11. Ise T, Minagawa M, Onishi M. Identifying 3 moss species by deep learning, using the "chopped picture" method. Open J Ecol. 2017. https://doi.org/10.4236/oje.2018.83011
12. Sa I, Ge Z, Dayoub F, Upcroft B, Perez T, McCool C. Deepfruits: a fruit detection system using deep neural networks. Sensors. 2016;16(8):1222.
13. Xiong X, Duan L, Liu L, Tu H, Yang P, Wu D, Chen G, Xiong L, Yang W, Liu Q. Panicle-seg: a robust image segmentation method for rice panicles in the field based on deep learning and superpixel optimization. Plant Methods. 2017;13(1):104.
14. Bai X, Cao Z, Zhao L, Zhang J, Lv C, Li C, Xie J. Rice heading stage automatic observation by multi-classifier cascade based rice spike detection method. Agric For Meteorol. 2018;259:260–70. https://doi.org/10.1016/j.agrformet.2018.05.001
15. Hasan MM, Chopin JP, Laga H, Miklavcic SJ. Detection and analysis of wheat spikes using convolutional neural networks. Plant Methods. 2018;14(1):100. https://doi.org/10.1186/s13007-018-0366-8
16. Pound MP, Atkinson JA, Wells DM, Pridmore TP, French AP. Deep learning for multi-task plant phenotyping. In: 2017 IEEE international conference on computer vision workshops (ICCVW); 2017. p. 2055–63. https://doi.org/10.1109/ICCVW.2017.241
17. Kamilaris A, Prenafeta-Boldú FX. Deep learning in agriculture: a survey. Comput Electron Agric. 2018;147:70–90. https://doi.org/10.1016/j.compag.2018.02.016
18. Zhu Y, Cao Z, Lu H, Li Y, Xiao Y. In-field automatic observation of wheat heading stage using computer vision. Biosyst Eng. 2016;143:28–41.
19. Guo W, Fukatsu T, Ninomiya S. Automated characterization of flowering dynamics in rice using field-acquired time-series RGB images. Plant Methods. 2015;11:7.
20. Lowe DG. Distinctive image features from scale-invariant keypoints. Int J Comput Vis. 2004;60(2):91–110.
21. Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-cam: visual explanations from deep networks via gradient-based localization. In: 2017 IEEE international conference on computer vision (ICCV); 2017. p. 618–26. https://doi.org/10.1109/ICCV.2017.74
22. Fukatsu T, Hirafuji M, Kiura T. An agent system for operating web-based sensor nodes via the internet. J Robot Mechatron. 2005;18:186.
23. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR); 2016. p. 770–8. https://doi.org/10.1109/CVPR.2016.90
24. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Fei-Fei L. Imagenet large scale visual recognition challenge. Int J Comput Vis. 2015;115(3):211–52.
25. Ren S, He K, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell. 2017;39(6):1137–49. https://doi.org/10.1109/TPAMI.2016.2577031
1. Department of Computer Science and Engineering, Indian Institute of Technology Hyderabad, Hyderabad, India
2. International Field Phenomics Research Laboratory, Graduate School of Agricultural and Life Sciences, The University of Tokyo, Nishi-Tokyo, Japan
3. Institute of Agricultural Machinery, National Agriculture and Food Research Organization, Tsukuba, Japan
4. Graduate School of Life and Environmental Sciences, University of Tsukuba, Tsukuba, Japan
Desai, S.V., Balasubramanian, V.N., Fukatsu, T. et al. Plant Methods (2019) 15: 76. https://doi.org/10.1186/s13007-019-0457-1
Accepted 02 July 2019
Publisher Name BioMed Central
Mateo Díaz
Optimization Seminar
Escaping strict saddle points of the Moreau envelope in nonsmooth optimization (with D. Davis and D. Drusvyatskiy)
Preprint, 2021.
Recent work has shown that stochastically perturbed gradient methods can efficiently escape strict saddle points of smooth functions. We extend this body of work to nonsmooth optimization, by analyzing an inexact analogue of a stochastically perturbed gradient method applied to the Moreau envelope. The main conclusion is that a variety of algorithms for nonsmooth optimization can escape strict saddle points of the Moreau envelope at a controlled rate. The main technical insight is that typical algorithms applied to the proximal subproblem yield directions that approximate the gradient of the Moreau envelope in relative terms.
Practical Large-Scale Linear Programming using Primal-Dual Hybrid Gradient (with D. Applegate, O. Hinder, H. Lu, M. Lubin, B. O'Donoghue, and W. Schudy)
We present PDLP, a practical first-order method for linear programming (LP) that can solve to the high levels of accuracy that are expected in traditional LP applications. In addition, it can scale to very large problems because its core operation is matrix-vector multiplications. PDLP is derived by applying the primal-dual hybrid gradient (PDHG) method, popularized by Chambolle and Pock (2011), to a saddle-point formulation of LP. PDLP enhances PDHG for LP by combining several new techniques with older tricks from the literature; the enhancements include diagonal preconditioning, presolving, adaptive step sizes, and adaptive restarting. PDLP improves the state of the art for first-order methods applied to LP. We compare PDLP with SCS, an ADMM-based solver, on a set of 383 LP instances derived from MIPLIB 2017. With a target of $10^{-8}$ relative accuracy and 1 hour time limit, PDLP achieves a 6.3x reduction in the geometric mean of solve times and a 4.6x reduction in the number of instances unsolved (from 227 to 49). Furthermore, we highlight standard benchmark instances and a large-scale application (PageRank) where our open-source prototype of PDLP, written in Julia, outperforms a commercial LP solver.
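As background for readers, here is a bare-bones sketch of the underlying PDHG iteration on the LP saddle point $\min_{x \geq 0} \max_{y} c^{\top} x+y^{\top}(b-A x)$ (ours; it includes none of the PDLP enhancements, i.e., no preconditioning, presolving, adaptive steps, or restarts):

```python
import numpy as np

def pdhg_lp(c, A, b, iters=10000):
    """Vanilla PDHG for min c@x s.t. A@x = b, x >= 0 (equality-form LP).
    Step sizes satisfy tau * sigma * ||A||^2 <= 1, as required for convergence."""
    m, n = A.shape
    tau = sigma = 1.0 / np.linalg.norm(A, 2)  # 1 / spectral norm of A
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        # Primal gradient step followed by projection onto x >= 0,
        # then a dual step using the extrapolated primal point 2*x_new - x.
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)
        y = y + sigma * (b - A @ (2 * x_new - x))
        x = x_new
    return x, y

# Toy LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum x = (1, 0)).
x, y = pdhg_lp(np.array([1.0, 2.0]), np.array([[1.0, 1.0]]), np.array([1.0]))
```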
Optimal Convergence Rates for the Proximal Bundle Method (with B. Grimmer)
We study convergence rates of the classic proximal bundle method for a variety of nonsmooth convex optimization problems. We show that, without any modification, this algorithm adapts to converge faster in the presence of smoothness or a Hölder growth condition. Our analysis reveals that with a constant stepsize, the bundle method is adaptive, yet it exhibits suboptimal convergence rates. We overcome this shortcoming by proposing nonconstant stepsize schemes with optimal rates. These schemes use function information such as growth constants, which might be prohibitive in practice. We complete the paper with a new parallelizable variant of the bundle method that attains near-optimal rates without prior knowledge of function parameters. These results improve on the limited existing convergence rates and provide a unified analysis approach across problem settings and algorithmic details. Numerical experiments support our findings and illustrate the effectiveness of the parallel bundle method.
Infeasibility detection with primal-dual hybrid gradient for large-scale linear programming (with D. Applegate, H. Lu, and M. Lubin)
We study the problem of detecting infeasibility of large-scale linear programming problems using the primal-dual hybrid gradient method (PDHG) of Chambolle and Pock (2011). The literature on PDHG has mostly focused on settings where the problem at hand is assumed to be feasible. When the problem is not feasible, the iterates of the algorithm do not converge. In this scenario, we show that the iterates diverge at a controlled rate towards a well-defined ray. The direction of this ray is known as the infimal displacement vector. The first contribution of our work is to prove that this vector recovers certificates of primal and dual infeasibility whenever they exist. Based on this fact, we propose a simple way to extract approximate infeasibility certificates from the iterates of PDHG. We study three different sequences that converge to the infimal displacement vector: the difference of iterates, the normalized iterates, and the normalized average. All of them are easy to compute, and thus the approach is suitable for large-scale problems. Our second contribution is to establish tight convergence rates for these sequences. We demonstrate that the normalized iterates and the normalized average achieve a convergence rate of $O(1/k)$, improving over the known rate of $O(1/\sqrt{k})$. This rate is general and applies to any fixed-point iteration of a nonexpansive operator. Thus, it is a result of independent interest since it covers a broad family of algorithms, including, for example, ADMM, and can be applied to settings beyond linear programming, such as quadratic and semidefinite programming. Further, in the case of linear programming we show that, under nondegeneracy assumptions, the iterates of PDHG identify the active set of an auxiliary feasible problem in finite time, which ensures that the difference of iterates exhibits eventual linear convergence to the infimal displacement vector.
Efficient Clustering for Stretched Mixtures: Landscape and Optimality (with K. Wang and Y. Yan)
NeurIPS, 2020.
This paper considers a canonical clustering problem where one receives unlabeled samples drawn from a balanced mixture of two elliptical distributions and aims for a classifier to estimate the labels. Many popular methods including PCA and k-means require individual components of the mixture to be somewhat spherical, and perform poorly when they are stretched. To overcome this issue, we propose a non-convex program seeking for an affine transform to turn the data into a one-dimensional point cloud concentrating around -1 and 1, after which clustering becomes easy. Our theoretical contributions are two-fold: (1) we show that the non-convex loss function exhibits desirable landscape properties as long as the sample size exceeds some constant multiple of the dimension, and (2) we leverage this to prove that an efficient first-order algorithm achieves near-optimal statistical precision even without good initialization. We also propose a general methodology for multi-class clustering tasks with flexible choices of feature transforms and loss objectives.
Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence (with V. Charisopoulos, Y. Chen, D. Davis, L. Ding, D. Drusvyatskiy)
Foundations of Computational Mathematics, 2020.
The task of recovering a low-rank matrix from its noisy linear measurements plays a central role in computational science. Smooth formulations of the problem often exhibit an undesirable phenomenon: the condition number, classically defined, scales poorly with the dimension of the ambient space. In contrast, we here show that in a variety of concrete circumstances, nonsmooth penalty formulations do not suffer from the same type of ill-conditioning. Consequently, standard algorithms for nonsmooth optimization, such as subgradient and prox-linear methods, converge at a rapid dimension-independent rate when initialized within constant relative error of the solution. Our framework subsumes such important computational tasks as phase retrieval, blind deconvolution, quadratic sensing, matrix completion, and robust PCA. Numerical experiments on these problems illustrate the benefits of the proposed approach.
Composite optimization for robust rank one bilinear sensing (with V. Charisopoulos, D. Davis, and D. Drusvyatskiy)
Information and Inference, 2020.
We consider the task of recovering a pair of vectors from a set of rank one bilinear measurements, possibly corrupted by noise. Most notably, the problem of robust blind deconvolution can be modeled in this way. We consider a natural nonsmooth formulation of the rank one bilinear sensing problem and show that its moduli of weak convexity, sharpness and Lipschitz continuity are all dimension independent, under favorable statistical assumptions. This phenomenon persists even when up to half of the measurements are corrupted by noise. Consequently, standard algorithms, such as the subgradient and prox-linear methods, converge at a rapid dimension-independent rate when initialized within a constant relative error of the solution. We complete the paper with a new initialization strategy, complementing the local search algorithms. The initialization procedure is both provably efficient and robust to outlying measurements. Numerical experiments, on both simulated and real data, illustrate the developed theory and methods.
The nonsmooth landscape of blind deconvolution
Workshop on Optimization for Machine Learning, 2019.
The blind deconvolution problem aims to recover a rank-one matrix from a set of rank-one linear measurements. Recently, Charisopoulos et al. introduced a nonconvex nonsmooth formulation that can be used, in combination with an initialization procedure, to provably solve this problem under standard statistical assumptions. In practice, however, initialization is unnecessary. As we demonstrate numerically, a randomly initialized subgradient method consistently solves the problem. In pursuit of a better understanding of this phenomenon, we study the random landscape of this formulation. We characterize in closed form the landscape of the population objective and describe the approximate location of the stationary points of the sample objective. In particular, we show that the set of spurious critical points lies close to a codimension two subspace. In doing this, we develop tools for studying the landscape of a broader family of singular value functions; these results may be of independent interest.
Local angles and dimension estimation from data on manifolds (with A. Quiroz and M. Velasco)
Journal of Multivariate Analysis, to appear 2019.
For data living in a manifold $M\subseteq \mathbb{R}^m$ and a point $p\in M$ we consider a statistic $U_{k,n}$ which estimates the variance of the angle between pairs of vectors $X_i-p$ and $X_j-p$, for data points $X_i$, $X_j$, near $p$, and evaluate this statistic as a tool for estimation of the intrinsic dimension of $M$ at $p$. Consistency of the local dimension estimator is established and the asymptotic distribution of $U_{k,n}$ is found under minimal regularity assumptions. Performance of the proposed methodology is compared against state-of-the-art methods on simulated data.
In Search of Balance: The Challenge of Generating Balanced Latin Rectangles (with C. Gomes, R. Le Bras)
CPAIOR 2017 Fourteenth International Conference on Integration of Artificial Intelligence and Operations Research Techniques in Constraint Programming.
Spatially Balanced Latin Squares are combinatorial structures of great importance for experimental design. From a computational perspective they present a challenging problem and there is a need for efficient methods to generate them. Motivated by a real-world application, we consider a natural extension to this problem, balanced Latin Rectangles. Balanced Latin Rectangles appear to be even more defiant than balanced Latin Squares, to such an extent that perfect balance may not be feasible for Latin rectangles. Nonetheless, for real applications, it is still valuable to have well balanced Latin rectangles. In this work, we study some of the properties of balanced Latin rectangles, prove the nonexistence of perfect balance for an infinite family of sizes, and present several methods to generate the most balanced solutions.
Compressed sensing of data with known distribution (with M. Junca, F. Rincón and M. Velasco)
Applied and Computational Harmonic Analysis, 2018.
Compressed sensing is a technique for recovering an unknown sparse signal from a small number of linear measurements. When the measurement matrix is random, the number of measurements required for perfect recovery exhibits a phase transition: there is a threshold on the number of measurements after which the probability of exact recovery quickly goes from very small to very large. In this work we are able to reduce this threshold by incorporating statistical information about the data we wish to recover. Our algorithm works by minimizing a suitably weighted $\ell_1$-norm, where the weights are chosen so that the expected statistical dimension of the corresponding descent cone is minimized. We also provide new discrete-geometry-based Monte Carlo algorithms for computing intrinsic volumes of such descent cones, allowing us to bound the failure probability of our methods.
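A minimal sketch of the recovery step (ours, in CVXPY; the uniform weights here are placeholders, whereas the paper chooses them to minimize the expected statistical dimension of the descent cone):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 100
A = rng.standard_normal((m, n))            # random measurement matrix
x_true = np.zeros(n); x_true[:5] = 1.0     # sparse signal to recover
b = A @ x_true                             # noiseless linear measurements
w = np.ones(n)                             # placeholder weights (assumption)

# Weighted l1 minimization subject to the measurement constraints.
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))), [A @ x == b])
prob.solve()
```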
For academic inquiries please email md825atcornelldotcom
How would a black hole power plant work?
A black hole power plant (BHPP) is something I'll define here as a machine that uses a black hole to convert mass into energy for useful work. As such, it constitutes the 3rd kind of matter-energy power (formerly "nuclear power") humans have entertained, the first two being fission and fusion. Putting aside the fact that the level of technological advancement needed for this is far beyond modern-day humans, it seems to be pretty well established among physicists that this is possible (and maybe someday inevitable). Personally, I feel like I don't understand the proposal because no one has really created a coherent picture of how it would work. To keep this focused, here are my objective questions:
Related to the conversion itself:
What mechanism of energy release are we talking about? Possibilities include radiation from accretion of material, Hawking radiation, magnetic conversion of rotational energy, and maybe others (EDIT: Manishearth's answer revealed that I don't know the possibilities myself). Can we rule some of these out pragmatically? How well are these mechanisms understood? Do we really have the physics/observations to back them up, or is it still on the forefront of physics?
How would the energy be harvested? Some possibilities include a thermal cycle (like a Rankine cycle), radiation to electricity conversion (like photovoltaics), magnetically induced currents, and charge movements. Could the particles of emission be too high energy to harness with known materials as a part of the machine structure? That would certainly apply to Hawking radiation from small BHs, do other options have the problem of overly-energetic emissions?
Related to application:
Is there anything we could use to change the Hawking radiation power output other than the BH mass? Would we be dancing on a knife's edge by using Hawking radiation as a power source? As you should know, a BH evaporation would be a fairly major cosmic event, and to get more power out, one would have to use a smaller BH, which presents a greater risk of catastrophe. This problem could be mitigated by using several BHs so a lower energy throughput would be needed for each, but then that risks BH collision which is also a catastrophic event. Does the spin, charge, or local gravitational potential affect the output? Is there any way for Hawking radiation to be a reasonably safe method of matter to energy conversion (even for an advanced civilization)?
What are the parameters for the total mass and energy output? Specifically, what would the total energy output divided by mass be, and what would the mass-to-energy conversion efficiency of new in-falling material be? For accretion, I've heard numbers like 30% of the in-falling material can be converted to energy and ejected back into space at the poles... but could we change and control that? How would a BHPP's parameters compare to our sun and current Earth-based power plants?
Sorry I couldn't wrap this up into a single question. I tried. To address the "on topic" concern, I hear people mention that tokamak fusion power research still has many unknowns that are in the domain of physics, as opposed to engineering, which will come later in its development. I tried to edit my question above to be actually physics in nature. I think I speak for most people when I say that I don't understand the vast majority of the physics of BHs. My question here is generally how those physics would play out in a machine that does useful matter-to-energy conversion.
black-holes
hawking-radiation
nuclear-engineering
Alan Rominger
$\begingroup$ Famous last words: That's just an engineering problem. At which point the speaker sneaks off leaving the hard work still to be done. $\endgroup$
– dmckee --- ex-moderator kitten
$\begingroup$ BHPPs don't use Hawking radiation. They extract ROTATIONAL energy from a spinning black hole (lots of mass, so lots of angular momentum/energy to be extracted). I doubt that Hawking radiation would be efficient. $\endgroup$
I don't know about extracting mass-energy from inside the black hole; it seems pretty inefficient to me. Hawking radiation isn't that powerful (as well as still being highly hypothetical), but building a Dyson sphere around a star is better. Remember, black holes are, well, black. If there was enough Hawking radiation, they would cease to be black and would shine like stars.
The efficient way to extract energy from a black hole is to extract its rotational energy. 20% [1] of a (rotating) black hole's mass-energy is in the form of rotational energy. This energy is not stored inside the black hole; rather, it is stored in the swirl of space outside the black hole (in the ergosphere). We can extract this energy by threading magnetic field lines through the black hole. The swirl of space swirls the magnetic fields, and this swirl creates current (I'm not sure of the exact treatment of this; classically it would be some form of electromagnetic induction, but this isn't classical physics). The current flows along the field lines and can be picked up.
Here's a picture from [1]:
UPDATE: See this also.
[1] Black Holes and Time Warps, Kip Thorne. Page 53.
Peter Mortensen
$\begingroup$ This is exciting! Apparently I thought that I knew a great deal more than I actually did. I will have to edit my question to allow for more processes. You mention a possibility that I didn't even think to mention. The superconducting coils do look like they come perilously close to the BH, but maybe not. I imagine this idea could use comparatively "small" BHs, and it's not clear how far out the magnetic influence is significant compared to the gravitational. The current lines are difficult to swallow - it doesn't look like inductive current. $\endgroup$
– Alan Rominger
$\begingroup$ Arxiv link discussing hawking radiation power generation (no claims as to quality): arxiv.org/abs/0908.1803v1 $\endgroup$
$\begingroup$ @Zassounotsukushi: Even I find it difficult to swallow. It was mentioned that it was by the same mechanism that powers quasars. I vaguely remember something called the Blandford-Znajek process for quasars where there is a similar current. I'll look it up tomorrow. The net doesn't seem to have much on it. $\endgroup$
$\begingroup$ @Zassounotsukushi Alright, wherever I look, the "current" part of the Blandford-Znajek process seems to be taken almost for granted. It seems that the strong magnetic field lines accelerate the electron/charged particle, and the swirl of space does something or the other to it. The end result is that a potential difference is developed and there's a current. $\endgroup$
$\begingroup$ I think we need someone who understands GR better to finish this off. $\endgroup$
With this answer, I am going to list some of my notes from the paper that @zephyr posted as a comment, http://arxiv.org/abs/0908.1803v1. Using Hawking radiation as a means of mass-to-energy conversion seems, in a word, absurd. The paper, however, addressed exactly that for use in powering a manned spaceship to the stars.
Hawking Radiation as a thrust source
Firstly, let me list the parameters of the BH discussed in the paper:
mass of $10^{12} kg$
radius of $10^{-10} m$, or $10^{-18} m$, it's not entirely clear
power of $P=a f(T) / R^2$, which is different depending on the numbers I try from the paper; my first try gives me $13\ \mathrm{W}$, which indicates that the radius of $10^{-10}\ \mathrm{m}$ is too big (a quick check against the standard Hawking formulas is sketched below).
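For a sanity check on these numbers (my own arithmetic, not from the paper): the standard Hawking formulas, ignoring greybody factors and the particle-species dependence, give for $M = 10^{12}\ \mathrm{kg}$ a horizon radius of about $1.5\times10^{-15}\ \mathrm{m}$ — so neither quoted radius is the Schwarzschild radius — and a luminosity of a few hundred megawatts:

```python
import math

G, c, hbar, kB = 6.674e-11, 2.998e8, 1.0546e-34, 1.381e-23
M = 1e12                                        # kg, the paper's black hole mass

R_s  = 2 * G * M / c**2                         # Schwarzschild radius
T    = hbar * c**3 / (8 * math.pi * G * M * kB) # Hawking temperature
P    = hbar * c**6 / (15360 * math.pi * G**2 * M**2)  # radiated power
t_ev = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)   # evaporation timescale

print(f"R_s  = {R_s:.2e} m")                    # ~1.5e-15 m
print(f"kT   = {kB * T / 1.602e-10:.3f} GeV")   # ~0.01 GeV
print(f"P    = {P:.2e} W")                      # ~3.6e8 W
print(f"t_ev = {t_ev / 3.15e7:.1e} yr")         # ~2.7e12 yr
```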
Such a spaceship, by the way, is an extraordinarily energy-intensive undertaking. A major fraction of the mass of any such spaceship would necessarily have to be converted to energy for the idea to be workable. Thus, the thrust device must have a very high mass-to-energy conversion efficiency. This is among the reasons that Hawking radiation may be the only viable option for certain trips.
When thinking about designing a spaceship, our considerations go beyond the craft itself. Earth-bound or solar-bound activities will be needed to build it and prepare the energy source. The paper compares Hawking radiation to an antimatter rocket. It makes the general claim that theorists don't expect antimatter production to take less than about $10^4$ units of energy input per unit of mass-energy created (an efficiency of $10^{-4}$). Now, all of the mass of the antimatter will be converted to photon propellant in service in the spaceship. The efficiency quantification for an artificial black hole is a little more complicated. So I made a picture.
Basically, you throw together some amount of mass so that it compresses tighter than its event horizon and you have a black hole. Obviously, this would be a very energetic process to do, so we have to include the kinetic energy used to throw this matter together to an extremely small point. As you're using it, though, you can add as much mass as you want to the BH with relative ease because once it's in the event horizon, it has added to the mass. Hawking radiation will be emitted continuously (whether you want it to or not), and at the end of the trip you'll be left with some mass as a BH that you didn't use. This is important to include for a spaceship because for a spaceship the matter-energy to energy conversion matters. For civilian power, generally the energy to energy conversion matters. I defined the matter-to-energy conversion above, but the energy-to-energy efficiency would be:
$$\eta = \frac{M+E+M'-M''}{E}$$
(The symbols come from the picture; presumably $M$ is the mass initially compressed into the hole, $E$ the energy spent compressing it, $M'$ the mass added in service, and $M''$ the unused mass left at the end, all expressed in energy units.)
The point I'm getting to is that the mass-to-energy conversion efficiency will be close to 1, similar to antimatter. The energy-to-energy conversion will be vastly greater than 1, as opposed to $10^{-4}$ for the case of antimatter. Based on pure energetics, BHs would be very efficient, light, and ideal for an interstellar thrust source.
The other major reason for looking at Hawking radiation from artificial black holes is that they have very good confinement. Using anti-matter would require actively confining the antimatter, for which the prospects do not look good. Should the active confinement methods fail, there will be a spectacular explosion that destroys the spacecraft and the local cosmic neighborhood. A BH, on the other hand, is passively confined by gravity. If left to sit for long enough, yes, it can evaporate and destroy just as much, but that will take time to happen.
First, in order to get the power level needed, the BH would have to be a "subatomic black hole" (SBH), with a radius less than $10^{-10}\ \mathrm{m}$. Apparently, we don't know exactly how much power a given-mass SBH would emit; that requires quantum-gravity models, and those are not completely settled.
Also, apparently, such an SBH would emit some of its mass as mass, not energy. This means that mass would not be converted to thrust for a spacecraft and would not be converted to useful energy in a civilian power-plant application. Also, neutrinos would be created and emitted isotropically, contributing nothing to either thrust or useful energy.
The temperature of the SBH would apparently be very high, somewhere between $0.06\ \mathrm{GeV}$ and $100\ \mathrm{GeV}$ for the parameters the paper discussed. These energies imply a large multiplier when calculating material radiation damage. Shielding would also be difficult.
The mass entertained in the paper is $10^{12} kg$, and to create the black hole, this would have to be accelerated/compressed into a subatomic radius. At the risk of sounding snarky, this doesn't sound easy. This is basically saying that if you can compress Mount Everest into the volume of a single atom you can have free energy. One method discussed is to use "converging x-ray lasers" to do this. Apparently self-gravitation may help focus them to a point. So that's one bit of good news.
$\begingroup$ Recommend you look up black hole on Wikipedia. All of your open questions are answered there. Also look up Hawking radiation. $\endgroup$
– Andrew Palfreyman
$\begingroup$ How could you accelerate the black hole? $\endgroup$
– Kevin Kostlan
$\begingroup$ @KevinKostlan reflect the hawking radiation on a single side back into the black hole to give it a net thrust $\endgroup$
– Inverse Oct 27, 2014 at 4:49
$\begingroup$ It's hard to reflect gamma rays, and even harder to get them back into such a small target. $\endgroup$
$\begingroup$ You have a nucleus-sized, Everest-mass object that is emitting 300 megawatts. It's hard to push or pull that, since anything coming close will get vaporized. You can't put much mass near it to pull it. You can't charge it, since it will be emitting electrons or positrons and in this way can neutralize itself. Maybe magnetic monopoles (too heavy to be emitted by Hawking radiation, and they self-neutralize), lol. $\endgroup$
What is the theoretical lower mass limit for a gravitationally stable neutron star?
I am intentionally not asking for the smallest observed neutron star mass, which corresponds approximately to the well-known Chandrasekhar limit, the upper mass limit for white dwarfs. That limit is set by the minimum mass a stellar core needs in order to collapse into a neutron star instead of a white dwarf.
But, I think this is not the smallest possible neutron star mass - it is only the smallest mass that can be produced by stellar evolution processes.
For example, black holes also have a lower limit: the Tolman-Oppenheimer-Volkoff limit, which is around 1.5-3.0 solar masses. Corresponding to that, the smallest known black hole is observed to be around 4 solar masses. But this doesn't define the smallest possible size of a black hole; it only defines the smallest black hole that can be formed. Theoretically, even Earth-sized or much smaller black holes could exist, but there is no known process which could create them. Despite extensive searches for micro black holes, nothing has been found.
By analogy, I am asking is this a similar situation for neutron stars? What is the minimal mass of a neutron star, which could remain stable? Is this mass smaller than the Chandrasekhar-limit?
astrophysics neutron-stars supernova stellar-evolution
peterh - Reinstate Monica
$\begingroup$ Could you check I have not changed the meaning of your question? $\endgroup$ – ProfRob Oct 26 '14 at 13:48
$\begingroup$ I wonder if smaller neutron stars could be formed by collisions of neutron stars that fragmented them. The cross-section is extremely low, but such collisions might occur in the far future. $\endgroup$ – user4552 Oct 26 '14 at 15:30
$\begingroup$ @RobJeffries: As I understand the state of Cosmology, there are several competing theories on the long-term fate of the universe, but the serious contenders agree that with time, the intensity of background radiation approaches zero. No black hole can have become small yet, but every black hole will become small some day. $\endgroup$ – Beta Apr 28 '15 at 17:19
$\begingroup$ @Beta Yep, perhaps. Have you worked the numbers to check that the absorption of the CMB decreases more quickly than the Hawking radiation? $\endgroup$ – ProfRob Apr 28 '15 at 22:28
$\begingroup$ @Beta? Did you ever solve that? I'd be interested to see the result. $\endgroup$ – TLW Jul 25 '16 at 1:46
We think that most neutron stars are produced in the cores of massive stars and result from the collapse of a core that is already at a mass of $\sim 1.1-1.2 M_{\odot}$ and so as a result there is a minimum observed mass for neutron stars of about $1.2M_{\odot}$ (see for example Ozel et al. 2012). Update - the smallest, precisely measured mass for a neutron star is now $1.174 \pm 0.004 M_{\odot}$ - Martinez et al. (2015).
The same paper also shows that there appears to be a gap between the maximum masses of neutron stars and the minimum mass of black holes.
You are correct that current thinking is that the lower limit on observed neutron star and black hole masses is as a result of the formation process rather than any physical limit (e.g. Belczynski et al. 2012 [thanks Kyle]).
Theoretically a stable neutron star could exist with a much lower mass, if one could work out a way of forming it (perhaps in a close binary neutron star where one component loses mass to the other prior to a merger?). If one just assumes that you could somehow evolve material at a gradually increasing density in some quasi-static way so that it reaches a nuclear statistical equilibrium at each point, then one can use the equation of state of such material to find the range of densities where $\partial M/\partial \rho$ is positive. This is a necessary (though not entirely sufficient) condition for stability and would be complicated by rotation, so let's ignore that.
The zero-temperature "Harrison-Wheeler" equation of state (ideal electron/neutron degeneracy pressure, plus nuclear statistical equilibrium) gives a minimum stable mass of 0.19$M_{\odot}$, a minimum central density of $2.5\times10^{16}$ kg/m$^3$ and a radius of 250 km. (Colpi et al. 1993). However, the same paper shows that this is dependent on the details of the adopted equation of state. The Baym-Pethick-Sutherland EOS gives them a minimum mass of 0.09$M_{\odot}$ and central density of $1.5\times10^{17}$ kg/m$^3$. Both of these calculations ignore General Relativity.
More modern calculations (incorporating GR, e.g. Bordbar & Hayti 2006) get a minimum mass of 0.1$M_{\odot}$ and claim this is insensitive to the particular EOS. This is supported by Potekhin et al. (2013), who find $0.087 < M_{\rm min}/M_{\odot} < 0.093$ for EOSs with a range of "hardness". On the other hand Belvedere et al. (2014) find $M_{\rm min}=0.18M_{\odot}$ with an even harder EOS.
A paper by Burgio & Schulze (2010) shows that the corresponding minimum mass for hot material with trapped neutrinos in the centre of a supernova is more like 1$M_{\odot}$. So this is the key point - although low mass neutron stars could exist, it is impossible to produce them in the cores of supernovae.
Edit: I thought I'd add a brief qualitative reason why lower mass neutron stars can't exist. The root cause is that for a star supported by a polytropic equation of state $P \propto \rho^{\alpha}$, it is well known that the binding energy is only negative, $\partial M/\partial \rho>0$ and the star stable, if $\alpha>4/3$. This is modified a bit for GR - very roughly $\alpha > 4/3 + 2.25GM/Rc^2$. At densities of $\sim 10^{17}$ kg/m$^3$ the star can be supported by non-relativistic neutron degeneracy pressure with $\alpha \sim 5/3$. Lower mass neutron stars will have larger radii ($R \propto M^{-1/3}$), but if densities drop too low, then it is energetically favorable for protons and neutrons to combine into neutron-rich nuclei; removing free neutrons, reducing $\alpha$ and producing relativistic free electrons through beta-decay. Eventually the equation of state becomes dominated by the free electrons with $\alpha=4/3$, further softened by inverse beta-decay, and stability becomes impossible.
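As a concrete illustration of the stability criterion invoked above, here is a minimal sketch that integrates the TOV equations for a single $\Gamma=5/3$ polytrope (ideal non-relativistic neutron gas, $K \approx 5380$ in SI units) and tabulates $M(\rho_c)$. This toy equation of state deliberately omits the low-density softening from nuclei and relativistic electrons, so it will not reproduce the $\sim 0.1 M_{\odot}$ minimum of the realistic calculations cited above; it only shows how the $M(\rho_c)$ curve, whose positive slope signals stability, is computed:

```python
import numpy as np

G, c = 6.674e-11, 2.998e8
K, gamma = 5380.0, 5.0 / 3.0          # P = K rho^gamma, SI units

def tov_mass(rho_c, dr=10.0):
    """Crude fixed-step Euler integration of the TOV equations; returns (M, R) in SI."""
    r = 1.0
    P = K * rho_c**gamma
    m = (4.0 / 3.0) * np.pi * r**3 * rho_c
    P_stop = 1e-10 * P
    while P > P_stop and r < 5e6:
        rho = (P / K) ** (1.0 / gamma)
        dPdr = -G * (rho + P / c**2) * (m + 4 * np.pi * r**3 * P / c**2) \
               / (r * (r - 2 * G * m / c**2))
        P += dPdr * dr
        m += 4 * np.pi * r**2 * rho * dr
        r += dr
    return m, r

Msun = 1.989e30
for rho_c in np.logspace(16, 18, 9):   # central densities, kg/m^3
    M, R = tov_mass(rho_c)
    print(f"rho_c = {rho_c:.2e} kg/m^3 -> M = {M / Msun:.3f} Msun, R = {R / 1e3:.0f} km")
```

Replacing the polytrope with a tabulated crust equation of state is what produces the turnover, and hence the minimum mass, at low density.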
ProfRob
$\begingroup$ This astrobites review discusses the NS-BH gap in reference to this paper. This suggests the gap is caused by delayed vs rapid bounces in the collapse of CCSNe. $\endgroup$ – Kyle Kanos Oct 26 '14 at 13:57
$\begingroup$ Has anyone calculated what it would look like when a hypothetical neutron star that loses mass "popped back" into simple degenerate matter? Could we detect such an event at all? $\endgroup$ – biziclop Oct 26 '14 at 23:21
$\begingroup$ There are a number of papers that explore the possibility of mass transfer in a binary neutron star shortly after birth. Passing below the minimum mass ought to result in an explosion of some sort. adsabs.harvard.edu/abs/2002ApJ...581.1271C $\endgroup$ – ProfRob Oct 26 '14 at 23:37
$\begingroup$ @biziclop Considering that the per-particle energy difference between free neutrons and hydrogen is on the order of that of deuterium fusion, it would essentially be a supernova. Lsp? $\endgroup$ – peterh - Reinstate Monica Apr 28 '15 at 18:24
$\begingroup$ @peterh Yes. :) Would this supernova have a recognisable radiation signature? $\endgroup$ – biziclop Apr 28 '15 at 18:31
Journal of Nanostructure in Chemistry
Synthesis of Ni base nanoparticles by W/O emulsion combustion
Jun Kobayashi
Noriyuki Kobayashi
Yoshinori Itaya
Masanobu Hasatani
To develop a process for fine particle production by spray pyrolysis, spray combustion of a W/O (water-in-oil) emulsion, whose water phase was the raw-material solution and whose oil phase was the fuel providing the heat for a high-temperature reaction field, was investigated. In this study, nickel oxide particles, a preliminary step toward nickel fine-particle production, were synthesized and their structural characteristics were evaluated. A mixed solution of nickel nitrate and white kerosene was used as the raw material. W/O emulsions were prepared by stirring these raw materials with an ultrasonic homogenizer after adding a surfactant. These emulsions were burned in a high-temperature furnace to produce nickel oxide particles. The mean diameter of the produced particles was less than 20 nm according to TEM observation, much smaller than the value estimated from the size distribution of the dispersed solution phase in the emulsions and its concentration. Moreover, the concentration of the aqueous solution phase had no effect on the particle size. On the other hand, the X-ray diffraction pattern showed that the produced particles were a composite of metallic nickel and nickel oxide.
Nickel oxide nanoparticle W/O emulsion Spray combustion Ultrasonic homogenizer
List of symbols
$D_{\text{em}}$: Diameter of water phase in W/O emulsion (m)
$D_{\text{p}}$: Diameter of produced particle (m)
Flow rate of emulsion injected into an atomization nozzle (m$^3$ s$^{-1}$)
Flow rate of emulsion returned from an atomization nozzle (m$^3$ s$^{-1}$)
$M_{\text{p}}$: Molecular weight of produced particle (kg kmol$^{-1}$)
Weight of sample (kg)
Initial weight of sample (kg)
$x$: Concentration of nickel salt solution (kmol m$^{-3}$)
$\rho_{\text{p}}$: Density of produced particle (kg m$^{-3}$)
Spray pyrolysis is one of the methods for fine particle production [1, 2, 3]; a droplet of precursor solution or slurry serves as the template for a produced particle. Many studies on fine particle production using this method have been reported. The spherical droplet of precursor solution or slurry largely determines the size of the produced particle, provided the droplets do not break up or coalesce. Therefore, spray methods producing fine, uniform droplets have been studied, and mass-production methods for such droplets have been developed [4]. Electrostatic spray [5, 6, 7], ultrasonic spray [8, 9], and low-pressure spray methods [10, 11] are suitable for uniform nano-particle production since they yield fine, uniformly sized droplets. On the other hand, it has been difficult to generate a large amount of fine droplets with these spray methods, so there is a need to produce nano-particles in quantity more effectively and industrially. We therefore proposed a production method for ultrafine particles based on the combustion of a W/O emulsion prepared with an ultrasonic homogenizer [12]. In this method, a large amount of W/O emulsion containing fine, uniform droplets of a metallic salt solution can be fed to a reactor at once. The emulsion is burned, and a high-temperature combustion field forms in the reactor. As a result, the fine water phases of the aqueous solution are dried, decomposed, and sintered by the heat of combustion in a short time. Although particle production by W/O emulsion combustion has been reported [13, 14, 15], those W/O emulsions were prepared by simple stirring, and the size of the water phases was not small enough for the production of nano-particles below 1 μm [16]. Furthermore, there has been little reporting on the relationship between particle morphology and water-phase structure in the emulsion.
In this work, nano-particle production by spray combustion of a W/O emulsion prepared with an ultrasonic homogenizer was performed to obtain nickel oxide ultrafine particles as a precursor of nickel ultrafine particles. Based on microscopic observation of both the product and the raw material, the influence of the characteristics of the W/O emulsion and of the experimental conditions on the structure of the produced particles was investigated.
W/O emulsion
As the raw material, Ni(NO3)2·6H2O (Kanto Chem. Co., Ltd.) was dissolved in water to prepare an aqueous solution of a given concentration. Kerosene containing dissolved sorbitan monopalmitate (Sigma-Aldrich Japan Co., Ltd.), a surfactant with a hydrophilic–lipophilic balance (HLB) value of 6.7, was added to the aqueous phase. The mass ratio of kerosene to aqueous solution was set to 7/3, and the surfactant was added at 2.1 mass percent of the aqueous solution and kerosene mixture.
The mixture was thoroughly stirred using an ultrasonic homogenizer (Branson Digital Sonifier S-250D; 20 kHz, 150 W) to prepare the W/O emulsion. The distribution of water-phase diameters measured by a laser diffraction particle size analyzer (Shimadzu SALD-300V) is shown in Fig. 1. With the ultrasonic homogenizer, the water phase could be dispersed more finely than by conventional stirring [17, 18]. Moreover, the distribution of the aqueous solution phase is also narrower than the droplet-size distributions of conventional spray methods [19, 20]. The median diameter of the water phase based on this distribution was about 0.18 μm.
Fig. 1 Distribution of water phase size in W/O emulsion
Experimental procedure
A schematic diagram of the experimental apparatus is shown in Fig. 2. The apparatus consists of an atomizer for the W/O emulsion, a thermal decomposition furnace, and a particle collection device. The furnace consists of a stainless steel tube with an inner diameter of 40 mm, to which an electric heater with a height of 1200 mm is attached, and a combustion chamber connected to the bottom of the tube. Using a pressure atomization nozzle, the emulsion was supplied to the combustion chamber at 9.2 ml/min, which includes about 2.1 ml/min of the aqueous solution. To set a fuel equivalence ratio of 0.75, which is suitable for stable combustion of the W/O emulsion, oxygen-enriched air with an oxygen concentration of 26 vol% was supplied into the combustion chamber at 65 l/min. The temperature of the heater was set to 973 K. Fine particles produced in the reactor were collected using an electric precipitator, which was maintained at a temperature high enough that water vapor did not condense. The obtained fine particles were observed by scanning and transmission electron microscopy (SEM: Hitachi Ltd. S-570; TEM: Hitachi Ltd. H-800) and identified from X-ray diffraction patterns (XRD: Shimadzu XRD-6100).
Fig. 2 Schematic diagram of experimental apparatus
Thermogravimetric analysis
Before fine particle production by W/O emulsion combustion, thermogravimetric analysis (Shimadzu TGA-50) of Ni(NO3)2·6H2O was carried out to investigate its thermal decomposition properties. The results are shown in Fig. 3. Nitrogen, air, oxygen, and hydrogen were each used as the carrier gas, at a flow rate of 50 ml/min in every case, and the heating rate for all analyses was 1 K/min.
Fig. 3 Thermogravimetric curves for Ni(NO3)2·6H2O
Weight loss of the hydrated metal salt progressed through three stages under all conditions. The first two steps are attributed to the elimination of hydration water, since the mass ratio of the water to the hydrated metal salt (37%) was roughly consistent with the mass loss at the end of the first two steps. Furthermore, the weight-loss rate at the end of the second step was almost the same in every case. The third step is attributed to decomposition of the metal salt and, under a hydrogen atmosphere, to a reduction reaction. In fact, the mass ratios of nickel oxide and metallic nickel to the hydrated metal salt are 26 and 20%, respectively. These values are almost the same as the mass yields at the end of the third step, so it follows that nickel oxide and/or metallic nickel would be produced by decomposition and/or reduction of the metal salt. Moreover, little weight loss occurred above 600 K. Based on these results, the experimental conditions described above were determined.
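The percentages quoted above follow from formula weights alone; a quick check of the arithmetic (our own, using standard atomic weights):

```python
Ni, N, O, H = 58.693, 14.007, 15.999, 1.008     # atomic weights, g/mol
salt = Ni + 2 * (N + 3 * O) + 6 * (2 * H + O)   # Ni(NO3)2.6H2O, ~290.8 g/mol
print(f"hydration water / salt: {6 * (2 * H + O) / salt:.1%}")  # ~37%
print(f"NiO / salt:             {(Ni + O) / salt:.1%}")          # ~26%
print(f"Ni / salt:              {Ni / salt:.1%}")                # ~20%
```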
Production of nickel oxide particles by W/O emulsion combustion
Particles synthesized in the reactor were obtained by combustion of the W/O emulsion containing the precursor solution. These particles were not green, the color of nickel(II) oxide, but black, suggesting carbon deposition during particle synthesis. SEM and TEM micrographs of the obtained fine particles are shown in Figs. 4 and 5, respectively, together with SEM and TEM images of particles made from the aqueous solution alone. According to SEM observation, relatively large spherical particles of more than 5 μm diameter were found in the materials produced from low-concentration solutions. The size of the spherical particles increases with the concentration of the solution, and these large particles appear to contain hollow particles. On the other hand, particles made from the high-concentration solution (0.45 M) were comparatively small, almost all less than 5 μm. These large spherical particles were weakly agglomerated and could be dispersed into primary particles in an alcohol solvent. In contrast, large spherical particles were hardly observed when only the aqueous solution was used. According to TEM observation, which clarifies the appearance of the primary particles, the diameter of particles produced from the W/O emulsion was about 10 nm, and the particles included not only spherical but also cubic shapes. Moreover, the size of particles produced from the W/O emulsions was almost the same in spite of the different concentrations of the nickel salt solutions. Apart from overlapping particles, several small black spots were observed in these TEM images; according to a TEM diffraction image, these black spots could indicate crystals of metallic nickel. On the other hand, the particles produced from the aqueous solution alone were larger than those from the W/O emulsions, with diameters of approximately 500 nm.
Fig. 4 SEM photographs of produced nickel oxide particles
Fig. 5 TEM photographs of produced nickel oxide fine particles
Based on these TEM images, the size distributions of particles produced from the W/O emulsions were determined and are shown in Fig. 6, and the median diameters are listed in Table 1. The lines in Fig. 6 were estimated from the water-phase diameter distributions in the W/O emulsions (Fig. 1) and the concentration of the aqueous solution, assuming that one particle forms from one dispersed solution phase. The calculating equation is as follows:
Fig. 6 Particle size distribution of nickel oxide primary particles produced from W/O emulsions

Table 1 Median diameters of particles produced from W/O emulsions (columns: median diameter (nm); estimated median diameter (nm))
$$D_{\text{p}} = \sqrt[3]{{\frac{{xD_{\text{em}}^{3} M_{\text{p}} }}{{\rho_{\text{p}} }}}}$$
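To get a feel for the estimate, one can plug in representative values. The NiO molecular weight and density below are standard handbook figures, and the lower concentration is an assumed illustrative dilution (the text states 0.45 M explicitly):

```python
D_em  = 0.18e-6    # m, median water-phase diameter (Fig. 1)
M_p   = 74.69      # kg/kmol, molecular weight of NiO
rho_p = 6670.0     # kg/m^3, density of NiO

for x in (0.45, 0.05):                       # kmol/m^3 nickel salt concentration
    D_p = (x * D_em**3 * M_p / rho_p) ** (1.0 / 3.0)
    print(f"x = {x:.2f} M -> D_p = {D_p * 1e9:.0f} nm")
```

Both one-droplet-one-particle estimates (about 31 and 15 nm here) exceed the roughly 10 nm observed by TEM, consistent with the secondary-atomization interpretation discussed below.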
From these results, the concentration of the nickel salt solution had little influence on the size distribution of the synthesized particles, although the predicted distributions shift to smaller particle sizes as the concentration decreases. In comparison with the estimated distributions, all the measured distributions were close to the estimate based on the most dilute aqueous solution. When the concentration is high, a solid phase such as the metal salt will rapidly precipitate in the solution droplet, and this solid phase may promote evaporation in the manner of a boiling stone. The water phase in the emulsion may therefore evaporate explosively and cause secondary atomization of both the emulsion and the water phase [21, 22] as the concentration increases. As a result, the higher the concentration of the solution, the smaller both the primary and the agglomerated particles would become. Moreover, crystals of nickel(II) oxide have a face-centered cubic structure, which accounts for the cubic particles observed by TEM.
The results of XRD analysis are shown in Fig. 7, together with the XRD pattern of a nickel oxide test reagent. The XRD pattern of the produced particles was not only consistent with that of the test reagent but also showed additional peaks due to the presence of metallic nickel. The appearance of the obtained particles suggested that they included some carbon, but the XRD analysis did not show the existence of carbon, so the carbon is probably amorphous. On the other hand, metallic nickel in the produced particles was confirmed even under fuel-lean combustion conditions. This indicates the generation of reducing gases around the produced particles; the above-mentioned explosive combustion of the W/O emulsion could account for an instantaneous, local reducing atmosphere. However, the residence time in the reactor is very short (less than 1 s), and further detailed investigation is required to discuss the reduction reaction of the produced particles.
Fig. 7 X-ray diffraction pattern of produced fine particles (solution concentration: 0.45 M)
Conclusions

W/O emulsion combustion for the production of ultrafine nickel oxide particles was performed, and the characteristics of the production method were investigated. As a result, the following was clarified:
The diameter of the primary particles produced from the W/O emulsion was approximately 10 nm based on TEM observation, while the size of the agglomerated particles was almost always less than 10 μm according to SEM observation. The form of the primary particles was spherical or cubic.
The size distributions of primary particles were independent of the concentration of nickel salt solutions.
Not only nickel oxide but also metallic nickel particles were produced by W/O emulsion combustion, even under an oxidative atmosphere, according to TEM observation and XRD analysis.
This research was partially supported by JSPS KAKENHI Grant Number JP14750616 and Tanikawa Fund Promotion of Thermal Technology. Moreover, we thank Mr. Takanori Watanabe, Mr. Goshi Yokota and Mr. Shingo Kawagoe for assistance with experiments.
Author contribution
JK conceived of the presented idea. JK wrote the manuscript with support from NK and YI. MH supervised the project. All authors discussed the results and contributed to the final manuscript.
Medeiros, P.N., Santiago, A.A.G., Ferreira, E.A.C., Li, M.S., Longo, E., Bomio, M.R.D., Motta, F.V.: Influence Ca-doped SrIn2O4 powders on photoluminescence property prepared one step by ultrasonic spray pyrolysis. J. Alloy. Compd. 747, 1078–1087 (2018)CrossRefGoogle Scholar
Krasnikova, I.V., Mishakov, I.V., Bauman, Y.I., Karnaukhov, T.M., Vedyagin, A.A.: Preparation of NiO–CuO–MgO fine powders by ultrasonic spray pyrolysis for carbon nanofibers synthesis. Chem. Phys. Lett. 684, 36–38 (2017)CrossRefGoogle Scholar
Lee, S., Schneider, K., Schumann, J., Mogalicherla, A.K., Pfeifer, P., Dittmeyer, R.: Effect of metal precursor on Cu/ZnO/Al2O3 synthesized by flame spray pyrolysis for direct DME production. Chem. Eng. Sci. 138, 194–202 (2015)CrossRefGoogle Scholar
Okuyama, K., Lenggoro, I.W.: Preparation of nanoparticles via spray route. Chem. Eng. Sci. 58, 537–547 (2003)CrossRefGoogle Scholar
Kelder, E.M., Nijs, O.C., Schoonman, J.: Low-temperature synthesis of thin films of YSZ and BaCeO3 using electrostatic spray pyrolysis (ESP). Solid State Ionics 68, 5–7 (1994)CrossRefGoogle Scholar
Zaouk, D., Zaatar, Y., Asmar, R., Jabbour, J.: Piezoelectric zinc oxide by electrostatic spray pyrolysis. Microelectr. J. 37, 1276–1279 (2006)CrossRefGoogle Scholar
Yurteri, C.U., Hartman, R.P.A., Marijnissen, J.C.M.: Producing pharmaceutical particles via electrospraying with an emphasis on nano and nano structured particles—a review. KONA Powder Part. J. 28, 91–115 (2010)CrossRefGoogle Scholar
Okuyama, K., Lenggoro, I.W., Tagami, N.: Preparation of ZnS and CdS fine particles with different particle sizes by a spray-pyrolysis method. J. Mater. Sci. 32, 1229–1237 (1997)CrossRefGoogle Scholar
Kang, Y.C., Lenggoro, I.W., Park, S.B., Okuyama, K.: YAG: Ce phosphor particles prepared by ultrasonic spray pyrolysis. Mater. Res. Bull. 35, 789–798 (2000)CrossRefGoogle Scholar
Lenggoro, I.W., Itoh, Y., Iida, N., Okuyama, K.: Control of size and morphology in NiO particles prepared by a low-pressure spray pyrolysis. Mater. Res. Bull. 38, 1819–1827 (2003)CrossRefGoogle Scholar
Wang, W.N., Lenggoro, I.W., Terashi, Y., Kim, T.O., Okuyama, K.: One-step synthesis of titanium oxide nanoparticles by spray pyrolysis of organic precursors. Mater. Sci. Eng., B 123, 194–202 (2005)CrossRefGoogle Scholar
Watanabe, T., Nawata, M., Kobayashi, J., Kobayashi, N., Hasatani, M.: Synthesis of Y2O3: Eu nanoparticles by emulsion combustion at high temperature. Kagaku Kogaku Ronbunshu 34, 181–186 (2008). (in Japanese) CrossRefGoogle Scholar
Takatori, K., Tani, T., Watanabe, N., Kamiya, N.: Preparation and characterization of nano-structured ceramic powders synthesized by emulsion combustion method. J. Nanopart. Res. 1, 197–204 (1999)CrossRefGoogle Scholar
Tani, T., Watanabe, N., Takatori, K.: Emulsion combustion and flame spray synthesis of zinc oxide/silica particles. J. Nanopart. Res. 5, 39–46 (2003)CrossRefGoogle Scholar
Tani, T., Watanabe, N., Takatori, K.: Morphology of oxide particles made by the emulsion combustion method. J. Am. Ceram. Soc. 86, 898–904 (2003)CrossRefGoogle Scholar
Tolosa, L.I., Forgiarini, A., Moreno, P., Salager, J.L.: Combined effects of formulation and stirring on emulsion drop size in the vicinity of three-phase behavior of surfactant-oil water systems. Ind. Eng. Chem. Res. 45, 3810–3814 (2006)CrossRefGoogle Scholar
Rao, J., McClements, D.J.: Formation of flavor oil microemulsions, nanoemulsions and emulsions: influence of composition and preparation method. J. Agric. Food Chem. 59, 5026–5035 (2011)CrossRefGoogle Scholar
Lin, C.Y., Chen, L.W.: Comparison of fuel properties and emission characteristics of two- and three-phase emulsions prepared by ultrasonically vibrating and mechanically homogenizing emulsification methods. Fuel 87, 2154–2161 (2008)CrossRefGoogle Scholar
Wang, W.N., Purwanto, A., Lenggoro, I.W., Okuyama, K., Chang, H.K., Jang, H.D.: Investigation on the correlations between droplet and particle size distribution in ultrasonic spray pyrolysis. Ind. Eng. Chem. Res. 47, 1650–1659 (2008)CrossRefGoogle Scholar
Jaworek, A., Sobczyk, A.T.: Electrospraying route to nanotechnology: an overview. J. Electrostat. 66, 197–219 (2008)CrossRefGoogle Scholar
Lin, C.Y., Chen, L.W.: Engine performance and emission characteristics of three-phase diesel emulsions prepared by an ultrasonic emulsification method. Fuel 85, 593–600 (2006)CrossRefGoogle Scholar
Li, Y., Wang, T., Liang, W., Wu, C., Ma, L., Zhang, Q., Zhang, X., Jiang, T.: Ultrasonic preparation of emulsions derived from aqueous bio-oil fraction and 0# diesel and combustion characteristics in diesel generator. Energ. Fuel. 24, 1987–1995 (2010)CrossRefGoogle Scholar
1. Department of Mechanical Engineering, Kogakuin University, Tokyo, Japan
2. Department of Material Science, Nagoya University, Nagoya, Japan
3. Department of Mechanical Engineering, Gifu University, Gifu, Japan
4. Aichi Institute of Technology, Toyota, Japan
Kobayashi, J., Kobayashi, N., Itaya, Y. et al. J Nanostruct Chem (2019). https://doi.org/10.1007/s40097-019-0309-6
Received 14 April 2019 | CommonCrawl |
NCERT Solutions for Class 11 Biology Chapter 14 - Respiration in Plants
CBSE NCERT Solutions for Class 11 Biology Chapter 14 are a crucial part of exam preparation: with them, a student can easily work through the difficulties faced while going through the chapter. The NCERT Solutions for Class 11 Biology Chapter 14, Respiration in Plants, are drafted by the subject-matter experts at Vedantu and provide ready answers to the various questions given in the Class 11 Biology textbook.
These are written as per the latest CBSE curriculum and guidelines and suit students of every level. The PDF of the NCERT Solutions for Class 11 Biology can be downloaded easily from the Vedantu website.
Access NCERT Solutions for Class 11 Biology Chapter 14 – Respiration in Plants
Question 1: Differentiate between
a) Respiration and Combustion
b) Glycolysis and Krebs' cycle
c) Aerobic respiration and Fermentation
Ans:
a) Differences between respiration and combustion are as follows:
Respiration | Combustion
It occurs inside living cells (a cellular process). | It is a non-cellular process.
Respiration is a biochemical process. | Combustion is a physico-chemical process.
Chemical bonds are broken down in steps, so energy is released in stages. | All chemical steps occur simultaneously, so energy is released in a single step.
A considerable amount of energy is stored in ATP molecules. | ATP is not formed.
Oxidation occurs at the end of the reaction (terminal oxidation) between reduced coenzymes and oxygen. | The substrate is oxidized directly.
Several intermediates are formed; they are utilized in the synthesis of various organic compounds. | No intermediates are produced.
Less than 50% of the energy is liberated as heat; light is rarely produced. | Energy is liberated as both light and heat.
The temperature is not allowed to rise. | The temperature becomes very high.
Several enzymes are needed, one for each step or reaction. | Burning is a non-enzymatic process.
b) Differences between glycolysis and Krebs' cycle are as follows:
Glycolysis | Krebs' Cycle
It occurs in the cytoplasm. | Krebs' cycle operates inside the mitochondria.
Glycolysis is the first step in respiration, in which glucose is broken down to pyruvate. | Krebs' cycle is the second step in respiration, in which the active acetyl group is broken down completely.
This process is common to both aerobic and anaerobic respiration. | It occurs only during aerobic respiration.
It degrades a molecule of glucose into 2 molecules of pyruvate, an organic substance. | It degrades pyruvate completely into inorganic substances, i.e., $\mathrm{CO_2}$ and $\mathrm{H_2O}$.
Glycolysis requires two ATP molecules for the initial phosphorylation of the substrate molecule. | It does not require ATP molecules.
One glucose molecule yields four ATP molecules in glycolysis through substrate-level phosphorylation. | Two acetyl residues in the Krebs cycle liberate two ATP (or GTP) molecules through substrate-level phosphorylation.
The net gain is 2 molecules of NADH and 2 molecules of ATP for every molecule of glucose broken down. | Krebs' cycle produces 6 molecules of NADH and 2 molecules of $\mathrm{FADH_2}$ for every two molecules of acetyl CoA oxidized; two further molecules of NADH are released during the conversion of two pyruvates to acetyl CoA.
The net gain of energy during glycolysis is equal to 8 ATP molecules. | In Krebs' cycle, the net gain of energy is equal to 24 ATP molecules; six molecules of ATP come from the 2 NADH formed during the dehydrogenation of 2 pyruvates.
No $\mathrm{CO_2}$ is evolved in glycolysis. | $\mathrm{CO_2}$ is evolved during Krebs' cycle.
Oxygen is not required for glycolysis. | Oxygen is used as the terminal oxidant during Krebs' cycle.
c) Differences between aerobic respiration and fermentation are as follows:
Aerobic Respiration | Fermentation
It uses oxygen to break the respiratory material into simpler substances. | Oxygen is not used in the breakdown of the respiratory substrate.
The respiratory material is completely oxidized. | The respiratory material is incompletely broken down.
The end products are inorganic, i.e., $\mathrm{CO_2}$ and $\mathrm{H_2O}$. | Small, reduced organic molecules (ethanol or lactic acid) are produced as end products; inorganic substances ($\mathrm{CO_2}$) may or may not be produced.
Aerobic respiration is the normal mode of respiration in both plants and animals. | It occurs in yeast cells, in bacteria, and in the muscle cells of animals during vigorous exercise.
Aerobic respiration consists of three steps: glycolysis, Krebs' cycle, and terminal oxidation. | Anaerobic respiration (fermentation) consists of two steps: glycolysis and the incomplete breakdown of pyruvic acid.
Every carbon atom in the food is oxidized, releasing a substantial amount of carbon dioxide. | A smaller quantity of carbon dioxide is evolved.
Water is formed. | Water is usually not formed.
$686\;\mathrm{kcal}$ of energy is produced per gram mole of glucose. | $39-59\;\mathrm{kcal}$ of energy is produced per gram mole of glucose.
It can continue indefinitely. | It cannot be continued indefinitely (except in some microorganisms) due to the accumulation of poisonous compounds and the reduced energy yield per gram mole of food broken down.
Question 2: What are respiratory substrates? Name the most common respiratory substrate.
Respiratory substrates are the organic substances that are oxidized during respiration to release energy within living cells. Carbohydrates, proteins, fats, and organic acids are common respiratory substrates. The most common respiratory substrate is glucose (a carbohydrate), a hexose monosaccharide.
Question 3: Give the schematic representation of glycolysis.
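Ans: The answer in the source is a schematic figure; in outline, the standard EMP pathway runs:

Glucose (6C)
→ (hexokinase, −1 ATP) Glucose-6-phosphate
→ Fructose-6-phosphate
→ (phosphofructokinase, −1 ATP) Fructose-1,6-bisphosphate
→ 2 × Glyceraldehyde-3-phosphate (via dihydroxyacetone phosphate)
→ (+2 NADH) 2 × 1,3-bisphosphoglycerate
→ (+2 ATP) 2 × 3-phosphoglycerate
→ 2 × 2-phosphoglycerate
→ (−H2O) 2 × Phosphoenolpyruvate
→ (+2 ATP) 2 × Pyruvate (3C)

Net per glucose molecule: 2 pyruvate, 2 ATP, and 2 NADH.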
Question 4: What are the main steps in aerobic respiration? Where does it take place?
Ans: The main steps of aerobic respiration are as follows: - Glycolysis, link reaction, Krebs cycle and terminal oxidation.
Glycolysis (EMP Pathway):- The process of breakdown of glucose into pyruvic acid is known as glycolysis. Glucose is partially oxidized to form two molecules of pyruvate, two NADH, and two ATP. This is a common pathway for both aerobic and anaerobic modes of respiration. It takes place in the cytoplasm.
Link Reaction (Gateway Reaction):- Pyruvic acid undergoes oxidative decarboxylation to form ${\text{acetyl}}\;{\text{CoA}}$ and NADH. This reaction occurs within the matrix of mitochondria.
Krebs' Cycle (TCA Cycle):- The Krebs' Cycle occurs within the matrix of mitochondria. The net gain of energy is equal to 24 ATP molecules along with other products.
Terminal Oxidation:- Electron Transport System or oxidative phosphorylation takes place in the inner mitochondrial membrane.
Question 5: Give the schematic representation of an overall view of Krebs' cycle.
Ans: The schematic representation of an overall view of krebs' cycle (Citric acid cycle):
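In outline, per turn of the cycle (i.e., per acetyl CoA):

Acetyl CoA (2C) + Oxaloacetate (4C) → Citrate (6C)
Citrate → Isocitrate
Isocitrate → (−CO2, +NADH) α-Ketoglutarate (5C)
α-Ketoglutarate → (−CO2, +NADH) Succinyl CoA (4C)
Succinyl CoA → (+GTP/ATP) Succinate
Succinate → (+FADH2) Fumarate
Fumarate → (+H2O) Malate
Malate → (+NADH) Oxaloacetate (regenerated)

Each turn therefore yields 2 CO2, 3 NADH, 1 FADH2, and 1 GTP (ATP).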
Question 6: Explain ETS.
Ans: The electron transport system (ETS), coupled to oxidative phosphorylation, is present in the inner mitochondrial membrane. It is a metabolic pathway through which electrons pass from one carrier to the next. The passage of electrons from NADH and $\mathrm{FADH_2}$ to oxygen ($\mathrm{O_2}$) involves five multiprotein complexes: Complex I (NADH dehydrogenase), Complex II (succinate dehydrogenase), Complex III (cytochrome $bc_1$ complex), Complex IV (cytochrome c oxidase), and Complex V (ATP synthase). The steps involved in the ETS are as follows:
Electrons from NADH produced in the inner mitochondrial matrix during the citric acid cycle are oxidized by NADH dehydrogenase (Complex I).
Post this, electrons are transferred to Ubiquinone which receives reducing equivalents via ${\text{FAD}}{{\text{H}}_{\text{2}}}$ (Complex II).
Ubiquinol (reduced ubiquinone) is then oxidized with the transfer of electrons to Cytochrome c via \[{\text{Cytochrome b}}{{\text{c}}_{\text{1}}}{\text{ Complex}}\] (Complex III).
Cytochrome c oxidase Complex (Complex IV) contains cytochromes a, ${a_3}$ and two ${\text{Cu}}$ centres.
When electrons travel from one carrier to another in the electron transport chain via complex I to IV, they are connected to ATP Synthase (complex V).
Complex V consists of the components $F_1$ (a peripheral membrane protein complex) and $F_0$ (an integral membrane protein complex). At $F_1$, ATP is synthesized from ADP and Pi. The passage of protons through channels formed by $F_0$ is coupled to the catalytic site of $F_1$.
One molecule of NADH, when oxidized, yields 3 molecules of ATP; one molecule of $\mathrm{FADH_2}$ yields 2 molecules of ATP.
Question 7: Distinguish between the following:
a) Aerobic respiration and Anaerobic respiration.
b) Glycolysis and Fermentation.
c) Glycolysis and Citric acid cycle.
a) Aerobic respiration and Anaerobic respiration
Aerobic Respiration | Anaerobic Respiration
It occurs in the presence of oxygen. | It occurs in the absence of oxygen.
Food (generally carbohydrates) is completely oxidized to carbon dioxide and water with the release of chemical energy. | Food (generally carbohydrates) is partially oxidized with the release of chemical energy.
Since the substrate is completely oxidized, the energy yield is higher than that of anaerobic respiration. | Since the substrate is oxidized only partially, the energy yield is lower than that of aerobic respiration.
Complete oxidation of one molecule of glucose leads to a net gain of 38 ATP molecules. | Partial oxidation of one molecule of glucose leads to a net gain of 2 ATP molecules.
The end products are $\mathrm{CO_2}$ and $\mathrm{H_2O}$ (all higher organisms). | The end products are lactic acid (animal cells) or ethanol and $\mathrm{CO_2}$ (lower organisms such as bacteria and yeast).
Some reactions occur in the cytoplasm (glycolysis) and some in the mitochondria (Krebs' cycle and ETS). | All reactions occur in the cytoplasm; the mitochondria are not involved.
$\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \to 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + 686\;\mathrm{kcal}$ | $\mathrm{C_6H_{12}O_6} \to 2\,\mathrm{CO_2} + 2\,\mathrm{C_2H_5OH} + 59\;\mathrm{kcal}$
(b) Glycolysis and Fermentation
Glycolysis | Fermentation
Glycolysis is a common pathway for both aerobic and anaerobic modes of respiration. | Fermentation is a type of anaerobic respiration, which occurs in the absence of oxygen.
Glucose is split into two molecules of pyruvic acid during glycolysis. | During fermentation, pyruvic acid is converted to ethyl alcohol (in yeast and some other microbes) or lactic acid (in the muscle cells of humans).
Glycolysis results in a net gain of 2 molecules of ATP. | No ATP is produced during fermentation.
It gives out two molecules of NADH per glucose molecule. | It generally utilizes the NADH produced during glycolysis.
(c) Glycolysis and Citric acid cycle

Glycolysis | Citric Acid Cycle
Glycolysis is the first step of respiration. | The citric acid cycle (Krebs' cycle or TCA cycle) is the second step of respiration.
This process happens in the cytoplasm. | This process occurs in the matrix of the mitochondria.
It occurs both aerobically and anaerobically. | It occurs only aerobically.
Two ATPs are consumed during this process. | No ATP is consumed in the citric acid cycle.
The total gain of ATP is 8 (which includes NADH). | The net gain of ATP is 24.
Oxidative phosphorylation is not involved. | Oxidative phosphorylation is involved.
It is a linear pathway. | It is a circular pathway.
Carbon dioxide is not evolved. | Carbon dioxide is evolved.
Question 8: What are the assumptions made during the calculation of net gain of ATP?
Ans: Calculating the net gain of ATP for each glucose molecule oxidized is possible, but in reality it can only be a theoretical exercise. Such calculations can be made only on the following assumptions:
A sequential, orderly pathway is in operation, with one substrate forming the next, and with glycolysis, the TCA cycle, and the ETS pathway occurring one after the other.
The NADH produced during glycolysis is transferred to the mitochondria and undergoes oxidative phosphorylation. None of the intermediates in the pathway is used to make another compound.
Only glucose is respired. No other alternative substrates enter the pathway at any of the intermediate stages.
These kinds of assumptions, however, are not valid in a living system. All pathways occur simultaneously and do not occur one after the other. Substrates enter the pathways and are withdrawn from them as needed. ATP is used as and when it is required. Multiple factors influence enzymatic rates. As a result, aerobic respiration of one molecule of glucose can result in a net gain of 36 ATP molecules.
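The conventional bookkeeping behind these figures, using the 3 ATP per NADH and 2 ATP per FADH2 values assumed in this chapter, is:

Glycolysis: 2 ATP + (2 NADH × 3) = 8 ATP
Link reaction (2 pyruvate → 2 acetyl CoA): 2 NADH × 3 = 6 ATP
Krebs' cycle: (6 NADH × 3) + (2 FADH2 × 2) + 2 ATP = 24 ATP
Total: 38 ATP

One common reconciliation of the 38 and 36 figures is that about 2 ATP are spent shuttling the cytoplasmic NADH of glycolysis into the mitochondria, leaving a net of 36.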
Question 9: Discuss "The respiratory pathway is an amphibolic pathway".
Ans: An amphibolic pathway is one that is used for both breakdown (catabolic) and build-up (anabolic) reactions. The respiratory pathway is mainly a catabolic process that serves to run the living system by providing energy, but it produces several intermediates, many of which serve as raw materials for the formation of both primary and secondary metabolites. Acetyl CoA is essential not only for the Krebs cycle but also for the synthesis of fatty acids, aromatic compounds, steroids, terpenes, and carotenoids. By amination, α-ketoglutarate forms glutamate (an important amino acid), and OAA (oxaloacetic acid) produces aspartate. Aspartate and glutamate are components of proteins. Other products include pyrimidines and alkaloids. Succinyl CoA is the precursor of cytochromes and chlorophyll.
When fatty acids are used as a substrate, they are broken down to ${\text{acetyl}}\;{\text{CoA}}$ before entering the respiratory pathway. ${\text{acetyl}}\;{\text{CoA}}$ is withdrawn from the respiratory pathway when the organism needs to synthesize fatty acids.
As a result, the respiratory pathway is involved in both the breakdown and synthesis of fatty acids.
Similarly, respiratory intermediates serve as links during both the breakdown and the synthesis of proteins. Catabolism comprises the breakdown processes within a living organism, while anabolism comprises the synthesis of new compounds. Since the respiratory pathway is engaged in both anabolism and catabolism in plants, it is better regarded as an amphibolic rather than a purely catabolic pathway.
Question 10: Define RQ. What is its value for fats?
The ratio of the volume of ${\text{C}}{{\text{O}}_{\text{2}}}$ evolved to the volume of ${{\text{O}}_{\text{2}}}$ consumed in respiration over a given period is known as a respiratory quotient (RQ) or respiratory ratio. Its value can be equal to one, zero, more than one or less than one.
${\text{RQ}} = \dfrac{{{\text{ Volume of C}}{{\text{O}}_2}{\text{ evolved }}}}{{{\text{ Volume of }}{{\text{O}}_2}{\text{ consumed }}}}$
When fat or protein is used as a respiratory substrate, the respiratory quotient (RQ) is less than one.
$$\mathrm{C_{57}H_{104}O_6} + 80\,\mathrm{O_2} \to 57\,\mathrm{CO_2} + 52\,\mathrm{H_2O}$$
$$\mathrm{RQ} = \frac{57\,\mathrm{CO_2}}{80\,\mathrm{O_2}} \approx 0.71$$
The respiratory quotient (RQ) is about 0.7 for most of the common fats.
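For comparison, when a carbohydrate such as glucose is the respiratory substrate, equal volumes of $\mathrm{CO_2}$ and $\mathrm{O_2}$ are exchanged and the RQ is exactly one:

$$\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \to 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}, \qquad \mathrm{RQ} = \frac{6}{6} = 1$$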
Question 11: What is oxidative phosphorylation?
Ans: The process by which ATP is formed as a result of the transfer of electrons from NADH or $\mathrm{FADH_2}$ to $\mathrm{O_2}$ by a series of electron carriers is known as oxidative phosphorylation. This process, which occurs in mitochondria, is the primary source of ATP in aerobic organisms.
For example, when glucose is completely oxidized to ${\text{C}}{{\text{O}}_{\text{2}}}$ and ${{\text{H}}_{\text{2}}}{\text{O}}$, oxidative phosphorylation generates 26 of the 30 molecules of ATP.
Question 12: What is the significance of the stepwise release of energy in respiration?
Ans: The following are the benefits of the stepwise release of energy in respiration:
There is a gradual release of chemical bond energy, which is easily trapped in the formation of ATP molecules.
The temperature of the cell is not allowed to rise.
Energy waste is reduced.
A variety of intermediates can be used in the production of a variety of biochemicals.
Different substances can undergo respiratory catabolism via their metabolic intermediates.
Each step of respiration is regulated by its own enzyme. Specific compounds can either increase or decrease the activity of various enzymes. This aids in controlling the rate of respiration as well as the amount of energy released.
Respiration in Plants Class 11 NCERT Solutions PDF
These NCERT Solutions for Respiration in Plants Class 11 Biology, provided by Vedantu in PDF format as recommended by CBSE, make it easy to refer to the various questions in the textbook. This supports active learning and understanding of the concepts, which in turn helps a student secure higher marks. These solutions build a strong basic understanding of the subject and model the answer format that helps any student score better in academics.
One of the best ways to learn and understand Respiration in Plants Class 11 NCERT is by referring to the NCERT Solutions. In these solutions, difficult and convoluted terms are explained in the simplest way. The confusing and complicated parts are broken into fragments and explained to make understanding easier. The best thing about these NCERT Solutions is that they have been put together by experienced biologists turned professors.
Chapter 14 of NCERT Class 11 Biology explains cellular respiration in plants and how it leads to the release of energy through the process of disintegration of food within the cells. The energy released is stored for the synthesis of ATP.
This chapter will help students understand the process of respiration and how it is carried out in living beings. The oxidation of certain macromolecules derived from food provides all the energy required for life processes. Photosynthesis is the process in which light energy is trapped and turned into chemical energy; autotrophs use it to produce their own food. Even in green plants, however, various tissues and non-green cells cannot photosynthesize and thus always require food for oxidation.
The important topics included under NCERT Class 11 Biology Chapter 14 are given in the table below.
Class 11 Biology Chapter 14 Topics
Do Plants Breathe?
The Respiratory Balance Sheet
Amphibolic Pathway
Respiratory Quotient
Importance of NCERT Class 11 Biology Chapter 14 Topics
NCERT Class 11 Biology Chapter 14 Respiration in Plants comes under Unit 4 'Plant Physiology' and is an important chapter for Class 11 CBSE Term II exam. The topics covered in this chapter such as Glycolysis, Aerobic Respiration, and Amphibolic pathways are very important and students can expect a weightage of about 18 marks in the term II exam on these topics. Topics like oxidative phosphorylation will help students to understand the electron transport system in a precise manner and the steps involved in the conversion of ADP into ATP.
NCERT Solutions for Class 11 Biology Chapter 14 Respiration in Plants
We all breathe to live, but why is breathing so essential to life? What happens when we breathe? Likewise, do all living organisms, including plants and microorganisms, breathe? If so, how? All living organisms require energy for carrying out daily life activities, be it absorption, transport, movement, reproduction, or even breathing.
Where does this energy come from? We know we eat food for energy – but how is this energy taken from food? How is this energy used? Do all foods give the same amount of energy? Do plants 'eat'? Where do plants get their energy from? And microorganisms – do they eat 'food' for their energy needs? You may wonder at the several questions raised above – they may appear to be disconnected. But in reality, the process of breathing is very much connected to the process of release of energy from food.
All the energy needed for life processes is obtained by the oxidation of certain macromolecules that we call 'food'. Only green plants and cyanobacteria can prepare their own food; by the process of photosynthesis, they trap light energy and convert it into chemical energy that is stored in the bonds of carbohydrates like glucose, sucrose, and starch. We should remember that in green plants too, not all cells, tissues, and organs photosynthesize; only cells containing chloroplasts, which are usually situated in the superficial layers, carry out photosynthesis.
Division of Marks for Respiration in Plants Class 11 NCERT
This chapter falls under Unit 4, which has a weightage of 18 marks. One can expect either a basic 1-mark question or an elaborate 5-mark question.
Advantages of NCERT Solutions for Class 11 Biology Chapter Respiration in Plants
The Class 11 Bio Ch 14 NCERT Solutions help students understand and develop the ability to answer questions in a format compatible with the CBSE guidelines. Students can use these NCERT Solutions for revision and doubt-clearing as well. They can also refer to them to rectify their mistakes once they complete all the exercise questions.
These resources are of high quality and can be relied upon. Vedantu experts cover all the questions mentioned in Ch 14 of Class 11 NCERT Biology. Solving questions from the Biology textbook and referring to the NCERT Solutions for Class 11 Biology chapter Respiration in Plants for assistance will always help students comprehend the concepts effectively.
NCERT Class 11 Biology Chapter wise Solutions - Free PDF Download
Chapter 1 - The Living World
Chapter 2 - Biological Classification
Chapter 3 - Plant Kingdom
Chapter 4 - Animal Kingdom
Chapter 5 - Morphology of Flowering Plants
Chapter 6 - Anatomy of Flowering Plants
Chapter 7 - Structural Organisation in Animals
Chapter 8 - Cell The Unit of Life
Chapter 9 - Biomolecules
Chapter 10 - Cell Cycle and Cell Division
Chapter 11 - Transport in Plants
Chapter 12 - Mineral Nutrition
Chapter 13 - Photosynthesis in Higher Plants
Chapter 14 - Respiration in Plants
Chapter 15 - Plant Growth and Development
Chapter 16 - Digestion and Absorption
Chapter 17 - Breathing and Exchange of Gases
Chapter 18 - Body Fluids and Circulation
Chapter 19 - Excretory Products and their Elimination
Chapter 20 - Locomotion and Movement
Chapter 21 - Neural Control and Coordination
Chapter 22 - Chemical Coordination and integration
FAQs on NCERT Solutions for Class 11 Biology Chapter 14
1. What are the core stages of aerobic respiration? Where does it take place?
Aerobic respiration is an enzymatically controlled, stepwise catabolic release of energy through the complete oxidation of organic food into carbon dioxide and water, with oxygen acting as the terminal oxidant. It occurs by two routes: the common pathway and the pentose phosphate pathway. The common pathway is so called because its first step, glycolysis, is common to both aerobic and anaerobic modes of respiration. The common pathway of aerobic respiration consists of three stages – glycolysis, Krebs' cycle, and terminal oxidation. Aerobic respiration occurs inside mitochondria: the end product of glycolysis, pyruvate, is transported from the cytoplasm into the mitochondria.
2. What kind of assumption is made during the calculation of the net gain of ATP?
It is possible to calculate the net gain of ATP for each glucose molecule oxidised; in reality, however, this can remain only a theoretical exercise. These calculations can be made only on certain assumptions:
There is a sequential, orderly pathway operating, with one substrate forming the next, and with glycolysis, the TCA cycle and the ETS pathway following one after another; NADH synthesised in glycolysis is transferred into the mitochondria and undergoes oxidative phosphorylation.
Only glucose is being respired – no alternative substrates enter the pathway at any of the intermediate stages.
Yet these kinds of assumptions are not really valid in a living system; all pathways work simultaneously and do not take place one after another; substrates enter the pathways and are withdrawn from them as and when necessary; ATP is utilised as and when required; and enzymatic rates are controlled by multiple means. Hence, there can be a net gain of 36 ATP molecules during aerobic respiration of one molecule of glucose.
3. What concepts can I learn from the NCERT Solutions for Class 11 Biology Chapter 14?
Chapter 14 is about Respiration in Plants. The concepts that you can learn from the NCERT Solutions for Class 11 Biology Chapter 14 are "Do Plants Breathe?", Glycolysis, Fermentation, Aerobic Respiration, The Respiratory Balance Sheet, Amphibolic Pathway and Respiratory Quotient. These topics are made very easy for the students in NCERT Solutions. The concepts are very easy to understand and given in an organised manner.
4. Are the NCERT Solutions for Class 11 Biology Chapter 14 sufficient for exam preparation?
Yes, if your preparation from the NCERT Solutions for Class 11 Biology Chapter 14 is thoroughly done with the practice of all the questions and answers from the exercises and the important questions, you are all prepared for the exam. You can also practise the sample question papers to get an idea of writing the answers appropriately. Go through the NCERT Solutions from start to end, and you will be confident with your subject. The solutions are accessible free of cost on the Vedantu website as well as the Vedantu Mobile app.
5. What is respiration in plants Class 11?
Respiration in plants is a catabolic process in which energy is released through the enzyme-mediated breakdown of food substances inside living cells. Energy is required by all living organisms for all activities such as breathing, absorption, reproduction and movement. The important aspect of respiration is the liberation of metabolic energy as ATP.
6. What is Aerobic Respiration NCERT Solutions?
Aerobic respiration involves the exchange of gases in the presence of oxygen. It leads to the breakdown of respiratory substrates, giving carbon dioxide and water as end products. Glycolysis, the first stage of aerobic respiration, produces pyruvic acid and a net gain of two ATP for each glucose molecule.
7. What is the respiratory balance sheet Class 11?
The respiratory balance sheet in Class 11 Biology Chapter 14 is the calculation of the net gain of ATP for every glucose molecule oxidised. It depends on the assumptions of an orderly pathway, with one substrate forming the next; glycolysis, the TCA cycle and the ETS following one after another; NADH synthesised in glycolysis being transferred to mitochondria and undergoing oxidative phosphorylation; only glucose being respired; and none of the intermediates in the pathway being utilised to synthesise any other compound.
Importins promote high-frequency NF-κB oscillations increasing information channel capacity
Zbigniew Korwek1,
Karolina Tudelska1,
Paweł Nałęcz-Jawecki2,
Maciej Czerkies1,
Wiktor Prus1,
Joanna Markiewicz1,
Marek Kochańczyk1 &
Tomasz Lipniacki1 ORCID: orcid.org/0000-0002-3488-2561
Biology Direct volume 11, Article number: 61 (2016)
Importins and exportins influence gene expression by enabling nucleocytoplasmic shuttling of transcription factors. A key transcription factor of innate immunity, NF-κB, is sequestered in the cytoplasm by its inhibitor, IκBα, which masks the nuclear localization sequence of NF-κB. In response to TNFα or LPS, IκBα is degraded, which allows importins to bind NF-κB and shepherd it across nuclear pores. NF-κB nuclear activity is terminated when newly synthesized IκBα enters the nucleus, binds NF-κB and exportin, which directs the complex to the cytoplasm. Although importins/exportins are known to regulate the spatiotemporal kinetics of NF-κB and other transcription factors governing innate immunity, the mechanistic details of these interactions have not been elucidated and mathematically modelled.
Based on our quantitative experimental data, we pursue NF-κB system modelling by explicitly including NF-κB–importin and IκBα–exportin binding to show that the competition between importins and IκBα enables NF-κB nuclear translocation despite high levels of IκBα. These interactions reduce the effective relaxation time and allow the NF-κB regulatory pathway to respond to recurrent TNFα pulses of 45-min period, which is about half the characteristic period of NF-κB oscillations. By stochastic simulations of model dynamics we demonstrate that randomly appearing, short TNFα pulses can be converted to essentially digital pulses of NF-κB activity, provided that intervals between input pulses are not shorter than 1 h.
By including interactions involving importin-α and exportin we bring the modelling of spatiotemporal kinetics of transcription factors to a more mechanistic level. Based on the analysis of the proposed model, we estimated the information transmission rate of the NF-κB pathway as 1 bit per hour.
This article was reviewed by Marek Kimmel, James Faeder and William Hlavacek.
Control of nuclear localization of proteins, especially transcription factors (TFs), is a crucial aspect of gene expression regulation [1, 2]. While nuclear pore complexes (NPCs) allow passive diffusion of molecules of mass below approximately 40 kDa [3], larger molecules require active transport to cross the nuclear envelope. As a result, nuclear transport of most TFs is energy-dependent and in most cases involves a homologous family of carrier molecules called karyopherins, with import carriers called importins and export carriers called exportins [4]. Importins recognize cargoes containing a signal peptide sequence called the nuclear localization signal (NLS). The signal peptide sequence recognized by exportins is called the nuclear export signal (NES). In the classical nuclear protein import pathway, the importin-α family functions as an NLS-recognizing adaptor, which is in turn recognized by importin-β (KPNB1), a carrier mediating interactions with the NPC. In the best recognized pathway of nuclear protein export, exportin 1 (XPO1) functions both in NES recognition and as a carrier. Usually, cargoes possessing both NLS and NES sequences undergo continuous shuttling between the cytoplasmic and the nuclear compartment. Thus, localization of a TF can be dynamically regulated by its conformational change or association with other molecules affecting the accessibility of the NLS or NES for karyopherin binding [5–7].
The energy required for nuclear transport is supplied by a GTPase called Ran (RAN) and is used among others for importin-β dissociation. Following classical nuclear protein import, conversion of importin-bound RanGDP into RanGTP causes the release of importin-α:cargo complex from importin-β in the nucleus [3]. The mechanisms of importin-α release vary and their details are still debated. However, since the affinity of importin-α for its target classical NLS is ~10 nM, the release is likely to require catalysis or competitive binding [4]. GTPases are essential for active nuclear transport, but due to their abundance [8, 9], they do not limit the rates of nuclear transport processes. Instead, these rates can be limited by diffusion or active transport along microtubules [10], as the karyopherin:cargo complexes are formed anywhere in a cellular compartment, and need to translocate first into the vicinity of the nuclear envelope before translocation across a NPC can occur.
Here, we focus on NF-κB, a ubiquitous TF fundamental to the innate immune response, which upon stimulation exhibits oscillatory nucleocytoplasmic shuttling. These oscillations result from the negative feedbacks mediated by its inhibitors, IκBα (NFKBIA) [11] and A20 (TNFAIP3) [12], and require bidirectional transport across the nuclear membrane. The most abundant of the NF-κB heterodimers consists of RelA (RELA) and p50 (NFKB1) subunits, and in resting cells most of them are retained in the cytoplasm by IκBα which masks the NLS of RelA [13, 14]. Upon TNFα or LPS stimulation, IκBα is phosphorylated by the kinase complex IKK, and then ubiquitinated and degraded by the 26S proteasome [15–18]. IκBα degradation exposes the NLS of RelA, allowing importin-α3 (KPNA3) or α4 (KPNA4) binding [19, 20]. Then, the NF-κB:importin-α complex can be intercepted by importin-β, which interacts with the nuclear pore to effect NF-κB translocation [21]. In the nucleus, NF-κB triggers the expression of numerous target genes, including two of its inhibitors, A20 and IκBα [22]. A20 attenuates IKK activity [23], allowing for accumulation of newly synthesized IκBα, which diffuses into the nucleus and binds NF-κB. The transcriptionally active NF-κB is removed from gene promoters by IκBα binding, which terminates transcription. After exportin 1 recognizes the NES of IκBα, it enables free IκBα as well as the IκBα:NF-κB complexes to pass through the NPC and leave the nucleus [24–27]. In this way, exportins participate in the suppression of NF-κB signalling.
Despite the fact that modelling of nuclear trafficking has been considered in the past [28, 29] and NF-κB signalling network has been extensively studied both experimentally and by mathematical modelling [30–39], the regulation of NF-κB translocation by karyopherins has not been modelled explicitly. Following the work of Hoffmann et al. [30], existing computational models have simplified these interactions by assuming that free NF-κB translocates to the nucleus, while NF-κB:IκBα complexes translocate to the cytoplasm. We found that this approach falls short of capturing the spatiotemporal coevolution of NF-κB and IκBα levels during the second NF-κB pulse. We observe, both at the population and single-cell level, that NF-κB enters the nucleus despite the excess of IκBα. This suggests that a fraction of released NF-κB is rapidly captured by importin-α and in this way escapes from binding by the remaining IκBα. Therefore, we pursued NF-κB modelling towards a more detailed mechanistic description, which better explains the observed spatiotemporal coevolution of levels of NF-κB and its inhibitor IκBα in response to stimulation with TNFα or LPS. The proposed model captures the puzzling short-period NF-κB oscillations in response to pulsed TNFα stimulation observed recently by Zambrano et al. [40]. As the NF-κB–IκBα feedback loop emerges as the canonical example of regulation of transcription factor signalling [11], its mechanistic modelling can add to understanding of spatiotemporal kinetics of other transcription factors [41, 42].
Cheong et al. [43] and Selimkhanov et al. [44] demonstrated that the NF-κB pathway transmits only 1 bit of information about the level of TNFα, which is equivalent to resolving whether TNFα is present or not. The interesting question is how frequently this bit of information can be transmitted. The ability to respond to frequent pulses is controlled by the refractory time, which may depend on the specific cytokine or other stimuli, and as found by Adamson et al. [45] can be shorter when one type of stimulus is replaced by another. Based on the proposed model, we theoretically demonstrate that the NF-κB pathway can transmit information about short TNFα pulses as long as their frequency does not exceed 1 per hour. Since TNFα is short-lived in vivo, with a half-life of the order of 10 min [18, 46], we expect that information is encoded in the sequence of TNFα pulses rather than in their amplitude.
Spatiotemporal profiles of NF-κB and IκBα in response to TNFα and LPS
We investigated the spatiotemporal NF-κB–IκBα relationship using immunofluorescent staining. We chose this technique in addition to Western blotting (WB) to obtain information about the levels and localization of NF-κB and IκBα in single cells. Fluorescent tagging of NF-κB and IκBα [47], although a method of choice for examining NF-κB regulation in single cells, could influence protein interactions with karyopherins and, by increasing IκBα mass above 40 kDa, suppress its passive diffusion through nuclear pores.
As shown in Fig. 1a and b, in unstimulated cells most of NF-κB is sequestered by IκBα in the cytoplasm. TNFα- or LPS-induced IκBα degradation observed at 15 and 30 min after stimulation results in nuclear NF-κB translocation at 15–30 min for TNFα and 30–60 min for LPS stimulation. Immunostaining images in Fig. 1a show that upon TNFα stimulation NF-κB returns to the cytoplasm at 60 min and then translocates to the nucleus again at 100 min despite accumulation of IκBα above its baseline level. Interpretation of responses to LPS is more difficult due to delayed and more heterogeneous cell activation [48]. Nevertheless, also in this case immunostaining images indicate that at 90 and 120 min a fraction of cells has an increased level of cytoplasmic IκBα and simultaneously sizable nuclear NF-κB translocation. In response to LPS costimulation with a protein synthesis inhibitor, cycloheximide (CHX), IκBα is degraded but not resynthesized, allowing NF-κB to remain in the nucleus for 4 h (Fig. 1c).
Spatiotemporal profiles of NF-κB and IκBα in response to TNFα and LPS and the emerging model. Immunostaining confocal images showing localization and level of RelA (component of NF-κB dimer) and IκBα after (a) TNFα stimulation, (b) LPS stimulation, or (c) LPS + CHX costimulation. In (c), CHX was added 1 h before LPS stimulation. Additional file 4, Additional file 5 and Additional file 6 provide full confocal images corresponding to the images shown in (a–c). Additional file 7 provides confocal images of cells stimulated with 10 ng/ml TNFα for short time points (0, 5, 10, 15, 20 and 30 min). d Model scheme. Arrow-headed lines denote transitions, mRNA or protein synthesis, complex formation, or fast degradation; circle-headed lines denote positive influence; hammer-headed line denotes negative influence. Importins direct NF-κB to the nucleus, whereas exportins bind IκBα and IκBα–NF-κB dimers and shepherd them to the cytoplasm. All other nucleocytoplasmic translocations are assumed to proceed in a karyopherin-independent manner
Formulation of the computational model
We propose that the nuclear translocation of NF-κB occurring after 90 min of TNFα stimulation despite the excess of IκBα is enabled by NF-κB–importin interactions, which prevent the newly released NF-κB from binding to remaining IκBα molecules. In order to investigate this hypothesis we pursued our previous model [33, 49] and explicitly considered NF-κB–importin binding in the cytoplasm and IκBα–exportin binding in the nucleus. The emerging model is outlined in Fig. 1d. IκBα kinase complex (IKK) is activated via kinase IKKK in response to TNFα (or LPS). Active IKK (IKKa) phosphorylates free as well as NF-κB-bound IκBα (at Ser 32 and 36) leading to its polyubiquitination and rapid degradation by the 26S proteasome [15, 18]. When IκBα is in excess, the newly released NF-κB can either bind another IκBα molecule or translocate to the nucleus. Since nuclear translocation of NF-κB is preceded by binding of importins to its NLS sequences, we propose that competition for NF-κB binding between importin-α and IκBα determines whether released NF-κB enters the nucleus or becomes sequestered in the cytoplasm by another IκBα molecule. Although previous models assumed that free NF-κB may translocate to the nucleus, importin-α binding precluding sequestration by IκBα has not been explicitly considered.
NF-κB translocates back to the cytoplasm complexed with IκBα, which may pass through nuclear pores after association with exportin 1. To reduce complexity of the model we assume that both importin-α and exportin 1 dissociate from their cargo immediately after crossing the nuclear pore. Later we will compare it with a model variant in which the dissociation step is considered explicitly. IκBα, a small 32 kDa protein, can translocate to the nucleus independently of importin-α, so for simplicity we do not assume any IκBα–importin-α interactions. Finally, because karyopherins have multiple binding partners apart from IκBα and NF-κB, we assume that they are present in excess to IκBα and NF-κB and that their levels are not influenced by IκBα and NF-κB binding.
The rates of karyopherin binding and of translocation of cargo-karyopherin complexes are set based on the following assumptions. Karyopherin-bound proteins can freely but unidirectionally move across the nuclear membrane. For freely diffusing molecules the ratio of nuclear export rate to nuclear import rate is equal to the ratio of cytoplasmic-to-nuclear volume, kv, which ensures cell-uniform concentration in equilibrium. Hence, we assume that the ratio of the karyopherin-dependent nuclear export to import rate equals kv. We assume that the effective NF-κB:importin-α binding rate is higher than the NF-κB:IκBα binding rate. Under this assumption, NF-κB can translocate to the nucleus even when IκBα is temporarily in excess (Fig. 1a and b). In contrast, we assume that the IκBα:exportin 1 binding rate is lower than that of NF-κB and IκBα, which in turn allows nuclear IκBα to bind NF-κB before it is transported back to the cytoplasm. Overall, the effective IκBα nuclear export rate is higher than its import rate, which reflects the observation that IκBα, even if present in excess of NF-κB, localizes mainly in the cytoplasm.
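To make the competition mechanism above concrete, here is a minimal ODE sketch in Python (our illustration only: the rate constants and initial amounts are placeholder assumptions, not the fitted parameters of the full model in Additional file 1). Released cytoplasmic NF-κB is either captured by importin-α at an effective first-order rate (importins assumed in excess) or re-sequestered by residual IκBα:

```python
# Minimal sketch of importin-alpha vs. IkBa competition for newly released
# cytoplasmic NF-kB. All rates and initial amounts are placeholder assumptions.
from scipy.integrate import solve_ivp

K_IMP = 0.1    # 1/s, effective NF-kB:importin-alpha capture (importins in excess)
K_IKB = 1e-6   # 1/(molecule*s), re-sequestration of NF-kB by residual IkBa
K_IN = 0.01    # 1/s, nuclear entry of the importin-bound pool

def rhs(t, y):
    nfkb_c, nfkb_imp, nfkb_n, ikba_c = y
    capture = K_IMP * nfkb_c           # flux won by importin-alpha
    rebind = K_IKB * nfkb_c * ikba_c   # flux won by IkBa
    entry = K_IN * nfkb_imp
    return [-capture - rebind, capture - entry, entry, -rebind]

# State just after IKK-driven IkBa degradation: free NF-kB with residual IkBa.
sol = solve_ivp(rhs, (0.0, 600.0), [1e5, 0.0, 0.0, 2e4])
print(f"nuclear NF-kB after 10 min: {sol.y[2, -1]:.3g} molecules")
```

With the capture rate exceeding the re-binding flux, most of the released pool reaches the nucleus even though IκBα is still present — the qualitative behaviour the model is intended to capture.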
The structure of our model, except for the interactions of IκBα and NF-κB with karyopherins, is laid out as in our previous paper [33] (see Additional file 1 for a complete model definition and parameters). However, following our later study [35], we consider all reactions stochastic, firing with concentration-dependent propensities. We used the model-specification language of BioNetGen (BNGL) to define types of molecules included in our model and to specify rules of interactions. The conventions of BNGL are described in detail elsewhere [50, 51]. BioNetGen allows for efficient deterministic and stochastic simulation employing a variation of Gillespie's direct method [52]. A BioNetGen language-encoded model is enclosed in Additional file 2.
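To illustrate the simulation algorithm itself (not the full network, which BioNetGen handles), below is a bare-bones Gillespie direct-method loop for a toy subsystem — a single gene switching on/off plus transcription. Only the transcription rate of 0.2 mRNA/(s × gene copy) is taken from the text; the switching rates are made-up placeholders:

```python
# Bare-bones Gillespie direct method for a toy gene on/off + transcription
# subsystem. Switching rates are illustrative; only K_TX follows the text.
import numpy as np

rng = np.random.default_rng(0)
K_ON, K_OFF, K_TX = 0.001, 0.002, 0.2   # 1/s
gene_on, mrna, t, t_end = 0, 0, 0.0, 3600.0

while t < t_end:
    props = np.array([K_ON * (1 - gene_on),   # gene activation
                      K_OFF * gene_on,        # gene deactivation
                      K_TX * gene_on])        # transcription from active gene
    total = props.sum()
    if total == 0.0:
        break
    t += rng.exponential(1.0 / total)   # exponential waiting time to next event
    r = rng.choice(3, p=props / total)  # pick the reaction that fires
    if r == 0:
        gene_on = 1
    elif r == 1:
        gene_on = 0
    else:
        mrna += 1

print(f"mRNA copies after 1 h: {mrna}")
```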
As described previously [33], the model accounts for two types of noise: intrinsic, associated with low numbers of molecules, and extrinsic, arising from initial heterogeneity of cells in the population. The major source of intrinsic noise is A20 and IκBα genes switching between on and off states (by saying that a gene is on we mean that the transcription factor is bound to DNA and all other conditions are satisfied for transcription to proceed). Extrinsic noise arises from variable expression of TNF receptors (TNFRs) and NF-κB. The receptor level variability results in heterogeneous cell sensitivity to the signal, whereas the NF-κB level variability is responsible for a broad distribution of NF-κB nuclear intensities observed even when almost the whole NF-κB pool is translocated into the nucleus. Following our previous study [33], we assume a lognormal distribution of TNFR, but the distribution of NF-κB is estimated based on data from the 1 h time-point from the CHX + LPS costimulation experiment (Figs. 1c and 2a–b). As shown in Fig. 2a, after 0.5 h of CHX + LPS costimulation about 90 % of IκBα is degraded, while NF-κB translocation reaches its maximum at 1 h with about 70 % of total NF-κB translocated to the nucleus. The distribution shown in Fig. 2b was obtained under a condition when the synthesis of NF-κB inhibitors is almost fully suppressed so it can be interpreted as the distribution of all of the NF-κB that has the potential to translocate to the nucleus upon LPS or TNFα stimulation. Therefore, based on the assessments by Carlotti et al. [53, 54], we assume that the median level of NF-κB is 105 molecules per cell (with on average 30 % of NF-κB associated with inhibitors that are not degraded in response to LPS or TNFα) and we use this distribution to draw numbers of NF-κB molecules for stochastic simulations.
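A short sketch of how this extrinsic noise could be seeded per cell follows. The median of 10^5 NF-κB molecules and the ~30 % inhibitor-bound fraction come from the text; the lognormal spreads and the TNFR median are placeholder assumptions (the actual NF-κB distribution is drawn from the Fig. 2b data, not a fitted lognormal):

```python
# Per-cell extrinsic noise (sketch): lognormal TNFR counts and total NF-kB with
# median 1e5 molecules/cell. Spreads and the TNFR median are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_cells = 300
tnfr = rng.lognormal(mean=np.log(1e3), sigma=0.5, size=n_cells)       # receptors/cell
nfkb_total = rng.lognormal(mean=np.log(1e5), sigma=0.3, size=n_cells) # molecules/cell
nfkb_mobile = 0.7 * nfkb_total  # ~30% stays bound to inhibitors not degraded

print(f"median NF-kB: {np.median(nfkb_total):.2e}, median TNFR: {np.median(tnfr):.0f}")
```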
Quantification of NF-κB, IκBα and A20 levels in response to TNFα and LPS stimulation. a Immunostaining time profiles of RelA (NF-κB) nuclear/total ratio and total IκBα/total RelA (NF-κB) ratio (in arbitrary units) in response to 1 μg/ml LPS with 5 μg/ml CHX costimulation started 1 h before LPS. Open squares show values calculated in each of five confocal frames analysed. Filled squares show mean over these five frames containing in total more than 500 cells for each time point. b Histograms showing nuclear RelA (NF-κB) fluorescence normalized to cell average fluorescence for unstimulated cells (grey) and 1 h after 1 μg/ml LPS with 5 μg/ml CHX costimulation (green); see Methods for details of normalization. Coefficient μ is the histogram average while σ is the standard deviation. In stochastic numerical simulations, total single-cell NF-κB levels were drawn at random based on the data used to plot the histogram; see Methods. Bottom subpanel shows cumulative distributions for unstimulated (grey line) and stimulated (green line) cells. Kolmogorov–Smirnov statistic (K–S) equals 0.878, which implies that at least 87.8 % of cells respond to stimulation. c Experimental IκBα and A20 mRNA time profiles after 10 ng/ml TNFα and 1 μg/ml LPS stimulation from three independent measurements. Data show absolute quantification by digital PCR for TNFα stimulation or rescaled RT-PCR quantification using digital PCR measurements in selected time points. Model simulated mRNA profiles after 10 ng/ml TNFα show the average over 300 stochastic simulations. The numerical values are shown only for experimental time points, and are connected by line only to guide the eye. d Western blot analysis of cytoplasmic and nuclear fractions of RelA (NF-κB), IκBα and A20. Blots from one of three quantified experiments are shown. Nuclear IκBα and A20 were near the limit of detection. Model simulated protein profiles after 10 ng/ml TNFα show the average over 300 stochastic simulations
Based on the absolute quantification of IκBα and A20 transcripts (Fig. 2c) we modified IκBα and A20 mRNA synthesis and degradation coefficients, so that IκBα and A20 mRNA are transcribed at the rate of 0.2 mRNA/(s × gene copy) and translated at the rate of 0.5 protein/(s × mRNA). These two rates appear to be close to physiological maxima. The transcription speed was measured at ~60 nt/s [55] and minimal spacing between RNA polymerases as small as ~100 nt [56], which gives maximal transcription rate of about 0.6 mRNA/(s × gene copy). The rate of translation in eukaryotic cells has been estimated at 6 aa/s (or 18 nt/s) [57] and the ribosome centres can be only 40–60 nucleotides apart in 3D helical polysomal conformation [58], which implies the upper bound estimate of 0.5 protein/(s × mRNA). The assumed transcription rate assures that IκBα transcript level reaches about 300 mRNA molecules during the first NF-κB pulse, which together with the high translation rate allows for the rapid de novo synthesis of more than 105 IκBα molecules needed to shepherd NF-κB back to the cytoplasm. Our estimates suggest that the IκBα–NF-κB negative loop is optimized to produce high-amplitude, short-lasting pulses of NF-κB activity.
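Restating the arithmetic behind these two upper bounds (polymerase speed over minimal polymerase spacing, and ribosome speed over minimal ribosome spacing):

$$ k_{\mathrm{transcription}}^{\max} \approx \frac{60\ \mathrm{nt/s}}{100\ \mathrm{nt}} = 0.6\ \frac{\mathrm{mRNA}}{\mathrm{s} \times \mathrm{gene\ copy}}, \qquad k_{\mathrm{translation}}^{\max} \approx \frac{18\ \mathrm{nt/s}}{40\ \mathrm{nt}} \approx 0.5\ \frac{\mathrm{protein}}{\mathrm{s} \times \mathrm{mRNA}}. $$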
Finally, with respect to the previous study, we reduce the coefficient of TNFα degradation to 10−4/s. TNFα is known to be short-lived in vivo, with reported half-life times ranging from 4.6 to 10.5 min [59–61]. However, these values represent the rate of elimination of TNFα from murine plasma/bloodstream resulting from many different processes occurring simultaneously in the complex system of the animal organism. Aside from internalisation by target stimulated cells, TNFα is mainly cleared from blood by the liver and kidneys and is also broken down by plasma proteases (such as neutrophil elastase and cathepsin G – see [62] and references therein), resulting in its short in vivo half-life. These processes are either absent (blood filtration) or severely limited in vitro (protease degradation). Zambrano et al. [40] reported negligible degradation of TNFα used for stimulation of medium-cultured MEFs in a microfluidic chamber. Our own data indicate slow degradation (or activity loss) of TNFα incubated with cells. In an experiment reported in Additional file 3: Figure S1, we stimulated naive cells with media extracted 6 h after stimulation of naive cells with TNFα at the initial concentration of 10 ng/ml. We observed a somewhat weaker and more heterogeneous response, resembling the response to a 1 ng/ml dose, which allowed us to estimate the effective TNFα degradation/loss rate for our experimental conditions as 10−4/s. For such a degradation rate, 10 ng/ml is degraded in 6 h to 10 ng/ml × e−2.16 ≈ 1.15 ng/ml. As we discussed previously [33], in small microfluidic chambers at low TNFα concentrations, the effective TNFα loss rate can be much higher due to binding by more abundant TNFR and endocytosis. We also observed, based on ELISA, negligible degradation of TNFα in cell-free medium during the course of 24 h incubation at 22 °C and 37 °C. Taken together, these results suggest that cellular internalization is the main process decreasing TNFα levels in the described in vitro conditions.
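For reference, the 6-h figure quoted above is simple exponential decay at the estimated rate:

$$ c(6\ \mathrm{h}) = 10\ \mathrm{ng/ml} \times e^{-10^{-4}\,\mathrm{s}^{-1} \times 21600\ \mathrm{s}} = 10\ \mathrm{ng/ml} \times e^{-2.16} \approx 1.15\ \mathrm{ng/ml}. $$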
Accumulation of IκBα above the initial level, shown in Fig. 1a and b, is corroborated by population data obtained by WB, which were quantified and used to fit model parameters, and juxtaposed with simulated trajectories (Fig. 2d). WB analysis indicates that in unstimulated cells the nuclear NF-κB level is very low. The TNFα- and LPS-induced degradation of IκBα at 15 and 30 min results in nuclear NF-κB translocation. The decrease of cytoplasmic IκBα (at 15 and 30 min for TNFα and 30 min for LPS) is more pronounced than the decrease of cytoplasmic NF-κB, which suggests that a fraction of NF-κB is sequestered by other inhibitors, such as IκBε or IκBβ, which are not degraded as rapidly as IκBα [30]. These additional NF-κB inhibitors are implicitly modelled by assuming that on average 30 % of NF-κB is associated with inhibitors that are not degraded in response to TNFα.
Figure 3 illustrates how the signal propagates from TNFR to NF-κB and through negative feedback loops. In short, TNFR activation leads to a pulse of IKK activity followed by an oscillating tail. Active IKK (IKKa) phosphorylates IκBα, leading to its rapid degradation, which allows NF-κB to translocate to the nucleus and trigger transcription of its inhibitors, IκBα and A20. Resynthesized IκBα enters the nucleus, binds NF-κB and exportin 1, and the complex translocates back to the cytoplasm. A20 blocks TNFR activity and enhances transformation of IKKa to inactive IKKi. The first pulse of nuclear NF-κB is followed by subsequent less pronounced pulses, which occur despite the accumulation of IκBα above its initial levels. Single-cell trajectories show progressive desynchronization of responses initially synchronized by TNFα stimulation. Due to the loss of cell synchronization, oscillations of the population average are more dampened than oscillations of individual cell trajectories. In the numerical simulations, cells were equilibrated in the absence of TNFα for 100 h. As shown in Additional file 3: Figure S2, unstimulated cells exhibit irregular oscillations, arising from spontaneous degradation of IκBα and resulting in low-level bursts of NF-κB activity.
Numerical simulations. a Time profiles of active IKK (IKKa), total IκBα, nuclear NF-κB, IκBα mRNA, A20 mRNA, and total A20 in response to 10 ng/ml TNFα stimulation. Bold red line denotes deterministic simulation, bold black line denotes average over 300 stochastic simulations, 5 thin colour lines show single-cell stochastic simulations. TNFα stimulation starts at time = 0 and lasts till 300 min. b Stochastic (thin colour lines), deterministic (bold red line) and population average (bold black line) time profiles of total IκBα and nuclear NF-κB in response to pulsed 10 ng/ml TNFα stimulation. The simulation protocols correspond to repeated TNFα pulses in the experiment performed by Zambrano et al. [40]: 22.5-min TNFα stimulation, 22.5-min break; 30-min TNFα stimulation, 30-min break; 30-min TNFα stimulation, 60-min break; 30-min TNFα stimulation, 150-min break
In order to verify whether the modified model is capable of reproducing previously reported key experiments posing constraints on pathway connectivity and parameters, we performed a series of numerical simulations (Fig. 3b and Additional file 3: Figures S3, S4 and S5). In Fig. 3b we show stochastic, deterministic and population average time profiles of IκBα and nuclear NF-κB in response to pulsed 10 ng/ml TNFα stimulation. These protocols correspond to the recent experiment by Zambrano et al. [40], who observed NF-κB pulses in response to periodic TNFα stimulation with periods of 45, 60, 90, 180 min. The simulations show that the system can respond by NF-κB translocations even to the shortest-period pulses, and that during these pulses IκBα remains above the level expected for resting cells. The 45-min pulsing period is about half of the intrinsic oscillation period of 90–100 min [35, 63], and this intrinsic oscillation frequency becomes increasingly more visible when the signal propagates downstream of IKK (Additional file 3: Figure S6). In Additional file 3: Figure S3 we reproduce the responses of A20-deficient fibroblasts, observed by Lee et al. [12]. The stable, switch-like NF-κB activation in this knock-out cell line results from the lack of the A20-mediated negative feedback [31]. In Additional file 3: Figure S4 we reproduce responses to 1, 2, 5, 15, 45 min 10 ng/ml TNFα pulses, known to result in a single NF-κB translocation [30, 64] of amplitude weakly dependent on TNFα pulse duration. In Additional file 3: Figure S5 we reproduce responses to three series of 5-min TNFα pulses separated by time intervals of 60, 100 or 200 min from the experiment by Ashall et al. [34]. They observed that the NF-κB translocation amplitude of the second and third pulse is equal to that of the first pulse for time intervals of 200 min, and reduced to about 30 % for time intervals of 60 and 100 min.
Model validation
Quantification of time series of confocal immunofluorescent images was performed to validate the proposed model. With the use of our in-house software, DAPI-stained nuclei were detected automatically (with occasional manual correction) and nuclear fluorescence was quantified in single cells. To obtain accurate single-cell cytoplasmic fluorescence, cytoplasmic contours were marked manually. Automatic quantification (see Methods for details of confocal images quantification) allowed us to calculate frame-average ratios such as: nuclear NF-κB/total NF-κB, total IκBα/total NF-κB. The last ratio can be only expressed in arbitrary units, since the actual values depend on staining protocols and laser intensities (kept low and the same in all experiments). Based on manual identification of cells, we calculate these values in selected representative cells (see Additional file 4 and Additional file 5 for confocal images with marked cells used for quantification).
In Fig. 4a (experiment) and 4b (model) we provide scatter plots showing the coevolution of two observables: IκBα/NF-κB ratio in the cytoplasm and NF-κB nuclear fraction. At 15 and 30 min after TNFα stimulation we observe a decrease of the cytoplasmic IκBα/NF-κB ratio and a simultaneous increase of the NF-κB nuclear fraction. In the experiment, between 30 and 60 min cytoplasmic IκBα increases above initial levels, and nuclear NF-κB drops to nearly initial levels. Then, despite the continuous rise of the cytoplasmic IκBα/NF-κB ratio, the nuclear NF-κB fraction increases. We should note that although the elevated IκBα level (with respect to the initial value) is observed both in WB and immunostaining images, the increase of IκBα level between 60 and 100 min is not observed in WB (Fig. 2d). The elevated IκBα level observed between 100 and 180 min confirms that the excess of cytoplasmic IκBα does not prevent the next pulse of NF-κB activity. Similar behaviour is also observed in the case of LPS stimulation (Additional file 3: Figure S5), but identification of the second pulse is difficult due to greater cell heterogeneity. Figure 4c, showing coevolution of the total IκBα/NF-κB ratio and NF-κB nuclear fraction, confirms that at the second pulse the total amount of IκBα also exceeds that of NF-κB.
Model validation. a, b Scatter plots showing evolution of total IκBα/total (NF-κB) ratio and nuclear NF-κB/total NF-κB in response to 10 ng/ml TNFα. a Experiment-based scatter plots are based on quantified confocal images shown in Additional file 4. For each time point, boundaries of 50 stained cells and their nuclei were manually determined and IκBα and NF-κB were quantified. Dots represent single cells, squares represent averages over confocal images, crosses represent confocal images (see Additional file 4) from which single cells were analysed. b Simulation-based scatter plots were obtained in stochastic model simulations. c Model simulated time profiles of total IκBα/total NF-κB and nuclear NF-κB/total versus experimental data. Black bold line shows average over 300 stochastic simulations, colour lines show single-cell stochastic simulations. Open squares represent average over confocal images, filled squares represent mean over 5 frames
Importins and information transmission through the NF-κB pathway
Negative feedbacks mediated by IκBα and A20 allow the system to be reset rapidly; however, combined with noise and ultrasensitivity they reduce the information that can be transmitted by a single NF-κB pulse. By theoretical system modelling we found that the NF-κB system exhibits stochastic robustness, allowing cells to respond differently to the same stimuli, but causing their individual responses to be unequivocal, essentially of all-or-nothing character [49]. This was confirmed by the observation that the expression level of early genes, when calculated per responding cell, is independent of the TNFα dose [33]. Cheong et al. [43] and Selimkhanov et al. [44] demonstrated in a more rigorous way that the NF-κB pathway transmits only n ≈ 1 bit of information about the level of TNFα, which is equivalent to resolving whether TNFα is present or not.
According to Shannon's definition [65], the information channel capacity can be expressed as
$$ C = \lim_{T \to \infty} \left( \frac{\log_2 M}{T} \right), $$
where M is the number of different signal functions that can be reliably distinguished in time T. Thus C can be estimated as [65]
$$ C = f \log_2 M_0, $$
where f is the frequency and $M_0$ is the number of states that can be distinguished. Since the number of distinct states is 2, C is equal to the maximal frequency.
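Plugging in the values used below — $M_0 = 2$ distinguishable states and one pulse slot per hour — this formula reproduces the quoted capacity:

$$ C = f \log_2 M_0 = \frac{1}{1\ \mathrm{h}} \times \log_2 2 = 1\ \mathrm{bit/h}. $$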
In Fig. 5 we show simulated nuclear NF-κB trajectories in response to series of four "true"-or-"false" TNFα pulses occurring at time intervals T of 45, 60 or 90 min. The simulations indicate that when T ≥ 60 min, in most cells NF-κB translocates in response to the "true" pulses and does not translocate in response to the "false" pulses. This shows the system is capable of transmitting $2^4 = 16$ different signal functions in 4 h, which by definition gives the estimate C = 1 bit/h. The system's ability to respond to pulses randomly placed in time reflects the lack of memory postulated by Zambrano et al. [40].
Transmission of information. NF-κB responses to 10-min 10 ng/ml TNFα pulses that occur (or not) at the beginnings of 4 consecutive time intervals of length T equal to 45 min (first column), 60 min (second column), or 90 min (third column). Each of the 16 sequences of 4 "true" or "false" TNFα pulses carries 4 bits of information. Simulations show that this information can be reliably transmitted for T ≥ 60 min, i.e., each "true" TNFα pulse is visible in the NF-κB nuclear fraction for almost all cells, while "false" pulses do not induce any response
The ability of the system to respond to high-frequency pulses follows from high rates of synthesis of inhibitors, molecular stripping, i.e., the process in which IκBα abruptly terminates transcription by actively removing NF-κB from gene promoters [66], and fast circulation of IκBα and NF-κB between cell compartments. The latter is enabled by importins and exportins. In Fig. 6 we show that explicit inclusion of importins allows the model to better reproduce responses to high-frequency TNFα pulses. As shown in Fig. 5, responding to the second TNFα pulse is critical for the system's ability to transmit high-frequency signals; therefore we analyse NF-κB and IκBα time profiles in response to two 10-min-long TNFα pulses at a 1-h interval. We compare three models: (1) a model without importins, in which free NF-κB translocates to the nucleus, (2) the proposed model with importins, (3) a more detailed model with importins, which includes an additional step of NF-κB:importin-α dissociation in the nucleus at the rate of 0.003/s. In Fig. 6a we show that the second peak of nuclear NF-κB is lowest in the model without importins, and highest in the more detailed model with importins. Importantly, it is also the detailed model where IκBα remains at the highest level during the second NF-κB translocation pulse.
Analysis of three alternative models. Model without explicit implementation of importins (blue), model considering NF-κB–importin binding (analysed throughout this paper - orange) and a more detailed model in which the process of importin dissociation from NF-κB after translocation is not immediate, but has a finite rate 0.003/s (green). A protocol with two 10-min pulses of TNFα (10 ng/ml) at 60-min interval is analysed. a Deterministic trajectories of nuclear NF-κB and total IκBα for each model. b, c Parameter analysis: height of the second NF-κB translocation peak (as a fraction of the first peak height - solid line) and IκBα second minimum (normalised to concentration before stimulation - dashed line) are plotted against the varying NF-κB import rate (b) and NF-κB–IκBα binding rate (c). Other parameters are set to default values
Next, we compare the behaviour of these three model variants when two rate parameters are varied: NF-κB import rate (Fig. 6b) and NF-κB and IκBα binding rate (Fig. 6c). The rates are varied ten-fold below and ten-fold above their default values. In the whole range, model (3) shows the highest level of IκBα (at the second minimum) and simultaneously the highest second-to-first peak amplitude ratio. Model (1) shows the lowest level of IκBα and simultaneously the lowest second-to-first peak amplitude ratio. In short, the more detailed the description, the greater the model's ability to produce significant NF-κB translocations with a relatively small decrease in the level of IκBα. Unsurprisingly, when the NF-κB translocation rate comes close to the assumed rate of NF-κB–importin binding (0.1/s), the dynamical role of this additional step becomes negligible and the differences between models (1) and (2) vanish.
Karyopherins play a crucial role in the regulation of nucleocytoplasmic localization of transcription factors (of molecular mass exceeding 40 kDa), however there is no unique scheme of regulation. Two other transcription factors critical in innate immune responses, IRF3 and STAT1, are regulated in a different way. In unstimulated cells, IRF3 is mainly cytoplasmic but, possessing both a NLS and a NES, circulates between the cytoplasm and the nucleus. After infection, phosphorylated and dimerized IRF3 is captured by the nuclear CBP/p300 proteins [67]. In contrast, STAT1 localizes to the nucleus after phosphorylation on Tyr701, which makes its NLS available for binding by importin-α5 [68]. STAT1 after dephosphorylation may return to the cytoplasm, with the help of exportin 1, which recognizes its amino acid sequence located within the DNA-binding domain [69]. Lack of immediate inhibitors regulating translocation to the cytoplasm suggests that spatial dynamics of IRF3 and STAT1 is not pulsatile as in the case of NF-κB. These three transcription factors govern antiviral responses; NF-κB and IRF3 co-regulate transcription of IFNβ (IFNB1) [70] which via paracrine interactions triggers activation of STAT1 [71] and drives the cell into an antiviral state [72]. Deciphering complex interactions of innate immune responses would thus require specific mechanistic models of regulation of these three transcription factors.
The NF-κB pathway is one of the best-resolved regulatory systems of innate immunity. Since the seminal work by Hoffmann et al. [30] identifying the IκBα-mediated negative feedback loop as responsible for oscillations, the NF-κB system has been intensively modelled. We demonstrated [31] that the IκBα feedback loop functions only in the presence of A20, which mediates another negative feedback and attenuates IKK activity, protecting IκBα from rapid phosphorylation and degradation. These two loops control the spatiotemporal activity of NF-κB, allowing its nucleocytoplasmic circulation observed at the single-cell level even at constant TNFα stimulation [47]. As found by Nelson et al. [47], oscillations of NF-κB are indispensable for activation of NF-κB-responsive genes. Unsurprisingly, pulsed TNFα stimulation leading to more pronounced pulses of NF-κB has received a lot of attention [34, 40, 63]. Kellogg and Tay [63] found that when the TNFα pulsing frequency is close to the intrinsic frequency of NF-κB oscillations, 1/(90 min), or twice lower, NF-κB oscillations are well pronounced, which increases NF-κB transcriptional efficiency. Zambrano et al. [40] found that high-amplitude TNFα oscillations can induce recurrent NF-κB translocations even when the frequency of TNFα pulsing is twice higher than the intrinsic frequency of NF-κB oscillations. The authors proposed that the system has no memory, and oscillations are to segment time and provide "renewing opportunity windows for decision".
Based on simultaneous measurements of IκBα and NF-κB levels in single cells, we found that NF-κB can translocate to the nucleus even when its immediate inhibitor, IκBα, is present in excess. To explain this observation, we propose that the NF-κB molecule released from the NF-κB:IκBα complex due to IκBα degradation can be rapidly bound by importin-α3/α4, which both protects it from binding by the remaining cytoplasmic IκBα molecules, and targets it for nuclear import. Only after remaining or newly synthesized IκBα translocates to the nucleus, can it bind NF-κB and exportin 1, which directs the complex back to the cytoplasm. By explicitly including importin-α and exportin 1 binding we pursued modelling of the NF-κB system towards a more mechanistic description. This allowed us to explain the observed nuclear translocations of NF-κB despite high IκBα levels, as well as the NF-κB translocation pulses observed by Zambrano et al. [40] in response to short-period TNFα stimulation. The model predicts that during these short NF-κB pulses IκBα is only partially degraded.
Absolute quantification of IκBα and A20 mRNA time profiles imposed constraints on kinetic model parameters. Based on the fitted parameters we conclude that IκBα/NF-κB feedback is optimized for fast response/fast inhibition. In response to 10 ng/ml TNFα, IκBα is almost fully degraded within 10–15 min, and restored at 60 min of TNFα stimulation to levels somewhat higher than initial. This can be achieved because IκBα and A20 transcription and translation rates are close to physiological maxima. Rapid synthesis of A20 is also important because it ensures attenuation of IKK activity, which in turn allows for accumulation of IκBα. Properties of the IκBα-controlled feedback loop, analysed recently by Fagerlund et al. [11], allow for rapid activation of NF-κB-responsive genes and nearly perfect adaptation to the signal at 60 min after TNFα stimulation. After 60 min of TNFα stimulation the A20/IκBα/NF-κB system returns to the proximity of the initial state, with somewhat increased levels of A20 and IκBα. However, as demonstrated here, in case of subsequent or continued TNFα stimulation, increased IκBα level does not preclude further NF-κB activation.
The importin/exportin system helps to reduce the pathway resetting time τ to about 1 h and thus makes it possible to reach the transmission frequency f = 1/h, which gives an estimate of the information channel capacity C = 1 bit/h. Because the IκBα transcription and translation rates are close to their physiological maxima, this short resetting time probably verges on the minimum for systems based on degradation and resynthesis of the inhibitor protein. The combination of all-or-nothing type responses with a fast resetting time suggests that the NF-κB system is optimized to digitize TNFα or LPS signals and convert them into pulses of transcription of NF-κB-responsive genes [33, 40].
In summary, based on single-cell and population-wise quantification of mRNA and protein expression levels of IκBα and A20, as well as NF-κB translocation, we pursued NF-κB system modelling by including IκBα and NF-κB interactions with karyopherins. This novel model enables us to explain nuclear NF-κB translocations arising even when the level of IκBα is increased with respect to the level of unstimulated cells, as well as NF-κB translocations in response to high-frequency TNFα pulsing. By means of stochastic simulations we demonstrate that NF-κB pulses can be employed to transmit information at the rate of about 1 bit/h.
Cell lines and compounds
Experiments were performed on wild-type (WT) mouse embryonic fibroblasts (MEFs). The cells were cultured in Dulbecco's Modified Eagle Medium (DMEM) with 4.5 g/l of D-glucose and 0.1 mM L-glutamine (Thermo Fisher Scientific), supplemented with 10 % fetal bovine serum (Thermo Fisher Scientific) and 100 mg/ml penicillin/streptomycin mix (Sigma-Aldrich). Cells were grown and maintained in a conditioned incubator at 37 °C, 5 % CO2. For stimulation, cells were seeded on dishes, multi-well plates or coverslips, depending on the type of experiment, and allowed to adhere overnight at 37 °C. All cell lines were routinely tested against mycoplasma contamination by DAPI staining and PCR. Lipopolysaccharide (LPS) from Escherichia coli 0111:B4 (purified by ion-exchange chromatography) and mouse recombinant Tumor Necrosis Factor alpha (TNFα) were purchased from Sigma-Aldrich. In order to disrupt micelles, LPS was solubilised in a bath sonicator for 15 min and vortexed vigorously for an additional 1 min prior to making further dilutions and adding to cells. Cycloheximide (Sigma-Aldrich) was administered to cells at a final concentration of 5 μg/ml 60 min before LPS.
Immunostaining
Cells were seeded on 12-mm-diameter round glass coverslips. Seeding density was 50,000 cells/coverslip. After stimulation, cells on coverslips were washed with PBS and immediately fixed with 4 % formaldehyde (20 min, room temperature). Cells were then washed thoroughly and incubated for 10 min with 50 mM NH4Cl in order to block reactive aldehyde groups left after fixation. Cell membranes were permeabilized with 0.1 % Triton X-100 (Sigma-Aldrich) for 5 min, washed again and blocked with 5 % BSA/PBS. Antibodies detecting target proteins, anti-p65 (D14E12, Cell Signalling Technologies) or anti-IκBα (L35A5, Cell Signalling Technologies), were then added to the cells in 5 % BSA/PBS and incubated for 1.5 h. After washing with PBS, appropriate secondary antibodies conjugated with fluorescent dyes (Alexa 488/Alexa 555) were added and incubated for another 1.5 h. Subsequently, cells were washed and their nuclei were stained for 10 min with 200 ng/ml DAPI (Sigma-Aldrich). Coverslips were mounted on microscope slides with a drop of Mowiol (Sigma-Aldrich) and observed using a Leica TCS SP5 X confocal microscope with Leica Application Suite AF software.
Microscopic image analysis
Confocal images obtained from the immunostaining were segmented using our in-house software (MEFTrack). Automatically detected nuclear contours (based on DAPI nuclear staining) were corrected manually or excluded when the corresponding nuclei turned out to be borderline or unfit (mitotic, overlapping or otherwise misshapen). For a limited number of cells (see Additional file 4 and Additional file 5) cell contours were marked manually. Analysis of these cells was used for producing Fig. 4a (dots) and Additional file 3: Figure S7 (dots). Fluorescence of each region enclosed by those nuclear or cellular contours was calculated as a sum of intensities of its pixels. The fluorescences of all regions were subjected to analysis with auxiliary Matlab scripts, which eventually provided estimates of the magnitude of nuclear translocation and protein abundance. A correction for background noise was applied to fluorescence intensities in all contour-enclosed regions, in all relevant channels (green for NF-κB, red for IκBα, and DAPI for nuclear staining). In a given channel, for a given compartment, the background-corrected fluorescence is denoted by $I_{\mathrm{compartment}}^{\mathrm{channel}}$, where the superscript denotes the channel (NF-κB, IκBα or DAPI) and the subscript denotes the compartment (c – cytoplasm, n – nucleus, cell – whole cell). Bold $\boldsymbol{I}$ denotes the value averaged over all compartments of a given type within a frame.
Quantification of NF-κB nuclear fraction
For single-cell analysis (Fig. 4a and Additional file 3: Figure S7 – dots), in the case when both nuclear and cell contours are determined, the nuclear NF-κB fraction in the i-th cell is calculated as
$$ \mathrm{raw\_NF}\text{-}\kappa\mathrm{B}_{\mathrm{nuc}}/\mathrm{NF}\text{-}\kappa\mathrm{B}_{\mathrm{total}}(i) = \frac{I_{\mathrm{n}_i}^{\mathrm{NF}\kappa\mathrm{B}}}{I_{\mathrm{cell}_i}^{\mathrm{NF}\kappa\mathrm{B}}}\,. \tag{1} $$
The "raw" values are further corrected for a residual fluorescence as described at the end of this subsection.
For single-cell analysis (Fig. 2b – histogram), in the case when only nuclear contours are determined, the nuclear NF-κB fraction in the i-th cell is calculated as
$$ \mathrm{raw\_NF}\text{-}\kappa\mathrm{B}_{\mathrm{nuc}}/\mathrm{NF}\text{-}\kappa\mathrm{B}_{\mathrm{total}}(i) = \frac{I_{\mathrm{n}_i}^{\mathrm{NF}\kappa\mathrm{B}}}{\boldsymbol{I}_{\mathrm{cell}}^{\mathrm{NF}\kappa\mathrm{B}}}\,\frac{\boldsymbol{I}_{\mathrm{cell}}^{\mathrm{DAPI}}}{I_{\mathrm{cell}_i}^{\mathrm{DAPI}}}\,, \tag{2} $$
where the average cell fluorescences in the NF-κB and DAPI channels, \( \boldsymbol{I}_{\mathrm{cell}}^{\mathrm{NF}\kappa\mathrm{B}} \) and \( \boldsymbol{I}_{\mathrm{cell}}^{\mathrm{DAPI}} \), are calculated by dividing the total frame fluorescence (with background correction) in the respective channel by the number of cells in the analyzed frame. The applied normalization using DAPI staining corrects possible errors resulting from out-of-focus cell displacements (the intensity of displaced cells registers weakly in both the NF-κB and DAPI channels).
For frame-wise average analysis (Figs. 2a and 4a, c – diamonds), the nuclear NF-κB fraction is calculated as
$$ \mathrm{raw\_NF}\text{-}\kappa\mathrm{B}_{\mathrm{nuc}}/\mathrm{NF}\text{-}\kappa\mathrm{B}_{\mathrm{total}}(\mathrm{frame}) = \frac{\boldsymbol{I}_{\mathrm{n}}^{\mathrm{NF}\kappa\mathrm{B}}}{\boldsymbol{I}_{\mathrm{cell}}^{\mathrm{NF}\kappa\mathrm{B}}}\,, \tag{3} $$
where the average nuclear fluorescence in the NF-κB channel, \( \boldsymbol{I}_{\mathrm{n}}^{\mathrm{NF}\kappa\mathrm{B}} \), is calculated by dividing the sum of nuclear fluorescences in the NF-κB channel by the number of cells in the analyzed frame.
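For concreteness, the three raw estimators can be expressed as a short script. The sketch below assumes that background-corrected per-cell fluorescences have already been extracted (the original analysis used auxiliary Matlab scripts; the Python names here are illustrative, not those of our code).

```python
# Minimal sketch of the raw estimators, Eqs. (1)-(3); assumes
# background-corrected fluorescences are available as NumPy arrays.
import numpy as np

def raw_fraction_cell(I_n_i, I_cell_i):
    """Eq. (1): nuclear NF-kB fraction when both nuclear and cell
    contours of cell i are known."""
    return I_n_i / I_cell_i

def raw_fraction_nuclei_only(I_n_i, I_dapi_i, frame_nfkb, frame_dapi, n_cells):
    """Eq. (2): only nuclear contours known; frame_nfkb and frame_dapi
    are background-corrected total frame fluorescences."""
    avg_cell_nfkb = frame_nfkb / n_cells  # bold I_cell^NFkB
    avg_cell_dapi = frame_dapi / n_cells  # bold I_cell^DAPI
    return (I_n_i / avg_cell_nfkb) * (avg_cell_dapi / I_dapi_i)

def raw_fraction_frame(nuclear_nfkb_sums, frame_nfkb):
    """Eq. (3): frame-wise average; the division by the number of cells
    cancels, leaving summed nuclear over total frame fluorescence."""
    return np.sum(nuclear_nfkb_sums) / frame_nfkb
```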
Correction for the residual fluorescence
Our Western blot data show almost no nuclear NF-κB in untreated cells, but the lowest ratio of NF-κBnuc to NF-κBtotal fluorescence that we observed in untreated cells is about δ = 0.1. We assume that this effect is due to the presence of cytoplasmic NF-κB above and below the nucleus, and that this residual fluorescence registers as nuclear. Thus we correct the "raw" nuclear NF-κB fractions given in Eqs. (1), (2) and (3) for this spurious contribution following the generic formula:
$$ \mathrm{NF}\text{-}\kappa\mathrm{B}_{\mathrm{nuc}}/\mathrm{NF}\text{-}\kappa\mathrm{B}_{\mathrm{total}} = \frac{\mathrm{raw\_NF}\text{-}\kappa\mathrm{B}_{\mathrm{nuc}}/\mathrm{NF}\text{-}\kappa\mathrm{B}_{\mathrm{total}} - \delta}{1 - \delta}\,. $$
The corrected values are used to produce the figures indicated for cases (1), (2) and (3) above. The raw values are given in Additional file 4 and Additional file 5.
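As a minimal sketch, the correction is a single affine map, with δ = 0.1 as estimated from untreated cells:

```python
# Residual-fluorescence correction of the raw fractions from Eqs. (1)-(3).
def correct_residual(raw_fraction, delta=0.1):
    """Remove the spurious 'nuclear' signal contributed by cytoplasmic
    NF-kB lying above and below the nucleus."""
    return (raw_fraction - delta) / (1.0 - delta)

print(correct_residual(0.1))  # an untreated-cell raw fraction maps to 0.0
```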
Quantification of frame-average nuclear and cytoplasmic abundances of NF-κB and IκBα
The average nuclear fluorescences in the NF-κB and IκBα channels, \( \boldsymbol{I}_{\mathrm{n}}^{\mathrm{NF}\kappa\mathrm{B}} \) and \( \boldsymbol{I}_{\mathrm{n}}^{\mathrm{I}\kappa\mathrm{B}\alpha} \), are calculated by dividing the sum of nuclear fluorescences in the respective channel by the number of cells in the analyzed frame. The average cell fluorescences in the NF-κB and IκBα channels, \( \boldsymbol{I}_{\mathrm{cell}}^{\mathrm{NF}\kappa\mathrm{B}} \) and \( \boldsymbol{I}_{\mathrm{cell}}^{\mathrm{I}\kappa\mathrm{B}\alpha} \), are calculated by dividing the total frame fluorescence (with background correction) in the respective channel by the number of cells. Finally, the average cytoplasmic fluorescences in the NF-κB and IκBα channels, \( \boldsymbol{I}_{\mathrm{c}}^{\mathrm{NF}\kappa\mathrm{B}} \) and \( \boldsymbol{I}_{\mathrm{c}}^{\mathrm{I}\kappa\mathrm{B}\alpha} \), are estimated as
$$ \boldsymbol{I}_{\mathrm{c}}^{\mathrm{NF}\kappa\mathrm{B}} = \boldsymbol{I}_{\mathrm{cell}}^{\mathrm{NF}\kappa\mathrm{B}} - \boldsymbol{I}_{\mathrm{n}}^{\mathrm{NF}\kappa\mathrm{B}}, \qquad \boldsymbol{I}_{\mathrm{c}}^{\mathrm{I}\kappa\mathrm{B}\alpha} = \boldsymbol{I}_{\mathrm{cell}}^{\mathrm{I}\kappa\mathrm{B}\alpha} - \boldsymbol{I}_{\mathrm{n}}^{\mathrm{I}\kappa\mathrm{B}\alpha}. $$
Based on these frame-average values we generated the following figures:
Figs. 2a and 4a, b, c and Additional file 3: Figure S7. Here \( \mathrm{I}\kappa\mathrm{B}\alpha_{\mathrm{total}}/\mathrm{NF}\text{-}\kappa\mathrm{B}_{\mathrm{total}} = \boldsymbol{I}_{\mathrm{cell}}^{\mathrm{I}\kappa\mathrm{B}\alpha} \big/ \boldsymbol{I}_{\mathrm{cell}}^{\mathrm{NF}\kappa\mathrm{B}} \). Since the ratio of fluorescences in the NF-κB and IκBα channels depends on the laser intensities and on the specific fluorescence of the antibodies, its absolute value cannot be determined. Therefore, the values in these plots are normalized such that IκBαtotal/NF-κBtotal averaged over all frames equals 1 for unstimulated cells.
Fig. 2d: Nuclear NF-κB is \( \boldsymbol{I}_{\mathrm{n}}^{\mathrm{NF}\kappa\mathrm{B}} \) averaged over the analysed frames, normalized (like the WB data) such that it assumes the value of 1 at its maximum, i.e., at 15 min of TNFα stimulation. Cytoplasmic NF-κB is \( \boldsymbol{I}_{\mathrm{c}}^{\mathrm{NF}\kappa\mathrm{B}} \) averaged over the analysed frames, normalized (like the WB data) such that it assumes the value of 1 for unstimulated cells. Cytoplasmic IκBα is \( \boldsymbol{I}_{\mathrm{c}}^{\mathrm{I}\kappa\mathrm{B}\alpha} \) averaged over the analysed frames, normalized such that it assumes the value of 1 at 60 min after TNFα stimulation (i.e., at the maximum according to the WB data). Note that the normalization for LPS stimulation is applied jointly with that for TNFα stimulation.
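A sketch of the frame-average abundance estimates and the ratio normalization described above; the inputs are illustrative NumPy arrays of background-corrected, frame-level values, and the function names are ours for illustration only.

```python
import numpy as np

def frame_abundances(frame_total, nuclear_sum, n_cells):
    """Return bold I_cell, bold I_n and bold I_c = I_cell - I_n
    for one channel of one frame."""
    avg_cell = frame_total / n_cells
    avg_nuc = nuclear_sum / n_cells
    return avg_cell, avg_nuc, avg_cell - avg_nuc

def normalized_ikba_to_nfkb(ikba_cell, nfkb_cell, unstimulated_mask):
    """Per-frame IkBa_total/NF-kB_total, scaled so that its average
    over unstimulated frames equals 1."""
    ratio = ikba_cell / nfkb_cell
    return ratio / ratio[unstimulated_mask].mean()
```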
Cell fractionation
Cells were seeded on 100 mm tissue-culture-treated dishes at a density of 1,000,000 cells/dish and incubated overnight. After stimulation, cells were placed on ice, washed with ice-cold PBS, scraped from the dish in PBS and centrifuged (4 °C, 100 × g, 5 min). The cell pellet was then suspended in 1.5 ml of hypotonic cytoplasmic fraction buffer (20 mM HEPES pH 8.0, 0.2 % IGEPAL CA-630, 1 mM EDTA, 1 mM DTT, protease and phosphatase inhibitor cocktail, as above) and incubated on ice for 10 min with occasional shaking. After centrifugation (4 °C, 1700 × g, 5 min), the supernatant was set aside and treated as the cytoplasmic fraction; the pellet was washed in the same buffer and recentrifuged, and the supernatant was discarded. The remaining pellet was suspended in 150 μl of nuclear fraction buffer (20 mM HEPES pH 8.0, 420 mM NaCl, 20 % glycerol, 1 mM EDTA, 1 mM DTT, protease and phosphatase inhibitors, as above), incubated on ice for 30 min with occasional mixing and then centrifuged at 4 °C, 10,000 × g, 10 min. The supernatant containing the nuclear fraction was transferred to a fresh tube and kept for further processing.
SDS-PAGE and Western blot
Cell lysate was used to determine protein concentration by the Bradford method against a BSA standard. Cell lysate was precipitated by adding trichloroacetic acid (TCA) to a final concentration of 10 % and keeping it on ice for 30 min. After centrifugation (4 °C, 12,000 × g, 10 min), the protein pellet was washed by adding cold acetone, vortexing and re-centrifuging. Finally, proteins were resuspended in standard Laemmli sample buffer containing 10 mM DTT and boiled at 95 °C for 10 min. Equal amounts of each protein sample were loaded onto a 10 % polyacrylamide gel and SDS-PAGE was performed with the Mini-PROTEAN Tetra System (Bio-Rad). Proteins were transferred to a nitrocellulose membrane by wet electrotransfer in the Mini-PROTEAN apparatus, according to the modified Towbin method (400 mA, 50 min). The membrane was rinsed with TBST (TBS buffer containing 0.1 % Tween-20) and blocked for 1 h with 5 % BSA/TBS or 5 % non-fat dry milk. Membranes were incubated at 4 °C overnight with one of the primary antibodies. The following antibodies were used: anti-p65 D14E12 (CST), anti-IκBα L35A5 (CST), anti-A20 D13H3 (CST), anti-GAPDH (EMD Millipore) and anti-HDAC-1 (Santa Cruz Biotechnology). After washing with TBST, membranes were incubated with secondary antibodies conjugated with horseradish peroxidase (goat anti-rabbit and anti-mouse immunoglobulins/HRP, Dako) for 1 h at room temperature. After washing, the chemiluminescent reaction was developed with the Clarity Western ECL system (Bio-Rad). Specific proteins were detected in a darkroom on medical X-ray film. After scanning the western blots, densitometric quantification of protein bands was performed in ImageJ, with normalization against the indicated reference proteins (GAPDH or HDAC-1).
Gene expression analysis
RNA isolation and reverse transcription
Cells were seeded on 12-well plates at a density of 100,000 cells/well. Upon stimulation, cells were washed once with PBS and subjected to total RNA isolation using the PureLink RNA Mini Kit (Thermo Fisher Scientific), following the manufacturer's instructions. The concentration and quality of isolated RNA were determined by measuring the UV absorbance of diluted samples at 260 and 280 nm using a Multiskan GO Microplate Spectrophotometer (Thermo Fisher Scientific). If not used immediately, RNA was stored at –80 °C. Reverse transcription with random primers was performed from about 2 μg of template RNA using the High Capacity cDNA Reverse Transcription Kit (Thermo Fisher Scientific). The reaction was performed in a Mastercycler Gradient thermal cycler (Eppendorf) under the following conditions: 10 min at 25 °C, 120 min at 37 °C, and 5 min at 85 °C.
Real-Time Polymerase Chain Reaction (RT-PCR)
RT-PCR was performed on a QuantStudio 12K Flex Real-Time PCR system with an Array Card block (Thermo Fisher Scientific). Reverse-transcribed cDNA (1000 ng) was mixed with reaction Master Mix and loaded onto a TaqMan Array Card containing probes and primers, including endogenous reference controls. The reaction was conducted using the QuantStudio "Standard" protocol with FAM/ROX chemistry. Upon completion, the expression of target genes was analysed using the comparative ΔCT method with QuantStudio 12K Flex software, normalized against GAPDH gene expression. TaqMan assays Mm00477798_m1 and Mm00437121_m1 were used for analysing the expression of the IκBα and A20 genes, respectively.
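For reference, a minimal sketch of the comparative ΔCT calculation performed by the QuantStudio software; the CT values in the example are illustrative, not measured.

```python
# Comparative (delta-delta) CT: target gene expression normalized to
# GAPDH and expressed relative to unstimulated control cells.
def ddct_fold_change(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    dct_sample = ct_target - ct_gapdh            # normalization to GAPDH
    dct_control = ct_target_ctrl - ct_gapdh_ctrl
    return 2.0 ** -(dct_sample - dct_control)

# e.g. a target CT of 24.1 (GAPDH 18.0) after stimulation vs 27.3
# (GAPDH 18.1) before stimulation gives roughly an 8.6-fold induction.
print(ddct_fold_change(24.1, 18.0, 27.3, 18.1))
```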
Digital PCR (dPCR)
Digital PCR measurements for the IκBα and A20 genes were performed using the QuantStudio 3D system (Life Technologies). Samples loaded onto QuantStudio 3D Digital PCR Chips were thermocycled using the ProFlex PCR System (Thermo Fisher Scientific) according to the manufacturer's instructions. Chips were analysed using the QuantStudio 3D Digital PCR Instrument and ANALYSIS SUITE cloud software. In the case of TNFα stimulation, the dPCR measurements were used to calculate absolute numbers of IκBα and A20 mRNA/cell. In the case of LPS stimulation, the dPCR measurements were used to rescale RT-PCR data to absolute numbers of mRNA/cell.
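A sketch of the rescaling step, assuming a single dPCR-measured anchor time point; all numbers below are illustrative, not measured values.

```python
def rescale_to_absolute(fold_changes, dpcr_copies_per_cell, anchor_index):
    """Scale a relative RT-PCR time course so that its value at
    anchor_index equals the absolute dPCR measurement (mRNA/cell)."""
    scale = dpcr_copies_per_cell / fold_changes[anchor_index]
    return [fc * scale for fc in fold_changes]

# e.g. a relative time course anchored by dPCR at its peak (index 2)
print(rescale_to_absolute([1.0, 4.2, 9.5, 6.1, 3.0], 380.0, anchor_index=2))
```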
Reviewers' comments
We thank the Reviewers for their valuable comments, which helped us to improve the manuscript. These comments allowed us to view our study from a different angle, and for this reason we modified (and shortened) the title, which now emphasizes the result of the analysis of NF-κB information channel capacity shown in Fig. 5. Additionally, we reformulated the image analysis section in Methods, hoping to improve its clarity. Below, we include our responses, which also indicate how the manuscript has been modified to address the Reviewers' concerns. We hope that the revised manuscript is now suitable for publication in Biology Direct.
Reviewer's report 1
Marek Kimmel, Rice University, Houston, TX, USA
Reviewer's summary: This is an interesting paper, contributing to the understanding of the mechanistic details of the active and passive transport between cytoplasm and nucleus using an important example of NF-κB. The message of the paper is very well documented, both experimentally and by simulations. I have three remarks, or rather discussion items, which in my opinion are worthy of clarification.
Reviewer's recommendations to authors
Naively, one obvious thing to do is to compare the functioning of the system with importins and exportins to that of the system without these molecules. It seems easy computationally, but more difficult experimentally. It would be good to know the authors' opinion on this and maybe to see some simulations carried out.
Authors' response: Knocking out or silencing importins presents an experimental difficulty, since it would disturb the functioning of the entire cell. However, in the case of the NF-κB family, interactions with importins are well established thanks to experiments in which the NLS sequences in p65 and other NF-κB proteins were mutated, see [19, 20]. The Wolynes group [73] and others [13, 14] have also shown that IκBα masks these NLSs. Therefore the question we consider is not whether importins are indispensable for NF-κB regulation, but rather whether explicit modelling of importin–NF-κB interactions adds to the understanding of the kinetics of the NF-κB pathway. We address this question in the response to Reviewer 2, by comparing models in which the importin binding step is modelled explicitly or lumped with the NF-κB nuclear import step.
In several places, the possibility of the cell optimizing this or that is mentioned. There are two caveats to such hypotheses. One is that most real-life systems (not only biological) are clearly suboptimal, but linger for long periods regardless (trilobites and human genome, being ad hoc examples). Why should cells be different? Second, optimization of two different processes may be contradictory (consider cancer cells dividing slower than normal cells). The NF-κB system is a multifunctional hub, so how to optimize such an object? Authors' insights are welcome.
Authors' response: We agree that there are many systems that appear to be far from optimal. However, since the NF-κB pathway has been evolutionarily conserved from Drosophila to mammals (Ghosh et al., 1998, Annu. Rev. Immunol. 16:225–260) and is in the first line of defence against pathogens, we expect that it is optimized for fast responses. Additionally, since excessive inflammation can be harmful, we hypothesize that it is also designed for fast shut-off.
I like the "digitized" pulse-series experiment. However, what I would expect is that if a "1" is succeeded by a "0", there should be some low-level transients observed, while there is a complete absence of signal. Might you explain how it is possible? Is this exactly what is also expected in an experiment (no such experiment has been performed, correct?)?
Authors' response: The model predicts that pulses lasting 15 min or less produce a single pulse of nuclear NF-κB with no tail (see Additional file 3: Figure S4) and this prediction is in agreement with experimental data at the population level [30, 64]. See also Additional file 3: Figure S5 and Ashall et al. [34] for single cell analysis of responses to repeated 5 min TNFα pulses, also showing no tail in NF-κB activity.
DETAILS: Background: Please explain whether exportins help export mRNA and proteins while importins only help import proteins, or whether the distinction is more complex.
Authors' response: Importins and exportins, together termed karyopherins, are involved in the classical pathway of nuclear transport of proteins. On top of that, pre-microRNAs and tRNAs are also exported by members of the exportin family. In contrast, mRNAs do not use karyopherins but are instead exported by a heterodimer of the NXF1 and NXT1 proteins [4]. The distinctions amongst karyopherins, importins and exportins are now explained better in the Background section.
Results: "NF-κB is transported back to the cytoplasm complexed with IκBα, which passes through nuclear pores after association with exportin 1". Do you mean that exportin pulls IkBa, which in turn pulls NF-kB out of the nucleus?
Authors' response: IκBα binds NF-κB. After exportin 1 binds the NES of IκBα, it enables free IκBα and the IκBα:NF-κB complex to cross the nuclear pore. This is now clarified in the main text.
Is the dynamics of exporting such a large complex different from, say, exporting pure IκBα?
Authors' response: IκBα, having a mass of 32 kDa, is at least partially independent of importins and exportins, i.e., it can cross nuclear pores alone; see Fagerlund et al. [11]. Since IκBα is mostly cytoplasmic, we assume in the model that IκBα translocates to the nucleus independently of importins but uses exportins to translocate out of the nucleus.
"For freely diffusing molecules the ratio of nuclear export to nuclear import …" Do you mean the ratio of rates? The reasoning outlined in this paragraph is pivotal for the paper, so it should be carefully phrased.
Authors' response: Yes, we mean the ratio of rates; we have now clarified this in the revised text.
Responses to Reviewer 2
James Faeder, University of Pittsburgh, Pittsburgh, PA, USA
Reviewer's summary: This paper presents an extended computational model of NF-κB signaling that specifically considers the interactions of NF-κB and IκBα with the importin and exportin proteins that mediate nuclear import and export, respectively. Experimental data from fibroblasts, in the form of Western blots and fixed-cell immunofluorescence staining, show that at times between about 60 and 90 min the apparent concentration of IκBα in the cytoplasm exceeds that of NF-κB, and yet NF-κB is still able to translocate in substantial amounts to the nucleus. It is claimed that this phenomenon can only be accurately described by the extended model that includes NF-κB interaction with importin. It is further argued that the ability of NF-κB to translocate to the nucleus even when the cytoplasmic concentration of IκBα exceeds that of NF-κB allows the system to reset more quickly following pulsatile stimulation than would otherwise be the case, which thus increases the maximum possible rate of information transmission, estimated to be 1 bit (based on the ability to detect only the presence or absence of TNF) times one over the reset time, which is estimated to be 60 min based on the simulations shown in Fig. 5. Overall, I see this work as a significant contribution to the ongoing effort to model and understand the mechanisms that influence NF-κB dynamics. I have some reservations about the claimed importance and novelty of the mechanisms being considered here, which I would like the authors to address prior to publication. In particular, the authors claim but do not demonstrate that the proposed model uniquely captures a key finding in their experimental data, which is that NF-κB translocation can continue even when the level of IκBα exceeds that of NF-κB. I would like to see a more conclusive demonstration that the nuclear import mechanism the authors have explicitly added to their model is required to capture this effect.
The main issue I have with this paper is that it is not clear from the presentation in the paper that the explicit consideration of karyopherin (importin/exportin) mediated transport actually results in a novel mechanism. It seems like any model that explicitly considers free cytoplasmic NF-κB will include a competition between binding to free IκBα and nuclear transport, although the parameters governing that competition could be different from those proposed in the current model. On p. 4 it is stated, however, that "although previous models assumed that free NF-κB may translocate to the nucleus, importin alpha binding precluding sequestration by IκBα was not explicitly considered. As a result, in these models efficient NF-κB translocation was possible only after the IκBα level dropped below that of NF-κB." So the authors are claiming that the rate in these models was set so low that translocation could not compete with IκBα rebinding until IκBα levels fell below those of cytoplasmic NF-κB, but no references are given, so it is hard to tell which models the authors are referring to. I checked one model that I'm familiar with, that of Lee et al. (2014), and this didn't seem to be the case. Also, the model already has the competition mechanism built in, and it's just a matter of varying the import rate relative to the binding rate to capture the mechanism that is described here, so although it's useful to identify a mechanistic basis for this competition, it is not clear that the present model uniquely captures this mechanism. I think that to demonstrate that this competition parameter is indeed important for providing a correct description of the NF-κB/IκBα dynamics, the authors should show how varying the import rate affects the overall dynamics and also demonstrate how previous models do not capture this effect correctly.
Authors' response: The role of importins and exportins in NF-κB regulation is well documented. The question is whether explicit implementation of the NF-κB–importin binding step is important in modelling. We think it is, and it can be explained as follows. In the model without importins, in the presence of free IκBα, released NF-κB may translocate to the nucleus if the expected entry time is shorter than (or at least comparable to) the expected time of binding to IκBα. The Reviewer is right that this can be assured in the model by assuming sufficiently fast nuclear translocation of NF-κB. However, in reality the NF-κB translocation time is controlled by the size of the cell and its nucleus, the diffusion coefficient, binding to importins, and the translocation through nuclear pores. It is therefore possible that by imposing constraints on the NF-κB translocation time (in order to make it shorter than the NF-κB–IκBα binding time) we would obtain a wrong picture of the spatial regulation of the system. This has important implications for the more detailed reaction–diffusion models that have been emerging in recent years for NF-κB and other regulatory systems (see Terry & Chaplain [42] and Sturrock et al. [41]).
In the model with importins, NF-κB may enter the nucleus despite elevated levels of IκBα provided that the expected NF-κB–importin binding time is shorter than the expected time of binding to IκBα. We expect that this condition holds as long as the concentration of free cytoplasmic IκBα remains smaller than that of importins.
Therefore, we think that explicitly accounting for NF-κB–importin interactions is important in order to bring the modelling to a more precise mechanistic description. This does not mean that models that lump together various reactions may not serve as a reasonable description. We now present our results in a more modest way.
We also supplement the Results section with a new figure (Fig. 6), in which we compare our model with a variant that lumps together the processes of NF-κB–importin binding and NF-κB nuclear translocation. The analysis is performed in correspondence with the pulsed stimulation considered in Fig. 5 (in response to a suggestion by Reviewer 3, we introduce this figure in the Results section). Fig. 5 shows that the system can transmit information about TNFα pulses as long as their frequency is not larger than 1/h. From the stochastic time profiles shown in Fig. 5 one can see that the response to the second TNFα pulse is critical, i.e., the amplitude of the response to the second pulse is the lowest. Therefore, in Fig. 6 we compare the two model variants by analysing the ratio of the second to the first peak amplitude and the difference between the levels of IκBα at its second minimum and at t = 0. The comparison is done as a function of the IκBα–NF-κB binding rate and the NF-κB nuclear import coefficient. Generally, in the novel model the NF-κB translocation at the second peak is higher and is accompanied by a smaller decrease in the level of IκBα. The difference between the two models is pronounced for a small NF-κB import coefficient and a high IκBα–NF-κB binding rate, and, as expected by the Reviewer, it vanishes when the NF-κB import coefficient is large.
Additionally, we consider a more detailed model in which the three processes of NF-κB–importin binding, complex translocation to the nucleus and importin dissociation are considered separately. As a reminder, in the original model we lumped the processes of NF-κB nuclear translocation and importin dissociation in the nucleus. We demonstrate that this more detailed description enhances the effect of importins. Nonetheless, one should keep in mind that this is also a simplified picture, as in reality NF-κB is first bound by importin-α3 or -α4, which are in turn bound by importin-β; the ternary complex diffuses into the vicinity of the nucleus and passes through nuclear pores. Next, in the nucleus, importin-β dissociates in response to RanGTP binding, and only then may importin-α dissociate from NF-κB. Since the affinity between importin-α and the NLS is high (typically 10 nM), this process must also be somehow induced [4].
Regarding the values that the coefficients mentioned above take in existing NF-κB models: in Lee et al. [38] the authors assume an IκBα–NF-κB binding rate equal to 0.5 (μM s)^-1 and an NF-κB nuclear import rate equal to 0.0026 s^-1. Assuming an NF-κB concentration equal to 0.1 μM (Lee et al. scan the range 0.04–0.4 μM; the value 0.1 μM, corresponding to roughly 10^5 molecules per cell, was estimated by Carlotti et al. [53, 54]), and assuming that the IκBα concentration exceeds that of NF-κB by 50 % (i.e., that there is 0.05 μM of free IκBα), we obtain a pseudo-first-order NF-κB binding rate equal to 0.025 s^-1 (versus an NF-κB nuclear import rate of 0.0026 s^-1). This implies that a released NF-κB molecule has about a 10 times higher chance of binding a free IκBα molecule than of translocating to the nucleus. In our earlier model [33] the NF-κB nuclear import rate was equal to 0.01 s^-1, while the IκBα–NF-κB binding rate was equal to 5 × 10^-7 s^-1. These values mean that NF-κB nuclear translocation is 2.5 times less probable than IκBα binding. A substantially different import coefficient is assumed/fitted in the models developed by Levchenko and Hoffmann (see Werner et al. [64] and Werner et al., 2005, Science 309:1857–61). In these models the NF-κB import rate is 0.09 s^-1, while the IκBα–NF-κB binding rate is also equal to 0.5 (μM s)^-1. Therefore, in these models NF-κB translocation outcompetes IκBα binding. We expect, however, that the NF-κB import rate is of the order of 0.01 s^-1 (or smaller) rather than of the order of 0.1 s^-1 (which would imply an average translocation time of 10 s). To our knowledge, the NF-κB import rate has never been measured directly.
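The arithmetic above can be checked directly; this short sketch reproduces the pseudo-first-order comparison for the quoted Lee et al. [38] values.

```python
k_bind = 0.5        # IkBa-NFkB binding rate, (uM s)^-1
free_ikba = 0.05    # assumed free IkBa concentration, uM
k_import = 0.0026   # NF-kB nuclear import rate, s^-1

pseudo_first_order = k_bind * free_ikba  # 0.025 s^-1
print(pseudo_first_order / k_import)     # ~9.6: binding ~10x more likely
```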
Another possible problem with the modeling and inference here is the discrepancy between model and experiment that is displayed in Fig. 3c, about which I could not find any comment in the manuscript. The issue is this: in the experiment the IκBα level continues to rise between 60 and 90 min while at the same time the amount of nuclear NF-κB rises, and both IκBα and NF-κB remain elevated at 180 min. The model, on the other hand, exhibits a decrease in the IκBα level on the same time interval. This result begs the question how in the experiment the NF-κB level can rise as the IκBα level also rises, but the model doesn't display this behavior and hence can't provide an explanation. The model clearly shows that IκBα and NF-κB oscillate out of phase, whereas the measured levels do not. It seems likely that some other mechanism is at play here, which is not being captured by the model. Another issue that concerns the modeling and also the interpretation of the experimental data is the basal level of NF-κB in the nucleus. On p. 6 it is stated that "WB analysis indicate[s] that in unstimulated cells the nuclear NF-κB level is very low." That is indeed what is shown in Fig. 2d, but it is contradicted by the fixed-cell imaging data shown in Fig. 2a and in Fig. 3a, c, which indicate that an average of about 20 % of NF-κB is in the nucleus prior to stimulation. The model does not capture this effect, which calls into question whether it is also missing some key aspects of the IκBα/NF-κB interaction dynamics. Something curious about the initial conditions of the model is also revealed by looking at the black points in Fig. 4a and b, showing the initial conditions in individual cells for the experiments and the model, respectively. Whereas the experiments exhibit considerable variability in the fractional amount of nuclear NF-κB and relatively little variation in the ratio of IκBα/NF-κB, the model shows little nuclear NF-κB but considerable variability in the relative amount of IκBα. How might this discrepancy affect the observed results?
Authors' response: Since there is no Fig. 3c, we think that the Reviewer means Fig. 4c. Indeed, there is a discrepancy between the model and the single-cell data, as observed by the Reviewer. The model was fitted to the population data obtained in the form of Western blots (see Fig. 2d). The immunofluorescence single-cell data were provided to show that also at the single-cell level NF-κB translocation is possible even when IκBα exceeds its initial level. By analysing single-cell images we rule out the possibility that the IκBα level is very high only in a fraction of cells that do not exhibit the second NF-κB translocation.
In fact this effect is even more pronounced when analysed at the level of immunostaining images. As shown in Figs. 2d and 4c, the average IκBα level (between 100 and 180 min) calculated based on immunofluorescence images is higher than that obtained in Western blots, and surprisingly it increases between 60 and 100 min, apparently in phase with nuclear NF-κB.
The discrepancy between the fractions of nuclear NF-κB in unstimulated cells obtained from quantified Western blots and from immunofluorescence images possibly follows from the overshadowing of nuclei by the cytoplasm, which is hard to avoid even when using confocal microscopy. This overshadowing depends on cell morphology, and we were unable to fully correct for it with our quantification method (see Methods).
Considering the above, we think that the population data better represent the average NF-κB and IκBα levels, while image-based single-cell quantification can give some insight into the heterogeneity of the response. In the revised manuscript we mention and briefly discuss these discrepancies.
Reviewer's report 3
William Hlavacek, CNLS, Los Alamos, NM, USA
Reviewer's summary: Korwek et al. report results from a study that involved both experimentation and modeling. The study was focused on understanding oscillations in nuclear localization of the transcription factor NF-kappaB in response to stimulation by an endotoxin (lipopolysaccharide, LPS) or a cytokine (TNFalpha). These signals induce the degradation of IkappaB, which is responsible for sequestering NF-kappaB in the cytosol. Degradation of IkappaB allows NF-kappaB to concentrate in the nucleus, which leads to new synthesis of IkappaB. The authors explain how, after an initial pulse of nuclear localization, NF-kappaB is able to concentrate in the nucleus a second time even though the overall abundance of its inhibitor IkappaB rises above its baseline abundance before the second pulse of nuclear localization. The explanation is that IkappaB must compete for binding to NF-kappaB with importin alpha proteins, which are karyopherins that mediate transport of NF-kappaB into the nucleus. It seems that this report offers an answer to a puzzling question about the dynamics of NF-kappaB nuclear localization. I think this report would be rather interesting to other researchers working on regulation of NF-kappaB. I suppose the major weakness of this report would be that the conclusions of the authors about the influence of karyopherins on NF-kappaB dynamics have not been directly tested, for example, by modulation of the strength of interaction between RELA and KPNA2.
There are some points in the report of Korwek et al. that could be clarified. I wonder if the authors could present more illustrative simulations or introduce a simplified model to more clearly explain how the competition between IkappaB and importin alpha gives rise to the faster-than-expected oscillations in NF-kappaB nuclear localization. I'm not confident that I was able to fully appreciate the authors' insights.
Authors' response: In the revised manuscript we provide a comparison between the model with and without explicitly accounting for importins (see Fig. 6 and the response to Reviewer 2).
I think that competition alone is not the only deciding factor but rather it is the competition in combination with the fact that there are two different compartments where NF-kappaB can be found (cytosol and nucleus). In any case, I would appreciate a clearer explanation of the role of karyopherins in NF-kappaB nuclear localization dynamics.
Authors' response: Yes, the Reviewer is indeed right that the discussed effect arises from the competition of IκBα and importin-α in combination with the fact that there are two different cellular compartments in which NF-κB can be found. In the revised manuscript we clarified the role of karyopherins in NF-κB nuclear localization dynamics.
It is not entirely clear from the manuscript as written if the above-baseline level of IkappaB during the second pulse of NF-kappaB nuclear localization is a novel observation of the authors being reported for the first time here, or rather a previously observed phenomenon.
Authors' response: To our knowledge, we are the first to show that in single cells nuclear NF-κB translocation coincides with above-baseline levels of IκBα. Although in a report by Fagerlund et al. (2015) [11] NF-κB translocation and elevated IκBα are also shown by Western blotting at 90–120 min after stimulation, only our immunofluorescence single-cell data demonstrate that this effect cannot be explained by high accumulation of IκBα in some cells and nuclear translocation in others.
The authors make several assumptions about protein copy numbers. I think these assumptions could be bolstered by referring to protein copy numbers reported by the Mann group for various mammalian cell lines, such as the report by Geiger T et al. (2012) [Mol Cell Proteomics DOI 10.1074/mcp.M111.014050 ].
Authors' response: The use of data detailing exact protein copy numbers is indeed very compelling. Concerning the work of Geiger et al. (2012), however, we found the values included in the paper unsuitable for our model. First of all, all eleven cell lines screened for proteins by Geiger et al. (2012) were of human origin and mostly of epithelial phenotype (we are aware that this does not necessarily preclude these data from use in modelling of MEFs). Secondly, in most of these cell lines the copy numbers of all the pertinent proteins, i.e., RelA, IκBα and A20, were not quantified simultaneously. Only three lines (GAMG, Jurkat and LnCap) had iBAQ values quantified for all three of these proteins, and some of these values come from only a single replicate or exhibit quite significant intra-replicate variance. Furthermore, these data suggest that the copy number of RelA exceeds that of IκBα by one order of magnitude, or in some cases even two (e.g., the log-transformed iBAQ values for RelA and IκBα in the GAMG cell line are around 7 and 5.2, respectively). Although we admire the scope of the cited paper, we find these values hard to reconcile with our current understanding of the role of IκBα as the main RelA inhibitor.
We have decided to use the estimates of NF-κB copy number included in the works of Carlotti et al. [53, 54], as stated in the manuscript, while the values for other proteins were mostly predicted by the model.
It would be appreciated if the authors provided the HGNC names for proteins (e.g., NFKBIA for IκBα).
Authors' response: We now provide HGNC names for genes in the revised manuscript.
The description of nuclear trafficking is incomplete. For example, RAN is never mentioned. A more complete description of nuclear trafficking would be helpful.
Authors' response: We include a more detailed description of nuclear trafficking and discuss the role of RAN.
Furthermore, the authors may wish to acknowledge that modeling of nuclear trafficking has been considered in the past, as in the work of Zilman A., Effects of multiple occupancy and interparticle interactions on selective transport through narrow channels: theory versus experiment. Biophysical Journal, Volume 96, February 2009, 1235–1248.
Authors' response: We refer to the work of Zilman et al. [28] and the recent work of Lolodi et al. [29] in the Background section.
In the abstract, the authors assume that readers will know that importins and exportins are karyopherins. The word "karyopherin" should probably be defined upon first use.
Authors' response: We define the term karyopherins in the revised manuscript.
The authors state that BioNetGen implements the Gillespie algorithm. It would be more precise to state that BioNetGen implements an efficient variation of Gillespie's direct method.
Authors' response: This is now corrected in the revised manuscript.
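For readers unfamiliar with the method, below is a minimal illustration of Gillespie's direct method on a toy mRNA birth–death process; this shows only the basic algorithm, not BioNetGen's refined implementation.

```python
import random

def gillespie_birth_death(k_prod=1.0, k_deg=0.01, x0=0, t_end=1000.0):
    """Exact stochastic simulation of: 0 -> mRNA (k_prod), mRNA -> 0 (k_deg)."""
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_end:
        a_prod, a_deg = k_prod, k_deg * x  # reaction propensities
        a_total = a_prod + a_deg
        t += random.expovariate(a_total)   # exponential waiting time
        x += 1 if random.random() < a_prod / a_total else -1
        trajectory.append((t, x))
    return trajectory
```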
It is said that TNFalpha is unstable in vivo but stable "under experimental conditions." Could the authors say more about how conditions affect TNFalpha stability? Does "resynthesized" mean "newly synthesized?"
Authors' response: We discuss TNFα stability in in vivo and in vitro studies in the revised manuscript, in the section "Formulation of the computational model".
Formulation of the computational model. The abbreviation "WB" should be defined upon first use.
In the abstract, the authors make a point about information transmission rate, but this issue is next considered only in the Discussion section. It's odd that Fig. 5 is not cited in the Results section. The authors claim that information channel capacity depends on n and tau without explaining or citing a source. It would be helpful if the authors could say more about this point and cite appropriate supporting references.
Authors' response: We now include Fig. 5 in the Results section and clarify what we mean by information channel capacity and how it is estimated.
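One simple reading of this dependence, stated here only as an illustration and not as the estimation procedure itself, is a capacity of log2(n) bits per reset time τ, with n = 2 input levels (TNFα present or absent) and τ ≈ 1 h.

```python
import math

def channel_capacity_bits_per_hour(n_levels, tau_hours):
    """Back-of-the-envelope estimate: log2(n) bits per reset time."""
    return math.log2(n_levels) / tau_hours

print(channel_capacity_bits_per_hour(2, 1.0))  # 1.0 bit/h
```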
The authors claim that the rates of transcription and translation for IkappaB and A20 are near their maximum values. How are the maximum values estimated? Could the authors cite appropriate supporting references for the estimates of the maximum rates?
Authors' response: We discuss how these rates are estimated and include appropriate references in the section "Formulation of the computational model" in the revised manuscript.
CHX:
Cycloheximide
IKK:
IκB kinase
IRF3:
Interferon regulatory factor 3
IκBα:
Nuclear factor of kappa light polypeptide gene enhancer in B-cells inhibitor α
LPS:
Lipopolysaccharide
NES:
Nuclear export sequence
NF-κB:
Nuclear factor kappa-light-chain-enhancer of activated B cells
NLS:
Nuclear localization sequence
STAT1:
Signal transducer and activator of transcription 1
TF:
Transcription factor
TNFR:
Tumor necrosis factor α receptor
TNFα:
Tumor necrosis factor α
WB:
Western blot
Muqbil I, Wu J, Aboukameel A, Mohammad RM, Azmi AS. Snail nuclear transport: The gateways regulating epithelial-to-mesenchymal transition? Semin Cancer Biol. 2014;27:39–45. doi:10.1016/j.semcancer.2014.06.003.
Forbes DJ, Travesa A, Nord MS, Bernis C. Nuclear transport factors: global regulation of mitosis. Curr Opin Cell Biol. 2015;35:78–90. doi:10.1016/j.ceb.2015.04.012.
Christie M, Chang C-W, Róna G, Smith KM, Stewart AG, Takeda AAS, et al. Structural biology and regulation of protein import into the nucleus. J Mol Biol. 2016;428:2060–90. doi:10.1016/j.jmb.2015.10.023.
Stewart M. Molecular mechanism of the nuclear protein import cycle. Nat Rev Mol Cell Biol. 2007;8:195–208. doi:10.1038/nrm2114.
Scott ES, Malcomber S, O'Hare P. Nuclear translocation and activation of the transcription factor NFAT is blocked by herpes simplex virus infection. J Virol. 2001;75:9955–65. doi:10.1128/JVI.75.20.9955-9965.2001.
Passinen S, Valkila J, Manninen T, Syvälä H, Ylikomi T. The C-terminal half of Hsp90 is responsible for its cytoplasmic localization. Eur J Biochem. 2001;268:5337–42. doi:10.1046/j.0014-2956.2001.02467.x.
Melen K, Fagerlund R, Franke J, Kohler M, Kinnunen L, Julkunen I. Importin nuclear localization signal binding sites for STAT1, STAT2, and influenza A Virus nucleoprotein. J Biol Chem. 2003;278:28193–200. doi:10.1074/jbc.M303571200.
Ren M, Drivas G, D'Eustachio P, Rush MG. Ran/TC4: a small nuclear GTP-binding protein that regulates DNA synthesis. J Cell Biol. 1993;120:313–23. doi:10.1083/jcb.120.2.313.
Bischoff FR, Ponstingl H. Mitotic regulator protein RCC1 is complexed with a nuclear ras-related polypeptide. Proc Natl Acad Sci USA. 1991;88:10830–4.
Mikenberg I, Widera D, Kaus A, Kaltschmidt B, Kaltschmidt C. Transcription factor NF-κB is transported to the nucleus via cytoplasmic dynein/dynactin motor complex in hippocampal neurons. PLoS One. 2007;2:e589. doi:10.1371/journal.pone.0000589.
Fagerlund R, Behar M, Fortmann KT, Lin YE, Vargas JD, Hoffmann A. Anatomy of a negative feedback loop: the case of IκBα. J R Soc Interface. 2015;12:20150262. doi:10.1098/rsif.2015.0262.
Lee EG, Boone DL, Chai S, Libby SL, Chien M, Lodolce JP, et al. Failure to regulate TNF-induced NF-κB and cell death responses in A20-deficient mice. Science. 2000;289:2350–4. doi:10.1126/science.289.5488.2350.
Malek S. IκBβ, but Not IκBα, functions as a classical cytoplasmic inhibitor of NF-κB Dimers by masking both NF-κB nuclear localization sequences in resting cells. J Biol Chem. 2001;276:45225–35. doi:10.1074/jbc.M105865200.
Cervantes C, Bergqvist S, Kjaergaard M, Kroon G, Sue S-C, Dyson H, et al. The RelA Nuclear Localization Signal Folds upon Binding to IκBα. J Mol Biol. 2011;405:754–64. doi:10.1016/j.jmb.2010.10.055.
Chen Z, Hagler J, Palombella VJ, Melandri F, Scherer D, Ballard D, et al. Signal-induced site-specific phosphorylation targets IκBα to the ubiquitin-proteasome pathway. Genes Dev. 1995;9:1586–97. doi:10.1101/gad.9.13.1586.
Karin M. How NF-κB is activated: the role of the IκB kinase (IKK) complex. Oncogene. 1999;18:6867–74.
Bergqvist S, Ghosh G, Komives EA. The IκBα/NF-κB complex has two hot spots, one at either end of the interface. Protein Sci. 2008;17:2051–8. doi:10.1110/ps.037481.108.
Alverdi V, Hetrick B, Joseph S, Komives EA. Direct observation of a transient ternary complex during IκBα-mediated dissociation of NF-κB from DNA. Proc Natl Acad Sci USA. 2014;111:225–30. doi:10.1073/pnas.1318115111.
Fagerlund R, Kinnunen L, Kohler M, Julkunen I, Melen K. NF-κB Is Transported into the Nucleus by Importin α3 and Importin α4. J Biol Chem. 2005;280:15942–51. doi:10.1074/jbc.M500814200.
Fagerlund R. Nuclear import mechanisms of STAT and NF-κB transcription factors. PhD thesis, University of Helsinki: National Public Health Institute of Finland. 2008.
Liang P, Zhang H, Wang G, Li S, Cong S, Luo Y, et al. KPNB1, XPO7 and IPO8 Mediate the Translocation of NF-κB/p65 into the Nucleus. Traffic. 2013;14:1132–43. doi:10.1111/tra.12097.
Sabatel H, Di Valentin E, Gloire G, Dequiedt F, Piette J, Habraken Y. Phosphorylation of p65(RelA) on Ser547 by ATM Represses NF-κB-Dependent Transcription of Specific Genes after Genotoxic Stress. PLoS One. 2012;7:e38246. doi:10.1371/journal.pone.0038246.
Zetoune FS, Murthy AR, Shao Z, Hlaing T, Zeidler MG, Li Y, et al. A20 inhibits NF-κB activation downstream of multiple MAP3 kinases and interacts with the IκB signalosome. Cytokine. 2001;15:282–98. doi:10.1006/cyto.2001.0921.
Hrdličková R, Nehyba J, Roy A, Humphries EH, Bose HR. The relocalization of v-Rel from the nucleus to the cytoplasm coincides with induction of expression of Ikba and nfkb1 and stabilization of IκBα. J Virol. 1995;69(1):403–13.
Kumar S, Gelinas C. IκBα-mediated inhibition of v-Rel DNA binding requires direct interaction with the RXXRXRXXC Rel/κB DNA-binding motif. Proc Natl Acad Sci USA. 1993;90:8962–6.
Sachdev S, Hoffmann A, Hannink M. Nuclear Localization of IκBα Is Mediated by the Second Ankyrin Repeat: the IκBα Ankyrin Repeats Define a Novel Class of cis -Acting Nuclear Import Sequences. Mol Cell Biol. 1998;18:2524–34. doi:10.1128/MCB.18.5.2524.
Tam WF, Lee LH, Davis L, Sen R. Cytoplasmic Sequestration of Rel Proteins by IκBα Requires CRM1-Dependent Nuclear Export. Mol Cell Biol. 2000;20:2269–84. doi:10.1128/MCB.20.6.2269-2284.2000.
Zilman A. Effects of Multiple Occupancy and Interparticle Interactions on Selective Transport through Narrow Channels: Theory versus Experiment. Biophys J. 2009;96:1235–48. doi:10.1016/j.bpj.2008.09.058.
Lolodi O, Yamazaki H, Otsuka S, Kumeta M, Yoshimura SH. Dissecting in vivo steady-state dynamics of karyopherin-dependent nuclear transport. Mol Biol Cell. 2016;27:167–76. doi:10.1091/mbc.E15-08-0601.
Hoffmann A, Levchenko A, Scott ML, Baltimore D. The IκB-NF-κB signaling module: temporal control and selective gene activation. Science. 2002;298:1241–5. doi:10.1126/science.1071914.
Lipniacki T, Paszek P, Brasier AR, Luxon B, Kimmel M. Mathematical model of NF-κB regulatory module. J Theor Biol. 2004;228:195–215. doi:10.1016/j.jtbi.2004.01.001.
Covert MW, Leung TH, Gaston JE, Baltimore D. Achieving stability of lipopolysaccharide-induced NF-κB activation. Science. 2005;309:1854–7. doi:10.1126/science.1112304.
Tay S, Hughey JJ, Lee TK, Lipniacki T, Quake SR, Covert MW. Single-cell NF-kappaB dynamics reveal digital activation and analogue information processing. Nature. 2010;466:267–71. doi:10.1038/nature09145.
Ashall L, Horton CA, Nelson DE, Paszek P, Harper CV, Sillitoe K, et al. Pulsatile stimulation determines timing and specificity of NF-κB-dependent transcription. Science. 2009;324:242–6. doi:10.1126/science.1164860.
Pękalski J, Zuk PJ, Kochańczyk M, Junkin M, Kellogg R, Tay S, et al. Spontaneous NF-κB activation by autocrine TNFα signaling: a computational analysis. PLoS One. 2013;8:e78887. doi:10.1371/journal.pone.0078887.
Kellogg RA, Tian C, Lipniacki T, Quake SR, Tay S. Digital signaling decouples activation probability and population heterogeneity. eLife. 2015;4:e08931. doi:10.7554/eLife.08931.
Williams SA, Chen L-F, Kwon H, Fenard D, Bisgrove D, Verdin E, et al. Prostratin antagonizes HIV latency by activating NF-κB. J Biol Chem. 2004;279:42008–17. doi:10.1074/jbc.M402124200.
Lee REC, Walker SR, Savery K, Frank DA, Gaudet S. Fold Change of Nuclear NF-κB Determines TNF-Induced Transcription in Single Cells. Mol Cell. 2014;53:867–79. doi:10.1016/j.molcel.2014.01.026.
Lipniacki T, Kimmel M. Deterministic and stochastic models of NF-κB pathway. Cardiovasc Toxicol. 2007;7:215–34. doi:10.1007/s12012-007-9003-x.
Zambrano S, Toma ID, Piffer A, Bianchi ME, Agresti A. NF-κB oscillations translate into functionally related patterns of gene expression. Elife. 2016;5:e09100. doi:10.7554/eLife.09100.
Sturrock M, Terry AJ, Xirodimas DP, Thompson AM, Chaplain MAJ. Spatio-temporal modelling of the Hes1 and p53-Mdm2 intracellular signalling pathways. J Theor Biol. 2011;273:15–31. doi:10.1016/j.jtbi.2010.12.016.
Terry AJ, Chaplain MAJ. Spatio-temporal modelling of the intracellular signalling pathway: The roles of diffusion, active transport, and cell geometry. J Theor Biol. 2011;290:7–26. doi:10.1016/j.jtbi.2011.08.036.
Cheong R, Rhee A, Wang CJ, Nemenman I, Levchenko A. Information transduction capacity of noisy biochemical signaling networks. Science. 2011;334:354–8. doi:10.1126/science.1204553.
Selimkhanov J, Taylor B, Yao J, Pilko A, Albeck J, Hoffmann A, et al. Accurate information transmission through dynamic biochemical signaling networks. Science. 2014;346:1370–3. doi:10.1126/science.1254933.
Adamson A, Boddington C, Downton P, Rowe W, Bagnall J, Lam C, et al. Signal transduction controls heterogeneous NF-κB dynamics and target gene expression through cytokine-specific refractory states. Nat Commun. 2016;7:12057. doi:10.1038/ncomms12057.
Krappmann D, Wulczyn FG, Scheidereit C. Different mechanisms control signal-induced degradation and basal turnover of the NF-κB inhibitor IκBα in vivo. EMBO J. 1996;15:6716.
Nelson DE, Ihekwaba AEC, Elliott M, Johnson JR, Gibney CA, Foreman BE, et al. Oscillations in NF-κB Signaling Control the Dynamics of Gene Expression. Science. 2004;306:704–8. doi:10.1126/science.1099962.
Lee TK, Denny EM, Sanghvi JC, Gaston JE, Maynard ND, Hughey JJ, et al. A noisy paracrine signal determines the cellular NF-κB response to lipopolysaccharide. Sci Signal. 2009;2:ra65. doi:10.1126/scisignal.2000599.
Lipniacki T, Puszynski K, Paszek P, Brasier AR, Kimmel M. Single TNFα trimers mediating NF-κB activation: stochastic robustness of NF-κB signaling. BMC Bioinformatics. 2007;8:376. doi:10.1186/1471-2105-8-376.
Hlavacek WS, Faeder JR, Blinov ML, Posner RG, Hucka M, Fontana W. Rules for Modeling Signal-Transduction Systems. Sci STKE. 2006;2006:re6. doi:10.1126/stke.3442006re6.
Faeder JR, Blinov ML, Hlavacek WS. Rule-based modeling of biochemical systems with BioNetGen. Methods Mol Biol. 2009;500:113–67. doi:10.1007/978-1-59745-525-1_5.
Gillespie DT. Exact stochastic simulation of coupled chemical reactions. J Phys Chem. 1977;81:2340–61. doi:10.1021/j100540a008.
Carlotti F, Chapman R, Dower SK, Qwarnstrom EE. Activation of nuclear factor kappaB in single living cells. Dependence of nuclear translocation and anti-apoptotic function on EGFPRELA concentration. J Biol Chem. 1999;274:37941–9.
Carlotti F, Dower SK, Qwarnstrom EE. Dynamic shuttling of nuclear factor kappa B between the nucleus and cytoplasm as a consequence of inhibitor dissociation. J Biol Chem. 2000;275:41028–34. doi:10.1074/jbc.M006179200.
Fuchs G, Voichek Y, Benjamin S, Gilad S, Amit I, Oren M. 4sUDRB-seq: measuring genomewide transcriptional elongation rates and initiation frequencies within cells. Genome Biol. 2014;15:1. doi:10.1186/gb-2014-15-5-r69.
Bolouri H, Davidson EH. Transcriptional regulatory cascades in development: initial rates, not steady state, determine network kinetics. Proc Natl Acad Sci USA. 2003;100:9371–6.
Ingolia NT, Lareau LF, Weissman JS. Ribosome Profiling of Mouse Embryonic Stem Cells Reveals the Complexity and Dynamics of Mammalian Proteomes. Cell. 2011;147:789–802. doi:10.1016/j.cell.2011.10.002.
Afonina ZA, Myasnikov AG, Shirokov VA, Klaholz BP, Spirin AS. Conformation transitions of eukaryotic polyribosomes during multi-round translation. Nucleic Acids Res. 2015;43:618–28. doi:10.1093/nar/gku1270.
Beutler BA, Milsark IW, Cerami A. Cachectin/tumor necrosis factor: production, distribution, and metabolic fate in vivo. J Immunol. 1985;135:3972–7.
Flick DA, Gifford GE. Pharmacokinetics of Murine Tumor Necrosis Factor. Immunopharmacol Immunotoxicol. 1986;8:89–97. doi:10.3109/08923978609031087.
Kamada H, Tsutsumi Y, Yamamoto Y, Kihira T, Kaneda Y, Mu Y, et al. Antitumor Activity of Tumor Necrosis Factor-α Conjugated with Polyvinylpyrrolidone on Solid Tumors in Mice. Cancer Res. 2000;60:6416–20.
Pham CTN. Neutrophil serine proteases: specific regulators of inflammation. Nat Rev Immunol. 2006;6:541–50. doi:10.1038/nri1841.
Kellogg RA, Tay S. Noise Facilitates Transcriptional Control under Dynamic Inputs. Cell. 2015;160:381–92. doi:10.1016/j.cell.2015.01.013.
Werner SL, Kearns JD, Zadorozhnaya V, Lynch C, O'Dea E, Boldin MP, et al. Encoding NF-κB temporal control in response to TNF: distinct roles for the negative regulators IκBα and A20. Genes Dev. 2008;22:2093–101. doi:10.1101/gad.1680708.
Shannon CE. Communication in the Presence of Noise. Proc IRE. 1949;37:10–21.
Potoyan DA, Zheng W, Komives EA, Wolynes PG. Molecular stripping in the NF-κB/IκB/DNA genetic regulatory network. Proc Natl Acad Sci USA. 2016;113:110–5. doi:10.1073/pnas.1520483112.
Kumar KP, McBride KM, Weaver BK, Dingwall C, Reich NC. Regulated nuclear-cytoplasmic localization of interferon regulatory factor 3, a subunit of double-stranded RNA-activated factor 1. Mol Cell Biol. 2000;20:4159–68.
McBride KM. Regulated nuclear import of the STAT1 transcription factor by direct binding of importin-α. EMBO J. 2002;21:1754–63. doi:10.1093/emboj/21.7.1754.
McBride KM, McDonald C, Reich NC. Nuclear export signal located within the DNA-binding domain of the STAT1 transcription factor. EMBO J. 2000;19:6196–206. doi:10.1093/emboj/19.22.6196.
Wang X, Hussain S, Wang E-J, Wang X, Li MO, García-Sastre A, et al. Lack of essential role of NF-κB p50, RelA, and cRel subunits in virus-induced type 1 IFN expression. J Immunol. 2007;178:6770–6. doi:10.4049/jimmunol.178.11.6770.
De Weerd NA, Samarajiwa SA, Hertzog PJ. Type I Interferon Receptors: Biochemistry and Biological Functions. J Biol Chem. 2007;282:20053–7. doi:10.1074/jbc.R700006200.
Hartmann BM, Marjanovic N, Nudelman G, Moran TM, Sealfon SC. Combinatorial cytokine code generates anti-viral state in dendritic cells. Front Immunol. 2014;5:73. doi:10.3389/fimmu.2014.00073.
Lätzer J, Papoian GA, Prentiss MC, Komives EA, Wolynes PG. Induced Fit, Folding, and Recognition of the NF-κB-Nuclear Localization Signals by IκBα and IκBβ. J Mol Biol. 2007;367:262–74. doi:10.1016/j.jmb.2006.12.006.
This study was financed by the National Science Centre (Poland), grant no. 2014/13/B/NZ2/03840. The funding body had no role in the design of the study; the collection, analysis, and interpretation of data; or the writing of the manuscript.
Availability of supporting data
All supporting data are included with the submission.
Designed the study: TL; designed and performed experiments: ZK, MC, WP, JM; built analysis tools: MK; analysed data: KT; built the computational model and performed simulations: PNJ, TL; all authors contributed to writing the manuscript. All authors read and approved the final manuscript.
Dr. Zbigniew Korwek, Karolina Tudelska BSc, Dr. Maciej Czerkies, Dr. Wiktor Prus, Joanna Markiewicz MSc, Marek Kochańczyk MSc, and Prof. Tomasz Lipniacki are all from the Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland.
Paweł Nałęcz-Jawecki BSc is from the College of Inter-Faculty Individual Studies in Mathematics and Natural Sciences, University of Warsaw, Warsaw, Poland.
Authors declare that no competing interests exist.
Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
Zbigniew Korwek, Karolina Tudelska, Maciej Czerkies, Wiktor Prus, Joanna Markiewicz, Marek Kochańczyk & Tomasz Lipniacki
College of Inter-Faculty Individual Studies in Mathematics and Natural Sciences, University of Warsaw, Warsaw, Poland
Paweł Nałęcz-Jawecki
Correspondence to Tomasz Lipniacki.
Computational model description. (PDF 349 kb)
Computational model in BNGL. (BNGL 14 kb)
Figure S1. Analysis of functional TNFα degradation under experimental conditions. (a) Immunostaining confocal images of unstimulated cells and cells stimulated for 15 and 30 min with fresh TNFα at 10 ng/ml concentration. (b) Immunostaining images of cells stimulated for 15 and 30 min with media harvested from cells stimulated for 6 h with TNFα at the initial concentration of 10 ng/ml. (c, d) Simulated single-cell (thin lines) and population-average (bold lines) trajectories in response to TNFα stimulation with concentrations D1 = 10 ng/ml (orange lines) and D2 = 1.15 ng/ml (blue lines). At the assumed TNFα degradation coefficient c_deg = 10^-4/s, the initial TNFα concentration D1 is reduced to D2 after 6 h. Figure S2. Model simulation trajectories showing unstimulated, equilibrated cells. Figure S3. Model simulation trajectories for A20-deficient cells in response to 10 ng/ml TNFα stimulation, studied experimentally by Lee et al. [12]. As in the experiment, A20-deficient cells respond with a stable NF-κB translocation. Figure S4. Model-simulated responses to single 10 ng/ml TNFα pulses of various durations. Simulations correspond to experimental data [30, 64] showing single NF-κB pulses of amplitude almost independent of pulse duration. Figure S5. Model-simulated responses to a series of three 5 min, 10 ng/ml TNFα pulses with pulse repeats of 60 min, 100 min and 200 min, corresponding to the experiment by Ashall et al. [34], who observed that almost all cells respond to the first pulse, while about 30 % of cells respond to the second and third pulses for the 60 min and 100 min repeats. For the 200 min repeats, almost all cells respond to all three TNFα pulses. Figure S6. Model-simulated responses to repeated 10 ng/ml TNFα pulses, corresponding to the experiment by Zambrano et al. [40], who observed NF-κB oscillations in response to pulses repeated every 45 min. Figure S7. Scatter plots showing the evolution of the total IκBα/total NF-κB ratio and the nuclear NF-κB/total NF-κB ratio in response to 1 μg/ml LPS. The scatter plots are based on the quantified confocal images shown in Additional file 5. (PDF 3475 kb)
Full confocal images with marked cells used for fluorescence quantification after 10 ng/ml TNFα stimulation for 0, 15, 30, 60, 90 and 180 min. The data table shows the raw nuclear to total NF-κB ratio and the normalized total IκBα to total NF-κB ratio for each analysed cell (for quantification details see Methods). (PDF 1380 kb)
Full confocal images with marked cells used for fluorescence quantification after 1 μg/ml LPS stimulation. The data table shows the raw nuclear to total NF-κB ratio and the normalized total IκBα to total NF-κB ratio for each analysed cell (for quantification details see Methods). (PDF 1238 kb)
Full confocal images with marked nuclei used for NF-κB fluorescence quantification after 5 μg/ml CHX + 1 μg/ml LPS stimulation. CHX treatment starts 60 min before LPS. (PDF 1087 kb)
Full confocal images of cells stimulated with 10 ng/ml TNFα for 0, 5, 10, 15, 20, and 30 min. (PDF 733 kb)
Korwek, Z., Tudelska, K., Nałęcz-Jawecki, P. et al. Importins promote high-frequency NF-κB oscillations increasing information channel capacity. Biol Direct 11, 61 (2016). https://doi.org/10.1186/s13062-016-0164-z
Karyopherins
Nucleocytoplasmic transport
Channel information capacity
Mixed strategy Nash equilibrium
A Nash equilibrium for a mixed-strategy game is stable if a small change (specifically, an infinitesimal change) in probabilities for one player leads to a situation where two conditions hold: the player who did not change has no better strategy in the new circumstance, and the player who did change is now playing a strictly worse strategy. A mixed strategy Nash equilibrium of $(N, (A_i), (u_i))$ is a Nash equilibrium of the mixed extension $(N, (\Delta(A_i)), (U_i))$. For any finite strategic game, there exists a mixed strategy Nash equilibrium; this is a corollary of the previous existence result. (Obara, UCLA, Mixed Strategy Nash Equilibrium, January 15, 2012.) Game Theory 101: The Complete Textbook on Amazon: https://www.amazon.com/Game-Theory-101-Complete-Textbook/dp/1492728152/ http://gametheory101.com/courses/gam.. So the game has no pure strategy Nash equilibrium. Mixed strategies: suppose that in the mixed strategy NE, player 1 chooses T and B with probability $p$ and $1-p$, respectively, and player 2 chooses L and R with probability $q$ and $1-q$, respectively. Given player 2's mixed strategy $(q, 1-q)$, we have for player 1: $u_1(T, (q, 1-q)) = 2q + (1-q)\cdot 0 = 2q$ and $u_1(B, (q, 1-q)) = q\cdot 1 + (1-q)\cdot 3 = 3 - 2q$.
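Setting the two expected payoffs equal, $2q = 3 - 2q$, gives $q = 3/4$. A minimal sketch of this indifference calculation, assuming the generic closed form for 2x2 games and using only player 1's payoffs from the example (the function name is illustrative):

```python
def opponent_mix_2x2(a, b, c, d):
    """Probability q on the first column that makes the row player indifferent:
    q*a + (1-q)*b = q*c + (1-q)*d, so q = (d-b) / ((a-c) + (d-b))."""
    denom = (a - c) + (d - b)
    if denom == 0:
        raise ValueError("row player is indifferent for every q (degenerate case)")
    return (d - b) / denom

# Player 1's payoffs from the game above: u1(T,L)=2, u1(T,R)=0, u1(B,L)=1, u1(B,R)=3.
q = opponent_mix_2x2(a=2, b=0, c=1, d=3)
print(q)  # 0.75, at which both rows earn 2q = 3 - 2q = 1.5
```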
Nash equilibrium - Wikipedia
Strictly dominated strategies are never used in mixed Nash equilibria, even if they are dominated only by other mixed strategies.
So what? An immediate implication of this lesson is that if a mixed strategy forms part of a Nash equilibrium, then each pure strategy in the mix must itself be a best response. Hence all the strategies in the mix must yield the same expected payoff. We will use this fact to find mixed-strategy Nash equilibria. Finding Mixed-Strategy Nash Equilibria
Mixed Strategy Nash Equilibrium • A mixed strategy is one in which a player plays his available pure strategies with certain probabilities. • Mixed strategies are best understood in the context of repeated games, where each player's aim is to keep the other player guessing. Related questions: Finding Mixed Strategy Nash Equilibria; Pure vs mixed strategy Nash Equilibria; Finding all mixed Nash equilibria in a $3\times 3$ game; Game Theory - Mixed strategy Nash equilibria.
Mixed strategy Nash equilibrium (Harrington: Chapter 7; Watson: Chapter 11). First, note that if a player plays more than one strategy with strictly positive probability, then he must be indifferent between the strategies he plays with strictly positive probability. Notation: non-degenerate mixed strategies denotes a set of strategies that a player plays with strictly positive probability. The approach can probably also be used to find the mixed strategy BNE, but it is perhaps more complicated than what is described in method 2. For reference, here are some notes on the topic. These notes give instructions on how to solve for the pure strategy Nash equilibria using the transformation that you've given. They also demonstrate how to solve for the mixed strategy equilibria using method 1. Mixed-strategy Nash equilibrium: this section identifies the mixed-strategy Nash equilibrium in a PPS estimated by DEA with inputs, desirable outputs, and undesirable outputs; consider a multiple-input and multiple-output production process. So when using mixed strategies, the game above that was said to have no Nash equilibrium will actually have one. However, determining this Nash equilibrium is a very difficult task. Nash Equilibria in Practice. An example of a Nash equilibrium in practice is a law that nobody would break, for example red and green traffic lights. When two cars drive to a crossroads from different directions there are four options: both drive, both stop, car 1 drives and car 2 stops, or car 1 stops and car 2 drives. Key Takeaways: A mixed strategy Nash equilibrium involves at least one player playing a randomized strategy and no player being able to increase his or her expected payoff by playing an alternate strategy. A Nash equilibrium without randomization is called a pure strategy Nash equilibrium. If a player is supposed to randomize over two strategies, then both must yield the same expected payoff.
A mixed-strategy Nash equilibrium is weak in the same sense as the (North, North) equilibrium in the Battle of the Bismarck Sea: to maintain the equilibrium, a player who is indifferent between strategies must pick a particular strategy from out of the set of strategies. One way to reinterpret the Welfare Game is to imagine that instead of a single pauper there are many, with identical tastes. Lecture 6: Mixed strategies, Nash equilibria and reaction curves. Nash equilibrium: the concept of Nash equilibrium can be extended in a natural manner to the mixed strategies introduced in Lecture 5. First we generalize the idea of a best response to a mixed strategy. Definition 1. A mixed strategy $\hat\sigma_R$ is a best response for $R$ to some mixed strategy $\sigma_C$ of $C$ if ...
Game Theory 101 (#7): Mixed Strategy Nash Equilibrium and ...
1 Describing Mixed Strategy Nash Equilibria. Consider the following two games. The first game is one you might be familiar with: Rock, Paper, Scissors. In case you are not, in this game there are 2 players who simultaneously determine which object to form with their fingers. Each player has 3 strategies: form Rock, form Paper, or form Scissors. If both players form the same object, the game is a tie. Nash equilibrium is a pair of strategies in which each player's strategy is a best response to the other player's strategy. In a game like Prisoner's Dilemma, there is one pure Nash equilibrium where both players will choose to confess. However, the players only have two choices: to confess or not to confess. Nash Equilibrium in Mixed Strategies. Last time we saw an example of a matrix game which has no NE (from problem set one). Consider the game with payoffs listed as (row, column): T gives (0,3) against L and (3,0) against R; B gives (2,1) against L and (1,2) against R. If P1 reveals that they will play T, then P2 will play L, resulting in P1 having the worst possible payoff of 0. Any play that P1 announces will result in them getting the worst possible payoff. Nash equilibrium is useful to provide predictions of outcome. It does not require dominant strategies. Some games do not have a Nash equilibrium in pure strategies. It is realistic and useful to expand the strategy space to include random strategies, called mixed strategies, with which a Nash equilibrium always exists in finite games.
And there it is. According to this diagram, the mixed strategy Nash equilibrium is that John will choose Red Lobster 36% of the time (and Outback 64% of the time) while Mary will choose Red Lobster 77% of the time (and Outback 23% of the time). Note that PSE stands for Pure Strategy Equilibrium. 3 Mixed Nash Equilibrium. Definition 2.7. A mixed strategy $\sigma_i$ for player $i$ is a probability distribution over the set of pure strategies $S_i$. We will only consider the case of finitely many pure strategies and finitely many players. In this case, we can write a mixed strategy $\sigma_i$ as $(\sigma_{i,s_i})_{s_i \in S_i}$ with $\sum_{s_i \in S_i} \sigma_{i,s_i} = 1$. The payoff of a mixed state $\sigma$ for player $i$ is $u_i(\sigma) = \sum_{s \in S} p(s) \cdot u_i(s)$. Mixed-strategy Nash equilibrium for a discontinuous symmetric N-player game. H. J. Hilhorst and C. Appert-Rolland, Laboratoire de Physique Théorique (UMR 8627), CNRS, Université Paris-Sud, Université Paris-Saclay, 91405 Orsay Cedex, France. E-mail: [email protected]. Received 13 October 2017, revised 15 January 2018, accepted for publication 17 January 2018, published 6 February 2018. Mixed strategy Nash equilibria are equilibria where at least one player is playing a mixed strategy. While Nash proved that every finite game has a Nash equilibrium, not all have pure strategy Nash equilibria. For an example of a game that does not have a Nash equilibrium in pure strategies, see Matching Pennies.
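For two players mixing independently, the payoff formula $u_i(\sigma) = \sum_{s} p(s)\,u_i(s)$ reduces to the bilinear form $p^{\top} A q$. A short sketch of that computation, using the battle-of-the-sexes payoffs quoted later on this page (any bimatrix game would work):

```python
import numpy as np

A = np.array([[2.0, 0.0],   # row player's payoffs (Opera/Football)
              [0.0, 1.0]])
B = np.array([[1.0, 0.0],   # column player's payoffs
              [0.0, 2.0]])

p = np.array([2/3, 1/3])    # row player's mixed strategy sigma_1
q = np.array([1/3, 2/3])    # column player's mixed strategy sigma_2

u1 = p @ A @ q              # expected payoff to player 1, sum over all cells
u2 = p @ B @ q
print(u1, u2)               # both equal 2/3 at the mixed equilibrium of this game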
Find all mixed-strategy Nash Equilibria of 2x3 game
Chapter 10: Mixed strategies, Nash equilibria, reaction curves and the equality of payoffs theorem. Nash equilibrium: the concept of Nash equilibrium can be extended in a natural manner to the mixed strategies introduced in Lecture 5. First we generalize the idea of a best response to a mixed strategy. Definition 1. A mixed strategy $\hat\sigma_R$ is a best response for $R$ to some mixed strategy $\sigma_C$ of $C$ if ... Correlated equilibrium: mixed strategy Nash equilibria tend to have low efficiency; correlated equilibria rely on a public signal followed by a Nash equilibrium in the game that follows. Asymmetric mixed strategy equilibria: making a game asymmetric often makes its mixed strategy equilibrium asymmetric; the Asymmetric Market Niche game is an example. This leads to the following characterization of a mixed strategy Nash equilibrium. Proposition 1. A mixed strategy profile $\sigma^*$ is a mixed strategy Nash equilibrium if and only if for each player $i$, $u_i(\sigma_i^*, \sigma_{-i}^*) \ge u_i(s_i, \sigma_{-i}^*)$ for all $s_i \in S_i$. We also have the following useful characterization of a mixed strategy Nash equilibrium in finite strategy set games. Proposition 2. Let $G = \langle I, (S_i), \ldots \rangle$ ... The payoff for both players in this particular strategy profile, i.e., in the Nash equilibrium, is equal to 0.8. On this slide, you can see a list of references. You can find the proof of the fundamental theorem for non-cooperative games, the theorem of existence of the Nash equilibrium in mixed strategies. Also, you can find the proofs of the following. Fundamental theorem of mixed-strategy Nash equilibrium: a mixed strategy profile $\sigma$ is a Nash equilibrium if and only if, for every player $i = 1, \ldots, n$ with pure-strategy set $S_i$, the following condition is met: if pure strategies $s_i$ and $s_i'$ occur with positive probability in $\sigma_i$, then the expected payoffs to $s_i$ and $s_i'$ are equal when played against $\sigma_{-i}$.
game theory - Bayesian Nash Equilibrium - Mixed Strategies
Nash Equilibria in Mixed Strategies. Definition: Pure and Mixed Strategies. In all games so far, all players had to choose exactly one strategy: Smith and Wesson had to either confess or remain silent in the prisoners' dilemma; George and Helena had to go either to the soccer match or the concert in the battle of the sexes; David and Edgar could only either swerve or go on driving in the game of chicken.
Teaching Mixed Strategy Nash Equilibrium to Undergraduates. Kenneth Garrett, Evan Moore, [email protected]. Evan Moore, Associate Professor and Head of the Department of Economics, Auburn University Montgomery, P.O. Box 244023, Montgomery, AL 36124-4023, USA. Abstract: The authors present a simple and effective method for improving student comprehension of mixed strategies.
Mixed strategy Nash equilibrium • A mixed strategy of a player in a strategic game is a probability distribution over the player's actions, denoted by $\alpha_i(a_i)$; e.g., $\alpha_i(\text{left}) = 1/3$, $\alpha_i(\text{right}) = 2/3$. A pure strategy is a mixed strategy that assigns probability 1 to a particular action. • The mixed strategy profile $\alpha^*$ in a strategic ...
Therefore, those probabilities are a mixed strategy Nash equilibrium. Beyond this example: when you are asked to find the Nash equilibria of a game, you first state the pure strategy Nash equilibria, and then look for the mixed strategy one as well. Find the probabilities of the expected payoffs for each player with the method described above. If a player has three or more action plans ...
Thus, under the mixed strategy Nash equilibrium, the two players share the chance of winning. Unfortunately, they also create the possibility of a crash (which happens with probability 1/4). Thus the expected payoff of each player at the mixed strategy Nash equilibrium is (1.5, 1.5), which is worse than each would get under (C,C). However, as with any Nash equilibrium, it would constitute ...
Mixed-strategy Nash equilibrium in data envelopment analysis
In the following article, we will look at how to find mixed strategy Nash equilibria, and how to interpret them.
Then a mixed strategy Bayesian Nash equilibrium exists. Theorem: consider a Bayesian game with continuous strategy spaces and continuous types. If strategy sets and type sets are compact, and payoff functions are continuous and concave in own strategies, then a pure strategy Bayesian Nash equilibrium exists. The ideas underlying these theorems and proofs are identical to those for the existence of ...
Exercise 2 - Mixed strategy Nash equilibrium with N players. a) The normal form representation of the game for n = 2 players is given below; Player 1 chooses the row, Player 2 the column, and payoffs are listed as (Player 1, Player 2):

      X      Y
X   3, 3   4, 3
Y   3, 4   2, 2

There are three pure strategy Nash equilibria in this game: (X,X), (X,Y) and (Y,X). b) When introducing n = 3 players, the normal form representation of the game is ...
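The three pure equilibria claimed above are easy to confirm by brute force. A minimal sketch (the dictionary layout is just one way to encode the table):

```python
from itertools import product

# Payoffs are (row, column), matching the n = 2 table above.
payoffs = {
    ("X", "X"): (3, 3), ("X", "Y"): (4, 3),
    ("Y", "X"): (3, 4), ("Y", "Y"): (2, 2),
}
actions = ["X", "Y"]

def is_pure_nash(r, c):
    u_r, u_c = payoffs[(r, c)]
    row_ok = all(payoffs[(r2, c)][0] <= u_r for r2 in actions)  # no better row
    col_ok = all(payoffs[(r, c2)][1] <= u_c for c2 in actions)  # no better column
    return row_ok and col_ok

print([cell for cell in product(actions, actions) if is_pure_nash(*cell)])
# [('X', 'X'), ('X', 'Y'), ('Y', 'X')]
```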
Solve for the mixed strategy Nash equilibrium. Write the probabilities of playing each strategy next to those strategies. For each cell, multiply the probability player 1 plays his corresponding strategy by the probability player 2 plays her corresponding strategy. Write this in the cell. Choose the player whose payoff you want to calculate. Multiply each probability in each cell by his or her payoff in that cell, and sum over the cells to obtain the expected payoff.
7 Mixed Strategy Nash Equilibrium. 8 Existence of NE. 9 Exercises. (C. Hurtado, UIUC Economics, Game Theory.) Rationalizability. Penalty Kick Game (payoffs are (row, column)):

      l        r
L   4, -4    9, -9
M   6, -6    6, -6
R   9, -9    4, -4

The Penalty Kick Game is one of the most important games in the world. This game has no dominant strategies. We need refinements to solve more games. A Borel game has a mixed strategy Nash equilibrium if its mixed extension is better-reply secure. In applications, better-reply security usually follows from two conditions: one related to reciprocal upper semicontinuity and the other to payoff security. Establishing the payoff security of a game's mixed extension often constitutes a complicated problem. The concept of uniform payoff ...
Math: How to Easily Find a Nash Equilibrium in Game Theory
Use our online game theory calculator to identify the unique Nash equilibrium in pure strategies and mixed strategies for a particular game. Enter the details for Player 1 and Player 2 and submit to see the results. Economists call this theory game theory, whereas psychologists call it the theory of social situations. This Nash equilibrium calculator will be a ...
As a result, in pure strategies the equilibria are (L,L) and (R,R) and, in mixed strategies, q = 4/7 while p can take any value between 0 and 1. Hence, there exist infinitely many Nash equilibria (p just has to obey the fundamental laws of probability).
A mixed strategy Nash equilibrium is a mixed strategy profile with the property that no single player can obtain a higher value of expected utility by deviating unilaterally from this profile.
... mixed strategy NEs for 2-person games with 2x2 action sets. • In general, there is no poly-time algorithm known for computing Nash equilibria of arbitrary games.
There is no optimal deterministic strategy in matching pennies. Idea: confuse the opponent by playing randomly. Define a strategy $s_i$ for agent $i$ as ...
A mixed strategy profile induces a probability distribution or lottery over the possible outcomes of the game. A (mixed strategy) Nash equilibrium is a strategy profile with the property that no single player can, by deviating unilaterally to another strategy, induce a lottery that he or she finds strictly preferable. In 1950 the mathematician John Nash proved that every game with a finite set of players and a finite set of strategies has a mixed strategy Nash equilibrium.
This paper introduces Hermite's polynomials in the description of quantum games. Hermite's polynomials are associated with the Gaussian probability density; the Gaussian probability density represents minimum dispersion. I introduce the concept of minimum entropy as a paradigm of both Nash's equilibrium (maximum utility, MU) and Hayek equilibrium (minimum entropy, ME).
Mixed Strategy Nash Equilibrium. Matt Golder, Pennsylvania State University. Nash equilibrium: a Nash equilibrium of a strategic game is an action profile in which every player's action is optimal given every other player's action. Such a profile represents a steady state: every player's behavior is the same whenever she plays the game, and no player wishes to change her behavior. More generally, one can choose any of the Nash equilibria, including one in a mixed strategy. Every choice of equilibrium leads to a different subgame-perfect Nash equilibrium in the original game. By varying the Nash equilibrium for the subgames at hand, one can compute all subgame perfect Nash equilibria. A subgame-perfect Nash equilibrium is a Nash equilibrium because the entire game is also a subgame. Nash Equilibrium in Mixed Strategies. Mixed Strategy Equilibrium. In many games players choose unique actions from the set of available actions. These are called pure strategies. In some situations, though, a player may want to randomise over several actions. If a player is choosing which action to play randomly, we say that the player is using a mixed strategy as opposed to a pure strategy. Mixed strategies: Nash equilibrium computation; interpretations of mixed strategies. Computation of mixed strategy NE: hard if the support is not known; if you can guess the support, it becomes very easy, using the property shown earlier. Proposition: for any (mixed) strategy $s_{-i}$, if $s_i$ is a best response to $s_{-i}$, then $u_i(a_i, s_{-i})$ is the same for all $a_i$ such that $s_i(a_i) > 0$ (i.e., all $a_i$ in the support of $s_i$).
In a mixed strategy Nash equilibrium, at least one of the players plays multiple strategies with positive probability. This mixed strategy leaves the opponent indifferent to playing his pure strategies. (When there are more than two strategies, this gets a little more complicated: it may be that the mixed strategy leaves the other player indifferent between playing only two of his strategies.) Lecture 4: Normal form games: mixed strategies and Nash equilibrium. Dominated mixed strategies. Recall: a strictly dominated pure strategy cannot play a part in a Nash equilibrium! But: a mixed strategy can be dominated by a pure strategy even if all strategies in its support are undominated. Consider the game below (payoffs are (row, column); this layout is reconstructed from a flattened table):

      L       M       R
T   3, 8    0, 0    1, 5
B   0, 0    3, 8    1, 5

Neither the pure strategy L nor M is strictly dominated by R. The mixed strategy $\frac12 L + \frac12 M$, however, is strictly dominated by R: it gives the column player an expected payoff of 4 against both T and B, while R gives 5. We'll now see explicitly how to find the set of (mixed-strategy) Nash equilibria for general two-player games where each player has a strategy space containing two actions (i.e. a 2×2 matrix game). We first compute the best-response correspondence for a player. We partition the possibilities into three cases: the player is completely indifferent; she has a dominant strategy; or, most ...
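A quick numerical check of that dominance claim, under the table layout reconstructed above (which is an assumption about the original formatting):

```python
import numpy as np

# Column player's payoffs: against T: L=8, M=0, R=5; against B: L=0, M=8, R=5.
col_payoffs = np.array([[8.0, 0.0, 5.0],   # when the row player plays T
                        [0.0, 8.0, 5.0]])  # when the row player plays B

mix_LM = np.array([0.5, 0.5, 0.0])   # the mixed strategy (1/2)L + (1/2)M
pure_R = np.array([0.0, 0.0, 1.0])

print(col_payoffs @ mix_LM)   # [4. 4.] expected payoff vs T and vs B
print(col_payoffs @ pure_R)   # [5. 5.] strictly higher in both cases
```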
Mixed strategy Nash equilibrium • A mixed strategy is one in which a player plays his available pure strategies with certain probabilities. • A strictly mixed strategy Nash equilibrium in a 2 player, 2 choice (2x2) game is a p > 0 and a q > 0 such that p is a best response by the row player to the column player's choices, and q is the best response by the column player to the row player's choices. Theorem 1 (Nash, 1951): there exists a mixed Nash equilibrium. Here is a short self-contained proof. We will define a function Φ over the space of mixed strategy profiles. We will argue that that space is compact and that Φ is continuous; hence the sequence defined by σ(0) arbitrary, σ(n) = Φ(σ(n−1)), has an accumulation point. We will argue that every fixed point of Φ must be a Nash equilibrium. A mixed-strategy Nash equilibrium is a strategy set with the property that at least one player is playing a randomized strategy and no player can obtain a higher expected payoff by deviating unilaterally and playing an alternate strategy. In cases such as game 2, instead of choosing a single strategy, players can instead choose probability distributions over the set of strategies available to them.
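One concrete choice of such a map Φ is the "gain" map from Nash's 1951 paper, sketched below for a bimatrix game. The sketch only verifies the fixed-point property at a known equilibrium; iterating Φ is not guaranteed to converge, so this is an illustration of the map, not an equilibrium-finding algorithm:

```python
import numpy as np

def nash_map(p, q, A, B):
    """Nash's gain map: shift probability toward pure strategies that would gain
    against the current profile; fixed points are exactly the Nash equilibria."""
    u1, u2 = p @ A @ q, p @ B @ q
    gain1 = np.maximum(0.0, A @ q - u1)   # gain of each pure row strategy vs q
    gain2 = np.maximum(0.0, p @ B - u2)   # gain of each pure column strategy vs p
    return (p + gain1) / (1 + gain1.sum()), (q + gain2) / (1 + gain2.sum())

# Matching pennies: the uniform profile is a Nash equilibrium, hence a fixed point.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
p = q = np.array([0.5, 0.5])
print(nash_map(p, q, A, B))   # returns ([0.5, 0.5], [0.5, 0.5]) unchanged
```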
Nash equilibrium states that nothing is gained if any of the players change their strategy while all other players maintain theirs. A dominant strategy asserts that a player will choose a strategy that is best regardless of what the other players do. Mixed-strategy equilibria in the Nash Demand Game: approximations. In this respect, this paper is closer in spirit, for example, to Baye et al. (1996a, b), who study the mixed-strategy equilibrium of a continuous strategy-space game as the limit of games with finite strategy sets, thereby deducing properties of the limiting equilibrium from properties of the finite games. Finally, Nash's ...
Mixed Strategies - GitHub Pages
Bayesian Nash equilibrium. Felix Munoz-Garcia, Strategy and Game Theory, Washington State University. So far we assumed that all players knew all the relevant details of the game; hence, we analyzed complete-information games. Examples: firms competing in a market observe each other's production costs; a potential entrant knows the exact demand that it faces upon entry; etc. But this assumption is not always realistic.
The Nash equilibria are the points in the intersection of the graphs of A's and B's best-response correspondences. We know that a mixed-strategy profile (p, q) is a Nash equilibrium if and only if (1) p is a best response by A to B's choice q, and (2) q is a best response by B to A's choice p. We see from (1) that the first ...
First we discuss the payoff to a mixed strategy, pointing out that it must be a weighted average of the payoffs to the pure strategies used in the mix. We note a consequence of this: if a mixed strategy is a best response, then all the pure strategies in the mix must themselves be best responses and hence yield the same payoff. We use this idea to find mixed-strategy Nash equilibria in a game within a ...
Mixed strategies need to be analysed in game theory when there are many possible equilibria, which is especially the case for coordination games. The battle of the sexes is a common example of a coordination game where two pure Nash equilibria appear, meaning that neither equilibrium can be singled out without some form of coordination. In the battle of the sexes, a couple argues over what to do over the weekend.
Nash equilibrium has long been a desired solution concept in multi-player games, especially for those on continuous strategy spaces, which have attracted a rapidly growing amount of interest due to advances in research applications such as generative adversarial networks. Despite the fact that several deep learning based approaches are designed to obtain pure strategy Nash equilibria, it ...
Then both Up versus Left as well as Down versus Left are pure Nash equilibria, and every value of p between 0 and 1 would produce a mixed strategy for Ann that would form a Nash equilibrium with Left. Therefore we would have infinitely many mixed Nash equilibria, with the two pure ones as extreme cases. The other cases are similar. So ordinarily we would have at most one mixed Nash equilibrium.
I am looking for tools/software/APIs that will allow me to automatically calculate mixed-strategy Nash equilibria for repeated games. I am not looking for trivial solutions to 2x2 games. Thus, the Nash equilibrium has a steady-state quality, in that no one wants to change his or her own strategy given the play of others. Second, other potential outcomes don't have that property: if an outcome is not a Nash equilibrium, then at least one player has an incentive to change what he or she is doing. Outcomes that aren't Nash equilibria involve mistakes for at least one player. Mixed strategies: so far we have considered only pure strategies, and players' best responses to deterministic beliefs. Now we will allow mixed or random strategies, as well as best responses to probabilistic beliefs. Many games have no pure strategy Nash equilibrium, but we will discuss why every finite game has a mixed strategy Nash equilibrium. Bayesian Nash equilibrium for the first price auction: it is a Bayesian Nash equilibrium for every bidder to follow the strategy $b(v) = v - \frac{\int_0^v F(x)^{n-1}\,dx}{F(v)^{n-1}}$ for the first price auction with i.i.d. private values. (Obara, UCLA, Bayesian Nash Equilibrium, February 1, 2012.)
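A numerical sanity check of the bid function as reconstructed above, assuming i.i.d. Uniform[0,1] values (for which $F(x) = x$ and the closed form is $b(v) = v(n-1)/n$):

```python
from scipy.integrate import quad

def bid(v, n, F=lambda x: x):
    """Symmetric first-price-auction bid: b(v) = v - (integral of F^(n-1)) / F(v)^(n-1)."""
    integral, _ = quad(lambda x: F(x) ** (n - 1), 0.0, v)
    return v - integral / F(v) ** (n - 1)

n = 4
for v in (0.25, 0.5, 0.9):
    print(v, bid(v, n), v * (n - 1) / n)   # the two computed columns agree
```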
Mixed strategy Nash Equilibrium Example 1 Part 1 - YouTube
Mixed strategy Nash equilibrium (Tadelis: Chapter 6). First, note that if a player plays more than one strategy with strictly positive probability, then he must be indifferent between the strategies he plays with strictly positive probability. Notation: non-degenerate mixed strategies denotes a set of strategies that a player plays with strictly positive probability, whereas degenerate mixed strategies place probability 1 on a single pure strategy.
In this paper we consider strong Nash equilibria, in mixed strategies, for finite games. Any strong Nash equilibrium outcome is Pareto efficient for each coalition. First, we analyze the two-player setting. Our main result, in its simplest form, states that if a game has a strong Nash equilibrium with full support (that is, both players randomize among all pure strategies), then the game is ...
... the Nash equilibrium, which we encountered for pure strategies, carries over automatically and almost entirely. Nash's celebrated theorem shows that, under very general circumstances (which are broad enough to cover all the games that we meet in this book and many more besides), a Nash equilibrium in mixed strategies exists.
A mixed strategy profile $\sigma^*$ is a mixed strategy Nash equilibrium if and only if, for each player $i$, the following two conditions are satisfied: every pure strategy which is given positive probability by $\sigma_i^*$ yields the same expected payoff against $\sigma_{-i}^*$; and every pure strategy which is given probability zero by $\sigma_i^*$ yields no higher expected payoff against $\sigma_{-i}^*$.
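These two conditions are straightforward to test numerically for one player of a bimatrix game. A sketch, using the battle-of-the-sexes equilibrium from earlier as the test case:

```python
import numpy as np

def check_row_support_conditions(p, q, A, tol=1e-9):
    """True if every pure row strategy in the support of p earns the common
    payoff against q, and strategies outside the support earn no more."""
    payoffs = A @ q                       # payoff of each pure row strategy vs q
    support = p > tol
    u = payoffs[support]
    on_support_equal = np.allclose(u, u[0], atol=tol)
    off_support_no_better = np.all(payoffs[~support] <= u[0] + tol)
    return on_support_equal and off_support_no_better

A = np.array([[2.0, 0.0], [0.0, 1.0]])    # row player's battle-of-the-sexes payoffs
print(check_row_support_conditions(np.array([2/3, 1/3]), np.array([1/3, 2/3]), A))
# True
```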
Dominant strategies: Nash equilibrium is a term used in game theory to describe an equilibrium where each player's strategy is optimal given the strategies of all other players. A Nash equilibrium exists when there is no unilateral profitable deviation from any of the players involved. In other words, no player in the game would take a different action as long as every other player keeps to his or her strategy.
Actually we will show below that Game 2, if mixed strategies are allowed, has three mixed Nash equilibria: in the first, Ann chooses Up with probability 2/3 and Beth chooses Left with probability 2/3, having the same ... In the second one, Ann chooses Up and Beth chooses Right. Payoffs are 10 ...
Applying Nash Equilibrium to Rock, Paper, and Scissors
I know from the theory that at least one mixed strategy Nash equilibrium exists. Can someone please tell me how to find one of those equilibrium points by numerical simulation? I cannot find in the book any explanation of how to simulate; I just need the basic direction. I have asked this question on math.stackexchange as well, but I posted it here too. There are three Nash equilibria to the game: two pure-strategy equilibria and one mixed-strategy equilibrium. (Strategies in pure strategy equilibria are played with probability 1 or zero; strategies in mixed-strategy equilibria are played with probabilities less than one but greater than zero.) The two pure strategy equilibria are mirrors of each other: one of the boys pairs with the blonde.
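One standard answer to the simulation question above is fictitious play: each player best-responds to the opponent's empirical action frequencies. In two-player zero-sum games such as rock-paper-scissors, the empirical frequencies are known to converge to a mixed equilibrium. A minimal sketch:

```python
import numpy as np

# Rock-paper-scissors, row player's payoffs; the column player gets -A (zero-sum).
A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]], dtype=float)

counts_row = np.ones(3)   # pseudo-counts of 1 avoid division by zero at the start
counts_col = np.ones(3)
for _ in range(100_000):
    best_row = np.argmax(A @ (counts_col / counts_col.sum()))
    best_col = np.argmax(-(counts_row / counts_row.sum()) @ A)
    counts_row[best_row] += 1
    counts_col[best_col] += 1

print(counts_row / counts_row.sum())   # both approach [1/3, 1/3, 1/3]
print(counts_col / counts_col.sum())
```

In general (non-zero-sum) games fictitious play need not converge, so this is a heuristic rather than a guaranteed method.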
Game Theory - pi.math.cornell.edu
Mixed strategy equilibria (msNE) with N players. Ana Espinola-Arredondo, Week 6. Summarizing... We learned how to find msNE in games: with 2 players, each with 2 available strategies (2x2 matrix), e.g., the matching pennies game, the battle of the sexes, etc.; and with 2 players, but each having 3 available strategies (3x3 matrix), e.g., the tennis game (which actually reduced to a 2x2 matrix after deleting dominated strategies). Example of finding a Nash equilibrium using the dominant strategy method: we can first look at the row player's payoffs to see that if column chooses high, it is in row's best interest to choose high because 1 > -2, and if column chooses low, row will also choose high because 6 > 3. Lecture 4: Mixed Strategies & Mixed Nash Equilibria (March 8, 2011). Summary: the ability for players to randomize their choices gives mixed strategies, in contrast to the pure strategies we have considered previously. To analyze mixed strategies we introduce a stronger assumption on players' preferences. In a later lecture we will prove that a Nash equilibrium in mixed strategies (mixed Nash equilibrium) always exists.
Nash Equilibrium Strategies of Game Theory - Microeconomics
The trick for finding a mixed strategy Nash equilibrium is that, given everyone else's strategies, all players will be indifferent between each of the options they are randomizing over (i.e., those options will yield the same payoff). So all you need to do is write an expression relating each player's expected payoffs for each strategy, and solve for the frequencies. Letting x represent the ...
In this chapter, we introduce the notions of a mixed strategy and a mixed strategy Nash equilibrium. We state and prove a crucial and useful theorem that provides necessary and sufficient conditions for a mixed strategy profile to be a mixed strategy Nash equilibrium. We present several examples that help gain an intuitive understanding of this important notion. We next discuss the notions of ...
... Lemma 38. To illustrate the use of this result, let us return to the beauty contest game discussed in Example 2 of Chapter 1 and Example 10 in Chapter 4. We explained there that (1,...,1) is a Nash equilibrium. Now we can draw a stronger conclusion.
The mixed strategy Nash equilibrium is p = 10/11, q = 5/7. 9. Consider a bargaining game (player 1 chooses the row, player 2 the column; payoffs are (1, 2)):

        Yes     No
High   1, 4   0, 0
Low    4, 1   0, 0

Find all pure strategy Nash equilibria. Solution: suppose 1 chooses Low; then the best response of 2 will be to choose Yes. Now consider the other way round: if 2 chooses Yes, then 1's best response will be Low. So neither of the two would want to deviate.
Nash Equilibria in Mixed Strategies. LaTeX file: mixednashmathematica-nb-all © Daniel A. Graham <[email protected]>, June 22, 2005. Rock-Paper-Scissors: since the game is symmetric, we'll solve for the probabilities that player 2 (column chooser) must use to make player 1 (row chooser) indifferent. The probabilities that player 1 must use to ...
Mixed-Strategy Equilibrium • Reading: Osborne, Chapter 4.1 to 4.10. • By the end of this week you should be able to: find a mixed strategy Nash equilibrium of a game, and explain why mixed strategies can be important in applications. Example: Matching Pennies (Player 1 chooses the row, Player 2 the column):

        Head     Tail
Head   1, -1   -1, 1
Tail   -1, 1    1, -1

• Matching pennies does not have a Nash equilibrium in pure strategies.
The Basics of Game Theory: Mixed Strategy Equilibria and
What are the general rules of mixed strategy Nash equilibria? Each player chooses a mix of pure strategies so as to make every other player indifferent between any mix of the pure strategies that appear in their own mixed strategy; at equilibrium, no player would want to change their mixed strategy choice if they knew the other players' choices. What is the row player's formula to calculate equilibria? ... Mixed Strategy Nash Equilibrium. Sanjay Singh, Department of Information and Communication Technology, Manipal Institute of Technology, MAHE, Manipal-576104, India; Centre for Artificial and Machine Intelligence (CAMI), MAHE, April 10, 2021. A strategy profile with an outcome which is simultaneously the smallest number in its row and the ... Finding Mixed Strategy Nash Equilibrium for Continuous Games through Deep Learning (Zehao Dou et al., 10/26/2019): Nash equilibrium has long been a desired solution concept in multi-player games, especially for those on continuous strategy spaces, which have attracted a rapidly growing amount of interest due to advances in research applications such as generative adversarial networks. As nicely pointed out by Laura Madsen, for me to want to mix you have to make me indifferent. Let's analyse it a bit. Suppose player 2 plays a random mixed strategy which is not making player 1 indifferent. In such a condition, for sure, one ...
Mixed-strategy Nash equilibrium for a discontinuous symmetric N-player game
... not necessarily select purely mixed strategies at Nash equilibrium, i.e., we use decision trees to address the strategy selection task. In the final section, we discuss our results and describe the development of a comprehensive approach to determining all types of equilibria in signaling games as a direction for future research. 2. Preliminaries. This section outlines notation and ... Homework request: solve for the mixed strategy Nash equilibrium of a 2x2 game in which player 1 chooses U or D and player 2 chooses L or R (the flattened payoff matrix in the original is too garbled to reconstruct reliably); the stated answer has player 1 playing U with probability 3/4 and D with probability 1/4. Mixed-strategy Nash equilibrium (MSNE) is a common solution concept employed in many theoretical and applied-theory articles in economics, management, and other disciplines. In a pure-strategy Nash equilibrium, each player chooses an action, and the actions constitute an equilibrium if, given the equilibrium actions of the other players, no player finds it beneficial to deviate. This game has two pure-strategy Nash equilibria and one mixed-strategy Nash equilibrium. How to find the mixed-strategy Nash equilibrium? Example (Husband chooses the row, Wife the column):

           Opera   Football
Opera      2, 1    0, 0
Football   0, 0    1, 2

Finding mixed-strategy equilibria: generally it's tricky to compute mixed-strategy equilibria, but easy if we can identify the support. Consider a 2×3 matrix for a mixed extended game. The set of Nash equilibria (red in the demonstration) is determined by the intersection of the graphs of the best-response mappings of the blue and green players. Sliders define the elements of the 2×3 matrices and the opacity of the players' graphs. First, mixed strategies of the players are used for the graphical representation of the set of ...
Strategy (game theory) - Wikipedia
Mixed Strategy Nash Equilibrium. Many games, such as the 3-player instance of the Hotelling game from a few lectures ago, do not have pure strategy Nash equilibria, so we must consider a more general type of equilibrium, the mixed strategy Nash equilibrium (mixed Nash). Here, instead of selecting a single strategy $s_i \in S_i$, player $i$ selects a probability distribution $\sigma_i$ over $S_i$. We denote the ... A mutual best reply, and therefore a kind of Nash equilibrium: it's a Nash equilibrium in random strategies, and this is what is called a mixed strategy equilibrium. Okay, to sum up this observed behavior: equal mixing of rock, paper, and scissors is a Nash equilibrium in random strategies, and it's called a mixed strategy equilibrium. I thought someone really ought to post an explanation about mixed strategy Nash equilibria. Then I figured that that someone may as well be me. I will assume readers are familiar with the concepts of a game (a setting with several players, each having a choice of strategies to take and a payoff which depends on the strategies taken by all players) and of a Nash equilibrium (an optimal ...)
Mixed Strategies and Nash Equilibrium - Noncooperative
A Nash equilibrium (NE) is a profile of strategies such that each player's strategy is an optimal response to the other players' strategies. Definition 3. A mixed-strategy profile $\sigma^*$ is a Nash equilibrium if, for each $i$ and for all $\sigma_i' \ne \sigma_i^*$, $u_i(\sigma_i^*, \sigma_{-i}^*) \ge u_i(\sigma_i', \sigma_{-i}^*)$. A pure-strategy Nash equilibrium is a pure-strategy profile ... Nash Equilibria Overview. This tutorial shows how to find stable equilibria in asymmetric games. It assumes that you have already completed the Stable Strategies tutorial for symmetric games and have a basic understanding of asymmetric games, from starting either the Conflict II or Parental Care tutorial. If you work through all the example problems in detail, this tutorial should take about ... By default, the program computes all pure-strategy Nash equilibria in an extensive game. This switch instructs the program to find only pure-strategy Nash equilibria which are subgame perfect. (This has no effect for strategic games, since there are no proper subgames of a strategic game.) -h: prints a help message listing the available options. -q: suppresses printing of the banner at startup. The behaviour of players in games with a mixed strategy Nash equilibrium: evidence from a stylised poker experiment. Department of Economics, University of Essex. Katherine Drakeford, Registration Number: 0919758, email: [email protected]. Abstract: This paper analyses and compares the strategies chosen by experienced and inexperienced card players, participating in a simplified construction of ...
Teaching Mixed Strategy Nash Equilibrium to Undergraduates
Nash equilibrium: intuitively, a Nash equilibrium is a stable strategy profile: no agent would want to change his strategy if he knew what strategies the other agents were following. This is because in a Nash equilibrium all of the agents simultaneously play best responses to each other's strategies. 2. Proving the existence of Nash equilibria. Mixed-strategy Nash equilibrium (MSNE) is a commonly-used solution concept in game-theoretic models in various fields in economics, management, and other disciplines, but the experimental evidence on whether the MSNE predicts actual play in games well is mixed. Consequently, evidence for naturally-occurring games in which the MSNE predicts the outcome well is of great importance, as it can justify ... 4 Mixed Strategy Equilibrium: 4.1 Introduction; 4.2 Strategic games in which players may randomize; 4.3 Mixed strategy Nash equilibrium; 4.4 Dominated actions; 4.5 Pure equilibria when randomization is allowed; 4.6 Illustration: expert diagnosis; 4.7 Equilibrium in a single population; 4.8 Illustration: reporting a crime
Nash equilibrium is a game theory concept that determines the optimal solution in a non-cooperative game in which each player lacks any incentive to change his or her initial strategy. (Game theory is a mathematical framework developed to address problems with conflicting or cooperating parties who are able to make rational decisions.) Under the Nash equilibrium, a player does not gain by deviating unilaterally. Nash Equilibrium: • A game consists of ... • Edgeworth (1897), capacity constraints: neither firm can meet the entire market demand, but each can meet half of it; constant MC up to a point, then decreasing returns. Under these conditions, an Edgeworth cycle arises: prices fluctuate between high and low. If firms are capacity constrained, then a mixed strategy equilibrium results. Finding Nash Equilibria: the best response method. When a game does not have any dominant or dominated strategies, or when the iterated deletion of dominated strategies does not yield a unique outcome, we find equilibria using the best reply method. Note that this method will always find all of the Nash equilibria (in pure strategies; we'll learn about mixed strategies later) even if the game ... FTRL dynamics treat Nash equilibria in mixed (i.e., randomized) vs. pure strategies differently. For the case of mixed Nash equilibria, we establish a sweeping negative result to the effect that the notion of mixed Nash equilibrium is antithetical to no-regret learning. More precisely, we show that any Nash equilibrium which is not strict (in the sense that every player has a unique best response) cannot be stable.
Journal of Agricultural and Applied Economics
WILLINGNESS TO RENT PUBLIC LAND FOR ROTATIONAL GRAZING: THE IMPORTANCE OF RESPONSE BEHAVIOR
2. Stated Preferences, Rotational Grazing on Public Land, and Survey Response Behavior
3. The Ex Ante Land Rental Decision
4. Mail Survey of Beef Cattle Producers
5. Study Variables and Estimating Equations
6. Empirical Analysis and Results
7. Discussion and Implications
Journal of Agricultural and Applied Economics, Volume 51, Issue 1
February 2019 , pp. 27-48
DANIEL F. MOONEY (a1), COURTNEY BOLINSON (a2) and BRADFORD L. BARHAM (a3)
Department of Agricultural and Resource Economics, Colorado State University, Fort Collins, Colorado
Robinson Evaluation, Madison, Wisconsin
Department of Agricultural and Applied Economics, University of Wisconsin-Madison, Madison, Wisconsin
Copyright: © The Author(s) 2018
This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
DOI: https://doi.org/10.1017/aae.2018.19
Table 1. Descriptive Statistics for Variables in the Selection Equation by Response Statusa
Table 2. Descriptive Statistics for Variables in the Outcome Equationa
Table 3. Regression Results for Independent Estimation of the Selection Equation (N = 611)
Table 4. Regression Results with and without Nonrandom Selection, Pooled Contingent Valuation Scenarios
Table 5. Regression Results with Nonrandom Selection, by Grass and Shrub Contingent Valuation (CV) Scenario
Table 6. Willingness-to-Pay (WTP) to Rent Public Land for Rotational Grazing by Contingent Valuation Scenario ($/acre)
Ex ante analyses of agricultural practices often examine stated preference data, yet response behavior as a potential source of bias is often disregarded. We use survey data to estimate producers' willingness to rent public land for rotational grazing in Wisconsin and combine it with information on nonrespondents to control for nonresponse and avidity effects. Previous experience with managed grazing and rental decisions influenced who responded as well as their rental intentions. These effects do not produce discernible bias but still encourage attention to this possibility in other ex ante contexts. Land rental determinants and willingness-to-pay estimates are also related to grazing initiatives.
The authors thank the U.S. Department of Agriculture (USDA), National Agricultural Statistics Service regional field office in Des Moines, Iowa, for data support. They also thank Tom Cox, Brian Gould, Randy Jackson, Laura Paine, and Mark Rickenbach for helpful comments on this research and acknowledge the USDA National Institute of Food and Agriculture Hatch Project WIS01785, USDA North Central Region Sustainable Agriculture Research and Education Project GNC15-209, and U.S. Department of Energy (DOE) Great Lakes Bioenergy Research Center (DOE Office of Science BER DEFC02–07ER64494) for financial support.
Grazing has been widely practiced on public land in the United States for more than a century. For example, the U.S. Bureau of Land Management presides over 245 million acres of land and authorizes livestock grazing on nearly two-thirds of that area (U.S. Bureau of Land Management, 2016). Since the Taylor Grazing Act of 1934, private ranchers have paid to graze public lands, but certain aspects remain contentious. Historically, user fees have not covered the full financial costs of program management (Gardner, 1997). In addition, these programs possibly affect wild animal populations that compete with cattle for habitat or are perceived as a threat (Frisina and Morin, 1991; Whittlesey, Huffaker, and Butcher, 1993). Despite these potential drawbacks, public land stakeholders in other U.S. regions are investigating whether renting public land to private livestock producers for management intensive grazing (MIG) practices such as rotational grazing, mob grazing, or cell grazing could serve as a cost-effective tool for maintaining a stock of productive and diverse public grassland ecosystems (Melkonyan and Taylor, 2013).
From a public land management perspective, MIG practices can complement or replace the need for other costly grassland management tools like burning, mowing, haying, or herbicide that are more traditionally used to counter forest encroachment (Harrington and Kathol, 2009; Sulak and Huntsinger, 2007). The loss of grazing land is a concern because it may pose a threat to regional heritage, rare animal and insect species, upland game habitat, and other cultural and environmental services valued by the public. Disturbances from MIG can also improve soil health and increase species diversity with minimal negative effects on water quality (Barry, 2011; Hubbard, Newton, and Hill, 2004). This aspect is especially important given that shrinking conservation budgets have limited the capacity of public agencies to apply conventional management tools.
Rotational grazing—which divides pastures into paddocks with livestock cycled through them based on forage growth, weather, and other factors (Undersander et al., 2011)—is one example of an MIG practice that has recently gained traction in the U.S. upper Midwest. In Minnesota, initiatives by the U.S. Fish and Wildlife Service, Minnesota Department of Natural Resources (DNR), and the Nature Conservancy permitted grazing on nearly 25,000 acres of public land in 2014 with plans to double the amount in coming years (DeVore, 2014). A similar proposed program in Wisconsin remains in the evaluation phase (Johansen, 2017; Robinson, 2017). In Wisconsin, land access is among the most cited challenges for cattle producers (Kloppenburg et al., 2012). Furthermore, overall pasture area has been in decline since 2007 (U.S. Department of Agriculture [USDA], 2013). At the same time, implementation of rotational grazing on private land is increasing, and there are 15 county-level grazing networks (Robinson, 2017). MIG is particularly prevalent on dairy farms that also frequently manage beef cattle (Brock and Barham, 2009). Previous research on rotational grazing as a public land management tool in Wisconsin is sparse. Research that does exist is primarily qualitative and does not address producer interest in renting public lands (Robinson, 2017).
In this article, we investigate Wisconsin beef cattle producers' willingness to rent public land for rotational grazing.1 Economists often use stated preference methodologies to carry out ex ante analyses of new and emerging agricultural production technologies, management techniques, and land use practices. Examples include applications to genetically modified citrus trees (Singerman and Useche, 2017), bioenergy cropping systems (Mooney, Barham, and Lian, 2015; Skevas et al., 2016; Swinton et al. 2017), prescribed grazing (Jensen et al., 2015), Bt Cotton (Hubbell, Marra, and Carlson, 2000; Qaim and de Janvry, 2003), land and water stewardship practices (Cooper, 1997; Cooper and Keim, 1996; Ma et al., 2012), and rbST (recombinant bovine somatotropin) milk production (Barham, 1996). Specifically, we employ contingent valuation (CV) methods to evaluate the factors influencing producers' stated land rental decisions for two pasture types (grass- and shrub-dominant) and to characterize the distribution of associated willingness-to-pay (WTP) values. In doing so, we make two contributions to this literature. First, we explicitly incorporate the effects of survey response behavior into stated preference work on ex ante technology adoption, which mostly ignores the issue. Second, we find that significant variation in WTP estimates arises across producers and land types based on the same factors that influence response behavior. Combined, these findings highlight the importance of attending to the potential effects of response bias.
As is the case for all voluntary survey research methods, stated preference surveys are vulnerable to response bias (Dalecki, Whitehead, and Blomquist, 1993)—with hypothetical bias, nonresponse, and avidity effects particularly relevant to ex ante analyses of agricultural production topics. This study, however, offers several advantages in terms of respondents' prior experiences with main features of the ex ante rotational grazing land rental decision. Our target population already raises beef cattle and is likely knowledgeable about private land market transactions (Paine and Gildersleeve, 2011). They may also be land constrained and therefore have a stake in the potential availability of public land for grazing purposes (Kloppenburg et al., 2012). In contrast, because only some producers currently utilize MIG methods, there may be differences in the level of familiarity with required inputs and in their interest in renting public land specifically for these purposes. In our case, both previous experience with land rental arrangements and MIG can (and do) turn out to be predictors of who chose to respond, as well as factors that influence stated rental decisions.
The value of transactional and land management experiences is that they help respondents make meaningful statements about their intentions to rent land for rotational grazing. This minimizes the potential for hypothetical bias that can arise in valuation contexts that pertain to unfamiliar environmental goods and services (Carson, Flores, and Meade, 2001). Respondents to our survey should therefore be more likely to offer accurate answers to CV scenarios based on their knowledge, relevance, and ongoing commitment to the activities described in the questions (Kling, Phaneuf, and Zhao, 2012; Whitehead et al., 1995). In contrast, nonresponse becomes a concern if those who choose not to respond differ in key characteristics from those who do respond (Edwards and Anderson, 1987; Groves, 2006). Avidity effects similarly arise if individuals with greater interest in a particular issue are more likely to respond to a survey about an issue than those with less interest (Ethier et al., 2000). Missing data on explanatory variables that are not missing at random or missing completely at random are of similar concern (Little, 1998; Tsikriktsis, 2005). In these cases, the response behavior could result in survey data similar to that drawn from a nonrandom sample selection process that targeted participants with more interest.
Nonrandom selection is a well-known specification issue in econometrics (e.g., Heckman, 1979), and several methods exist to control for sample selection bias in the analysis of stated preference data. The first is to use stratified random sampling methods and apply postsurvey sampling weights. To extend inference from respondents to the general population, constructed weights adjust for over- or underrepresentation within strata relative to a known population frame (Mitchell and Carson, 1989; Peress, 2010). This approach assumes that respondents are representative of population members within their respective strata and therefore only partially addresses the potential for bias (Loomis, 1987). A second method for surveys with multiple waves of respondent contacts, such as those following the Dillman method (Dillman, Smyth, and Christian, 2014), is to use late responders as a proxy for nonresponders (Studer et al., 2013). This approach assumes that nonrandom selection issues are more likely to occur when response rates are low (Edwards and Anderson, 1987). Mitigating strategies to increase response rates include improving the salience of the survey, reducing the length of the survey instrument, conducting multiple survey mailings, or offering incentives. Mixed evidence is available on the effectiveness of this approach, with several researchers finding no support for it (Dalecki, Whitehead, and Blomquist, 1993; Lahaut et al., 2003).
A final method is to compare characteristics of survey respondents with nonrespondents. The primary challenge here is that data on nonrespondents are often unavailable or otherwise costly to collect (Hudson et al., 2004). Some studies in this vein collected data on nonrespondents via postsurvey phone calls (Dalecki, Whitehead, and Blomquist, 1993; Whitehead, Groothuis, and Blomquist, 1993). In other cases, and more commonly found in medical settings, researchers have access to a rich sampling frame or matched data sets that contain supplemental information on all members of the sample (Groves, 2006). When data on both respondents and nonrespondents are available, probit models with nonrandom selection can be used to test and control for selection bias (Van de Ven and Van Praag, 1981; Whitehead, Groothuis, and Blomquist, 1993). Edwards and Anderson (1987) outline this approach for use in CV studies, and Messonnier et al. (2000) and Whitehead and Cherry (2007) apply the approach to environmental and energy choices.
Cooper and Keim (1996) applied a selection model to producer demand for water quality improvements in an ex ante context, but recent efforts generally ignore the issue or rely primarily on postsurvey weighting methods. In this article, we exploit information on respondents and nonrespondents to control for response behavior. Our research strategy involves (1) collection of survey data on producer responses to questions about their willingness to rent public lands for rotational grazing and (2) estimation of a bivariate probit model that jointly estimates producers' willingness to rent public grazing land and their survey response decisions. The econometric model ensures that the identification of factors shaping producer willingness to rent public lands accounts for potential bias from nonresponse or avidity effects. Accurately determining these factors is important because they can help land managers and policy makers to site public grazing programs, design effective outreach to potential participants, and estimate program costs.
Consider a utility-maximizing producer who chooses whether to rent land for rotational grazing. Consistent with utility maximization, the producer is willing to rent land whenever the expected utility gain from switching to the CV state is positive (Haab and McConnell, 2003). A typical approach to estimating binary CV choices is the probit model, where the utility difference between CV and non-CV states is explained by the CV offer price and other factors thought to influence the decision and then transformed into a probability using properties of the cumulative normal distribution function (Hanemann, 1984). Because this model relies on survey data voluntarily provided by real decision makers, we also incorporate the potential effects of response behavior.
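A minimal sketch of that binary-CV probit on simulated data, using statsmodels. A producer says "yes" when the latent utility gain $\beta_0 + \beta_1 \cdot \text{price} + \varepsilon$ is positive, so mean WTP is $-\beta_0/\beta_1$; all numbers below are illustrative, not estimates from the article's survey:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000
price = rng.uniform(5, 60, n)                     # hypothetical offer, $/acre
yes = (1.5 - 0.05 * price + rng.standard_normal(n) > 0).astype(int)

fit = sm.Probit(yes, sm.add_constant(price)).fit(disp=0)
b0, b1 = fit.params
print(fit.params, "mean WTP ~", -b0 / b1)         # close to 1.5 / 0.05 = 30
```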
The decision to respond to a mail survey can similarly be cast in a random utility maximization framework. An assumption implicit in studies that disregard response behavior is that the expected utility of responding is a random variable not explained by factors that also influence the outcome of interest. In the absence of such effects, inferences based on the respondent sample can be extended to the population of interest. However, if the outcome and selection choices are correlated, then avidity or nonresponse behavior can result in nonrandom selection (positive or negative) that potentially leads to biased estimates for the outcome equation. In this case, extrapolating inferences from the sample to the population of interest could result in misleading conclusions about the significance and magnitude of certain factors.
We therefore analyze Wisconsin beef producers' stated land rental decisions using a probit model with nonrandom selection to account for the potential effects of response behavior (Van de Ven and Van Praag, 1981). The bivariate model yields consistent parameter estimates under nonrandom selection and consists of (1) a binary outcome equation that explains producer willingness to rent land for rotational grazing and (2) a binary selection equation that explains survey response behavior. The model we employ is the following:
(1)
$$\begin{aligned}
RENT_i &= \begin{cases} 1 & \text{if } Y_i^* > 0 \\ 0 & \text{if } Y_i^* \le 0 \end{cases} \quad \text{where } Y_i^* = \mathbf{X}_i \boldsymbol{\beta} + u_{1i}\\
RESPOND_i &= \begin{cases} 1 & \text{if } W_i^* > 0 \\ 0 & \text{if } W_i^* \le 0 \end{cases} \quad \text{where } W_i^* = \mathbf{Z}_i \boldsymbol{\gamma} + u_{2i}\\
\operatorname{corr}(u_1, u_2) &= \rho
\end{aligned}$$
where RENT is the dependent variable in the outcome equation; RESPOND is the dependent variable in the selection equation; the latent variables $Y_i^*$ and $W_i^*$ represent the expected utility gain from the land rental and survey response decisions, respectively; β and γ are parameters to be estimated; and $u_{1i}$ and $u_{2i}$ are error terms assumed to be normally distributed. The model permits a test of ρ = 0 to evaluate whether estimation of an ordinary probit without selection would yield biased coefficient estimates. This strategy allows us to control for observed factors that may shape response behavior, as captured in Z, as well as unobserved ones that could be correlated with the land rental decision. The unobserved factors are accounted for by the correlation of error terms across the two equations. With effective observed control variables, the selection term ρ may or may not prove significant in the joint bivariate estimation.
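To make the estimation concrete, the following is a minimal sketch of the log-likelihood underlying a probit with nonrandom selection: nonrespondents contribute $\Phi(-\mathbf{Z}_i\boldsymbol{\gamma})$, while respondents contribute bivariate normal probabilities that link the two equations through ρ. This is an illustrative implementation, not the authors' estimation code, and all data objects are hypothetical.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize

def biv_cdf(a, b, rho):
    """Bivariate standard-normal CDF evaluated row-wise."""
    mvn = multivariate_normal(mean=[0.0, 0.0],
                              cov=[[1.0, rho], [rho, 1.0]])
    return mvn.cdf(np.column_stack([a, b]))

def neg_loglik(theta, X, Z, rent, respond):
    """Negative log-likelihood of the probit model with selection.
    rent is only meaningful where respond == 1."""
    kx = X.shape[1]
    beta, gamma = theta[:kx], theta[kx:-1]
    rho = np.tanh(theta[-1])                  # keeps rho inside (-1, 1)
    xb, zg = X @ beta, Z @ gamma
    nr = respond == 0                         # nonrespondents
    r1 = (respond == 1) & (rent == 1)         # respond and agree to rent
    r0 = (respond == 1) & (rent == 0)         # respond and decline
    ll = np.log(norm.cdf(-zg[nr])).sum()
    ll += np.log(biv_cdf(xb[r1], zg[r1], rho)).sum()
    ll += np.log(biv_cdf(-xb[r0], zg[r0], -rho)).sum()
    return -ll

# Estimation with hypothetical design matrices X, Z and 0/1 outcomes:
# theta0 = np.zeros(X.shape[1] + Z.shape[1] + 1)
# fit = minimize(neg_loglik, theta0, args=(X, Z, rent, respond))
```

The test of ρ = 0 described above then amounts to a Wald or likelihood ratio test on the last element of the parameter vector.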
Covariates X and Z include factors hypothesized to influence the land rental and survey response decisions, respectively, with specific variables to be included drawn from related CV studies and the agricultural technology adoption literature. Such factors typically include measures of farm size or scale of operation, demographic characteristics such as age and education level, current management practices, attitudinal measures, and farm specialization (e.g., income source) measures. These factors are applicable to the adoption of rotational grazing practices and have been included in previous analyses of MIG strategies on beef and dairy farms (Brock and Barham, 2009; Foltz and Lang, 2005; Gillespie et al., 2008; Jensen et al., 2015; Kim, Gillespie, and Paudel, 2005). Descriptions of the specific variables used to control for these factors in our analysis are introduced in Section 5.
Information on producer willingness to rent public land for rotational grazing comes from a 2016 mail survey of Wisconsin beef cattle producers. The 2012 U.S. Census of Agriculture listed this population at 18,433 producers, and sample construction was based on a confidential list frame maintained by the USDA National Agricultural Statistics Service (NASS) regional field office in Des Moines, Iowa. The sampling procedure used a stratified design based on herd size and MIG practices. We defined seven herd size strata: less than 20 head, 20–49 head, 50–99 head, 100–199 head, 200–499 head, 500–999 head, and more than 1,000 head. Designation as an MIG or non-MIG producer was based on whether producers used MIG practices as indicated in the 2012 U.S. Census of Agriculture (USDA-NASS, 2013). The final sample consisted of 1,172 producers, of whom 22% were MIG practitioners.
This analysis uses 105 observations from active producers who returned a completed survey, for an effective response rate of 16% after removing returns from ineligible members of the population. We suspect that the relatively low response rate stemmed from a combination of disinterest in grassland rental among non-MIG farmers and the inability to implement a third round of survey mailing because of institutional constraints. Among respondents, 59% checked the MIG box in the census, which means that MIG producers were more than three times as likely to respond to our survey as non-MIG producers. The low overall response rate combined with the marked avidity effect for MIG producers affirms the need to consider nonrandom selection in our analysis.
Stated land rental intentions come from a CV module in the study questionnaire. It asked respondents whether they would rent public land for rotational grazing at a given land rental offer price. The module included an introduction, a summary of grazing conditions, a list of required inputs to be supplied by the producer, and the contract length. Although rental decisions are complex and require the producer to consider a variety of factors, including distance from the farm and infrastructure availability, respondents received sufficient detail to form an expectation of profitability and other related trade-offs for their decision. We piloted the survey instrument with grazing experts and beef cattle producers.
Each questionnaire included two CV scenarios for rotational grazing: one for grass-dominant and another for shrub-dominant land types. The scenarios were posed to respondents independently such that they could choose to participate in one, both, or neither program. Respondents indicated their willingness to rent public land by agreeing to the given CV grazing scenario. Three questionnaire versions were used that varied only in terms of rental rate offer price. Each version had either low, medium, or high offer prices assigned at random across the sample. The offer prices ranged from a low of $10/acre in both scenarios to a high of $40/acre in the grass scenario and $30/acre in the shrub scenario and were determined through conversations with cattle producers and grazing professionals in Wisconsin. The decision to rent was voluntary and elicited via dichotomous choice questions.
We estimate the probit model with and without selection for the grass and shrub CV scenarios separately, as well as for the pooled CV scenarios using maximum likelihood. The empirical specification is
(2)
$$\begin{aligned}
\Pr(RENT) &= \Phi\big(\beta_0 + \beta_1 PRICE + \beta_2 HERDSIZE + \beta_3 MIG + \beta_4 PASTURE\\
&\quad + \beta_5 INCFARM + \beta_6 RENTHIST + \beta_7 AGE + \beta_8 DIVERSE + \beta_9 ATTITUDE + u_1\big)\\
\Pr(RESPOND) &= \Phi\big(\gamma_0 + \gamma_1 HERDSIZE + \gamma_2 MIG + \gamma_3 PASTURE\\
&\quad + \gamma_4 INCFARM + \gamma_5 RENTHIST + \gamma_6 AGE + u_2\big),
\end{aligned}$$
where Φ is the cumulative distribution function of the standard normal distribution, and the expressions inside the parentheses determine the latent index variables $Y_i^*$ and $W_i^*$ defined in equation (1), respectively. In addition to controlling for the survey response decision, we also evaluated whether dropping observations because of missing values for explanatory variables might affect our estimates but did not find support for this possibility.2
Median predicted values of WTP are recovered using the estimated coefficients from equation (2) as follows (Haab and McConnell, 2003):
(3)
$$\begin{aligned}
WTP = \big[\beta_0 &+ \beta_2 \overline{HERDSIZE} + \beta_3 \overline{MIG} + \beta_4 \overline{PASTURE} + \beta_5 \overline{INCFARM}\\
&+ \beta_6 \overline{RENTHIST} + \beta_7 \overline{AGE} + \beta_8 \overline{DIVERSE} + \beta_9 \overline{ATTITUDE}\big] \big/ \beta_1,
\end{aligned}$$
where the bar above a variable name denotes its sample average. To provide a measure of spread, we subsequently report the interquartile range of WTP values, recovered using individual values of the explanatory variables.
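As a worked illustration of equation (3), the sketch below recovers median WTP from a coefficient vector and sample means; the numbers are hypothetical and are not estimates from this study.

```python
import numpy as np

def median_wtp(beta_nonprice, xbar, beta_price):
    """Equation (3): the non-price linear index evaluated at sample means,
    divided by the price coefficient."""
    return float(np.dot(beta_nonprice, xbar)) / beta_price

# Hypothetical inputs: intercept plus two covariates, price coefficient last.
beta_nonprice = [0.9, 0.02, -0.4]   # beta_0 and two covariate coefficients
xbar = [1.0, 55.0, 0.2]             # 1 for the intercept, then sample means
print(median_wtp(beta_nonprice, xbar, beta_price=-0.05))  # about -38.4
```

The interquartile range reported below is obtained the same way, except that individual covariate values replace the sample means.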
Variables are constructed using data from two sources. First, all explanatory variables in the RESPOND equation come from the USDA-NASS sample frame used to administer the mail survey (Table 1). The frame drew on the 2012 U.S. Census of Agriculture and included data for both respondents and nonrespondents; therefore, no mail survey data were introduced at this stage beyond the response itself. Two variables pertain specifically to the livestock enterprise—namely, herd size (HERDSIZE), which is included as a measure of operation size, and an indicator for whether the producer currently employs an MIG practice (MIG). Another three variables relate to the farming operation more generally and include pasture area (PASTURE), the share of household income the producer receives from farming activities (INCFARM), and previous experience with land rental transactions (RENTHIST). Finally, we controlled for age (AGE) as a demographic characteristic. To explore response behavior, we tested for differences in the descriptive statistics by response status, under the null hypothesis that no differences exist. Test results reported in the last column of Table 1 confirm that significant differences do exist between respondent and nonrespondent characteristics. Notably, producers who currently practice MIG, have previous land rental experience, and manage fewer pasture acres were more likely to respond. This suggests that nonrandom selection attributable to nonresponse or avidity behavior is present.
a Data are drawn from the mail survey sample frame and were analyzed at the U.S. Department of Agriculture, National Agricultural Statistics Service regional field office in Des Moines, Iowa. The anonymity of producers selected into the sample frame was maintained throughout; the statistics reported here summarize respondent characteristics in aggregate.
b Difference tests for continuous variables evaluated the difference in means using t-statistics. Difference tests for binary variables evaluated the difference in proportions using z-statistics. Asterisks (***, **, *) indicate significant differences at the 0.01, 0.05, and 0.10 levels, respectively.
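For the binary variables, the difference-in-proportions test described in note b can be sketched as follows; the counts shown are illustrative, chosen to be roughly consistent with the sample sizes above rather than taken from the study data.

```python
import numpy as np
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions using the
    pooled estimate under the null of no difference."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))                  # z-statistic, p value

# Illustrative counts: 62 of 105 respondents vs. 196 of 1,067
# nonrespondents practicing MIG.
print(two_proportion_z(62, 105, 196, 1067))
```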
Second, variables in the RENT equation use data from the producer mail survey (Table 2). The dependent variable is constructed from responses to the CV land rental scenarios. We expect the CV land rental offer price (PRICE) to be negative in accordance with the law of demand. A measure of herd diversification (DIVERSE) is captured by an index variable that reflects the number of different cattle types managed within a producer's operation. We expect this index to have a negative effect on willingness to rent land for rotational grazing because of the added management complexity of diverse animal feeding demands. We also hypothesize that favorable attitudes toward public agencies involved in grassland management could positively influence the willingness to rent land for rotational grazing. To measure this, we created an attitude index (ATTITUDE) based on the responses to two Likert-type statements where higher scores reflect more favorable attitudes toward working with public land agencies.
a Data are from a 2016 mail survey of Wisconsin beef cattle producers.
b Binary variables coded as 1 = yes, 0 = otherwise.
c Index constructed as the count of different cattle types currently managed within the operation among the following six choices: dry beef cows, cow-calf pairs, finish animals, young stock, dairy heifers, and other.
d Index constructed as the sum of Likert-scale responses to two statements related to public agencies where the Likert values ranged from 1 (least favorable) to 5 (most favorable) and the statements were "I am interested in grazing public land" and "I am willing to work with a public agency, such as the Wisconsin Department of Natural Resources," respectively.
The remaining variables in Table 2 are as previously defined. We expect MIG to have a positive effect on the willingness to rent public land because producers with previous MIG experience are already familiar with grazing plans and regimes and may perceive more value in public lands than would non-MIG operations. We expect PASTURE to negatively influence the willingness to rent public land for grazing because producers with more pasture area are less likely to be land constrained. Producers with a greater share of income from farming have a greater dependence on these activities for their livelihood, and we expect INCFARM to positively influence the rental decision. Younger producers are typically more willing to adopt new technologies and practices than older producers, and we expect AGE to have a negative effect. Transaction costs associated with renting land for the first time (e.g., search costs, screening, contracting and legal advice, and negotiation) are high; therefore, we expect RENTHIST to increase producers' willingness to participate in this market.
Before presenting our main results, it is useful to focus on the issue of nonrandom selection. In Table 1, we observed significant differences in three descriptive characteristics associated with the response decisions of producers: MIG practices, previous land rental experience, and pasture area. Table 3 presents results for an independent probit estimation of the factors affecting response decisions. The results show that only the first two factors, MIG and previous rental experience, significantly influenced the decision to respond to the survey in a regression context, with both having a positive effect. None of the other factors, including pasture area, significantly predicted response.
a Standard errors in parentheses. Asterisks (***, **, *) indicate that the values are significant at the 0.01, 0.05, and 0.10 levels, respectively.
b Marginal effects evaluated at sample means.
c Coefficients on HERDSIZE and INCFARM are scaled by a factor of 10.
d Model degrees of freedom in curly brackets and P values in square brackets. LR, likelihood ratio.
Table 4 reports estimation results for the pooled CV scenarios.3 The model includes an additional indicator variable (SHRUB) to allow for differential intercept terms between the grass- and shrub-dominated scenarios. From the regression coefficients and marginal effect estimates, we extract three main observations. First, many of the estimates are statistically significant and have the expected signs in terms of predicting enrollment. These findings provide a useful basis for summarizing factors that positively shape the likelihood of participating in public land rentals. Second, the sign and statistical significance of the coefficient estimates in the outcome equation with and without nonrandom selection are nearly identical. Only for the age variable does the estimate go from being a statistically significant determinant of the land rental decision at conventional levels (P < 0.1) in the regression without selection to being statistically insignificant when controlling for selection. Moreover, none of the coefficient magnitudes change markedly across the regressions. Third, the coefficient estimate on the term controlling for nonrandom selection is not statistically significant, further confirming that the selection control is not vital to the results of this estimation. Overall, these results suggest that estimates of WTP reported later are based on robust predictors.
d Model degrees of freedom in curly brackets and P values in square brackets.
In Table 5, we report results for probit estimations with nonrandom selection for the grass and shrub CV scenarios, and noticeable differences between the two land rental options are evident. First, as one would expect for a demand estimate, the land rental price was negatively associated with stated land rental decisions for the shrub-dominant scenario. The sign on the grass-based scenario is also negative, but the coefficient estimate is not statistically significant, which could be related to the smaller sample sizes involved in separating the two regressions. Second, producers most likely to participate in rental arrangements on grass-dominant lands also practice MIG, have positive attitudes toward public agencies, have larger operations in terms of herd size, are younger, and manage fewer types of cattle. Based on the marginal effects, MIG increases the probability of agreeing to rent public land for rotational grazing by 0.25 as compared with a non-MIG producer. Other things being equal, an increase in age significantly decreases this probability, whereas an increase in herd size significantly increases it, but both by much smaller magnitudes. Finally, an increase in the number of cattle types managed decreases the probability of a producer agreeing to rent public land by 0.25.
These are substantive effects from a program design perspective. Arguably, the first two if not all three characteristics could easily be identified by those knowledgeable about local cattle producers. Although the estimated selection terms are again not statistically significant in either regression, they exhibit opposite signs: negative for the grassland rental and positive for the shrub land rental. Furthermore, the Wald tests for overall model fit are significant at the 0.05 level for the pooled and grassland models but not for the shrub-dominant scenario.
Because of the distinctive results across the pooled and separate land type CV scenarios, we present WTP derived from all three cases in Table 6. The first row reports overall median WTP values for the pooled CV scenarios with and without selection. Notably, the estimated median values are negative, with only a slight difference in magnitude of $0.27, implying that fewer than half of producers have a positive WTP to rent public grazing land. Sensitivity analysis was performed to condition WTP values on MIG and previous land rental experience. The more striking results are WTP estimates based on whether the respondents practice MIG and have previous rental experience for the grass-dominant scenario. In those cases, the WTP estimates move substantially into the positive range, to $16.88 per acre for MIG, $23.26 per acre for rental experience, and $45.09 per acre for producers with both MIG and land rental experience. A much smaller shift is evident in the shrub scenario, with the highest WTP estimate being just over $7 per acre for respondents with MIG and previous land rental experience. The results from this table provide a magnified view of the avidity effect in terms of WTP.
a WTP values evaluated at mean values of explanatory variables except where noted.
We highlight several policy implications from our results. First, we find that the median beef cattle producer has a negative WTP and is therefore unwilling to rent public land for rotational grazing. This relatively low overall interest at the prices offered implies that the expected benefits do not exceed costs for most producers. Additional subsidies or cost sharing might be needed to increase participation rates in rotational grazing programs on public lands and would contribute to overall program feasibility. Whereas some respondents likely decided against renting public grazing lands because they are not interested in adopting rotational grazing practices in general, others who may indeed be interested likely declined because they found the expected costs (e.g., water, temporary fencing, and/or transportation costs) of rental agreements on public lands prohibitively high compared with private land rental. For instance, among producers who indicated some willingness to rent land and responded to survey debriefing questions, only 15%–30% would remain committed to their stated land rental decision if they needed to purchase and install fencing. Additional research to value the ecological services that rotational grazing provides to grasslands and society more generally could help boost public support for subsidy or cost-share programs and increase overall economic efficiency.
Second, as evidenced by survey response rates and much higher WTP estimates, the Wisconsin producers most likely to be interested in public land rental opportunities are those who practice MIG and have prior rental experience. Indeed, for grasslands, they were the main class of respondents for whom WTP values were positive. Notably, we estimated median WTP values for this group to be $45/acre, whereas the statewide average pasture rental rate in Wisconsin for 2016 was $35/acre (USDA-NASS, 2016). Without the interest of these groups, a grazing program on public land in Wisconsin might not be successful. Perhaps unsurprisingly, we conclude that the Wisconsin DNR and other public agencies interested in starting public grazing programs should target efforts toward MIG producers already active in land rental markets. In the longer term, they can also work with extension agents, grazing brokers, and other grazing specialists to build producer capacity for practicing MIG on private landholdings, which may in turn increase overall interest in participating in a public grazing program.
Finally, as with other technologies analyzed ex ante, the successful diffusion of grazing programs on public lands will depend on both the physical and economic availability of these lands (Barham, Mooney, and Swinton, 2016). Whether enough interested producers operate within a close distance of where most public grazing land is located remains an open question. Based on physical inventories of potential sites, most potential grazing lands are in the western, central, and southwestern regions of Wisconsin. These regions also boast sizable populations of beef and other nondairy cattle producers according to the U.S. Census of Agriculture. Furthermore, survey respondents who agreed to rent in the grass-dominant scenario indicated they are willing to travel for access to rented grazing land, with more than 50% willing to travel 20 miles or more at our medium rental price level. As such, we are cautiously optimistic about the logistical feasibility of such grazing initiatives. Future research should explore whether such ex ante decisions might depend on one another in a spatial context (Lewis, Barham, and Robinson, 2011; Skevas, Skevas, and Swinton, 2018).
This article contributes to the economic literature on ex ante analysis of agricultural production practices, technologies, and management strategies by examining the effect of survey response behavior on Wisconsin beef cattle producers' willingness to rent public land for rotational grazing. Our econometric model controls for nonrandom selection attributable to nonresponse and avidity effects driven by whether the producers followed an MIG practice in their current operation and had previous land rental experience. Even though the regressions were not particularly sensitive to selection issues in our estimations, we encourage attention to this issue in other ex ante contexts. The same factors that influenced producers' response behavior also played a significant role in their stated land rental intentions, suggesting that bias could easily arise in other circumstances.
Grass-dominant land was the more popular of the two types of land that producers were asked to consider. For that rental opportunity, we expect younger producers with larger farms and less diverse cattle operations to be interested, especially among those already practicing MIG. The median WTP to rent public land in the grass-dominant scenario was $45/acre for those already practicing MIG and participating in land rental markets, which was above the Wisconsin statewide average rental rate of $35/acre in 2016 and suggests such scenarios may be viable. For public grassland managers interested in promoting grazing as a sustainable management practice on public land, this suggests they could target initial outreach efforts at these types of producers with assistance from extension agents and grazing networks.
For shrub-dominant land, the median WTP was much lower, at under $4/acre. Based on regression estimates, we expect producers with less pasture in their possession to be more interested in participating. Here, tailoring recruitment efforts might be more difficult because those factors are less easily observed, although such efforts might become more feasible as discussions with producers advance with respect to public grassland rentals. Our results also show that grazing decisions extend beyond simple price considerations. Younger producers, for example, may have specific constraints or interest in expansion that make them more likely to seek grass-dominant rental opportunities. In general, producers will make their rental decision based on their own operational context: the size of their operation, how many pasture acres they own (and therefore need to rent), and how many different types of cattle they must manage.
For policy makers, this reflects a need for flexibility in contract design. To entice producers to rent shrub-dominant land, additional incentives may be needed, for example, lower rental rates, longer contracts, or more favorable cost-share agreements for fencing inputs or transportation expenses. Additional research should address producer willingness to travel to graze public land, as well as the technical feasibility questions of where this public land is located and how many grazers are located nearby.
1 Beef cattle have less stringent feed requirements as compared with dairy cows and do not need milking. They can be managed on land farther from the farm more easily and represent the most likely type of livestock to be served by public grazing initiatives in the state.
2 We tested whether the proportion of respondents that agreed to rent public land for rotational grazing differed depending on whether the observation could be dropped because of a missing data point elsewhere. The z-statistic was not significant at a 0.05 level for either scenario.
3 To evaluate the data pooling, we conducted an initial Chow test [F(10,179) = 0.349, P = 0.966] and failed to reject at conventional significance levels. We also present results for the CV scenarios estimated separately in Table 5 to highlight their potential differences.
Barham, B. "Adoption of a Politicized Technology: bST and Wisconsin Dairy Farmers." American Journal of Agricultural Economics 78,4(1996):1056–63.
Barham, B., Mooney, D., and Swinton, S.. "Inconvenient Truths about Landowner (Un) Willingness to Grow Dedicated Bioenergy Crops." Choices 31,4(2016):1–8.
Barry, S. "Current Findings on Grazing Impacts: California's Special Status Species Benefit from Grazing." California Cattleman, June 2011, 18–20.
Brock, C., and Barham, B.. "Farm Structural Change of a Different Kind: Alternative Dairy Farms in Wisconsin—Graziers, Organic and Amish." Renewable Agriculture and Food Systems 24,1(2009):25–37.
Carson, R., Flores, N., and Meade, N.. "Contingent Valuation: Controversies and Evidence." Environmental and Resource Economics 19,2(2001):173–210.
Cooper, J. "Combining Actual and Contingent Behavior Data to Model Farmer Adoption of Water Quality Protection Practices." Journal of Agricultural and Resource Economics 22,1(1997):30–43.
Cooper, J., and Keim, R.. "Incentive Payments to Encourage Farmer Adoption of Water Quality Protection Practices." American Journal of Agricultural Economics 78,1(1996):54–64.
Dalecki, M., Whitehead, J., and Blomquist, G.. "Sample Non-response Bias and Aggregate Benefits in Contingent Valuation: An Examination of Early, Late and Non-respondents." Journal of Environmental Management 38,2(1993):133–43.
DeVore, B. "Grazing as a Public Good in Western Minnesota." Land Stewardship Letters 32,1(2014):24–25. Internet site: landstewardshipproject.org/repository/1/1168/no_1_2014_lsl.pdf (Accessed September 2017).
Dillman, D., Smyth, J., and Christian, L.. Internet, Phone, Mail, and Mixed-mode Surveys: The Tailored Design Method. Hoboken, NJ: John Wiley and Sons, 2014.
Edwards, S., and Anderson, G.. "Overlooked Biases in Contingent Valuation Surveys: Some Considerations." Land Economics 63,2(1987):168–78.
Ethier, R., Poe, G., Schulze, W., and Clark, J.. "A Comparison of Hypothetical Phone and Mail Contingent Valuation Responses for Green-Pricing Electricity Programs." Land Economics 76,1(2000):54–67.
Foltz, J., and Lang, G.. "The Adoption and Impact of Management Intensive Rotational Grazing (MIRG) on Connecticut Dairy Farms." Renewable Agriculture and Food Systems 20,4(2005):261–66.
Frisina, M., and Morin, F.. "Grazing Private and Public Land to Improve the Fleecer Elk Winter Range." Rangelands 13,6(1991):291–94.
Gardner, B. "Some Implications of Federal Grazing, Timber, Irrigation, and Recreation Subsidies." Choices 12,3(1997):9–14.
Gillespie, J., Wyatt, W., Venuto, B., Blouin, D., and Boucher, R.. "The Roles of Labor and Profitability in Choosing a Grazing Strategy for Beef Production in the US Gulf Coast Region." Journal of Agricultural and Applied Economics 40,1(2008):301–13.
Groves, R. "Nonresponse Rates and Nonresponse Bias in Household Surveys." Public Opinion Quarterly 70,5(2006):646–75.
Haab, T., and McConnell, K.. The Econometrics of Non-market Valuation. Northampton, MA: Edward Elgar, 2003.
Hanemann, W. "Welfare Evaluations in Contingent Valuation Experiments with Discrete Responses." American Journal of Agricultural Economics 66,3(1984):332–41.
Harrington, J., and Kathol, E.. "Responses of Shrub Midstory and Herbaceous Layers to Managed Grazing and Fire in a North American Savanna (Oak Woodland) and Prairie Landscape." Restoration Ecology 17,2(2009):234–44.
Heckman, J. "Sample Selection Bias as a Specification Error." Econometrica 47,1(1979):153–61.
Hubbard, R., Newton, G., and Hill, G.. "Water Quality and the Grazing Animal." Journal of Animal Science 82,S13(2004):E255–63.
Hubbell, B., Marra, M., and Carlson, G.. "Estimating the Demand for a New Technology: Bt Cotton and Insecticide Policies." American Journal of Agricultural Economics 82,1(2000):118–32.
Hudson, D., Seah, L.-H., Hite, D., and Haab, T.. "Telephone Presurveys, Self-Selection, and Non-response Bias to Mail and Internet Surveys in Economic Research." Applied Economics Letters 11,4(2004):237–40.
Jensen, K., Lambert, D., Clark, C., Holt, C., English, B., Larson, J., Yu, T., and Hellwinckel, C.. "Cattle Producers' Willingness to Adopt or Expand Prescribed Grazing in the United States." Journal of Agricultural and Applied Economics 47,2(2015):213–42.
Johansen, K. "Greener Pastures." Wisconsin Natural Resources 17(April 2017):17–19. Internet site: http://dnr.wi.gov/wnrmag/2017/04/Grazing.PDF (Accessed September 2017).
Kim, S., Gillespie, J., and Paudel, K.. "The Effect of Socioeconomic Factors on the Adoption of Best Management Practices in Beef Cattle Production." Journal of Soil and Water Conservation 60,3(2005):111–20.
Kling, C., Phaneuf, D., and Zhao, J.. "From Exxon to BP: Has Some Number Become Better Than No Number?" Journal of Economic Perspectives 26,4(2012):3–26.
Kloppenburg, J., Cates, R., Gildersleeve, R., Johnson, D., Mahalko, K., Paine, L., and Thomforde, S.. Growing Wisconsin's Grazing Future. Blue Sky Greener Pastures Steering Committee, July 2012. Internet site: https://www.cias.wisc.edu/wp-content/uploads/2012/07/blueskygreenerpastweb.pdf (Accessed September 2017).
Lahaut, V., Jansen, H., van de Mheen, D., Garretsen, H., Verdurmen, J., and Van Dijk, A.. "Estimating Non-response Bias in a Survey on Alcohol Consumption: Comparison of Response Waves." Alcohol and Alcoholism 38,2(2003):128–34.
Lewis, D., Barham, B., and Robinson, B.. "Are There Spatial Spillovers in the Adoption of Clean Technology? The Case of Organic Dairy Farming." Land Economics 87,2(2011):250–67.
Little, R. "A Test of Missing Completely at Random for Multivariate Data with Missing Values." Journal of the American Statistical Association 83,404(1988):1198–202.
Loomis, J. "Expanding Contingent Value Sample Estimates to Aggregate Benefit Estimates: Current Practices and Proposed Solutions." Land Economics 63,4(1987):396–402.
Ma, S., Swinton, S., Lupi, F., and Jolejole-Foreman, C.. "Farmers' Willingness to Participate in Payment-for-Environmental-Services Programmes." Journal of Agricultural Economics 63,3(2012):604–26.
Melkonyan, T., and Taylor, M.. "Regulatory Policy Design for Agroecosystem Management on Public Rangelands." American Journal of Agricultural Economics 95,3(2013):606–27.
Messonnier, M., Bergstrom, J., Cornwell, C., Teasley, R., and Cordell, H.. "Survey Response-Related Biases in Contingent Valuation: Concepts, Remedies, and Empirical Application to Valuing Aquatic Plant Management." American Journal of Agricultural Economics 82,2(2000):438–50.
Mitchell, R., and Carson, R.. Using Surveys to Value Public Goods: The Contingent Valuation Method. Washington, DC: Resources for the Future, 1989.
Mooney, D., Barham, B., and Lian, C.. "Inelastic and Fragmented Farm Supply Response for Second-Generation Bioenergy Feedstocks: Ex Ante Survey Evidence from Wisconsin." Applied Economic Perspectives and Policy 37,2(2015):287–310.
Paine, L., and Gildersleeve, R.. A Summary of Beef Grazing Practices in Wisconsin. Madison: College of Agricultural and Life Sciences, University of Wisconsin-Madison, 2011. Internet site: https://www.cias.wisc.edu/wp-content/uploads/2018/05/2011-Dairy-Grazing-Summary.pdf (Accessed July 7, 2018).
Peress, M. "Correcting for Survey Nonresponse Using Variable Response Propensity." Journal of the American Statistical Association 105,492(2010):1418–30.
Qaim, M., and de Janvry, A.. "Genetically Modified Crops, Corporate Pricing Strategies, and Farmers' Adoption: The Case of Bt Cotton in Argentina." American Journal of Agricultural Economics 85,4(2003):814–28.
Robinson, C. "Opportunities and Challenges with Grazing Public Grassland in Wisconsin: The Producer Perspective." Master's thesis, University of Wisconsin-Madison, 2017.
Singerman, A., and Useche, P.. "Florida Citrus Growers' First Impressions on Genetically Modified Trees." AgBioForum 20,1(2017):67–83.
Skevas, T., Hayden, N., Swinton, S., and Lupi, F.. "Landowner Willingness to Supply Marginal Land for Bioenergy Production." Land Use Policy 50(January 2016):507–17.
Skevas, T., Skevas, I., and Swinton, S.. "Does Spatial Dependence Affect the Intention to Make Land Available for Bioenergy Crops?" Journal of Agricultural Economics 69,2(2018):393–412.
Studer, J., Baggio, S., Mohler-Kuo, M., Dermota, P., Gaume, J., Bertholet, N., Daeppen, J., and Gmel, G.. "Examining Non-response Bias in Substance Use Research—Are Late Respondents Proxies for Non-respondents?" Drug and Alcohol Dependence 132,1–2(2013):316–23.
Sulak, A., and Huntsinger, L.. "Public Land Grazing in California: Untapped Conservation Potential for Private Lands? Working Landscapes May Be Linked to Public Lands." Rangelands 29,3(2007):9–12.
Swinton, S., Tanner, S., Barham, B., Mooney, D., and Skevas, T.. "How Willing Are Landowners to Supply Land for Bioenergy Crops in the Northern Great Lakes Region?" Global Change Biology-Bioenergy 9,2(2017):414–28.
Tsikriktsis, N. "A Review of Techniques for Treating Missing Data in OM Survey Research." Journal of Operations Management 24,1(2005):53–62.
Undersander, D., Albert, B., Cosgrove, D., Johnson, D., and Peterson, P.. Pastures for Profit: A Guide to Rotational Grazing. Madison: University of Wisconsin Extension, Publication A3529, 2011. Internet site: https://www.nrcs.usda.gov/Internet/FSE_DOCUMENTS/stelprdb1097378.pdf (Accessed September 2017).
U.S. Bureau of Land Management. BLM's Management of Livestock Grazing. Unnumbered Fact Sheet, 2016. Internet site: https://www.blm.gov/sites/blm.gov/files/documents/files/GrazingInfographic100516FINAL%20%281%29%20%282%29.pdf (Accessed June 2017).
U.S. Department of Agriculture, National Agricultural Statistics Service (USDA-NASS). 2012 U.S. Census of Agriculture. Washington, DC: USDA-NASS, 2013.
U.S. Department of Agriculture, National Agricultural Statistics Service (USDA-NASS). 2016 Cash Rent Survey. Washington, DC: USDA-NASS, 2016.
Van de Ven, W., and Van Praag, B.. "The Demand for Deductibles in Private Health Insurance: A Probit Model with Sample Selection." Journal of Econometrics 17,2(1981): 229–52.
Whitehead, J., Blomquist, G., Hoban, T., and Clifford, W.. "Assessing the Validity and Reliability of Contingent Values: A Comparison of On-Site Users, Off-Site Users, and Non-users." Journal of Environmental Economics and Management 29,2(1995): 238–51.
Whitehead, J., and Cherry, T.. "Willingness to Pay for a Green Energy Program: A Comparison of Ex-Ante and Ex-Post Hypothetical Bias Mitigation Approaches." Resource and Energy Economics 29,4(2007):247–61.
Whitehead, J., Groothuis, P., and Blomquist, G.. "Testing for Non-response and Sample Selection Bias in Contingent Valuation: Analysis of a Combination Phone/Mail Survey." Economics Letters 41,2(1993):215–20.
Whittlesey, N., Huffaker, R., and Butcher, W.. "Grazing Policy on Public Lands." Choices 8,3(1993):15–19. | CommonCrawl |
Deep-learned faces: a survey
Samadhi P. K. Wickrama Arachchilage (ORCID: orcid.org/0000-0003-1169-1313) and Ebroul Izquierdo
EURASIP Journal on Image and Video Processing, volume 2020, Article number: 25 (2020)
Deep learning technology has enabled successful modeling of complex facial features when high-quality images are available. Nonetheless, accurate modeling and recognition of human faces in real-world scenarios "in the wild" or under adverse conditions remains an open problem. Consequently, a plethora of novel deep network architectures addressing issues related to low-quality images, varying pose, illumination changes, emotional expressions, etc., have been proposed and studied over the last few years. This survey presents a comprehensive analysis of the latest developments in the field. A conventional deep face recognition system entails several main components: deep network, optimization loss function, classification algorithm, and train data collection. Aiming at providing a complete and comprehensive study of such complex frameworks, this paper first discusses the evolution of related network architectures. Next, a comparative analysis of loss functions, classification algorithms, and face datasets is given. Then, a comparative study of state-of-the-art face recognition systems is presented. Here, the performance of the systems is discussed using three benchmarking datasets with increasing degrees of complexity. Furthermore, an experimental study was conducted to compare several openly accessible face recognition frameworks in terms of recognition accuracy and speed.
The face conveys a plethora of discriminative features rich enough to determine one's identity [1]. These features can be extracted in unconstrained scenarios and non-intrusive manners. Hence, automated face recognition can be exploited in a large number of practical applications [2]. Among others, it has shown excellent capabilities in security applications like intelligent surveillance [3, 4], user authentication applications like traveler verification at border crossing points [5, 6], and diverse other mobile and social media applications [7–10]. Indeed, person identity prediction based on facial features for practical purposes is a valuable tool in modern information technology [11]. Straightforward as it may seem, the underlying modeling and mapping of faces is complex, and it becomes daunting due to the diversity of facial features. Such complexity is further exacerbated by other variations like emotions, illumination, makeup, and low-quality sensing [12, 13]. To tackle this important, yet challenging problem of face recognition, intensive research efforts have been reported by numerous research groups and scholars. The discipline can be traced back to the sixties [14, 15], when both feature-based approaches and holistic approaches were reported. Feature-based approaches exploit the geometric relationships among distinctive facial features such as eyes, mouth, and other face landmarks [16–23]. In contrast, holistic approaches aim at capturing features of the entire facial area in an image [24–29]. Holistic approaches assign equal importance to all the pixels rather than special attention to a set of points of interest. Hence, these approaches encompass higher distinctive power at the cost of increased computational complexity [6, 30].
Deep convolutional neural networks (DCNNs) are a holistic approach that recently enabled a quantum leap in the field. In 2014, Facebook reported a face recognition system named DeepFace [27], which achieved near-human performance on the LFW benchmark [31]. This accuracy was quickly surpassed by systems like DeepId3 [28] and FaceNet [29]. Such substantial progress of face recognition technology is a reflection of cutting-edge research developments in deep network architectures. Starting from LeNet in 1989 [32], DCNNs have evolved into sophisticated networks, particularly fueled by classification challenges like the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [33]. AlexNet [34], VGGNet [35], and GoogleNet [36] are arguably the three most influential ILSVRC networks.
The role of a deep network in a classic image classification system is to map the complex high-dimensional image information into a low-dimensional template, i.e., a feature vector. The generated feature vectors can be interpreted as points in a fixed-dimensional space. Clearly, face images form a subspace of the much larger image space. This fact implies that network architectures that succeeded in the problem of image classification are adaptable to face classification. Some successful face recognition applications that emerged from image classification networks are as follows: DeepId3 [28], which was influenced by VGGNet [35] and GoogLeNet [36]; Google's FaceNet [29], which used the GoogleNet [36] architecture; and VGGFace [37], which exploited concepts from VGGNet [35].
A deep network is generally underpinned by an optimization loss function. While the deep net outputs feature vectors from input images, the loss function adds discriminative power to the generated features. Over the years, loss functions have evolved complementing the network architectures. These loss functions can be categorized as classification-based approaches, i.e., softmax loss and its variants, and metric learning approaches, i.e., contrastive loss and triplet loss. Successful exploitation of suitable loss functions in face recognition includes the softmax loss in DeepFace [27], a variation of softmax loss as used in ArcFace [38], and the triplet loss used in FaceNet [29].
Figure 1 shows the data flow of a typical face recognition system. During training, the network model learns from large training datasets. The trained model is then used to generate feature vectors for test faces. A classic face recognition task generally includes a gallery of labelled faces and probe/query images. Labelled gallery images are usually processed in advance in a step called the 'enrolment process'. Here, the feature vectors/templates of the gallery subjects are generated. These features are then either stored with their corresponding labels or used to generate subject-specific models. During the face recognition phase, the template of the query face is compared to the enrolled templates. This comparison can use either a nearest neighbor search or a model-based classification. The former approach is referred to throughout this paper as template learning, and the latter as subject-specific modelling; a minimal sketch of the former appears after the figure caption below.
The dataflow of a deep face recognition system. During training the network learns feature representations (f_i) of faces by getting gradually penalized by the loss function. During testing, the pre-trained model is used to generate features of test faces. The generated features are classified/compared for identity determination
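To ground the template-learning path just described, the following is a minimal sketch of gallery enrolment and nearest-neighbor identification by cosine similarity. The embedding network is assumed to exist elsewhere, and the rejection threshold is illustrative.

```python
import numpy as np

def l2_normalize(v):
    """L2-normalize embeddings so dot products equal cosine similarities."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

class TemplateGallery:
    """Nearest-neighbor matching over enrolled face templates (a sketch)."""
    def __init__(self):
        self.labels, self.templates = [], []

    def enroll(self, label, embedding):
        # Enrolment: store the normalized template with its identity label.
        self.labels.append(label)
        self.templates.append(l2_normalize(embedding))

    def identify(self, probe_embedding, threshold=0.5):
        sims = np.stack(self.templates) @ l2_normalize(probe_embedding)
        best = int(np.argmax(sims))
        # Open-set behavior: reject as unknown below the threshold.
        if sims[best] < threshold:
            return "unknown", float(sims[best])
        return self.labels[best], float(sims[best])
```

A subject-specific modelling alternative would instead fit one classifier per enrolled identity (e.g., one-vs-rest SVMs) over the same features.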
Top 1 one-crop accuracy (using only the center crop) on the ImageNet-1k validation set with respect to the computational complexity of the model. The computational complexity is measured as the number of floating-point operations (FLOPs) required for a single forward pass. The size of each ball corresponds to the number of learnable parameters
An important aspect of face recognition is benchmarking. As mentioned before, network architectures together with optimization loss functions and sufficient, diversified train datasets have enabled successful modeling of complex facial features when provided with high-quality images. These face recognition systems reported near-perfect performance on classic benchmarks like LFW [31]. However, the performance saturation on these benchmarks resulted in more challenging benchmarks [39–41] entailing more realistic pictures captured under adverse conditions. The evaluations on such real-world data show that the performance of face recognition systems is affected by many factors, including emotions, illumination variations, makeup, and pose variations [39, 40, 42, 43].
Surveys on deep face recognition
Due to the importance of the topic and the vast number of face recognition papers reported in the past, there is indeed no shortage of related surveys either. Some noteworthy face recognition surveys include Zhang et al. [11], Jafri et al. [30], Bowyer et al. [44], and Scheenstra et al. [45]. These comprehensively survey face recognition systems prior to DeepFace. Hence, they do not discuss the new sophisticated deep learning approaches that emerged during the last decade. Surveys that discuss deep face recognition have singled out face recognition as an individual discipline rather than a collection of components adopted from different studies. These surveys generally discuss the face recognition pipeline: face pre-processing, network, loss function, and face classification [42, 46, 50], or discuss a single aspect of face recognition such as 3-D face recognition [47], illumination-invariant face recognition [52], or pose-invariant face recognition [51]. Although these surveys are important and provide an excellent basis for the analysis of the state-of-the-art in the field, they do not provide conclusive comparisons or analysis of the underlying network architectures.
To better illustrate the difference between the key contributions in the past and this survey, Table 1 summarizes the main deep face recognition surveys. The analysis presented by Wang et al. [46] is arguably the most comprehensive survey yet in the field. It provides a holistic overview of the broad topics of deep face recognition, including the face recognition pipeline, face datasets, benchmarks, and industry scenes, briefly surveying all elements of face recognition. In contrast, this paper focuses on the deep learning based components in the recognition pipeline and delivers a much more detailed analysis of the 18 most critical deep face recognition systems. The paper describes a face recognition system as a unique combination of a deep net, loss function, classification approach, train dataset, and other system-specific novelties if any. To properly understand how each system was derived, the paper also discusses the evolution of the aforementioned components.
Table 1 Surveys that discuss deep face recognition
Paper contribution
The key contributions of this survey include:
The background knowledge required to understand and analyze the underlying frameworks used in face recognition, including,
The origin and evolution of DCNN frameworks that were effective in face recognition (Table 2).
Table 2 DCNN frameworks that had significant impact on face recognition
The loss functions used in face recognition, categorized and compared under two classes: classification based approaches and metric learning approaches.
A comparative discussion on the two main classification approaches used in face recognition, i.e., template learning and subject-specific modelling.
A brief discussion on key face datasets and evaluation benchmarks.
An elaborated discussion on 18 state-of-the-art face recognition systems (DeepFace [27], DeepId [54], DeepID2 [55], DeepID2+ [56], VGGFace [37], DeepID3 [28], FaceNet [29], Baidu [57], NAN [58], Template Adaptation [59], SphereFace [60], CosFace [61], ArcFace [38], B-CNN [62], DCNNmanual+metric [63], DLIB [64], OpenFace [65], and FaceNet_Re [66]).
The face recognition systems are analyzed based on the network architecture, loss function, classification approach, and train data and other unique system design details.
The performance of face recognition is discussed based on three scenarios:
The performance on good quality data (LFW [31] benchmark)
The performance on unconstrained data (IJB-A [39] benchmark)
The performance under millions of gallery distractors (MegaFace [67] benchmark)
An experimental study that compares three face recognition systems (DLIB [64], OpenFace [65], and FaceNet_Re [66]) with respect to face recognition accuracy and speed.
Discussion on open issues and challenges in face recognition highlighting possible future research.
The remainder of the survey is organized as follows. Section 2 presents a cognitive study of the evolution of DCNN architectures. Then, the paper presents a comparative analysis of loss functions in Section 3, a study of classification algorithms in Section 4, and face datasets and evaluation benchmarks in Section 5. Section 6 presents a study on state-of-the-art face recognition systems. This study is threefold and includes an individual systems analysis, a comparative performance analysis on three benchmarks, and an experimental performance analysis. Finally, the paper presents the open issues of face recognition followed by the conclusion.
The evolution of deep face architectures
Andrew Ng, the Chief Scientist at Baidu Research, described the notion of deep learning as "Using brain simulations, hope to make learning algorithms much better and easier to use and make revolutionary advances in machine learning and AI". While deep neural networks (DNNs) have conquered different disciplines, convolutional neural networks (ConvNets or CNNs) have been particularly effective in visual science [68]. Given the appropriate network architecture, CNNs are able to process, analyze, and classify high-dimensional patterns, resulting in an extremely valuable tool in computer vision.
A typical DCNN adheres to a conventional structure which consists of a set of stacked convolutional layers followed by contrast normalization and max-pooling and finally one or more fully connected layers [36]. Different variants of this structure have been explored for performance enhancements. Please refer to Fig. 5 for the general structure of a DCNN.
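The following is a minimal sketch of that conventional structure; the layer sizes are illustrative and do not correspond to any specific network surveyed here.

```python
import torch.nn as nn

# Stacked convolutions with normalization and max-pooling, followed by
# fully connected layers that produce a compact feature embedding.
conventional_dcnn = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.BatchNorm2d(32), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.BatchNorm2d(64), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 56 * 56, 256), nn.ReLU(),  # assumes 224x224 inputs
    nn.Linear(256, 128),                      # final feature vector
)
```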
The evolution of deep network architectures began with increases in size, with respect to both depth (the number of layers) and width (the number of units at each layer) [34, 35, 69]. Nonetheless, the increased complexity associated with larger nets was not favored in practical applications. Hence, systems like GoogleNet pioneered architecturally enhanced networks with fewer parameters [36]. This was followed by Microsoft's efforts to simplify the training process by using networks with lower complexity [70]. More recently, researchers have combined these two design techniques for further simplified networks [71].
Classification challenges such as ILSVRC [33], MNIST, and CIFAR have led to several milestones in image recognition. AlexNet [34], the winner of ILSVRC 2012, achieved a top-5 test error rate of 15.3%, pioneering DNN-based image recognition. To this day, the publication is considered to be one of the most influential breakthroughs. The second milestone was recorded when VGGNet [35], the second-place winner of ILSVRC 2014, achieved significant improvements (top-5 test error rate of 6.8%) with increased depth in DNNs.
Despite the fact that going deeper with convolutions seemed to be the straightforward solution for accuracy enhancements [34, 35, 69], this approach had two main drawbacks: (1) the large number of parameters in these deeper networks made them prone to over-fitting, and (2) deeper networks meant increased consumption of computing resources. These factors turned the attention of the research community towards sparsely connected systems. Nonetheless, sparse systems were not a simple solution and came with their own complications and limitations. The calculations associated with non-uniform sparse systems, even when the number of arithmetic operations was smaller, suffered from the overhead of look-ups and cache misses. In comparison, dense nets, even with a higher number of arithmetic operations, had the advantage of fast dense matrix multiplication provided by improved numerical libraries [34, 72].
GoogleNet [36], the winner of ILSVRC 2014, introduced an architecture code-named Inception, which was capable of outperforming AlexNet with 12 times fewer parameters. The main concept behind this architecture is approximating an optimal local sparse structure with readily available dense components.
The Inception architecture was designed layer by layer. In a single layer, units with high correlation were clustered together, and these clusters, connected to the layer units, formed the next layer. When these Inception modules were stacked, the higher layers required more and more 3×3 and 5×5 convolutions, because highly abstract features are captured by the higher layers and their spatial concentration reduces as a result. To avoid such complexity, dimension reduction was introduced to the architecture: 1×1 convolutions were placed before the 3×3 and 5×5 convolutions so that they could compute channel reductions prior to feeding the more expensive convolutions; a minimal sketch of such a module is given below. The Inception architecture was later modified in subsequent versions by adding batch normalization in Inception V2 [73] and additional factorizations in Inception V3 [74].
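The sketch below shows an Inception-style module with 1×1 reductions ahead of the 3×3 and 5×5 branches; the branch channel counts are illustrative rather than taken from the GoogleNet paper.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Parallel 1x1, 3x3, 5x5, and pooling branches whose outputs are
    concatenated along the channel axis; 1x1 convolutions reduce channel
    dimensionality before the expensive 3x3 and 5x5 convolutions."""
    def __init__(self, c_in, c1, c3r, c3, c5r, c5, cp):
        super().__init__()
        self.b1 = nn.Conv2d(c_in, c1, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(c_in, c3r, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c3r, c3, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(c_in, c5r, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c5r, c5, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(c_in, cp, 1))

    def forward(self, x):
        # Concatenate the four branch outputs along the channel dimension.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)
```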
The uniqueness of Inception architecture is that the design principles focus more on computational simplicity, enabling the inference to be run even on a single machine. Due to this nature of GoogleNet, it was later used by many face recognition systems including Google's FaceNet [29] and DeepId3 [28].
In contemporary research, Microsoft Research employed the concept of deep residual learning [70] for image recognition. The authors show that the residual learning framework enables very deep networks, deeper than traditional DNNs, to be implemented with lower complexity. The study presented a DNN with 152 layers, eight times as deep as VGGNet [35].
Residual learning can be explained as follows. Consider a stack of layers; this could be the entire network or a part of it. Let x be the input to the stack and H(x) the underlying mapping to be learned. Instead of training the layers to learn the complicated function H(x) directly, the stack is trained to learn the corresponding residual function H(x)−x, thus deriving Eq. 1.
$$\begin{aligned} F(x) &= H(x) - x\\ F(x) + x &= H(x) \end{aligned}$$
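A basic residual block consistent with Eq. 1 can be sketched as follows; the two-convolution body and channel count are illustrative, and the input and output shapes are assumed to match so the identity shortcut needs no projection.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """The stacked layers learn F(x); the identity shortcut restores
    H(x) = F(x) + x."""
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.f(x) + x)  # F(x) + x = H(x), per Eq. 1
```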
The authors presented several networks of different sizes. ResNet-152, which is 152 layers deep, outperformed VGGNet and GoogleNet in ImageNet validation with a top-5 error of 4.49%. An ensemble of ResNets that achieves a 3.57% top-5 error won ILSVRC 2015.
The residual networks, even though much deeper, have lower complexity than traditional DNNs. An architectural comparison between the VGG-19 model, a plain 34-layer deep network, and the same 34-layer network with residual connections (ResNet-34) explains the complexity reduction in ResNets. The VGG-19 model requires 19.6 billion FLOPs, whereas the 34-layer networks, both plain and with residual connections, require only 3.6 billion FLOPs each. The plain net and the ResNet have the same FLOPs because the identity mappings introduce neither parameters nor computational complexity. Despite the lower complexity, ResNet-34 outperformed the VGG-19 model in ImageNet validation.
When residual connections on top of traditional DCNN architectures achieved performance close to that of Inception V3, it raised the question of whether residual connections on top of Inception would further enhance performance. This hypothesis was explored in Inception V4 (Fig. 3) [71]. The authors showed that, while it is feasible to achieve competitive results with very deep networks without residual connections, including residual connections improves training speed considerably.
From left to right in order are original residual connections [70], modified residual connections used in [36], the schema for 17 ×17 grid of the pure InceptionV4 network, and the schema for 17 ×17 grid (Inception-ResNet-B) module of Inception-ResNet-v1 network
In addition to the networks discussed, bilinear CNNs are a model designed for image recognition and later adopted for face recognition. The network consists of two feature extractors whose outputs are multiplied using the outer product at each location of the image and pooled to obtain a bilinear vector [75]. This model has proven effective in fine-grained recognition tasks.
The major architectural innovations in DCNN history are associated with three concepts: increased network size, the inception architecture, and residual connections. These innovations vary in performance indices like model complexity, computational complexity, memory usage, and inference time. These indices are vital in selecting an appropriate architecture compatible with the resource constraints of practical deployment. Canziani et al. [76] and Bianco et al. [77] present experimental comparisons between different DNNs. From the results of Canziani et al., Fig. 2 presents the model complexity and computational complexity of the DCNNs that have had a major impact on face recognition (with the exception of E-Net, BN-NIN, and BN-AlexNet).
Comparative analysis of loss functions
The loss function is the supervisory signal used to train a deep network. The study of loss functions has been carried out along two main lines of research (Fig. 4): (1) classification-based approaches (the conventional softmax classifier [27, 37, 78] and modified versions of softmax loss [38, 60, 61, 79–86]) and (2) metric learning approaches (contrastive loss [28, 55, 56, 87, 88] and triplet loss [29]). The softmax loss learns by classifying each training image into one of the pre-defined classes. Variants of softmax loss have made efforts to increase the intra-class compactness in the process. In contrast, metric learning approaches learn by increasing the similarity between faces of the same identity while decreasing the similarity between faces of different identities. Regardless of the approach, all deep face supervisory signals are driven towards a single goal: inter-class discrepancy with intra-class compactness.
Classification boundaries of different loss functions in binary classification
Architecture of VGG16 model
3.0.1 Classification-based approaches
The softmax loss addresses a multi-class classification problem in which the input data contains one or more images for each of a set of individuals and the classifier learns the features of each individual. Despite being referred to as softmax loss for convenience, technically, a k-way softmax function is employed to obtain a probability distribution over the labels of k classes [27], and the cross-entropy loss is minimized for each training sample. The softmax loss is denoted in Eq. 2,
$$\begin{array}{@{}rcl@{}} L &=& - \frac{1}{N} \sum^{N}_{i=1} \log \frac{e^{W^{T}_{y_{i}} x_{i} + b_{y_{i}}}}{\sum^{n}_{j=1} e^{W^{T}_{j} x_{i} + b_{j}} } \end{array} $$
where \(x_{i} \in \mathbb{R}^{d}\) denotes the d-dimensional deep feature of the ith sample, belonging to the \(y_{i}^{th}\) class, \(W_{j} \in \mathbb{R}^{d}\) denotes the jth column of the weight matrix \(W \in \mathbb{R}^{d \times n}\), and \(b_{j}\) is the bias term. The batch size and the class number are N and n, respectively.
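For illustration, a numerically stable NumPy sketch of Eq. 2 under these definitions (the variable names are ours, not from the cited works):

```python
import numpy as np

def softmax_loss(X, y, W, b):
    """Cross-entropy over a k-way softmax (Eq. 2).
    X: (N, d) deep features; y: (N,) integer labels;
    W: (d, n) weight matrix; b: (n,) bias vector."""
    logits = X @ W + b                                # (N, n) class scores
    logits -= logits.max(axis=1, keepdims=True)       # stabilize the exponentials
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()    # average over the batch
```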
Softmax loss, despite achieving inter-class dispersion, provides no particular inclination towards intra-class compactness. Hence, the features learned through softmax loss may not be discriminative enough for the rather challenging open-set classification problem [38]. Subsequent studies have reported several efforts to enhance the discriminative power of softmax loss [60, 61, 79–81, 83–86]. An extension of softmax loss named center loss [79] attempted to achieve the missing intra-class compactness by taking into account the Euclidean distance between the feature vector and the center of its class. The authors show that a combination of center loss and softmax loss can be an optimum solution. However, for huge training datasets with a large number of classes, this class-wise learning approach becomes complicated and difficult. In an effort to solidify class-wise learning on large datasets, a new approach named SphereFace [60] incorporated a multiplicative angular margin penalty. Even though a new loss function was introduced in this publication, the presented optimum solution was a hybrid with softmax loss. Later, Wang et al. [61] proposed a system named CosFace which used a cosine margin penalty; as opposed to SphereFace, the CosFace margin is additive. This approach outperformed SphereFace.
Most recently, in 2019, a research team from Imperial College introduced an additive angular margin loss named ArcFace [38]. The derivation of ArcFace can be outlined as follows.
Consider the traditional softmax loss denoted in Eq. 2. The bias is removed, and the logit \( W^{T}_{j} x_{i} \) is expanded into its dot-product form \( W^{T}_{j} x_{i} = \parallel {W_{j}} \parallel \parallel x_{i} \parallel \cos {\theta_{j}} \). When \(\ell_{2}\) normalization is applied to each individual weight and to the embedding feature, \(\parallel W_{j} \parallel = 1\) and \(\parallel x_{i} \parallel\) is re-scaled to s, yielding the following equation.
$$\begin{array}{@{}rcl@{}} L_{1} &=& - \frac{1}{N} \sum^{N}_{i=1} \log \frac{e^{s \cos \theta_{y_{i}}}}{ e^{s \cos \theta_{y_{i}}} + \sum^{n}_{j=1, j \neq y_{i}} e^{s \cos \theta_{j}}} \end{array} $$
Now, the predictions depend only on the angle between the feature and the weight. Inter-class discrepancy and intra-class compactness are achieved by the additive angular margin penalty m applied to the target angle; hence, the final equation is as follows.
$$\begin{array}{@{}rcl@{}} L_{2} &=& - \frac{1}{N} \sum^{N}_{i=1} \log \frac{e^{s \cos (\theta_{y_{i}} + m)}}{ e^{s \cos (\theta_{y_{i}} + m)} + \sum^{n}_{j=1, j \neq y_{i}} e^{s \cos \theta_{j}}} \end{array} $$
The ArcFace system reported a considerable improvement, reaching 99.83% LFW verification accuracy.
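The logit modification at the heart of ArcFace can be sketched as follows. This is a simplified NumPy rendering of the equations above; the published implementation adds further safeguards, such as an easy-margin variant, that are omitted here.

```python
import numpy as np

def arcface_logits(X, y, W, s=64.0, m=0.5):
    """Additive angular margin logits, to be fed to softmax cross-entropy.
    X: (N, d) features; y: (N,) labels; W: (d, n) class weights."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)  # ||x_i|| = 1, re-scaled by s below
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)  # ||W_j|| = 1
    cos = np.clip(Xn @ Wn, -1.0, 1.0)                  # cos(theta_j) for every class
    theta = np.arccos(cos)
    theta[np.arange(len(y)), y] += m                   # penalize the target angle only
    return s * np.cos(theta)                           # s*cos(theta + m) on the target class
```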
3.0.2 Metric learning approaches
Metric learning approaches follow a different optimization strategy from softmax loss and its variants. In metric learning, the network is provided with sample images and is penalized based on whether the samples are of the same class or not. Contrastive loss and triplet loss are the two metric learning approaches popular in face recognition.
Contrastive loss is generally used in Siamese-style networks. A Siamese network is an architecture with two parallel neural networks with shared weights. Each network takes a different input, and the two outputs are combined to provide some prediction [89]. Contrastive loss was proposed by Hadsell et al. [90] and was used in face recognition systems such as DeepId2 [55], DeepId2+ [56], and DeepId3 [28], among others [87, 88]. Figure 7 shows the Siamese network used in DeepId2+.
Researchers at Google presented a system that learns a direct mapping from face images to points in a compact Euclidean space [29]. The optimization loss function is the triplet loss. Given a triplet (an anchor, a positive sample, and a negative sample), this loss aims at minimizing the distance between the anchor and its positive while maximizing the distance between the anchor and the negative. Contemporary research carried out by Baidu Research also reports the use of triplet loss [57].
Despite being conceptually straightforward, the effectiveness of metric learning depends mainly on the choice of input samples. For example, FaceNet uses a hard sample mining algorithm to select optimum triplets. Moreover, the number of possible triplets grows combinatorially with dataset size, and effective triplet mining therefore becomes complicated. Studies that followed [37, 38] reported that, while triplet loss is an effective approach, learning by classification is more convenient to train than metric learning approaches.
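A minimal sketch of the triplet loss over a batch of embeddings (NumPy; the margin value is illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """FaceNet-style triplet loss on (N, d) embedding batches: push the
    anchor-negative distance beyond the anchor-positive distance by `margin`."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)   # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2, axis=1)   # squared distance to negative
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```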
A study on face classification algorithms
Generally, the training data in face recognition are large-scale datasets diversified with variations in gender, ethnicity, profession, etc. In contrast, the gallery set is much smaller and application specific (e.g., mugshot images of persons of interest). Often, the gallery images are disjoint from the training data. Even if the gallery set were included in the much larger training set, each update to the gallery would require complete retraining of the network. In these situations, using the trained model without alteration for online face recognition is inconvenient and naive. To this end, deep face recognition exploits the strategy of transfer learning. In this approach, as shown in Fig. 6, the network learns from large volumes of training data and the trained model is used to generate features for test faces. A shallow classifier is then applied to the generated features for face identification. In doing so, the enrolment of gallery samples, i.e., training the shallow classifier, is carried out as an intermediary step between offline model training and online face recognition. The enrolment can follow a model-based approach or a template learning approach. This section discusses the algorithms used in this shallow classification process.
Feature based transfer learning with pre-trained models. conv, convolution layer; fc, fully connected layer
DeepID2+ net. Conv n, nth convolutional layer (with max-pooling). FC n, nth fully connected layer. Id, identification supervisory signal. Ve, verification supervisory signal. Dashed arrows denote forward propagation. Solid arrows denote supervisory signals. Nets in the left and right are two DeepID2+ nets with shared weights and different input faces
Generally, transfer learning involves a source domain which is trained offline and a target domain for online processing [59, 91, 92]. In this context, the source domain is the large datasets used for offline training of the network model, and the target domain is the online face recognition data. Prior to DeepFace, transfer learning meant fine-tuning the network model with the gallery samples. DeepFace presented a different approach to transfer learning for face recognition. The DeepFace net was initially trained as a traditional multi-class face classification problem. The authors considered the output of the last fully connected layer as a raw feature representation of the input face. With this notion, DeepFace used two identical DNNs with shared weights to simultaneously generate feature vectors for two faces for face verification. Contemporary research that exploited a similar concept is the DeepId series [28, 54–56], which used the generated feature vectors for tasks like face verification and recognition. This feature-vector-based classification has been exploited in face recognition via two main approaches: (1) subject-specific modelling and (2) template-based learning.
When several gallery images are available for a single subject, multiple feature vectors result per subject. These feature vectors can be modeled into a single representation for the subject, generally with the use of algorithms like support vector machines (SVMs). The model-based approaches yield optimum performance when multiple images per subject are available. In other circumstances, template-based learning is a straightforward alternative: the unknown feature vector, i.e., the template, is compared to known templates to find the nearest neighbor.
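The two enrolment strategies can be sketched as follows (NumPy; the mean template is only a simple stand-in for subject-specific modelling, which in practice would use, e.g., an SVM):

```python
import numpy as np

def enroll(features, labels):
    """Model each subject by averaging its gallery feature vectors."""
    subjects = sorted(set(labels))
    templates = np.stack([
        np.mean([f for f, l in zip(features, labels) if l == s], axis=0)
        for s in subjects])
    return subjects, templates

def identify(probe, subjects, templates):
    """Template comparison: nearest neighbour in Euclidean distance."""
    dists = np.linalg.norm(templates - probe, axis=1)
    return subjects[int(np.argmin(dists))]
```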
In contrast to image-based face recognition, video-based face recognition generally has more than one face image per probe subject, spread across a set of consecutive frames. Hence, multiple feature vectors are available for comparison [58, 62, 63, 93, 94] during classification. Studies on video face recognition have carried out classification in three main ways: (1) performing classification on each frame, (2) pooling results over a set of frames [62, 63, 94], and (3) integrating information across frames for one-time face recognition [58, 95]. The second and third approaches maintain the information across all frames and have reported progress on the IJB benchmark.
Face datasets and evaluation benchmarks
Data serves two purposes in a typical face recognition system: as training data and as benchmarks for system validation. It is known that the quality of the training data has a huge impact on the performance of a DNN. Similarly, the quality of the validation data has a huge impact on the reliability of the benchmark results. The term "quality" refers to the size and the level of inter- and intra-class variations. The intra-class variation is a measure of the depth of the dataset, i.e., the number of images per individual, and inter-class variation is achieved by increasing the breadth, the number of individuals in the dataset (Table 3).
Table 3 The benchmark datasets and train datasets for face recognition
Initially, face datasets consisted of high-quality images mostly featuring celebrities [31, 88, 96]. Datasets that followed were more practicality-driven and hence consisted of data captured in unconstrained environments (e.g., surveillance footage) [39, 40]. Moreover, several datasets aimed at including challenging variations like age gaps [97–100], pose [101], disguise [102], and ethnic variations [37, 67].
Over the years, face recognition systems have been employing training datasets of increasing scale. Facebook once used a dataset of 500 million images of over 10 million subjects for training face recognition models [103], and its DeepFace system was trained on a dataset of four million facial images from over 4000 subjects [27]. The success of these systems, backed up by large-scale private datasets, attracted research attention towards large and openly accessible face datasets like VGGFace2 [78].
The evaluation benchmarks are generally disjoint from the training datasets. They provide an estimate of the reliability of the trained model under different protocols like face verification, closed-set face identification, and open-set face identification [104]. For an unbiased comparison, the results are denoted in notations specified by the benchmark. Face verification is the task of determining whether two faces belong to the same identity. Verification accuracy is generally represented by the receiver operating characteristic (ROC) [31], which plots the true acceptance rate against the false acceptance rate. Closed-set face identification is the task of identifying a probe against a pre-defined gallery under the assumption that the probe has a mate in the gallery. The accuracy of closed-set recognition is commonly denoted using the cumulative match characteristic (CMC) [39, 40]. The CMC curve measures the percentage of true identifications within a given rank, i.e., rank-5 identification accuracy denotes the true identifications within the top 5 predictions. Open-set face identification is the task of identifying a probe against a pre-defined gallery while remaining open to the possibility that the probe has no mate in the gallery. Open-set face recognition accuracy can be denoted using the decision error trade-off (DET) [39]. The DET curve plots the false-negative identification rate (FNIR) as a function of the false-positive identification rate (FPIR).
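As an illustration of the CMC metric, the sketch below computes a rank-k identification accuracy from a probe-gallery similarity matrix, assuming each probe has exactly one mate in the gallery (function and variable names are ours):

```python
import numpy as np

def rank_k_accuracy(similarity, probe_ids, gallery_ids, k=5):
    """Closed-set CMC point: fraction of probes whose true mate appears
    within the top-k most similar gallery entries.
    similarity: (P, G) matrix, larger values meaning more similar."""
    order = np.argsort(-similarity, axis=1)            # gallery indices, best first
    top_k = np.asarray(gallery_ids)[order[:, :k]]      # identities of the top-k matches
    hits = (top_k == np.asarray(probe_ids)[:, None]).any(axis=1)
    return hits.mean()                                 # rank-k identification accuracy
```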
This section aims to provide an overview of face datasets that have been effective in face recognition, discussing their important features, advantages, and disadvantages.
5.0.3 LFW [31]
LFW is by far the most effective benchmark for unconstrained face recognition. The dataset comprises 13,233 images of 5749 people under varying conditions of pose, lighting, focus, resolution, etc. The cropped faces are detections of the Haar cascade-based face detector by Viola and Jones [105].
The benchmark targets the pair matching problem, i.e., face verification. Two evaluation protocols are provided: (1) restricted, where the pairs are provided, and (2) unrestricted, where the pairs can be generated as per the user's preference. The ROC curve is used for recording the results.
5.0.4 YTF [96]
Following LFW, a similar dataset and benchmark was released for evaluating face recognition in unconstrained videos. The dataset comprises 3425 videos of 1595 individuals, a subset of those in the LFW dataset.
Since the dataset was designed to align with LFW, the benchmark tests were designed the same way. The benchmark includes pair matching tests under the two protocols, restricted and unrestricted.
5.0.5 VGGFace [37]
VGGFace [37] consists of 2.6 million images of 2622 individuals. Despite being recognized as one of the largest publicly available training datasets, the refined version, in which label noise was removed by human annotators, consists of 800,000 images.
5.0.6 VGGFace2 [78]
VGGFace2 consists of 3.31 million images of 9131 subjects, giving an average of 362.6 images per class. The dataset was created with the aim of achieving higher depth and breadth. Additional design goals of the dataset include achieving a wide range of age, pose, and ethnic variations.
5.0.7 CASIA-Webface [88]
The CASIA-Webface dataset consists of a total of 453,453 images over 10,575 identities. The data is collected from the IMDb website. The dataset is designed to be compatible with the LFW benchmark, meaning that there is no overlap between the two datasets. Hence, a system trained on CASIA-Webface can be independently evaluated on LFW.
5.0.8 CelebFaces [106]
CelebFaces contains 87,628 face images of 5436 celebrities from the Internet, with an average of approximately 16 images per person.
5.0.9 Ms-celeb-1m [107]
The Ms-celeb-1m dataset consists of a benchmark test, which includes evaluation data and an evaluation protocol, and a separate dataset for training. The evaluation dataset comprises data from one million celebrities, and the training dataset comprises approximately 10 million images of 100,000 celebrities.
5.0.10 MegaFace [67]
The MegaFace challenge evaluates the performance of face recognition and face verification with up to 1 million distractors. Moreover, it includes protocols for age-invariant face recognition. The probe data collection of MegaFace is composed of two datasets: (1) the FaceScrub dataset [108], which consists of 100,000 photos of 530 celebrities, and (2) the FG-Net dataset [109, 110], which consists of 975 photos of 82 people. The latter encompasses age variations, with photos spanning many ages of each subject. The MegaFace distractor data, i.e., the gallery collection, includes 1 million photos of more than 690,000 unique subjects collected from Yahoo's Flickr dataset [111].
The evaluation protocol for face recognition is as follows. Let the probe set have M faces of a subject, out of which one is placed in the gallery of 1 million distractors. The face recognition system is provided with the remaining M−1 images. The system is expected to learn from these M−1 images and rank the gallery in order of similarity. Ideally, the one image from the probe set should be ranked first. The results are provided via CMC curves. For evaluations on face verification, all pairs between the probe set and the distractor set are provided within the dataset, amounting to 4 billion negative pairs. The verification results are provided via ROC curves.
5.0.11 IJB [39–41]
In contrast to the LFW benchmark, which used a commodity face detector, the IJB dataset provides a set of face images that are manually aligned (Fig. 8). The manual alignment process aims at preserving challenging variations such as pose, occlusion, and illumination that are generally filtered out by automated detection. The dataset is a collection of media in the wild, containing both images and videos from 500 individuals, gathered so as to produce a near-uniform geographic distribution. The complete dataset comprises 5712 images and 2085 videos.
Sample data in three benchmarks LFW (top), video frames from IJB-A (middle) and MegaFace (bottom)
This dataset is benchmarked for face verification and closed-set and open-set face recognition. Performance evaluation on IJB is a process of 10-fold cross-validation. The dataset is split into 10 random train and test splits, with 333 subjects allocated for training at each level and the remaining 167 subjects for testing. The train set can be used either to fine-tune the network or to experimentally derive the optimum threshold distance between two facial feature vectors beyond which the faces are concluded to be of different identities. The test set is then split into two parts, a gallery set and a probe set. Each subject has media in both sets. The media in the probe set are used as the search term, and the gallery set is the database against which the probe image is tested. To facilitate the open-set classification problem, 55 randomly picked subjects are removed from the gallery. In the protocol specified for face verification, the genuine and imposter pairs are provided following the LFW convention, but to increase the difficulty, the imposter pairs are selected with restrictions so as to pick more similar pairs. The performance is reported using ROC, CMC, and DET curves.
State-of-the-art face recognition systems
The conventional face recognition pipeline begins with raw input images, followed by pre-processing [112–114], face and facial landmark detection [105, 115–118], alignment [119–124], feature generation, and classification. Although each step along the pipeline has been subjected to research, this survey focuses on the steps controlled by deep learning, i.e., feature generation and classification.
Study 1: Individual analysis of system designs
DeepFace [27]
DeepFace uses a nine-layer deep neural network with more than 120 million parameters for face recognition. Softmax loss was employed to train the network, and the training dataset was a private dataset of four million facial images of more than 4000 identities. The system also implements an effective pre-processing mechanism in which a 3D model is used to align faces into a canonical pose. In summary, the success of DeepFace is due to three main factors: (1) a sound pre-processing step, (2) the network architecture, and (3) large-scale training data.
In addition to the proposed system, DeepFace also presents an end-to-end face verification system using a Siamese network. Following training, the network excluding the classification layer is replicated to generate features simultaneously for two images. The generated feature vectors are compared in deciding whether the two images are of the same person.
DeepId series [28, 54–56]
DeepId introduced the concept that when a CNN is trained for face classification with approximately 10,000 identities, and the network is designed such that the number of neurons is reduced as we go higher in the feature extraction hierarchy, the top layers produce compact identity-related features with only a few neurons. These identity features, referred to as DeepIds, can then be generalized to other tasks like face verification. This approach of learning facial feature representations through a classification task has conceptual similarities to the Siamese network proposed by DeepFace.
The network used in DeepId consists of four convolutional layers, each followed by a max-pooling layer. On top of these lies the fully connected layer referred to as the DeepId layer, named so because the DeepIds are extracted from it. The DeepId layer is followed by the top layer, which is a softmax layer. The DeepIds extracted from this network are fed to a joint Bayesian technique via which verification is carried out. The system was trained on an extended version of CelebFaces [106], code-named CelebFaces+, which contains 202,599 face images of 10,177 celebrities. The system yielded 97.45% verification accuracy on unconstrained face verification in LFW.
Following the success of DeepId, DeepId2 suggested that including both face identification signals and face verification signals (contrastive loss) for supervision can further increase the accuracy of face recognition/verification systems. This hypothesis was based on the premise that face identification signals contribute to increasing inter-personal variations, whereas face verification signals contribute to reducing intra-personal variations. DeepId2 achieved 99.15% LFW accuracy. This performance was further improved by DeepId2+, which introduced two system improvements: (1) increasing the dimension of hidden representations and (2) introducing supervisory signals to early convolutional layers. Please refer to Fig. 7 for the DeepId2+ network.
Adding to the continuous improvements, DeepId3 used both identification and verification signals as supervision, but on deeper architectures than those of previous DeepId versions. The DeepId3 nets were influenced by the VGGNet (stacking of convolutions to achieve increased depth) and GoogLeNet (Inception) architectures. With this implementation, an ensemble of two DeepId3 nets achieved 99.53% LFW accuracy (Fig. 8).
VGGFace [37]
Inspired by VGGNet, which showed that deeper convolutions can be more effective in large-scale image recognition, VGGFace applies the same concept to face recognition. The authors employed a modified version of the architecture presented in VGGNet and trained it on the VGGFace dataset. The authors evaluated two loss functions, softmax loss and triplet loss, and concluded that triplet loss provides better overall performance. Nonetheless, the authors report that training the network as a classifier with softmax loss makes the training significantly easier and faster.
Template adaptation [59]
The VGGFace system was later used for transfer learning with template adaptation. In this implementation, the deep CNN features from the pre-trained VGGNet are combined with linear SVMs trained at test time [59]. The one-vs-rest linear SVMs are reported to increase the discriminative power of the feature space.
FaceNet [29]
Instead of training a face recognition system in the form of a conventional classifier, FaceNet implements a system which directly maps input face thumbnails to a compact Euclidean space. The Euclidean space is generated such that the \(\ell_{2}\) distance between faces of the same identity is small, whereas the \(\ell_{2}\) distance between a pair of face images of different identities is large. This is enabled by the triplet loss, which, by definition, aims at minimizing the distance between pairs of the same identity while maximizing the distance between pairs of different identities.
The authors used two DNNs: (1) the Zeiler-Fergus architecture [125] and (2) the GoogleNet architecture [36]. The nets were trained on an in-house dataset of 100–200 million face images of about 8 million different identities. Of the two nets, the Zeiler-Fergus net achieved an impressive LFW accuracy of 99.63% and a 95.12% YTF accuracy.
Baidu [57]
The authors present a network comprising nine convolutional layers trained with triplet loss. The system reports a near-perfect LFW verification accuracy. The authors conclude that triplet loss, compared to multi-class classification, is more suitable for face verification and retrieval problems.
DLIB [64]
Dlib [64] is a library written in C++ which provides software components targeting specialities like data mining, machine learning, image processing, and linear algebra. The library includes a face recognition component that uses a modified version of ResNet-34 [70] to obtain a unique embedding for each face thumbnail. The output feature vectors have 128 numerical dimensions, and the network is trained using triplet loss on a dataset of 3 million images.
The face recognition component of the Dlib library employs transfer learning, offering the user the flexibility to provide an annotated dataset against which the probe face image/video is compared. During the enrolment process, the pre-trained model generates vectors for the annotated face images, which are stored. During recognition, the Euclidean distance between the probe feature vector and each of the stored gallery feature vectors is calculated. During classification, if the calculated distance lies below a pre-defined threshold, the two faces are considered to be of the same identity. This implementation identifies one or more subjects as possible identities of the unknown face.
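The decision rule described above can be sketched in a few lines (generic NumPy, not the Dlib API; the 0.6 threshold is illustrative):

```python
import numpy as np

def match_identities(probe, gallery, labels, threshold=0.6):
    """Return every enrolled identity whose stored 128-D embedding lies
    within `threshold` Euclidean distance of the probe embedding."""
    dists = np.linalg.norm(gallery - probe, axis=1)    # distance to each gallery vector
    return [labels[i] for i in np.flatnonzero(dists < threshold)]
```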
OpenFace [65]
OpenFace [65] is a face recognition system open-sourced under the Apache 2.0 license. The system was developed with the purpose of bridging the gap between publicly available face recognition systems and state-of-the-art, high-performing private systems. The system is based on concepts introduced in GoogleNet [36] and FaceNet [29].
OpenFace uses a modified version of the nn4 network from GoogleNet, which was also used in FaceNet. The DNN is trained using triplet loss. The output feature vectors obtained from this trained model have 128 numerical dimensions.
Face classification is carried out by a subject-specific modelling approach using a linear SVM. Given the labeled face images of the training data, the system generates a feature vector for each face. The feature vectors are then fed to the SVM, which builds a model from them. When provided with the feature vector of an unknown face image, the SVM model classifies the unknown face.
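A sketch of this subject-specific modelling step with scikit-learn (the embeddings here are random placeholders; the actual OpenFace pipeline differs in detail):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
gallery_vecs = rng.normal(size=(50, 128))      # stand-in 128-D face embeddings
gallery_ids = np.repeat(np.arange(10), 5)      # 10 subjects, 5 images each

clf = SVC(kernel="linear", probability=True)   # linear SVM over embeddings
clf.fit(gallery_vecs, gallery_ids)             # enrolment: build the subject models

probe_vec = rng.normal(size=(1, 128))          # embedding of an unknown face
print(clf.predict(probe_vec), clf.predict_proba(probe_vec).max())
```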
FaceNet: re-implementation (FaceNet_Re) [66]
This openly accessible face recognition system is a modified re-implementation of FaceNet [29]. The system provides three pre-trained models of the Inception-ResNet V1 architecture, trained with varying loss functions and training datasets. As seen in Table 4, the model trained with softmax loss on VGGFace2 reported the highest LFW accuracy of the three.
Table 4 Important milestones in face recognition with corresponding LFW verification accuracies
Similar to the DeepId series, once trained, the inference network, i.e., the network omitting the top layer, is used as the pre-trained model to generate feature vectors of 512 numerical dimensions. Similar to the OpenFace implementation, an SVM classifier is used for the classification task.
SphereFace [60] and CosFace [61]
SphereFace and CosFace are the two face recognition systems used to introduce the SphereFace loss and CosFace loss, respectively. Both systems use the ResNet-64 architecture and are trained on CASIA-WebFace. Additionally, CosFace trains the system with another private dataset and reports a higher performance.
ArcFace [38]
ArcFace, a recent publication, implements a series of DNNs (ResNet-100, ResNet-50, and ResNet-34) along with the ArcFace loss. The system outputs a 512-dimensional feature vector for each face image. The DNNs were trained on a modified version of the Ms-Celeb dataset (ms1m). In a series of experimental results, the authors show that this implementation outperforms the majority of the reported state-of-the-art results.
Neural aggregation network (NAN) [58]
NAN is a system designed for video face recognition. It comprises a deep network and an aggregation module. The deep network generates feature vectors for faces in video frames. The aggregation module aggregates the feature vectors to form a single feature inside the convex hull spanned by them. This aggregation is invariant to the image order and hence does not utilize the temporal information across video frames. The network used in the paper is of the GoogLeNet [36] architecture with batch normalization [73]. Face verification is carried out with a Siamese NAN structure, with two NANs trained with contrastive loss. Face identification is carried out by adding a fully connected layer on top of the NAN for softmax loss. The training dataset uses about 3M face images of 50K identities from the Internet.
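The order-invariant aggregation can be pictured as a softmax-weighted average of the frame features, which always lies inside their convex hull. The sketch below is a strong simplification of NAN's learned two-stage attention, and q is a hypothetical learned query vector:

```python
import numpy as np

def aggregate(frame_feats, q):
    """Aggregate (T, d) per-frame features into one d-dimensional vector.
    Softmax weights are convex, so the result stays in the convex hull of
    the inputs, and the output does not depend on the frame order."""
    scores = frame_feats @ q                    # relevance score per frame
    w = np.exp(scores - scores.max())
    w /= w.sum()                                # convex (softmax) weights
    return w @ frame_feats                      # weighted average feature
```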
Bilinear CNNs (B-CNN) [62]
The system uses a symmetric bilinear CNN model comprising two ImageNet-pretrained "M-net" models from VGG's MatConvNet [126]. The models are fine-tuned with the FaceScrub dataset. One-versus-rest linear SVM classifiers are trained on the gallery set during experiments.
DCNNmanual+metric [63]
The paper presents an end-to-end system for face verification. The authors train a DCNN with 10 convolutional layers, 5 pooling layers, and 1 fully connected layer on the CASIA-WebFace dataset [88]. The system uses joint Bayesian metric learning [127, 128] for face verification. Of the presented deep nets, the network named DCNNmanual+metric yields the best performance. DCNNmanual+metric uses the model trained on CASIA-WebFace, further fine-tuned using IJB-A [39] and its extended version, the Janus Challenging set 2 dataset. The system uses cosine distance as a measure of similarity between faces. "Manual" stands for using training data with manual annotation, and "metric" stands for applying metric learning to compute similarity.
Study 2: Comparative performance analysis
LFW (2007)
LFW has been the commodity benchmark for face verification over the last decade. Table 4 presents a summary of recent milestones in face recognition alongside the reported LFW accuracy.
The reported high accuracies on LFW indicate that the benchmark has reached saturation, creating a requirement for more advanced benchmarks. This near-perfect performance on LFW has been explained by Klare et al. [39] in terms of the nature of the face detector used. This commodity face detector, despite attractive features like scalability and real-time efficiency, is not resilient to variations in visual data. Once the faces are mined using this detector, variations like pose, occlusion, and illumination are filtered out. The clear, good-quality, frontal-pose images make recognition more convenient, thus overlooking the probable challenges in advanced applications like intelligent surveillance. In comparison to the face recognition results reported on larger benchmarks like MegaFace (Table 5), dataset size can be identified as a second factor that enables higher accuracy on LFW.
Table 5 Face identification and verification evaluation of different methods on MegaFace Challenge 1 using FaceScrub as the probe set
MegaFace
The MegaFace challenge advocates evaluation of deep face recognition in the presence of a million distractors. The aim of the benchmark is to scale with real-world applications, which usually involve recognizing a face at a planetary scale.
"Algorithms that achieve above 95% performance on LFW (equivalent of 10 distractors in our plots), achieve 35–75% identification rates with 1 million distractors," reports MegaFace. Accounting for the reported results on this benchmark, Google's FaceNet which achieved near perfect LFW accuracy has recorded an accuracy level of 70%s on MegaFace. The other noteworthy results were of a commercial system named NTechLab. While the reported situation in 2014–2015 was not perfect nor impressive, the years that followed reported progress in recognition results [60, 61]. The recent results reported by ArcFace [38] indicate an impressive near perfect accuracy on MegaFace benchmark. Please refer to Table 5 for a summary of identification and verification results on MegaFace.
IJB (2015)
Table 6 presents a summary of reported face recognition results on the IJB benchmark. While the reported results on this benchmark are comparatively higher than those on MegaFace, they are not perfect, nor near-perfect. Hence, these results indicate that face recognition is challenged by complications in unconstrained data. A noteworthy fact regarding this benchmark is that, since the dataset includes multiple images per recognition, ideally the system should include a mechanism to fully exploit the extra information. While the authors of the dataset suggest subject-specific modelling, systems like B-CNN have employed other approaches like result pooling.
Table 6 Performance evaluation on the IJB-A dataset
Study 3: Experimental analysis
Bianco et al. [77] present an experimental analysis of DCNN frameworks for image recognition in which experiments on all systems are carried out on the same computational resources, providing an unbiased comparison of the strengths and weaknesses of the frameworks. Inspired by the work of Bianco et al., this experimental study analyzes the performance of three open-source face recognition systems (the DLIB library, OpenFace, and FaceNet_Re) in terms of recognition accuracy and speed. The systems in comparison use three main deep network architectures discussed in this survey (ResNet, GoogleNet, and Inception-ResNet) and the two main classification approaches, subject-specific modeling with SVM and template learning.
OpenFace and DLIB use the HoG face detector [116], while FaceNet_Re uses the MTCNN face detector [115]. To avoid dependencies on the detectors, only faces detected by both algorithms were considered in the experiment. Taking into account the dependencies on different classification approaches, the two systems that used subject-specific modeling with linear SVMs (OpenFace and FaceNet_Re) were modified to perform template comparison in a manner similar to that of DLIB (nearest neighbor based on Euclidean distance). In addition, the results from the original SVM implementations were also reported for comparison.
Depending on the use case, the recognition can be from still images to still images (S2S), from video to video (V2V), or from still images to video (S2V). Several benchmarks have addressed the first two approaches: LFW and MegaFace address S2S, and IJB addresses the combination of S2V, S2S, and V2V tests. While some benchmarks have made efforts to address S2V, these are problem-specific datasets with some form of bias [130]. Hence, the experiment measures S2V recognition with the LFW dataset as the set of gallery images and selected videos of the YTF dataset as the probe videos. The experiment measures rank-1 recognition accuracy with increasing gallery sizes. In addition, the average time taken by each system to run the forward pass on a single face thumbnail is compared.
Figure 9 plots the recognition accuracy against the gallery size. The graph depicts two observations: (1) comparing the performance of the three systems with template learning as the classifier, FaceNet_Re TL and OpenFace TL show a performance decrease with gallery size, whereas the DLIB system shows considerable stability against the growth of the gallery; and (2) comparing the performance of the same system with SVM and template learning classifiers, in both instances (OpenFace and FaceNet_Re), the SVM is effective with a limited number of subjects, but the performance drops drastically as the number of subjects increases, while one-to-one template learning is comparatively more stable against larger gallery sets. Since many studies have encouraged the use of subject-specific modeling to better utilize all the available information from multiple visual data [27, 39–41, 59], it is important to properly analyze the strengths and weaknesses of different modeling approaches. The popularity of SVMs in image classification can be explained by their ability to scale well with high-dimensional data [131–133]. Although this works well with a small number of classes, an increased number of classes with limited training data per class can complicate the process of finding the separating hyperplane.
Face recognition accuracy against the gallery size. SSM, subject-specific modeling; TL, template learning. The gallery contains three images per subject and ranges from 150 images to 2700 images, i.e., 50–900 subjects
Table 7 reports the average time taken by each system to run the DCNN model on a single face thumbnail, as recorded on an Intel Core i7-7740X CPU @ 4.30GHz. The times reflect the underlying computational complexity involved in feature extraction from raw pixels. OpenFace and FaceNet_Re, which include Inception modules in their frameworks, reported lower forward-pass times in comparison to the DLIB model. Among the limited records in the literature on computational efficiency, DeepFace reports a 0.18-s feature extraction time on a single-core Intel 2.2GHz CPU, and FaceNet reports 30 ms/image on a mobile phone with a small NN reported to have lower, yet sufficient-for-face-clustering, accuracy.
Table 7 Feature extraction time per face
Starting from face verification with high-quality data, face recognition has advanced over recent years to address complicated scenarios like face recognition in unconstrained images and video face recognition. Simultaneously, face recognition benchmarks like IJB and MegaFace have aimed to replicate real-world applications. While the reports indicate continuous progress, there are some unaddressed issues in terms of face recognition systems and benchmarks.
A comparative analysis for face recognition accuracy
Several studies have carried out experimental evaluations comparing state-of-the-art DCNN frameworks for image classification [68, 76, 134]. These experiments provide unbiased comparisons of the systems, which is particularly important in situations where not all systems are evaluated on the same benchmark. This condition applies to face recognition as well. While almost all face recognition systems provide an LFW accuracy, systems implemented prior to benchmarks like IJB and MegaFace do not provide evaluation results on them. Hence, there exists the necessity for these systems to be evaluated under a common benchmark.
A comparative analysis for computational complexity
While the studies report the recorded accuracy, only limited publications report the associated computational complexity. Although offline processing is generally flexible on computational complexity, it is one of the most critical requirements in real-time applications. Hence, there exists the need for a comparative analysis of deep face recognition systems with respect to performance indices like computational complexity, memory consumption, and inference time.
End-to-end systems
The majority of studies and benchmarks tend to isolate face recognition as an individual discipline and hence do not provide sufficient insights on critical issues arising from the inevitable integration with modules like face detection (e.g., false recognitions resulting from false detections). Despite limited studies [63, 135, 136], end-to-end face recognition is still an open research problem.
Multi-model face recognition
Most deep face recognition systems generate a single feature embedding for each face. This approach considers holistic features and does not contemplate component-level features. Several studies have aimed to implement multi-model face recognition systems to gain optimum use of the diverse information in a face image [137–139]. Several studies have made efforts to fuse multiple descriptors across the face [140–142]. These systems show that, despite the possibility of increased computational complexity, multi-model systems can yield positive results. Hence, this line of study can be improved upon, targeting applications that permit offline processing.
Multi-face recognition and tracking
The benchmarks and systems for video face recognition portray the problem as face recognition on a set of face images per subject [39–41]. These benchmarks do not evaluate face tracking. Nonetheless, face tracking is of vital importance in multi-face recognition in videos. In this scenario, the pixel-level information in a video frame and the temporal information across video frames can be fused for an improved result. While face tracking and face clustering have been studied as separate disciplines [143–146], in practical applications they are applied along with face recognition. Hence, evaluating state-of-the-art face recognition systems along with face tracking is a possible line of research with practical use.
Ensemble of deep learning and traditional face descriptors
While deep face descriptors are becoming the main feature representations for face recognition, traditional visual appearance descriptors can be used as additional informational guidance. Recent developments have demonstrated effective usage of traditional visual descriptors in image processing tasks such as image semantic learning [147] and text mining in complex background images [148]. Exploring their effectiveness in an ensemble with deep learning is a possible direction for future research.
Video face recognition
Frame-wise face recognition, feature aggregation across frames, and score pooling across frames are popular approaches to video face recognition. The first approach provides a crisp classification output that the probe face belongs to identity x. Unconstrained videos, where faces are subject to motion blur and other factors like partial faces due to pose, might require aggregation of several partial truths into a higher truth. While score pooling and feature aggregation are straightforward aggregations across video frames, there is room for more sophisticated algorithms, such as inference based on fuzzy logic, which can be adapted from research in similar disciplines like image annotation [149]. Through such a mechanism, a degree of certainty can be attached to the classification output, accounting for factors like the quality of the image and the fraction of the face visible.
While temporal attention has proven effective in video face recognition, research disciplines like video captioning have shown improved performance by including spatial affinities in the attention [150]. Hence, spatio-temporal attention emerges as a possible research direction for video face recognition.
Application-specific designs
The expected functionality of face recognition varies with the application. A face recognition application designed for intelligent surveillance, where the cost of false alarms (registered individuals recognized as possible intruders) is high and the cost of missed alarms (possible intruders recognized as registered) is even higher, should strive for minimum FPIR with reasonable flexibility on false-negative rejections. In contrast, applications that detect persons of interest are expected to have minimum FNIR (a person of interest recognized as unknown) with reasonable flexibility on FPIR (a false alarm where a regular individual is identified as a person of interest). Hence, the need for scenario-specific designs and benchmarks cannot be overlooked.
Basic challenges for face recognition
Despite many architectural enhancements and diversified datasets, face recognition still has scope for improvement in terms of the elementary complications arising from visual variations like pose, expression, occlusion, and illumination. In addition to direct studies like expression-invariant face recognition [151], face recognition under occlusion [152], and illumination-invariant face recognition [52], tools like video segmentation [153, 154] and region-of-interest extraction [155] can provide potential indirect assistance for face recognition on noisy imagery.
Regardless of the varying modeling approaches and application-specific fine-tuning, face recognition has mainly been influenced by DCNN frameworks, loss functions, classification algorithms, and training data. The continuous advancement of deep network architectures in image classification generates networks adaptable for face recognition. The studies of Dong et al. [156], Bruna and Mallat [157], and Hankins et al. [158] present image classification networks with prospects for face classification. Hence, face recognition will remain an active research field striving for sophisticated frameworks.
This survey has presented the origin, evolution, and a comparative analysis of 18 face recognition systems. Through this, the survey aims to provide informational guidance to stimulate future research. In doing so, the paper has analyzed the performance of the systems in terms of benchmark results reported on three benchmarks that address different aspects of face recognition, as well as an experimental study. Additionally, the survey has discussed the open issues in face recognition along with a note on possible future research.
A. M. Burrows, J. F. Cohn, in Encyclopedia of Biometrics, Second Edition. Comparative anatomy of the face, (2015), pp. 313–321. https://doi.org/10.1007/978-1-4899-7488-4_190.
R. Chellappa, C. L. Wilson, S. Sirohey, Human and machine recognition of faces: a survey. Proc. IEEE. 83(5), 705–741 (1995). https://doi.org/10.1109/5.381842.
Y. Wang, T. Bao, D. Ding, M. Zhu, in 2017 2nd International Conference on Image, Vision and Computing (ICIVC). Face recognition in real-world surveillance videos with deep learning method, (2017), pp. 239–243. https://doi.org/10.1109/ICIVC.2017.7984553.
S. Degadwala, S. Pandya, V. Patel, S. Shah, U. Doshi, in International Conference on Recent Trends in Engineering, Science Technology - (ICRTEST 2016). A review on real time face tracking and identification for surveillance system, (2016), pp. 1–5. https://doi.org/10.1049/cp.2016.1477.
International Civil Aviation Organization (ICAO), Machine readable travel documents. Doc 9303, vol. 2 (2006).
R. D. Labati, A. Genovese, E. Muñoz, V. Piuri, F. Scotti, G. Sforza, Biometric recognition in automated border control: a survey. ACM Comput. Surv.49(2), 24–12439 (2016). https://doi.org/10.1145/2933241.
J. Y. Choi, W. De Neve, K. N. Plataniotis, Y. M. Ro, Collaborative face recognition for improved face annotation in personal photo collections shared on online social networks. IEEE Trans. Multimed.13(1), 14–28 (2011). https://doi.org/10.1109/TMM.2010.2087320.
Q. Xu, M. Mukawa, L. Li, J. H. Lim, C. Tan, S. C. Chia, T. Gan, B. Mandal, in Proceedings of the 6th Augmented Human International Conference, AH '15. Exploring users' attitudes towards social interaction assistance on google glass (ACMNew York, NY, USA, 2015), pp. 9–12. https://doi.org/10.1145/2735711.2735831. http://doi.acm.org/10.1145/2735711.2735831.
B. Mandal, R. Y. Lim, P. Dai, M. R. Sayed, L. Li, J. H. Lim, Trends in machine and human face recognition. (M. Kawulok, M. E. Celebi, B. Smolka, eds.) (Springer, Cham, 2016). https://doi.org/10.1007/978-3-319-25958-1_7.
B. Mandal, in 2016 14th International Conference on Control, Automation, Robotics and Vision (ICARCV). Face recognition: perspectives from the real world, (2016), pp. 1–5. https://doi.org/10.1109/ICARCV.2016.7838675.
X. Zhang, Y. Gao, Face recognition across pose: a review. Pattern Recogn.42(11), 2876–2896 (2009). https://doi.org/10.1016/j.patcog.2009.04.017.
K. Jia, S. Gong, Face sample quality. (S. Z. Li, A. K. Jain, eds.) (Springer, Boston, MA, 2015). https://doi.org/10.1007/978-1-4899-7488-4_86.
C. Conde, I. M. de Diego, E. Cabello, in E-business and telecommunications, ed. by M. S. Obaidat, J. L. Sevillano, and J. Filipe. Face recognition in uncontrolled environments, experiments in an airport (SpringerBerlin, Heidelberg, 2012), pp. 20–32.
W. W. Bledsoe, The model method in facial recognition. Technical Report PRI 15, Panoramic Research, Inc., Palo Alto, CA (1964).
W. Bledsoe, Man-machine facial recognition: report on a large-scale experiment. Panoramic Research, Inc., Palo Alto, CA (1966).
A. Serrano, I. M. de Diego, C. Conde, E. Cabello, Recent advances in face biometrics with gabor wavelets: a review. Pattern Recogn. Lett.31(5), 372–381 (2010). https://doi.org/10.1016/j.patrec.2009.11.002.
P. S. Penev, J. J. Atick, Local feature analysis: a general statistical theory for object representation (1996). https://doi.org/10.1088/0954-898x_7_3_002.
T. F. Cootes, G. J. Edwards, C. J. Taylor, Active appearance models. IEEE Trans. Pattern Anal. Mach. Intell.23(6), 681–685 (2001). https://doi.org/10.1109/34.927467.
L. Wiskott, J. -. Fellous, N. Kruger, C. von der Malsburg, in Proceedings of International Conference on Image Processing, vol. 1. Face recognition by elastic bunch graph matching, (1997), pp. 129–1321. https://doi.org/10.1109/ICIP.1997.647401.
T. Ahonen, A. Hadid, M. Pietikainen, Face description with local binary patterns: application to face recognition. IEEE Trans. Pattern Anal. Mach. Intell.28(12), 2037–2041 (2006). https://doi.org/10.1109/TPAMI.2006.244.
S. Chen, S. Mau, M. T. Harandi, C. Sanderson, A. Bigdeli, B. C. Lovell, Face recognition from still images to video sequences: a local-feature-based framework. EURASIP J. Video Process.2011(1), 790598 (2010). https://doi.org/10.1155/2011/790598.
A. Rattani, D. R. Kisku, M. Bicego, M. Tistarelli, in 2007 First IEEE International Conference on Biometrics: Theory, Applications, and Systems. Feature level fusion of face and fingerprint biometrics, (2007), pp. 1–6. https://doi.org/10.1109/BTAS.2007.4401919.
N. Dalal, B. Triggs, in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1. Histograms of oriented gradients for human detection, (2005), pp. 886–8931. https://doi.org/10.1109/CVPR.2005.177.
M. Turk, A. Pentland, Eigenfaces for recognition. J. Cogn. Neurosci.3(1), 71–86 (1991). https://doi.org/10.1162/jocn.1991.3.1.71. PMID: 23964806. http://arxiv.org/abs/https://doi.org/10.1162/jocn.1991.3.1.71.
P. N. Belhumeur, J. P. Hespanha, D. J. Kriegman, Eigenfaces vs. fisherfaces: recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell.19(7), 711–720 (1997). https://doi.org/10.1109/34.598228.
J. Galbally, C. McCool, J. Fierrez, S. Marcel, J. Ortega-Garcia, On the vulnerability of face verification systems to hill-climbing attacks. Pattern Recogn.43(3), 1027–1038 (2010). https://doi.org/10.1016/j.patcog.2009.08.022.
Y. Taigman, M. Yang, M. Ranzato, L. Wolf, in 2014 IEEE Conference on Computer Vision and Pattern Recognition. Deepface: closing the gap to human-level performance in face verification, (2014), pp. 1701–1708. https://doi.org/10.1109/CVPR.2014.220.
Y. Sun, D. Liang, X. Wang, X. Tang, Deepid3: face recognition with very deep neural networks. CoRR. abs/1502.00873: (2015). http://arxiv.org/abs/1502.00873.
F. Schroff, D. Kalenichenko, J. Philbin, in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Facenet: a unified embedding for face recognition and clustering, (2015), pp. 815–823. https://doi.org/10.1109/CVPR.2015.7298682.
R. Jafri, H. Arabnia, A survey of face recognition techniques. J. Inf. Process. Syst. (JIPS). 5:, 41–68 (2009). https://doi.org/10.3745/JIPS.2009.5.2.041.
G. B. Huang, M. Mattar, T. Berg, E. Learned-Miller, in Workshop on faces in 'real-life' images: detection, alignment, and recognition. Labeled faces in the wild: a database for studying face recognition in Unconstrained Environments (Erik Learned-Miller and Andras Ferencz and Frédéric JurieMarseille, France, 2008). https://hal.inria.fr/inria-00321923.
Y. Lecun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition. Proc. IEEE. 86(11), 2278–2324 (1998). https://doi.org/10.1109/5.726791.
O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, L. Fei-Fei, Imagenet large scale visual recognition challenge. Int. J. Comput. Vis.115(3), 211–252 (2015). https://doi.org/10.1007/s11263-015-0816-y.
A. Krizhevsky, I. Sutskever, G. E. Hinton, in Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, NIPS'12. Imagenet classification with deep convolutional neural networks (Curran Associates Inc.USA, 2012), pp. 1097–1105. http://dl.acm.org/citation.cfm?id=2999134.2999257.
K. Simonyan, A. Zisserman, in International Conference on Learning Representations. Very deep convolutional networks for large-scale image recognition, (2015).
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Going deeper with convolutions, (2015), pp. 1–9. https://doi.org/10.1109/CVPR.2015.7298594.
O. M. Parkhi, A. Vedaldi, A. Zisserman, in Proceedings of the British Machine Vision Conference (BMVC), ed. by M. W. J. Xianghua Xie, G. K. L. Tam. Deep face recognition (BMVA Press, 2015), pp. 41–14112. https://doi.org/10.5244/C.29.41.
J. Deng, J. Guo, S. Zafeiriou, Arcface: additive angular margin loss for deep face recognition. CoRR. abs/1801.07698: (2018). http://arxiv.org/abs/1801.07698.
B. Klare, B. Klein, E. Taborsky, A. Blanton, J. Cheney, K. E. Allen, P. Grother, A. Mah, M. Burge, A. K. Jain, in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Pushing the frontiers of unconstrained face detection and recognition: Iarpa janus benchmark a, (2015), pp. 1931–1939. https://doi.org/10.1109/cvpr.2015.7298803.
C. Whitelam, E. Taborsky, A. Blanton, B. Maze, J. Adams, T. Miller, N. Kalka, A. K. Jain, J. A. Duncan, K. Allen, J. Cheney, P. Grother, in 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Iarpa janus benchmark-b face dataset, (2017), pp. 592–600. https://doi.org/10.1109/CVPRW.2017.87.
B. Maze, J. Adams, J. A. Duncan, N. Kalka, T. Miller, C. Otto, A. K. Jain, W. T. Niggel, J. Anderson, J. Cheney, P. Grother, in 2018 International Conference on Biometrics (ICB). Iarpa janus benchmark - c: face dataset and protocol, (2018), pp. 158–165. https://doi.org/10.1109/ICB2018.2018.00033.
I. Masi, Y. Wu, T. Hassner, P. Natarajan, in 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). Deep face recognition: a survey, (2018), pp. 471–478. https://doi.org/10.1109/sibgrapi.2018.00067.
B. Mandal, Z. Wang, L. Li, A. A. Kassim, Performance evaluation of local descriptors and distance measures on benchmarks and first-person-view videos for face identification. Neurocomputing. 184:, 107–116 (2016). https://doi.org/10.1016/j.neucom.2015.07.121. RoLoD: Robust Local Descriptors for Computer Vision 2014.
K. W. Bowyer, K. I. Chang, P. J. Flynn, A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition. Comput. Vis. Image Underst.101:, 1–15 (2006).
A. Scheenstra, A. Ruifrok, R. C. Veltkamp, in Audio- and video-based biometric person authentication, ed. by T. Kanade, A. Jain, and N. K. Ratha. A survey of 3D face recognition methods (SpringerBerlin, Heidelberg, 2005), pp. 891–899.
M. Wang, W. Deng, Deep face recognition: a survey. CoRR. abs/1804.06655: (2018). http://arxiv.org/abs/1804.06655.
S. Zhou, S. Xiao, 3D face recognition: a survey. Hum. Centric Comput. Inf. Sci.8(1), 35 (2018). https://doi.org/10.1186/s13673-018-0157-2.
C. T. Ferraz, J. H. Saito, in Proceedings of the 24th Brazilian Symposium on Multimedia and the Web, WebMedia '18. A comprehensive analysis of local binary convolutional neural network for fast face recognition in surveillance video (ACMNew York, NY, USA, 2018), pp. 265–268. https://doi.org/10.1145/3243082.3267444.
T. Patel, B. Shah, in 2017 International Conference on Innovative Mechanisms for Industry Applications (ICIMIA). A survey on facial feature extraction techniques for automatic face annotation, (2017), pp. 224–228. https://doi.org/10.1109/ICIMIA.2017.7975607.
B. Prihasto, S. Choirunnisa, M. I. Nurdiansyah, S. Mathulaprangsan, V. C. Chu, S. Chen, J. Wang, in 2016 International Conference on Orange Technologies (ICOT). A survey of deep face recognition in the wild, (2016), pp. 76–79. https://doi.org/10.1109/ICOT.2016.8278983.
C. Ding, D. Tao, A comprehensive survey on pose-invariant face recognition. CoRR. abs/1502.04383: (2015). http://arxiv.org/abs/1502.04383.
M. A. Ochoa-Villegas, J. A. Nolazco-Flores, O. Barron-Cano, I. A. Kakadiaris, Addressing the illumination challenge in two-dimensional face recognition: a survey. IET Comput. Vis.9(6), 978–992 (2015). https://doi.org/10.1049/iet-cvi.2014.0086.
R. Tyagi, G. Tomar, N. Baik, A survey of unconstrained face recognition algorithm and its applications. Int. J. Secur. Appl.10:, 369–376 (2016). https://doi.org/10.14257/ijsia.2016.10.12.30.
Y. Sun, X. Wang, X. Tang, in Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14). Deep learning face representation from predicting 10,000 classes (IEEE Computer SocietyWashington, DC, USA, 2014), pp. 1891–1898. https://doi.org/10.1109/CVPR.2014.244.
Y. Sun, Y. Chen, X. Wang, X. Tang, in Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14. Deep learning face representation by joint identification-verification (MIT PressCambridge, MA, USA, 2014), pp. 1988–1996. http://dl.acm.org/citation.cfm?id=2969033.2969049.
Y. Sun, X. Wang, X. Tang, in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Deeply learned face representations are sparse, selective, and robust, (2015), pp. 2892–2900. https://doi.org/10.1109/cvpr.2015.7298907. https://app.dimensions.ai/details/publication/pub.1095455292andhttp://arxiv.org/pdf/1412.1265.
J. Liu, Y. Deng, T. Bai, C. Huang, Targeting ultimate accuracy: face recognition via deep embedding. CoRR. abs/1506.07310: (2015). http://arxiv.org/abs/1506.07310.
J. Yang, P. Ren, D. Chen, F. Wen, H. Li, G. Hua, Neural aggregation network for video face recognition, (2016). https://doi.org/10.1109/cvpr.2017.554.
N. Crosswhite, J. Byrne, C. Stauffer, O. Parkhi, Q. Cao, A. Zisserman, in 2017 12th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2017). Template adaptation for face verification and identification, (2017), pp. 1–8. https://doi.org/10.1109/FG.2017.11.
W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, L. Song, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Sphereface: deep hypersphere embedding for face recognition, (2017), pp. 6738–6746. https://doi.org/10.1109/CVPR.2017.713.
H. Wang, Y. Wang, Z. Zhou, X. Ji, D. Gong, J. Zhou, Z. Li, W. Liu, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Cosface: large margin cosine loss for deep face recognition, (2018), pp. 5265–5274. https://doi.org/10.1109/CVPR.2018.00552.
A. Roy Chowdhury, T. Lin, S. Maji, E. G. Learned-Miller, Face identification with bilinear CNNs. CoRR. abs/1506.01342: (2015). http://arxiv.org/abs/1506.01342.
J. Chen, R. Ranjan, A. Kumar, C. Chen, V. M. Patel, R. Chellappa, in 2015 IEEE International Conference on Computer Vision Workshop (ICCVW). An end-to-end system for unconstrained face verification with deep convolutional neural networks, (2015), pp. 360–368. https://doi.org/10.1109/ICCVW.2015.55.
High quality face recognition with deep metric learning. http://blog.dlib.net/2017/02/high-quality-face-recognition-with-deep.html. Accessed 20 May 2019.
B. Amos, B. Ludwiczuk, M. Satyanarayanan. Openface: a general-purpose face recognition library with mobile applications, CMU-CS-16-118, CMU School of Computer Science, (2016).
Face recognition using TensorFlow. https://github.com/davidsandberg/facenet. Accessed 20 May 2019.
I. Kemelmacher-Shlizerman, S. M. Seitz, D. Miller, E. Brossard, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). The megaface benchmark: 1 million faces for recognition at scale, (2016), pp. 4873–4882. https://doi.org/10.1109/CVPR.2016.527.
Y. LeCun, Y. Bengio, G. Hinton, Deep learning. Nature. 521:, 436–44 (2015). https://doi.org/10.1038/nature14539.
M. Lin, Q. Chen, S. Yan, Network in network. CoRR. abs/1312.4400: (2013).
K. He, X. Zhang, S. Ren, J. Sun, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Deep residual learning for image recognition, (2016), pp. 770–778. https://doi.org/10.1109/CVPR.2016.90.
C. Szegedy, S. Ioffe, V. Vanhoucke, A. A. Alemi, in Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17. Inception-v4, inception-resnet and the impact of residual connections on learning (AAAI Press, 2017), pp. 4278–4284. http://dl.acm.org/citation.cfm?id=3298023.3298188.
F. Song, J. Dongarra, in Proceedings of the 28th ACM International Conference on Supercomputing, ICS '14. Scaling up matrix computations on shared-memory manycore systems with 1000 CPU cores (ACMNew York, NY, USA, 2014), pp. 333–342. https://doi.org/10.1145/2597652.2597670.
S. Ioffe, C. Szegedy, in Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15. Batch normalization: accelerating deep network training by reducing internal covariate shift (JMLR.org, 2015), pp. 448–456. http://dl.acm.org/citation.cfm?id=3045118.3045167.
C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, Z. Wojna, in 2016 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Rethinking the inception architecture for computer vision, (2016). https://doi.org/10.1109/CVPR.2016.308.
T. Lin, A. RoyChowdhury, S. Maji, in 2015 IEEE International Conference on Computer Vision (ICCV). Bilinear CNN models for fine-grained visual recognition, (2015), pp. 1449–1457. https://doi.org/10.1109/ICCV.2015.170.
A. Canziani, A. Paszke, E. Culurciello, An analysis of deep neural network models for practical applications. CoRR. abs/1605.07678: (2016). http://arxiv.org/abs/1605.07678.
S. Bianco, R. Cadène, L. Celona, P. Napoletano, Benchmark analysis of representative deep neural network architectures. CoRR. abs/1810.00736: (2018). http://arxiv.org/abs/1810.00736.
Q. Cao, L. Shen, W. Xie, O. M. Parkhi, A. Zisserman, in International Conference on Automatic Face and Gesture Recognition. Vggface2: a dataset for recognising faces across pose and age, (2018). https://doi.org/10.1109/fg.2018.00020.
Y. Wen, K. Zhang, Z. Li, Y. Qiao, in Computer vision – ECCV 2016, ed. by B. Leibe, J. Matas, N. Sebe, and M. Welling. A discriminative feature learning approach for deep face recognition (SpringerCham, 2016), pp. 499–515.
J. Deng, Y. Zhou, S. Zafeiriou, in 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Marginal loss for deep face recognition, (2017), pp. 2006–2014. https://doi.org/10.1109/CVPRW.2017.251.
X. Zhang, Z. Fang, Y. Wen, Z. Li, Y. Qiao, in 2017 IEEE International Conference on Computer Vision (ICCV). Range loss for deep face recognition with long-tailed training data, (2017), pp. 5419–5428. https://doi.org/10.1109/ICCV.2017.578.
W. Liu, Y. Wen, Z. Yu, M. Yang, in Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML'16. Large-margin softmax loss for convolutional neural networks (JMLR.org, 2016), pp. 507–516. http://dl.acm.org/citation.cfm?id=3045390.3045445.
F. Wang, J. Cheng, W. Liu, H. Liu, Additive margin softmax for face verification. IEEE Signal Process. Lett.25(7), 926–930 (2018). https://doi.org/10.1109/LSP.2018.2822810.
B. Chen, W. Deng, J. Du, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Noisy softmax: improving the generalization ability of dcnn via postponing the early softmax saturation, (2017). https://doi.org/10.1109/CVPR.2017.428.
W. Wan, Y. Zhong, T. Li, J. Chen, Rethinking feature distribution for loss functions in image classification. CoRR. abs/1803.02988: (2018). http://arxiv.org/abs/1803.02988.
X. Qi, L. Zhang, Face recognition via centralized coordinate learning. CoRR. abs/1801.05678: (2018). http://arxiv.org/abs/1801.05678.
Y. Sun, X. Wang, X. Tang, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Sparsifying neural network connections for face recognition, (2016), pp. 4856–4864. https://doi.org/10.1109/CVPR.2016.525.
D. Yi, Z. Lei, S. Liao, S. Z. Li, Learning face representation from scratch. CoRR. abs/1411.7923: (2014). http://arxiv.org/abs/1411.7923.
J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, R. Shah, in Proceedings of the 6th International Conference on Neural Information Processing Systems, NIPS'93. Signature verification using a "siamese" time delay neural network (Morgan Kaufmann Publishers Inc.San Francisco, CA, USA, 1993), pp. 737–744. http://dl.acm.org/citation.cfm?id=2987189.2987282.
R. Hadsell, S. Chopra, Y. LeCun, in 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), 2. Dimensionality reduction by learning an invariant mapping, (2006), pp. 1735–1742. https://doi.org/10.1109/CVPR.2006.100.
A. S. Razavian, H. Azizpour, J. Sullivan, S. Carlsson, in 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops. CNN features off-the-shelf: an astounding baseline for recognition, (2014), pp. 512–519. https://doi.org/10.1109/CVPRW.2014.131.
S. J. Pan, Q. Yang, A survey on transfer learning. IEEE Trans. Knowl. Data Eng.22(10), 1345–1359 (2010). https://doi.org/10.1109/TKDE.2009.191.
K. Kim, Z. Yang, I. Masi, R. Nevatia, G. Medioni, in 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). Face and body association for video-based face recognition, (2018), pp. 39–48. https://doi.org/10.1109/WACV.2018.00011.
H. Li, G. Hua, X. Shen, Z. Lin, J. Brandt, in Computer vision – ACCV 2014, ed. by D. Cremers, I. Reid, H. Saito, and M. -H. Yang. Eigen-pep for video face recognition (SpringerCham, 2015), pp. 17–33.
Z. Liu, H. Hu, J. Bai, S. Li, S. Lian, in The IEEE International Conference on Computer Vision (ICCV) Workshops. Feature aggregation network for video face recognition, (2019). https://doi.org/10.1109/iccvw.2019.00128.
L. Wolf, T. Hassner, I. Maoz, in 2011 IEEE Conference on Computer Vision and Pattern Recognition. Face recognition in unconstrained videos with matched background similarity, (2011), pp. 529–534. https://doi.org/10.1109/CVPR.2011.5995566.
B. Chen, C. Chen, W. H. Hsu, Face recognition and retrieval using cross-age reference coding with cross-age celebrity dataset. IEEE Trans. Multimed.17(6), 804–815 (2015). https://doi.org/10.1109/TMM.2015.2420374.
B. -C. Chen, C. -S. Chen, W. H. Hsu, in Computer vision – ECCV 2014, ed. by D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars. Cross-age reference coding for age-invariant face recognition and retrieval (SpringerCham, 2014), pp. 768–783.
T. Zheng, W. Deng, J. Hu, in 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Age estimation guided convolutional neural network for age-invariant face recognition, (2017), pp. 503–511. https://doi.org/10.1109/cvprw.2017.77.
K. Ricanek, T. Tesafaye, in 7th International Conference on Automatic Face and Gesture Recognition (FGR06). Morph: a longitudinal image database of normal adult age-progression, (2006), pp. 341–345. https://doi.org/10.1109/FGR.2006.78.
T. Zheng, W. Deng, Cross-pose LFW: a database for studying cross-pose face recognition in unconstrained environments. Technical Report 18-01 (Beijing University of Posts and Telecommunications, 2018).
V. Kushwaha, M. Singh, R. Singh, M. Vatsa, N. Ratha, R. Chellappa, in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Disguised faces in the wild, (2018), pp. 1–18. https://doi.org/10.1109/CVPRW.2018.00008.
Y. Taigman, M. Yang, M. Ranzato, L. Wolf, Web-scale training for face identification. CoRR. abs/1406.5266: (2014). http://arxiv.org/abs/1406.5266.
P. J. Phillips, P. Grother, R. Micheals, Evaluation methods in face recognition. (S. Z. Li, A. K. Jain, eds.) (Springer, London, 2011). https://doi.org/10.1007/978-0-85729-932-1_21.
P. Viola, M. Jones, in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, vol. 1. Rapid object detection using a boosted cascade of simple features, (2001). https://doi.org/10.1109/CVPR.2001.990517.
Y. Sun, X. Wang, X. Tang, Hybrid deep learning for face verification. IEEE Trans. Pattern Anal. Mach. Intell.38(10), 1997–2009 (2016). https://doi.org/10.1109/TPAMI.2015.2505293.
Y. Guo, L. Zhang, Y. Hu, X. He, J. Gao, in European Conference on Computer Vision, vol. 9907. Ms-celeb-1m: a dataset and benchmark for large-scale face recognition, (2016), pp. 87–102. https://doi.org/10.1007/978-3-319-46487-9_6.
H. Ng, S. Winkler, in 2014 IEEE International Conference on Image Processing (ICIP). A data-driven approach to cleaning large face datasets, (2014), pp. 343–347. https://doi.org/10.1109/ICIP.2014.7025068.
G. Panis, A. Lanitis, An Overview of Research Activities in Facial Age Estimation Using the FG-NET Aging Database. 8926:, 737–750 (2015). https://doi.org/10.1007/978-3-319-16181-5_56.
I. Kemelmacher-Shlizerman, S. Suwajanakorn, S. M. Seitz, in 2014 IEEE Conference on Computer Vision and Pattern Recognition. Illumination-aware age progression, (2014), pp. 3334–3341. https://doi.org/10.1109/CVPR.2014.426.
B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, L. Li, The new data and new challenges in multimedia research. CoRR. abs/1503.01817: (2015). http://arxiv.org/abs/1503.01817.
F. H. d.B. Zavan, N. Gasparin, J. C. Batista, L. P. e.Silva, V. Albiero, O. R. P. Bellon, L. Silva, in 2017 30th SIBGRAPI Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T). Face analysis in the wild, (2017), pp. 9–16. https://doi.org/10.1109/SIBGRAPI-T.2017.11.
Y. Wu, Q. Ji, Facial landmark detection: a literature survey. Int. J. Comput. Vis.127(2), 115–142 (2019). https://doi.org/10.1007/s11263-018-1097-z.
Y. Wu, T. Hassner, K. Kim, G. Medioni, P. Natarajan, Facial landmark detection with tweaked convolutional neural networks. IEEE Trans. Pattern Anal. Mach. Intell.40(12), 3067–3074 (2018).
K. Zhang, Z. Zhang, Z. Li, Y. Qiao, Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Process. Lett.23(10), 1499–1503 (2016). https://doi.org/10.1109/LSP.2016.2603342.
O. Çeliktutan, S. Ulukaya, B. Sankur, A comparative study of face landmarking techniques. EURASIP J. Image Video Process.2013(1), 13 (2013). https://doi.org/10.1186/1687-5281-2013-13.
B. Johnston, P. d. Chazal, A review of image-based automatic facial landmark identification techniques. EURASIP J. Image Video Process.2018(1), 86 (2018). https://doi.org/10.1186/s13640-018-0324-4.
X. Lin, Y. Liang, J. Wan, C. Lin, S. Z. Li, Trans. Multimed. IEEE, Region-based context enhanced network for robust multiple face alignment, 1–1 (2019). https://doi.org/10.1109/TMM.2019.2916455.
R. Weng, J. Lu, Y. Tan, J. Zhou, Learning cascaded deep auto-encoder networks for face alignment. IEEE Trans. Multimed.18(10), 2066–2078 (2016). https://doi.org/10.1109/TMM.2016.2591508.
W. AbdAlmageed, Y. Wu, S. Rawls, S. Harel, T. Hassner, I. Masi, J. Choi, J. Lekust, J. Kim, P. Natarajan, R. Nevatia, G. Medioni, in 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). Face recognition using deep multi-pose representations, (2016), pp. 1–9. https://doi.org/10.1109/WACV.2016.7477555.
A. Bulat, G. Tzimiropoulos, How far are we from solving the 2D & 3D face alignment problem? (and a dataset of 230,000 3D facial landmarks). CoRR. abs/1703.07332: (2017). http://arxiv.org/abs/1703.07332.
F. Chang, A. T. Tran, T. Hassner, I. Masi, R. Nevatia, G. G. Medioni, Faceposenet: making a case for landmark-free face alignment. CoRR. abs/1708.07517: (2017). http://arxiv.org/abs/1708.07517.
X. Zhu, Z. Lei, X. Liu, H. Shi, S. Z. Li, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Face alignment across large poses: a 3D solution, (2016), pp. 146–155. https://doi.org/10.1109/cvpr.2016.23.
A. Vedaldi, K. Lenc, in Proceeding of the ACM Int. Conf. on Multimedia. Matconvnet – convolutional neural networks for matlab, (2015). https://doi.org/10.1145/2733373.2807412.
X. Cao, D. Wipf, F. Wen, G. Duan, J. Sun, in 2013 IEEE International Conference on Computer Vision. A practical transfer learning algorithm for face verification, (2013), pp. 3208–3215. https://doi.org/10.1109/ICCV.2013.398.
D. Chen, X. Cao, L. Wang, F. Wen, J. Sun, in ECCV 2012. Bayesian face revisited: a joint formulation, (2012). https://www.microsoft.com/en-us/research/publication/bayesian-face-revisited-a-joint-formulation/.
D. Chen, X. Cao, L. Wang, F. Wen, J. Sun, in Computer vision – ECCV 2012, ed. by A. Fitzgibbon, S. Lazebnik, P. Perona, Y. Sato, and C. Schmid. Bayesian face revisited: a joint formulation (SpringerBerlin, Heidelberg, 2012), pp. 566–579.
Z. Huang, S. Shan, R. Wang, H. Zhang, S. Lao, A. Kuerban, X. Chen, A benchmark and comparative study of video-based face recognition on cox face database. IEEE Trans. Image Process. Publ. IEEE Signal Process. Soc.24: (2015). https://doi.org/10.1109/TIP.2015.2493448.
J. Chen, T. Takiguchi, Y. Ariki, A robust SVM classification framework using PSM for multi-class recognition. EURASIP J. Image Video Process.2015(1), 7 (2015). https://doi.org/10.1186/s13640-015-0061-x.
V. Vapnik, R. Izmailov, Knowledge transfer in SVM and neural networks. Ann. Math. Artif. Intell.81(1), 3–19 (2017). https://doi.org/10.1007/s10472-017-9538-x.
MathSciNet MATH Google Scholar
P. Wei, Z. Zhou, L. Li, J. Jiang, Research on face feature extraction based on k-mean algorithm. EURASIP J. Image Video Process.2018(1), 83 (2018). https://doi.org/10.1186/s13640-018-0313-7.
J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, K. Murphy, Speed/accuracy trade-offs for modern convolutional object detectors. CoRR. abs/1611.10012: (2016). http://arxiv.org/abs/1611.10012.
S. W. Arachchilage, E. Izquierdo, in 2019 IEEE Visual Communications and Image Processing (VCIP). A framework for real-time face-recognition, (2019), pp. 1–4. https://doi.org/10.1109/VCIP47243.2019.8965805.
W. Jiang, W. Wang, in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Face detection and recognition for home service robots with end-to-end deep neural networks, (2017), pp. 2232–2236. https://doi.org/10.1109/ICASSP.2017.7952553.
C. Ding, D. Tao, Robust face recognition via multimodal deep face representation. IEEE Trans. Multimed.17(11), 2049–2058 (2015). https://doi.org/10.1109/TMM.2015.2477042.
Z. Cui, S. Shan, X. Chen, L. Zhang, in Face and Gesture 2011. Sparsely encoded local descriptor for face recognition, (2011), pp. 149–154. https://doi.org/10.1109/FG.2011.5771389.
C. Ding, J. Choi, D. Tao, L. S. Davis, Multi-directional multi-level dual-cross patterns for robust face recognition. IEEE Trans. Pattern Anal. Mach. Intell.38(3), 518–531 (2016). https://doi.org/10.1109/TPAMI.2015.2462338.
Z. Cao, Q. Yin, X. Tang, J. Sun, in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Face recognition with learning-based descriptor, (2010), pp. 2707–2714. https://doi.org/10.1109/CVPR.2010.5539992.
Z. Cui, W. Li, D. Xu, S. Shan, X. Chen, in 2013 IEEE Conference on Computer Vision and Pattern Recognition. Fusing robust face region descriptors via multiple metric learning for face recognition in the wild, (2013), pp. 3554–3561. https://doi.org/10.1109/CVPR.2013.456.
A. Afaneh, F. Noroozi, Ö. Toygar, Recognition of identical twins using fusion of various facial feature extractors. EURASIP J. Image Video Process.2017(1), 81 (2017). https://doi.org/10.1186/s13640-017-0231-0.
B. Wu, S. Lyu, B. Hu, Q. Ji, in 2013 IEEE International Conference on Computer Vision. Simultaneous clustering and tracklet linking for multi-face tracking in videos, (2013), pp. 2856–2863. https://doi.org/10.1109/ICCV.2013.355.
S. Zhang, J. Huang, J. Lim, Y. Gong, J. Wang, N. Ahuja, M. Yang, Tracking persons-of-interest via unsupervised representation adaptation. CoRR. abs/1710.02139: (2017). http://arxiv.org/abs/1710.02139.
M. Roth, M. Bäuml, R. Nevatia, R. Stiefelhagen, in Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012). Robust multi-pose face tracking by multi-stage tracklet association, (2012), pp. 1012–1016.
B. Wu, Y. Zhang, B. Hu, Q. Ji, in 2013 IEEE Conference on Computer Vision and Pattern Recognition. Constrained clustering and its application to face clustering in videos, (2013), pp. 3507–3514. https://doi.org/10.1109/CVPR.2013.450.
C. Yan, L. Li, C. Zhang, B. Liu, Y. Zhang, Q. Dai, Cross-modality bridging and knowledge transferring for image understanding. IEEE Trans. Multimed.21(10), 2675–2685 (2019). https://doi.org/10.1109/TMM.2019.2903448.
C. Yan, H. Xie, J. Chen, Z. Zha, X. Hao, Y. Zhang, Q. Dai, A fast uyghur text detector for complex background images. IEEE Trans. Multimed.20(12), 3389–3398 (2018). https://doi.org/10.1109/TMM.2018.2838320.
S. Navid Hajimirza, M. Proulx, E. Izquierdo, Reading users' minds from their eyes: a method for implicit image annotation. Multimed. IEEE Trans.14:, 805–815 (2012). https://doi.org/10.1109/TMM.2012.2186792.
C. Yan, Y. Tu, X. Wang, Y. Zhang, X. Hao, Y. Zhang, Q. Dai, Stat: spatial-temporal attention mechanism for video captioning. Trans. Multimed. IEEE, 1–1 (2019). https://doi.org/10.1109/TMM.2019.2924576.
X. Wang, Q. Ruan, Y. Jin, G. An, Three-dimensional face recognition under expression variation. EURASIP J. Image Video Process.2014(1), 51 (2014). https://doi.org/10.1186/1687-5281-2014-51.
L. Yang, J. Ma, J. Lian, Y. Zhang, H. Liu, Deep representation for partially occluded face verification. EURASIP J. Image Video Process.2018(1), 143 (2018). https://doi.org/10.1186/s13640-018-0379-2.
E. Izquierdo, M. Ghanbari, Key components for an advanced segmentation system. IEEE Trans. Multimed.4(1), 97–113 (2002). https://doi.org/10.1109/6046.985558.
H. Jiang, G. Zhang, H. Wang, H. Bao, Spatio-temporal video segmentation of static scenes and its applications. IEEE Trans. Multimed.17(1), 3–15 (2015). https://doi.org/10.1109/TMM.2014.2368273.
X. Sun, J. Foote, D. Kimber, B. S. Manjunath, Region of interest extraction and virtual camera control based on panoramic video capturing. IEEE Trans. Multimed.7(5), 981–990 (2005). https://doi.org/10.1109/TMM.2005.854388.
L. Dong, L. He, M. Mao, G. Kong, X. Wu, Q. Zhang, X. Cao, E. Izquierdo, Cunet: a compact unsupervised network for image classification. IEEE Trans. Multimed.20(8), 2012–2021 (2018). https://doi.org/10.1109/TMM.2017.2788205.
J. Bruna, S. Mallat, Invariant scattering convolution networks. IEEE Trans. Pattern Anal. Mach. Intell.35(8), 1872–1886 (2013). https://doi.org/10.1109/TPAMI.2012.230.
R. Hankins, Y. Peng, H. Yin, in Intelligent Data Engineering and Automated Learning – IDEAL 2018, ed. by H. Yin, D. Camacho, P. Novais, and A. J. Tallón-Ballesteros. Towards complex features: competitive receptive fields in unsupervised deep networks (SpringerCham, 2018), pp. 838–848.
Radius of Convergence Calculator
Radius of Convergence Lesson
What is Radius of Convergence?
The radius of convergence of a power series is the radius of the disk (an interval, for real x) on which the series converges absolutely. It can be zero, a positive number, or infinity.
A power series is an infinite series of the form:
$$\sum_{n = 0}^{\infty} c_n \left( x - a \right)^n$$
where c_n is a coefficient that depends on n, a is the center of the series, and the series as a whole defines a function of x.
Now, let's take a deeper dive into what convergence means in the context of a power series. When we add infinitely many terms, as we do with a power series, the partial sums either approach a finite limit or they do not.

If the partial sums approach a finite limit, the series converges; otherwise it diverges. The series converges absolutely when the sum of the absolute values of its terms converges — the stronger condition that holds inside the disk of convergence. When we solve for the radius of convergence, we are finding the largest value of R such that the series converges for every x with |x - a| < R.
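As a concrete illustration (not one from the original page), consider the geometric series, whose radius of convergence can be read off directly:

$$\sum_{n=0}^{\infty} x^n = \frac{1}{1 - x} \quad \text{for } \lvert x \rvert < 1$$

Here a = 0 and R = 1: the series converges absolutely for |x| < 1 and diverges for |x| > 1.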
Why Do We Learn Radius of Convergence?
Compared to humans, computers are really good at certain types of calculations but have difficulty performing other types. For example, the seemingly simple e^x button commonly found on hand calculators is one the calculator's computer cannot easily and accurately evaluate directly.

By learning how to find the radius of convergence, we can program an otherwise incapable computer to find the value of e^x indirectly, using a power series.

If we evaluate e^x naively with a large exponent, a calculator's computer has to multiply large, messy numbers by large, messy numbers many times over. Because of how computers store floating-point numbers and accumulate round-off error, this process can take a long time and can give an inaccurate answer.
Luckily, the power series

$$f(x) = \sum_{n = 0}^{\infty} \frac{x^n}{n!}$$

represents the expression e^x when carried out to many terms. If we check the radius of convergence for this power series, we find that it is R = ∞ and that the interval of convergence is −∞ < x < ∞. This is great news because it means the power series converges everywhere and can be used for e^x with all possible input x values.
By programming this routine into a computer, we enable it to quickly and accurately solve for the value of e^x with any value of x. This is just one example of a use for the radius of convergence, and there are many more applications that work behind the scenes inside computer software to help us every day!
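As a minimal sketch of that routine (my own illustration in Python, not the site's JavaScript; the term count of 30 is an arbitrary choice that is plenty for moderate x):

```python
import math

def exp_series(x, terms=30):
    """Approximate e**x by summing the first `terms` terms of sum(x**n / n!)."""
    total = 0.0
    term = 1.0                # the n = 0 term: x**0 / 0! = 1
    for n in range(terms):
        total += term
        term *= x / (n + 1)   # build x**n / n! incrementally to avoid huge factorials
    return total

print(exp_series(3.0))   # ~20.0855
print(math.exp(3.0))     # reference value for comparison
```

Because the series converges for every x, the only design question is how many terms are needed for the desired precision, not whether the sum converges at all.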
Calculating the Radius of Convergence of a Power Series
There are several tests we may use to solve for radius of convergence, including the ratio test and the root test. The ratio test is simple, works often, and is used by the calculator on this page so that is what we will learn about here.
The ratio test compares each term of the power series with the next one — a modified n + 1 version of itself — and solves for the range of x values that satisfies the convergence criterion. The ratio test formula is given as:
$$L = \lim_{n\to\infty} \left\lvert \frac{a_{n+1}}{a_{n}} \right\rvert, \qquad \text{convergence when } L < 1$$
where a_n is the nth term of the power series and a_{n+1} is the same term with every n replaced by n + 1.
First, we plug a_n and a_{n+1} into their respective places in the fraction inside the formula. Then, we simplify the fraction where possible.
Next, we evaluate the limit as n approaches infinity. Plugging infinity in for each instance of n can leave an expression that looks unsolvable, but the standard limit-reduction strategies apply, such as dividing through by the dominant term and discarding terms that become insignificant as n grows.
Once the limit is evaluated and reduced to its simplest form, we set it in the inequality L < 1. We will now have an inequality resembling the form

$$\frac{1}{c}\,\lvert x - a \rvert < 1$$
The radius of convergence is equal to the constant c, because multiplying both sides by c gives |x - a| < c, which describes the radius of x values around the center a that satisfy the convergence criterion.
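As a worked illustration (a made-up example, not one from the calculator page), apply the ratio test to the series with terms a_n = (x − 2)^n / (n · 3^n):

$$L = \lim_{n\to\infty} \left\lvert \frac{(x-2)^{n+1}}{(n+1)\,3^{n+1}} \cdot \frac{n\,3^n}{(x-2)^n} \right\rvert = \lim_{n\to\infty} \frac{n}{3(n+1)}\,\lvert x-2 \rvert = \frac{1}{3}\,\lvert x-2 \rvert$$

Requiring L < 1 gives |x − 2| < 3, so c = 3 and the radius of convergence is R = 3 about the center a = 2.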
How the Calculator Works
The calculator on this page is written in three common web frontend languages: HTML, CSS, and JavaScript (JS). It also utilizes a JS-native computer algebra system (CAS) which performs some algebraic steps during the computation process. Because the calculator is powered by JS code, it runs entirely inside your browser's built-in JS engine and provides instant solutions and steps (no page reload needed).
When you click the "calculate" button, the solving routine is called and progresses through several symbolic operations that mirror the steps of the ratio test. The routine also saves the state of the limit/expression throughout the process; these saved "states" are used to build the step-by-step solution.
Once a final answer is calculated, it is printed in the solution box. The solution steps are printed below the answer when a user with access is logged in. If the calculator runs into an error during computation, an error message will be displayed instead of the answer and steps.
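The page's JavaScript routine itself isn't shown, but the same computation can be sketched in a few lines of Python with SymPy, using the made-up example series from above:

```python
import sympy as sp

n = sp.symbols('n', positive=True, integer=True)

# nth coefficient of the hypothetical series sum (x - 2)**n / (n * 3**n)
a_n = 1 / (n * 3**n)

# Ratio test: R = lim |a_n / a_(n+1)| as n -> infinity
R = sp.limit(sp.Abs(a_n / a_n.subs(n, n + 1)), n, sp.oo)
print(R)  # prints 3, matching the hand calculation
```

This is only a sketch of the idea; a production CAS routine would also have to handle coefficients involving factorials, alternating signs, and cases where the limit fails to exist.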
Statistics Exam 3
Differences between means: type I and type II errors and power. We use inferential statistics to draw conclusions about a population from a sample — to help explain the outcomes of random phenomena and to make informed decisions. A type I error is rejecting a null hypothesis that is actually true; a type II error is failing to reject a null hypothesis that is actually false; power is the probability of correctly rejecting a false null hypothesis. The Q-test is a statistical tool used to identify an outlier within a data set. When normality is doubtful, nonparametric methods such as the Kruskal–Wallis test (whose two-group special case is the Mann–Whitney U-test) replace their parametric counterparts. For tests between percents, looking at the denominator of the test statistic tells us whether a one-sample or two-sample form is appropriate.
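Power is easiest to grasp by simulation. The following sketch (my own illustration; the sample size, effect size, and seed are all arbitrary) draws many samples under a specific alternative and counts how often a two-sample t-test rejects:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Estimate the power of a two-sample t-test by simulation:
# true means differ by 0.5 SD, n = 30 per group, alpha = 0.05.
alpha, n, effect, reps = 0.05, 30, 0.5, 10_000
rejections = 0
for _ in range(reps):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(effect, 1.0, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        rejections += 1

print(rejections / reps)  # roughly 0.47 with these settings
```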
The paired (dependent) t-test assesses whether there are mean differences when the same group of people has been assessed twice, such as when determining whether an intervention had an impact in a before-and-after design; note that a paired t-test cannot be run on summarized data, because it needs the individual paired differences. Friedman's test is its nonparametric extension: a test that compares three or more paired groups. For comparing more than two independent means, a typical exam setup runs as follows: a survey is conducted in three regions of West Malaysia — the northern, southern, and eastern regions; a random sample of 20 firms that hire statistics graduates is taken from each region, and the personnel manager of each firm states the starting salary for a new statistics graduate. The regional means are then compared with a one-way ANOVA, as sketched below.
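A minimal sketch of that regional comparison (the salary figures are invented, and only six firms per region are shown rather than the question's twenty):

```python
from scipy.stats import f_oneway

# Hypothetical starting salaries (in thousands) for three regions.
north = [32, 35, 31, 36, 33, 34]
south = [30, 29, 33, 31, 30, 32]
east  = [36, 38, 35, 39, 37, 36]

f_stat, p_value = f_oneway(north, south, east)
print(f_stat, p_value)  # a small p-value suggests the regional means differ
```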
Hypothesis testing starts with the null hypothesis. In the criminal justice system this is the presumption of innocence; in statistics it is the claim that the treatment didn't do anything. For example: the null hypothesis says the mean number of years of experience for teachers in the district is five, and the alternative says it is less than five. Chance alone produces plenty of variation — getting 7 heads and 3 tails in a test of 10 throws is entirely consistent with randomness — so a test statistic is needed to quantify how far the data fall from what the null hypothesis predicts. For a single proportion the test statistic is

$$z = \frac{p - P}{\sigma}$$

where P is the hypothesized value of the population proportion in the null hypothesis, p is the sample proportion, and σ is the standard deviation of the sampling distribution. For means, use the t family: the independent-samples t-test compares two separate groups (a statistics teacher comparing his two classes to see whether they performed any differently on the tests he gave that semester), while the dependent (paired) t-test detects changes over time when the same participants are measured on the same dependent variable at two time points. If the associated p-value is smaller than α = 0.05, we conclude there is a significant difference after treatment. Critical values of the t distribution are determined by the significance level α (one- or two-tailed) and the degrees of freedom.
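A sketch of both flavors of t-test with made-up scores:

```python
from scipy import stats

# Independent samples: test scores from two separate classes.
class_a = [70, 85, 78, 92, 66, 74, 81, 88]
class_b = [75, 80, 72, 95, 70, 79, 83, 90]
print(stats.ttest_ind(class_a, class_b))

# Paired samples: the same students measured before and after an intervention.
before = [12, 15, 11, 14, 13, 16]
after  = [14, 17, 12, 15, 15, 18]
print(stats.ttest_rel(before, after))  # consistent improvement -> small p-value
```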
Recall the measures of central tendency (center): the mean, the median (middle value), and the mode (most frequent value). Conditional probability appears too: a pregnancy test is known to be 90% accurate — meaning it gives positive results to positive cases 90% of the time — and the test produces a positive result; interpreting that result correctly also requires the base rate of positive cases, not just the test's accuracy. For samples larger than about 30, the Mann–Whitney U statistic approximately follows the z (normal) distribution, which is why large-sample procedures switch to the normal approximation. Finally, the chi-square goodness-of-fit test compares observed counts with the counts expected under a hypothesized distribution; when the calculated statistic is so large that it is off the scale of values produced by simulating the null hypothesis, the P-value is effectively zero.
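A minimal goodness-of-fit sketch (the die-roll counts are invented):

```python
from scipy.stats import chisquare

# Goodness of fit for a die claimed to be fair: 60 rolls, expected 10 per face.
observed = [5, 8, 9, 8, 10, 20]
stat, p = chisquare(observed)  # expected defaults to the uniform distribution
print(stat, p)                 # statistic ~13.4 on df = 5, p ~ 0.02
```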
Why the t distribution rather than the normal? When the scaling term (the population standard deviation) is unknown and is replaced by an estimate based on the data, the test statistic — under certain conditions — follows a Student's t distribution. The Mann–Whitney test is the nonparametric way to compare two unpaired groups, with no normality assumption; the form of these test statistics is clearest when only two groups are being compared.
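A sketch with invented scores from two unpaired groups:

```python
from scipy.stats import mannwhitneyu

group_a = [12, 15, 11, 19, 14, 17, 13]
group_b = [22, 18, 25, 21, 24, 20, 23]
result = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(result.statistic, result.pvalue)  # group_b is systematically higher, so p is small
```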
P-values for goodness-of-fit statistics come from the chi-square distribution with the appropriate degrees of freedom — df = 2 for three categories. The approximation weakens when expected counts are small; Koehler KJ, Larntz K (1980) investigated goodness-of-fit statistics for sparse multinomials empirically. Some of these practice questions are based on UTSC's PSYB07 statistics midterm exam from Fall 2013.
Practice calculating the test statistic in a one-sample t test for a mean. Remember the empirical rule: for a normal distribution, almost all values (about 99.7%) lie within 3 standard deviations of the mean. Exam logistics: you can use a 3"×5" card with notes, your calculator, and the z-tables provided on the back of the test. For an r × k contingency table with small counts, use Fisher's exact test: the probability of each individual table is given by the hypergeometric distribution, and the P-value adds the probability of the observed pattern of frequencies to the probabilities of all other patterns that reflect a greater difference between conditions.
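A minimal Fisher's exact sketch for a 2×2 table (counts invented):

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = treatment/control, columns = success/failure.
table = [[8, 2],
         [1, 9]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(odds_ratio, p_value)  # large odds ratio with a small p-value
```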
Question 3 A statistics professor gave an exam on performing the T-Test with o unknown. The on-campus Stat 110 course has grown from 80 students to over 300 students per year in that time. Terms in this set (32) any sample size. 2 Students could select and use an appropriate statistical test to find the significance of differences in. Here we are providing the facility to download Latest Edition. Don't waste your opportunity. net is a robust, easy-to-use and secure exam platform. This means that the examination is conducted as an offline examination. (2-tailed) value. Both classes had a mean score of 73. A time management plan. Part 1 Teachers' and Learners' language in the classroom. Weekly Roundup: October 19-23, 2020 COVID-19 Puts Lean in the Crosshairs of Controversy What Is Neural Supply Chain Management? Machines to 'Do Half of All Work Tasks By 2025' Can VR Training Make Remote Work Engaging Again?. Test statistics. This is an. EFFECT SIZE STATISTICS FOR THE MANN-WHITNEY U TEST. Online practice tests with challenging SAT math questions and thorough explanations of the problems. The AP Statistics exam is one of the longer AP exams, clocking in at three hours. ANS: A DIF: 2 TOP: 1. We emphasize that these are general guidelines and should not be construed as hard and fast rules. The final exam scores were approximately normally distributed with a mean of 112 points and a standard deviation of 10 points. Search Constraints Start Over You searched for: Subjects World War II Remove constraint Subjects: World War II Genre Statistics Remove constraint Genre: Statistics Titles Detailed report on test of adequacy of K-2 ration in the desert: project no. The coronavirus pandemic has made a statistician out of us all. TB Incidence in the United States; Reported TB in the US, 2018 Surveillance Report; Trends in Tuberculosis 2018. Help with SPSS to deconstruct Significant 3-way interaction for 2x2x4 mixed ANOVA -- Syntax for Test of Simple Interactions? Posted by 9 hours ago Hi folks, I am looking for some help with deconstructing a signficant three-way interaction found when running a 2 (within) x 2 (within) x 4 (between) mixed ANOVA using GLM RM in SPSS. Insurance Agents. The study of statistics will serve to enhance and further develop these skills. Every percentage can be expressed as a fraction. If the absolute value of the test statistic is greater than the critical value, then the null hypothesis is rejected. Test Bank|Solution Manual For : Discovering Statistics Using SPSS (Introducing Statistical Method) [Paperback] Andy Field (Author). com connection test page, and are updated on a monthly basis. To Obtain an Independent-Samples T Test. 5 minutes per day talking with their children. An introductory-level textbook in statistics covering descriptive and inferential statistics. See real world download speeds for Frontier Communications based on over 1. 3 is released (a bug-fix release)" Author Tal Galili Posted on December 8, 2017 Categories R Leave a comment on R 3. These are examined and classified according to their characteristics and saved. First, the contingency table is converted into standard format, as shown in range H3:J12. An R tutorial on the F distribution. 100% (11) Pages: 4 year: 2017/2018. I'm looking to generate some statistics about a model I created in python. School Exams. and world population estimates changing live with the Population Clock. 6% CAUCASIAN OVCHARKA 20 17 3 85. 97: $35,290: Electronic Shopping and Mail. Melanie decides to gather more evidence. 
There Are 20 Versions In The Bank For Each Question. Encourage your students to visit the AP Statistics student page for exam information and exam practice. Start studying Statistics Exam 3. Statistics is what makes us able to collect, organize, display, interpret, analyze, and present data. kolmogorovSmirnovTest (parallelData, "norm", 0, 1) # summary of the test including the p-value, test statistic, and null hypothesis # if our p-value. Expect outstanding work done in time. Don't confuse z scores and areas. The Range (Statistics). Exam P is a three–hour multiple–choice examination and is offered via computer–based testing (CBT). Match the characteristic to the corresponding type of routing. Take a look at the Sig. What is Clearing? Finance. Speed tests we analyze to show statistics for AT&T on BroadbandNow are sourced from the M-Labs database, which aggregates AT&T speed tests run on BroadbandNow as well as in Google's search result tools. All statistics software packages provide these p-values. 23) 24) You wish to test the claim that μ ≤ 38 at a level of significance of α= 0. Definitions; Notes; Generating Random Numbers on the TI-82; Chapter 2 - Describing, Exploring, and Comparing Data. OpenEpi requires Javascript--not available or turned off in this browser. What is the 99% confidence interval of the mean? Degrees of freedom (DF) is n−1 = 31, t-value in column for area 0. 08037069 10 15. Asia did well in the report, with South Korea, Hong Kong, Japan and Singapore. Notes, Practice, Mock Exam & Guides. Live coverage of the examinations showing details such as candidates taking. doi: https://doi. 2013 statistics. Encourage your students to visit the AP Statistics student page for exam information and exam practice. Invite your contacts to take the test. One student had a score of 58 points on the midterm. The only difference is that in the z-test we use , and in the t-test we use. 4 Sample Runs Test Referencing a Custom Value 219. Free real IQ test. The ACT test is a curriculum-based education and career planning tool for high school students that assesses the mastery of college readiness standards. TEST STUDY GUIDES Pass the test the first time. fits the Y variable significantly better than a linear regression--Analysis of covariance (ancova) 1: 2 – test the hypothesis that different groups have the same regression lines: first test the homogeneity of slopes; if they are not significantly different, test the homogeneity of the. Tables of critical values can take up a lot of room. New york bar exam. ANS: A DIF: 2 TOP: 1. By Keith McCormick, Jesus Salcedo, Aaron Poh. Home/Cisco/CCNA 3 Exam Answers/CCNA 3 Scaling Networks v6. The difference between 29 and 30 degrees is the same magnitude as the difference between 78 and 79 (although I know I prefer the latter). 304179 18 0. Although all our examinations are now delivered online they are examiner-marked and therefore subject to restrictions on candidate numbers. Start studying Statistics Exam #3. For example, if the test is increased from 5 to 10 items. AT&T's average Internet speeds are based on the last twelve months of speed test data. 9 Peacemaker: I am at peace. FHS Mathematics and Statistics Part A. Compute the value of the standardized test statistic. Test statistics. Chapter 9 Test. Create private or public online tests. Preparing for exams? Give yourself the best chance with these top ten study tips, and try not to let the stress get to you during this period of exam preparation. Sample Exam III: Chapters 5 & 6. 
It is designed for any of the following interrelated purposes: Perform geographical surveillance of disease, to detect spatial or space-time disease clusters, and to see if they are statistically significant. The final exam scores were approximately normally distributed with a mean of 112 points and a standard deviation of 10 points. This module focuses on Analysis of Variance, but this technique makes assumptions about the underlying distributions in our data. Makeup exam - anyone get spam filters for question 1? (self. Works with most CI services. Interval data is like ordinal except we can say the intervals between each value are equally split. We compare the value of our statistic (3. The ceilometer. Overview: TTEST Procedure; Getting Started: TTEST Procedure. exam preparation with the best practice questions. Get Notes and Test Papers for Statistics Data Analysis, Statistics Data, Statistics Analysis Quiz, Statistic Test, Statistics, MBA, MBA Entrance, MBA Exam, MBA Entrance Exam, MCA Exam, MCA Entrance, MCA Entrance Exam, MCA. Box 7023 Merrifield, VA 22116-7023. NCBE collects statistics from all US jurisdictions on the February and July administrations of the bar examination and on annual admissions. Dodgers third baseman Justin Turner was removed from World Series Game 6 during the eighth inning in L. Live world statistics on population, government and economics, society and media, environment Interesting statistics with world population clock, forest loss this year, carbon dioxide co2 emission. Stuart explains everything clearly and with great working. Researchers analysed millions of statistics on exam grades, literacy rates, attendance, and university graduation rates. The stem is the first digit of a student's score and the leaf is the second digit. Part 1 of 16 - 2. State Decision Rule. Gapminder's Vice-President, Anna R. a parameter. Two-Sample T-Test Assuming Equal Variances Assumptions: 1. The exam pass rates for the Advanced Level exams are shown below. The on-campus Stat 110 course has grown from 80 students to over 300 students per year in that time. 05 then this means that values of the test statistic as large as, or larger than calculated from the data would occur by chance less than 5 times in 100 if the null hypothesis was indeed correct. 3 Counts of deaths involving influenza include deaths with pneumonia or COVID-19 also listed as a cause of death. ©BFW Publishers The Practice of Statistics for AP*, 5/e Test 3B AP Statistics Name: Part 1: Multiple Choice. answer is not perfectly correct, then no partial credit can be awarded. The stem is the first digit of a student's score and the leaf is the second digit. Report your results in words that people can understand. For example, if a researcher aims to find the average height of a tribe in Columbia, the variable would simply be the height of the person in the sample. GCSE, International GCSE, Edexcel Award Level 1 & 2, ELC. One of the College Board's more recent additions, the AP Statistics exam was first administered in May 1996 to supplement the AP program's math offerings, which had previously consisted of only AP Calculus AB and BC. 8% of pillar 1 test results were made available within 24 hours. Take one of our many Statistics practice tests for a run-through of commonly asked questions. gov means it's official. Statistics and records from England's one-wicket win over Australia in the third Ashes test at Headingley that levelled the series at 1-1 on Sunday. 
Schools in Alaska must begin the morning exam administration between 7 and 8 a. 518—Nonparametric Statistical Methods (3) (Prereq: A grade of C or better in STAT 515 or equivalent) Application of nonparametric statistical methods rather than mathematical development. (Not all options are used. North-East Centre, Tezpur. 1 Ultimately, she would like to know the. The SAT (/ ˌ ɛ s ˌ eɪ ˈ t iː / ess-ay-TEE) is a standardized test widely used for college admissions in the United States. 8th percentile or 62nd percentile. Only three counties in Kansas have COVID-19 case rates and positive coronavirus test rates in the green zone of the state's school reopening guide. GCSE, International GCSE, Edexcel Award Level 1 & 2, ELC. Chapter 8 Test. Where you lose efficiency with nonparametric methods is with estimation of absolute quantities, not with comparing groups or testing correlations. Machine Learning. Link to University guidance on plagiarism. Monthly statistics. Attached PDF file includes question and a appendix with reference tables. The test statistic compares your data with what is expected under the null hypothesis. We'll review your answers and create a Test Prep Plan for you based. as of December 2017 The pass-fail rate is not a measure of a breed's aggression, but rather of each dog's ability to interact with humans, human situations, and the environment. IELTS score calculator instantly display IELTS listening and reading test score on a nine-band scale. 5 Population is based on 2019 postcensal estimates from the U. When you have a sample size that is greater than approximately 30, the Mann-Whitney U statistic follows the z distribution. Census Bureau terminated the collection of data for the Statistical Compendia program effective October 1, 2011. Chapter 9 Test. Dr Berni Sewell, PhD is a health scientist, energy healer, and self-worth blogger. Each test is worth 20% of the final grade, the final exam is 25% of the final grade, and the homework grade is 15% of the final grade. 3 Sample Runs Test (Large Data Samples) 217. We will also ask you (optionally) to report your attitudes or beliefs about. Click here to download Examination Center List. See a description of the test on the TT Test Description page. Questions begin on page 6. The questions are written in a more challenging and applicative way to encourage some open book use. If you did not. The test statistic is a z-score (z) defined by the following equation. Tier III Exam will be Pen and Paper based examination. Qualifications For The Future – Closing the Skills Gap. Biometrics 10:417-451. 3222 [email protected] Why do we use inferential statistics? a) to help explain the outcomes of random phenomena b) to make informed. The latest Exam Answers collections. You can use the criteria as parameters to check your pace and weakness areas. Wolfram|Alpha brings expert-level knowledge and capabilities to the broadest possible range of people—spanning all professions and education levels. computers and statistics packages that computed exact p-values. Exam Tip! Read the whole sentence or the whole text. 05 = α, we conclude there is a significant difference after treatment. Past data has shown that the regression line relating the final exam score and the midterm exam score for students who take statistics from a certain professor is: final exam = 50 + (0. Part III Part III. A Statistics Exam Is Created By Choosing For Each Question On The Exam One Possible Version At Random From A Bank Of Possible Versions Of The Question. 
English Exams, Statistics. 97: $35,290: Electronic Shopping and Mail. Dr Berni Sewell, PhD is a health scientist, energy healer, and self-worth blogger. Data Analysis, Statistics, and Probability are mathematical processes that help solve real-world problems. The test covered both descriptive and inferential statistics in brief. A t-test is the most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known. TB Incidence in the United States; Reported TB in the US, 2018 Surveillance Report; Trends in Tuberculosis 2018. GATE Exams Tutorials. The data presented on our web site is raw data; it is. Appendix 3; The Uses of Nucleic Acid Amplification Tests for the Diagnosis of TB plus icon. Question 872576: Two classes took a statistics test. There are 20 versions in the bank for each question. Strale's STA2023 Elementary Statistics Exam #3. Aggregate: 7227. 87MB , 40 pages NHS Test and Trace statistics 28 May to 9 September 2020: data. Step 3: Assess the evidence. How to perform a 2 sample t-test?. Note, each TOEFL exam is different, so this is just a guide and not an official conversion. IBM Netezza® Performance Server, powered by IBM Cloud Pak® for Data, is an all-new cloud-native data analytics and warehousing system designed for deep analysis of large, complex data. will finish. MIT 18-466. Nature of the Group and the Criterion. Part 1 Teachers' and Learners' language in the classroom. The larger variance should always be placed in the numerator; The test statistic is F = s1^2 / s2^2 where s1^2 > s2^2; Divide alpha by 2 for a two tail test and then find the right critical. com you will find lots of free English exam practice materials to help you improve your English skills: grammar, listening, reading, writing. The text assumes some knowledge of intermediate algebra and focuses on statistics application over theory. If you have Statistics as a subject in your graduation level. February 2013 Exam. 015, and with a level of significance of. Performing Friedman's Test in R is very simple, and is by using the "friedman. GRADUATE APTITUDE TEST-BIOTECHNOLOGY (GAT-B) 2020 (for admission to DBT-supported PG Programmes in Biotechnology in participating institutions across India: M. Open topic with navigation. Once we have calculated a t for our sample, we have to compare it to some critical value(s) that we look up in a table. A statistical method that uses sample data to evaluate a hypothesis about a population. Triola, Essentials of Statistics, Second Edition. from pyspark. submitted 14 days ago by ddog112. It should be assumed that COVID-19 exposure can occur in every county in IL. Statistics Scoring Guidelines 2015 Author: ETS Subject: Statistics Scoring Guidelines 2015 Created Date: 7/20/2015 10:59:25 AM. The fourth reason to study statistics is to be an informed consumer. These metrics are the most important indicators of broadband connection performance. 1 Understand that statistics can be used to gain information about a population by examining a sample of the population; generalizations about a population from a sample are valid only if the sample is representative of that population. A port of Python 3. Latest Technologies. Exam Overview. There are the latest CFA exam pass rates for the last 4 years. 1-Sample, 2-Sided Equality 1-Sample, 1-Sided 1-Sample Non-Inferiority or Superiority 1-Sample Equivalence Compare 2 Proportions. Schools in Alaska must begin the morning exam administration between 7 and 8 a. 
I am providing the answers with explanation in case you got stuck on particular questions. " No registration required!. NCBE produces the MBE, MEE, MPT and MPRE components of the bar exam and provides character and fitness investigation services for bar admission agencies. exam preparation with the best practice questions. STAT 7000 and STAT 7010 and STAT 7020. Join us to unleash potential, create beauty and spark action. greater than 2. Instead of going on our gut feelings, they allow us to add a littl. Therefore, the first part of the output summarises the data after it has been ranked. For example, if the test is increased from 5 to 10 items. Hi guys I have a question about what kind of test I need test I need to perform for a work project. The Level 3 achievement standards for Mathematics and statistics are registered and have been published on the NZQA website. Chapter 5 Test. kolmogorovSmirnovTest (parallelData, "norm", 0, 1) # summary of the test including the p-value, test statistic, and null hypothesis # if our p-value. 05, Spring 2014 Note: This is a set of practice problems for exam 1. Mental health conditions, such as depression or anxiety, are. CGP 11+ Practice Papers, for CEM and other test providers, containing realistic questions at the same level as the ones children will answer in the final exam. Welcome to the Exam P home page! Please review all of the information and links provided below. On the SAT, for example, two students' scores must differ by at least 144 points (out of 1,600) before the test's sponsors are willing to say the students' measured abilities really differ. GRADUATE APTITUDE TEST-BIOTECHNOLOGY (GAT-B) 2020 (for admission to DBT-supported PG Programmes in Biotechnology in participating institutions across India: M. Take a look at the Sig. Distribution. Statistics S1. Check out, for example, Study. Our statistics calculator is the most sophisticated statistics calculator online. test the hypothesis that an equation with X 2, X 3, etc. Chapter 3 Examining Relationships Outline and Questions Sec. Creates a classification table, from raw data in the spreadsheet, for two observers and calculates an inter-rater agreement statistic (Kappa) to evaluate the agreement between two classifications on ordinal or nominal scales. Free Statistics Book. AP Exam pics (self. Gateway to exams: Units 3-4 p56. Socio-Culturalr 4. The median wage is the wage at which half the workers in an occupation earned more than that amount and half earned less. All English tests have answers and explanations. See everything from download speed, to jitter, to latency all in one unbiased place. Dr Berni Sewell, PhD is a health scientist, energy healer, and self-worth blogger. Pearson VUE provides licensure and certification exams for Microsoft, Cisco, CompTIA. 2011 AP® STATISTICS FREE-RESPONSE QUESTIONS -2- Formulas begin on page 3. Immerse yourself in a particular discipline from analytics for Data Science to Social Science Statistics. National Statistician Sir Ian Diamond said: "The Office for National Statistics has huge experience in running very large household surveys that gather vital information from a genuinely representative sample. 443 Exam 3 Spring 2015 Statistics for Applications 5/7/2015. Karelysizzle. Introduction to Applied Statistics: Lecture Notes. Mathematics III. North-East Centre, Tezpur. 3 years," however, is a statistical value. The samples are from two normal populations. 
Dependent t-test for paired samples (cont) How do you detect changes in time using the dependent t-test? The dependent t-test can also look for "changes" between means when the participants are measured on the same dependent variable, but at two time points. The One-Sample T-Test in SPSS. Code public static function getInfo { return array ( 'name' => 'Statistics logging tests', 'description' => 'Tests request logging for cached and uncached pages. From the menus choose: Analyze > Compare Means > Independent-Samples T Test. Sig (2-Tailed) value. Data Analysis, Statistics, and Probability are mathematical processes that help solve real-world problems. Where you lose efficiency with nonparametric methods is with estimation of absolute quantities, not with comparing groups or testing correlations. All deadlines are 11:59 p. Evaluate the Correlation Results: Correlation Results will always be between -1 and 1. Here we are providing the facility to download Latest Edition. NCAA schools distribute more than $3. Test Statistics The stats program works out the p value either directly for the statistic you're interested in (e. Welcome! This is one of over 2,200 courses on OCW. Latest coronavirus news as of 5 pm on 27 October. The latest Exam Answers collections. Search Constraints Start Over You searched for: Subjects World War II Remove constraint Subjects: World War II Genre Statistics Remove constraint Genre: Statistics Titles Detailed report on test of adequacy of K-2 ration in the desert: project no. Whether you want to apply math to science or engineering, search for a deeper understanding of theoretical mathematics, develop effective ways to teach mathematics, or make sense of data with statistics, we have something for you. submitted 29 days ago by biggmusclelarry. At the 5% significance level, there is insufficient evidence to conclude that high school most recent graduating class distribution of enrolled and not enrolled does not fit that of. Number and operations; algebra and functions; geometry and measurement(plane euclidean, coordinate, three dimensional, and trigonometry); statistics, probability, and data analysis. A student who scored 0 on the midterm would be predicted to score 50 on the final exam. The IELTS Exam sections give you great details about each section/ module of the IELTS exam and that helps you to understand the IELTS exam very closely. EMPOWERgmat - $85/mo GMAT Club tests included 2nd month GMAT Prep Exams 3, 4, 5 & 6. Inter-rater agreement - Kappa and Weighted Kappa. Information on bar admission offices, bar exams, study aids, admission requirements, and statistics. The size of the test can be approximated by its asymptotic value. Discover the latest resources, maps, and information about the coronavirus (COVID-19) outbreak in your community. Which of the following boxplots correctly represents the data set shown below?. It is an excellent option because nearly everyone can access Excel. OECD's dissemination platform for all published content - books, serials and statistics. Statistics S1. We emphasize that these are general guidelines and should not be construed as hard and fast rules. Exam Guide analysing all the different exam tasks for the 4 Papers of the 2015 CAE exam Self-study Edition with a comprehensive guide including. Statistics module in Python provides a function known as stdev() , which can be used to calculate It is commonly used to measure confidence in statistical calculations. 6 c1; SUBC > alt= -1. 
The National Assessment of Educational Progress (NAEP) is the only nationally representative assessment of what students know and can do in various subjects, reported in the Nation's Report Card. Practice for the AP® Statistics exam by learning key principles of sampling and experimental design to avoid bias in data. Analytical statistics and data reporting. Exam windows vary by market based on appointment availability at the time of scheduling. Discuss the difference between the two classes performance on the test. The study of statistics will serve to enhance and further develop these skills. Zoom in and sort census data with interactive maps. Eight candidates for a highly desirable corporate job are locked together in an exam room and given a final test with just one seemingly simple question. Chapter 3 Examining Relationships Outline and Questions Sec. Test Statistics. Schools in Alaska must begin the morning exam administration between 7 and 8 a. Kruskal-Wallis H Test using SPSS Statistics Introduction. 3222 [email protected] Check out, for example, Study. In each exam file you will. One student had a score of 58 points on the midterm. Statistics/Statistics/Statistics. Test cricket is the form of the sport of cricket with the longest match duration, and is considered the game's highest standard. kolmogorovSmirnovTest (parallelData, "norm", 0, 1) # summary of the test including the p-value, test statistic, and null hypothesis # if our p-value. fits the Y variable significantly better than a linear regression--Analysis of covariance (ancova) 1: 2 – test the hypothesis that different groups have the same regression lines: first test the homogeneity of slopes; if they are not significantly different, test the homogeneity of the. On the SAT, for example, two students' scores must differ by at least 144 points (out of 1,600) before the test's sponsors are willing to say the students' measured abilities really differ. NBEO® Exams. When checking distributions graphically, look to see that they are symmetric and have no outliers. Hartley Chair in Statistics held by current Texas A&M Statistics Head Brani Vidakovic. Every percentage can be expressed as a fraction. Seatbelts reduce the risk of death by 45%. Colorado Dept. It can be considered to be similar to the paired-samples t-test, but for a dichotomous rather than a continuous dependent variable. Individuals who drive while sending or reading text messages are 23 times more likely to be involved in a car crash than other drivers. Wednesday 22 August 2018: Results available via EDI and Edexcel Online. | CommonCrawl |
Home > Journals > Ann. Probab. > Volume 2 > Issue 2 > Article
April, 1974 On the Functional Form of the Law of the Iterated Logarithm for the Partial Maxima of Independent Identically Distributed Random Variables
Michael J. Wichura
Ann. Probab. 2(2): 202-230 (April, 1974). DOI: 10.1214/aop/1176996704
Let $k$ be a positive integer, let $X, X_1, X_2, \cdots$ be i.i.d. random variables, and let $m_n^{(k)}$ be the $k$th largest of $X_1, \cdots, X_n$. Let $(M_n^{(k)}(t))_{0 < t < \infty}$ be the random process defined by $M_n^{(k)}(t) = m^{(k)}_{[nt]}$. $M_n^{(k)}$ takes values in the space $D$ of non-decreasing right-continuous functions on $(0, \infty)$. Let $D$ be endowed with the usual topology of weak convergence. We show that if $X$ is uniformly distributed over [-1,0], then wp 1 the sequence $(M_n^{(k)}/(\log_2 n/n))_{n\geqq 3}$ is relatively compact in $D$ and its limit points coincide with $\{x\in D: x(t) \leqq 0$ for all $t$, and $\int x(t) dt \geqq -1\}$. Also, we show that if $X$ is exponential with mean 1, then wp 1 the sequence $((M_n^{(k)} - \log n)/\log_2 n)_{n\geqq 3}$ is relatively compact in $D$ and its limit points coincide with $\{x\in D: x(t) \geqq 0$ for all $t$, and $\lambda_k(x) \leqq 1\}$; here $\lambda_k(x) = \sup (\sum_{p < q} x(t_p) + kx(t_q))$, with the supremum being taken over all finite systems of points $\{t_p\}_{p \leqq q}$ over which $x$ is strictly increasing. Extensions of and corollaries to these results are given.
Michael J. Wichura. "On the Functional Form of the Law of the Iterated Logarithm for the Partial Maxima of Independent Identically Distributed Random Variables." Ann. Probab. 2 (2) 202 - 230, April, 1974. https://doi.org/10.1214/aop/1176996704
Published: April, 1974
First available in Project Euclid: 19 April 2007
MathSciNet: MR365674
Digital Object Identifier: 10.1214/aop/1176996704
Primary: 60F15
Secondary: 60G17, 60J75
Keywords: Extreme value theory, Law of the iterated logarithm, Partial maxima
Rights: Copyright © 1974 Institute of Mathematical Statistics
Ann. Probab.
Vol.2 • No. 2 • April, 1974
Institute of Mathematical Statistics
Michael J. Wichura "On the Functional Form of the Law of the Iterated Logarithm for the Partial Maxima of Independent Identically Distributed Random Variables," The Annals of Probability, Ann. Probab. 2(2), 202-230, (April, 1974) | CommonCrawl |
How are hash table's values stored physically in memory?
How are hash table's values stored in memory such that space is efficiently used and values don't have to be relocated often?
My current understanding (could be wrong):
Let's say I have 3 objects stored in a hash table. Their hash functions generate these values: 0, 10, and 20.
I would presume that the pointers of these objects would not be stored at the following memory addresses because there would be huge gaps between them:
startOfHashTable + 0
startOfHashTable + 10
startOfHashTable + 20
The Wikipedia article on hash tables says that the "index" is computed as such:
hash = hashfunc(key)
index = hash % array_size
So in my example, the indices would be:
0 % 3 = 0
10 % 3 = 1
20 % 3 = 2
This gets rid of the huge gaps that I mentioned before. Even with this modulo scheme, there are problems when you add more objects to the hash table. If I add a fourth object to the hash table, I would need to apply % 4 to get the index. Wouldn't that invalidate all the % 3's that I did in the past? Would all those previous % 3 locations need to be relocated to the % 4 locations?
data-structures hash-tables memory-allocation
Pwner
The entries of a hash table are stored in an array. However, you have misunderstood the application of the modulo operator to the hash values. If the hash table is stored in an array of size $n$, then the hash function is computed modulo $n$, regardless of how many items are currently stored in the table. So, in your example, if you were storing the items in an array of size 6, the three items with hash values 0, 10 and 20 would be stored at locations 0, 4 and 2, respectively. If you added a fourth element with hash value, say, 31, that would be stored at location 1, without needing to move any of the first three items. If your hash table was becoming full and you wanted to move it into a bigger array, then you would need to recalculate the locations of all the items in the table and move them appropriately.
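To make this concrete, here is a minimal Python sketch (our illustration, not any real library's implementation) of the placement rule described above: slots come from hash % capacity, adding an item leaves the modulus alone, and only a change of capacity forces every item to be re-placed ("rehashed").

def place(hash_values, capacity):
    """Map each hash value to its slot index in a table with `capacity` slots."""
    return {h: h % capacity for h in hash_values}

print(place([0, 10, 20], 6))        # {0: 0, 10: 4, 20: 2}
print(place([0, 10, 20, 31], 6))    # adding an item: modulus is still 6, so {..., 31: 1}
print(place([0, 10, 20, 31], 12))   # growing the table: every slot is recomputed

Running this shows exactly the behaviour in the answer: the first three items never move when a fourth is added, but all four move when the array is enlarged.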
Tom van der Zanden
David Richerby
$\begingroup$ So you're saying hash tables are created with an estimated potential size and the items are only relocated when you need to increase the size... So it doesn't matter if a hash function has uniform distribution. For example, hash values of 0, 5, and 10 are uniformly distributed, but when inserted into a hash table of potential size 5, they all collide in bucket 0. It would be better to say the hash % table size should be uniformly distributed, not the hash itself. $\endgroup$
– Pwner
$\begingroup$ @Pwner All of that is correct, yes. $\endgroup$
$\begingroup$ How is it possible to create a uniformly distributed hash % tableSize when tableSize can change? The hash values of 0, 5, and 10 create many collisions when the table size is 5, but have no collisions when the table size is 20. $\endgroup$
$\begingroup$ @Pwner Keep in mind that hashtables only have expected constant-time operations, if that. But only if the hash function is (approximately) uniform. $\endgroup$
$\begingroup$ @Pwner The distribution isn't literally uniform -- but you would aim for close to uniform. $\endgroup$
Hash tables usually do waste space. Many algorithms do, since time-space trade-offs are common, but they usually hide it better :). Like other algorithms, hash tables do it to get better time performance.
The first point is that you try to avoid collisions in your hash-table, because that keeps the access time cost constant (but collisions are usually allowed and can be dealt with, thus allowing several items to be in the same entry, at time cost). The second point is that you try to avoid large unused gaps because that costs memory. The third point is that you avoid changing your hashing function (hence also the table size) because it requires reorganizing the whole table, which has a large time cost.
Unfortunately, the fewer gaps you have, the more likely a new hash entry will cause a collision. A good hash function, for a given data set, will limit the likelihood of collision even with better use of the available index space.
Actually, you should consider that there are two kinds of hash tables: static ones and dynamic ones.
For static ones, the data to be hashed does not change, so you can try to find a hash function with no collision at all for that data set. That is called a perfect hash. But the best is a minimal perfect hash, which achieves the result without gaps.
But that is not feasible when the data to be hashed changes dynamically, within a large set of possibilities. Then you cannot avoid collisions, but you try to limit them by having enough gaps.
There are a variety of techniques to manage that differently, adapting the table size to the number of values being hashed, growing the table when there are many collisions, or reducing it when there are too large gaps. But this has to be handled very carefully, using exponential table variations, so as to limit the impact of table reorganization on the overall cost of using the hash-table.
This is intended as an intuitive introduction. For more technical details, and references, you may look at answers to this question: (When) is hash table lookup O(1)?. Hash-tables and hashing is an important topic, with many variations.
babou
A good way to look at hash tables is like a lookup table with infinite index range (well, not really infinite, you're still constrained by the value limit of the key you're using).
Let's say you're trying to store some specific values of sqrt(x) in a lookup table where x is an integer; it would go something like this:
[1] = 1
[3] = 1.732
[10000] = 100
This makes for very cheap square rooting since, instead of the expensive calculation, you can simply fetch the value from the array. It is, however, a very inefficient use of memory because [2] and [4]-[9999] are empty.
To the rescue comes the hash function. The purpose of a hash function in this context is to transform the index into something that actually fits in a reasonably sized array; for example, it could do this:
(1) = [5] = 1
(3) = [2] = 1.732
(10000) = [3] = 100
Now all 3 values fit in an array of size 6.
How does the hash function achieve this? The most basic hash function is (Index % ArraySize): the modulo operator divides the index you chose by the size of the array and gives you the remainder, which is always smaller than the array size.
But what if multiple indexes hash to the same result? This is called a hash collision, and there are different ways of dealing with it. The simplest is to store each value along with its original index in the array; if that slot is taken, go forward by 1 until an empty slot is found. When retrieving the value, go to the location given by the hash function and loop through the elements until the one with the matching original index is found.
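Below is a rough, self-contained Python sketch of the open-addressing scheme this paragraph describes: the table stores (key, value) pairs and probes forward one slot at a time on collision. Resizing and deletion are deliberately omitted, so it is illustrative only (it would loop forever on a full table).

class ProbingTable:
    def __init__(self, capacity=8):
        self.slots = [None] * capacity      # each slot holds (key, value) or None

    def _index(self, key):
        return hash(key) % len(self.slots)  # the basic "index % array size" rule

    def put(self, key, value):
        i = self._index(key)
        # Linear probing: step forward until an empty slot or the same key.
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
        self.slots[i] = (key, value)

    def get(self, key):
        i = self._index(key)
        while self.slots[i] is not None:
            if self.slots[i][0] == key:     # the original key is stored alongside the value
                return self.slots[i][1]
            i = (i + 1) % len(self.slots)
        raise KeyError(key)

t = ProbingTable()
t.put(1, 1.0); t.put(3, 1.732); t.put(10000, 100.0)
print(t.get(3))   # 1.732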
This is why a good hash function is also great at dispersing the data so that whether the indexes coming in are sequential or random, the hash result should be as widely dispersed as possible to keep the cost of accessing data relatively constant.
Of course, the bigger the underlying array, the fewer collisions you're going to get, so it's a tradeoff between speed and size efficiency. Modern hash tables usually fill up to ~70% while having fewer than 10 collisions per access. Along with the hash function, this means each data fetch costs ~20 cycles, which is (for some purposes) a good compromise between speed (lookup table) and efficiency (list).
Lower Bounds for Finite Wavelet and Gabor Systems
Approximation Theory and Its Applications (2001-03-01) 17: 18-29 , March 01, 2001
By Christensen, Ole; Lindner, Alexander M.
Given ψ ∈ L²(R) and a finite sequence $\{(a_\gamma, \lambda_\gamma)\}_{\gamma\in\Gamma} \subseteq \mathbb{R}_+ \times \mathbb{R}$ consisting of distinct points, the corresponding wavelet system is the set of functions $\{a_\gamma^{-1/2}\, \psi(x/a_\gamma - \lambda_\gamma)\}_{\gamma\in\Gamma}$. We prove that for a dense set of functions ψ ∈ L²(R) the wavelet system corresponding to any choice of $\{(a_\gamma, \lambda_\gamma)\}_{\gamma\in\Gamma}$ is linearly independent, and we derive explicit estimates for the corresponding lower (frame) bounds. In particular, this puts restrictions on the choice of a scaling function in the theory of multiresolution analysis. We also obtain estimates for the lower bound for Gabor systems $\{e^{2\pi i \nu_\gamma x}\, g(x - \lambda_\gamma)\}_{\gamma\in\Gamma}$ for functions g in a dense subset of L²(R).
An Algebraic Method for Pole Placement in Multivariable Systems
Approximation Theory and Its Applications (2001-06-01) 17: 64-85 , June 01, 2001
By de la Sen, M.
This paper considers pole placement in multivariable systems involving known delays by using dynamic controllers subject to multirate sampling. The controller parameterizations are calculated from algebraic equations which are solved by using the Kronecker product of matrices. It is pointed out that the sampling periods can be selected in a convenient way for the solvability of such equations, under rather weak conditions, provided that the continuous plant is spectrally controllable. An overview of the use of nonuniform sampling is also given, in view of improving the system's performance.
Weighted Inequalities for Certain Maximal Functions in Orlicz Spaces
Approximation Theory and Its Applications (2001-12-01) 17: 65-76 , December 01, 2001
By Xuexian, Zhu
Let $M_g$ be the maximal operator defined by $$M_g f(x) = \sup \frac{\int_a^b f(y)\, g(y)\, dy}{\int_a^b g(y)\, dy},$$ where g is a positive locally integrable function on R and the supremum is taken over all intervals [a, b] such that $0 \le a \le x \le b/\eta(b-a)$; here η is a non-increasing function such that η(0) = 1 and $\lim_{t\to+\infty} \eta(t) = 0$. This maximal function was introduced by H. Aimar and L. L. Forzani [AF]. Let Φ be an N-function such that Φ and its complementary N-function satisfy Δ₂. It gives an $A'_\Phi(g)$-type characterization for the pairs of weights (u, v) such that the weak type inequality $$u\left(\{x \in \mathbb{R} : M_g f(x) > \lambda\}\right) \le \frac{C}{\Phi(\lambda)} \int_{\mathbb{R}} \Phi(|f|\, v)$$ holds for every f in the Orlicz space $L_\Phi(v)$. And, there are no (nontrivial) weights w for which (w, w) satisfies the condition $A'_\Phi(g)$.
On the Uniform Convergence of the Generalized Bieberbach Polynomials in Regions with K-Quasiconformal Boundary
Approximation Theory and Its Applications (2001-03-01) 17: 97-105 , March 01, 2001
By Cavus, Abdullah; Abdullayev, Fahreddin G.
Let G be a finite domain in the complex plane with K-quasiconformal boundary, let z₀ be an arbitrary fixed point in G, and let p > 0. Let ϕ(z) be the conformal mapping from G onto the disk with radius r₀ > 0 centered at the origin, normalized by ϕ(z₀) = 0 and ϕ′(z₀) = 1. Let us set $$\varphi_p(z) := \int_{z_0}^{z} [\phi'(\zeta)]^{2/p}\, d\zeta,$$ and let π_{n,p}(z) be the generalized Bieberbach polynomial of degree n for the pair (G, z₀) that minimizes the integral $$\iint_G |\varphi_p'(z) - P_n'(z)|^p\, d\sigma_z$$ in the class $\Pi_n$ of all polynomials of degree ≤ n satisfying the conditions $P_n(z_0) = 0$ and $P_n'(z_0) = 1$. In this work we prove the uniform convergence of the generalized Bieberbach polynomials π_{n,p}(z) to ϕ_p(z) on $\bar G$ in the case $p > 2 - \frac{K^2 + 1}{2K^4}$.
From Bounded Families of Localized Cosines to Bi-Orthogonal Riesz Bases via Shift-Invariance
By Chui, Charles K.; Xianliang, Shi
The notion of bi-inner product functionals $P(f,g) = \sum_n \langle f, f_n\rangle \langle g, g_n\rangle$ generated by two Bessel sequences {f_n} and {g_n} of functions from L² was introduced in our earlier work as a vehicle to identify dual frames and bi-orthogonal Riesz bases of L². The objective was to find conditions under which P is a constant multiple of the inner product ⟨f, g⟩ of L². A necessary and sufficient condition derived there is that P is both spatial shift-invariant and phase shift-invariant. Although these two shift-invariance properties are, in general, unrelated, it could happen that one is a consequence of the other for certain classes of Bessel sequences {f_n} and {g_n}. In this paper, we show that, indeed, for localized cosines with two-overlapping windows (i.e., only adjacent window functions are allowed to overlap), spatial shift-invariance of P is already sufficient to guarantee that P is a constant multiple of the inner product, while phase shift-invariance is not. Hence, phase shift-invariance of P for two-overlapping localized cosine Bessel sequences is a consequence of spatial shift-invariance, but the converse is not valid. As an application, we also show that two families of localized cosines with uniformly bounded and two-overlapping windows are bi-orthogonal Riesz bases of L² if and only if P is spatial shift-invariant. In addition, we apply this result to generalize a result on the characterization of dual localized cosine bases from our earlier work to the multivariate setting. A method for computing the dual windows is also given in this paper.
A Korovkin-Type Result in C^k and an Application to the M_n Operators
Approximation Theory and Its Applications (2001-09-01) 17: 1-13 , September 01, 2001
By Cárdenas-Morales, D.; Muños-Delgado, F.J.
In this work we present a result about the approximation of the k-th derivative of a function by means of a linear operator under assumptions related to shape preserving properties. As a consequence we deduce new results about the Meyer-König and Zeller operators.
Convergence and Rate of Approximation in BVΦ for a Class of Integral Operators
By Sciamannini, S.; Vinti, G.
We obtain estimates and convergence results with respect to ϕ-variation in spaces BVΦ for a class of linear integral operators whose kernels satisfy a general homogeneity condition. Rates of approximation are also obtained. As applications, we apply our general theory to the case of Mellin convolution operators, to that of moment operators, and finally to a class of operators of fractional order.
The Integral Formula for Calculating the Hausdorff Measure of Some Fractal Sets
By Shipan, Lu; Weiyi, Su
It is important to calculate the Hausdorff dimension and the Hausdorff mesure respect to this dimension for some fractal sets. By using the usual method of "Mass Distribution", we can only calculate the Hausdorff dimension. In this paper, we will construct an integral formula by using lower inverse s-density and then use it to calculate the Hausdorff measures for some fractional dimensional sets.
A New Method for the Construction of Multivariate Minimal Interpolation Polynomial
By Chuanlin, Zhang
The extended Hermite interpolation problem on a segment point set over n-dimensional Euclidean space is considered. Based on the algorithm for computing the Gröbner basis of the ideal given by a dual basis, a new method to construct the minimal multivariate polynomial which satisfies the interpolation conditions is given.
Two New FCT Algorithms Based on Product System
Approximation Theory and Its Applications (2001-09-01) 17: 33-42 , September 01, 2001
By Zhaoli, Guo; Baochang, Shi; Nengchao, Wang
In this paper we present a product system and give a representation for cosine functions with the system. Based on the formula, two new algorithms are designed for computing the Discrete Cosine Transform. Both algorithms have regular recursive structure and good numerical stability and are easy to parallelize.
Spectral gap for stable process on convex double symmetric domains
Bartłomiej Dyda and Tadeusz Kulczycki
Institute of Mathematics and Computer Science, Wrocław University of Technology
Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland
We study the semigroup of the symmetric α-stable process in bounded domains in Rd. We obtain a variational formula for the spectral gap, i.e. the difference between the first two eigenvalues of the generator of this semigroup. This variational formula allows us to obtain lower bound estimates of the spectral gap for convex planar domains which are symmetric with respect to both coordinate axes.
For rectangles, using "midconcavity" of the first eigenfunction [5], we obtain sharp upper and lower bound estimates of the spectral gap.
In recent years many results have been obtained in spectral theory of semi- groups of symmetric α-stable processes α ∈ (0, 2) in bounded domains in Rd, see [6], [25], [2], [18], [19], [14], [15], [5]. One of the most interesting problems in spectral theory of such semigroups is a spectral gap estimate i.e. the estimate of λ2 − λ1 the difference between two first eigenvalues of the generator of this semigroup. Such estimate is a natural generalisation of the same problem for the semigroup of Brownian motion killed on exiting
0Key words and phrases: symmetric stable process, spectral gap, convex domain The first named author was supported by KBN grant 1 P03A 026 29 and RTN Harmonic Analysis and Related Problems, contract HPRN-CT-2001-00273-HARP
The second named author was supported by KBN grant 1 P03A 020 28 and RTN Harmonic Analysis and Related Problems, contract HPRN-CT-2001-00273-HARP.
a bounded domain, which generator is Dirichlet Laplacian. In this classical case, for Brownian motion, spectral gap estimates have been widely studied see e.g [26], [28], [24], [27], [17], [7]. When a bounded domain is convex there have been obtained sharp lower-bound estimates of the spectral gap.
In the case of the semigroup of symmetric α-stable processes, α ∈ (0, 2), very little is known about spectral gap estimates. In the one-dimensional case, when the domain is just an interval, spectral gap estimates follow from the results of [2] (α = 1) and [14] (α > 1). The only results in dimension greater than one have been obtained for the Cauchy process, i.e. α = 1 [3], [4]. Such results have been obtained using the deep connection between the eigenvalue problem for the Cauchy process and a boundary value problem for the Laplacian in one dimension higher, known as the mixed Steklov problem.
The aim of this paper is to generalise the spectral gap estimates obtained for the Cauchy process (α = 1) to all α ∈ (0, 2). Before we describe our results in more detail, let us recall definitions and basic facts.
Let $X_t$ be a symmetric α-stable process in $\mathbb{R}^d$, α ∈ (0, 2]. This is a process with independent and stationary increments and characteristic function $E^0 e^{i\xi \cdot X_t} = e^{-t|\xi|^\alpha}$, $\xi \in \mathbb{R}^d$, $t > 0$. We will use $E^x$, $P^x$ to denote the expectation and probability of this process starting at x, respectively. By $p(t, x, y) = p_t(x - y)$ we will denote the transition density of this process.
That is,
$$P^x(X_t \in B) = \int_B p(t, x, y)\, dy.$$
When α = 2 the process $X_t$ is just the Brownian motion in $\mathbb{R}^d$ running at twice the speed. That is, if α = 2 then
$$p(t, x, y) = \frac{1}{(4\pi t)^{d/2}}\, e^{-\frac{|x-y|^2}{4t}}, \qquad t > 0,\ x, y \in \mathbb{R}^d. \qquad (1.1)$$
It is well known that for α ∈ (0, 2) we have $p_t(x) = t^{-d/\alpha} p_1(t^{-1/\alpha} x)$, $t > 0$, $x \in \mathbb{R}^d$, and
$$p_t(x) = t^{-d/\alpha} p_1(t^{-1/\alpha} x) \le t^{-d/\alpha} p_1(0) = t^{-d/\alpha} M_{d,\alpha},$$
where
$$M_{d,\alpha} = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} e^{-|x|^\alpha}\, dx. \qquad (1.2)$$
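As a side note (our addition, not part of the paper), the constant in (1.2) can be evaluated in closed form by passing to polar coordinates: $\int_{\mathbb{R}^d} e^{-|x|^\alpha}\, dx = \frac{2\pi^{d/2}}{\Gamma(d/2)} \cdot \frac{\Gamma(d/\alpha)}{\alpha}$. A short Python sketch checking this against the Gaussian case (1.1):

from math import gamma, pi

def M(d, alpha):
    surface = 2 * pi ** (d / 2) / gamma(d / 2)   # area of the unit sphere S^{d-1}
    radial = gamma(d / alpha) / alpha            # int_0^inf r^{d-1} e^{-r^alpha} dr
    return surface * radial / (2 * pi) ** d

# For alpha = 2 this must reproduce p_1(0) = (4 pi)^{-d/2} from (1.1):
print(M(2, 2.0), (4 * pi) ** -1)   # both ~ 0.07957747
print(M(2, 1.0))                   # planar Cauchy case, ~ 0.15915494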
It is also well known that
$$\lim_{t \to 0^{+}} \frac{p(t, x, y)}{t} = \frac{A_{d,-\alpha}}{|x - y|^{d+\alpha}}, \qquad (1.3)$$
where
$$A_{d,\gamma} = \Gamma\big((d - \gamma)/2\big) \big/ \big( 2^{\gamma} \pi^{d/2}\, |\Gamma(\gamma/2)| \big). \qquad (1.4)$$
Our main concern in this paper is the eigenvalues of the semigroup of the process $X_t$ killed upon leaving a domain. Let $D \subset \mathbb{R}^d$ be a bounded connected domain and $\tau_D = \inf\{t \ge 0 : X_t \notin D\}$ be the first exit time of $D$. By $\{P_t^D\}_{t \ge 0}$ we denote the semigroup on $L^2(D)$ of $X_t$ killed upon exiting $D$.
$$P_t^D f(x) = E^x\big(f(X_t);\ \tau_D > t\big), \qquad x \in D,\ t > 0,\ f \in L^2(D).$$
The semigroup has transition densities $p_D(t, x, y)$ satisfying
$$P_t^D f(x) = \int_D p_D(t, x, y)\, f(y)\, dy.$$
The kernel $p_D(t, x, y)$ is strictly positive, symmetric, and
$$p_D(t, x, y) \le p(t, x, y) \le M_{d,\alpha}\, t^{-d/\alpha}, \qquad x, y \in D,\ t > 0.$$
The fact that D is bounded implies that for any t > 0 the operator $P_t^D$ maps $L^2(D)$ into $L^\infty(D)$. From the general theory of semigroups (see [16]) it follows that there exists an orthonormal basis of eigenfunctions $\{\varphi_n\}_{n=1}^\infty$ for $L^2(D)$ and corresponding eigenvalues $\{\lambda_n\}_{n=1}^\infty$ satisfying
$$0 < \lambda_1 < \lambda_2 \le \lambda_3 \le \dots$$
with $\lambda_n \to \infty$ as $n \to \infty$. That is, the pair $\{\varphi_n, \lambda_n\}$ satisfies
$$P_t^D \varphi_n(x) = e^{-\lambda_n t}\, \varphi_n(x), \qquad x \in D,\ t > 0. \qquad (1.5)$$
The eigenfunctions $\varphi_n$ are continuous and bounded on D. In addition, $\lambda_1$ is simple and the corresponding eigenfunction $\varphi_1$, often called the ground state eigenfunction, is strictly positive on D. For more general properties of the semigroups $\{P_t^D\}_{t \ge 0}$, see [21], [8], [12].
It is well known (see [1], [12], [13], [23]) that if D is a bounded connected Lipschitz domain and α = 2, or if D is a bounded connected domain for 0 < α < 2, then $\{P_t^D\}_{t \ge 0}$ is intrinsically ultracontractive. Intrinsic ultracontractivity is a remarkable property with many consequences. It implies, in particular, that
$$\lim_{t \to \infty} \frac{e^{\lambda_1 t}\, p_D(t, x, y)}{\varphi_1(x)\varphi_1(y)} = 1,$$
uniformly in both variables x, y ∈ D. In addition, the rate of convergence is given by the spectral gap λ2 − λ1. That is, for any t ≥ 1 we have
$$e^{-(\lambda_2 - \lambda_1)t} \le \sup_{x, y \in D} \left| \frac{e^{\lambda_1 t}\, p_D(t, x, y)}{\varphi_1(x)\varphi_1(y)} - 1 \right| \le C(D, \alpha)\, e^{-(\lambda_2 - \lambda_1)t}. \qquad (1.6)$$
The proof of this for α = 2 may be found in [27]. The proof in our setting is exactly the same.
Our first step in studying the spectral gap for α ∈ (0, 2) is the following variational characterisation of λ2− λ1.
By $L^2(D, \varphi_1^2)$ we denote the $L^2$ space of functions with the inner product
$$(f, g)_{L^2(D, \varphi_1^2)} = \int_D f(x)\, g(x)\, \varphi_1^2(x)\, dx.$$
Theorem 1.1. We have
$$\lambda_2 - \lambda_1 = \inf_{f \in \mathcal{F}}\ \frac{A_{d,-\alpha}}{2} \int_D \int_D \frac{(f(x) - f(y))^2}{|x - y|^{d+\alpha}}\, \varphi_1(x)\varphi_1(y)\, dx\, dy, \qquad (1.7)$$
where
$$\mathcal{F} = \Big\{ f \in L^2(D, \varphi_1^2) :\ \int_D f^2(x)\, \varphi_1^2(x)\, dx = 1,\ \int_D f(x)\, \varphi_1^2(x)\, dx = 0 \Big\}$$
and $A_{d,-\alpha}$ is given by (1.4). Moreover, the infimum is achieved for $f = \varphi_2/\varphi_1$. The idea of the proof is based on considering a new semigroup $\{T_t^D\}_{t \ge 0}$ of the stable process conditioned to remain forever in D. The proof of Theorem 1.1 is in Section 2.
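For intuition, the quadratic form in (1.7) can be evaluated numerically. The Python sketch below (our illustration, not from the paper) does this for d = 1 on the interval (−1, 1). Since the true ground state ϕ₁ is not known in closed form, it uses the surrogate profile (1 − x²)^{α/2} purely for demonstration, so the output only shows the mechanics of (1.7), not the actual spectral gap.

from math import gamma, pi

alpha = 1.0
d = 1
# A_{d,-alpha} from (1.4) with gamma = -alpha (note the |Gamma(-alpha/2)|):
A = gamma((d + alpha) / 2) / (2 ** (-alpha) * pi ** (d / 2) * abs(gamma(-alpha / 2)))

n = 200
h = 2.0 / n
xs = [-1 + (i + 0.5) * h for i in range(n)]            # midpoints of (-1, 1)

phi = [(1 - x * x) ** (alpha / 2) for x in xs]         # surrogate for phi_1 (assumption)
s = sum(p * p * h for p in phi) ** 0.5
phi = [p / s for p in phi]                             # L^2-normalize

f = list(xs)                                           # odd trial function: int f phi^2 dx = 0
c = sum(v * v * p * p * h for v, p in zip(f, phi)) ** 0.5
f = [v / c for v in f]                                 # int f^2 phi^2 dx = 1, so f is admissible

# Midpoint-rule double sum for (A/2) iint (f(x)-f(y))^2 |x-y|^{-1-alpha} phi(x) phi(y);
# the diagonal is skipped, which is harmless since (f(x)-f(y))^2 ~ |x-y|^2 for smooth f.
q = 0.0
for i, x in enumerate(xs):
    for j, y in enumerate(xs):
        if i != j:
            q += (f[i] - f[j]) ** 2 / abs(x - y) ** (1 + alpha) * phi[i] * phi[j]
q *= (A / 2) * h * h
print(q)   # value of the quadratic form at this particular admissible f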
In the classical case, for Brownian motion, when the dimension is greater than one, the simplest domain where the spectral gap can be explicitly calculated is a rectangle. Let us recall that in this classical case $\{\varphi_n\}_{n=1}^\infty$, $\{\lambda_n\}_{n=1}^\infty$ are of course eigenfunctions and eigenvalues of the Dirichlet Laplacian. Therefore, when (say) D = (−a, a) × (−b, b), a ≥ b > 0, then
$$\varphi_1(x_1, x_2) = \frac{1}{\sqrt{2ab}} \cos\Big(\frac{\pi x_1}{2a}\Big) \cos\Big(\frac{\pi x_2}{2b}\Big), \qquad \varphi_2(x_1, x_2) = \frac{1}{\sqrt{2ab}} \sin\Big(\frac{2\pi x_1}{2a}\Big) \cos\Big(\frac{\pi x_2}{2b}\Big),$$
$$\lambda_1 = \frac{\pi^2}{4a^2} + \frac{\pi^2}{4b^2}, \qquad \lambda_2 = \frac{4\pi^2}{4a^2} + \frac{\pi^2}{4b^2}, \qquad \text{and hence} \quad \lambda_2 - \lambda_1 = \frac{3\pi^2}{4a^2}.$$
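A trivial numerical check of this classical computation (our addition): the gap depends only on the longer half-side a.

from math import pi

def dirichlet_gap(a, b):
    lam1 = pi ** 2 / (4 * a ** 2) + pi ** 2 / (4 * b ** 2)
    lam2 = 4 * pi ** 2 / (4 * a ** 2) + pi ** 2 / (4 * b ** 2)
    return lam2 - lam1

a, b = 2.0, 1.0
print(dirichlet_gap(a, b), 3 * pi ** 2 / (4 * a ** 2))   # both ~ 1.8506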
The generator of the symmetric α-stable process, α ∈ (0, 2), is a pseudodifferential operator −(−∆)^{α/2}, and we are not able to calculate ϕn, λn explicitly for any domain, even an interval or a rectangle. However, when D is a rectangle, due to simple geometric properties of this set it is shown ([5], Theorem 1.1) that the first eigenfunction ϕ1 for any α ∈ (0, 2] is "midconcave" and unimodal along the lines parallel to the sides. This property and Theorem 1.1 enable us to obtain sharp upper and lower bound estimates of the spectral gap for all α ∈ (0, 2). The most complicated are the lower bound estimates for α ∈ (1, 2) and α = 1. The main idea of the proof in these cases is contained in Lemmas 4.2 and 4.3.
Below we present estimates of λ2 − λ1 for rectangles. The proof of this theorem is in Section 4. Let us point out that these estimates are sharp, i.e.
the upper and lower bound estimates have the same dependence on the length of the sides of the rectangle (see Remark 1.3 b). Nevertheless, the numerical constants which appear in this theorem are far from being optimal.
Theorem 1.2. Let D = (−L, L) × (−1, 1), where L ≥ 1. Then
(a) We have
$$2A_{2,-\alpha}^{-1}(\lambda_2 - \lambda_1) \le 10^{6} \cdot \begin{cases} \dfrac{2}{(1-\alpha)\, L^{1+\alpha}} & \text{for } \alpha < 1,\\[6pt] \dfrac{2\log(L+1)}{L^{2}} & \text{for } \alpha = 1,\\[6pt] \Big(\dfrac{1}{2-\alpha} + \dfrac{1}{\alpha-1}\Big) \dfrac{1}{L^{2}} & \text{for } \alpha > 1. \end{cases}$$
(b) We have
$$2A_{2,-\alpha}^{-1}(\lambda_2 - \lambda_1) \ge \begin{cases} \dfrac{1}{36 \cdot 2^{\alpha} (L+1)^{1+\alpha}} & \text{for } \alpha < 1,\\[6pt] 10^{-9}\, \dfrac{\log(L+1)}{L^{2}} & \text{for } \alpha = 1,\\[6pt] \dfrac{1}{33 \cdot 13^{1+\alpha/2} \cdot 10^{4}}\, \dfrac{1}{L^{2}} & \text{for } \alpha > 1. \end{cases}$$
Remark 1.3. (a) The inequality
$$2A_{2,-\alpha}^{-1}(\lambda_2 - \lambda_1) \ge \frac{1}{36 \cdot 2^{\alpha} (L+1)^{1+\alpha}}$$
is valid for all α ∈ (0, 2).
We have $2A_{2,-\alpha}^{-1} = \alpha^{-2}\, 2^{3-\alpha}\, \pi\, \Gamma^{-1}(\alpha/2)\, \Gamma(1 - \alpha/2)$. In particular we get, for example, $\lambda_2 - \lambda_1 \ge \frac{8}{10^{4}(L+1)^{3/2}}$ for α = 1/2, $\lambda_2 - \lambda_1 \ge \frac{1}{10^{3}(L+1)^{2}}$ for α = 1, and $\lambda_2 - \lambda_1 \ge \frac{8}{10^{4}(L+1)^{5/2}}$ for α = 3/2.
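These numerical examples can be checked directly by combining the closed form for $2A_{2,-\alpha}^{-1}$ with the lower bound in part (a); the following Python sketch (our addition) reproduces the constants quoted above.

from math import gamma, pi

def two_A_inv(alpha):
    # 2 A_{2,-alpha}^{-1} = alpha^{-2} 2^{3-alpha} pi Gamma(alpha/2)^{-1} Gamma(1 - alpha/2)
    return alpha ** -2 * 2 ** (3 - alpha) * pi * gamma(1 - alpha / 2) / gamma(alpha / 2)

def gap_coeff(alpha):
    # lam2 - lam1 >= gap_coeff(alpha) / (L+1)^{1+alpha}, from part (a)
    return 1 / (36 * 2 ** alpha * two_A_inv(alpha))

for a in (0.5, 1.0, 1.5):
    print(a, gap_coeff(a))
# ~8.2e-4, ~1.1e-3, ~8.4e-4, matching 8/10^4, 1/10^3, 8/10^4 above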
(b) By scaling we have for β > 0
$$\lambda_n(\beta D) = \beta^{-\alpha}\, \lambda_n(D), \qquad (1.9)$$
thus if D = (−a, a) × (−b, b), where a ≥ b > 0, then
$$\lambda_2 - \lambda_1 \approx \begin{cases} \dfrac{b}{a^{1+\alpha}} & \text{for } \alpha < 1,\\[6pt] \dfrac{b}{a^{2}}\, \log\Big(\dfrac{a}{b} + 1\Big) & \text{for } \alpha = 1,\\[6pt] \dfrac{b^{2-\alpha}}{a^{2}} & \text{for } \alpha > 1. \end{cases}$$
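To spell out the step behind these asymptotics (left implicit in the text): write D = (−a, a) × (−b, b) = βD₀ with β = b and D₀ = (−a/b, a/b) × (−1, 1), so that Theorem 1.2 applies to D₀ with L = a/b. Then by (1.9),
$$\lambda_2(D) - \lambda_1(D) = b^{-\alpha}\big(\lambda_2(D_0) - \lambda_1(D_0)\big) \asymp \begin{cases} b^{-\alpha}\, (a/b)^{-(1+\alpha)} = \dfrac{b}{a^{1+\alpha}}, & \alpha < 1,\\[6pt] b^{-\alpha}\, \dfrac{\log(a/b + 1)}{(a/b)^{2}} = \dfrac{b}{a^{2}}\, \log\Big(\dfrac{a}{b} + 1\Big), & \alpha = 1,\\[6pt] b^{-\alpha}\, (a/b)^{-2} = \dfrac{b^{2-\alpha}}{a^{2}}, & \alpha > 1. \end{cases}$$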
Our next aim is lower bound estimates of the spectral gap for convex planar domains which are symmetric with respect to both coordinate axes.
In the classical case, for Brownian motion, sharp estimates are known for all bounded convex domains D ⊂ Rd. We have λ2 − λ1 > π²/d_D², where d_D is the diameter of D, see e.g. [24], [27]. Such results are obtained using the fact that the first eigenfunction is log-concave. For convex planar domains which are symmetric with respect to both coordinate axes even better estimates, λ2 − λ1 > 3π²/d_D², are known, see [17], [7] (such estimates are optimal; the lower bound is approached by thin rectangles). These results follow from ratio inequalities for heat kernels.
Unfortunately, in the case of symmetric α-stable processes, α ∈ (0, 2), we do not know whether the first eigenfunction is log-concave. Instead we use some of the ideas from [4], where spectral gap estimates for the Cauchy process, i.e. α = 1, were obtained. Namely, we use the fact that the first eigenfunction is unimodal along the lines parallel to the coordinate axes and that it satisfies an appropriate Harnack inequality. Then we use similar techniques as for rectangles. As before, the crucial role in this proof is played by Lemmas 4.2 and 4.3.
The properties of the first eigenfunction are obtained in Section 3, and the proof of the lower bound estimates for the spectral gap is in Section 5. These estimates are presented below in Theorem 1.4. Let us point out that these estimates are sharp only for α > 1, where we know that they cannot be improved because of the results for rectangles.
Theorem 1.4. Let D ⊂ R² be a bounded convex domain which is symmetric relative to both coordinate axes. Assume that [−L, L] × [−1, 1], L ≥ 1, is the smallest rectangle (with sides parallel to the coordinate axes) containing D. Then
$$2A_{2,-\alpha}^{-1}(\lambda_2 - \lambda_1) \ge \frac{C}{L^{2}},$$
where
$$C = C(\alpha) = 10^{-9}\, 3^{\alpha-4}\, 2^{-2\alpha-1} \left( 4 + \frac{12\, \Gamma(2/\alpha)}{\alpha(2-\alpha)\, (1 - 2^{-\alpha})^{2/\alpha}} \right)^{-2}.$$
As for rectangles, using the scaling of λn one can obtain estimates for domains D such that (−a, a) × (−b, b), a ≥ b > 0, is the smallest rectangle containing D (cf. Remark 1.3 b).
There are still many open problems concerning the spectral gap for semigroups of symmetric stable processes, α ∈ (0, 2), in bounded domains D ⊂ Rd. Perhaps the most interesting is the following. What is the best possible lower bound estimate for the spectral gap for an arbitrary bounded convex domain D ⊂ Rd? Connected with this problem are questions about the shape of the first eigenfunction ϕ1. For example, is ϕ1 log-concave, or at least unimodal, when D is a convex bounded domain? There is also an unsolved problem concerning the domains from Theorem 1.4. Can one obtain for α ≤ 1 lower bounds similar to those obtained for rectangles, i.e. λ2 − λ1 ≥ Cα/L^{1+α} for α < 1 and λ2 − λ1 ≥ C log(1 + L)/L² for α = 1?
2 Variational formula
In this section we prove Theorem 1.1 – the variational formula for the spectral gap.
First we need the following simple properties of the kernel pD(t, x, y).
Lemma 2.1. There exists a constant c = c(d, α) such that for any t > 0 and x, y ∈ D we have
$$p_D(t,x,y) \le p(t,x,y) \le \frac{ct}{|x-y|^{d+\alpha}}. \qquad (2.1)$$
For any x, y ∈ D, x ≠ y, we have
$$\lim_{t\to 0^+}\frac{p_D(t,x,y)}{t} = \lim_{t\to 0^+}\frac{p(t,x,y)}{t} = \frac{A_{d,-\alpha}}{|x-y|^{d+\alpha}}. \qquad (2.2)$$
Proof. These properties of pD(t, x, y) are rather well known. We recall some of the standard arguments.
The estimate $p(t,x,y) \le ct|x-y|^{-d-\alpha}$ follows e.g. from the scaling property $p(t,x,y) = t^{-d/\alpha}p_1((x-y)t^{-1/\alpha})$ and the inequality $p_1(z) \le c|z|^{-d-\alpha}$ [29]. The equality on the right-hand side of (2.2) is well known (see (1.3)).
We know that $p_D(t,x,y) = p(t,x,y) - r_D(t,x,y)$, where $r_D(t,x,y) = E^x(\tau_D < t;\ p(t-\tau_D, X(\tau_D), y))$. By (2.1) we get for x, y ∈ D and t > 0
$$\frac{1}{t}\,r_D(t,x,y) = \frac{1}{t}\,E^x\big(\tau_D<t;\ p(t-\tau_D,X(\tau_D),y)\big) \le \frac{1}{t}\,E^x\left(\tau_D<t;\ \frac{ct}{|y-X(\tau_D)|^{d+\alpha}}\right) \le \frac{c\,P^x(\tau_D<t)}{(\delta_D(y))^{d+\alpha}},$$
where $\delta_D(y) = \inf\{|z-y| : z \in \partial D\}$. It follows that $t^{-1}r_D(t,x,y) \to 0$ as $t \to 0^+$.
Let us denote
$$\tilde p_D(t,x,y) = \frac{e^{\lambda_1 t}\,p_D(t,x,y)}{\varphi_1(x)\varphi_1(y)}, \qquad x,y\in D,\ t>0,$$
and
$$T_t^D f(x) = \int_D \tilde p_D(t,x,y)\,f(y)\,\varphi_1^2(y)\,dy, \qquad f\in L^2(D,\varphi_1^2),\ t>0.$$
$\{T_t^D\}_{t\ge0}$ is a semigroup in $L^2(D,\varphi_1^2)$. This is the semigroup for the stable process conditioned to remain forever in D (see [27], where the same semigroup is defined for Brownian motion). Let us define
$$\mathcal{E}(f,f) = \lim_{t\to 0^+}\frac{1}{t}\big(f - T_t^D f,\, f\big)_{L^2(D,\varphi_1^2)}, \qquad f\in L^2(D,\varphi_1^2).$$
Lemma 2.2. For any $f\in L^2(D,\varphi_1^2)$, $\mathcal{E}(f,f)$ is well defined and we have
$$\mathcal{E}(f,f) = \frac{A_{d,-\alpha}}{2}\int_D\int_D \frac{(f(x)-f(y))^2}{|x-y|^{d+\alpha}}\,\varphi_1(x)\varphi_1(y)\,dx\,dy. \qquad (2.3)$$
Proof. We have
$$\lim_{t\to0^+}\frac{1}{t}\big(f - T_t^Df,\, f\big)_{L^2(D,\varphi_1^2)} = \lim_{t\to0^+}\frac{1}{t}\int_D\left(f(x) - \int_D \frac{e^{\lambda_1t}p_D(t,x,y)}{\varphi_1(x)\varphi_1(y)}\,f(y)\,\varphi_1^2(y)\,dy\right)f(x)\,\varphi_1^2(x)\,dx$$
$$= \lim_{t\to0^+}\frac{1}{t}\int_D\left(f(x)\varphi_1(x) - e^{\lambda_1t}\int_D p_D(t,x,y)\,f(y)\,\varphi_1(y)\,dy\right)f(x)\,\varphi_1(x)\,dx. \qquad (2.4)$$
We also have
$$f(x)\varphi_1(x) = f(x)\,e^{\lambda_1t}\,P_t^D\varphi_1(x) = e^{\lambda_1t}\int_D p_D(t,x,y)\,f(x)\,\varphi_1(y)\,dy.$$
Hence (2.4) is equal to
$$\lim_{t\to0^+}\frac{e^{\lambda_1t}}{t}\int_D\int_D p_D(t,x,y)\big(f(x)\varphi_1(y) - f(y)\varphi_1(y)\big)\,dy\,f(x)\,\varphi_1(x)\,dx = \lim_{t\to0^+}e^{\lambda_1t}\int_D\int_D \frac{p_D(t,x,y)}{t}\big(f^2(x) - f(x)f(y)\big)\varphi_1(x)\varphi_1(y)\,dy\,dx. \qquad (2.5)$$
Note that we can interchange the roles of x and y in (2.5). Therefore, by standard arguments, (2.5) is equal to
$$\lim_{t\to0^+}\frac{e^{\lambda_1t}}{2}\int_D\int_D \frac{p_D(t,x,y)}{t}\,(f(x)-f(y))^2\,\varphi_1(x)\varphi_1(y)\,dx\,dy. \qquad (2.6)$$
In view of (2.2), in order to prove (2.3) we need only justify the interchange of the limit and the integral in (2.6). Let us denote
$$\mathcal{E}_1(f,f) = \int_D\int_D \frac{(f(x)-f(y))^2}{|x-y|^{d+\alpha}}\,\varphi_1(x)\varphi_1(y)\,dx\,dy.$$
When $\mathcal{E}_1(f,f) = \infty$, (2.3) follows from (2.6) by the Fatou lemma. Now let us consider the case $\mathcal{E}_1(f,f) < \infty$. By (2.1), for any t > 0 we have
$$\frac{p_D(t,x,y)}{t}\,(f(x)-f(y))^2\,\varphi_1(x)\varphi_1(y) \le \frac{c\,(f(x)-f(y))^2}{|x-y|^{d+\alpha}}\,\varphi_1(x)\varphi_1(y). \qquad (2.7)$$
The integral over D × D of the right-hand side of (2.7) is equal to $c\,\mathcal{E}_1(f,f) < \infty$. Now (2.3) follows from (2.6) by the bounded convergence theorem.
Proof of Theorem 1.1. Let f ∈ F. We have $f\varphi_1 \in L^2(D)$, $\|f\varphi_1\|_{L^2(D)} = 1$ and $f\varphi_1 \perp \varphi_1$ in $L^2(D)$. Since $\{\varphi_n\}_{n=1}^{\infty}$ is an orthonormal basis in $L^2(D)$ we have
$$f\varphi_1 = \sum_{n=2}^{\infty} c_n\varphi_n,$$
where $c_n = \int_D f(x)\varphi_1(x)\varphi_n(x)\,dx$ and the equality holds in the $L^2(D)$ sense. Hence
$$f = \sum_{n=2}^{\infty} c_n\,\frac{\varphi_n}{\varphi_1}$$
in the $L^2(D,\varphi_1^2)$ sense. The condition $\|f\varphi_1\|_{L^2(D)} = 1$ gives $\sum_{n=2}^{\infty} c_n^2 = 1$.
We will show that
$$\mathcal{E}(f,f) = \sum_{n=2}^{\infty}(\lambda_n-\lambda_1)\,c_n^2. \qquad (2.8)$$
Let $f_k = \sum_{n=2}^{k} c_n\varphi_n/\varphi_1$. We have
$$T_t^Df_k(x) = \int_D \frac{e^{\lambda_1t}p_D(t,x,y)}{\varphi_1(x)\varphi_1(y)}\sum_{n=2}^{k}c_n\,\frac{\varphi_n(y)}{\varphi_1(y)}\,\varphi_1^2(y)\,dy = \frac{e^{\lambda_1t}}{\varphi_1(x)}\sum_{n=2}^{k}c_n\int_D p_D(t,x,y)\,\varphi_n(y)\,dy = \sum_{n=2}^{k}c_n\,e^{-(\lambda_n-\lambda_1)t}\,\frac{\varphi_n(x)}{\varphi_1(x)}.$$
Hence
$$\big(T_t^Df_k,\, f_k\big)_{L^2(D,\varphi_1^2)} = \int_D T_t^Df_k(x)\,f_k(x)\,\varphi_1^2(x)\,dx = \sum_{n=2}^{k}\sum_{m=2}^{k}c_nc_m\,e^{-(\lambda_n-\lambda_1)t}\int_D\varphi_n(x)\varphi_m(x)\,dx = \sum_{n=2}^{k}c_n^2\,e^{-(\lambda_n-\lambda_1)t}.$$
So we obtain
$$\big(f_k - T_t^Df_k,\, f_k\big)_{L^2(D,\varphi_1^2)} = \sum_{n=2}^{k}c_n^2\big(1 - e^{-(\lambda_n-\lambda_1)t}\big).$$
It follows that
$$\frac{1}{t}\big(f - T_t^Df,\, f\big)_{L^2(D,\varphi_1^2)} = \lim_{k\to\infty}\frac{1}{t}\big(f_k - T_t^Df_k,\, f_k\big)_{L^2(D,\varphi_1^2)} = \sum_{n=2}^{\infty}c_n^2\,\frac{1 - e^{-(\lambda_n-\lambda_1)t}}{t}. \qquad (2.9)$$
To show (2.8) we have to justify the interchange of the limit and the sum in (2.9). Note that $(1 - e^{-(\lambda_n-\lambda_1)t})/t \uparrow \lambda_n - \lambda_1$ as $t \downarrow 0$, by convexity of the exponential function. Hence (2.8) follows from (2.9) by the monotone convergence theorem.
By (2.8) we get
$$\mathcal{E}(f,f) = \sum_{n=2}^{\infty}(\lambda_n-\lambda_1)\,c_n^2 \ge (\lambda_2-\lambda_1)\sum_{n=2}^{\infty}c_n^2 = \lambda_2-\lambda_1.$$
Now Lemma 2.2 shows that the infimum in (1.7) is greater than or equal to λ2 − λ1. When we put f = ϕ2/ϕ1 (c2 = 1, cn = 0 for n ≥ 3) we obtain E(ϕ2/ϕ1, ϕ2/ϕ1) = λ2 − λ1. This shows that the infimum in (1.7) is equal to λ2 − λ1 and is achieved for f = ϕ2/ϕ1.
3 Geometric and Analytic Properties of ϕ1
First we recall a result which was already proven in [4], Theorem 2.1. (Theorem 2.1 in [4] was formulated for α = 1, the Cauchy process, but the proof works for all α ∈ (0, 2].)
Theorem 3.1. Let D ⊂ R2 be a bounded convex domain which is symmetric relative to both coordinate axes. Then we have
(i) ϕ1 is continuous and strictly positive in D.
(ii) ϕ1 is symmetric in D with respect to both coordinate axes. That is, ϕ1(x1, −x2) = ϕ1(x1, x2) and ϕ1(−x1, x2) = ϕ1(x1, x2).
(iii) ϕ1 is unimodal in D with respect to both coordinate axes. That is, if we take any a2 ∈ (−1, 1) and p(a2) > 0 such that (p(a2), a2) ∈ ∂D, then the function v(x1) = ϕ1(x1, a2) defined on (−p(a2), p(a2)) is non-decreasing on (−p(a2), 0) and non-increasing on (0, p(a2)). Similarly, if we take any a1 ∈ (−L, L) and r(a1) > 0 such that (a1, r(a1)) ∈ ∂D, then the function u(x2) = ϕ1(a1, x2) defined on (−r(a1), r(a1)) is non-decreasing on (−r(a1), 0) and non-increasing on (0, r(a1)).
Next, we prove the Harnack inequality for ϕ1. Such an inequality is well known (see e.g. Theorem 6.1 in [10]). Our purpose here is to give a proof which yields an explicit constant. We adopt the method from [4].
First we need to recall some standard facts concerning stable processes.
By $P_{r,x}(z,y)$ we denote the Poisson kernel of the ball B(x, r) ⊂ Rd, r > 0, for the stable process. That is,
$$P^z\big(X(\tau_{B(x,r)}) \in A\big) = \int_A P_{r,x}(z,y)\,dy,$$
where z ∈ B(x, r) and A ⊂ B^c(x, r). We have [9]
$$P_{r,x}(z,y) = C_\alpha^d\,\frac{(r^2 - |z-x|^2)^{\alpha/2}}{(|y-x|^2 - r^2)^{\alpha/2}\,|y-z|^d}, \qquad (3.1)$$
where $C_\alpha^d = \Gamma(d/2)\,\pi^{-d/2-1}\sin(\pi\alpha/2)$, z ∈ B(x, r) and y ∈ int(B^c(x, r)).
It is well known ([20], cf. also [11], formula (2.10)) that
$$E^y\big(\tau_{B(0,r)}\big) = C_\alpha^d\,(A_{d,-\alpha})^{-1}\,(r^2 - |y|^2)^{\alpha/2}, \qquad (3.2)$$
where r > 0 and $A_{d,-\alpha}$ is given by (1.4).
When d > α, by $G_D(x,y) = \int_0^\infty p_D(t,x,y)\,dt$ we denote the Green function of the domain D ⊂ Rd, x, y ∈ D. We have $G_D(x,y) < \infty$ for x ≠ y. (For d ≤ α the Green function may be defined by a different formula, but we will not use it in this paper.)
It is well known (see [9]) that
$$G_{B(0,1)}(z,y) = \frac{R_{d,\alpha}}{|z-y|^{d-\alpha}}\int_0^{w(z,y)}\frac{r^{\alpha/2-1}}{(r+1)^{d/2}}\,dr, \qquad z,y \in B(0,1), \qquad (3.3)$$
where
$$w(z,y) = \frac{(1-|z|^2)(1-|y|^2)}{|z-y|^2} \qquad\text{and}\qquad R_{d,\alpha} = \frac{\Gamma(d/2)}{2^\alpha\,\pi^{d/2}\,(\Gamma(\alpha/2))^2}.$$
By λ1(B1) we denote the first eigenvalue for the unit ball B(0, 1). Theorem 4 in [6] (cf. also [14]) gives the following estimate of λ1(B1):
$$\lambda_1(B_1) \le (\mu_1(B_1))^{\alpha/2}, \qquad (3.4)$$
where μ1(B1) ≈ 5.784 is the first eigenvalue of the Dirichlet Laplacian for the unit ball.
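For d = 2 the constant μ1(B1) is explicit: it equals $j_{0,1}^2$, the square of the first positive zero of the Bessel function J0. A quick numerical confirmation (our own check, using SciPy):

```python
from scipy.special import jn_zeros

# First eigenvalue of the Dirichlet Laplacian on the unit disk:
# mu_1(B_1) = j_{0,1}^2, where j_{0,1} is the first positive zero of J_0.
j01 = jn_zeros(0, 1)[0]   # approximately 2.404826
print(j01**2)             # approximately 5.7832, so in particular mu_1(B_1) < 6
```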
We will also need the following easy scaling property of ϕ1.
Lemma 3.2. Let D ⊂ Rd be a bounded domain, s > 0, and let ϕ1,s be the first eigenfunction on the set sD for the stable semigroup $\{P_t^{sD}\}_{t\ge0}$. Then for any x ∈ D we have $\varphi_{1,s}(sx) = s^{-d/2}\varphi_{1,1}(x)$.
Now we can formulate the Harnack inequality for ϕ1.
Theorem 3.3. Let α ∈ (0, 2), d > α, and let D ⊂ Rd be a bounded domain with inradius R > 0, and 0 < a < b < 1. If B(x, bR) ⊂ D then on B(x, aR), ϕ1 satisfies the Harnack inequality with constant C1 = C1(d, α, a, b). That is, for any z1, z2 ∈ B(x, aR) we have ϕ1(z1) ≤ C1ϕ1(z2), where
$$C_1 = \frac{(b+a)^{d-\alpha/2}\,b^\alpha}{(b-a)^{d+\alpha/2}}\left(1 + e + \frac{b^{d+\alpha/2}\,C_2}{(b-a)^{\alpha/2}(1-b^\alpha)^{d/\alpha}}\right)$$
and
$$C_2 = C_2(d,\alpha) = \frac{\alpha^2\,2^{3d/2-\alpha/2-1}\,C_\alpha^d\,M_{d,\alpha}\,(\lambda_1(B_1))^{d/\alpha}}{(d-\alpha)\,R_{d,\alpha}\,A_{d,-\alpha}}.$$
Proof of Theorem 3.3. In view of Lemma 3.2 we may and do assume that R = 1.
Let B ⊂ D be any ball (B ≠ D). For any x, y ∈ B and t > 0 we have
$$p_B(t,x,y) = \sum_{n=1}^{\infty} e^{-\lambda_n(B)t}\,\varphi_{n,B}(x)\,\varphi_{n,B}(y), \qquad (3.5)$$
where λn(B) and ϕn,B are the eigenvalues and eigenfunctions for the semigroup $\{P_t^B\}_{t\ge0}$.
We will use the fact that the first eigenfunction is q-harmonic in B with respect to the α-stable Schrödinger operator.
Let ϕ1, λ1 = λ1(D) be the first eigenfunction and eigenvalue for the semigroup $\{P_t^D\}_{t\ge0}$, and let $\mathcal{A}$ be the infinitesimal generator of this semigroup. For x ∈ D we have
$$\mathcal{A}\varphi_1(x) = \lim_{t\to0^+}\frac{P_t^D\varphi_1(x) - \varphi_1(x)}{t} = \lim_{t\to0^+}\frac{e^{-\lambda_1(D)t}\varphi_1(x) - \varphi_1(x)}{t} = -\lambda_1(D)\,\varphi_1(x).$$
This gives that $(\mathcal{A} + \lambda_1(D))\varphi_1 = 0$ on D. It follows that ϕ1 is q-harmonic on B with respect to the α-stable Schrödinger operator $\mathcal{A} + q$ with q ≡ λ1(D). Formally this follows from Proposition 3.17, Theorem 5.5 and Definition 5.1 in [10], and the fact that (B, λ1(D)) is gaugeable because B is a proper open subset of D and λ1(B) > λ1(D).
Let $V_B(x,y) = \int_0^\infty e^{\lambda_1(D)t}\,p_B(t,x,y)\,dt$. Here VB is the q-Green function for q ≡ λ1(D), see page 58 in [10]. The q-harmonicity of ϕ1 (Definition 5.1 in [10]), Theorem 4.10 in [10] (formula (4.15)) and formula (2.17) in [10] (page 61) give that for z ∈ B,
$$\varphi_1(z) = E^z\big[e_{\lambda_1(D)}(\tau_B)\,\varphi_1(X(\tau_B))\big] = A_{d,-\alpha}\int_B V_B(z,y)\int_{D\setminus B}|y-w|^{-d-\alpha}\,\varphi_1(w)\,dw\,dy, \qquad (3.6)$$
where $e_{\lambda_1(D)}(\tau_B) = \exp(\lambda_1(D)\tau_B)$. Of course, (3.6) is a standard fact in the theory of q-harmonic functions for α-stable Schrödinger operators. For us it will be a key formula for proving the Harnack inequality for ϕ1.
By the well-known formula for the distribution of the harmonic measure [22] we have
$$E^z\varphi_1(X(\tau_B)) = A_{d,-\alpha}\int_B G_B(z,y)\int_{D\setminus B}|y-w|^{-d-\alpha}\,\varphi_1(w)\,dw\,dy. \qquad (3.7)$$
To obtain our Harnack inequality for ϕ1 we will first compare (3.6) and (3.7), and then we will use the formula for $E^z\varphi_1(X(\tau_B))$. In order to compare (3.6) and (3.7) we need to compare $V_B(z,y)$ and $G_B(z,y)$. This will be done in a sequence of lemmas.
Lemma 3.4. Let D ⊂ Rd, d > α, be a bounded domain with inradius 1 and let B ⊊ D be a ball with radius b < 1. Then for any z, y ∈ B and t0 > 0 we have
$$V_B(z,y) \le e^{\lambda_1(B_1)t_0}\int_0^{t_0} p_B(t,z,y)\,dt + \frac{C_3}{t_0^{(d-\alpha)/\alpha}},$$
where B1 = B(0, 1) and $C_3 = \alpha(d-\alpha)^{-1}(1-b^\alpha)^{-d/\alpha}M_{d,\alpha}$.
Proof. The inradius of D is 1, so λ1(D) ≤ λ1(B1). It follows that
$$V_B(z,y) \le e^{\lambda_1(B_1)t_0}\int_0^{t_0} p_B(t,z,y)\,dt + \int_{t_0}^{\infty} e^{\lambda_1(B_1)t}\,p_B(t,z,y)\,dt. \qquad (3.8)$$
By (3.5) we obtain
$$p_B(t,z,y) = \sum_{n=1}^{\infty}e^{-\lambda_n(B)t}\,\varphi_{n,B}(z)\,\varphi_{n,B}(y) \le \frac{1}{2}\sum_{n=1}^{\infty}e^{-\lambda_n(B)t}\big(\varphi_{n,B}^2(z) + \varphi_{n,B}^2(y)\big).$$
It follows that the second integral in (3.8) is bounded above by
$$\frac{1}{2}\sum_{n=1}^{\infty}\int_{t_0}^{\infty} e^{(\lambda_1(B_1)-\beta\lambda_n(B))t}\,e^{-\lambda_n(B)(1-\beta)t}\big(\varphi_{n,B}^2(z) + \varphi_{n,B}^2(y)\big)\,dt, \qquad (3.9)$$
where β = λ1(B1)/λ1(B) = b^α (see (1.9)). Note also that $e^{(\lambda_1(B_1)-\beta\lambda_n(B))t} \le e^{(\lambda_1(B_1)-\beta\lambda_1(B))t} = e^0 = 1$.
For any w ∈ B (w = z or w = y) we have
$$\sum_{n=1}^{\infty}\int_{t_0}^{\infty} e^{-\lambda_n(B)(1-\beta)t}\,\varphi_{n,B}^2(w)\,dt = \int_{t_0}^{\infty} p_B((1-\beta)t, w, w)\,dt \le \int_{t_0}^{\infty} p((1-\beta)t, 0, 0)\,dt \le \int_{t_0}^{\infty}\frac{M_{d,\alpha}}{(1-\beta)^{d/\alpha}\,t^{d/\alpha}}\,dt = \frac{C_3}{t_0^{(d-\alpha)/\alpha}}.$$
Lemma 3.5. Let 0 < a < b < 1 and B = B(w, b), w ∈ Rd. For any y ∈ B and z ∈ B(w, a) we have
$$C_4\,G_B(z,y) \ge E^y(\tau_B),$$
where $C_4 = b^{d+\alpha/2}\,\alpha\,2^{3d/2-\alpha/2-1}\,C_\alpha^d/\big((b-a)^{\alpha/2}\,R_{d,\alpha}\,A_{d,-\alpha}\big)$.
Proof. We may and do assume that w = 0. Let us consider the formula (3.3) for the Green function of the unit ball, $G_{B(0,1)}(z,y)$. Note that for any t > 0
$$\int_0^t \frac{r^{\alpha/2-1}}{(r+1)^{d/2}}\,dr \ge \frac{1}{2^{d/2}}\int_0^{t\wedge 1} r^{\alpha/2-1}\,dr = \frac{t^{\alpha/2}\wedge 1}{\alpha\,2^{d/2-1}}.$$
Hence for any z, y ∈ B(0, 1),
$$G_{B(0,1)}(z,y) \ge R_{d,\alpha}\,\alpha^{-1}2^{-d/2+1}\,|z-y|^{\alpha-d}\big(1\wedge (w(z,y))^{\alpha/2}\big).$$
By scaling it follows that for any z, y ∈ B,
$$G_B(z,y) = b^{\alpha-d}\,G_{B(0,1)}\Big(\frac{z}{b},\frac{y}{b}\Big) \ge \frac{R_{d,\alpha}\,\alpha^{-1}2^{-d/2+1}}{b^\alpha\,|z-y|^{d-\alpha}}\left(b^\alpha \wedge \frac{(b^2-|z|^2)^{\alpha/2}\,(b^2-|y|^2)^{\alpha/2}}{|z-y|^\alpha}\right). \qquad (3.10)$$
For z ∈ B(0, a) and y ∈ B = B(0, b) we have |z − y| ≤ a + b ≤ 2b and $(b^2-|z|^2)^{\alpha/2} \ge (b^2-a^2)^{\alpha/2}$. Hence
$$\frac{(b^2-|z|^2)^{\alpha/2}}{|z-y|^\alpha} \ge \frac{\big((b-a)(b+a)\big)^{\alpha/2}}{\big((a+b)^2\big)^{\alpha/2}} \ge \frac{1}{2^{\alpha/2}}\Big(1-\frac{a}{b}\Big)^{\alpha/2}.$$
It follows that for z ∈ B(0, a) and y ∈ B(0, b), (3.10) is bounded below by
$$\frac{R_{d,\alpha}\,\alpha^{-1}2^{-d/2+1}}{b^d\,2^{d-\alpha}\,2^{\alpha/2}}\Big(1-\frac{a}{b}\Big)^{\alpha/2}(b^2-|y|^2)^{\alpha/2}.$$
By the formula (3.2) for $E^y(\tau_B)$, this is equal to $C_4^{-1}E^y(\tau_B)$.
Lemma 3.6. Let D ⊂ Rd, d > α, be a bounded domain with inradius 1, let 0 < a < b < 1 and B = B(x, b) ⊂ D. Then for any z ∈ B(x, a) and y ∈ B we have
$$G_B(z,y) \le V_B(z,y) \le C_5\,G_B(z,y),$$
where $C_5 = 1 + e + C_3C_4\,(\lambda_1(B_1))^{d/\alpha}$.
Proof. The inequality $G_B(z,y) \le V_B(z,y)$ is trivial; it follows from the definitions of $G_B(z,y)$ and $V_B(z,y)$.
We will prove the inequality $V_B(z,y) \le C_5G_B(z,y)$. By Lemma 4.8 in [10] we have
$$V_B(z,y) = G_B(z,y) + \lambda_1(D)\int_B V_B(z,u)\,G_B(u,y)\,du. \qquad (3.11)$$
By Lemma 3.4, $\int_B V_B(z,u)G_B(u,y)\,du$ is bounded above by
$$e^{\lambda_1(B_1)t_0}\int_B\int_0^{t_0} p_B(t,z,u)\,dt\,G_B(u,y)\,du + \frac{C_3}{t_0^{(d-\alpha)/\alpha}}\int_B G_B(u,y)\,du. \qquad (3.12)$$
Let us denote the above sum by I + II. We have
$$\int_B\int_0^{t_0} p_B(t,z,u)\,dt\,G_B(u,y)\,du = \int_0^{t_0}\int_0^{\infty}\int_B p_B(t,z,u)\,p_B(s,u,y)\,du\,ds\,dt = \int_0^{t_0}\int_0^{\infty} p_B(t+s,z,y)\,ds\,dt \le t_0\,G_B(z,y).$$
It follows that $I \le t_0\,e^{\lambda_1(B_1)t_0}\,G_B(z,y)$.
By applying Lemma 3.5 for z ∈ B(x, a) we get
$$II = \frac{C_3\,E^y(\tau_B)}{t_0^{(d-\alpha)/\alpha}} \le \frac{C_3C_4\,G_B(z,y)}{t_0^{(d-\alpha)/\alpha}}.$$
Putting the estimates (3.11) and (3.12) together with those for I and II gives
$$V_B(z,y) \le G_B(z,y)\left(1 + \lambda_1(B_1)\,t_0\,e^{\lambda_1(B_1)t_0} + \frac{C_3C_4\,\lambda_1(B_1)}{t_0^{(d-\alpha)/\alpha}}\right). \qquad (3.13)$$
Putting t0 = 1/λ1(B1) we obtain
$$V_B(z,y) \le G_B(z,y)\big(1 + e + C_3C_4\,(\lambda_1(B_1))^{d/\alpha}\big).$$
We now return to the proof of Theorem 3.3. Let z1, z2 ∈ B(x, a) ⊂ B(x, b) ⊂ D. By (3.6), (3.7) and Lemma 3.6 we obtain
$$\varphi_1(z_2) \ge E^{z_2}\big[\varphi_1(X(\tau_{B(x,b)}))\big] \qquad (3.14)$$
and
$$\varphi_1(z_1) \le C_5\,E^{z_1}\big[\varphi_1(X(\tau_{B(x,b)}))\big]. \qquad (3.15)$$
So to compare ϕ1(z2) and ϕ1(z1) we have to compare $E^{z_1}[\varphi_1(X(\tau_{B(x,b)}))]$ and $E^{z_2}[\varphi_1(X(\tau_{B(x,b)}))]$. We have
$$E^{z_i}\big[\varphi_1(X(\tau_{B(x,b)}))\big] = \int_{D\setminus B(x,b)}\varphi_1(y)\,P_{b,x}(z_i,y)\,dy, \qquad i = 1, 2, \qquad (3.16)$$
where $P_{b,x}(z_i,y)$ is the Poisson kernel of the ball B(x, b), given by the explicit formula (3.1). We have thus reduced the problem to comparing $P_{b,x}(z_1,y)$ and $P_{b,x}(z_2,y)$. Recall that z1, z2 ∈ B(x, a). For y ∈ B^c(x, b) we have
$$\frac{|y-z_2|}{|y-z_1|} \le \frac{b+a}{b-a} \qquad\text{and}\qquad \frac{(b^2-|z_1-x|^2)^{\alpha/2}}{(b^2-|z_2-x|^2)^{\alpha/2}} \le \frac{b^\alpha}{(b^2-a^2)^{\alpha/2}}.$$
It follows that
$$\frac{P_{b,x}(z_1,y)}{P_{b,x}(z_2,y)} \le \frac{(b+a)^{d-\alpha/2}\,b^\alpha}{(b-a)^{d+\alpha/2}}.$$
Using this, (3.16), (3.15) and (3.14), we obtain for z1, z2 ∈ B(x, a)
$$\varphi_1(z_1) \le C_5\,\frac{(b+a)^{d-\alpha/2}\,b^\alpha}{(b-a)^{d+\alpha/2}}\,\varphi_1(z_2).$$
In this paper we will need the Harnack inequality for ϕ1² in dimension d = 2. For this reason we formulate the following corollary of Theorem 3.3. In this corollary we choose b ∈ (0, 1/2] and a = b/2.
Corollary 3.7. Let α ∈ (0, 2) and let D ⊂ R2 be a bounded domain with inradius R > 0, and let b ∈ (0, 1/2]. If B(x, bR) ⊂ D then on B(x, bR/2), ϕ1² satisfies the Harnack inequality with constant cH = cH(α). That is, for any z1, z2 ∈ B(x, bR/2) we have ϕ1²(z1) ≤ cHϕ1²(z2), where
$$c_H = 3^{4-\alpha}\,2^{2\alpha}\left(1 + e + \frac{12\,\Gamma(2/\alpha)}{(2-\alpha)\,\alpha\,(1-2^{-\alpha})^{2/\alpha}}\right)^2. \qquad (3.17)$$
We point out that cH does not depend on b ∈ (0, 1/2].
Proof. We are going to obtain upper bound estimates for the constants C1, C2 from Theorem 3.3 for d = 2, a = b/2 and b ∈ (0, 1/2].
Putting d = 2 we get $C_\alpha^2 = \pi^{-2}\sin(\pi\alpha/2)$, $R_{2,\alpha} = 2^{-\alpha}\pi^{-1}\Gamma^{-2}(\alpha/2)$, $M_{2,\alpha} = 2^{-1}\pi^{-1}\alpha^{-1}\Gamma(2/\alpha)$ and $A_{2,-\alpha} = \alpha^2\,2^{\alpha-2}\,\pi^{-1}\,\Gamma(\alpha/2)\,\Gamma^{-1}(1-\alpha/2)$.
Putting these constants into the formula for C2, and using also the fact that $\Gamma(\alpha/2)\Gamma(1-\alpha/2) = \pi\sin^{-1}(\pi\alpha/2)$, we obtain after easy calculations
$$C_2 = \frac{2^{3-\alpha/2}\,\Gamma(2/\alpha)\,(\lambda_1(B_1))^{2/\alpha}}{(2-\alpha)\,\alpha} \le \frac{6\cdot 2^{3-\alpha/2}\,\Gamma(2/\alpha)}{(2-\alpha)\,\alpha}.$$
The last inequality follows from (3.4) and the fact that μ1(B1) < 6.
Putting d = 2 and a = b/2 we obtain
$$C_1 = 3^{2-\alpha/2}\,2^\alpha\left(1 + e + \frac{2^{\alpha/2}\,b^2\,C_2}{(1-b^\alpha)^{2/\alpha}}\right).$$
Now using the estimate for C2 and the inequality b ≤ 1/2 we get
$$C_1 \le 3^{2-\alpha/2}\,2^\alpha\left(1 + e + \frac{12\,\Gamma(2/\alpha)}{(2-\alpha)\,\alpha\,(1-2^{-\alpha})^{2/\alpha}}\right). \qquad (3.18)$$
In the assertion of Corollary 3.7 we have the Harnack inequality for ϕ1², so cH is equal to the square of the right-hand side of (3.18).
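To get a feeling for the size of the Harnack constant, the expression for cH can be evaluated numerically; the snippet below is our own illustration, based on the formula displayed in Corollary 3.7:

```python
from math import e, gamma

def c_H(alpha):
    # Square of the right-hand side of (3.18).
    inner = 1 + e + 12 * gamma(2 / alpha) / (
        (2 - alpha) * alpha * (1 - 2**-alpha)**(2 / alpha))
    return 3**(4 - alpha) * 2**(2 * alpha) * inner**2

for alpha in (0.5, 1.0, 1.5):
    print(f"alpha = {alpha}: c_H is about {c_H(alpha):.3e}")
```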
4 Spectral gap for rectangles
We begin with several lemmas, which will lead us to the estimates of the spectral gap for rectangles.
Lemma 4.1. Let D = (−L, L) × (−1, 1), where L ≥ 1. Then
$$\varphi_1(x) \le \frac{3}{\sqrt{L}} \qquad\text{for all } x\in D$$
and
$$\varphi_1(x_1,x_2) \ge \frac{1}{2\sqrt{L}}\Big(1 - \frac{2}{L}|x_1|\Big)\big(1 - 2|x_2|\big) \qquad\text{for all } (x_1,x_2)\in[-L/2,L/2]\times[-1/2,1/2].$$
Proof. The lemma follows easily from the unimodality and symmetry of ϕ1 (see Theorem 3.1), the midconcavity of ϕ1 (see Theorem 1.1 in [5]) and the equality $\int_D \varphi_1^2\,dx = 1$.
Lemma 4.2. Let µk > 0 (k = 1, . . . , L), L ≥ 2, be unimodal, i.e., there exists k0 such that µi ≤ µj for i ≤ j ≤ k0 and µi ≥ µj for k0 ≤ i ≤ j. Then for any fk ∈ R such that $\sum_{k=1}^{L} f_k\mu_k = 0$ we have
$$\sum_{k=1}^{L}\mu_kf_k^2 \le L^2\sum_{k=1}^{L-1}(\mu_k\wedge\mu_{k+1})(f_k - f_{k+1})^2.$$
Proof. Let $M = \sum_{k=1}^{L}\mu_k$. By $\sum_{k=1}^{L}\mu_kf_k = 0$ and the Schwarz inequality we obtain
$$M\sum_{k=1}^{L}\mu_kf_k^2 = \sum_{j=1}^{L}\mu_j\sum_{k=1}^{L}\mu_kf_k^2 = \frac{1}{2}\sum_{j,k=1}^{L}\mu_j\mu_k(f_j^2+f_k^2) = \frac{1}{2}\sum_{j,k=1}^{L}\mu_j\mu_k(f_j-f_k)^2 = \sum_{1\le j<k\le L}\mu_j\mu_k\Big(\sum_{t=j}^{k-1}(f_t-f_{t+1})\Big)^2$$
$$\le L\sum_{1\le j<k\le L}\mu_j\mu_k\sum_{t=j}^{k-1}(f_t-f_{t+1})^2 = L\sum_{t=1}^{L-1}\Big(\sum_{j\le t<k}\mu_j\mu_k\Big)(f_t-f_{t+1})^2. \qquad (4.1)$$
For t < k0 (where k0 is defined in the lemma) we have
$$\sum_{j\le t<k}\mu_j\mu_k \le L\mu_t\sum_{k=1}^{L}\mu_k = L(\mu_t\wedge\mu_{t+1})M.$$
Similarly, for t ≥ k0,
$$\sum_{j\le t<k}\mu_j\mu_k \le L\mu_{t+1}\sum_{j=1}^{L}\mu_j = L(\mu_t\wedge\mu_{t+1})M.$$
These two inequalities combined with (4.1) finish the proof.
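Since Lemma 4.2 is a purely discrete inequality, it can be stress-tested directly. The following script is our own numerical check (an ascending run followed by a descending run is always unimodal, which is how the random weights are generated):

```python
import numpy as np

rng = np.random.default_rng(0)

for _ in range(10_000):
    L = int(rng.integers(2, 12))
    vals = rng.random(L) + 0.01            # strictly positive weights
    mask = rng.random(L) < 0.5
    # Ascending run followed by a descending run: always unimodal.
    mu = np.concatenate([np.sort(vals[mask]), -np.sort(-vals[~mask])])
    f = rng.standard_normal(L)
    f -= (mu * f).sum() / mu.sum()         # enforce sum_k mu_k f_k = 0
    lhs = (mu * f**2).sum()
    rhs = L**2 * (np.minimum(mu[:-1], mu[1:]) * np.diff(f)**2).sum()
    assert lhs <= rhs + 1e-9
print("Lemma 4.2 held in 10,000 random trials.")
```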
Lemma 4.3. Let (D, µ) be a finite measure space and let $D = \bigcup_{k=1}^{L} D_k$, L ≥ 1, with pairwise disjoint Dk's. We assume that the sequence µk = µ(Dk) > 0 is unimodal. Then
$$\frac{1}{\mu(D)}\int_D\int_D(f(x)-f(y))^2\,\mu(dx)\,\mu(dy) \qquad (4.2)$$
$$\le\ 2\sum_{k=1}^{L}\frac{1}{\mu_k}\int_{D_k}\int_{D_k}(f(x)-f(y))^2\,\mu(dx)\,\mu(dy) \qquad (4.3)$$
$$\quad +\ 4L^2\sum_{k=1}^{L-1}\frac{1}{\mu_k\vee\mu_{k+1}}\int_{D_k}\int_{D_{k+1}}(f(x)-f(y))^2\,\mu(dx)\,\mu(dy) \qquad (4.4)$$
for all f ∈ L²(D, µ).
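Lemma 4.3 can be tested numerically in the same spirit on discrete measure spaces; the sketch below is our own check of the inequality with the constants 2 and 4L² displayed above:

```python
import numpy as np

rng = np.random.default_rng(1)

def unimodal_weights(L, rng):
    vals = rng.random(L) + 0.01
    mask = rng.random(L) < 0.5
    return np.concatenate([np.sort(vals[mask]), -np.sort(-vals[~mask])])

for _ in range(2_000):
    L = int(rng.integers(1, 8))
    sizes = rng.integers(1, 5, size=L)      # number of atoms in each block D_k
    mu_k = unimodal_weights(L, rng)
    # Each block D_k carries equal atoms summing to mu_k = mu(D_k).
    atoms = np.concatenate([np.full(s, m / s) for s, m in zip(sizes, mu_k)])
    labels = np.repeat(np.arange(L), sizes)
    f = rng.standard_normal(atoms.size)

    diff2 = (f[:, None] - f[None, :])**2 * atoms[:, None] * atoms[None, :]
    lhs = diff2.sum() / atoms.sum()
    within = sum(diff2[labels == k][:, labels == k].sum() / mu_k[k]
                 for k in range(L))
    cross = sum(diff2[labels == k][:, labels == k + 1].sum()
                / max(mu_k[k], mu_k[k + 1]) for k in range(L - 1))
    assert lhs <= 2 * within + 4 * L**2 * cross + 1e-9
print("Lemma 4.3 held in all random trials.")
```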
Vladimir Arnold 80th Anniversary. Special Memorial Issue
Citation: Vladimir Arnold 80th Anniversary. Special Memorial Issue, Regular and Chaotic Dynamics, 2017, vol. 22, no. 6, pp. 579–584
Cieliebak K., Eliashberg Y., Polterovich L.
Contact Orderability up to Conjugation
We study in this paper the remnants of the contact partial order on the orbits of the adjoint action of contactomorphism groups on their Lie algebras. Our main interest is a class of noncompact contact manifolds, called convex at infinity.
Keywords: contactomorphism group, partial order, nonsqueezing
Citation: Cieliebak K., Eliashberg Y., Polterovich L., Contact Orderability up to Conjugation, Regular and Chaotic Dynamics, 2017, vol. 22, no. 6, pp. 585–602
Sevryuk M. B.
Families of Invariant Tori in KAM Theory: Interplay of Integer Characteristics
The purpose of this brief note is twofold. First, we summarize in a very concise form the principal information on Whitney smooth families of quasi-periodic invariant tori in various contexts of KAM theory. Our second goal is to attract (via an informal discussion and a simple example) the experts' attention to the peculiarities of the so-called excitation of elliptic normal modes in the reversible context 2.
Keywords: KAM theory, quasi-periodic invariant tori, Whitney smooth families, proper destruction of resonant tori, excitation of elliptic normal modes, reversible context 2
Citation: Sevryuk M. B., Families of Invariant Tori in KAM Theory: Interplay of Integer Characteristics, Regular and Chaotic Dynamics, 2017, vol. 22, no. 6, pp. 603–615
DOI:10.1134/S156035471706003X
Paul T., Sauzin D.
Normalization in Lie Algebras via Mould Calculus and Applications
We establish Écalle's mould calculus in an abstract Lie-theoretic setting and use it to solve a normalization problem, which covers several formal normal form problems in the theory of dynamical systems. The mould formalism allows us to reduce the Lie-theoretic problem to a mould equation, the solutions of which are remarkably explicit and can be fully described by means of a gauge transformation group. The dynamical applications include the construction of Poincaré–Dulac formal normal forms for a vector field around an equilibrium point, a formal infinite-order multiphase averaging procedure for vector fields with fast angular variables (Hamiltonian or not), or the construction of Birkhoff normal forms both in classical and quantum situations. As a by-product we obtain, in the case of harmonic oscillators, the convergence of the quantum Birkhoff form to the classical one, without any Diophantine hypothesis on the frequencies of the unperturbed Hamiltonians.
Keywords: mould calculus, normal forms, dynamical systems, quantum mechanics, semiclassical approximation
Citation: Paul T., Sauzin D., Normalization in Lie Algebras via Mould Calculus and Applications, Regular and Chaotic Dynamics, 2017, vol. 22, no. 6, pp. 616–649
de la Llave R.
Simple Proofs and Extensions of a Result of L. D. Pustylnikov on the Nonautonomous Siegel Theorem
We present simple proofs of a result of L.D. Pustylnikov extending to nonautonomous dynamics the Siegel theorem of linearization of analytic mappings. We show that if a sequence $f_n$ of analytic mappings of ${\mathbb C}^d$ has a common fixed point $f_n(0) = 0$, and the maps $f_n$ converge to a linear mapping $A_\infty$ so fast that $$ \sum_n \|f_n - A_\infty\|_{\mathbf{L}^\infty(B)} < \infty, $$ $$ A_\infty = \mathop{\rm diag}( e^{2 \pi i \omega_1}, \ldots, e^{2 \pi i \omega_d}) \qquad \omega = (\omega_1, \ldots, \omega_d) \in {\mathbb R}^d, $$ then $f_n$ is nonautonomously conjugate to the linearization. That is, there exists a sequence $h_n$ of analytic mappings fixing the origin satisfying \[ h_{n+1} \circ f_n = A_\infty h_{n}. \] The key point of the result is that the functions $h_n$ are defined in a large domain and they are bounded. We show that $\sum_n \|h_n - \mathop{\rm Id} \|_{\mathbf{L}^\infty(B)} < \infty$.
We also provide results when $f_n$ converges to a nonlinearizable mapping $f_\infty$ or to a nonelliptic linear mapping.
In the case that the mappings $f_n$ preserve a geometric structure (e.g., symplectic, volume, contact, Poisson, etc.), we show that the $h_n$ can be chosen so that they preserve the same geometric structure as the $f_n$.
We present five elementary proofs based on different methods and compare them. Notably, we consider the results in the light of scattering theory. We hope that including different methods can serve as an introduction to methods to study conjugacy equations.
Keywords: nonautonomous linearization, scattering theory, implicit function theorem, deformations
Citation: de la Llave R., Simple Proofs and Extensions of a Result of L. D. Pustylnikov on the Nonautonomous Siegel Theorem, Regular and Chaotic Dynamics, 2017, vol. 22, no. 6, pp. 650–676
Chenciner A.
Are Nonsymmetric Balanced Configurations of Four Equal Masses Virtual or Real?
Balanced configurations of $N$ point masses are the configurations which, in a Euclidean space of high enough dimension, i.e., up to $2(N - 1)$, admit a relative equilibrium motion under the Newtonian (or similar) attraction. Central configurations are balanced and it has been proved by Alain Albouy that central configurations of four equal masses necessarily possess a symmetry axis, from which followed a proof that the number of such configurations up to similarity is finite and explicitly describable. It is known that balanced configurations of three equal masses are exactly the isosceles triangles, but it is not known whether balanced configurations of four equal masses must have some symmetry. As balanced configurations come in families, it makes sense to look for possible branches of nonsymmetric balanced configurations bifurcating from the subset of symmetric ones. In the simpler case of a logarithmic potential, the subset of symmetric balanced configurations of four equal masses is easy to describe as well as the bifurcation locus, but there is a grain of salt: expressed in terms of the squared mutual distances, this locus lies almost completely outside the set of true configurations (i. e., generalizations of triangular inequalities are not satisfied) and hence could lead most of the time only to the bifurcation of a branch of virtual nonsymmetric balanced configurations. Nevertheless, a tiny piece of the bifurcation locus lies within the subset of real balanced configurations symmetric with respect to a line and hence has a chance to lead to the bifurcation of real nonsymmetric balanced configurations. This raises the question of the title, a question which, thanks to the explicit description given here, should be solvable by computer experts even in the Newtonian case. Another interesting question is about the possibility for a bifurcating branch of virtual nonsymmetric balanced configurations to come back to the domain of true configurations.
Keywords: balanced configuration, symmetry
Citation: Chenciner A., Are Nonsymmetric Balanced Configurations of Four Equal Masses Virtual or Real?, Regular and Chaotic Dynamics, 2017, vol. 22, no. 6, pp. 677–687
Montgomery R.
The Hyperbolic Plane, Three-Body Problems, and Mnëv's Universality Theorem
We show how to construct the hyperbolic plane with its geodesic flow as the reduction of a three-body problem whose potential is proportional to $I/\Delta^2$, where $I$ is the moment of inertia of the triangle whose vertices are the locations of the three bodies and $\Delta$ is its area. The reduction method follows [11]. Reduction by scaling is only possible because the potential is homogeneous of degree $-2$. In trying to extend the assertion of hyperbolicity to the analogous family of planar $N$-body problems with three-body interaction potentials we run into Mnëv's astounding universality theorem, which implies that the extended assertion is doomed to fail.
Keywords: Jacobi–Maupertuis metric, reduction, Mnev's Universality Theorem, three-body forces, Hyperbolic metrics
Citation: Montgomery R., The Hyperbolic Plane, Three-Body Problems, and Mnëv's Universality Theorem, Regular and Chaotic Dynamics, 2017, vol. 22, no. 6, pp. 688–699
Guillery N., Meiss J. D.
Diffusion and Drift in Volume-Preserving Maps
A nearly-integrable dynamical system has a natural formulation in terms of actions, $y$ (nearly constant), and angles, $x$ (nearly rigidly rotating with frequency $\Omega(y)$). We study angle-action maps that are close to symplectic and have a twist, the derivative of the frequency map, $D\Omega(y)$, that is positive-definite. When the map is symplectic, Nekhoroshev's theorem implies that the actions are confined for exponentially long times: the drift is exponentially small and numerically appears to be diffusive. We show that when the symplectic condition is relaxed, but the map is still volume-preserving, the actions can have a strong drift along resonance channels. Averaging theory is used to compute the drift for the case of rank-$r$ resonances. A comparison with computations for a generalized Froeschlé map in four-dimensions shows that this theory gives accurate results for the rank-one case.
Keywords: symplectic maps, Nekhoroshev's theorem, chaotic transport
Citation: Guillery N., Meiss J. D., Diffusion and Drift in Volume-Preserving Maps, Regular and Chaotic Dynamics, 2017, vol. 22, no. 6, pp. 700–720
Evripidou C., Kassotakis P., Vanhaecke P.
Integrable Deformations of the Bogoyavlenskij–Itoh Lotka–Volterra Systems
We construct a family of integrable deformations of the Bogoyavlenskij–Itoh systems and construct a Lax operator with spectral parameter for it. Our approach is based on the construction of a family of compatible Poisson structures for the undeformed systems, whose Casimirs are shown to yield a generating function for the integrals in involution of the deformed systems.We show how these deformations are related to the Veselov–Shabat systems.
Keywords: Integrable systems, deformations
Citation: Evripidou C., Kassotakis P., Vanhaecke P., Integrable Deformations of the Bogoyavlenskij–Itoh Lotka–Volterra Systems, Regular and Chaotic Dynamics, 2017, vol. 22, no. 6, pp. 721–739
Rahman A., Joshi Y., Blackmore D.
Sigma Map Dynamics and Bifurcations
Some interesting variants of walking droplet based discrete dynamical bifurcations arising from diffeomorphisms are analyzed in detail. A notable feature of these new bifurcations is that, like Smale horseshoes, they can be represented by simple geometric paradigms, which markedly simplify their analysis. The two-dimensional diffeomorphisms that produce these bifurcations are called sigma maps or double sigma maps for reasons that are made manifest in this investigation. Several examples are presented along with their dynamical simulations.
Keywords: Discrete dynamical systems, bifurcations, chaotic strange attractors, invariant sets, homoclinic and heteroclinic orbits, sigma maps, dynamical crises
Citation: Rahman A., Joshi Y., Blackmore D., Sigma Map Dynamics and Bifurcations, Regular and Chaotic Dynamics, 2017, vol. 22, no. 6, pp. 740–749
Agrachev A., Beschastnyi I.
Symplectic Geometry of Constrained Optimization
In this paper, we discuss geometric structures related to the Lagrange multipliers rule. The practical goal is to explain how to compute or estimate the Morse index of the second variation. Symplectic geometry allows one to effectively do it even for very degenerate problems with complicated constraints. The main geometric and analytic tool is an appropriately rearranged Maslov index. We try to emphasize the geometric framework and omit analytic routine. Proofs are often replaced with informal explanations, but a well-trained mathematician will easily rewrite them in a conventional way. We believe that Vladimir Arnold would approve of such an attitude.
Keywords: optimal control, second variation, Lagrangian Grassmanian, Maslov index
Citation: Agrachev A., Beschastnyi I., Symplectic Geometry of Constrained Optimization, Regular and Chaotic Dynamics, 2017, vol. 22, no. 6, pp. 750–770 | CommonCrawl |
Effects of sesamin on primary human synovial fibroblasts and SW982 cell line induced by tumor necrosis factor-alpha as a synovitis-like model
Manatsanan Khansai1,
Thanyaluck Phitak1,
Jeerawan Klangjorhor1,
Sasimol Udomrak1,
Kanda Fanhchaksai1,
Peraphan Pothacharoen1 &
Prachya Kongtawelert1
Rheumatoid arthritis (RA) is an autoimmune disease that causes chronic synovitis, cartilage degradation and bone deformities. Synovitis is the term for inflammation of the synovial membrane, an early stage of RA. The pathogenesis of the disease occurs through cytokine induction. The major cytokine that increases the severity of RA is TNF-α. Thus, inhibition of the TNF-α cascade is an effective way to diminish the progression of the disease. We are interested in investigating the difference between primary human synovial fibroblast (hSF) cells and SW982 as synovitis models induced by TNF-α and in monitoring their responses to sesamin as an anti-inflammatory phytochemical.
The designed experiments were performed in hSF cells or the SW982 cell line treated with 10 ng/ml TNF-α with or without 0.25, 0.5 or 1 μM sesamin. Subsequently, pro-inflammatory cytokine genes and proteins were measured in parallel with a study of the associated signal transduction involved in inflammatory processes, including the NF-κB and MAPK pathways.
The results demonstrated that although hSF and SW982 cells responded to TNF-α induction in the same fashion, they reacted at different levels. TNF-α could induce IL-6, IL-8 and IL-1β in both cell types, but the levels in SW982 cells were much higher than in hSF cells. This characteristic was due to the different induction of MAPKs in each cell type. Both cell types reacted to sesamin in almost the same fashion. However, hSF cells were more sensitive to sesamin than SW982 cells in terms of the anti-RA effect.
The responses of TNF-α-induced hSF and SW982 were different at the signal transduction level. However, the two cell types showed almost the same reaction to sesamin treatment in terms of the end point of the response.
Rheumatoid arthritis (RA) is an autoimmune disease related to chronic joint inflammation [1]. The origin of the disease remains a mystery; however, the immune system is known to mediate the progression of diseased joints in RA [2]. The progression of RA begins with the inflammation of the synovial membrane around the affected joint (synovitis) caused by the continual immune response of many types of immune cells [3]. Therefore, the affected joint is surrounded by abundant cytokines and chemokines produced by several immune cell types. The dominant cytokine that plays a critical role in RA is tumor necrosis factor-alpha (TNF-α) [4, 5].
Previous studies demonstrated the multiple roles of TNF-α in RA progression. Remarkably, TNF-α can induce the production of inflammatory cytokines and chemokines such as IL-1β, IL-6, IL-8 and itself in synovial fibroblasts, which can increase the severity of the disease [6, 7]. TNF-α is a potent cytokine that mediates diverse effects in various cell types [8]. It is chiefly produced by monocytes and macrophages but also by B cells, T cells and fibroblasts [8]. The best-known function of TNF-α is as a mediator involved in inflammatory processes that cause RA progression [4, 5, 9]. Consequently, the accumulation of these pro-inflammatory cytokines in joints with RA can also stimulate the production of degrading enzymes, causing severe cartilage destruction [10]. TNF-α stimulation also causes nuclear factor-κB (NF-κB) and mitogen-activated protein kinase (MAPK) to play dominant roles in the progression of RA [6, 11].
The NF-κB signalling pathway has long been characterized as a pro-inflammatory signalling pathway, and the activation of NF-κB is caused by pro-inflammatory cytokines such as IL-1 and TNF-α [12]. TNF-α triggers NF-κB signalling via the TNF-α receptor located on the cell membrane. Consequently, the activation of an IκB kinase (IKK) is initiated. The IKK activation stimulates the phosphorylation of IκB at specific amino-terminal serine residues. This phosphorylation is followed by ubiquitination and degradation of the phosphorylated-IκB by the proteasome, which in turn causes the release of NF-κB dimers (p50/65) from the cytoplasmic NF-κB–IκB complex and allows them to translocate to the cell nucleus. Thereby, NF-κB binds to NF-κB enhancer elements of target genes that turn on the gene expression of pro-inflammatory cytokines, chemokines, growth factors, adhesion molecules and inducible enzymes such as cyclooxygenase-2 (COX-2) and inducible nitric oxide synthase (iNOS) [13].
In MAPK signalling, p38 MAPK (p38 mitogen-activated protein kinase), ERKs (extracellular signal-regulated kinases) and SAPK/JNK (stress-activated protein kinase/c-Jun NH(2)-terminal kinase) are involved in the TNF-α induction pathway [14,15,16]. The p38 MAPK pathway has been reported to be involved in the TNF-α-induced inflammatory response in synovial fibroblasts [6]. The activation of p38 MAPK allows the production of pro-inflammatory cytokines including IL-1β, TNF-α and IL-6 [6, 17]. ERKs have been reported to be activated by IL-1, TNF-α and fibroblast growth factor (FGF) in mononuclear cell infiltrates and synovial fibroblasts in synovial tissue from RA patients [18]. As ERKs are known to participate in the regulation of IL-6 and TNF-α production, there is evidence suggesting a possible role of ERKs in joint damage associated with pro-inflammatory cytokines [18]. Additionally, ERK signalling could also play a role in RA by promoting pannus formation in the affected joint [18]. The SAPK/JNK MAPK signalling cascade contributes to RA by modulating cellular responses to various pro-inflammatory cytokines, including NF-κB activation, MMP gene expression, and cell survival and apoptosis [19]; these events in turn affect the progression of RA.
The use of human cells was established to study the mechanism of RA and possible therapeutic approaches [20]. Thus, primary human fibroblast-like synoviocytes isolated from RA patients have been used to study the effects of a variety of drugs and phytochemicals [21,22,23]. However, there are some difficulties in using RA-derived synovial fibroblasts. They have a limited replicable lifespan and eventually enter a state of senescence, they produce a broad range of results due to the individual responses of each patient sample, and it is difficult to routinely obtain RA-derived synovial tissue samples [24, 25]. Thus, researchers have tried to use cell lines instead of primary synovial cells from patients. The best-known model used to study synovitis in RA is a human synovial sarcoma cell line (SW982) [24, 25]. The SW982 cell line has been used to examine the effects of anti-inflammatory drugs such as dexamethasone and fluvastatin in in vitro experiments [24, 25]. However, the SW982 cell line still has certain properties that are of concern for its use as an alternative cell line instead of primary human synovial fibroblast (hSF) cells from RA patients [25]. Specifically, SW982 has a self-renewal ability that is different from the behaviour of normal or RA synovial fibroblasts [26].
Sesamin is a major active compound found in sesame seeds (Sesamum indicum Linn.) [27]. It shows the interesting property of being associated with anti-inflammatory effects in many studies [27,28,29]. Previous studies showed that diets supplemented with sesamin decreased the serum levels of IL-1β and IL-6 in mice after lipopolysaccharide (LPS) exposure [27]. Other data suggested that sesamin has the ability to suppress the NF-κB and p38 MAPK pathways, which are the major pathways that control cytokine production in LPS-induced inflammation in murine microglia cells [30]. Sesamin also efficiently relieves pathological progression in the model of IL-1β-induced osteoarthritis (OA) [29]. Moreover, our previous study demonstrated a protective effect of sesamin against a cartilage degradation model induced by TNF-α and OSM [31]. On the strength of this evidence, it is possible that sesamin also inhibits cytokine production in inflammatory processes in synovitis caused by RA progression.
In this study, we aim to investigate and clarify the responses of two different RA models, TNF-α-induced primary human synovial fibroblast (hSF) cells and the SW982 cell line, to sesamin treatment. The effects of sesamin on both models were examined by investigating pro-inflammatory gene expression, including IL-1β, IL-6, IL-8 and TNF-α. The release of IL-6 and IL-8 was measured as a readout of pro-inflammatory cytokine production. Furthermore, the NF-κB and MAPK signalling pathways were studied as the signalling pathways that regulate inflammatory processes.
Chemicals and supplements were purchased from the following suppliers: cell culture supplements such as Dulbecco's Modified Eagle's Medium (DMEM), penicillin, streptomycin and 0.25% Trypsin EDTA were purchased from Life Technologies (Burlington, Ontario, Canada). Recombinant Human TNF-α was purchased from Peprotech (Rocky Hill, USA). Sesamin was extracted from sesame seeds (Sesamum indicum Linn.) that were harvested from Lampang Province of Thailand. The voucher specimens (BKF No. 138181) were submitted to the National Park, Wildlife and Plant Conservation Department, Ministry of Natural Resources and Environment, Bangkok, Thailand. The processes were administered by Assoc. Prof. Dr. Wilart Poompimol. The chemical structure of the sesamin extract was analysed by NMR/MS (MW 354.35) as described in our previous publication. The RNA Isolation Kit was obtained from GE Health Science (New York, USA). The Tetro cDNA Synthesis Kit was purchased from BIOLINE (Taunton, USA). SsoFast™ EvaGreen Supermix was purchased from Bio-Rad (Bio-Rad Laboratories (Singapore) Pte. Ltd.). A real-time PCR machine was purchased from Bio-Rad (Bio-Rad Laboratories (Singapore) Pte. Ltd.). The MILLIPLEX MAP Human Cytokine, Chemokine and Immuno Cell Multiplex Assays Kit was obtained from Merck Millipore (Merck KGaA, Darmstadt, Germany). Anti-human β-actin, anti-IκB, anti-phospho IκB, anti-p65, anti-phospho p65, anti-SAPK/JNK, anti-phospho SAPK/JNK, anti-p38, anti-phospho p38, anti-p44/42, anti-phospho p44/42, goat anti-rabbit IgG conjugated HRP and horse anti-mouse IgG conjugated HRP were obtained from Cell Signaling Technology (Danvers, MA, USA). Bradford reagent was obtained from Bio-Rad (Bio-Rad Laboratories (Singapore) Pte. Ltd.). Nitrocellulose membranes were purchased from Amersham (Hybond-C Super, Amersham Pharmacia Biotech). A semi-dry blot machine was purchased from Bio-Rad (Bio-Rad Laboratories (Singapore) Pte. Ltd.). The SuperSignal West Femto Maximum Sensitivity Substrate Kit and Restore™ plus Western blot stripping buffer were purchased from Thermo Scientific (Thermo Fisher, Waltham, Massachusetts, USA). A gel documentary system was purchased from Bio-Rad (Bio-Rad Laboratories (Singapore) Pte. Ltd.).
Primary human synovial fibroblast (hSF) isolation, culture and treatment
Primary human synovial fibroblast (hSF) cells were isolated by a method previously described for obtaining tissue-derived fibroblast-like synovial cells [32]. Synovial tissue was obtained from knee joints of patients undergoing joint removal surgery (the ethics approval code was ORT-11-09-16A-14.). The synovial tissue was minced in a tissue culture dish with Dulbecco's Modified Eagle Medium (DMEM) containing 200 units/ml penicillin, 200 mg/ml streptomycin and 50 μg/ml gentamicin and supplemented with 20% foetal calf serum. The minced tissue was maintained in a humidified incubator with 5% CO2 at 37 °C. After 4 days of culture, the tissue was taken out, and the adhered cells were washed with phosphate buffered saline (PBS). Cells were maintained in the growth medium at 5% CO2 and 37 °C. The cells from passages 3 through 6 were used in this experiment.
SW982 synovial sarcoma cell line culture and treatment
SW982 was obtained from ATCC® number HTB-93 and was authenticated by DiagCor Bioscience Incorporation Limited using the Promega PowerPlex® 18D system and analysed using an ABI 3130 Genetic Analyzer. The cells were cultured in a sealed 25 ml T–culture flask with Leibovitz-15 (L-15) medium containing 200 units/ml penicillin, 200 mg/ml streptomycin and 50 μg/ml gentamicin and supplemented with 10% foetal calf serum in a 37 °C humidified incubator.
Cytotoxicity test
SW982 or hSF cells were seeded at a density of 1 × 10^4 cells/well in 96-well culture plates for 24 h. After 24 h, the cells were treated with TNF-α (0.625–20 ng/ml) or sesamin (0.125–5 μg/ml) alone or co-treated with TNF-α (10 ng/ml) and sesamin (0.25, 0.5, and 1 μM) for 48 h. Cell viability was measured by the MTT assay. Absorbance was measured at 540 nm. The percentage of cell survival was obtained using the formula below:
$$ \%\,\text{cell survival} = \frac{100 \times \mathrm{OD}_{540}\ \text{of treated cells}}{\mathrm{OD}_{540}\ \text{of control cells}} $$
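As a concrete illustration, the calculation is a simple ratio of absorbances; the OD values in this sketch are made up:

```python
def percent_cell_survival(od540_treated: float, od540_control: float) -> float:
    """Percent cell survival from MTT absorbance readings at 540 nm."""
    return 100.0 * od540_treated / od540_control

# Hypothetical plate readings: treated wells vs. untreated control wells.
print(percent_cell_survival(od540_treated=0.82, od540_control=0.90))  # ~91.1
```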
Real-time polymerase chain reaction (Real-Time PCR) assay
SW982 or hSF cells were cultured in a 25 ml T–culture flask (sealed flasks were used for SW982) until they reached 80% confluence. The cells were cultured in serum-free medium (L-15 for SW982, DMEM for hSF) for 24 h. The effects of sesamin on inflammation were investigated by treating the serum-starved cells with or without 10 ng/ml human recombinant TNF-α and 0.25, 0.5 or 1 μM sesamin for 4 h before analysis. The total RNA was isolated by using RNA Isolation Reagent (GE Health Science) according to the manufacturer's instructions. One microgram of total RNA was used for reverse transcription to produce cDNA using the Tetro cDNA Synthesis Kit. The transcribed cDNAs were mixed with SsoFast™ EvaGreen Supermix and the level of mRNA expression was evaluated using a Chromo4 real-time PCR detection system. The human-specific primer sequences were as follows: GAPDH, F: 5'GAAGGTGAAGGTCGGAGTC3' and R: 5'GAAGATGGTGATGGGATTTC3'; IL-1β, F: 5'AAACAGATGAAGTGCTCCTTCCAGG3' and R: 5'TGGAGAACACCACTTGTTGCTCCA3'; IL-6, F: 5'GGTACATCCTCGACGGCATCT3' and R: 5'GTGCCTCTTTGCTGCTTTCAC3'; IL-8, F: 5'CTCTCTTGGCAGCCTTCC3' and R: 5'CTCAATCACTCTCAGTTCTTTG3'; and TNF-α, F: 5'CCCCAGGGACCTCTCTCTAATC3' and R: 5'GGTTTGCTACAACATGGGCTACA3'. The data were normalized with respect to the constitutive gene GAPDH and analysed quantitatively using the 2^(−ΔΔCT) method [33].
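For readers unfamiliar with the 2^(−ΔΔCT) method, the fold-change computation reduces to a few lines. In this sketch the Ct values are hypothetical; GAPDH is the reference gene, as in this study:

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # normalize to GAPDH
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values for IL-6 after TNF-alpha induction:
print(fold_change(22.1, 17.0, 26.3, 17.1))   # about a 17-fold induction
```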
Immunological multiplex assays
SW982 or hSF cells were cultured in 25 ml T–culture flasks (sealed flasks were used for SW982) until they reached 80% confluence. The cells were maintained in serum-free L-15 medium (for SW982) or DMEM (for hSF) for 24 h prior to treatment with or without 10 ng/ml human recombinant TNF-α and 0.25, 0.5 or 1 μM of sesamin for 48 h [34]. After treatment, the cell culture supernatant was collected and its IL-6, IL-8 and IL-1β levels measured by MILLIPLEX MAP Human Cytokine, Chemokine and Immuno Cell Multiplex Assays.
Western blot analysis
Both cell types were cultured in 25 ml T–culture flasks (sealed flasks were used for SW982) until they reached 80% confluence. The culture medium was then replaced with serum-free L-15 medium (for SW982) or DMEM (for hSF) for 24 h prior to pre-treatment with serum-free L-15 or DMEM containing 0.25, 0.5 or 1 μM sesamin for 2 h. Next, human recombinant TNF-α (final concentration = 10 ng/ml) was added to each flask, and the cell lysate was collected at several time points (0, 5, 10, 15 and 30 min). Cell lysate was harvested by scraping with 200 μl ice-cold RIPA buffer containing protease inhibitor and phosphatase inhibitor. The protein concentrations of the samples were determined using the Bradford protein assay. The protein concentration was adjusted to equal amounts before loading on SDS-PAGE (13% separating gel, 5% stacking gel). The protein samples were electrophoretically separated and transferred to nitrocellulose membranes by a semi-dry blot system. The membranes were then blocked with 5% (W/V) non-fat dry milk in Tris-buffered saline with 0.1% Tween 20 (TBS-T) for 1 h at room temperature. Then, the membranes were washed with TBS-T prior to being incubated overnight at 4 °C with primary antibodies against human β-actin or IκB, phospho-IκB, p65 and phospho-p65 to prepare samples for studying NF-κB signal transduction (1:1000 in TBS-T). The samples prepared for studying MAPK signal transduction (1:1000 in TBS-T) were incubated with SAPK/JNK, phospho-SAPK/JNK, p38, phospho-p38, p44/42 and phospho-p44/42 antibodies overnight at 4 °C. Next, the membranes were washed 5 times for 5 min with TBS-T prior to being incubated with secondary antibodies conjugated with horseradish peroxidase (1:1000 in TBS-T) for 1 h at room temperature. The resulting blots were washed 5 times for 5 min with TBS-T before visualization using the SuperSignal West Femto Maximum Sensitivity Substrate Kit to obtain an enhanced chemiluminescence signal. The visualized results were recorded using a gel documentary system.
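Phosphorylation in Figs. 4 and 5 is quantified as the band density of each phospho-form relative to the corresponding total form. A minimal sketch of that normalization, with hypothetical densitometry values, is:

```python
import numpy as np

# Hypothetical band densities at 0, 5, 10, 15 and 30 min after TNF-alpha.
phospho_p65 = np.array([0.10, 0.85, 0.92, 0.70, 0.30])
total_p65   = np.array([1.00, 0.98, 1.02, 0.99, 1.01])

relative_phosphorylation = phospho_p65 / total_p65
print(relative_phosphorylation.round(2))
```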
Statistical analysis
The data were expressed as the mean ± SEM from triplicate samples of three independent experiments. One-way ANOVA was used to assess the differences between conditions. Significance was defined as p < 0.05.
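Differences between three or more groups of this kind are a textbook use of one-way ANOVA; with SciPy the test is a one-liner (the triplicate values below are invented for illustration only):

```python
from scipy.stats import f_oneway

control     = [102.5, 98.3, 100.9]     # e.g., IL-6 release, pg/ml
tnf         = [162.0, 170.4, 158.8]
tnf_sesamin = [131.2, 140.5, 127.9]

f_stat, p_value = f_oneway(control, tnf, tnf_sesamin)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # significant if p < 0.05
```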
hSF and SW982 cells showed different responses in terms of TNF-α-induced pro-inflammatory cytokine mRNA expression, and sesamin showed anti-inflammatory effects by suppressing pro-inflammatory cytokine and chemokine gene expression in both models
We optimized the concentrations of TNF-α and sesamin that were used in the experiments, and cell viability was determined using the MTT assay, as described previously. SW982 cells and hSF cells were exposed to TNF-α concentrations ranging from 0.625 to 20 ng/ml (2-fold dilutions), sesamin concentrations ranging from 0.125 to 5 μM, or a combination of 10 ng/ml TNF-α with 0.25, 0.5 or 1 μM sesamin for 48 h. Cells treated with 40 mM H2O2 were used as a toxic control. The results suggested that cell viability was not affected compared to that of the control when hSF or SW982 cells were treated with TNF-α, sesamin or both (Fig. 1). We also confirmed the nontoxicity of all treatments used in this study by LDH assay; the result is presented in Additional file 1.
Cell viability testing of hSF and SW982 cells. Cell viability testing was performed by the MTT assay as described in the Materials and Methods section. a Cell viability results for hSF cells under TNF-α alone, sesamin alone or combinations of TNF-α and sesamin. b Cell viability results for SW982 cells under TNF-α alone, sesamin alone or combinations of TNF-α and sesamin. Values are presented as the mean ± SEM (n = 3)
The changes in pro-inflammatory cytokine and chemokine gene expression, including IL-1β, IL-6, TNF-α and IL-8, in hSF and SW982 cells after treatment with 10 ng/ml TNF-α were investigated using real-time PCR. When both cell types were exposed to TNF-α, hSF exhibited significantly increased levels of IL-6, IL-8, IL-1β and TNF-α mRNA expression compared to those of its own control (Fig. 2). However, SW982 exhibited significantly increased levels of IL-6, IL-8, and IL-1β but not TNF-α mRNA expression (Fig. 2g).
Fold induction of IL-6, IL-8, IL-1β and TNF-α mRNA expression in each cell type when treated with TNF-α or sesamin or co-treated with TNF-α and various concentrations of sesamin compared to respective controls. hSF or SW982 cells were treated and the gene expression profiles measured as described in the Materials and Methods section. a, b Fold induction of IL-6 gene expression in hSF and SW982 cells, respectively. c, d Fold induction of IL-8 gene expression in hSF and SW982 cells, respectively. e, f Fold induction of IL-1β gene expression in hSF and SW982 cells, respectively. g, h Fold induction of TNF-α gene expression in hSF and SW982 cells, respectively. Values are presented as the mean ± SEM (n = 3). #, * = p < 0.05; ##, ** = < 0.01 versus normal control (#) or TNF-α treatment (*) by one-way ANOVA
The mRNA expression experiment demonstrated that sesamin could suppress the mRNA expression of pro-inflammatory cytokines that were induced by TNF-α (Fig. 2). Sesamin concentrations of 0.5 and 1 μM significantly reduced IL-6, IL-8 (only at 1 μM) and IL-1β mRNA expression in hSF cells, while TNF-α mRNA expression was not decreased compared to that of the induction control (Fig. 2a, c, e, and g). In SW982, sesamin could significantly reduce IL-6 and IL-1β gene expression, similar to hSF, but could not reduce IL-8 and TNF-α expression (Fig. 2b, d, f, and h). However, sesamin alone did not affect the expression of any of these pro-inflammatory cytokine and chemokine genes in either cell type (Fig. 2).
The levels of pro-inflammatory cytokine and chemokine production induced by TNF-α in hSF and SW982 were different, and sesamin showed an anti-inflammatory effect by suppressing pro-inflammatory cytokine and chemokine production in both models
The levels of secretion of pro-inflammatory cytokines and chemokines including IL-1β, IL-6 and IL-8 from hSF and SW982 were determined using MILLIPLEX MAP Human Cytokine, Chemokine and Immuno Cell Multiplex Assays. Although IL-1β production was measured for both cell types, this cytokine was not detectable with this technique because its concentration fell below the lower detection limit of the assay. The level of TNF-α released was not determined due to the presence of added TNF-α in the treatment condition. We determined a baseline level of IL-6 and IL-8 release from hSF and SW982 cells (Fig. 3). The results were consistent with the gene expression results. After 48 h of cultivation, TNF-α treatment increased the IL-6 and IL-8 production compared to that of the control, as expected (Fig. 3). Moreover, when exposed to TNF-α, hSF and SW982 cells showed very different increases in the release of both IL-6 and IL-8 into the culture medium (Fig. 3). After induction, the IL-6 and IL-8 production levels of hSF cells were increased by approximately 60- and 100-fold vs. the control, while SW982 responded by increasing both IL-6 and IL-8 release by only approximately 1.3-fold vs. the control (Fig. 3).
Levels of IL-6 and IL-8 released in the culture medium of hSF and SW982 cells. Both cell types were treated and assayed as described above. The levels of IL-6 and IL-8 released in the cultured medium were analysed by Luminex assays. a, b Levels of IL-6 production from hSF and SW982 cells, respectively. c, d Amounts of IL-8 released from hSF and SW982 cells, respectively. Values are expressed as the mean ± SEM (n = 3). #, * = p < 0.05; ##, ** = p < 0.01 versus normal control (#) or TNF-α treatment (*) by one-way ANOVA
At the protein level, the results showed a significant reduction in IL-6 release in the presence of 1 μM sesamin in the co-treatment condition for SW982 only (Fig. 3b). At the other concentrations, the presence of sesamin in the co-treatment conditions produced only a slight decrease in both cell types (Fig. 3). Additionally, treatment with sesamin alone did not affect the production of any of the relevant pro-inflammatory cytokines and chemokines in either model (Fig. 3). Interestingly, although hSF and SW982 cells exhibited different levels of cytokine and chemokine response to TNF-α activation, they responded to sesamin treatment in almost the same fashion.
TNF-α activated a different signalling pathway in hSF from that in SW982, and sesamin suppressed the TNF-α-induced inflammatory response by interfering with MAPK signal transduction
At the molecular level, we examined the NF-κB and MAPK signal transduction triggered by TNF-α in both cell types by Western blot analysis. To investigate the NF-κB signalling pathway, we monitored the changes in phosphorylation of IκB and p65 at various time points. The Western blot results showed that TNF-α induced pro-inflammatory cytokines and chemokines via NF-κB signalling in both cell types (Fig. 4a). The phosphorylation of IκB was significantly initiated at 5 min after both cell types were exposed to TNF-α (Figs. 4a, b). Furthermore, when comparing the phosphorylation strength of IκB using the band density relative to the corresponding total-form band density, we found that in hSF cells the phosphorylation of IκB was significantly increased at 5 min, while in SW982 cells it was slightly increased at 5 and 10 min compared to the non-stimulated stage (Fig. 4b). Significant phosphorylation of p65 in hSF and SW982 cells occurred at approximately 5 to 15 min (Fig. 4b). These data indicate the same type of NF-κB activation in response to TNF-α induction in both cell types.
Western blot analysis of NF-κB signal transduction in hSF and SW982 cells. NF-κB signalling was observed at 0, 5, 10, 15 and 30 min after the addition of 10 ng/ml TNF-α to hSF or SW982 cells pre-treated with 1 μM sesamin as described in the Materials and Methods section. a Western blot results of NF-κB signalling in hSF and SW982 under TNF-α-induced conditions with or without sesamin treatment. b Graphs of phosphorylation band densities of IκB and p65 relative to the total form of each cell type. Values are presented as the mean ± SEM (n = 3). #, * = p < 0.05; ##, ** = p < 0.01 versus control 0 min (#) or TNF-α treatment at various time points (*) by one-way ANOVA
Treatment with 1 μM sesamin in parallel with exposure to 10 ng/ml TNF-α demonstrated that sesamin had no effect on the phosphorylation of IκB in hSF cells, while in SW982 cells, the phosphorylation of IκB showed a slight decrease in induction at 5 and 10 min (Fig. 4). The phosphorylation of p65 in hSF and SW982 cells also showed similar results (Fig. 4b). The phosphorylation of p65 was not affected by the presence of sesamin (Fig. 4b).
To study the triggering of the MAPK signalling pathway by TNF-α in hSF and SW982 cells, we investigated the changes in the phosphorylation of p38, p44/42 (ERK1/2) and SAPK/JNK, as in the NF-κB study (Fig. 5). It is noteworthy that this monitoring demonstrated a clear difference in MAPK induction by TNF-α between hSF and SW982 cells. The phosphorylation of p38 after induction in hSF demonstrated a significant increase from 5 to 15 min, while the phosphorylation of p44/42 exhibited a significant increase throughout the experiment (Fig. 5). The results of SAPK/JNK showed a significant increase in phosphorylation at 10 to 30 min after cytokine induction. Meanwhile, TNF-α-induced SW982 showed effects on p38 and p44/42 but not SAPK/JNK (Fig. 5). These data indicate different forms of MAPK activation in response to TNF-α activation in the two cell types.
Western blot analysis of MAPK signal transduction in hSF and SW982 cells. MAPK signalling was observed at 0, 5, 10, 15 and 30 min after the addition of 10 ng/ml TNF-α to hSF or SW982 cells pre-treated with 1 μM sesamin as described in the Materials and Methods section. a Western blot results of MAPK signalling in hSF (left panel) and SW982 (right panel) cells under TNF-α-induced conditions with or without sesamin treatment. b Graphs of phosphorylation band densities of p38, p44/42 (ERK) and SAPK/JNK relative to the total form in hSF (left panel) and SW982 (right panel) cells. Values are presented as the mean ± SEM (n = 3). #, * = p < 0.05; ##, ** = p < 0.01 versus control 0 min (#) or TNF-α treatment at various time points (*) by one-way ANOVA
In the investigation of MAPK signalling in hSF cells, the presence of sesamin in the induction system caused a significant reduction in phosphorylated p38 and p44/42 (especially p44/42) at 5 and 15 min of induction (Fig. 5b). For SAPK/JNK signal transduction, the data showed a different effect (Fig. 5b): sesamin continuously increased the phosphorylation of SAPK/JNK (Fig. 5b). These data indicated that sesamin could slightly reduce the TNF-α-induced activation of p38 and p44/42 ERK in hSF cells and shift signalling towards activation of the SAPK/JNK pathway. Interestingly, the data showed the reverse effect in SW982 cells (Fig. 5b): sesamin increased the activation of p38 and p44/42 ERK but decreased the activation of SAPK/JNK in this cell line (Fig. 5b).
Rheumatoid arthritis (RA) is a chronic disease that manifests as joint inflammation and leads to irreversible joint deformation in the late stages. The basis of this disease remains unclear. However, many reports demonstrate an association between an abnormal immune system and the functions of connective-tissue cells around the joint lesion. The main immune-cell-derived mediator that plays a key role in RA is TNF-α [9]. Many studies have evaluated the relationship of TNF-α with RA in several models [35, 36]. Animal models are the most commonly used in RA research [35, 36]. Although animal models can yield an overall understanding of RA pathogenesis, they also have serious limitations: because of discrepancies between human arthritis and animal models of arthritis, some responses differ [20]. Importantly, although many drugs have shown great potency in animal models, this advantage has not been borne out in clinical trials [20, 36]. Relatedly, a key feature of RA progression is inflammation of the synovial membrane around the RA joint [3]. Thus, in vitro models that use human cells were developed to study the mechanism of RA and possible therapeutic approaches [20]. Accordingly, primary human fibroblast-like synoviocytes from RA patients have been used to screen and study the effects of many drugs and phytochemicals [21,22,23]. Despite their advantages, primary human fibroblast-like synoviocytes present certain inconveniences; for instance, primary mammalian cells have a limited replicative lifespan and ultimately enter a state of senescence in which they remain metabolically active but fail to proliferate. The lack of reproducibility resulting from the individual response of each patient sample, together with the need to routinely acquire RA-derived synovial tissue samples, makes such studies difficult [24, 25]. Thus, researchers have tried to establish cell models instead of using primary synovial cells from patients. The candidate cell line SW982, obtained from the American Type Culture Collection, has been used in many models of RA [24, 25]. The SW982 cell line was established by A. Leibovitz in 1974 at the Scott and White Clinic, Temple, Texas. These cells were isolated from a surgical specimen of a biphasic synovial sarcoma from a 25-year-old female Caucasian patient [37]. Although the SW982 cell line has been widely used in research examining the mechanism of RA, the exact evidence and scientific rationale supporting the use of this cell line as an alternative to primary synovial fibroblasts are still unclear [24, 25].
In our experiment, we used naïve synovial fibroblast (hSF) cells isolated from patients undergoing joint removal who had neither OA nor RA. We used naïve cells because such tissue specimens are more accessible than RA-derived synovial tissue. In this study, sesamin treatment did not affect basal, non-stimulated inflammatory activity in either SW982 or hSF cells; this was observed at both the gene and protein expression levels. These findings suggest that sesamin has no effect on the ordinary cellular activity of either the SW982 cell line or hSF cells.
In the resting stage, hSF cells expressed a low level of inflammatory cytokine production, while SW982 cells released a very high level of inflammatory cytokines. These results confirm the difference between SW982 and hSF cells when used in an RA model under inflammatory conditions. We used TNF-α as a stimulant to induce acute inflammation in both hSF and SW982 cells to mimic the inflammation of RA progression. We found that both hSF and SW982 exhibited a similar response: under acute inflammatory conditions, both cell types responded by increasing their IL-6, IL-8, IL-1β and TNF-α mRNA levels. Similar results were also observed at the protein production level. However, the degree of response differed; the reaction of hSF was stronger than that of SW982, presumably because these primary cells had not previously encountered such a cytokine challenge. When sesamin was present in the induction system, we found that sesamin could significantly reduce the mRNA expression of IL-6, IL-8 and IL-1β, but not TNF-α, in hSF cells. In SW982 cells, we found significantly decreased expression of only IL-6 and IL-1β mRNA. However, a significant reduction in cytokine release in the presence of sesamin was only observed for IL-6 produced by SW982 cells, and only at the highest concentration of sesamin. These seemingly inconsistent results could be explained by the different incubation periods used in each experiment. When investigating mRNA expression, we incubated the cells with TNF-α and sesamin for 4 h, but when measuring cytokine release, we cultured the cells with the inducer and sesamin for 48 h. The different periods were chosen to capture appropriate time points: for cells cultivated with TNF-α, mRNA first reaches its maximum level at 4 h (according to the kinetics of the immune response) [38, 39], whereas incubation with TNF-α for 48 h is necessary for an appropriate accumulation of released cytokines [34]. In fact, during the 48-h period, many immune-related genes were also activated, and many kinds of cytokines and chemokines were produced and degraded to maintain homeostasis [9]. These phenomena also affected the amounts of cytokines that we measured. Moreover, the difference in cell passages used may also have affected the cellular response, which could explain why hSF showed a broader range of released cytokines than SW982.
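The relative mRNA levels discussed here were presumably quantified by the 2^(−ΔΔCT) method of Livak and Schmittgen (cited in the reference list). A minimal worked sketch with hypothetical Ct values:

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ΔΔCt) method (Livak & Schmittgen, 2001)."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    return 2 ** -(dct_treated - dct_control)

# Hypothetical Ct values for IL-6 against a housekeeping gene
fold = ddct_fold_change(ct_target_treated=22.0, ct_ref_treated=18.0,
                        ct_target_control=27.5, ct_ref_control=18.2)
print(f"IL-6 relative expression: {fold:.1f}-fold vs. control")
```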
We monitored the signal transduction triggered by TNF-α in both cell models; both NF-κB and MAPK signalling were examined. We found that both hSF and SW982 cells showed partial phosphorylation of p65 in the resting stage. This result was supported by the baseline protein levels of inflammatory cytokines found in the previous experiments. Moreover, in the first 5 min after exposure to TNF-α, hSF cells increased the phosphorylation of IκB and p65 to reach full activation; this reaction then declined over time. SW982 cells responded similarly to the hSF cells, but p65 activation was retained longer in SW982 cells. The NF-κB transcription factor is well known as a critical regulator of inflammation in RA; thus, blocking the NF-κB signalling pathway may be a potential therapeutic approach for RA. However, our study showed that sesamin could not reduce the activation of NF-κB in either cell type, although the phosphorylation of IκB differed slightly between the two models. The other signalling pathway we monitored was MAPK. We investigated the activation of p38, p44/42 (ERK1/2) and SAPK/JNK by examining their phosphorylation. According to our results, TNF-α could induce all three members of the MAPK pathway in hSF cells. However, in SW982 cells, TNF-α activated only the phosphorylation of p38 and p44/42, but not SAPK/JNK. SW982 exhibited a high degree of p44/42 phosphorylation even in the resting stage. The phosphorylation of p44/42 (ERK1/2) MAPK is involved in the regulation of various processes including cell cycle progression, transcription, cell proliferation, cell differentiation, cell metabolism, cell adhesion, cell migration and cell survival [40]. In this case, the fully activated phosphorylation of p44/42 in SW982 may be related to the immortal character of cancer cells. Thus, this difference in phosphorylation reflects an intrinsic difference between the cell line and primary cells. Our results indicated that, in the TNF-α-induced system, sesamin decreased the phosphorylation of p38 and p44/42 but increased the phosphorylation of SAPK/JNK in hSF cells. In contrast, sesamin significantly increased the phosphorylation of p38 and p44/42 but decreased the phosphorylation of SAPK/JNK in SW982 cells. These results demonstrate the different responses of hSF and SW982 to sesamin, an anti-inflammatory phytochemical. Overall, however, both cell types responded to sesamin in almost the same fashion, except that hSF cells seemed to be more sensitive to sesamin than SW982 cells.
Our study demonstrated the advantages and disadvantages of using hSF and SW982 as models of RA. Remarkably, hSF and SW982 had distinct inflammatory characteristics in terms of signal transduction and the gene expression changes involved in cytokine production. Many RA studies have chosen SW982 as a cell model because of its high proliferation rate, which appears similar to pannus formation in the severe stage of RA. However, the use of SW982 could be a concern because of the immortal character of these cells. The utility of hSF cells as an inflammation model for RA study may be improved by pre-incubating the cells with an appropriate concentration of TNF-α for a suitable period before the addition of any phytochemicals. The development of this method would exclude the unwanted properties of SW982 as an inflammatory cell model. The investigation of the effects of the phytochemical sesamin on both established models showed that they respond to sesamin in the same fashion, although the levels of mRNA and protein expression and the activation of intracellular signalling differed. Thus, both established models could be used as drug-screening models for RA treatment. Nevertheless, the underlying mechanism must still be investigated.
In this study, different mechanisms controlling the inflammatory response of TNF-α-induced hSF and SW982 cells were identified. Nevertheless, both models could be used to investigate the anti-RA properties of phytochemical agents: they showed almost the same response to sesamin at the gene and protein expression levels. However, the signal transduction response to sesamin treatment differed between the two cell types. Therefore, the underlying intracellular signalling should be taken into account when using SW982 as a model.
cDNA:
Complementary deoxyribonucleic acid
DMEM:
Dulbecco's Modified Eagle Medium
hSF:
Primary human synovial fibroblast
IKK:
Inhibitor of NF-κB kinases
IL-1β:
Interleukin-1 beta
IL-6:
Interleukin-6
IκB:
Inhibitor of NF-κB
L-15:
Leibovitz-15 medium
LPS:
Lipopolysaccharide
mRNA:
Messenger ribonucleic acid
NF-κB:
Nuclear factor-κB
NMR/MS:
Nuclear magnetic resonance spectroscopy and mass spectrometry
OA:
Osteoarthritis
OSM:
Oncostatin M
p38 MAPK:
p38 mitogen-activated protein kinase
p44/42 MAPK (ERK1/2):
Extracellular signal–regulated kinases (ERKs) or classical mitogen-activated protein kinase
PBS:
Phosphate-buffered saline
p-IκB:
Phosphorylated inhibitor of NF-κB
p-p38 MAPK:
Phosphorylated p38 mitogen-activated protein kinase
p-p44/42 MAPK (Erk1/2):
Phosphorylated extracellular signal–regulated kinases 1/2 (ERK1/2)
p-p65:
Phosphorylated p65
p-SAPK/JNK:
Phosphorylated stress-activated protein kinase/c-Jun NH(2)-terminal kinase
RA:
Rheumatoid arthritis
SAPK/JNK:
Stress-activated protein kinase/c-Jun NH(2)-terminal kinase
SDS-PAGE:
Sodium dodecyl sulfate-polyacrylamide gel electrophoresis
SW982:
Human synovial sarcoma cell line
TBS:
Tris-buffered saline
TBS-T:
Tris-buffered saline with 0.1% Tween 20
TNF-α:
Tumor necrosis factor-alpha
β-actin:
Beta-actin
McInnes IB, Schett G. Cytokines in the pathogenesis of rheumatoid arthritis. Nat Rev Immunol. 2007;7(6):429–42.
Malaviya AM. Cytokine network and its manipulation in rheumatoid arthritis. J Assoc Physicians India. 2006;54(Suppl):15–8.
Kirkham BW, Lassere MN, Edmonds JP, Juhasz KM, Bird PA, Lee CS, et al. Synovial membrane cytokine expression is predictive of joint damage progression in rheumatoid arthritis: a two-year prospective study (the DAMAGE study cohort). Arthritis Rheum. 2006;54(4):1122–31.
Brennan FM, McInnes IB. Evidence that cytokines play a role in rheumatoid arthritis. J Clin Invest. 2008;118(11):3537–45.
Williams RO, Feldmann M, Maini RN. Cartilage destruction and bone erosion in arthritis: the role of tumour necrosis factor alpha. Ann Rheum Dis. 2000;59(Suppl 1):i75–80.
Suzuki M, Tetsuka T, Yoshida S, Watanabe N, Kobayashi M, Matsui N, et al. The role of p38 mitogen-activated protein kinase in IL-6 and IL-8 production from the TNF-alpha- or IL-1beta-stimulated rheumatoid synovial fibroblasts. FEBS Lett. 2000;465(1):23–7.
Mor A, Abramson SB, Pillinger MH. The fibroblast-like synovial cell in rheumatoid arthritis: a key player in inflammation and joint destruction. Clin Immunol. 2005;115(2):118–28.
Choy EH, Panayi GS. Cytokine pathways and joint inflammation in rheumatoid arthritis. N Engl J Med. 2001;344(12):907–16.
Moelants EA, Mortier A, Van Damme J, Proost P. Regulation of TNF-alpha with a focus on rheumatoid arthritis. Immunol Cell Biol. 2013;91(6):393–401.
Bondeson J, Wainwright SD, Lauder S, Amos N, Hughes CE. The role of synovial macrophages and macrophage-produced cytokines in driving aggrecanases, matrix metalloproteinases, and other destructive and inflammatory responses in osteoarthritis. Arthritis Res Ther. 2006;8(6):R187.
Roman-Blas JA, Jimenez SA. NF-kappaB as a potential therapeutic target in osteoarthritis and rheumatoid arthritis. Osteoarthr Cartil. 2006;14(9):839–48.
Hanada T, Yoshimura A. Regulation of cytokine signaling and inflammation. Cytokine Growth Factor Rev. 2002;13(4–5):413–21.
Wajant H, Pfizenmaier K, Scheurich P. Tumor necrosis factor signaling. Cell Death Differ. 2003;10(1):45–65.
Marques-Fernandez F, Planells-Ferrer L, Gozzelino R, Galenkamp KM, Reix S, Llecha-Cano N, et al. TNFalpha induces survival through the FLIP-L-dependent activation of the MAPK/ERK pathway. Cell Death Dis. 2013;4:e493.
Nishina H, Wada T, Katada T. Physiological roles of SAPK/JNK signaling pathway. J Biochem. 2004;136(2):123–6.
Zarubin T, Han J. Activation and signaling of the p38 MAP kinase pathway. Cell Res. 2005;15(1):11–8.
Thalhamer T, McGrath MA, Harnett MM. MAPKs and their relevance to arthritis and inflammation. Rheumatology. 2008;47(4):409–14.
Malemud CJ. Intracellular signaling pathways in rheumatoid arthritis. J Clin Cell Immunol. 2013;4:160.
Nozaki T, Takahashi K, Ishii O, Endo S, Hioki K, Mori T, et al. Development of an ex vivo cellular model of rheumatoid arthritis: critical role of CD14-positive monocyte/macrophages in the development of pannus tissue. Arthritis Rheum. 2007;56(9):2875–85.
Yan C, Kong D, Ge D, Zhang Y, Zhang X, Su C, et al. Mitomycin C induces apoptosis in rheumatoid arthritis fibroblast-like synoviocytes via a mitochondrial-mediated pathway. Cell Physiol Biochem. 2015;35(3):1125–36.
Liu H, Yang Y, Cai X, Gao Y, Du J, Chen S. The effects of arctigenin on human rheumatoid arthritis fibroblast-like synoviocytes. Pharm Biol. 2015;53(8):1118–23.
Ahn JK, Kim S, Hwang J, Kim J, Lee YS, Koh EM, et al. Metabolomic elucidation of the effects of Curcumin on fibroblast-like Synoviocytes in rheumatoid arthritis. PLoS One. 2015;10(12):e0145539.
Yamazaki T, Yokoyama T, Akatsu H, Tukiyama T, Tokiwa T. Phenotypic characterization of a human synovial sarcoma cell line, SW982, and its response to dexamethasone. In Vitro Cell Dev Biol Anim. 2003;39(8–9):337–9.
Chang JH, Lee KJ, Kim SK, Yoo DH, Kang TY. Validity of SW982 synovial cell line for studying the drugs against rheumatoid arthritis in fluvastatin-induced apoptosis signaling model. Indian J Med Res. 2014;139(1):117–24.
Liu A, Feng B, Gu W, Cheng X, Tong T, Zhang H, et al. The CD133+ subpopulation of the SW982 human synovial sarcoma cell line exhibits cancer stem-like characteristics. Int J Oncol. 2013;42(4):1399–407.
Jeng KCG, Hou RCW. Sesamin and sesamolin: nature's therapeutic lignans. Curr Enzym Inhib. 2005;1(1):11–20.
Utsunomiya T, Shimada M, Rikimaru T, Hasegawa H, Yamashita Y, Hamatsu T, et al. Antioxidant and anti-inflammatory effects of a diet supplemented with sesamin on hepatic ischemia-reperfusion injury in rats. Hepato-Gastroenterology. 2003;50(53):1609–13.
Phitak T, Pothacharoen P, Settakorn J, Poompimol W, Caterson B, Kongtawelert P. Chondroprotective and anti-inflammatory effects of sesamin. Phytochemistry. 2012;80:77–88.
Jeng KC, Hou RC, Wang JC, Ping LI. Sesamin inhibits lipopolysaccharide-induced cytokine production by suppression of p38 mitogen-activated protein kinase and nuclear factor-kappaB. Immunol Lett. 2005;97(1):101–6.
Khansai M, Boonmaleerat K, Pothacharoen P, Phitak T, Kongtawelert P. Ex vivo model exhibits protective effects of sesamin against destruction of cartilage induced with a combination of tumor necrosis factor-alpha and oncostatin M. BMC Complement Altern Med. 2016;16:205.
Stebulis JA, Rossetti RG, Atez FJ, Zurier RB. Fibroblast-like synovial cells derived from synovial fluid. J Rheumatol. 2005;32(2):301–6.
Livak KJ, Schmittgen TD. Analysis of relative gene expression data using real-time quantitative PCR and the 2(−Delta Delta C(T)) method. Methods. 2001;25(4):402–8.
Matsuoka N, Eguchi K, Kawakami A, Tsuboi M, Kawabe Y, Aoyagi T, et al. Inhibitory effect of clarithromycin on costimulatory molecule expression and cytokine production by synovial fibroblast-like cells. Clin Exp Immunol. 1996;104(3):501–8.
Asquith DL, Miller AM, McInnes IB, Liew FY. Animal models of rheumatoid arthritis. Eur J Immunol. 2009;39(8):2040–4.
McNamee K, Williams R, Seed M. Animal models of rheumatoid arthritis: how informative are they? Eur J Pharmacol. 2015;759:278–86.
SW 982 [SW-982, SW982] (ATCC® HTB-93™). American Type Culture Collection.
Kumolosasi E, Salim E, Jantan I, Ahmad W. Kinetics of intracellular, extracellular and production of pro-inflammatory cytokines in lipopolysaccharide-stimulated human peripheral blood mononuclear cells. Trop J Pharm Res. 2014;13(4):536–43.
Jansky L, Reymanova P, Kopecky J. Dynamics of cytokine production in human peripheral blood mononuclear cells stimulated by LPS or infected by Borrelia. Physiol Res. 2003;52(6):593–8.
Roskoski R Jr. ERK1/2 MAP kinases: structure, function, and regulation. Pharmacol Res. 2012;66(2):105–43.
We would like to thank Assoc. Prof. Dr. Siriwan Ong-chai for providing the SW982 cell line and Assist. Prof. Dr. Dumnoensun Pruksakorn, M.D. for providing the synovial tissue.
This work was financially supported by the Royal Golden Jubilee Ph.D. Program (Grant No. PHD/0288/2550 to MK), the Graduate School of Chiang Mai University and the Thailand Excellence Center for Tissue Engineering and Stem Cells.
The datasets used and/or analysed during the current study are available from the corresponding author upon reasonable request.
Thailand Excellence Center for Tissue Engineering and Stem Cells, Department of Biochemistry, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
Manatsanan Khansai, Thanyaluck Phitak, Jeerawan Klangjorhor, Sasimol Udomrak, Kanda Fanhchaksai, Peraphan Pothacharoen & Prachya Kongtawelert
Study concept and design: MK, TP, PP, and PK. Data acquisition: MK, JK, SU, and KF. Analysis and verification of data: MK and TP. Drafting of the manuscript: MK. Critical revision of the manuscript for logically important content: TP, PP and PK. Study supervision: TP, PP and PK. All authors have read and approved the final manuscript.
Correspondence to Prachya Kongtawelert.
The primary cells used in this study were approved by Research Ethics Committee 1, Faculty of Medicine, Chiang Mai University. The ethics approval code was ORT-11-09-16A-14. We confirm that informed consent was obtained from the patients.
LDH released from hSF and SW982 cells. Cell viability testing was performed by LDH measurement in culture medium. a. LDH released from hSF cells under TNF-α alone, sesamin alone or combination of TNF-α and sesamin. b. LDH released from SW982 cells under TNF-α alone, sesamin alone or combination of TNF-α and sesamin. Values are presented as mean ± SEM (n = 3). (PNG 135 kb)
Khansai, M., Phitak, T., Klangjorhor, J. et al. Effects of sesamin on primary human synovial fibroblasts and SW982 cell line induced by tumor necrosis factor-alpha as a synovitis-like model. BMC Complement Altern Med 17, 532 (2017). https://doi.org/10.1186/s12906-017-2035-2
Sesamin
TNF-α
Pro-inflammatory cytokines
Food Science and Biotechnology
Korean Society of Food Science and Technology (KOSFOST)
The Food Science and Biotechnology (Food Sci. Biotechnol.; FSB) was launched in 1992 as the Food Biotechnology and changed to the present name in 1998. It is an international peer-reviewed journal published monthly by the Korean Society of Food Science and Technology (KoSFoST). The FSB journal covers: food chemistry/food component analysis; food microbiology and biotechnology; food processing and engineering; food hygiene and toxicology; biological activity and nutrition in foods; and sensory and consumer science. Manuscripts on consumer perception and sensory evaluation of processed foods are accepted only when they are relevant to laboratory research work. As a general rule, manuscripts dealing with the analysis and efficacy of extracts from natural resources prior to processing, or without any related food processing, may not be considered within the scope of the journal. The FSB journal does not consider work of only local interest or lacking significant scientific merit. The main scope of the journal is the pursuit of human health and wellness through constructive works and new findings in the food science and biotechnology field.
Submission: http://www.fsnb.or.kr/submission/ (indexed in KSCI, KCI, SCIE)
Some New Approaches to Consumer Acceptance Measurement as a Guide to Marketing
Lee, Hye-Seong;O'Mahony, Michael 863
The potential impact of the methods of sensory science on consumer testing and marketing is reviewed. Areas such as predicting purchase behavior, new approaches to scaling, and cross cultural effects are discussed. An example of the complexity of sensory measurement used to obtain consumer and marketing information is highlighted, using the simple paired preference test as an example.
Microbiological and Chemical Detection of Antibiotic Residues in Livestock and Seafood Products in the Korean Market
Park, Sung-Kug;Kim, Mee-Hye;Sho, You-Sub;Chung, So-Young;Hu, Soo-Jung;Lee, Jong-Ok;Hong, Moo-Ki;Kim, Myung-Chul;Kang, Ju-Seop;Jhee, Ok-Hwa 868
The microbiological and chemical identification of antibiotic residues was attempted for livestock and seafood products including pork (n=34), beef (n=34), chicken (n=32), flatfish (n=37), armorclad rockfish (n=36), and sea bream (n=27). The meat (n=100) and seafood (n=100) samples were collected from 9 markets in 5 major Korean cities. Antibiotic substances were identified from the classes of tetracyclines, macrolides, penicillins, aminoglycosides, polyethers, peptides, sulfonamides, quinolones, chloramphenicols, and novobiocins using a microbiological assay, the Charm II test, and high performance liquid chromatography (HPLC) with ultraviolet (UV) and fluorescence detectors. The results showed that 2 tetracyclines (oxytetracycline and tetracycline) and 3 quinolones (ciprofloxacin, norfloxacin, and enrofloxacin) were detected in 4 samples of flatfish among all 100 seafood samples tested. No antibiotic residues were detected in the 100 livestock product samples tested. The amounts (min-max, mg/kg) of the residual antibiotics were as follows: tetracycline 0.78-0.85, oxytetracycline 0.49-0.74, ciprofloxacin 0.09-0.83, norfloxacin 0.01-0.21, enrofloxacin 0.12-2.98. These data indicate that the total detection rate of antibiotics in livestock and seafood products was approximately 2%.
Chromatographically Purified Porphyran from Porphyra yezoensis Effectively Inhibits Proliferation of Human Cancer Cells
Kwon, Mi-Jin;Nam, Taek-Jeong 873
In this study, porphyran was isolated from the red seaweed Porphyra yezoensis and assessed in terms of in vitro anti-proliferative activity. Sequential anion-exchange and gel-filtration chromatography led to purification of 3 porphyrans of different molecular masses, which contained <50 μg/mL protein and >10 μg/mL porphyran. Crude porphyran inhibited cell growth in a dose-dependent manner (0-5 mg/mL). When HT-29 colon cancer cells and AGS gastric cancer cells were cultured with various concentrations of the purified porphyran, cancer cell growth was inhibited by 50% at a low concentration (5 or 10 μg/mL). Furthermore, the polysaccharide portion of the porphyran preparation, rather than the protein portion, is the most effective at inhibiting cancer cell proliferation via apoptosis, as indicated by increased caspase-3 activity. Our results indicate that purified porphyran has significant in vitro anti-proliferative activity (p<0.05).
Sea Tangle Supplementation Alters Intestinal Morphology in Streptozotocin-induced Diabetic Rats and Lowers Glucose Absorption
Lee, Kyeung-Soon;Seo, Jung-Sook;Choi, Young-Sun 879
This study examined whether dietary supplementation with sea tangle alters the intestinal morphology of streptozotocin-induced diabetic rats and affects the glucose absorption rate. Forty male Sprague-Dawley rats were divided into 2 groups and fed either a control (AIN76-based) diet or a sea tangle-supplemented diet. After 3 weeks, 10 rats in each group received an intramuscular injection of streptozotocin (45 mg/kg BW), and feeding was continued for 3 additional weeks. Dietary supplementation with sea tangle resulted in a lower fasting plasma glucose level compared with the control diet in diabetic rats. Scanning electron micrographs revealed serious damage to the jejunal villi of diabetic rats fed the control diet, whereas supplementation with sea tangle alleviated the damage. In a separate experiment, 20 male Sprague-Dawley rats were divided into 2 groups and fed either a control diet or a sea tangle-supplemented diet for 5 weeks, and fasted rats were subjected to in situ single-pass perfusion. The glucose absorption rate determined in the absence of digesta was decreased by 34% in the jejunum of rats fed a sea tangle diet compared with those fed a control diet. In conclusion, sea tangle supplementation lowered glucose absorption rate, altered intestinal morphology, and appeared to protect villi from damage caused by diabetes mellitus.
The Antioxidant Activity of Various Cultivars of Grape Skin Extract
Yoo, Mi-Ae;Kim, Jin-Sook;Chung, Hae-Kyung;Park, Won-Jong;Kang, Myung-Hwa 884
The aim of this study was to analyze the antioxidant properties of different cultivars of grape skin extract in an in vitro system. The extracts were prepared from eight grape cultivars: 'Campbell Early' (CE), 'Kyoho' (K), 'New Kyoho' (NK), 'Muscat of Alexandria' (MOA), 'Seibel' (S), 'Morgen Schow' (MS), 'Gold Finger' (GF), and 'Meru' (M). The total phenolic acid contents were highest in MS and K. Resveratrol content was high in NK (50.88 mg/1 g of coat), and quercetin content was significantly higher in K (0.68 mg/1 g of coat) than in the other grape species (0.21-0.44 mg/1 g of coat). The K and MS grape species, in which total phenol content was comparatively high (K: 24.15 μg/mL, MS: 25.52 μg/mL), also showed a high level of electron donating activity (K, 53%; MS, 59%). The hydrogen radical scavenging activity of M (50.36%) was significantly higher than that of the other grape species, including the S (50.21%), MS (49.43%), and K (49.06%) cultivars. Antioxidant activity varied depending on grape species, but overall it was highest in the MS and K cultivars.
Optimization of the Processing Parameters for Green Banana Chips and Packaging within Polyethylene Bags
Mitra, Pranabendu;Kim, Eun-Mi;Chang, Kyu-Seob 889
The demand for quality green banana chips is increasing in the world snacks market; therefore, the preparation of quality chips and their subsequent shelf life in packaging were evaluated in this study. Banana slices were fried in hot oil to the desired moisture content (2-3%) and oil content (40%) in chips at 3 different temperatures, and the impacts of different pretreatments were compared by sensory assessment. A linear relationship between time and temperature was used to achieve the optimal processing conditions. Banana slices fried at the lower temperature of 145°C took longer to reach the desired chip qualities, but gave the best results in terms of color and texture. Blanching was the most effective pre-treatment for retaining the light yellow color in finished chips. For extending the shelf life of the chips, moisture-proof packaging in double layer high density polyethylene was more effective than single layer low density polyethylene.
Effects of Cell Cultured Acanthopanax senticosus Extract Supplementation and Swimming Exercise on Lipid and Carnitine Profiles in C57BL/6J Mice Fed a High Fat Diet
Park, Jeong-Eun;Soh, Ju-Ryoun;Rho, Jeong-Ok;Cha, Youn-Soo 894
This study investigated the effects of cell cultured Acanthopanax senticosus extract (ASE) supplementation and swimming exercise on lipid profiles and carnitine concentrations in C57BL/6J mice fed high fat diets. Male C57BL/6J mice (n=50), aged 4 weeks, were divided into 5 groups based on exercise and/or ASE supplementation (0.5 g/kg of body weight): normal diet (N-C), high fat diet (H-C), high fat diet non-supplement & exercise (H-NSE), high fat diet supplement & no exercise (H-SNE), and high fat diet supplement & exercise (H-SE). Liver nonesterified carnitine (NEC) was significantly higher in the H-SNE group than in the H-C group, and liver total carnitine (TCNE) levels were significantly higher in the H-SNE group than in the H-NSE and H-SE groups. Liver and muscle carnitine palmitoyltransferase-I (CPT-I) mRNA levels tended to be higher with ASE supplementation and/or exercise. These results suggest that supplementation with ASE and/or exercise might have a role in improving lipid oxidation.
Physicochemical Properties of Enzymatically Modified Maize Starch Using 4-${\alpha}$-Glucanotransferase
Park, Jin-Hee;Park, Kwan-Hwa;Jane, Jay-Iin 902
Granular maize starch was treated with Thermus scotoductus 4-α-glucanotransferase (α-GTase), and its physicochemical properties were determined. The gelatinization and pasting temperatures of α-GTase-modified starch were decreased by higher enzyme concentrations. α-GTase treatment lowered the peak, setback, and final viscosity of the starch. At a higher level of enzyme treatment, the melting peak of the amylose-lipid complex was undetectable on the DSC thermogram. Also, α-GTase-modified starch showed a slower retrogradation rate. The enzyme treatment changed the dynamic rheological properties of the starch, leading to decreases in its elastic (G′) and viscous (G″) moduli. α-GTase-modified starch showed more liquid-like characteristics, whereas normal maize starch was more elastic and solid-like. Gel permeation chromatography of modified starch showed that amylose was degraded, and a low molecular-weight fraction with Mw of 1.1 × 10^5 was produced. Branch chain-length (BCL) distribution of modified starch showed increases in BCL (DP > 20), which could result from glucans degraded from amylose molecules being transferred to the branch chains of amylopectin by inter-/intra-molecular transglycosylation of α-GTase. These new physicochemical functionalities of the modified starch produced by α-GTase treatment are applicable to starch-based products in various industries.
Genistein Combined with Exercise Improves Lipid Profiles and Leptin Levels in C57BL/6J Mice Fed a High Fat Diet
Seong, So-Hui;Ahn, Eun-Mi;Sohn, Hee-Sook;Baik, Sang-Ho;Park, Hyun-Woo;Lee, Sang-Jun;Cha, Youn-Soo 910
The aim of this study is to determine the anti-obesity effects of genistein and exercise, separately and in combination, in mice. Fifty male C57BL/6J mice were divided into 5 treatment groups: normal diet (ND), high fat diet (HD), high fat diet with exercise (HD+Ex), high fat diet with 0.2% genistein (HD+G), and high fat diet with 0.2% genistein and exercise (HD+G+Ex). They were allowed free access to feed and water, and exercised mice engaged in swimming on a regular basis for 12 weeks. Genistein-supplemented mice gained less weight, had lower energy intake, better lipid profiles, and lower leptin than the HD mice. Furthermore, when genistein was combined with exercise (HD+G+Ex), the effects were even greater. HD, HD+Ex, and HD+G mice exhibited increased hepatic CPT-1 mRNA expression. Therefore, genistein combined with exercise has anti-obesity effects, as shown by changes in body weight, fat accumulation, energy intake, and leptin levels.
Single-Kernel Characteristics of Soft Wheat in Relation to Milling and End-Use Properties
Park, Young-Seo;Chang, Hak-Gil 918
To investigate the relationship of wheat single kernel characteristics with end-use properties, 183 soft wheat cultivars and lines were evaluated for milling quality characteristics (kernel hardness, kernel and flour protein, flour ash), and end-use properties (i.e., as ingredients in sugar-snap cookies, sponge cake). Significant positive correlations occurred among wheat hardness parameters including near-infrared reflectance (NIR) score and single kernel characterization system (SKCS). The SKCS characteristics were also significantly correlated with conventional wheat quality parameters such as kernel size, wheat protein content, and straight-grade flour yield. The cookie diameter and cake volume were negatively correlated with NIR and SKCS hardness, and there was an inverse relationship between flour protein contents and kernel weights or sizes. Sugar-snap cookie diameter was positively correlated with sponge cake volume.
Quality Characteristics of Commercial Baechukimchi During Long-term Fermentation at Refrigerated Temperatures
Jung, Lan-Hee;Jeon, Eun-Raye 924
This study addresses the quality characteristics of commercial baechukimchi by analyzing its physicochemical characteristics and sensory properties in relation to fermentation time and temperature. The salinity of baechukimchi increased to 3.01% after 45 days of fermentation at 2 and 5°C, but decreased to 2.81% by 105 days. The pH decreased gradually at the beginning of fermentation and decreased further after 45 days. The acidity differed most between kimchi fermented at 2°C (0.36%) and 5°C (0.48%) at 45 days of fermentation. The vitamin C content was 8.47 mg% in kimchi fermented at both 2 and 5°C on the day of initial production, then peaked after 45 to 60 days at 14.10 mg%, and decreased thereafter. The total microbial count gradually increased during the first 75 days of fermentation. The appearance and overall acceptability of baechukimchi were highest after 90 days of fermentation at 2°C and after 60 days of fermentation at 5°C.
Characterization of Polysaccharides Obtained from Purslane (Portulaca olerace L.) Using Different Solvents and Enzymes
Choi, Ae-Jin;Kim, Chul-Jin;Cho, Yong-Jin;Kim, Yang-Ha;Cha, Jae-Yoon;Hwang, Jae-Kwan;Kim, In-Hwan;Kim, Chong-Tai 928
Physicochemical properties, such as the yield and molecular weight distribution of polysaccharide fractions, of polysaccharides in the enzymatic hydrolysates of purslane were investigated and characterized. A higher amount of micronutrients, such as potassium (9,413 mg/100 g), phosphorus (539 mg/100 g), leucine, alanine, lysine, valine, glycine, and isoleucine, was present in whole purslane. The yield of water soluble polysaccharides (WSP) was 0.29, 7.01, and 7.94% when extracted using room temperature water (RTW), hot water (HW), and high temperature/high pressure water (HTPW), respectively, indicating that HW or HTPW extraction may be effective for obtaining WSP from purslane. The average ratio of L-arabinose:D-galactose in the WSP was 37:49, 34:37, and 27:29 when extracted using RTW, HW, and HTPW, respectively. These results indicate that water is a suitable extraction solvent for preparation of the arabinogalactan component of whole purslane. A higher yield and total carbohydrate content were obtained by using Viscozyme L instead of Pectinex 5XL during extraction of the WSP, which indicates that enzymatic treatment of purslane may be an effective method to control the Mw of polysaccharides. Finally, it was confirmed that Viscozyme L is a suitable enzyme for the hydrolysis and separation of polysaccharides obtained from purslane.
Biological Detoxification of Lacquer Tree (Rhus verniciflua Stokes) Stem Bark by Mushroom Species
Choi, Han-Seok;Kim, Myung-Kon;Park, Hyo-Suk;Yun, Sei-Eok;Mun, Sung-Phil;Kim, Jae-Sung;Sapkota, Kumar;Kim, Seung;Kim, Tae-Young;Kim, Sung-Jun 935
The stem bark of Rhus verniciflua (RVSB) has been used in herbal medicine to treat diabetes mellitus and stomach ailments for thousands of years in Korea, despite its content of the plant allergen urushiol. A new biological approach for the removal of urushiol from RVSB using mushrooms is described. All mushroom species (11 spp.) employed in this study were able to grow on RVSB, although the growth rate (mm/day) was lower than on the control (sawdust). The urushiol congeners [C15 triene (m/z 314), C15 diene (m/z 316), C15 monoene (m/z 318), and C15 saturated (m/z 320)] were purified by HPLC and identified by GC-MS. A C15:3 (3-pentadecatrienyl catechol) was found to be most abundant in RVSB. Urushiol analogues decreased remarkably, from 154.15 to 10.73 mg/100 g (approximately 93%), with Fomitella fraxinea, whereas Trametes versicolor showed only a 1.46% degradation capacity despite its 2-fold higher growth rate. Similarly, laccase activity was found to be high for F. fraxinea and low for T. versicolor. Moreover, approximately 98% detoxification was accomplished by F. fraxinea cultivated on RVSB supplemented with 20%(w/w) rice bran. These findings suggest that mushrooms can be used in the detoxification of RVSB.
Physicochemical Characteristics of Acid Thinned and High Pressure Treated Waxy Rice Starch for Yugwa (Korean Rice Snack) Production
Cha, Jae-Yoon;Choi, Ae-Jin;Chun, Bo-Youn;Kim, Min-Ji;Chun, Hyang-Sook;Kim, Chul-Jin;Cho, Yong-Jin;Kim, Chong-Tai 943
The acid modification of waxy rice starch was conducted to improve the yugwa production process. The intrinsic viscosity, paste viscosity, and differential scanning calorimetry characteristics of acid modified starch were measured, and bandaegi and yugwa prepared from acid modified starch were evaluated. The intrinsic viscosities of acid thinned starches were 1.48, 1.27, 1.15, and 0.91 mL/g after reaction times of 1, 2, 3, and 4 hr, respectively. The gelatinization enthalpy was reduced from 16.3 J/g in native starch to 15.8, 15.3, 14.7, and 14.5 J/g in acid thinned starches as the time of acid thinning increased. The peak viscosity and final viscosity decreased with increasing the time of acid thinning, but the pasting temperature was slightly increased in acid thinned starches. The hardness of bandaegi from acid thinned starches under high pressure greatly decreased relative to the control, typical yugwa. Yugwa from acid thinned starch under high pressure maintained a homogeneous structure containing tiny and uniform cells similar to that of native waxy rice starch used for typical yugwa. Acid thinning under high pressure appears to be a good alternative to the existing steeping process for better yugwa quality.
Microbial Contamination by Bacillus cereus, Clostridium perfringens, and Enterobacter sakazakii in Sunsik
Lee, Eun-Jin;Kim, Sung-Gi;Yoo, Sang-Ryeol;Oh, Sang-Suk;Hwan, In-Gyun;Kwon, Gi-Sung;Park, Jong-Hyun 948
The powdered cereal sunsik is a partially thermal-processed product that requires safety evaluation for food-borne pathogens. Thirty-six sunsik products from Korean markets were collected and analyzed for contamination by total viable cell counts, coliforms, Escherichia coli, and the spore-formers Clostridium perfringens and Bacillus cereus. Enterobacter sakazakii, a newly emerging pathogen, was also analyzed. Approximately 28% of sunsik samples were contaminated at 5 log CFU/g for total viable counts. Coliforms and E. coli were detected in 33 and 4% of the samples, respectively. The spore-forming B. cereus was found in 42% of the samples at a maximal level of 3 log CFU/g on average. About 6% of the samples were contaminated with Cl. perfringens at an average level of 15 CFU/g. Forty-five percent of sunsik samples contained E. sakazakii, at levels from 0.007 to over 1.1 cells/g by the MPN method. In addition, one sunsik product for infants and children showed over 3 log CFU/g for both B. cereus and E. sakazakii. Therefore, attention should be paid to controlling microbial hazards such as B. cereus and E. sakazakii in sunsik, particularly in products fed to infants under 6 months of age.
Evaluation of Structure Development of Xanthan and Carob Bean Gum Mixture Using Non-Isothermal Kinetic Model
Yoon, Won-Byong;Gunasekaran, Sundaram 954
The gelation mechanism of xanthan-carob mixtures (X/C) was investigated based on thermorheological behavior. Three X/C ratios (1:3, 1:1, and 3:1) were studied. Small amplitude oscillatory shear tests were performed to measure linear viscoelastic behavior during gelation, and temperature sweep (−1°C/min) experiments were conducted. Using a non-isothermal kinetic model, the activation energy (Ea) during gelation was calculated. At 1% total concentration, the Ea for xanthan fractions (φx) = 0.25, 0.5, and 0.75 were 178, 159, and 123 kJ/mol, respectively. However, a discontinuity was observed in the activation energy plots. Based on this, two gelation mechanisms were presumed: association of xanthan and carob molecules, and aggregation of polymer strands. The association process is the primary mechanism forming 3-D networks in the initial stage of gelation, while the aggregation of polymer strands plays a major role in the later stage.
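The abstract does not spell out the non-isothermal kinetic model used, but Arrhenius-type activation energies of this kind are commonly estimated from the slope of ln(rate) versus 1/T. A generic, purely illustrative sketch with hypothetical values:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol·K)

# Hypothetical gelation rate constants at several temperatures
temps_c = np.array([30.0, 25.0, 20.0, 15.0])      # °C
rates = np.array([0.045, 0.028, 0.017, 0.010])    # arbitrary rate units

inv_t = 1.0 / (temps_c + 273.15)                  # 1/K
slope, _ = np.polyfit(inv_t, np.log(rates), 1)    # ln k = -Ea/R * (1/T) + c
ea_kj = -slope * R / 1000.0                       # activation energy, kJ/mol

print(f"Estimated activation energy: {ea_kj:.0f} kJ/mol")
```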
Effects of Aqueous Ozone Combined with Organic Acids on Microflora Inactivation in the Raw Materials of Saengsik
Bang, Woo-Suk;Eom, Young-Ran;Eun, Jong-Bang;Oh, Deog-Hwan 958
This study was conducted to determine the effects of microorganism inactivation using 3 ppm of aqueous ozone (AO), 1% citric acid, 1% lactic acid, and 1% acetic acid alone, as well as the combinations of AO and organic acid, for washing the raw materials of saengsik (carrot, cabbage, glutinous rice, barley) with or without agitation. The combination of AO and 1% of each organic acid significantly inactivated spoilage bacteria in both the vegetables and the grains (p<0.05). However, in the glutinous rice, no inhibitory effects were shown for total aerobic bacteria by using water, ozone, or the combination of AO with citric acid or lactic acid, without agitation. Microbial inactivation was enhanced with agitation in the grains, whereas dipping (no agitation) treatments showed better inhibitory effects in the vegetables than in the barley, suggesting that washing processes should take into account the type of food material.
Isolation and Identification of an Antioxidant Substance from Heated Garlic (Allium sativum L.)
Hwang, In-Guk;Woo, Koan-Sik;Kim, Dae-Joong;Hong, Jin-Tae;Hwang, Bang-Yeon;Lee, Youn-Ri;Jeong, Heon-Sang 963
The objective of this study was to identify the antioxidant substance in heated garlic juice (HGJ). We evaluated the antioxidant activities of heated garlic juice exposed to 120, 130, and 140°C for 2 hr. The HGJ was partitioned using the solvents hexane, chloroform, ethyl acetate, butanol, and water. The ethyl acetate fraction of HGJ treated at 130°C for 2 hr showed strong antioxidant activity; this extract was isolated and purified using silica gel column chromatography and semi-preparative high-performance liquid chromatography. The structure of the purified compound was determined using spectroscopic methods, i.e., ultraviolet, mass spectrometry, infrared, 1H NMR, 13C NMR, DEPT, HMBC, and HMQC. The isolated compound was identified as thiacremonone (2,4-dihydroxy-2,5-dimethyl-thiophene-3-one). Thiacremonone showed strong 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical-scavenging activity, with a 50% inhibition concentration (IC50) of 22.25±0.44 μg/mL, indicating an activity much higher than that of the antioxidants ascorbic acid (30.06±0.42 μg/mL), α-tocopherol (71.30±0.97 μg/mL), and butylated hydroxyanisole (50.54±0.94 μg/mL).
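As a side note, IC50 values such as those quoted above are conventionally read off a dose-response curve; one simple generic estimate interpolates between the two concentrations bracketing 50% inhibition. All values below are hypothetical:

```python
import numpy as np

# Hypothetical DPPH inhibition (%) at increasing sample concentrations (μg/mL)
conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
inhibition = np.array([18.0, 31.0, 47.0, 66.0, 84.0])

# Linear interpolation of the concentration giving 50% inhibition
ic50 = np.interp(50.0, inhibition, conc)
print(f"Estimated IC50: {ic50:.1f} μg/mL")
```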
Protective Effect of Administrated Glutathione-enriched Saccharomyces cerevisiae FF-8 Against Carbon Tetrachloride ($CCl_4$)-induced Hepatotoxicity and Oxidative Stress in Rats
Shon, Mi-Yae;Cha, Jae-Young;Lee, Chi-Hyeoung;Park, Sang-Hyun;Cho, Young-Su 967
The present work aimed to evaluate the protective effect of the glutathione-enriched Saccharomyces cerevisiae FF-8 strain against carbon tetrachloride (CCl4)-induced hepatotoxicity and oxidative stress in rats. The activities of liver markers (alanine aminotransferase, aspartate aminotransferase, alkaline phosphatase, lactate dehydrogenase), a lipid peroxidative index (thiobarbituric acid-reactive substances), and the antioxidant status (reduced glutathione) were used to monitor the protective role of the FF-8 strain. The liver marker enzymes in plasma and lipid peroxidation in the liver were increased by CCl4 treatment, but these were significantly decreased by FF-8 strain treatment. The hepatic concentration of glutathione in animals fed the glutathione-enriched FF-8 strain was approximately twice as high as normal, and was slightly increased in response to CCl4 plus the glutathione-enriched FF-8 strain. The increased liver triglyceride concentration due to CCl4 treatment was significantly decreased by the FF-8 strain, reaching the level of the normal group. Administration of the FF-8 strain to normal rats did not show any signs of harmful effects. Therefore, the current findings suggest that the FF-8 strain could be an effective antioxidant with no or negligible side effects and might be useful for protection against hepatotoxicity and oxidative stress in CCl4-treated rats.
Exposure to Ethyl Carbamate by Consumption of Alcoholic Beverages Imported in Korea
Hong, Kwon-Pyo;Kang, Yoon-Seok;Jung, Dong-Chae;Park, Sae-Rom;Yoon, Ji-Ho;Lee, Sung-Yong;Ko, Yong-Seok;Kim, So-Hee;Ha, Sang-Do;Park, Sang-Kyu;Bae, Dong-Ho 975
The determination of ethyl carbamate content in alcoholic beverages imported into Korea and an exposure assessment were conducted. In gas chromatography/mass spectrometry/selected ion monitoring (GC/MS/SIM) analysis, 2.5-39, 8-263, 6.3-112, 11.3-23.5, 53-94, 8.5-38.5, 7-9.5, 21.3-31.5, 5-832.5, and 10.5-364.8 μg/L of ethyl carbamate were detected in imported beers, sakes, whiskies, vodkas, Chinese liquors, cognacs, tequilas, rums, liqueurs, and wines, respectively. The exposure assessment indicated that the exposure of Korean adults to ethyl carbamate was lower than 20 ng/kg BW per day (the virtual safe dose), indicating that the amount of ethyl carbamate taken in through fermented food and alcoholic beverages, including imported products, is currently at a 'no significant risk level'. However, the present low exposure to ethyl carbamate through imported alcoholic products was not due to low ethyl carbamate contents in imported products, but to low consumption of the imported products. Therefore, given the increasing importation of alcoholic beverages into Korea, reduction of ethyl carbamate content in imported alcoholic beverages, especially non-distilled products, should be required by regulating limits on the ethyl carbamate content of imported alcoholic beverages.
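Exposure figures of this kind (ng/kg BW per day) are conventionally computed as residue concentration × daily intake ÷ body weight. A hypothetical illustration, not the study's own calculation:

```python
# Hypothetical ethyl carbamate exposure estimate
concentration_ug_per_l = 30.0  # residue in a beverage (μg/L)
daily_intake_l = 0.05          # average daily consumption (L/day)
body_weight_kg = 60.0          # assumed adult body weight (kg)

# μg/L -> ng/L (×1000), times L/day, divided by kg body weight
exposure = concentration_ug_per_l * 1000 * daily_intake_l / body_weight_kg
print(f"Exposure: {exposure:.1f} ng/kg BW per day")
```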
Antioxidant Activity of Lignan Compounds Extracted from Roasted Sesame Oil on the Oxidation of Sunflower Oil
Lee, Jin-Young;Kim, Moon-Jung;Choe, Eun-Ok 981
The effects of lignan compounds (sesamol, sesamin, and sesamolin) extracted from roasted sesame oil on the autoxidation at 60°C for 7 days and thermal oxidation at 180°C for 10 hr of sunflower oil were studied by determining conjugated dienoic acid (CDA) contents, p-anisidine values (PAV), and fatty acid composition. The contents of the lignan compounds during the oxidations were also monitored. α-Tocopherol was used as a reference antioxidant. Addition of lignan compounds decreased the CDA contents and PAV of the oils during oxidation at 60°C or heating at 180°C, indicating that sesame oil lignans slowed both the autoxidation and thermal oxidation of sunflower oil. Sesamol was the most effective in decreasing CDA formation and hydroperoxide decomposition in the auto- and thermo-oxidation of the oil, and its antioxidant activity was significantly higher than that of α-tocopherol. Sesamol, sesamin, and sesamolin added to sunflower oil were degraded during the oxidation of the oils, with sesamol degrading fastest. Degradation of sesamin and sesamolin during the oxidation of the oil was lower than that of α-tocopherol. These results strongly indicate that the oxidative stability of sunflower oil can be improved by the addition of sesamol, sesamin, or sesamolin extracted from roasted sesame oil.
Protective Effect of Water Extract of Fraxinus Rhynchophylla Leaves on Acetaminophen-induced Nephrotoxicity in Mice and Its Phenolic Compounds
Jeon, Jeong-Ryae;Choi, Joon-Hyuk 988
The protective effect of the water extract of Fraxinus rhynchophylla leaves (FLE) was determined using an animal model of acetaminophen (AAP)-induced nephrotoxicity. The BALB/c male mice used in this study were divided into 3 groups: normal, AAP-administered, and FLE-pretreated AAP groups. A single dose of AAP induced remarkable necrosis of renal tubules along with congestion and edema, as observed by hematoxylin and eosin staining, and also increased the number of terminal deoxynucleotidyl transferase-mediated deoxyuridine triphosphate nick end labeling (TUNEL)-positive renal tubular epithelial cells. Blood urea nitrogen and plasma creatinine levels were significantly higher in the AAP group than in the normal group. However, FLE pretreatment attenuated the renal tubule necrosis: regeneration and dilatation of the renal tubules were noted, and the number of TUNEL-positive cells was reduced in the FLE-pretreated groups. In an effort to detect the bioactive compounds exerting protective effects in FLE, an analysis of phenolic compounds by gas chromatography/mass spectrometry (GC/MS) was performed, which identified esculetin and esculin. The present study indicates that these compounds may exert a protective effect against AAP-induced nephrotoxicity.
Identification of Lactic Acid Bacteria Involved in Traditional Korean Rice Wine Fermentation
Seo, Dong-Ho;Jung, Jong-Hyun;Kim, Hyun-You;Kim, Young-Rok;Ha, Suk-Jin;Kim, Young-Cheul;Park, Cheon-Seok 994
Changes in microflora and in the typical quality characteristics of Korean rice wine fermentation, including pH, reducing sugar content, lactic acid content, and ethanol content, were investigated. While no fungus was detected in our Korean rice wine mash, yeast was present in fairly high quantities (1.44-4.76 × 10^8 CFU/mL) throughout the fermentation period. It is assumed that lactic acid bacteria (LAB) affect the variations in fragrance and flavor of traditional Korean rice wine. The main LAB during Korean rice wine fermentation was determined and identified as a Gram-positive, straight rod-shaped cell. Genotypic identification of the isolated strain by amplification of its 16S rRNA sequence revealed that it was most closely related to Lactobacillus plantarum (99%), without any other comparably related Lactobacillus strains. Therefore, we designated the major LAB identified from traditional Korean rice wine fermentation as L. plantarum RW.
Influence of pH, Emulsifier Concentration, and Homogenization Condition on the Production of Stable Oil-in-Water Emulsion Droplets Coated with Fish Gelatin
Surh, Jeong-Hee 999
An oil-in-water (O/W) emulsion [20 wt% corn oil, 0.5-6.0 wt% fish gelatin (FG), pH 3.0] was produced by high pressure homogenization, and the influence of pH, protein concentration, and homogenization conditions on the formation of FG-stabilized emulsions was assessed by measuring particle size distribution, electrical charge, creaming stability, microstructure, and free FG concentration in the emulsions. Optical microscopy indicated that there were some large droplets (d > 10 μm) in all FG emulsions; nevertheless, the amount of large droplets tended to decrease with increasing FG concentration. More than 90% of the FG was present free in the continuous phase of the emulsions. To facilitate droplet disruption and prevent droplet coalescence within the homogenizer, homogenization time was adjusted in O/W emulsions stabilized by 2.0 or 4.0 wt% FG. However, increasing the number of passes instead promoted droplet coalescence. This study has shown that FG may have some limited use as a protein emulsifier in O/W emulsions.
Changes in the Chemical and Functional Components of Korean Rough Rice Before and After Germination
Lee, Youn-Ri;Kim, Ja-Young;Woo, Koan-Sik;Hwang, In-Guk;Kim, Kyong-Ho;Kim, Kee-Jong;Kim, Jae-Hyun;Jeong, Heon-Sang 1006
This study investigated changes in the chemical and functional components of germinated rough rice for the development of functional foods. The chemical components determined for the 'Ilpum', 'Goami2', 'Keunnun', and 'Heugkwang' rough rice cultivars included dietary fiber, free sugars, free amino acids, and functional components such as tocopherols and γ-oryzanol. The crude protein, fat, total dietary fiber, and free sugar contents of the rough rice increased significantly after germination; the essential amino acid content was particularly increased. After germination of the 'Ilpum', 'Goami2', 'Keunnun', and 'Heugkwang' varieties, the following increases were found: γ-aminobutyric acid increased 2.4, 2.5, 6.1, and 3.4 times, respectively; α-tocopherol, α-tocotrienols, and γ-tocotrienols increased significantly; and γ-oryzanol content increased 0.8, 1.1, 1.5, and 1.2 times, respectively. Thus, germinated rough rice has the potential to be used as a healthy and functional food ingredient.
Allergenicity Changes in Raw Shrimp (Acetes japonicus) and Saeujeot (Salted and Fermented Shrimp) in Cabbage Kimchi due to Fermentation Conditions
Park, Jin-Gyu;Saeki, Hiroki;Nakamura, Atsushi;Kim, Koth-Bong-Woo-Ri;Lee, Ju-Woon;Byun, Myung-Woo;Kim, Seong-Mi;Lim, Sung-Mee;Ahn, Dong-Hyun 1011
Saeujeot (salted and fermented shrimp) and kimchi are traditional Korean fermented foods. Even though shrimp have often induced severe allergic reactions in sensitized individuals, few studies have investigated the allergenicity of shrimp. The aim of this study was to observe the changes in pH and allergenicity of raw shrimp (Acetes japonicus) and saeujeot in cabbage kimchi during fermentation using competitive indirect enzyme-linked immunosorbent assay (Ci-ELISA). Fermentation was carried out at different temperatures (25, 15, and 5°C). The pH of cabbage kimchi containing raw shrimp or saeujeot decreased slowly at the lower temperature (5°C) toward the end stage of the fermentation process. The binding ability of serum obtained from patients allergic to raw shrimp against shrimp tropomyosin and saeujeot in kimchi decreased rapidly with longer fermentation periods and higher temperature (25°C). In conclusion, the allergenicity of both raw shrimp and saeujeot in kimchi decreased during fermentation, but the decrease in allergenicity of saeujeot was greater than that observed for raw shrimp.
Effect of Aqueous Chlorine Dioxide Treatment on the Microbial Growth and Qualities of Strawberries During Storage
Jin, You-Young;Kim, Yun-Jung;Chung, Kyung-Sook;Won, Mi-Sun;Song, Kyung-Bin 1018
The effect of aqueous chlorine dioxide treatment on the microbial growth and quality changes of strawberries during storage was examined. Strawberries were treated with 5, 10, and 50 ppm of chlorine dioxide solution, and stored at 4±1°C. Total aerobic bacteria in strawberries treated with 50 ppm of chlorine dioxide increased from 1.40 to 2.10 log CFU/g after 7 days, while the control increased from 2.75 to 4.32 log CFU/g. Yeasts and molds in strawberries treated with 50 ppm of chlorine dioxide increased from 1.10 to 1.97 log CFU/g after 7 days, while the control increased from 2.55 to 4.50 log CFU/g. The pH and titratable acidity of strawberries were not significantly different among treatments. Sensory evaluation results showed that chlorine dioxide-treated strawberries had better sensory scores than the control. These results indicate that chlorine dioxide treatment could be useful in improving the microbial safety and quality of strawberries during storage.
Effect of Glutinous Barley Intake on Lipid Metabolism in Middle-Aged Rats Fed a High-Fat Diet
Sohn, Jung-Sook;Hong, So-Young;Kim, Mi-Kyung 1023
This study was designed to determine whether dietary glutinous barley (GB) affects lipid metabolism in middle-aged rats previously fed a high-fat diet. To induce obesity, 20 male 9-month-old Sprague Dawley rats were raised for 1 month on a diet containing 20%(w/w) lipid. The rats were allocated to 1 of 2 groups of 10 rats each and for the subsequent 2 months were fed an 8%(w/w) lipid diet containing well-milled rice (WMR) or GB powder. Rats fed the GB diet had significantly lower concentrations of plasma triglyceride, plasma total cholesterol, and liver cholesterol than rats fed the WMR diet. Fecal excretions of triglyceride and bile acids were significantly greater for the GB group than for the WMR group. In conclusion, dietary GB has positive effects on lipid metabolism: it decreases plasma cholesterol concentration by increasing fecal excretion of bile acids.
Curcumin-induced Growth Inhibitory Effects on HeLa Cells Altered by Antioxidant Modulators
Hong, Jung-Il 1029
Curcumin (diferuloyl methane), derived from the rhizomes of Curcuma longa L., has been suggested as an anti-inflammatory and anti-carcinogenic agent. In the present study, modulation of the cytotoxic effects of curcumin on HeLa cells by different types of antioxidants was investigated. Cytotoxic effects of curcumin were significantly enhanced in the presence of superoxide dismutase (SOD), which decreased the IC50 from 26.0 to 15.4 μM after 24 hr incubation; the activity was not altered by catalase. The effect of curcumin was significantly less pronounced in the presence of 4 mM N-acetylcysteine (NAC). Low concentrations (<1 mM) of NAC, however, increased the efficacy of curcumin. Cysteine and β-mercaptoethanol, which have a thiol group, showed similar biphasic patterns to NAC in modulating curcumin cytotoxicity, whereas the cytotoxicity was consistently enhanced by ascorbic acid, a non-thiol antioxidant. In the presence of SOD, ascorbic acid, and 0.5 mM NAC, cellular levels of curcumin were significantly increased by 31-66%, whereas 4 mM NAC decreased the level. The present results indicate that thiol reducing agents showed a biphasic effect in modulating the cytotoxicity of curcumin; it is likely that their thiol group reacts with curcumin, especially at high concentrations.
Protective Effect of Acanthopanax senticosus on Oxidative Stress Induced PC12 Cell Death
Choi, Soo-Jung;Yoon, Kyung-Young;Choi, Sung-Gil;Kim, Dae-Ok;Oh, Se-Jong;Jun, Woo-Jin;Shin, Dong-Hoon;Cho, Sung-Hwan;Heo, Ho-Jin 1035
Epidemiologic studies have shown important relationships between oxidative stress and the Alzheimer's disease (AD) brain. In this study, the free radical scavenging activity and neuronal cell protection effect of aqueous methanol extracts of Acanthopanax senticosus (A. senticosus) were examined. H2O2-induced oxidative stress was measured using the 2',7'-dichlorofluorescein diacetate (DCF-DA) assay. Pretreatment with the phenolics of A. senticosus prevented oxidative injury from H2O2 toxicity. Since oxidative stress is known to increase neuronal cell membrane breakdown leading to cell death, lactate dehydrogenase release and trypan blue exclusion assays were utilized. We found that the phenolics of A. senticosus have neuronal cell protection effects. This suggests that the phenolics of A. senticosus inhibited H2O2-induced oxidative stress and that A. senticosus may be beneficial against oxidative stress-induced risk in AD.
Antioxidative Activities of the Ethyl Acetate Fraction from Heated Onion (Allium cepa)
Lee, Youn-Ri;Hwang, In-Guk;Woo, Koan-Sik;Kim, Dae-Joong;Hong, Jin-Tae;Jeong, Heon-Sang 1041
Heated onion juice was partitioned using the solvents hexane, chloroform, ethyl acetate, and butanol. The ethyl acetate fraction showed the strongest scavenging effect on the ABTS radical. The antioxidant activities of the ethyl acetate fraction from raw and heated onion (120, 130, and 140°C) were evaluated using radical scavenging assays. Radical and nitrite scavenging activities were higher in heated onion than raw onion, and the higher the temperature of heat treatment, the greater the radical and nitrite scavenging activities. Heated onion (140°C, 2 hr) was more effective than raw onion, having higher DPPH radical scavenging (5.7-fold), hydroxyl radical scavenging (6.4-fold), superoxide radical scavenging (2.3-fold), hydrogen peroxide scavenging (11.8-fold), and nitrite scavenging (4.3-fold) activities. The content of physiologically active materials in onion increased after heating, and in this regard, heated onion can be used as a biological material for the manufacture of health foods and supplements.
An Herbal Medicine Mixture (HM-10) Induces Longitudinal Bone Growth and Growth Hormone Release in Rats
Park, Sung-Sun;Oh, Sung-Hoon;Bae, Song-Hwan;Kim, Jung-Min;Chang, Un-Jae;Park, Jung-Min;Kim, Jin-Man;Suh, Hyung-Joo 1046
To investigate the growth promoting effects of an herbal medicine formulation (HM-10), Sprague Dawley (SD) male rats (3 weeks old) were divided into 3 groups (8 rats/group). The control group was given a daily oral administration of saline, and the treatment groups, HM-1 and HM-2, were given daily administrations of HM-10 (500 and 1,000 mg/kg BW, respectively). The cumulative tibial bone growth of the HM-1 and HM-2 groups (22.5 and 20.8 mm, respectively), and their cumulative femur bone growth (19.4 and 18.2 mm, respectively), were significantly different compared to the control group (7.5 mm of tibial growth and 7.7 mm of femur growth) (p<0.05). Lastly, the growth hormone levels of the HM-1 and HM-2 groups (1.70 and 1.79 ng/mL, respectively), as well as their insulin like growth factor 1 (IGF-1) levels (165.1 and 171.7 ng/mL, respectively) showed significant differences compared to the control (0.93 ng/mL of growth hormone and 125.6 ng/mL of IGF-1) (p<0.05).
Antimicrobial Activity of an Edible Wild Plant, Apiifolia Virgin's Bower (Clematis apiifolia DC)
Kyung, Kyu-Hang;Woo, Yong-Ho;Kim, Dong-Sub;Park, Hun-Jin;Kim, Youn-Soon 1051
An edible wild perennial plant with extremely potent antimicrobial activity was found and identified as apiifolia Virgin's Bower (Clematis apiifolia DC), which is easily found in wet wilderness areas. Fresh fruit extract of C. apiifolia exhibited minimum inhibitory concentrations (MIC) in the vicinity of 0.1% against various yeasts and of less than or equal to 0.4% for non-lactic acid bacteria. MICs against lactic acid bacteria were about 2.0%. The antimicrobial activity of C. apiifolia fruit was even more potent than that of garlic, which has long been known for its potent antimicrobial activity. The principal antimicrobial compound of the fruit extract of C. apiifolia was isolated and identified by high performance liquid chromatography and gas chromatography as protoanemonin (a gamma lactone of 4-hydroxy-2,4-pentadienoic acid). The antimicrobial activity of C. apiifolia was stable at high temperatures, and the activity was maintained after heating at 121°C for 10 min. The antimicrobial compound of C. apiifolia is presumed to inhibit microorganisms by reacting with sulfhydryl groups of cellular proteins.
Novel Purification Method of Two Monoterpene Glucosides, Paeoniflorin and Albiflorin, from Peony
Kim, Nam-Soo;Kim, Dong-Kyung 1055
Two monoterpene glucosides, paeoniflorin and albiflorin, in peony (Paeonia lactiflora) were purified from a 70% ethanol extract of Paeoniae Radix by diethyl ether washing, n-butanol partition, acetone dissolution, and gradient preparative HPLC. After the whole course of purification, the yields of paeoniflorin, albiflorin, and the two combined were 75.0, 38.8, and 68.7%, respectively, with corresponding purities of 96.2, 93.8, and 96.0%.
Antioxidative Diarylheptanoids from the Fruits of Alpinia oxyphylla
Han, Jae-Taek;Lee, Sang-Yoon;Lee, Yonn-Hyung;Baek, Nam-In 1060
The antioxidative activity of Alpinia oxyphylla was investigated by measuring the radical scavenging effect on 1,1-diphenyl-2-picrylhydrazyl (DPPH) and the inhibitory activity for linoleic acid peroxidation. Two antioxidative diarylheptanoids, yakuchinone A (1) and oxyphyllacinol (2), were isolated from the fruits of A. oxyphylla using thin layer chromatography (TLC) autographic assays. The DPPH scavenging activities of the compounds (IC50: 1, 57±2.1 μM; 2, 89±3.1 μM) were lower than that of vitamin C (IC50 = 51±1.1 μM), but higher than that of butylated hydroxytoluene (BHT, IC50 = 99±2.2 μM). Also, the inhibitory activities for linoleic acid peroxidation of the compounds (IC50: 1, 0.19±0.011 mM; 2, 0.31±0.009 mM) were higher than those of vitamin C (IC50 = 0.59±0.017 mM) and BHT (IC50 = 0.52±0.014 mM). In addition, the 13C-NMR data of oxyphyllacinol (2) are reported here for the first time.
Antigenotoxic Effect of Paecilomyces tenuipes Cultivated on Soybeans in a Rat Model of 1,2-Dimethylhydrazine-induced Colon Carcinogenesis
Park, Eun-Ju;Jeon, Gyeong-Im;Park, Nam-Sook;Jin, Byung-Rae;Lee, Sang-Mong 1064
We evaluated the effect of soybean dongchunghacho [SD, the dongchunghacho fungus (Paecilomyces tenuipes) cultivated on soybeans] on dimethylhydrazine (DMH)-induced DNA damage and oxidative stress in male F344 rats. The animals were divided into 3 groups and fed a casein-based high-fat, low-fiber diet without (DMH group) or with 13% (w/w) of soybean (DMH+S group) or SD (DMH+SD group). One week after beginning the diets, rats were treated weekly with DMH (30 mg/kg, s.c.) for 6 weeks; dietary treatments were continued for the entire experiment and endpoints were measured 9 weeks after the first DMH injection. SD supplementation reduced DMH-induced DNA damage in colon cells and reduced plasma lipid peroxidation. Thus, SD may have therapeutic potential for early-stage colon carcinogenesis.
Determination of Frequency Independent Critical Concentration of Xanthan and Carob Mixed Gels
Yoon, Won-Byong;Gunasekaran, Sundaram 1069
The frequency-independent critical concentration (Cc) of xanthan and carob (X/C) mixed gel was determined based on the Winter-Chambon theory. X/C mixed gels (X/C = 1:1 ratio) were prepared at concentrations from 0.1 to 1%. The linear viscoelastic properties, i.e., storage and loss modulus, of the X/C mixed gel at 20°C were measured by frequency sweep tests. The frequency independence of the loss tangent (tan δ) of the X/C mixed gels was determined graphically from the intersection of the plot of phase angle against concentration at varied frequencies. The intersection (C = 0.43%) was considered to be the Cc of the X/C mixed gel.
Antimicrobial Activity of Medicinal Plants Against Bacillus subtilis Spore
Cho, Won-Il;Choi, Jun-Bong;Lee, Kang-Pyo;Cho, Seok-Cheol;Park, Eun-Ji;Chung, Myong-Soo;Pyun, Yu-Ryang 1072
Bacterial endospores, especially those of the Bacillus and Clostridium genera, are the target of sterilization in various foods. We used Bacillus subtilis ATCC 6633 spores to screen medicinal plants for novel antimicrobial substances active against spores. We collected 79 types of plant samples, comprising 42 types of herbs and spices and 37 types of medicinal plants used in traditional medicine in Korea and China. At a concentration of 1% (w/v), only 14 of the ethanol extracts exhibited antimicrobial activity of at least 90% against B. subtilis spores. Crude extracts of Torilis japonica, Gardenia jasminoides, Plantago asiatica, Fritillaria, and Arctium lappa showed particularly high sporicidal activities, reducing the spore count by about 99%. Consideration of several factors, including antimicrobial activity, extraction yields, and costs of raw materials, resulted in the selection of T. japonica, G. jasminoides, A. lappa, and Coriandrum sativum for the final screening of novel antimicrobial substances. Verification tests repeated 10 times over a 4-month period showed that the ethanol extract of T. japonica fruit reduced aerobic plate counts of B. subtilis spores the most, from 10^7 to 10^4 CFU/mL (99.9%), with a standard deviation of 0.21%, indicating that this fruit is the most suitable for developing a novel antimicrobial substance for inactivating B. subtilis spores.
In vitro Digestibility of Cooked Noodle Products
Han, Jung-Ah;Seo, Tae-Rang;Lee, Su-Jin;Lim, Seung-Taik 1078
The in vitro digestive properties of 6 domestic noodle products (spaghetti, somyeon, ramyeon, dangmyeon, naengmyeon, and jjolmyeon) were compared after cooking under the manufacturer's recommended cooking conditions. The kinetic constant (k), representing the rate of hydrolysis at the initial digestion stage, was highest in the somyeon noodles (0.1151), followed by naengmyeon (0.0954), and was lowest in the spaghetti (0.0421). However, the concentration of starch (C∞) hydrolyzed over 2 hr was not different between the spaghetti (96.22) and the somyeon (96.40), indicating that different digestion behaviors occurred in each type of noodle, even though the amounts of digested starch were similar. The ramyeon, dangmyeon, and naengmyeon noodles showed relatively lower C∞ values than the spaghetti and the somyeon noodles. The spaghetti had the highest amount of slowly digestible starch (SDS, 43%) and the lowest glycemic index (GI, 87.8), whereas the somyeon had the lowest SDS value (9.6%) and the highest GI (93.0). The digestibility differences among the noodles were attributed to differences in their flour compositions and manufacturing processes.
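The k and C∞ values above come from fitting digestion curves; a common choice in such studies is the first-order model C(t) = C∞(1 − e^(−kt)). As a minimal sketch — the model form and the synthetic data here are my assumptions, not taken from the paper — the reported somyeon parameters can be recovered from a noisy curve like so:

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c_inf, k):
    # C(t) = C_inf * (1 - exp(-k*t)): starch hydrolyzed by time t (minutes)
    return c_inf * (1.0 - np.exp(-k * t))

# Synthetic digestion curve built from the somyeon values reported above
# (k = 0.1151, C_inf = 96.40), plus a little noise to mimic assay scatter.
t = np.array([0.0, 10, 20, 30, 60, 90, 120])
c = first_order(t, 96.40, 0.1151) + np.random.default_rng(1).normal(0, 1.0, t.size)

(c_inf_hat, k_hat), _ = curve_fit(first_order, t, c, p0=(90.0, 0.05))
print(round(c_inf_hat, 2), round(k_hat, 4))  # ~96.4 and ~0.115
```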
Application of Pulsed Electric Fields with Square Wave Pulse to Milk Inoculated with E. coli, P. fluorescens, and B. stearothermophilus
Shin, Jung-Kue;Jung, Kwan-Jae;Pyun, Yu-Ryang;Chun, Myong-Soo 1082
Ultra-high temperature (UHT) processed full fat milk inoculated with Escherichia coli, Pseudomonas fluorescens, and Bacillus stearothermophilus was exposed to a 30-60 kV/cm square wave pulsed electric field (PEF) with 1 μs pulse width and 26-210 μs treatment time in a continuous PEF treatment system. An 8-log reduction was obtained for E. coli and P. fluorescens, and a 3-log reduction for B. stearothermophilus, under PEF treatment conditions of 210 μs treatment time and 60 kV/cm pulse intensity at 50°C. There was no significant change in pH and titratable acidity of milk after PEF treatment. The electrical energy required to achieve the 8-log reduction for E. coli and P. fluorescens was estimated to be about 0.74 kJ/L.
As discussed in my iodine essay (FDA adverse events), iodine is a powerful health intervention as it eliminates cretinism and improves average IQ by a shocking magnitude. If this effect were possible for non-fetuses in general, it would be the best nootropic ever discovered, and so I looked at it very closely. Unfortunately, after going through ~20 experiments looking for ones which intervened with iodine post-birth and took measures of cognitive function, my meta-analysis concludes that the effect is small and driven mostly by one outlier study. Once you are born, it's too late. But the results could be wrong, and iodine might be cheap enough to take anyway, or take for non-IQ reasons. (This possibility was further weakened for me by an August 2013 blood test of TSH which put me at 3.71 uIU/ml, comfortably within the reference range of 0.27-4.20.)
And yet aside from anecdotal evidence, we know very little about the use of these drugs in professional settings. The Financial Times has claimed that they are "becoming popular among city lawyers, bankers, and other professionals keen to gain a competitive advantage over colleagues." Back in 2008 the narcolepsy medication Modafinil was labeled the "entrepreneur's drug of choice" by TechCrunch. That same year, the magazine Nature asked its readers whether they use cognitive-enhancing drugs; of the 1,400 respondents, one in five responded in the affirmative.
The smart pill that FDA approved is called Abilify MyCite. This tiny pill has a drug and an ingestible sensor. The sensor gets activated when it comes into contact with stomach fluid to detect when the pill has been taken. The data is then transmitted to a wearable patch that eventually conveys the information to a paired smartphone app. Doctors and caregivers, with the patient's consent, can then access the data via a web portal.
Talk to your doctor, too, before diving in "to ensure that they do not conflict with current meds or cause a detrimental effect," Hohler says. You also want to consider what you already know about your health and body – if you have anxiety or are already sensitive to caffeine, for example, you may find that some of the supplements work a little too well and just enhance anxiety or make it difficult to sleep, Barbour says. Finances matter, too, of course: The retail price for Qualia Mind is $139 for 22 seven-capsule "servings"; the suggestion is to take one serving a day, five days a week. The retail price for Alpha Brain is $79.95 for 90 capsules; adults are advised to take two a day.
The truth is, taking a smart pill will not allow you to access information that you have not already learned. If you speak English, a smart drug cannot embed the Spanish dictionary into your brain. In other words, they won't make you smarter or more intelligent. We need to throttle back our expectations and explore reality. What advantage can smart drugs provide? Brain enhancing substances have excellent health and cognitive benefits that are worth exploring.
At this point, I began thinking about what I was doing. Black-market Adderall is fairly expensive; $4-10 a pill vs prescription prices which run more like $60 for 120 20mg pills. It would be a bad idea to become a fan without being quite sure that it is delivering bang for the buck. Now, why the piracetam mix as the placebo as opposed to my other available powder, creatine powder, which has much smaller mental effects? Because the question for me is not whether the Adderall works (I am quite sure that the amphetamines have effects!) but whether it works better for me than my cheap legal standbys (piracetam & caffeine)? (Does Adderall have marginal advantage for me?) Hence, I want to know whether Adderall is better than my piracetam mix. People frequently underestimate the power of placebo effects, so it's worth testing. (Unfortunately, it seems that there is experimental evidence that people on Adderall know they are on Adderall and also believe they have improved performance, when they do not [5]. So the blind testing does not buy me as much as it could.)
1 PM; overall this was a pretty productive day, but I can't say it was very productive. I would almost say even odds, but for some reason I feel a little more inclined towards modafinil. Say 55%. That night's sleep was vile: the Zeo says it took me 40 minutes to fall asleep, I only slept 7:37 total, and I woke up 7 times. I'm comfortable taking this as evidence of modafinil (half-life 10 hours, 1 PM to midnight is only 1 full halving), bumping my prediction to 75%. I check, and sure enough - modafinil.
The leadership position in the market is held by the Americas. The region has favorable reimbursement policies and a high incidence of chronic and lifestyle diseases, which has impacted the market significantly. Moreover, the region's developed economies have a strong affinity toward the adoption of highly advanced technology. This falls in line with these countries' well-developed healthcare sectors.
In the nearer future, Lynch points to nicotinic receptor agents – molecules that act on the neurotransmitter receptors affected by nicotine – as ones to watch when looking out for potential new cognitive enhancers. Sarter agrees: a class of agents known as α4β2* nicotinic receptor agonists, he says, seem to act on mechanisms that control attention. Among the currently known candidates, he believes they come closest "to fulfilling the criteria for true cognition enhancers."
This tendency is exacerbated by general inefficiencies in the nootropics market - they are manufactured for vastly less than they sell for, although the margins aren't as high as they are in other supplement markets, and not nearly as comical as illegal recreational drugs. (Global Price Fixing: Our Customers are the Enemy (Connor 2001) briefly covers the vitamin cartel that operated for most of the 20th century, forcing food-grade vitamins prices up to well over 100x the manufacturing cost.) For example, the notorious Timothy Ferriss (of The Four-hour Work Week) advises imitators to find a niche market with very high margins which they can insert themselves into as middlemen and reap the profits; one of his first businesses specialized in… nootropics & bodybuilding. Or, when Smart Powders - usually one of the cheapest suppliers - was dumping its piracetam in a fire sale of half-off after the FDA warning, its owner mentioned on forums that the piracetam was still profitable (and that he didn't really care because selling to bodybuilders was so lucrative); this was because while SP was selling 2kg of piracetam for ~$90, Chinese suppliers were offering piracetam on AliBaba for $30 a kilogram or a third of that in bulk. (Of course, you need to order in quantities like 30kg - this is more or less the only problem the middlemen retailers solve.) It goes without saying that premixed pills or products are even more expensive than the powders.
In this large population-based cohort, we saw consistent robust associations between cola consumption and low BMD in women. The consistency of pattern across cola types and after adjustment for potential confounding variables, including calcium intake, supports the likelihood that this is not due to displacement of milk or other healthy beverages in the diet. The major differences between cola and other carbonated beverages are caffeine, phosphoric acid, and cola extract. Although caffeine likely contributes to lower BMD, the result also observed for decaffeinated cola, the lack of difference in total caffeine intake across cola intake groups, and the lack of attenuation after adjustment for caffeine content suggest that caffeine does not explain these results. A deleterious effect of phosphoric acid has been proposed (26). Cola beverages contain phosphoric acid, whereas other carbonated soft drinks (with some exceptions) do not.
Remember: The strictest definition of nootropics today says that for a substance to be a true brain-boosting nootropic it must have low toxicity and few side effects. Therefore, by definition, a nootropic is safe to use. However, when people start stacking nootropics indiscriminately, taking megadoses, or importing them from unknown suppliers that may have poor quality control, it's easy for safety concerns to start creeping in.
Several new medications are on the market and in development for Alzheimer's disease, a progressive neurological disease leading to memory loss, language deterioration, and confusion that afflicts about 4.5 million Americans and is expected to strike millions more as the baby boom generation ages. Yet the burning question for those who aren't staring directly into the face of Alzheimer's is whether these medications might make us smarter.
Table 4 lists the results of 27 tasks from 23 articles on the effects of d-AMP or MPH on working memory. The oldest and most commonly used type of working memory task in this literature is the Sternberg short-term memory scanning paradigm (Sternberg, 1966), in which subjects hold a set of items (typically letters or numbers) in working memory and are then presented with probe items, to which they must respond "yes" (in the set) or "no" (not in the set). The size of the set, and hence the working memory demand, is sometimes varied, and the set itself may be varied from trial to trial to maximize working memory demands or may remain fixed over a block of trials. Taken together, the studies that have used a version of this task to test the effects of MPH and d-AMP on working memory have found mixed and somewhat ambiguous results. No pattern is apparent concerning the specific version of the task or the specific drug. Four studies found no effect (Callaway, 1983; Kennedy, Odenheimer, Baltzley, Dunlap, & Wood, 1990; Mintzer & Griffiths, 2007; Tipper et al., 2005), three found faster responses with the drugs (Fitzpatrick, Klorman, Brumaghim, & Keefover, 1988; Ward et al., 1997; D. E. Wilson et al., 1971), and one found higher accuracy in some testing sessions at some dosages, but no main effect of drug (Makris et al., 2007). The meaningfulness of the increased speed of responding is uncertain, given that it could reflect speeding of general response processes rather than working memory–related processes. Aspects of the results of two studies suggest that the effects are likely due to processes other than working memory: D. E. Wilson et al. (1971) reported comparable speeding in a simple task without working memory demands, and Tipper et al. (2005) reported comparable speeding across set sizes.
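For readers unfamiliar with the paradigm, a Sternberg trial is simple to specify in code. The sketch below is illustrative only — the letter pool and set sizes are arbitrary choices of mine, not parameters from any of the cited studies:

```python
import random

LETTER_POOL = "BCDFGHJKLMNPQRSTVWXZ"  # arbitrary pool of consonants

def sternberg_trial(set_size=4):
    """One Sternberg scanning trial: hold a memory set, then judge a probe."""
    memory_set = random.sample(LETTER_POOL, set_size)
    probe = random.choice(LETTER_POOL)
    correct_answer = probe in memory_set  # subject answers "yes" if in the set
    return memory_set, probe, correct_answer

# Varying set_size across trials is how the working memory demand is varied.
memory_set, probe, answer = sternberg_trial(set_size=6)
print(memory_set, probe, "yes" if answer else "no")
```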
The easiest way to use 2mg was to use half a gum; I tried not chewing it but just holding it in my cheek. The first night I tried, this seemed to work well for motivation; I knocked off a few long-standing to-do items. Subsequently, I began using it for writing, where it has been similarly useful. One difficult night, I wound up using the other half (for a total of 4mg over ~5 hours), and it worked but gave me a fairly mild headache and a faint sensation of nausea; these may have been due to forgetting to eat dinner, but this still indicates 3mg should probably be my personal ceiling until and unless tolerance to lower doses sets in.
Noopept is a Russian stimulant sometimes suggested for nootropics use as it may be more effective than piracetam or other -racetams, and its smaller doses make it more convenient & possibly safer. Following up on a pilot study, I ran a well-powered blind randomized self-experiment between September 2013 and August 2014 using doses of 12-60mg Noopept & pairs of 3-day blocks to investigate the impact of Noopept on self-ratings of daily functioning in addition to my existing supplementation regimen involving small-to-moderate doses of piracetam. A linear regression, which included other concurrent experiments as covariates & used multiple imputation for missing data, indicates a small benefit to the lower dose levels and harm from the highest 60mg dose level, but no dose nor Noopept as a whole was statistically-significant. It seems Noopept's effects are too subtle to easily notice if they exist, but if one uses it, one should probably avoid 60mg+.
Caffeine keeps you awake, which keeps you coding. It may also be a nootropic, increasing brain-power. Both desirable results. However, it also inhibits vitamin D receptors, and as such decreases the body's uptake of this-much-needed-vitamin. OK, that's not so bad, you're not getting the maximum dose of vitamin D. So what? Well, by itself caffeine may not cause you any problems, but combined with cutting off a major source of the vitamin - the production via sunlight - you're leaving yourself open to deficiency in double-quick time.
Low-dose lithium orotate is extremely cheap, ~$10 a year. There is some research literature on it improving mood and impulse control in regular people, but some of it is epidemiological (which implies considerable unreliability); my current belief is that there is probably some effect size, but at just 5mg, it may be too tiny to matter. I have ~40% belief that there will be a large effect size, but I'm doing a long experiment and I should be able to detect a large effect size with >75% chance. So, the formula is NPV of the difference between taking and not taking, times quality of information, times expectation: $\frac{10 - 0}{\ln 1.05} \times 0.75 \times 0.40 = 61.4$, which justifies a time investment of less than 9 hours. As it happens, it took less than an hour to make the pills & placebos, and taking them is a matter of seconds per week, so the analysis will be the time-consuming part. This one may actually turn a profit.
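To make the arithmetic explicit, here is the same value-of-information calculation in Python. The $7/hour figure used to back out the "less than 9 hours" bound is my assumption; the text does not state an hourly rate:

```python
import math

annual_saving = 10 - 0                 # $/year difference, taking vs. not taking
npv = annual_saving / math.log(1.05)   # perpetuity discounted at 5%/year, ~205
value_of_information = npv * 0.75 * 0.40  # quality of info x P(large effect)
print(round(value_of_information, 1))  # ~61.5, matching the 61.4 above to rounding

hourly_rate = 7                        # hypothetical $/hour valuation of time
print(value_of_information / hourly_rate)  # ~8.8 -> "less than 9 hours"
```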
…The first time I took supplemental potassium (50% US RDA in a lot of water), it was like a brain fog lifted that I never knew I had, and I felt profoundly energized in a way that made me feel exercise was reasonable and prudent, which resulted in me and the roommate that had just supplemented potassium going for an hour long walk at 2AM. Experiences since then have not been quite so profound (which probably was so stark for me as I was likely fixing an acute deficiency), but I can still count on a moderately large amount of potassium to give me a solid, nearly side effect free performance boost for a few hours…I had been doing Bikram yoga on and off, and I think I wasn't keeping up the practice because I wasn't able to properly rehydrate myself.
Do note that this isn't an extensive list by any means, there are plenty more 'smart drugs' out there purported to help focus and concentration. Most (if not all) are restricted under the Psychoactive Substances Act, meaning they're largely illegal to sell. We strongly recommend against using these products off-label, as they can be dangerous both due to side effects and their lack of regulation on the grey/black market.
Nootropics are a broad classification of cognition-enhancing compounds that produce minimal side effects and are suitable for long-term use. These compounds include those occurring in nature or already produced by the human body (such as neurotransmitters), and their synthetic analogs. We already regularly consume some of these chemicals: B vitamins, caffeine, and L-theanine, in our daily diets.
The peculiar tired-sharp feeling was there as usual, and the DNB scores continue to suggest this is not an illusion, as they remain in the same 30-50% band as my normal performance. I did not notice the previous aboulia feeling; instead, around noon, I was filled with a nervous energy and a disturbingly rapid pulse which meditation & deep breathing did little to help with, and which didn't go away for an hour or so. Fortunately, this was primarily at church, so while I felt irritable, I didn't actually interact with anyone or snap at them, and was able to keep a lid on it. I have no idea what that was about. I wondered if it might've been a serotonin storm since amphetamines are some of the drugs that can trigger storms but the Adderall had been at 10:50 AM the previous day, or >25 hours (the half-lives of the ingredients being around 13 hours). An hour or two previously I had taken my usual caffeine-piracetam pill with my morning tea - could that have interacted with the armodafinil and the residual Adderall? Or was it caffeine+modafinil? Speculation, perhaps. A house-mate was ill for a few hours the previous day, so maybe the truth is as prosaic as me catching whatever he had.
My predictions were substantially better than random chance [7], so my default belief - that Adderall does affect me and (mostly) for the better - is borne out. I usually sleep very well and 3 separate incidents of horrible sleep in a few weeks seems rather unlikely (though I didn't keep track of dates carefully enough to link the Zeo data with the Adderall data). Between the price and the sleep disturbances, I don't think Adderall is personally worthwhile.
The next morning, four giant pills' worth of the popular piracetam-and-choline stack made me... a smidge more alert, maybe? (Or maybe that was just the fact that I had slept pretty well the night before. It was hard to tell.) Modafinil, which many militaries use as their "fatigue management" pill of choice, boasts glowing reviews from satisfied users. But in the United States, civilians need a prescription to get it; without one, they are stuck using adrafinil, a precursor substance that the body metabolizes into modafinil after ingestion. Taking adrafinil in lieu of coffee just made me keenly aware that I hadn't had coffee.
In my last post, I talked about the idea that there is a resource that is necessary for self-control…I want to talk a little bit about the candidate for this resource, glucose. Could willpower fail because the brain is low on sugar? Let's look at the numbers. A well-known statistic is that the brain, while only 2% of body weight, consumes 20% of the body's energy. That sounds like the brain consumes a lot of calories, but if we assume a 2,400 calorie/day diet - only to make the division really easy - that's 100 calories per hour on average, 20 of which, then, are being used by the brain. Every three minutes, then, the brain - which includes memory systems, the visual system, working memory, then emotion systems, and so on - consumes one (1) calorie. One. Yes, the brain is a greedy organ, but it's important to keep its greediness in perspective… Suppose, for instance, that a brain in a person exerting their willpower - resisting eating brownies or what have you - used twice as many calories as a person not exerting willpower. That person would need an extra one third of a calorie per minute to make up the difference compared to someone not exerting willpower. Does exerting self control burn more calories?
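The quoted back-of-the-envelope numbers check out; as a quick sanity check:

```python
daily_kcal = 2400                            # assumed diet from the quote
whole_body_per_hour = daily_kcal / 24        # 100 kcal/hour for the whole body
brain_per_hour = 0.20 * whole_body_per_hour  # brain uses ~20% -> 20 kcal/hour
print(brain_per_hour * 3 / 60)               # ~1 kcal per 3 minutes, as claimed
```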
A fancier method of imputation would be multiple imputation using, for example, the R library mice (Multivariate Imputation by Chained Equations) (guide), which will try to impute all missing values in a way which mimicks the internal structure of the data and provide several possible datasets to give us an idea of what the underlying data might have looked like, so we can see how our estimates improve with no missingness & how much of the estimate is now due to the imputation:
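The R snippet that presumably followed is not reproduced here. As a rough Python analogue — my assumption, not the original analysis — scikit-learn's IterativeImputer implements the same chained-equations idea, and drawing several imputed datasets shows how much the estimates wobble with the imputation:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # stand-in for the experiment's variables
X[rng.random(X.shape) < 0.15] = np.nan   # knock out ~15% of values

# sample_posterior=True makes each run a distinct draw, as in multiple imputation
for seed in range(5):
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    X_completed = imputer.fit_transform(X)
    print(seed, np.round(X_completed.mean(axis=0), 3))  # estimates vary by draw
```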
In avoiding experimenting with more Russian Noopept pills and using instead the easily-purchased powder form of Noopept, there are two opposing considerations: Russian Noopept is reportedly the best, so we might expect anything I buy online to be weaker or impure or inferior somehow and the effect size smaller than in the pilot experiment; but by buying my own supply & using powder I can double or triple the dose to 20mg or 30mg (to compensate for the original under-dosing of 10mg) and so the effect size larger than in the pilot experiment.
There are some other promising prescription drugs that may have performance-related effects on the brain. But at this point, all of them seem to involve a roll of the dice. You may experience a short-term brain boost, but you could also end up harming your brain (or some other aspect of your health) in the long run. "To date, there is no safe drug that may increase cognition in healthy adults," Fond says of ADHD drugs, modafinil and other prescription nootropics.
If smart drugs are the synthetic cognitive enhancers, sleep, nutrition and exercise are the "natural" ones. But the appeal of drugs like Ritalin and modafinil lies in their purported ability to enhance brain function beyond the norm. Indeed, at school or in the workplace, a pill that enhanced the ability to acquire and retain information would be particularly useful when it came to revising and learning lecture material. But despite their increasing popularity, do prescription stimulants actually enhance cognition in healthy users?
Table 1 shows all of the studies of middle school, secondary school, and college students that we identified. As indicated in the table, the studies are heterogeneous, with varying populations sampled, sample sizes, and year of data collection, and they focused on different subsets of the epidemiological questions addressed here, including prevalence and frequency of use, motivations for use, and method of obtaining the medication.
If you want to make sure that whatever you're taking is safe, search for nootropics that have been backed by clinical trials and that have been around long enough for any potential warning signs about that specific nootropic to begin surfacing. There are supplements and nootropics that have been tested in a clinical setting, so there are options out there.
A rough translation for the word "nootropic" comes from the Greek for "to bend or shape the mind." And already, there are dozens of over-the-counter (OTC) products—many of which are sold widely online or in stores—that claim to boost creativity, memory, decision-making or other high-level brain functions. Some of the most popular supplements are a mixture of food-derived vitamins, lipids, phytochemicals and antioxidants that studies have linked to healthy brain function. One popular pick on Amazon, for example, is an encapsulated cocktail of omega-3s, B vitamins and plant-derived compounds that its maker claims can improve memory, concentration and focus.
Bacopa is a supplement herb often used for memory or stress adaptation. Its chronic effects reportedly take many weeks to manifest, with no important acute effects. Out of curiosity, I bought 2 bottles of Bacognize Bacopa pills and ran a non-randomized non-blinded ABABA quasi-self-experiment from June 2014 to September 2015, measuring effects on my memory performance, sleep, and daily self-ratings of mood/productivity. Because of the very slow onset, small effective sample size, definite temporal trends probably unrelated to Bacopa, and noise in the variables, the results were as expected, ambiguous, and do not strongly support any correlation between Bacopa and memory/sleep/self-rating (+/-/- respectively).
l-theanine (Examine.com) is occasionally mentioned on Reddit or Imminst or LessWrong [32] but is rarely a top-level post or article; this is probably because theanine was discovered a very long time ago (>61 years ago), and it's a pretty straightforward substance. It's a weak relaxant/anxiolytic (Google Scholar) which is possibly responsible for a few of the health benefits of tea, and which works synergistically with caffeine (and is probably why caffeine delivered through coffee feels different from the same amount consumed in tea - in one study, separate caffeine and theanine were a mixed bag, but the combination beat placebo on all measurements). The half-life in humans seems to be pretty short, with van der Pijl 2010 putting it ~60 minutes. This suggests to me that regular tea consumption over a day is best, or at least that one should lower caffeine use - combining caffeine and theanine into a single-dose pill has the problem of caffeine's half-life being much longer so the caffeine will be acting after the theanine has been largely eliminated. The problem with getting it via tea is that teas can vary widely in their theanine levels and the variations don't seem to be consistent either, nor is it clear how to estimate them. (If you take a large dose in theanine like 400mg in water, you can taste the sweetness, but it's subtle enough I doubt anyone can actually distinguish the theanine levels of tea; incidentally, r-theanine - the useless racemic other version - anecdotally tastes weaker and less sweet than l-theanine.)
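The half-life point is easy to quantify. A minimal sketch, assuming first-order elimination, ~60 minutes for theanine (per van der Pijl 2010) and ~5 hours for caffeine (a common textbook figure; the text only says "much longer"), with 200mg/100mg doses as placeholders:

```python
def remaining(dose_mg, half_life_min, t_min):
    # First-order elimination: half the remaining amount goes every half-life.
    return dose_mg * 0.5 ** (t_min / half_life_min)

for t in (60, 120, 240):  # minutes after a combined single-dose pill
    theanine = remaining(200, 60, t)   # assumed 200mg dose, 60min half-life
    caffeine = remaining(100, 300, t)  # assumed 100mg dose, 300min half-life
    print(f"{t:>3} min: theanine {theanine:6.1f} mg, caffeine {caffeine:6.1f} mg")
```

By four hours the theanine is nearly gone (~12mg left) while more than half the caffeine remains, which is the mismatch the paragraph describes.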
Nondrug cognitive-enhancement methods include the high tech and the low. An example of the former is transcranial magnetic stimulation (TMS), whereby weak currents are induced in specific brain areas by magnetic fields generated outside the head. TMS is currently being explored as a therapeutic modality for neuropsychiatric conditions as diverse as depression and ADHD and is capable of enhancing the cognition of normal healthy people (e.g., Kirschen, Davis-Ratner, Jerde, Schraedley-Desmond, & Desmond, 2006). An older technique, transcranial direct current stimulation (tDCS), has become the subject of renewed research interest and has proven capable of enhancing the cognitive performance of normal healthy individuals in a variety of tasks. For example, Flöel, Rösser, Michka, Knecht, and Breitenstein (2008) reported enhancement of learning and Dockery, Hueckel-Weng, Birbaumer, and Plewnia (2009) reported enhancement of planning with tDCS.
Dr. Larry Cleary's Lucidal – the critically acclaimed secret formula that has been created, revised, and optimized to the point that it's Dr. Cleary-approved. As a product of Dr. Cleary's extensive years and expertise in the industry, it is his brainchild. Heavily marketed as the pill for reversing memory loss, whilst aiding focus, it's seen some popularity in the last few years. In light of all the hubbub and controversy, we put their claims to the test, to see whether or not Lucidal is able to come forth with flying colors, just as all its acclamation has it to be…
I have no particularly compelling story for why this might be a correlation and not causation. It could be placebo, but I wasn't expecting that. It could be selection effect (days on which I bothered to use the annoying LED set are better days) but then I'd expect the off-days to be below-average and compared to the 2 years of trendline before, there doesn't seem like much of a fall.
During the 1920s, Amphetamine was being researched as an asthma medication when its cognitive benefits were accidentally discovered. In many years that followed, this enhancer was exploited in a number of medical and nonmedical applications, for instance, to enhance alertness in military personnel, treat depression, improve athletic performance, etc.
Methylphenidate, commonly known as Ritalin, is a stimulant first synthesised in the 1940s. More accurately, it's a psychostimulant - often prescribed for ADHD - that is intended as a drug to help focus and concentration. It also reduces fatigue and (potentially) enhances cognition. Similar to Modafinil, Ritalin is believed to reduce dissipation of dopamine to help focus. Ritalin is a Class B drug in the UK, and possession without a prescription can result in a 5 year prison sentence. Please note: Side Effects Possible. See this article for more on Ritalin.
Creatine is a substance that's produced in the human body. It is initially produced in the kidneys, and the process is completed in the liver. It is then stored in the brain tissues and muscles, to support the energy demands of a human body. Athletes and bodybuilders use creatine supplements to relieve fatigue and increase the recovery of the muscle tissues affected by vigorous physical activities. Apart from helping the tissues to recover faster, creatine also helps in enhancing the mental functions in sleep-deprived adults, and it also improves the performance of difficult cognitive tasks.
But, thanks to the efforts of a number of remarkable scientists, researchers and plain-old neurohackers, we are beginning to put together a "whole systems" model of how all the different parts of the human brain work together and how they mesh with the complex regulatory structures of the body. It's going to take a lot more data and collaboration to dial this model in, but already we are empowered to design stacks that can meaningfully deliver on the promise of nootropics "to enhance the quality of subjective experience and promote cognitive health, while having extremely low toxicity and possessing very few side effects." It's a type of brain hacking that is intended to produce noticeable cognitive benefits.
How do I use the holding period return yield to evaluate my bond portfolio?
By Claire Boyte-White
The holding period return yield formula may be used to compare the yields of different bonds in your portfolio over a given time period. This method of yield comparison lets investors determine which bonds are generating the largest profits, so they may rebalance their holdings accordingly. In addition, this formula can help evaluate when it is more advantageous to sell a bond at a premium or hold it until maturity.
What Is the Holding Period Return Yield Formula?
Depending on the type of asset involved, different holding period return yield formulas can be applied to account for the compounding of interest and varying return rates. However, bonds generate a fixed amount of income each year. This rate of return, known as the coupon rate, is set at issuance and remains unchanged for the life of the bond.
Therefore, the formula for the holding period return yield of bonds is quite simple:
HPRY = ((Selling Price − P) + TCP) / P
where:
P = Purchase Price
TCP = Total Coupon Payments
If you still own the bond, use the current market price of the bond instead of the selling price to determine the current holding period return yield of your bond.
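As a minimal sketch of the formula in code (mine, not from the article), this is all the calculation amounts to:

```python
def holding_period_return_yield(purchase_price, sale_or_market_price,
                                total_coupon_payments):
    """HPRY = ((selling price - purchase price) + total coupons) / purchase price."""
    capital_gain = sale_or_market_price - purchase_price
    return (capital_gain + total_coupon_payments) / purchase_price
```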
Assume you purchased a 10-year, $5,000 bond with a 5% coupon rate. You purchased the bond five years ago at par value. This means the bond has paid $1,250, or 5 * $5,000 * 5%, in coupon payments over the past five years.
Assume the bond has a current market value of $5,500.
If you sold your bond today, the holding period return yield of the bond is:
= (($5,500 − $5,000) + $1,250) / $5,000
= ($500 + $1,250) / $5,000
= $1,750 / $5,000
= 0.35, or 35%
However, like all bonds, the repayment of your initial investment is guaranteed by the issuing entity once the bond matures. If you hold the bond until maturity, it generates a total of $2,500 in coupon payments, or 10 * $5,000 * 5%, and the holding period return yield is:
= (($5,000 − $5,000) + $2,500) / $5,000
= $2,500 / $5,000
= 0.5, or 50%
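Using the sketch above, both scenarios from the example reproduce directly:

```python
coupons_so_far = 5 * 5_000 * 0.05        # $1,250 over the past five years
coupons_to_maturity = 10 * 5_000 * 0.05  # $2,500 over the full ten years

print(holding_period_return_yield(5_000, 5_500, coupons_so_far))       # 0.35
print(holding_period_return_yield(5_000, 5_000, coupons_to_maturity))  # 0.50
```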